Container cloud platform construction

1. Node planning

IP                Hostname   Node
192.168.100.10    master     Kubernetes cluster

2. Basic environment configuration

Download the installation package to the root directory, then mount it and copy its contents to the /opt directory

[root@localhost ~]# mount -o loop chinaskills_cloud_paas_v2.0.2.iso /mnt/
mount: /dev/loop0 is write-protected, mounting read-only
[root@localhost ~]# cp -r /mnt/* /opt/
[root@localhost ~]# umount /mnt/
1.1 Install kubeeasy

kubeeasy is a professional deployment tool for Kubernetes clusters that greatly simplifies the installation process. Its features are as follows:

  • Fully automated installation process
  • Supports DNS-based cluster identification
  • Supports self-healing: everything runs in an autoscaling group
  • Supports multiple operating systems (such as Debian, Ubuntu 16.04, CentOS 7, RHEL, etc.)
  • Supports high availability

Install the kubeeasy tool on the master node

[root@localhost ~]# mv /opt/kubeeasy /usr/bin/
1.2 Install dependency packages

This step installs docker-ce, git, unzip, vim, wget, and other tools.

Execute the following command on the master node to complete the installation of the dependency packages

[root@localhost ~]# kubeeasy install depend --host 192.168.100.10 --user root --password Abc@1234 --offline-file /opt/dependencies/base-rpms.tar.gz 

The parameters are explained as follows:

  • --host: the host IP; for a single-node setup, just fill in that host's IP (for additional node IPs, separate them with commas)
  • --password: host login password; all nodes must use the same password
  • --offline-file: path to the offline installation package

You can run the command tail -f /var/log/kubeinstall.log to view installation details or troubleshoot errors.
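
As a quick sanity check after the installation finishes, you can query the installed tools directly (standard commands, independent of kubeeasy):

# Docker should be installed and running
docker --version
systemctl status docker
# Spot-check the other tools
git --version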

1.3 Configure passwordless SSH

(No configuration is required for a single node.)
When installing a multi-node Kubernetes cluster, you need to configure passwordless login between the cluster nodes to facilitate file transfer and communication.
Execute the following command on the master node to test connectivity between the cluster nodes:

[root@localhost ~]# kubeeasy check ssh \
--host 10.24.2.10,10.24.2.11 \
--user root \
--password Abc@1234

Execute the following command on the master node to complete the passwordless SSH configuration among all nodes in the cluster:

[root@localhost ~]# kubeeasy create ssh-keygen \
--master 10.24.2.10 \
--worker 10.24.2.11 \
--user root --password Abc@1234

The --master parameter is followed by the master node IP, and the --worker parameter is followed by the IPs of all worker nodes.
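
Once the keys have been distributed, you can confirm that passwordless login works before moving on (a manual check using the worker IP from the example above):

# Should print the remote hostname without prompting for a password
ssh root@10.24.2.11 hostname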

Install Kubernetes cluster

Execute the following command on the master node to deploy the Kubernetes cluster

[root@localhost ~]# kubeeasy install kubernetes --master 192.168.100.10 --user root --password Abc@1234 --version 1.22.1 --offline-file /opt/kubernetes.tar.gz 

Explanation of some parameters:

  • --master: master node IP
  • --worker: worker node IP(s); if there are multiple worker nodes, separate them with commas
  • --version: Kubernetes version; only 1.22.1 is supported here

You can run the command tail -f /var/log/kubeinstall.log to view installation details or troubleshoot errors.

Check the cluster status after deployment:

[root@k8s-master-node1 ~]# kubectl cluster-info
Kubernetes control plane is running at https://apiserver.cluster.local:6443
CoreDNS is running at https://apiserver.cluster.local:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Check the node load:

[root@k8s-master-node1 ~]# kubectl top nodes --use-protocol-buffers
NAME               CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
k8s-master-node1   390m         3%     1967Mi          25%   
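
A couple of additional routine checks (plain kubectl, not specific to kubeeasy) confirm that the node is Ready and that all system Pods are up:

kubectl get nodes -o wide
kubectl get pods -A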

Log in to the Yiyiyunyun development platform

http://master_ip:30080


Basic example

Use the nginx image to create a Pod named exam in the default namespace and set the environment variable exam, whose value is 2022

[root@k8s-master-node1 ~]# vi exam.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: exam
  namespace: default
spec:
  containers:
  - image: nginx:latest
    name: nginx
    imagePullPolicy: IfNotPresent
    env:
    - name: "exam"
      value: "2022"
[root@k8s-master-node1 ~]# kubectl apply -f  exam.yaml 
pod/exam created
[root@k8s-master-node1 ~]# kubectl get -f  exam.yaml 
NAME   READY   STATUS    RESTARTS   AGE
exam   1/1     Running   0          4m9s
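
As an optional extra check, confirm that the environment variable is really set inside the running container:

# Should print 2022
kubectl exec exam -- printenv exam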

Install and deploy Istio

The version installed this time is 1.12.0

Execute the following command on the master node to install the Istio service mesh environment

[root@k8s-master-node1 ~]# kubeeasy add --istio istio

View Pod

[root@k8s-master-node1 ~]# kubectl -n istio-system get pods
NAME                                   READY   STATUS    RESTARTS   AGE
grafana-6ccd56f4b6-8q8kg               1/1     Running   0          59s
istio-egressgateway-7f4864f59c-jbsbw   1/1     Running   0          74s
istio-ingressgateway-55d9fb9f-8sbd7    1/1     Running   0          74s
istiod-555d47cb65-hcl69                1/1     Running   0          78s
jaeger-5d44bc5c5d-b9cq7                1/1     Running   0          58s
kiali-9f9596d69-ssn57                  1/1     Running   0          58s
prometheus-64fd8ccd65-xdrzz            2/2     Running   0          58s

View Istio version information

[root@k8s-master-node1 ~]# istioctl version
client version: 1.12.0
control plane version: 1.12.0
data plane version: 1.12.0 (2 proxies)

Istio visualization dashboards

http://master_ip:33000

Access the Grafana interface


Visit Kiali

http://master_ip:20001


Some other web interfaces are also exposed:

http://master_ip:30090

http://master_ip:30686

3.3 Basic usage of istioctl

istioctl is used to create, list, modify and delete configuration resources in the Istio system

The available routing and traffic management configuration types are: virtualservice, gateway, destinationrule, serviceentry, httpapispecbinding, quotaspec, quotaspecbinding, servicerole, servicerolebinding, policy.
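
In Istio 1.12 these routing resources are ordinary Kubernetes CRDs, so in practice they are written as YAML and applied with kubectl. A minimal VirtualService sketch for illustration (the reviews host and the v1 subset are hypothetical placeholders):

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1

After applying such a manifest with kubectl apply -f, istioctl analyze can be used to validate the mesh configuration.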

Use the following command to list the names of the Istio configuration profiles that istioctl can access:

[root@k8s-master-node1 ~]# istioctl profile list
Istio configuration profiles:
    default
    demo
    empty
    external
    minimal
    openshift
    preview
    remote
    # default: enables components according to the IstioOperator API defaults; suitable for production
    # demo: deploys more components to demonstrate Istio features
    # minimal: similar to default, but deploys only the control plane
    # remote: used to configure a shared control plane across multiple clusters
    # empty: deploys no components; usually used as a base when customizing a profile
    # preview: a preview profile for exploring new features; stability, security, and performance are not guaranteed

Display the configuration of a given profile

[root@k8s-master-node1 ~]# istioctl profile dump demo
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    base:
      enabled: true
    cni:
      enabled: false
    egressGateways:
    - enabled: true
      k8s:
        resources:
          requests:
            cpu: 10m
            memory: 40Mi
      name: istio-egressgateway
    ingressGateways:
    - enabled: true
      k8s:
        resources:
          requests:
            cpu: 10m
            memory: 40Mi
        service:
          ports:
          - name: status-port
            port: 15021
            targetPort: 15021
          - name: http2
            port: 80
            targetPort: 8080
          - name: https
            port: 443
            targetPort: 8443
          - name: tcp
            port: 31400
            targetPort: 31400
          - name: tls
            port: 15443
            targetPort: 15443
      name: istio-ingressgateway
    istiodRemote:
      enabled: false
    pilot:
      enabled: true
      k8s:
        env:
        - name: PILOT_TRACE_SAMPLING
          value: "100"
        resources:
          requests:
            cpu: 10m
            memory: 100Mi
  hub: docker.io/istio
  meshConfig:
    accessLogFile: /dev/stdout
    defaultConfig:
      proxyMetadata: {}
    enablePrometheusMerge: true
  profile: demo
  tag: 1.12.0
  values:
    base:
      enableCRDTemplates: false
      validationURL: ""
    defaultRevision: ""
    gateways:
      istio-egressgateway:
        autoscaleEnabled: false
        env: {}
        name: istio-egressgateway
        secretVolumes:
        - mountPath: /etc/istio/egressgateway-certs
          name: egressgateway-certs
          secretName: istio-egressgateway-certs
        - mountPath: /etc/istio/egressgateway-ca-certs
          name: egressgateway-ca-certs
          secretName: istio-egressgateway-ca-certs
        type: ClusterIP
      istio-ingressgateway:
        autoscaleEnabled: false
        env: {}
        name: istio-ingressgateway
        secretVolumes:
        - mountPath: /etc/istio/ingressgateway-certs
          name: ingressgateway-certs
          secretName: istio-ingressgateway-certs
        - mountPath: /etc/istio/ingressgateway-ca-certs
          name: ingressgateway-ca-certs
          secretName: istio-ingressgateway-ca-certs
        type: LoadBalancer
    global:
      configValidation: true
      defaultNodeSelector: {}
      defaultPodDisruptionBudget:
        enabled: true
      defaultResources:
        requests:
          cpu: 10m
      imagePullPolicy: ""
      imagePullSecrets: []
      istioNamespace: istio-system
      istiod:
        enableAnalysis: false
      jwtPolicy: third-party-jwt
      logAsJson: false
      logging:
        level: default:info
      meshNetworks: {}
      mountMtlsCerts: false
      multiCluster:
        clusterName: ""
        enabled: false
      network: ""
      omitSidecarInjectorConfigMap: false
      oneNamespace: false
      operatorManageWebhooks: false
      pilotCertProvider: istiod
      priorityClassName: ""
      proxy:
        autoInject: enabled
        clusterDomain: cluster.local
        componentLogLevel: misc:error
        enableCoreDump: false
        excludeIPRanges: ""
        excludeInboundPorts: ""
        excludeOutboundPorts: ""
        image: proxyv2
        includeIPRanges: '*'
        logLevel: warning
        privileged: false
        readinessFailureThreshold: 30
        readinessInitialDelaySeconds: 1
        readinessPeriodSeconds: 2
        resources:
          limits:
            cpu: 2000m
            memory: 1024Mi
          requests:
            cpu: 10m
            memory: 40Mi
        statusPort: 15020
        tracer: zipkin
      proxy_init:
        image: proxyv2
        resources:
          limits:
            cpu: 2000m
            memory: 1024Mi
          requests:
            cpu: 10m
            memory: 10Mi
      sds:
        token:
          aud: istio-ca
      sts:
        servicePort: 0
      tracer:
        datadog: {}
        lightstep: {}
        stackdriver: {}
        zipkin: {}
      useMCP: false
    istiodRemote:
      injectionURL: ""
    pilot:
      autoscaleEnabled: false
      autoscaleMax: 5
      autoscaleMin: 1
      configMap: true
      cpu:
        targetAverageUtilization: 80
      enableProtocolSniffingForInbound: true
      enableProtocolSniffingForOutbound: true
      env: {}
      image: pilot
      keepaliveMaxServerConnectionAge: 30m
      nodeSelector: {}
      podLabels: {}
      replicaCount: 1
      traceSampling: 1
    telemetry:
      enabled: true
      v2:
        enabled: true
        metadataExchange:
          wasmEnabled: false
        prometheus:
          enabled: true
          wasmEnabled: false
        stackdriver:
          configOverride: {}
          enabled: false
          logging: false
          monitoring: false
          topology: false

Show configuration file differences

[root@k8s-master-node1 ~]# istioctl profile diff default demo
The difference between profiles:
 apiVersion: install.istio.io/v1alpha1
 kind: IstioOperator
 metadata:
   creationTimestamp: null
   namespace: istio-system
 spec:
   components:
     base:
       enabled: true
     cni:
       enabled: false
     egressGateways:
-    - enabled: false
+    - enabled: true
+      k8s:
+        resources:
+          requests:
+            cpu: 10m
+            memory: 40Mi
       name: istio-egressgateway
     ingressGateways:
     - enabled: true
+      k8s:
+        resources:
+          requests:
+            cpu: 10m
+            memory: 40Mi
+        service:
+          ports:
+          - name: status-port
+            port: 15021
+            targetPort: 15021
+          - name: http2
+            port: 80
+            targetPort: 8080
+          - name: https
+            port: 443
+            targetPort: 8443
+          - name: tcp
+            port: 31400
+            targetPort: 31400
+          - name: tls
+            port: 15443
+            targetPort: 15443
       name: istio-ingressgateway
     istiodRemote:
       enabled: false
     pilot:
       enabled: true
+      k8s:
+        env:
+        - name: PILOT_TRACE_SAMPLING
+          value: "100"
+        resources:
+          requests:
+            cpu: 10m
+            memory: 100Mi
   hub: docker.io/istio
   meshConfig:
+    accessLogFile: /dev/stdout
     defaultConfig:
        proxyMetadata: {}
     enablePrometheusMerge: true
   profile: default
   tag: 1.12.0
   values:
     base:
       enableCRDTemplates: false
       validationURL: ""
     defaultRevision: ""
     gateways:
       istio-egressgateway:
-        autoscaleEnabled: true
+        autoscaleEnabled: false
         env: {}
         name: istio-egressgateway
         secretVolumes:
         - mountPath: /etc/istio/egressgateway-certs
           name: egressgateway-certs
           secretName: istio-egressgateway-certs
         - mountPath: /etc/istio/egressgateway-ca-certs
           name: egressgateway-ca-certs
           secretName: istio-egressgateway-ca-certs
         type: ClusterIP
       istio-ingressgateway:
-        autoscaleEnabled: true
+        autoscaleEnabled: false
         env: {}
         name: istio-ingressgateway
         secretVolumes:
         - mountPath: /etc/istio/ingressgateway-certs
           name: ingressgateway-certs
           secretName: istio-ingressgateway-certs
         - mountPath: /etc/istio/ingressgateway-ca-certs
           name: ingressgateway-ca-certs
           secretName: istio-ingressgateway-ca-certs
         type: LoadBalancer
     global:
       configValidation: true
       defaultNodeSelector: {}
       defaultPodDisruptionBudget:
         enabled: true
       defaultResources:
         requests:
           cpu: 10m
       imagePullPolicy: ""
       imagePullSecrets: []
       istioNamespace: istio-system
       istiod:
         enableAnalysis: false
       jwtPolicy: third-party-jwt
       logAsJson: false
       logging:
         level: default:info
       meshNetworks: {}
       mountMtlsCerts: false
       multiCluster:
         clusterName: ""
         enabled: false
       network: ""
       omitSidecarInjectorConfigMap: false
       oneNamespace: false
       operatorManageWebhooks: false
       pilotCertProvider: istiod
       priorityClassName: ""
       proxy:
         autoInject: enabled
         clusterDomain: cluster.local
         componentLogLevel: misc:error
         enableCoreDump: false
         excludeIPRanges: ""
         excludeInboundPorts: ""
         excludeOutboundPorts: ""
         image: proxyv2
         includeIPRanges: '*'
         logLevel: warning
         privileged: false
         readinessFailureThreshold: 30
         readinessInitialDelaySeconds: 1
         readinessPeriodSeconds: 2
         resources:
           limits:
             cpu: 2000m
             memory: 1024Mi
           requests:
-            cpu: 100m
-            memory: 128Mi
+            cpu: 10m
+            memory: 40Mi
         statusPort: 15020
         tracer: zipkin
       proxy_init:
         image: proxyv2
         resources:
           limits:
             cpu: 2000m
             memory: 1024Mi
           requests:
             cpu: 10m
             memory: 10Mi
       sds:
         token:
           aud: istio-ca
       sts:
         servicePort: 0
       tracer:
         datadog: {}
         lightstep: {}
         stackdriver: {}
         zipkin: {}
       useMCP: false
     istiodRemote:
       injectionURL: ""
     pilot:
-      autoscaleEnabled: true
+      autoscaleEnabled: false
       autoscaleMax: 5
       autoscaleMin: 1
       configMap: true
       cpu:
         targetAverageUtilization: 80
       deploymentLabels: null
       enableProtocolSniffingForInbound: true
       enableProtocolSniffingForOutbound: true
       env: {}
       image: pilot
       keepaliveMaxServerConnectionAge: 30m
       nodeSelector: {}
       podLabels: {}
       replicaCount: 1
       traceSampling: 1
     telemetry:
       enabled: true
       v2:
         enabled: true
         metadataExchange:
           wasmEnabled: false
         prometheus:
           enabled: true
           wasmEnabled: false
         stackdriver:
           configOverride: {}
           enabled: false
           logging: false
           monitoring: false
           topology: false

You can get an overview of the service mesh with the proxy-status (or ps) command

[root@k8s-master-node1 ~]# istioctl proxy-status
NAME                                                  CDS        LDS        EDS        RDS          ISTIOD                      VERSION
istio-egressgateway-7f4864f59c-ps7cs.istio-system     SYNCED     SYNCED     SYNCED     NOT SENT     istiod-555d47cb65-z9m58     1.12.0
istio-ingressgateway-55d9fb9f-bwzsl.istio-system      SYNCED     SYNCED     SYNCED     NOT SENT     istiod-555d47cb65-z9m58     1.12.0
[root@k8s-master-node1 ~]# istioctl ps
NAME                                                  CDS        LDS        EDS        RDS          ISTIOD                      VERSION
istio-egressgateway-7f4864f59c-ps7cs.istio-system     SYNCED     SYNCED     SYNCED     NOT SENT     istiod-555d47cb65-z9m58     1.12.0
istio-ingressgateway-55d9fb9f-bwzsl.istio-system      SYNCED     SYNCED     SYNCED     NOT SENT     istiod-555d47cb65-z9m58     1.12.0

If a proxy is missing from the output list, it is not currently connected to the Pilot instance and therefore cannot receive any configuration.

If a proxy is marked as stale, there is either a networking problem in the mesh or Pilot needs to be scaled out.

Istio allows retrieving proxy configuration information using the proxy-config or pc command.

Retrieve configuration information from the Envoy instance in a specific Pod

# istioctl proxy-config <subcommand> <pod-name> [flags]
                         all         # retrieve all configuration of the Envoy in the specified Pod
                         bootstrap   # retrieve the bootstrap configuration of the Envoy in the specified Pod
                         cluster     # retrieve the cluster configuration of the Envoy in the specified Pod
                         endpoint    # retrieve the endpoint configuration of the Envoy in the specified Pod
                         listener    # retrieve the listener configuration of the Envoy in the specified Pod
                         log         # (experimental) retrieve the log level of the Envoy in the specified Pod
                         rootca      # compare the root CA values of two given Pods
                         route       # retrieve the route configuration of the Envoy in the specified Pod
                         secret      # retrieve the secret configuration of the Envoy in the specified Pod
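
For example, to dump the listener configuration of the ingress gateway's Envoy, using the Pod name shown in the proxy-status output above:

istioctl proxy-config listener istio-ingressgateway-55d9fb9f-bwzsl -n istio-system
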
Basic example

Complete the installation of the Istio service mesh environment on the Kubernetes cluster, then create a new namespace exam and enable automatic sidecar injection for that namespace (automatic injection can also be enabled by modifying the configuration file).

[root@k8s-master-node1 ~]# kubectl create namespace exam
namespace/exam created
[root@k8s-master-node1 ~]# kubectl label namespace exam istio-injection=enabled
namespace/exam labeled

Execute the kubectl -n istio-system get all and kubectl get ns exam --show-labels commands on the master node to verify

[root@k8s-master-node1 ~]# kubectl -n istio-system get all
NAME                                       READY   STATUS    RESTARTS   AGE
pod/grafana-6ccd56f4b6-dz9gx               1/1     Running   0          28m
pod/istio-egressgateway-7f4864f59c-ps7cs   1/1     Running   0          29m
pod/istio-ingressgateway-55d9fb9f-bwzsl    1/1     Running   0          29m
pod/istiod-555d47cb65-z9m58                1/1     Running   0          29m
pod/jaeger-5d44bc5c5d-5vmw8                1/1     Running   0          28m
pod/kiali-9f9596d69-qkb6f                  1/1     Running   0          28m
pod/prometheus-64fd8ccd65-s92vb            2/2     Running   0          28m
.....
[root@k8s-master-node1 ~]# kubectl get ns exam --show-labels
NAME   STATUS   AGE     LABELS
exam   Active   7m52s   istio-injection=enabled,kubernetes.io/metadata.name=exam
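
To see the injection take effect, you can launch a throwaway Pod in the exam namespace; it should come up with READY 2/2, the second container being the istio-proxy sidecar (the Pod name test is arbitrary):

kubectl -n exam run test --image=nginx
# After a moment the Pod should show READY 2/2
kubectl -n exam get pod test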

Install and deploy KubeVirt

Install KubeVirt on the master node

[root@k8s-master-node1 ~]# kubeeasy add --virt kubevirt

View Pod

[root@k8s-master-node1 ~]# kubectl -n kubevirt get pod
NAME                              READY   STATUS    RESTARTS   AGE
virt-api-86f9d6d4f-2mntr          1/1     Running   0          89s
virt-api-86f9d6d4f-vf8vc          1/1     Running   0          89s
virt-controller-54b79f5db-4xp4t   1/1     Running   0          64s
virt-controller-54b79f5db-kq4wj   1/1     Running   0          64s
virt-handler-gxtv6                1/1     Running   0          64s
virt-operator-6fbd74566c-nnrdn    1/1     Running   0          119s
virt-operator-6fbd74566c-wblgx    1/1     Running   0          119s

Basic usage

Create a VMI

[root@k8s-master-node1 ~]# kubectl create -f vmi.yaml
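
The vmi.yaml referenced here is not shown above; a minimal VirtualMachineInstance manifest might look like the following (a sketch using the standard kubevirt.io/v1 API and the public CirrOS demo container disk; adjust the name, image, and resources to your environment):

apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
  name: testvmi
spec:
  domain:
    devices:
      disks:
      - name: containerdisk
        disk:
          bus: virtio
    resources:
      requests:
        memory: 128Mi
  volumes:
  - name: containerdisk
    containerDisk:
      image: quay.io/kubevirt/cirros-container-disk-demo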

View VMIs

[root@k8s-master-node1 ~]# kubectl get vmis

Delete a VMI

[root@k8s-master-node1 ~]# kubectl delete vmis <vmi-name>

virtctl tool

virtctl is a command-line tool, similar to kubectl, that comes with KubeVirt. It can manage virtual machines directly, controlling start, stop, restart, and so on.

# Start a virtual machine
virtctl start <vmi-name>
# Stop a virtual machine
virtctl stop <vmi-name>
# Restart a virtual machine
virtctl restart <vmi-name>
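
virtctl can also attach to a guest's serial console or VNC session, which is handy for debugging (standard virtctl subcommands):

# Attach to the serial console (exit with Ctrl+])
virtctl console <vmi-name>
# Open a VNC session (requires a local VNC viewer)
virtctl vnc <vmi-name>
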
Basic example

Complete the installation of the KubeVirt virtualization environment on the Kubernetes cluster. After completion, execute the kubectl -n kubevirt get deployment command on the master node for verification.

[root@k8s-master-node1 ~]# kubectl -n kubevirt get deployment
NAME              READY   UP-TO-DATE   AVAILABLE   AGE
virt-api          2/2     2            2           14m
virt-controller   2/2     2            2           14m
virt-operator     2/2     2            2           15m

Install and deploy the Harbor registry

Execute the following command to install the Harbor registry

[root@k8s-master-node1 ~]# kubeeasy add --registry harbor

Check the Harbor status after deployment completes

[root@k8s-master-node1 ~]# systemctl status harbor
● harbor.service - Harbor
   Loaded: loaded (/usr/lib/systemd/system/harbor.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2023-02-14 18:16:05 CST; 10s ago
     Docs: http://github.com/vmware/harbor
 Main PID: 35753 (docker-compose)
    Tasks: 14
   Memory: 8.2M
   CGroup: /system.slice/harbor.service
           └─35753 /usr/local/bin/docker-compose -f /opt/harbor/docker-compose.yml up

Feb 14 18:16:05 k8s-master-node1 docker-compose[35753]: Container redis  Running
Feb 14 18:16:05 k8s-master-node1 docker-compose[35753]: Container harbor-portal  Running
Feb 14 18:16:05 k8s-master-node1 docker-compose[35753]: Container registry  Running
Feb 14 18:16:05 k8s-master-node1 docker-compose[35753]: Container harbor-core  Running
Feb 14 18:16:05 k8s-master-node1 docker-compose[35753]: Container nginx  Running
Feb 14 18:16:05 k8s-master-node1 docker-compose[35753]: Container harbor-jobservice  Running
Feb 14 18:16:05 k8s-master-node1 docker-compose[35753]: Attaching to harbor-core, harbor-db, harbor-jobservice, harbor-log, harbor-portal, nginx, redis, regist...gistryctl
Feb 14 18:16:06 k8s-master-node1 docker-compose[35753]: registry           | 172.18.0.8 - - [14/Feb/2023:10:16:06 +0000] "GET / HTTP/1.1" 200 0 "" "Go-http-client/1.1"
Feb 14 18:16:06 k8s-master-node1 docker-compose[35753]: registryctl        | 172.18.0.8 - - [14/Feb/2023:10:16:06 +0000] "GET /api/health HTTP/1.1" 200 9
Feb 14 18:16:06 k8s-master-node1 docker-compose[35753]: harbor-portal      | 172.18.0.8 - - [14/Feb/2023:10:16:06 +0000] "GET / HTTP/1.1" 200 532 "-" "Go-http-client/1.1"
Hint: Some lines were ellipsized, use -l to show in full.

Access the Harbor web UI at http://master_ip


Log in to Harbor using the administrator account (admin/Harbor12345)

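To verify the registry end to end, you can log in from Docker and push a test image (a sketch: it assumes Harbor's default library project, that master_ip is replaced with the real address, and that the Docker daemon trusts the registry, e.g. via an insecure-registries entry if HTTPS is not configured):

docker login master_ip -u admin -p Harbor12345
docker tag nginx:latest master_ip/library/nginx:v1
docker push master_ip/library/nginx:v1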

Commonly used Helm commands

View version information

[root@k8s-master-node1 ~]# helm version
version.BuildInfo{Version:"v3.7.1", GitCommit:"1d11fcb5d3f3bf00dbe6fe31b8412839a96b3dc4", GitTreeState:"clean", GoVersion:"go1.16.9"}

View currently installed Charts

[root@k8s-master-node1 ~]# helm list
NAME    NAMESPACE       REVISION        UPDATED STATUS  CHART   APP VERSION

Query Charts

# helm search repo <chart-name>

Query release status

# helm status RELEASE_NAME 

Create Charts

# helm create helm_charts

Delete a release

# helm delete RELEASE_NAME

Package a Chart

# cd helm_charts && helm package ./

View the generated yaml file

# helm template helm_charts-xxx.tgz
Basic example

Complete the deployment of the Harbor image registry and the Helm package management tool on the master node.

Use the nginx image to customize a Chart: the Deployment is named nginx with 1 replica. Then deploy the Chart to the default namespace with the release name web.

[root@k8s-master-node1 ~]# helm create mychart
Creating mychart
[root@k8s-master-node1 ~]# rm -rf mychart/templates/*
[root@k8s-master-node1 ~]# kubectl create deployment nginx --image=nginx --dry-run=client -o yaml > mychart/templates/deployment.yaml
[root@k8s-master-node1 ~]# vi mychart/templates/deployment.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
status: {}
[root@k8s-master-node1 ~]# helm install web mychart
NAME: web
LAST DEPLOYED: Tue Feb 14 18:43:57 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None

Execute the helm status web command on the master node to verify

[root@k8s-master-node1 ~]# helm status web
NAME: web
LAST DEPLOYED: Tue Feb 14 18:43:57 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
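
Besides helm status, you can confirm the Deployment created by the Chart and see the release with plain commands:

kubectl get deployment nginx
helm list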

Reset cluster

If the cluster deployment fails or malfunctions, reset the cluster and redeploy it.

[root@k8s-master-node1 ~]# kubeeasy reset

Origin blog.csdn.net/qq_52089863/article/details/129065387