[Cloud Native] Comprehensive list of commonly used kubectl commands

Table of contents

1. Resource management methods

 kubectl command list

2. Comprehensive list of commonly used kubectl commands

2.2 Project life cycle: Create-->Publish-->Update-->Rollback-->Delete

1. Create kubectl create command

2. Publish kubectl expose command

3. Update kubectl set

4. Roll back kubectl rollout 

5. Delete kubectl delete

3. Declarative management method


1. Resource management methods

① Imperative resource management method (via the command line)

1. The only entry point for managing cluster resources in a Kubernetes cluster is to call the apiserver interface in the appropriate way.
2. kubectl is the official CLI command-line tool used to communicate with the apiserver. It organizes and converts the commands entered by the user on the command line into information that the apiserver can recognize, providing an effective way to manage various k8s resources.
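For example, typical imperative operations are single kubectl commands issued directly on the command line (the pod name nginx-test below is illustrative):

kubectl run nginx-test --image=nginx      # create a Pod
kubectl get pods                          # query resources
kubectl delete pod nginx-test             # delete the Pod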

② Declarative resource management method (via yaml file)

1. Suitable for modifying resources.
2. The declarative resource management method relies on resource configuration manifest files to manage resources. A manifest file has two formats: yaml (human-friendly, easy to read) and json (easy to parse through the API interface).
3. Resources are defined in advance in a unified resource configuration manifest and then applied to the k8s cluster with declarative commands.
4. Syntax format: kubectl create/apply/delete -f xxxx.yaml
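As a minimal sketch (the deployment name nginx-test and the image tag are illustrative), such a manifest might look like the following and would be applied with kubectl apply -f nginx-deploy.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-test
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-test
  template:
    metadata:
      labels:
        app: nginx-test
    spec:
      containers:
      - name: nginx
        image: nginx:1.14        # illustrative image tag
        ports:
        - containerPort: 80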

 kubectl command list

k8s Chinese documentation: http://docs.kubernetes.org.cn/683.html

2. Comprehensive list of commonly used kubectl commands

The imperative commands below are convenient for adding, deleting, and querying resources, but are not well suited for modifying them.

View version information

kubectl version

View resource object abbreviation

kubectl api-resources

View cluster information

kubectl cluster-info

Configure kubectl auto-completion

source <(kubectl completion bash)    # temporary (current shell only)

vim /etc/bashrc
# add the following line at the end of the file
source <(kubectl completion bash)

bash    # start a new shell for the change to take effect

View kubelet logs on a Node

journalctl -u kubelet -f

View basic information

kubectl get <resource> [-o wide|json|yaml] [-n namespace]

Obtain resource-related information. -n specifies the namespace, -o specifies the output format.
resource can be a specific resource name, such as pod nginx-xxx; a resource type, such as pod; or all (only a few core resource types are shown, so the output is incomplete).
--all-namespaces or -A: display resources in all namespaces
--show-labels: display all labels
-l app: only display resources that have the label app
-l app=nginx: only display resources whose app label has the value nginx
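For example (the label value nginx is illustrative):

kubectl get pods -n kube-public -o wide --show-labels
kubectl get pods -A -l app=nginx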

Check the master node status
kubectl get componentstatuses
kubectl get cs

View namespaces
kubectl get namespace
kubectl get ns

The role of a namespace: it allows resources of the same type in different namespaces to have the same name
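For example (the namespaces test1/test2 and the deployment name web are illustrative), two deployments with the same name can coexist as long as they are in different namespaces:

kubectl create ns test1
kubectl create ns test2
kubectl create deployment web --image=nginx -n test1
kubectl create deployment web --image=nginx -n test2
kubectl get deployments -A | grep web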

View all resources in the default namespace
kubectl get all [-n default]

Create namespace app
kubectl create ns app
kubectl get ns

Delete namespace app
kubectl delete namespace app
kubectl get ns            

Create a replica controller (deployment) in the namespace kube-public to start the Pod (nginx-wl)
kubectl create deployment nginx-wl --image=nginx -n kube-public

Describe the details of a resource
kubectl describe deployment nginx-wl -n kube-public
kubectl describe pod nginx-wl-d47f99cb6-hv6gz -n kube-public

View pod information in the namespace kube-public
kubectl get pods -n kube-public
NAME                       READY   STATUS    RESTARTS   AGE
nginx-wl-d47f99cb6-hv6gz   1/1     Running   0          24m

kubectl exec can log in to a container across hosts, while docker exec can only log in to containers on the host where it is run.
kubectl exec -it nginx-wl-d47f99cb6-hv6gz -n kube-public -- bash

Delete (i.e. restart) a pod resource. Because a replica controller such as a deployment/rc exists, the deleted pod is immediately recreated, which is equivalent to restarting it.
kubectl delete pod nginx-wl-d47f99cb6-hv6gz -n kube-public

If a pod cannot be deleted and stays in the Terminating state, you can force delete it
kubectl delete pod <pod-name> -n <namespace> --force --grace-period=0
#--grace-period specifies the graceful termination period (default 30s), which allows the pod to finish terminating the container processes running on it before it is deleted, so that it exits gracefully. 0 means terminate the pod immediately.

Expand and shrink
kubectl scale deployment nginx-wl --replicas=2 -n kube-public # Expand
kubectl scale deployment nginx-wl --replicas=1 -n kube-public # Shrink

Delete the replica controller
kubectl delete deployment nginx-wl -n kube-public
kubectl delete deployment/nginx-wl -n kube-public

2.2 Project life cycle: Create-->Publish-->Update-->Rollback-->Delete

1. Create kubectl create command

Create and run one or more container images.
Create a deployment or job to manage containers.
kubectl create --help

//Start the nginx instance, expose the container port 80, and set the number of replicas to 3
kubectl create deployment nginx --image=nginx:1.14 --port=80 --replicas=3

kubectl get pods
kubectl get all


2. Publish kubectl expose command

Expose resources as new Services.
kubectl expose --help

Create a Service for the nginx deployment, forwarding the Service's port 80 to the containers' port 80. The Service is named nginx-service and its type is NodePort.
kubectl expose deployment nginx --port=80 --target-port=80 --name=nginx-service --type=NodePort

The reason why Kubernetes requires Service is that, on the one hand, the IP of the Pod is not fixed (the Pod may be rebuilt), and on the other hand, there is always a need for load balancing between a group of Pod instances.
Service implements access to a group of Pods through Label Selector.
For container applications, Kubernetes provides a VIP (virtual IP)-based bridge method to access the Service, and then the Service redirects to the corresponding Pod.

Type of service:
●ClusterIP: Provides a virtual IP within the cluster for Pod access (default type of service)

●NodePort: Opens the same port on every Node for external access. Kubernetes opens a port on each Node, and the port number is the same on every Node, so programs outside the Kubernetes cluster can access the Service via NodeIP:NodePort (see the manifest sketch after this list).
Each NodePort can be used by only one Service, and the port range is 30000-32767 by default.

●LoadBalancer: Maps the Service to a load balancer address provided by a cloud service provider. This type is only used when the Service runs on a public cloud provider's platform and is accessed through the external load balancer; deploying a LoadBalancer on a cloud platform usually incurs additional cost.
After the Service is submitted, Kubernetes calls the CloudProvider to create a load balancing service on the public cloud and configures the IP addresses of the proxied Pods as backends of the load balancer.

●externalName: Maps the service name to a DNS domain name, which is equivalent to the CNAME record of the DNS service. It is used to allow the Pod to access resources outside the cluster. It does not bind any resources itself.

●Headless: a special ClusterIP Service with clusterIP set to None (headless mode); no virtual IP is allocated, and DNS resolves the Service name directly to the Pod IPs.
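As a minimal sketch, a NodePort Service manifest equivalent to the kubectl expose command above might look like the following (the selector assumes the Pods carry the run=nginx label, as in the sample output below; the nodePort value 30080 is illustrative, and if it is omitted a port is allocated automatically from the 30000-32767 range):

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    run: nginx          # must match the labels on the nginx Pods
  ports:
  - port: 80            # port of the Service inside the cluster
    targetPort: 80      # container port
    nodePort: 30080     # port opened on every Node (illustrative)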

View the pod network status details and the ports exposed by the Service
kubectl get pods,svc -o wide
NAME                        READY   STATUS    RESTARTS   AGE   IP            NODE            NOMINATED NODE
pod/nginx-cdb6b5b95-fjm2x   1/1     Running   0          44s   172.17.26.3   192.168.80.11   <none>
pod/nginx-cdb6b5b95-g28wz   1/1     Running   0          44s   172.17.36.3   192.168.80.12   <none>
pod/nginx-cdb6b5b95-x4m24   1/1     Running   0          44s   172.17.36.2   192.168.80.12   <none>

NAME                    TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE   SELECTOR
service/kubernetes      ClusterIP   10.0.0.1     <none>        443/TCP        14d   <none>
service/nginx-service   NodePort    10.0.0.189   <none>        80:44847/TCP   18s   run=nginx

View the nodes associated with the backend
kubectl get endpoints

View the description information of the service
kubectl describe svc nginx

Operate on the node01 node and check the load balancing port
yum install ipvsadm -y
ipvsadm -Ln
//IP and port for external access
TCP 192.168.80.11:44847 rr
  -> 172.17.26.3:80 Masq 1 0 0
  -> 172.17.36.2:80 Masq 1 0 0
  -> 172.17.36.3:80 Masq 1 0 0
//IP and port accessed from within the cluster
TCP 10.0.0.189:80 rr
  -> 172.17.26.3:80 Masq 1 0 0
  -> 172.17.36.2:80 Masq 1 0 0
  -> 172.17.36.3:80 Masq 1 0 0

Operate on the node02 node and check the load balancing port in the same way:
yum install ipvsadm -y
ipvsadm -Ln
TCP 192.168.80.12:44847 rr
  -> 172.17.26.3:80 Masq 1 0 0         
  -> 172.17.36.2:80 Masq 1 0 0         
  -> 172.17.36.3:80 Masq 1 0 0

TCP 10.0.0.189:80 rr
  -> 172.17.26.3:80 Masq 1 0 0         
  -> 172.17.36.2:80 Masq 1 0 0         
  -> 172.17.36.3:80 Masq 1 0 0         

curl 10.0.0.189
curl 192.168.80.11:44847
//Operate on master01 to view the access logs
kubectl logs nginx-cdb6b5b95-fjm2x
kubectl logs nginx-cdb6b5b95-g28wz
kubectl logs nginx-cdb6b5b95-x4m24


3. Update kubectl set

Change some information about existing application resources.
kubectl set --help

View the help template for modifying an image
kubectl set image --help
Examples:

Set a deployment's nginx container image to 'nginx:1.9.1', and its busybox container image to 'busybox'.

  kubectl set image deployment/nginx busybox=busybox nginx=nginx:1.9.1

Check the current nginx version number
curl -I http://192.168.80.11:44847
curl -I http://192.168.80.12:44847

Update nginx version to version 1.15
kubectl set image deployment/nginx nginx=nginx:1.15

Dynamically watch the pod status. Since the rolling update method is used, a new pod is created first, then an old pod is deleted, and so on.
kubectl get pods -w
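As a minimal sketch, the rolling update rhythm is controlled by the update strategy fields in the Deployment spec (the values shown here are the Kubernetes defaults):

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%          # at most 25% extra Pods may be created above the desired replica count
      maxUnavailable: 25%    # at most 25% of the desired Pods may be unavailable during the update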

Look at the IP address of the updated Pod again.
kubectl get pods -o wide

Look at the nginx version number
curl -I http://192.168.80.11:44847
curl -I http://192.168.80.12:44847


4. Roll back kubectl rollout 

Rollback management of resources
kubectl rollout --help

View historical versions
kubectl rollout history deployment/nginx 

Perform rollback to the previous version
kubectl rollout undo deployment/nginx

Execute rollback to the specified version
kubectl rollout undo deployment/nginx --to-revision=1

Check rollback status
kubectl rollout status deployment/nginx


5. Delete kubectl delete

//Delete the replica controller
kubectl delete deployment/nginx

//Delete service
kubectl delete svc/nginx-service

kubectl get all

Canary Release

The Deployment controller supports custom control over the rhythm of a rolling update, such as "pausing" or "resuming" the update operation. For example, the update can be paused right after the first batch of new Pod resources is created; at this point only part of the application runs the new version, while the majority still runs the old version. A small portion of user requests is then routed to the new-version Pods while you observe whether they run stably and as expected. Once confirmed to be problem-free, the remaining Pod resources continue the rolling update; otherwise the update is rolled back immediately. This is called a canary release.
(1) Update the deployment's image version and pause the deployment
kubectl set image deployment/nginx nginx=nginx:1.14 && kubectl rollout pause deployment/nginx

kubectl rollout status deployment/nginx #Observe the update status

(2) Watch the update process. You can see that one new pod resource has been added, but an old one has not been deleted as expected; this is because of the pause command.
kubectl get pods -w

curl [-I] 10.0.0.189
curl [-I] 192.168.80.11:44847

(3) Make sure there is no problem with the updated pod, and continue to update
kubectl rollout resume deployment/nginx

(4) Check the last update status
kubectl get pods -w 

curl [-I] 10.0.0.189
curl [-I] 192.168.80.11:44847

3. Declarative management method

View the resource configuration manifest
kubectl get deployment nginx -o yaml

Explain resource configuration manifest
kubectl explain deployment.metadata

kubectl get service nginx -o yaml
kubectl explain service.metadata

Modify the resource configuration manifest and apply it
Offline modification:
Modify the yaml file, then run kubectl apply -f xxxx.yaml to make it take effect.
Note: If apply does not take effect, first use kubectl delete to remove the resource, and then apply to recreate it.

kubectl get service nginx -o yaml > nginx-svc.yaml
vim nginx-svc.yaml                # change port: 8080
kubectl delete -f nginx-svc.yaml
kubectl apply -f nginx-svc.yaml
kubectl get svc

Online modification:
Use kubectl edit service nginx directly to edit the resource configuration online; saving and exiting makes the change take effect immediately (for example, port: 888).
PS: This modification method does not change the content of the local yaml file.


//Delete resources
Imperative deletion:
kubectl delete service nginx

Declarative deletion:
kubectl delete -f nginx-svc.yaml


Origin blog.csdn.net/m0_71888825/article/details/132799511