【Cloud Native】Advanced Pod

1. Resource limitations

When defining a Pod, you can optionally specify the amount of resources each of its containers needs. The most commonly configured resources are CPU and memory, although other resource types exist.

When a resource request is specified for a container in a Pod, it represents the minimum amount of resources the container needs to run, and the scheduler uses this information to decide which node to place the Pod on. When a resource limit is also specified for the container, the kubelet ensures the running container never uses more than that limit. The kubelet also reserves the requested amount of resources for the container's use.

If the node where the Pod is running has enough available resources, a container may use more resources than its request. However, a container can never use more resources than its limit.

If a memory limit is set for a container but no memory request, Kubernetes automatically sets the memory request equal to the limit. Likewise, if a CPU limit is set for a container but no CPU request, Kubernetes automatically sets the CPU request equal to the CPU limit.
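As a sketch (hypothetical pod spec fragment), a container that declares only limits ends up with matching requests:

```yaml
# Hypothetical spec fragment: only limits are declared.
# Kubernetes defaults requests.cpu to 500m and requests.memory to 128Mi,
# equal to the limits below.
spec:
  containers:
  - name: app
    image: nginx
    resources:
      limits:
        cpu: "500m"
        memory: "128Mi"
```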

Official website example

https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/

Resource requests and limits for Pods and containers:

spec.containers[].resources.requests.cpu		//CPU pre-allocated to the container at creation
spec.containers[].resources.requests.memory		//memory pre-allocated to the container at creation
spec.containers[].resources.limits.cpu			//upper limit on CPU resources
spec.containers[].resources.limits.memory		//upper limit on memory resources

CPU resource units

  • CPU requests and limits are measured in cpu units. One CPU in Kubernetes is equivalent to 1 vCPU (1 hyperthread).
  • Kubernetes also supports fractional CPU requests. A container whose spec.containers[].resources.requests.cpu is 0.5 gets half of one CPU's resources (similar to cgroup time-slicing of CPU). Writing 0.1 is equivalent to writing 100m (millicores), which means the container may use 0.1 × 1000 = 100 milliseconds of CPU time per 1000 milliseconds.
  • Kubernetes does not allow CPU resources to be specified with a precision finer than 1m.
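The cores-to-millicores conversion is just a factor of 1000; a quick sanity check in shell (illustrative only, `cpu_to_millicores` is a made-up helper):

```shell
# Fractional CPUs and millicores are two notations for the same quantity:
# 1 CPU = 1000m, so 0.5 CPU = 500m and 0.25 CPU = 250m.
cpu_to_millicores() {
  awk -v c="$1" 'BEGIN { printf "%dm\n", c * 1000 }'
}

cpu_to_millicores 0.5    # prints 500m
cpu_to_millicores 0.25   # prints 250m
```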

Memory resource units

Memory requests and limits are measured in bytes. A value can be expressed as a plain integer, with a decimal (power-of-ten) suffix (E, P, T, G, M, k), or with a binary (power-of-two) suffix (Ei, Pi, Ti, Gi, Mi, Ki).
For example: 1kB = 10^3 = 1000 bytes, 1MB = 10^6 = 1,000,000 bytes = 1000 kB, 1GB = 10^9 = 1,000,000,000 bytes = 1000 MB
1KiB = 2^10 = 1024 bytes, 1MiB = 2^20 = 1,048,576 bytes = 1024 KiB
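The distinction matters when sizing limits: 128Mi is about 4.9% more than 128M. A quick shell check of the two conversions:

```shell
# Decimal (SI) vs binary (IEC) suffixes for memory quantities:
# 1M = 1000^2 bytes, 1Mi = 1024^2 bytes.
M=$((1000 * 1000))
Mi=$((1024 * 1024))
echo "$((128 * M)) bytes"    # 128M  = 128000000 bytes
echo "$((128 * Mi)) bytes"   # 128Mi = 134217728 bytes
```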

Example 1:

apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
  - name: app
    image: images.my-company.example/app:v4
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: "password"
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
  - name: log-aggregator
    image: images.my-company.example/log-aggregator:v6
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"

The Pod in this example has two containers. Each container requests 0.25 CPU and 64 MiB of memory and is limited to 0.5 CPU and 128 MiB of memory. The Pod's total resource request is therefore 0.5 CPU and 128 MiB of memory, and its total resource limit is 1 CPU and 256 MiB of memory.
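The Pod-level totals are just the sums of the per-container values; a quick check of the arithmetic for Example 1:

```shell
# Sum the per-container values from Example 1
# (two containers, each requesting 250m CPU / 64Mi, limited to 500m / 128Mi).
req_cpu=$((250 + 250)); req_mem=$((64 + 64))
lim_cpu=$((500 + 500)); lim_mem=$((128 + 128))
echo "requests: ${req_cpu}m CPU, ${req_mem}Mi memory"   # 500m, 128Mi
echo "limits:   ${lim_cpu}m CPU, ${lim_mem}Mi memory"   # 1000m (1 CPU), 256Mi
```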

Example 2:

vim pod2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
  - name: web
    image: nginx
    env:
    - name: WEB_ROOT_PASSWORD
      value: "password"
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
  - name: db
    image: mysql
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: "abc123"
    resources:
      requests:
        memory: "512Mi"
        cpu: "0.5"
      limits:
        memory: "1Gi"
        cpu: "1"

kubectl apply -f pod2.yaml
kubectl describe pod frontend

kubectl get pods -o wide
NAME       READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
frontend   2/2     Running   5          15m   10.244.2.4   node02   <none>           <none>

kubectl describe nodes node02				#Since this VM has 2 CPUs, the Pod's CPU limits account for 50% in total
Namespace                  Name                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                  ----                           ------------  ----------  ---------------  -------------  ---
  default                    frontend                       500m (25%)    1 (50%)     128Mi (3%)       256Mi (6%)     16m
  kube-system                kube-flannel-ds-amd64-f4pbp    100m (5%)     100m (5%)   50Mi (1%)        50Mi (1%)      19h
  kube-system                kube-proxy-pj4wp               0 (0%)        0 (0%)      0 (0%)           0 (0%)         19h
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                600m (30%)  1100m (55%)
  memory             178Mi (4%)  306Mi (7%)
  ephemeral-storage  0 (0%)      0 (0%)


2. Restart strategy

  • When a container in a Pod exits, the kubelet on the node restarts it according to the Pod's restart policy. The policy applies to all containers in the Pod.

1. Always: whenever the container terminates, always restart it. This is the default policy.
2. OnFailure: restart the container only when it exits abnormally (non-zero exit status code); do not restart it if it exits normally.
3. Never: never restart the container, regardless of how it terminates.
#Note: K8S does not support restarting a Pod resource in place; a Pod can only be deleted and recreated.
When creating Deployment and StatefulSet resources with YAML, restartPolicy can only be Always. When creating a Pod with kubectl run, any of the three policies (Always, OnFailure, Never) can be chosen.

kubectl edit deployment nginx-deployment
......
  restartPolicy: Always


//Example
vim pod3.yaml
apiVersion: v1
kind: Pod
metadata:
  name: foo
spec:
  containers:
  - name: busybox
    image: busybox
    args:
    - /bin/sh
    - -c
    - sleep 30; exit 3


kubectl apply -f pod3.yaml

//Check the Pod status: 30 seconds after the container starts, exit 3 terminates the process with an error state, and the restart count increases by 1
kubectl get pods
NAME                              READY   STATUS             RESTARTS   AGE
foo                               1/1     Running            1          50s


kubectl delete -f pod3.yaml

vim pod3.yaml
apiVersion: v1
kind: Pod
metadata:
  name: foo
spec:
  containers:
  - name: busybox
    image: busybox
    args:
    - /bin/sh
    - -c
    - sleep 30; exit 3
  restartPolicy: Never
#Note: restartPolicy is at the same level as containers

kubectl apply -f pod3.yaml

//When the container enters the error state, it is not restarted
kubectl get pods -w


3. Health checks, also called probes

  • Probes are periodic diagnostics performed on the container by the kubelet.

Three types of probes:

●livenessProbe: determines whether the container is running. If the probe fails, the kubelet kills the container, and the container is then handled according to its restartPolicy. If a container does not provide a liveness probe, the default state is Success.

●readinessProbe: determines whether the container is ready to accept requests. If the probe fails, the endpoints controller removes the Pod's IP address from the endpoints of all Services matching the Pod. Before the initial delay, the readiness state defaults to Failure. If a container does not provide a readiness probe, the default state is Success.

●startupProbe (added in version 1.17): determines whether the application inside the container has started; it is mainly intended for applications whose startup time cannot be known in advance. If a startupProbe is configured, all other probes are inactive until its state is Success; none of them take effect until it succeeds. If the startupProbe fails, the kubelet kills the container, and the container is restarted according to its restartPolicy. If no startupProbe is configured, the default state is Success.

  • Note: the probes above can be defined at the same time. Until the readinessProbe succeeds, a running Pod will not transition to the ready state.
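The examples below cover liveness and readiness probes but not startupProbe; a hypothetical sketch for a slow-starting application (the image and path are placeholders, not from this article's cluster) could look like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: slow-start
spec:
  containers:
  - name: app
    image: soscscs/myapp:v1        # placeholder image
    ports:
    - containerPort: 80
    startupProbe:
      httpGet:
        path: /index.html          # assumed health endpoint
        port: 80
      failureThreshold: 30         # allows up to 30 * 10 = 300s for startup
      periodSeconds: 10
    livenessProbe:                 # inactive until the startupProbe succeeds
      httpGet:
        path: /index.html
        port: 80
      periodSeconds: 10
```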

Probes support three check methods:

●exec: execute a specified command inside the container. If the command exits with return code 0, the diagnostic is considered successful.

●tcpSocket: perform a TCP check (three-way handshake) against the container's IP address on a specified port. If the port is open, the diagnostic is considered successful.

●httpGet: perform an HTTP GET request against the container's IP address on a specified port and URI path. If the response status code is greater than or equal to 200 and less than 400, the diagnostic is considered successful.
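The httpGet success rule (status code ≥ 200 and < 400) can be sketched as a simple check; this only illustrates the rule and is not kubelet code:

```shell
# Classify an HTTP status code the way an httpGet probe does:
# 200 <= code < 400 means success; anything else is failure.
probe_result() {
  if [ "$1" -ge 200 ] && [ "$1" -lt 400 ]; then
    echo Success
  else
    echo Failure
  fi
}

probe_result 200   # prints Success
probe_result 302   # prints Success (redirects count as success)
probe_result 404   # prints Failure
```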

Each probe has one of three results:

●Success: the container passed the diagnostic.
●Failure: the container failed the diagnostic.
●Unknown: the diagnostic did not complete normally.

Official website example:

https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/

3.1 Example 1: exec mode

apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 60
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      failureThreshold: 1
      initialDelaySeconds: 5
      periodSeconds: 5

#initialDelaySeconds: tells the kubelet to wait 5 seconds before performing the first probe, i.e., probing starts about 5 seconds after the container starts. Default is 0 seconds; minimum is 0.
#periodSeconds: tells the kubelet to perform a liveness probe every 5 seconds. Default is 10 seconds; minimum is 1.
#failureThreshold: the number of consecutive probe failures after which Kubernetes gives up. Giving up on a liveness probe means restarting the container; giving up on a readiness probe means marking the Pod not ready. Default is 3; minimum is 1.
#timeoutSeconds: the number of seconds after which the probe times out. Default is 1 second; minimum is 1. (Before Kubernetes 1.20, exec probes ignored timeoutSeconds: the probe kept running indefinitely, possibly past its configured deadline, until a result was returned.)

The Pod has a single container. The kubelet waits 5 seconds before performing the first probe, and then runs a liveness probe every 5 seconds by executing the command cat /tmp/healthy inside the container. As long as the command succeeds and returns 0, the kubelet considers the container healthy and alive. From the 31st second onward the command returns a non-zero value, so the kubelet kills the container and restarts it.

vim exec.yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec
  namespace: default
spec:
  containers:
  - name: liveness-exec-container
    image: busybox
    imagePullPolicy: IfNotPresent
    command: ["/bin/sh","-c","touch /tmp/live ; sleep 30; rm -rf /tmp/live; sleep 3600"]
    livenessProbe:
      exec:
        command: ["test","-e","/tmp/live"]
      initialDelaySeconds: 1
      periodSeconds: 3
	  
kubectl create -f exec.yaml

kubectl describe pods liveness-exec
Events:
  Type     Reason     Age               From               Message
  ----     ------     ----              ----               -------
  Normal   Scheduled  51s               default-scheduler  Successfully assigned default/liveness-exec-pod to node02
  Normal   Pulled     46s               kubelet, node02    Container image "busybox" already present on machine
  Normal   Created    46s               kubelet, node02    Created container liveness-exec-container
  Normal   Started    45s               kubelet, node02    Started container liveness-exec-container
  Warning  Unhealthy  8s (x3 over 14s)  kubelet, node02    Liveness probe failed:
  Normal   Killing    8s                kubelet, node02    Container liveness-exec-container failed liveness probe,will be restarted

kubectl get pods -w
NAME                READY   STATUS    RESTARTS   AGE
liveness-exec       1/1     Running   1          85s

3.2 Example 2: httpGet method

apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness
    args:
    - /server
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
        httpHeaders:
        - name: Custom-Header
          value: Awesome
      initialDelaySeconds: 3
      periodSeconds: 3

In this configuration, the Pod has a single container. The initialDelaySeconds field tells the kubelet to wait 3 seconds before the first probe, and the periodSeconds field makes it run a liveness probe every 3 seconds. The kubelet probes by sending an HTTP GET request to the service running in the container (listening on port 8080). If the handler for the /healthz path returns a success code, the kubelet considers the container healthy; if it returns a failure code, the kubelet kills the container and restarts it.

Any return code greater than or equal to 200 and less than 400 indicates success; any other return code indicates failure.

vim httpget.yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-httpget
  namespace: default
spec:
  containers:
  - name: liveness-httpget-container
    image: soscscs/myapp:v1
    imagePullPolicy: IfNotPresent
    ports:
    - name: http
      containerPort: 80
    livenessProbe:
      httpGet:
        port: http
        path: /index.html
      initialDelaySeconds: 1
      periodSeconds: 3
      timeoutSeconds: 10
	  
kubectl create -f httpget.yaml

kubectl exec -it liveness-httpget -- rm -rf /usr/share/nginx/html/index.html

kubectl get pods
NAME               READY   STATUS    RESTARTS   AGE
liveness-httpget   1/1     Running   1          2m44s

3.3 Example 3: tcpSocket method

apiVersion: v1
kind: Pod
metadata:
  name: goproxy
  labels:
    app: goproxy
spec:
  containers:
  - name: goproxy
    image: k8s.gcr.io/goproxy:0.1
    ports:
    - containerPort: 8080
    readinessProbe:
      tcpSocket:
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
    livenessProbe:
      tcpSocket:
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 20

This example uses both a readinessProbe and a livenessProbe. The kubelet sends the first readiness probe 5 seconds after the container starts, attempting to connect to port 8080 of the goproxy container. If the probe succeeds, the kubelet continues probing every 10 seconds. The configuration also includes a livenessProbe: the kubelet performs the first liveness probe 15 seconds after the container starts and, like the readiness probe, tries to connect to port 8080 of the goproxy container. If the liveness probe fails, the container is restarted.

vim tcpsocket.yaml
apiVersion: v1
kind: Pod
metadata:
  name: probe-tcp
spec:
  containers:
  - name: nginx
    image: soscscs/myapp:v1
    livenessProbe:
      initialDelaySeconds: 5
      timeoutSeconds: 1
      tcpSocket:
        port: 8080
      periodSeconds: 10
      failureThreshold: 2

kubectl create -f tcpsocket.yaml

kubectl exec -it probe-tcp  -- netstat -natp
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      1/nginx: master pro

kubectl get pods -w
NAME        READY   STATUS    RESTARTS   AGE
probe-tcp   1/1     Running             0          1s
probe-tcp   1/1     Running             1          25s       #first restart: init (5s) + period (10s) × 2
probe-tcp   1/1     Running             2          45s       #second restart: period (10s) + period (10s), i.e., two retries
probe-tcp   1/1     Running             3          65s

3.4 Example 4: Readiness detection

vim readiness-httpget.yaml
apiVersion: v1
kind: Pod
metadata:
  name: readiness-httpget
  namespace: default
spec:
  containers:
  - name: readiness-httpget-container
    image: soscscs/myapp:v1
    imagePullPolicy: IfNotPresent
    ports:
    - name: http
      containerPort: 80
    readinessProbe:
      httpGet:
        port: 80
        path: /index1.html
      initialDelaySeconds: 1
      periodSeconds: 3
    livenessProbe:
      httpGet:
        port: http
        path: /index.html
      initialDelaySeconds: 1
      periodSeconds: 3
      timeoutSeconds: 10

kubectl create -f readiness-httpget.yaml

The readiness probe fails, so the Pod cannot enter the READY state:

kubectl get pods 
NAME                READY   STATUS    RESTARTS   AGE
readiness-httpget   0/1     Running   0          18s

kubectl exec -it readiness-httpget -- sh
 # cd /usr/share/nginx/html/
 # ls
50x.html    index.html
 # echo 123 > index1.html 
 # exit

kubectl get pods 
NAME                READY   STATUS    RESTARTS   AGE
readiness-httpget   1/1     Running   0          2m31s

kubectl exec -it readiness-httpget -- rm -rf /usr/share/nginx/html/index.html

kubectl get pods -w
NAME                READY   STATUS    RESTARTS   AGE
readiness-httpget   1/1     Running   0          4m10s
readiness-httpget   0/1     Running   1          4m15s

3.5 Example 5: Readiness Detection 2

vim readiness-myapp.yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp1
  labels:
     app: myapp
spec:
  containers:
  - name: myapp
    image: soscscs/myapp:v1
    ports:
    - name: http
      containerPort: 80
    readinessProbe:
      httpGet:
        port: 80
        path: /index.html
      initialDelaySeconds: 5
      periodSeconds: 5
      timeoutSeconds: 10 
---
apiVersion: v1
kind: Pod
metadata:
  name: myapp2
  labels:
     app: myapp
spec:
  containers:
  - name: myapp
    image: soscscs/myapp:v1
    ports:
    - name: http
      containerPort: 80
    readinessProbe:
      httpGet:
        port: 80
        path: /index.html
      initialDelaySeconds: 5
      periodSeconds: 5
      timeoutSeconds: 10 
---
apiVersion: v1
kind: Pod
metadata:
  name: myapp3
  labels:
     app: myapp
spec:
  containers:
  - name: myapp
    image: soscscs/myapp:v1
    ports:
    - name: http
      containerPort: 80
    readinessProbe:
      httpGet:
        port: 80
        path: /index.html
      initialDelaySeconds: 5
      periodSeconds: 5
      timeoutSeconds: 10 
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  type: ClusterIP
  ports:
  - name: http
    port: 80
    targetPort: 80

kubectl create -f readiness-myapp.yaml

kubectl get pods,svc,endpoints -o wide
NAME         READY   STATUS    RESTARTS   AGE     IP            NODE     NOMINATED NODE   READINESS GATES
pod/myapp1   1/1     Running   0          3m42s   10.244.2.13   node02   <none>           <none>
pod/myapp2   1/1     Running   0          3m42s   10.244.1.15   node01   <none>           <none>
pod/myapp3   1/1     Running   0          3m42s   10.244.2.14   node02   <none>           <none>

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE     SELECTOR
......
service/myapp        ClusterIP   10.96.138.13   <none>        80/TCP    3m42s   app=myapp

NAME                   ENDPOINTS                                      AGE
......
endpoints/myapp        10.244.1.15:80,10.244.2.13:80,10.244.2.14:80   3m42s


kubectl exec -it pod/myapp1 -- rm -rf /usr/share/nginx/html/index.html
  • The readiness probe fails, the Pod cannot enter the READY state, and the endpoints controller removes the Pod's IP address from the endpoints.
kubectl get pods,svc,endpoints -o wide
NAME         READY   STATUS    RESTARTS   AGE     IP            NODE     NOMINATED NODE   READINESS GATES
pod/myapp1   0/1     Running   0          5m17s   10.244.2.13   node02   <none>           <none>
pod/myapp2   1/1     Running   0          5m17s   10.244.1.15   node01   <none>           <none>
pod/myapp3   1/1     Running   0          5m17s   10.244.2.14   node02   <none>           <none>

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE     SELECTOR
......
service/myapp        ClusterIP   10.96.138.13   <none>        80/TCP    5m17s   app=myapp

NAME                   ENDPOINTS                       AGE
......
endpoints/myapp        10.244.1.15:80,10.244.2.14:80   5m17s

4. Start and exit actions

vim post.yaml
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo
spec:
  containers:
  - name: lifecycle-demo-container
    image: soscscs/myapp:v1
    lifecycle:   #this is the key field
      postStart:
        exec:
          command: ["/bin/sh", "-c", "echo Hello from the postStart handler >> /var/log/nginx/message"]      
      preStop:
        exec:
          command: ["/bin/sh", "-c", "echo Hello from the poststop handler >> /var/log/nginx/message"]
    volumeMounts:
    - name: message-log
      mountPath: /var/log/nginx/
      readOnly: false
  initContainers:
  - name: init-myservice
    image: soscscs/myapp:v1
    command: ["/bin/sh", "-c", "echo 'Hello initContainers'   >> /var/log/nginx/message"]
    volumeMounts:
    - name: message-log
      mountPath: /var/log/nginx/
      readOnly: false
  volumes:
  - name: message-log
    hostPath:
      path: /data/volumes/nginx/log/
      type: DirectoryOrCreate

kubectl create -f post.yaml

kubectl get pods -o wide
NAME             READY   STATUS    RESTARTS   AGE    IP            NODE     NOMINATED NODE   READINESS GATES
lifecycle-demo   1/1     Running   0          2m8s   10.244.2.28   node02   <none>           <none>

kubectl exec -it lifecycle-demo -- cat /var/log/nginx/message
Hello initContainers
Hello from the postStart handler

Check on the node02 node:

[root@node02 ~]# cd /data/volumes/nginx/log/
[root@node02 log]# ls
access.log  error.log  message
[root@node02 log]# cat message 
Hello initContainers
Hello from the postStart handler
#As shown above, the init container runs first; then, as soon as the main container starts, Kubernetes immediately sends the postStart event.

//After deleting the Pod, check again on the node02 node
kubectl delete pod lifecycle-demo

[root@node02 log]# cat message 
Hello initContainers
Hello from the postStart handler
Hello from the poststop handler
#As shown above, Kubernetes sends a preStop event before the container is terminated.





Origin blog.csdn.net/wang_dian1/article/details/132192094