In previous articles we mainly covered how to use Docker to manage and operate containers, and how to avoid common pitfalls when getting started with k8s. Starting with this article, we will continue learning the modules of k8s.
What is a pod?
Inside k8s there are many new technical concepts, one of which is the pod. The pod is the smallest unit of operation and deployment in a k8s cluster. Its design concept is that one pod can carry multiple containers, which share a network address and file system and can access each other through inter-process communication.
The official diagram illustrates this:
Replication Controller (RC)
Usually a k8s cluster contains hundreds or even thousands of pods, so there needs to be a mechanism for managing them. Inside k8s this is handled by the RC (Replication Controller).
The main job of RC is to monitor the number of pods. When we start a pod and want multiple replicas of it, RC can be used to control how many are started; if some of the pods die in the meantime, RC will restart them automatically.
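For illustration, an RC definition that keeps a fixed number of pod replicas running might look roughly like this sketch (the name, image tag, and replica count are assumptions for the example, not from the original article):

```yaml
# Hypothetical mysql-rc.yaml: an RC that keeps 2 mysql pods running.
apiVersion: v1
kind: ReplicationController
metadata:
  name: mysql-rc
spec:
  replicas: 2          # RC restarts pods as needed to hold this count
  selector:
    app: mysql         # pods matching this label are managed by the RC
  template:            # template used to create replacement pods
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:5.7
          ports:
            - containerPort: 3306
```

If one of the two pods crashes, the RC notices the count has dropped below `replicas` and creates a replacement from the template.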
Common pod operations inside k8s
1. In practice, you may encounter the situation below, where a pod fails to start:
[root@localhost ~]# kubectl get pods
NAME                                READY   STATUS              RESTARTS   AGE
mysql-rc-p8blq                      0/1     ErrImagePull        0          16h
nginx                               1/1     Running             0          29h
nginx-deployment-54f57cf6bf-f9p92   0/1     ContainerCreating   0          77s
nginx-deployment-54f57cf6bf-mqq7x   0/1     ImagePullBackOff    0          18m
nginx-deployment-9f46bb5-kwxwh      0/1     ImagePullBackOff    0          13m
tomcat-565cd88dc7-qlqtk             1/1     Running             0          2d3h
This situation can occur when a pod fails to start. How do we terminate the corresponding pods?
The containers can be deleted with the following commands:
[root@localhost k8s]# kubectl delete -f ./mysql-rc.yaml
replicationcontroller "mysql-rc" deleted
[root@localhost k8s]# kubectl delete -f ./mysql-svc.yaml
service "mysql-svc" deleted
[root@localhost k8s]# kubectl delete -f ./nginx-deployment.yaml
deployment.apps "nginx-deployment" deleted
[root@localhost k8s]# kubectl get pods
NAME                      READY   STATUS    RESTARTS   AGE
nginx                     1/1     Running   0          29h
tomcat-565cd88dc7-qlqtk   1/1     Running   0          2d3h
2. How to run a pod with a single container
kubectl run example --image=nginx
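For reference, the one-line command above corresponds roughly to applying a pod manifest like this sketch (the label is an assumption):

```yaml
# Roughly equivalent to: kubectl run example --image=nginx
apiVersion: v1
kind: Pod
metadata:
  name: example
  labels:
    run: example
spec:
  containers:
    - name: example
      image: nginx
```

Saving this as a file and running `kubectl create -f` on it produces a comparable single-container pod, with the advantage that the definition can be versioned and reused.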
3. View the details of a pod
[root@localhost k8s]# kubectl get pod nginx -o wide
NAME    READY   STATUS    RESTARTS   AGE   IP           NODE       NOMINATED NODE   READINESS GATES
nginx   1/1     Running   0          29h   172.17.0.7   minikube   <none>           <none>
4. View the detailed description of a pod
[root@localhost k8s]# kubectl describe pod nginx
Name:         nginx
Namespace:    default
Priority:     0
Node:         minikube/10.1.10.51
Start Time:   Mon, 02 Dec 2019 09:49:28 +0800
Labels:       app=pod-example
Annotations:  <none>
Status:       Running
IP:           172.17.0.7
IPs:
  IP:  172.17.0.7
Containers:
  web:
    Container ID:   docker://53d066b49233d17724b8fd0d5a4d6a963f33e6ea4e0805beb7745ee267683ed8
    Image:          nginx
    Image ID:       docker-pullable://nginx@sha256:50cf965a6e08ec5784009d0fccb380fc479826b6e0e65684d9879170a9df8566
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Mon, 02 Dec 2019 09:51:22 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-7mltd (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-7mltd:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-7mltd
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:          <none>
5. Replace a pod's configuration with the corresponding rules file
kubectl replace --force -f <rules-file>
6. Suppose that while operating a pod you accidentally entered the wrong image address, causing the pod to fail to start. In that case we can delete the pod on that node machine. For example:
To delete the pod named nginx:
[root@localhost k8s]# kubectl delete pod nginx
pod "nginx" deleted
7. Delete the deployment information on a machine:
[root@localhost k8s]# kubectl delete deployment tomcat
deployment.apps "tomcat" deleted
8. Starting a pod with multiple containers
When we start a pod with multiple containers, it is best to use the create command, and it is best to manage the containers being launched with a yaml (or json format) file at creation time:
kubectl create -f FILE
A basic yaml file for starting a pod usually looks like this:
apiVersion: v1
kind: Pod
metadata:
  name: rss-site
  labels:
    app: web
spec:
  containers:
    - name: container-1
      image: image-name-1
      ports:
        - containerPort: 80
    - name: container-2
      image: image-name-2
      ports:
        - containerPort: 88
If you want multiple docker containers started in the pod, just write one more name/image/ports configuration block like the ones above.
When starting a pod in k8s, if you find that pulling the image fails repeatedly, it is usually because the image source address is wrong; in that case you need to reset the docker registry mirror address:
# vi /etc/docker/daemon.json
{
  "registry-mirrors": ["http://hub-mirror.c.163.com"]
}
# systemctl restart docker.service
After editing this json file, restart docker for the change to take effect.
After writing the yaml file, run the kubectl create -f FILE command again.
Finally, verify the status of the pod with the kubectl get pod command.
Similarly, if we need the pod to load a java program, say a springboot application, we simply package the springboot application into a docker image and then reference that image in the yaml configuration.
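As an illustration of that packaging step, a hypothetical Dockerfile for a springboot jar might look like this (the base image and jar path are assumptions, not from the original article):

```dockerfile
# Hypothetical Dockerfile: wrap a built springboot jar into an image.
FROM openjdk:8-jre-alpine
COPY target/app.jar /app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/app.jar"]
```

After `docker build -t my-springboot-app .`, the resulting image name is what goes into the `image:` field of the pod yaml.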
9. View the log of a pod
kubectl logs <pod_name>
kubectl logs -f <pod_name>   # follow the log in real time, similar to tail -f on a log file
So far we have mainly walked through hands-on operations with many container api commands; but what is the architectural design idea behind them?
10. The basic architecture of kubernetes
To introduce it in simple words: kubernetes is a container cluster management system. With kubernetes you can achieve automated cluster deployment, automated scaling, automated maintenance, and so on for containers.
A kubernetes cluster is composed of a master responsible for managing the nodes; each managed node may be a virtual machine or a physical machine. Various pods run on each node, and a pod usually runs one or more docker containers.
The basic architecture diagram is as follows:
Master components
The main role of kubectl is to send commands to kubernetes; through the apiserver, kubernetes's internal components are invoked to deploy to and control each node.
The main job of the ApiServer is CRUD on each node, that is, operating on node data. Here we must also mention etcd: etcd mainly stores the data of the nodes, and every instruction sent from kubectl passes through the apiserver and is first stored in etcd.
The role of the Controller manager is mainly to monitor the state of each container. It watches the resource data of the cluster through the List-Watch interface provided by the Api Server, and adjusts the state of resources accordingly.
For example:
In a system with hundreds of microservices, suppose a node crashes; the Controller manager will automatically repair and fail over the workloads on that node, which greatly relieves the pressure on operations staff.
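The self-healing behavior described above can be sketched as a simple reconcile loop. This is a toy illustration in Python of the general idea (compare desired state with observed state and compute corrective actions), not the real controller-manager code:

```python
# Toy sketch of a controller's reconcile step: given the desired replica
# count and the list of pods observed to be running, compute the actions
# needed to converge the observed state toward the desired state.

def reconcile(desired_replicas, running_pods):
    """Return a list of ("create"/"delete", pod_name) actions."""
    diff = desired_replicas - len(running_pods)
    if diff > 0:
        # Too few pods (e.g. one crashed): create replacements.
        return [("create", f"pod-{i}") for i in range(diff)]
    if diff < 0:
        # Too many pods: delete the surplus from the end of the list.
        return [("delete", name) for name in running_pods[diff:]]
    return []  # observed state already matches desired state

# One pod of three crashed: the controller creates one replacement.
print(reconcile(3, ["pod-a", "pod-b"]))  # [('create', 'pod-0')]
```

A real controller runs this kind of step continuously against events streamed from the Api Server's List-Watch interface, so failures are repaired without human intervention.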
The Scheduler mainly performs scheduling actions: it deploys containers onto a specified machine and maps pods onto node resources, and when the number of nodes increases, the Scheduler allocates resources automatically. Plainly speaking, the Scheduler is the boss: it decides on which node each pod should be placed and commands the kubelet to deploy the containers.
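For instance, one simple way to influence the Scheduler's placement decision is a nodeSelector in the pod spec. A sketch (the disktype label is an assumed example):

```yaml
# Hypothetical pod that the Scheduler may only place on labeled nodes.
apiVersion: v1
kind: Pod
metadata:
  name: pinned-pod
spec:
  nodeSelector:
    disktype: ssd     # only nodes labeled disktype=ssd are candidates
  containers:
    - name: web
      image: nginx
```

Without such constraints, the Scheduler picks a node on its own based on available resources.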
Node components
A Node is a k8s worker node, typically a physical machine or a virtual machine. Each Node runs three services: the Container runtime, kubelet, and kube-proxy.
The kubelet is mainly responsible for receiving and executing the master's commands, and it also maintains the life cycle of the containers.
The main role of kube-proxy is load balancing and handling traffic forwarding.
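As an illustration, kube-proxy's load balancing is typically driven by a Service object, which spreads traffic across all pods matching a label selector. A hypothetical sketch (the names are assumptions):

```yaml
# Hypothetical Service: traffic to port 80 of this Service is forwarded
# by kube-proxy, balanced across all pods labeled app=web.
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  selector:
    app: web
  ports:
    - port: 80        # port exposed by the Service
      targetPort: 80  # port the pods listen on
```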
The Container runtime is responsible for image management and for actually running the pods and containers.
From the system architecture, technical concepts, and design of K8s, we can see two core design ideas of the K8s system: one is fault tolerance, the other is easy scalability. Fault tolerance is in fact the basis of the stability and security of the K8s system, while easy scalability ensures that K8s is friendly to change and can quickly iterate on new features.