Container Cloud Solution Deployment Documentation

Overview

Virtual Machines and Containers

Virtual machine and container technologies both abstract and encapsulate the server computing environment, packaging an application into a software entity that runs on the server platform. Multiple virtual machines or containers can be deployed on a single physical server, allowing workloads from many servers to be consolidated onto fewer physical machines. Container technology, however, is not intended to completely replace traditional virtual machines; there are essential differences between the two:

  • Virtual machine technology is system-level virtualization: it essentially simulates a complete computer, including virtual CPUs, storage, and various virtual hardware devices, which means each virtual machine also carries its own complete guest operating system;

  • Container technology is application-level virtualization: it uses a sandbox mechanism to virtualize the application's running environment. Each running container instance gets its own independent namespaces (that is, isolated views of resources) for processes, the file system, the network, IPC, the hostname, and so on;

A virtual machine runs an entire operating system on a virtualized hardware platform, which provides a complete operating environment for applications but also consumes more host hardware resources. A containerized application such as one running under Docker, by contrast, uses the host's own kernel and resources; the host system only provides a certain degree of isolation and restriction on resource usage. Compared with virtual machines, containers therefore have higher resource-utilization efficiency, smaller instance sizes, and faster creation and migration, so more container instances can be deployed on the same hardware. The trade-off is that the isolation of computing resources is weaker than with virtual machines: instances running on the same host are more likely to affect each other, and a crash of the host operating system directly affects all container applications running on that host.

In real production scenarios, many containers spread across multiple hosts must work together to support various types of workloads, and reliability, scalability, and performance problems must be solved as the business grows. This drives the evolution of networking, storage, clustering, and container technology toward a container cloud.

Building a container cloud faces many challenges. First of all, the networking capability of the Docker engine is relatively weak: it works well within a single host, but once a Docker cluster is extended across multiple hosts, communication between containers on different hosts and the associated network management and maintenance become a huge challenge, even unsustainable. Fortunately, several projects address this problem, such as Calico, Flannel, and Weave, as well as the overlay network support Docker itself provides since version 1.9. The core idea of all of them is to connect containers on different hosts to the same virtual network. Here we choose Flannel from CoreOS as the cluster network solution.

In a production environment the number of containers required by business applications is huge and the dependencies between them are complex. Relying entirely on manual recording and configuration of this information, while also meeting operational requirements such as deployment, operation, monitoring, migration, and high availability of the cluster, is simply not feasible, so the need for orchestration and deployment tools was born. Tools that can solve these problems include Swarm, Fleet, Kubernetes, and Mesos. After comprehensive consideration, Kubernetes from Google was finally selected: a full-featured container management platform with elastic scaling, vertical expansion, gray-scale (rolling) upgrades, service discovery, service orchestration, error recovery, and performance monitoring, which can easily meet the operation and maintenance needs of many business applications.

In a container cloud, the container is the basic unit of resource partitioning and scheduling; it encapsulates the entire software runtime environment and provides developers and system administrators with a platform for building, publishing, and running distributed applications. The following diagram summarizes the technology stack of the container ecosystem:

Container Ecological Technology Stack

Deployment planning

Referring to the figure above, take four machines as an example: one host is the master and the other three are nodes. All hosts run CentOS 7, and each host is named according to its role:

IP            Hostname
10.1.10.101   master
10.1.10.102   node1
10.1.10.103   node2
10.1.10.104   node3

The configuration that needs to be done on each host is:

  • Turn off the firewall and disable SELinux (see the example commands after this list)
  • Configure the yum repositories: edit the file /etc/yum.repos.d/CentOS-Base.repo and make the following modifications:
[base]
name=CentOS-$releasever - Base - 163.com
baseurl=http://mirrors.163.com/centos/7.4.1708/os/x86_64/
gpgcheck=0
enabled=1

[extras]
name=CentOS-$releasever - Extras - 163.com
baseurl=http://mirrors.163.com/centos/7.4.1708/extras/x86_64/
gpgcheck=0
enabled=1

[k8s-1.8]
name=K8S 1.8
baseurl=http://onwalk.net/repo/
gpgcheck=0
enabled=1
  • Add name resolution for the four hosts by modifying /etc/hosts and appending the following entries:
10.1.10.101  master
10.1.10.102  node1
10.1.10.103  node2
10.1.10.104  node3
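For the firewall and SELinux item above, a minimal sketch of the commands on CentOS 7 (assuming firewalld is the active firewall service):

# stop and permanently disable the firewall
systemctl stop firewalld
systemctl disable firewalld
# switch SELinux to permissive immediately and disable it across reboots
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config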

Docker cluster deployment and configuration

First of all, we need cross-host container networking, so that containers on different physical machines can reach each other. Of the four machines, etcd is deployed on the master host, while flannel and docker are installed on the three node hosts:

Host      Installed software
master    etcd
node1     flannel, docker
node2     flannel, docker
node3     flannel, docker
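A sketch of the installation commands, assuming the packages are available from the yum repositories configured earlier:

# on the master host
yum install -y etcd
# on node1, node2 and node3
yum install -y flannel docker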

Simply put, flannel does three things:

  1. When data is sent from the source container, it is forwarded through the docker0 virtual network interface of its host to the flannel0 virtual interface. flannel0 is a point-to-point virtual interface with the flanneld service listening on its other end; flannel achieves this redirection by modifying the node's routing table.
  2. The flanneld service on the source host encapsulates the original packet in UDP and, according to its own routing table, delivers it to the flanneld service on the destination node. When the data arrives it is unpacked, enters the flannel0 virtual interface of the destination node, is forwarded on to the destination host's docker0 interface, and is finally routed by docker0 to the target container, just as in local container-to-container communication.
  3. To keep the addresses allocated to containers on different nodes from conflicting, flannel allocates an available IP address segment to each node through etcd and then modifies Docker's startup parameters: the parameter "--bip=X.X.X.X/X" limits the IP range from which that node's containers obtain their addresses.

As mentioned above, you need to do the following:

  • Start the Etcd background process
  • Add Flannel configuration in Etcd
  • Start the Flanneld daemon
  • Configure Docker's startup parameters
  • Restart the Docker daemon

The startup sequence of the three components is: etcd -> flanneld -> docker.

Master host configuration

  1. Modify the configuration file /etc/etcd/etcd.conf as follows (where the IP is the master host's IP):
ETCD_LISTEN_CLIENT_URLS="http://10.1.10.101:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://10.1.10.101:2379"
  • ETCD_LISTEN_CLIENT_URLS: the address etcd listens on for client traffic; clients connect here to interact with etcd
  • ETCD_ADVERTISE_CLIENT_URLS: the client URL that etcd advertises, i.e. the URL of the service
  2. Restart the etcd service and confirm (e.g. with systemctl status etcd) that it is running:
systemctl restart etcd
  3. After confirming that the configuration is correct, add the initial flannel settings to etcd:
etcdctl -C http://10.1.10.101:2379 set /flannel/network/config '{"Network": "192.168.0.0/16"}'
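To double-check that the key was written, read it back; the output should echo the JSON set above:

etcdctl -C http://10.1.10.101:2379 get /flannel/network/config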

Configuration of the node host

  1. Modify the flannel configuration file /etc/sysconfig/flanneld as follows:
FLANNEL_ETCD_ENDPOINTS="http://master:2379"
FLANNEL_OPTIONS='-etcd-prefix="/flannel/network"'
  2. Modify /lib/systemd/system/docker.service and add one line to the [Service] section so that Docker reads the network parameters that flanneld provides:
EnvironmentFile=-/run/flannel/docker.env
  3. Add the Docker daemon startup parameters that the kubelet service will later rely on (a systemd cgroup driver and the local unix socket) to the OPTIONS item in the configuration file /etc/sysconfig/docker:
--exec-opt native.cgroupdriver=systemd -H unix:///var/run/docker.sock
  • Optional parameter: --insecure-registry=registry.localhost to allow the use of a custom (insecure) Docker registry
  4. Restart the services in order:
systemctl daemon-reload
systemctl restart flanneld
systemctl restart docker
  5. Repeat the same steps on the other node hosts. Finally, run a Docker container instance on each node and check that the containers can reach one another across hosts (a sketch of such a check follows this list). If all goes well, the cross-host container cluster network is complete.
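A minimal connectivity check, assuming the busybox image can be pulled on each node; the placeholder IP is whatever address the container on the other node reports:

# on each node, start a throw-away test container
docker run -it --rm busybox sh
# inside the container: note the flannel-assigned address on eth0
ip addr show eth0
# inside a container on one node: ping the address of a container on another node
ping -c 3 <container-IP-on-another-node>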

The overall architecture of K8s

The overall architecture of Kubernetes is shown in the figure below; the main components are the apiserver, scheduler, controller-manager, kubelet, and proxy.

Overall architecture diagram of Kubernetes

  • etcd is responsible for cluster coordination and service discovery
  • The master side runs three components:
  • apiserver: The entry point of the Kubernetes system. It encapsulates the create, delete, update, and query operations on core objects and exposes them to external clients and internal components as a RESTful interface. The REST objects it maintains are persisted to etcd (a distributed, strongly consistent key/value store).

  • scheduler: Responsible for resource scheduling of the cluster and assigning machines to newly created pods.

  • controller-manager: Responsible for running the various controllers, of which there are currently two categories:

    • endpoint-controller: Periodically associates services and pods (the association is maintained by endpoint objects) to ensure that the service-to-pod mapping is always up to date.
    • replication-controller: Periodically reconciles each replicationController with its pods to ensure that the number of replicas defined by the replicationController always matches the actual number of running pods.
  • The minion side runs two components:
  • kubelet: Responsible for managing and controlling the Docker containers on its node, e.g. starting and stopping them and monitoring their running state. It periodically fetches the pods assigned to its machine and starts or stops the corresponding containers based on the pod specifications; it also serves HTTP requests from the apiserver and reports the running status of its pods.
  • proxy: Responsible for providing the service proxy on each node. It periodically fetches all services and sets up proxies based on the service information; when a client pod accesses another service, the request is forwarded through the local proxy.

k8s deployment and configuration

After the cross-host cluster network has been prepared, install kubernetes-master on the master host and kubernetes-node on the remaining three node hosts:

Host      Installed software
master    kubernetes-master, kubernetes-common, kubernetes-client
node1     kubernetes-node, kubernetes-common
node2     kubernetes-node, kubernetes-common
node3     kubernetes-node, kubernetes-common
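A sketch of the installation commands, assuming the package names above are provided by the k8s-1.8 yum repository configured earlier:

# on the master host
yum install -y kubernetes-master kubernetes-common kubernetes-client
# on node1, node2 and node3
yum install -y kubernetes-node kubernetes-common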

Cluster Architecture

  • master: runs the apiserver, controllerManager, and scheduler services
  • node1 : runs the kubelet and proxy services
  • node2 : runs the kubelet and proxy services
  • node3 : runs the kubelet and proxy services

Security authentication for Kubernetes

The process for generating the CA-signed certificates used for mutual (two-way) TLS authentication is as follows:

  • Create CA root certificate
  • Generate a certificate for kube-apiserver, sign it with a CA certificate, and set startup parameters
  • Generate a certificate for each host in the k8s cluster, sign it with the CA certificate, and set the service startup parameters on the corresponding node
  1. Generate the CA private key and root certificate:
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ca.key -out ca.crt -subj "/CN=master"  
  • The CommonName of the CA needs to be the same as the hostname of the server running kube-apiserver.
  2. Prepare the certificate configuration for the apiserver's server certificate

Create a certificate configuration file /etc/kubernetes/openssl.cnf and list in alt_names every domain name and IP through which the service will be accessed, because the SSL/TLS protocol requires the server address to match the subjectAltName information in the CA-signed server certificate:

[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
DNS.5 = localhost
DNS.6 = master
IP.1 = 127.0.0.1
IP.2 = 192.168.1.1
IP.3 = 10.1.10.101

The last two IPs are the first available address in the clusterIP range and the IP of the master machine. Kubernetes automatically creates a service and the corresponding endpoint that expose the apiserver to containers in the cluster; by default this service uses the first available clusterIP as its virtual IP, lives in the default namespace, is named kubernetes, and listens on port 443. DNS.1 through DNS.4 in openssl.cnf are the domain names used when this service is accessed from inside a container.
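Once the cluster is up and running (after the remaining steps below), the existence of this built-in service can be confirmed with:

kubectl --kubeconfig=/etc/kubernetes/kubeconfig get service kubernetes --namespace=default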

  3. Create the private key and signed certificate for the apiserver:
mkdir -pv /etc/kubernetes/ca/
cd /etc/kubernetes/ca/
openssl genrsa -out server.key 2048
openssl req -new -key server.key -out server.csr -subj "/CN=master" -config ../openssl.cnf
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out server.crt -days 9000 -extensions v3_req -extfile ../openssl.cnf
  • Because the controllerManager, scheduler, and apiserver run on the same host, the CommonName is master
  • Verify the certificate: openssl verify -CAfile ca.crt server.crt
  4. Create the client certificates used by the various components to access the apiserver:
for NAME in client node1 node2 node3  
do
    openssl genrsa -out $NAME.key 2048
    openssl req -new -key $NAME.key -out $NAME.csr -subj "/CN=$NAME"
    openssl x509 -req -days 9000 -in $NAME.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out $NAME.crt 
done
  • Note that the CN (CommonName) values must correspond to names resolvable within the k8s cluster (client, node1, node2, node3).
  • Verify the certificates: openssl verify -CAfile ca.crt *.crt
  5. Distribute the certificates across the cluster
  • Create the /etc/kubernetes/kubeconfig template; the startup parameters of every client component service must reference it with "--kubeconfig=/etc/kubernetes/kubeconfig":
kubectl config set-cluster k8s-cluster --server=https://10.1.10.101:6443 --certificate-authority=/etc/kubernetes/ca/ca.crt 
kubectl config set-credentials default-admin --certificate-authority=/etc/kubernetes/ca/ca.crt --client-key=/etc/kubernetes/ca/client.key --client-certificate=/etc/kubernetes/ca/client.crt
kubectl config set-context default-system --cluster=k8s-cluster --user=default-admin
kubectl config use-context default-system
kubectl config view > /etc/kubernetes/kubeconfig

The content of the generated configuration file is as follows:

apiVersion: v1
clusters:
- cluster:
    certificate-authority: /etc/kubernetes/ca/ca.crt
    server: https://10.1.10.101:6443
  name: k8s-cluster
contexts:
- context:
    cluster: k8s-cluster
    user: default-admin
  name: default-system
current-context: default-system
kind: Config
preferences: {}
users:
- name: default-admin
  user:
    client-certificate: /etc/kubernetes/ca/client.crt
    client-key: /etc/kubernetes/ca/client.key
  • Finally, the certificates, key files, and corresponding configuration files need to be distributed to each host:
/etc/kubernetes/kubeconfig : shared configuration file for all hosts
ca.crt                     : CA root certificate
client.key client.crt      : used by the controllerManager and scheduler services and the kubectl tool running on the k8s-master host
node1.key node1.crt        : used by the kubelet and proxy services running on the node1 host
node2.key node2.crt        : used by the kubelet and proxy services running on the node2 host
node3.key node3.crt        : used by the kubelet and proxy services running on the node3 host
  • Because the controllerManager and scheduler run on the master host, client.key and client.crt are used there directly:
/etc/kubernetes/kubeconfig -> master host: /etc/kubernetes/kubeconfig
ca.crt     ->              master host: /etc/kubernetes/ca/ca.crt
client.crt ->              master host: /etc/kubernetes/ca/client.crt
client.key ->              master host: /etc/kubernetes/ca/client.key
  • Each node host needs its corresponding certificate copied into the right directory and renamed. Taking the node1 host as an example (the rest are similar):
/etc/kubernetes/kubeconfig -> node1 host: /etc/kubernetes/kubeconfig
ca.crt    ->               node1 host: /etc/kubernetes/ca/ca.crt
node1.crt ->               node1 host: /etc/kubernetes/ca/client.crt
node1.key ->               node1 host: /etc/kubernetes/ca/client.key
  • The certificates end up in the following locations on every host, which must be consistent with the kubeconfig configuration:
/etc/kubernetes/kubeconfig
/etc/kubernetes/ca/ca.crt
/etc/kubernetes/ca/client.crt
/etc/kubernetes/ca/client.key
  • Because the root certificate of the self-built CA is not trusted by the k8s components by default, every host needs to add the CA root certificate to its trust list:
  1. Install the ca-certificates package: yum install ca-certificates
  2. Enable the dynamic CA configuration feature: update-ca-trust force-enable
  3. Add the new trusted root certificate: cp /etc/kubernetes/ca/ca.crt /etc/pki/ca-trust/source/anchors/
  4. Update the trust list: update-ca-trust extract
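As an optional sanity check (the path below is the CentOS 7 default for the extracted PEM bundle), the apiserver certificate should now verify against the system trust store:

openssl verify -CAfile /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem /etc/kubernetes/ca/server.crt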

Public configuration of k8s

The configuration common to all components is recorded in /etc/kubernetes/config and is used when the following services start:

  • kube-apiserver
  • kube-controller-manager
  • kube-scheduler
  • kubelet
  • kube-proxy

The contents of /etc/kubernetes/config are as follows:

KUBE_MASTER="--master=https://10.1.10.101:6443"
KUBE_CONFIG="--kubeconfig=/etc/kubernetes/kubeconfig"
KUBE_COMMON_ARGS="--logtostderr=true --v=1" 

k8s-master configuration

  • Modify the kube-apiserver service configuration file /etc/kubernetes/apiserver:
KUBE_ETCD_SERVERS="--storage-backend=etcd3 --etcd-servers=http://10.1.10.101:2379"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
KUBE_API_ARGS="--service-node-port-range=80-65535 --service-cluster-ip-range=192.168.1.0/16 --bind-address=0.0.0.0 --insecure-port=0 --secure-port=6443 --client-ca-file=/etc/kubernetes/ca/ca.crt --tls-cert-file=/etc/kubernetes/ca/server.crt --tls-private-key-file=/etc/kubernetes/ca/server.key"
  • The SSL/TLS listening port here is 6443; specifying a port smaller than 1024 may cause the apiserver to fail to start.

  • On the master machine, port 8080 is open by default and serves unencrypted HTTP; it is turned off here with the --insecure-port=0 parameter

  • Modify the kube-controller-manager service configuration file /etc/kubernetes/controller-manager

KUBE_CONTROLLER_MANAGER_ARGS="--cluster-signing-cert-file=/etc/kubernetes/ca/server.crt --cluster-signing-key-file=/etc/kubernetes/ca/server.key --root-ca-file=/etc/kubernetes/ca/ca.crt --kubeconfig=/etc/kubernetes/kubeconfig"
  • Modify the kube-scheduler service configuration file /etc/kubernetes/scheduler
KUBE_SCHEDULER_ARGS="--kubeconfig=/etc/kubernetes/kubeconfig"
  • Restart the master services after completing the configuration:
systemctl restart kube-apiserver
systemctl restart kube-controller-manager
systemctl restart kube-scheduler
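A quick health check after the restart; the componentstatuses resource should report the scheduler, controller-manager, and etcd as Healthy:

kubectl --kubeconfig=/etc/kubernetes/kubeconfig get componentstatuses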

k8s-node configuration

Take the node1 host as an example; the configuration of the other hosts is similar.

  • Modify the kubelet service configuration file /etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_HOSTNAME="--hostname-override=node1"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=docker.io/tianyebj/pod-infrastructure"
KUBELET_ARGS="--kubeconfig=/etc/kubernetes/kubeconfig --cgroup-driver=systemd --fail-swap-on=false --runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice"
  • Modify the kube-proxy service configuration file /etc/kubernetes/proxy
KUBE_PROXY_ARG="--kubeconfig=/etc/kubernetes/kubeconfig"
  • After completing the configuration, restart the node service:
systemctl restart kubelet
systemctl restart kube-proxy
  • Finally, verify the running status of the nodes:
kubectl --kubeconfig=/etc/kubernetes/kubeconfig get nodes
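For more detail on an individual node (conditions, capacity, and the pods scheduled on it), the node can be described:

kubectl --kubeconfig=/etc/kubernetes/kubeconfig describe node node1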

Operation and management of k8s

  1. Create the nginx ReplicationController configuration nginx-rc-v1.yaml:
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-v1
  labels:
    version: v1
spec:
  replicas: 3
  selector:
    app: nginx
    version: v1
  template:
    metadata:
      labels:
        app: nginx
        version: v1 
    spec:
      containers:
        - name: nginx
          image: nginx:v1 
          ports:
            - containerPort: 80
  2. Create the corresponding nginx service configuration nginx-srv.yaml:
apiVersion: v1
kind: Service
metadata:
  name: nginx 
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      nodePort: 80 
  selector:
      app: nginx
  3. Execute the following commands to complete the deployment of the nginx instance:
kubectl --kubeconfig=/etc/kubernetes/kubeconfig create -f nginx-rc-v1.yaml 
kubectl --kubeconfig=/etc/kubernetes/kubeconfig create -f nginx-srv.yaml 
  4. Other operations for reference
  • Scale the Replication Controller to 4 replicas: kubectl --kubeconfig=/etc/kubernetes/kubeconfig scale rc nginx-v1 --replicas=4
  • Create a v2 Replication Controller configuration file nginx-rc-v2.yaml:
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-v2
  labels:
    version: v2
spec:
  replicas: 4
  selector:
    app: nginx
    version: v2
  template:
    metadata:
      labels:
        app: nginx
        version: v2 
    spec:
      containers:
        - name: nginx
          image: nginx:v2 
          ports:
            - containerPort: 80
  • Upgrade the nginx pods from v1 to v2 with a rolling (gray-scale) update: kubectl --kubeconfig=/etc/kubernetes/kubeconfig rolling-update nginx-v1 -f nginx-rc-v2.yaml --update-period=10s
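At any point during these operations, the state of the controllers, pods, and service can be checked with commands such as:

kubectl --kubeconfig=/etc/kubernetes/kubeconfig get rc
kubectl --kubeconfig=/etc/kubernetes/kubeconfig get pods -o wide
kubectl --kubeconfig=/etc/kubernetes/kubeconfig get svc nginx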

Reference for some Kubernetes operational concepts

  • Pods

A Pod is the basic unit of operation in Kubernetes. One or more related containers form a Pod, and usually the containers in a Pod belong to the same application. The containers of a Pod run on the same minion (host), are managed as a single unit, and share the same volumes, network namespace, IP, and port space.

  • Services

A Service is also a basic unit of operation in Kubernetes: an abstraction over a real application service. Each service is backed by many containers; through the proxy's port and the service's selector, requests to the service are forwarded to the backend containers that provide it. The service thus presents a single access interface, and the outside world does not need to know how the backend works, which greatly simplifies extending or maintaining the backend.

  • Replication Controllers

The Replication Controller ensures that the specified number of pod replicas are running in the Kubernetes cluster at all times. If the number of replicas is lower than specified, the Replication Controller starts new containers; if it is higher, it kills the excess ones so that the number stays unchanged. A Replication Controller creates pods from a predefined pod template; once the pods are created, the template has no further relationship with them, so you can modify the template without affecting existing pods, or update the pods created by the Replication Controller directly. The Replication Controller is associated with its pods through a label selector, and pods can be detached from it (and deleted) by modifying their labels. The Replication Controller is mainly used for:

  1. Rescheduling: as mentioned above, the Replication Controller keeps the specified number of pod replicas running, even in the event of a node failure.
  2. Scaling: scale the running pods up or down horizontally by modifying the replica count of the Replication Controller.
  3. Rolling updates: the Replication Controller is designed so that pods can be replaced one by one to roll out updates to a service.
  4. Multiple release tracks: if multiple releases of a service need to run in the system at the same time, Replication Controllers use labels to distinguish the release tracks.
  • Labels

Labels are key/value pairs used to identify Pods, Services, and Replication Controllers. Each of these objects can carry multiple labels, but each label key can only have one value on a given object. Labels are the basis on which Services and Replication Controllers operate: to forward a request for a Service to the backend containers that provide it, the correct containers are selected by their labels; likewise, a Replication Controller uses labels to manage the set of containers created from its pod template, which makes managing any number of containers easy and convenient.
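As a small illustration, reusing the labels from the nginx example above, the pods selected by a set of labels can be listed directly:

kubectl --kubeconfig=/etc/kubernetes/kubeconfig get pods -l app=nginx,version=v1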
