kubeadm is the official Kubernetes tool for quickly installing a Kubernetes cluster. It is updated in step with each Kubernetes release, and some practical aspects of cluster configuration change along with it, so experimenting with kubeadm is a good way to learn the current official best practices for configuring a Kubernetes cluster.
OS: Ubuntu 16.04+, Debian 9, CentOS 7, RHEL 7, Fedora 25/26 (best-effort), and others
Memory 2GB+, 2+ CPU cores
Full network connectivity between all nodes in the cluster
A unique hostname, MAC address, and product_uuid on every node
Check the MAC address: use ip link or ifconfig -a
Check the product_uuid: cat /sys/class/dmi/id/product_uuid
Swap disabled, which is required for the kubelet to work properly
1. Preparation
1.1 System Configuration
Add the mapping between hostnames and IPs on each node:
[root@k8s-master ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.201 k8s-master
192.168.1.202 k8s-node1
192.168.1.203 k8s-node2
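The entries above can be appended idempotently, so re-running the setup does not produce duplicates. The sketch below runs against a temporary copy for safety; on a real node you would point HOSTS_FILE at /etc/hosts directly:

```shell
# Append each mapping only if the hostname is not already present.
# A temp copy is used here for safe illustration; use /etc/hosts on a real node.
HOSTS_FILE="$(mktemp)"
cat /etc/hosts > "$HOSTS_FILE" 2>/dev/null || true

while read -r ip name; do
  grep -qw "$name" "$HOSTS_FILE" || echo "$ip $name" >> "$HOSTS_FILE"
done <<'EOF'
192.168.1.201 k8s-master
192.168.1.202 k8s-node1
192.168.1.203 k8s-node2
EOF

grep k8s- "$HOSTS_FILE"
```

The grep guard makes the snippet safe to run repeatedly.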
If the firewall is enabled on the hosts, you need to open the ports required by the various Kubernetes components; see the "Check required ports" section of Installing kubeadm. For simplicity, disable the firewall on every node here:
systemctl stop firewalld
systemctl disable firewalld
Disable SELinux:
sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0
or edit the file directly:
vi /etc/selinux/config
SELINUX=disabled
Disable swap:
swapoff -a    # temporary
vim /etc/fstab    # permanent: comment out the swap line
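Instead of editing /etc/fstab by hand, the swap entry can be commented out with sed. The sketch below demonstrates this on a sample copy (the device names are illustrative); on a real node run the same sed against /etc/fstab itself:

```shell
# Comment out any swap entry so swap stays disabled after a reboot.
# Shown on a sample file for illustration; target /etc/fstab on a real node.
FSTAB="$(mktemp)"
cat > "$FSTAB" <<'EOF'
/dev/mapper/centos-root /       xfs   defaults 0 0
/dev/mapper/centos-swap swap    swap  defaults 0 0
EOF

sed -ri 's/^([^#].*[[:space:]]swap[[:space:]].*)$/#\1/' "$FSTAB"
cat "$FSTAB"
```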
Synchronize the time:
yum install ntpdate -y
ntpdate ntp.api.bz
Create the file /etc/sysctl.d/k8s.conf and add the following:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
Run the following for the changes to take effect:
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
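To confirm the settings took effect, the values can be read back from /proc/sys. The helper below is a sketch (check_sysctl is a hypothetical name, not part of any tool); it takes a root directory as a parameter so it can be exercised against a mock tree, and on a real node you would call it with /proc/sys:

```shell
# Verify the three kernel settings by reading them back from a /proc/sys tree.
check_sysctl() {
  local root="$1" key val
  for key in net/bridge/bridge-nf-call-iptables \
             net/bridge/bridge-nf-call-ip6tables \
             net/ipv4/ip_forward; do
    val=$(cat "$root/$key" 2>/dev/null) || { echo "missing: $key"; return 1; }
    [ "$val" = "1" ] || { echo "not enabled: $key=$val"; return 1; }
  done
  echo "all sysctl prerequisites satisfied"
}

# On a real node: check_sysctl /proc/sys
```

Note that the net/bridge entries only exist after the br_netfilter module has been loaded.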
1.2 Enabling IPVS as a prerequisite for kube-proxy
Since IPVS has been merged into the kernel mainline, enabling IPVS mode in kube-proxy requires the corresponding kernel modules to be loaded first. Execute the following script on all Kubernetes nodes, including node1 and node2:
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
The script above creates the file /etc/sysconfig/modules/ipvs.modules, which ensures the required modules are loaded automatically after a node reboot. Use lsmod | grep -e ip_vs -e nf_conntrack_ipv4 to check whether the required kernel modules have been loaded correctly.
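The lsmod check can be made explicit with a small helper that verifies every required module by name. This is a sketch (check_ipvs_modules is a hypothetical name); it reads the module listing on stdin so it can be tested with canned input, and on a real node you would pipe in lsmod:

```shell
# Check that every required IPVS module appears in an lsmod-style listing.
check_ipvs_modules() {
  local missing=0 m loaded
  loaded=$(awk '{print $1}')   # first column of lsmod output is the module name
  for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do
    echo "$loaded" | grep -qx "$m" || { echo "missing: $m"; missing=1; }
  done
  [ "$missing" -eq 0 ] && echo "all ipvs modules loaded"
}

# On a real node: lsmod | check_ipvs_modules
```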
Next, make sure the ipset package is installed on every node: yum install ipset
To make it easier to inspect the IPVS proxy rules, it is also worth installing the management tool ipvsadm: yum install ipvsadm
If these prerequisites are not satisfied, kube-proxy will fall back to iptables mode even when it is configured to use IPVS. Once the cluster is up, you can confirm the active mode with curl localhost:10249/proxyMode.
1.3 Install Docker
Since version 1.6, Kubernetes has used the CRI (Container Runtime Interface) to talk to container runtimes. The default runtime is still Docker, via the kubelet's built-in dockershim CRI implementation.
Add the yum repository for Docker:
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
View the available Docker versions:
yum list docker-ce.x86_64 --showduplicates |sort -r
[root@go-docker ~]# yum list docker-ce.x86_64 --showduplicates |sort -r
* updates: mirrors.aliyun.com
Loading mirror speeds from cached hostfile
Loaded plugins: fastestmirror, langpacks
* extras: mirrors.aliyun.com
docker-ce.x86_64 3:19.03.5-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.4-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.3-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.2-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.1-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.0-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.9-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.8-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.7-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.6-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.5-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.4-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.3-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.2-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.1-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.0-3.el7 docker-ce-stable
docker-ce.x86_64 18.06.3.ce-3.el7 docker-ce-stable
docker-ce.x86_64 18.06.2.ce-3.el7 docker-ce-stable
docker-ce.x86_64 18.06.1.ce-3.el7 docker-ce-stable
docker-ce.x86_64 18.06.0.ce-3.el7 docker-ce-stable
docker-ce.x86_64 18.03.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 18.03.0.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.12.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.12.0.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.09.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.09.0.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.06.2.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.06.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.06.0.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.03.3.ce-1.el7 docker-ce-stable
docker-ce.x86_64 17.03.2.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.03.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.03.0.ce-1.el7.centos docker-ce-stable
* base: mirrors.aliyun.com
Available Packages
kubeadm in Kubernetes 1.16 lists the validated Docker versions as 1.13.1, 17.03, 17.06, 17.09, 18.06, and 18.09. Here we install Docker 18.09.7 on each node.
yum makecache fast
yum install -y --setopt=obsoletes=0 \
  docker-ce-18.09.7-3.el7
systemctl start docker
systemctl enable docker
Confirm that the default policy of the FORWARD chain in the iptables filter table is ACCEPT:
iptables -nvL
[root@k8s-master ~]# iptables -nvL
Chain INPUT (policy ACCEPT 20 packets, 2866 bytes)
pkts bytes target prot opt in out source destination
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 DOCKER-USER all -- * * 0.0.0.0/0 0.0.0.0/0
0 0 DOCKER-ISOLATION-STAGE-1 all -- * * 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- * docker0 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
0 0 DOCKER all -- * docker0 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- docker0 !docker0 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- docker0 docker0 0.0.0.0/0 0.0.0.0/0
Chain OUTPUT (policy ACCEPT 19 packets, 2789 bytes)
pkts bytes target prot opt in out source destination
Chain DOCKER (1 references)
pkts bytes target prot opt in out source destination
Chain DOCKER-ISOLATION-STAGE-1 (1 references)
pkts bytes target prot opt in out source destination
0 0 DOCKER-ISOLATION-STAGE-2 all -- docker0 !docker0 0.0.0.0/0 0.0.0.0/0
0 0 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0
Chain DOCKER-ISOLATION-STAGE-2 (1 references)
pkts bytes target prot opt in out source destination
0 0 DROP all -- * docker0 0.0.0.0/0 0.0.0.0/0
0 0 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0
Chain DOCKER-USER (1 references)
pkts bytes target prot opt in out source destination
0 0 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0
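If the FORWARD policy shows DROP instead (newer Docker versions set it to DROP by default), cross-node pod traffic will be blocked; it can be restored with iptables -P FORWARD ACCEPT. As a sketch, the policy can be extracted from the listing with a small helper (forward_policy is a hypothetical name); it reads the listing on stdin so it can be tested with canned text:

```shell
# Extract the FORWARD chain policy from `iptables -nvL` output.
forward_policy() {
  awk '/^Chain FORWARD/ {print $4}'
}

# On a real node: iptables -nvL | forward_policy
# If it prints DROP, run: iptables -P FORWARD ACCEPT
```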
1.4 Modify the Docker cgroup driver to systemd
According to the CRI installation documentation, on Linux distributions that use systemd as the init system, using systemd as Docker's cgroup driver makes the node more stable under resource pressure. So here we change Docker's cgroup driver to systemd on every node.
Create or modify /etc/docker/daemon.json:
vim /etc/docker/daemon.json
{
"exec-opts": ["native.cgroupdriver=systemd"]
}
Restart Docker and confirm the driver:
systemctl restart docker
docker info | grep Cgroup
Cgroup Driver: systemd