CentOS installation and deployment of Kubernetes (k8s) using the kubeadm method


Machine address:

192.168.0.35 k8s-master
192.168.0.39 k8s-node1
192.168.0.116 k8s-node2

1. Modify the system configuration

Modify the name of each machine

 hostnamectl set-hostname k8s-master
 hostnamectl set-hostname k8s-node1
 hostnamectl set-hostname k8s-node2

Turn off firewall and selinux

 systemctl stop firewalld && systemctl disable firewalld

Temporarily disable selinux:

setenforce 0

Disable permanently:

vim /etc/selinux/config

Set SELINUX to disabled or permissive and reboot for the change to take effect
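The permanent change can also be made non-interactively. This is a sketch using sed, assuming the config file contains the standard SELINUX= line; set_selinux is a hypothetical helper name, not a system tool.

```shell
# set_selinux: hypothetical helper that rewrites the SELINUX= line in a
# selinux config file; pass the target mode and the file path.
set_selinux() {
  mode=$1; file=$2
  sed -i "s/^SELINUX=.*/SELINUX=$mode/" "$file"
}
# On a real host (takes effect after reboot):
#   set_selinux disabled /etc/selinux/config
```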
Configure local parsing

cat >> /etc/hosts <<EOF
192.168.0.35 k8s-master
192.168.0.39 k8s-node1
192.168.0.116 k8s-node2
EOF

Ensure the uniqueness of each node MAC address and product_uuid

You can use the command ip link or ifconfig -a to get the MAC addresses of the network interfaces
You can verify the product_uuid with the command sudo cat /sys/class/dmi/id/product_uuid
Generally speaking, hardware devices have unique addresses, but the addresses of some virtual machines may be duplicated. Kubernetes uses these values to uniquely identify the nodes in the cluster. If these values are not unique on each node, the installation may fail.
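To check uniqueness across the cluster, collect the value from every node into one file and look for duplicates. A minimal sketch; check_unique is a hypothetical helper, and the collection commands are the ones shown above.

```shell
# check_unique: succeeds only if stdin contains no duplicate line.
check_unique() {
  dups=$(sort | uniq -d)
  [ -z "$dups" ]
}
# Example: gather one value per node into a file, then:
#   check_unique < all_product_uuids.txt && echo "all unique"
# Per-node values come from:
#   ip link show | awk '/link\/ether/ {print $2}'
#   sudo cat /sys/class/dmi/id/product_uuid
```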

Synchronize time
If there is no problem with the time on each machine and there is no deviation, you can skip this step.

# Install the ntp tool
yum install -y ntp

# Set the time zone
timedatectl set-timezone 'Asia/Shanghai'

# Synchronize the time
ntpdate ntp1.aliyun.com

Upgrade the kernel
Link: Linux (CentOS 7.6 -> 7.9) kernel upgrade
Update yum (you can skip this step if your version is already recent)

sudo yum update

2. Install the docker application

Each node needs to install the following

yum install -y yum-utils device-mapper-persistent-data lvm2 git

yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

yum -y install docker-ce 

Note:
If you use a mounted disk, you need to change the docker data directory

If docker has already been started, stop the docker service first, then move the data in the /var/lib/docker/ directory to the new directory: mv /var/lib/docker/ /data/docker/

To change docker's default data storage directory, add the following content to the /etc/docker/daemon.json file; if /etc/docker/daemon.json does not exist, create it

vim /etc/docker/daemon.json

{
    "data-root": "/data/docker"
}
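Before restarting docker, it is worth checking that the edited file is still valid JSON. A sketch, assuming python3 is available on the host; json_ok is a hypothetical helper name.

```shell
# json_ok: returns success only if the given file parses as JSON.
json_ok() {
  python3 -m json.tool "$1" >/dev/null 2>&1
}
# json_ok /etc/docker/daemon.json && echo "daemon.json is valid JSON"
```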

start docker

systemctl start docker   

systemctl enable docker    # enable docker to start on boot


Close the swap partition

Starting with Kubernetes 1.8, the system's swap must be turned off; if it is not, the kubelet will not start under the default configuration. Option 1: lift this restriction with the kubelet startup parameter --fail-swap-on=false. Option 2: turn off the system's swap.

Each node needs to be shut down

swapoff -a

vim /etc/fstab    # comment out the automatic mounting of swap
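The fstab edit can also be done non-interactively. A sketch, where comment_swap is a hypothetical helper that comments out any uncommented line containing a swap field:

```shell
# comment_swap: prefix '#' to uncommented lines that mention swap
# in the given fstab-format file.
comment_swap() {
  sed -i '/[[:space:]]swap[[:space:]]/ s/^[^#]/#&/' "$1"
}
# On a real host: comment_swap /etc/fstab
```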
Use free -m to check whether swap is off

3. Pull the docker image

During initialization, kubeadm pulls the images automatically, but by default it pulls from the official k8s registry, which is prone to failure, so pull the Aliyun images manually. Note that the versions of the pulled docker images must be consistent with the versions of kubelet and kubectl, so adjust the tags below to match the version you install. The Aliyun registry is specified directly in the initialization command further below.

vim dockerpull.sh

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.26.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.26.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.26.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.26.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.9.3
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.6-0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9
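The repeated pulls above can also be generated from a single list. A sketch; image_names is a hypothetical helper, and the tags should be adjusted to your kubeadm version.

```shell
# image_names: print the full Aliyun image names used in this guide.
image_names() {
  repo=registry.cn-hangzhou.aliyuncs.com/google_containers
  for img in kube-controller-manager:v1.26.2 kube-proxy:v1.26.2 \
             kube-apiserver:v1.26.2 kube-scheduler:v1.26.2 \
             coredns:1.9.3 etcd:3.5.6-0 pause:3.9; do
    echo "$repo/$img"
  done
}
# Pull everything: image_names | xargs -n 1 docker pull
```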

Execute the script to pull the image

bash dockerpull.sh 

Check to see if the images are pulled successfully

docker images  


4. cri-dockerd installation

Earlier versions of Kubernetes only worked with a specific container runtime: Docker Engine. Later, Kubernetes added support for other container runtimes, and the Container Runtime Interface (CRI) standard was created to enable interoperability between the orchestrator and many different container runtimes. Docker Engine does not implement the CRI interface, so the Kubernetes project created special code, dockershim, to help with the transition; the dockershim code was always a temporary solution (hence the name: shim). Kubernetes removed support for dockershim in v1.24, and since Docker Engine does not support the CRI specification natively, the two cannot be integrated directly. For this reason, Mirantis and Docker jointly created the cri-dockerd project to provide Docker Engine with a shim that supports the CRI specification, so that Kubernetes can control Docker through CRI.

Note: deploying the latest version XXXX does not use docker containers by default, so you need to download a plug-in.
All nodes need to download it:
https://github.com/Mirantis/cri-dockerd/releases

Download the cri-dockerd rpm package matching the desired version and your Linux version; you can also download it yourself and upload it to all hosts.

wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.2/cri-dockerd-0.3.2-3.el7.x86_64.rpm

Install:
rpm -ivh cri-dockerd-0.3.2-3.el7.x86_64.rpm


Modify the ExecStart line in the /usr/lib/systemd/system/cri-docker.service file so that cri-dockerd uses the pause image

vim /usr/lib/systemd/system/cri-docker.service


ExecStart=/usr/bin/cri-dockerd --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9

Reload systemd and start cri-docker immediately:

systemctl daemon-reload

systemctl enable --now cri-docker

Load ipvs related kernel modules

vim mod.sh

#!/bin/bash

modprobe ip_vs

modprobe ip_vs_rr

modprobe ip_vs_wrr

modprobe ip_vs_sh

modprobe nf_conntrack_ipv4   # on kernels 4.19+ this module is merged into nf_conntrack; use: modprobe nf_conntrack

modprobe br_netfilter


chmod +x mod.sh

bash   mod.sh


scp mod.sh  k8s-node1:/root/   

scp mod.sh  k8s-node2:/root/ 

Remember to run the script on each node after copying it

vim /etc/rc.local    # if the machine reboots, the modules must be reloaded, so have /etc/rc.local load them automatically at boot
Add this line: bash /root/mod.sh


chmod +x /etc/rc.local

Configure forwarding-related parameters, otherwise errors may occur

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness=0
EOF

Make the configuration take effect

sysctl --system  
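A quick way to confirm the file contains all three settings before relying on them; check_conf is a hypothetical helper name.

```shell
# check_conf: verify each required key appears in the given sysctl file.
check_conf() {
  for key in net.bridge.bridge-nf-call-ip6tables \
             net.bridge.bridge-nf-call-iptables \
             vm.swappiness; do
    grep -q "^$key" "$1" || { echo "missing: $key"; return 1; }
  done
  echo "all settings present"
}
# check_conf /etc/sysctl.d/k8s.conf
```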

Check if the loading is successful

lsmod | grep ip_vs


5. Install kubeadm and kubelet

All nodes need to install

  • Configure Aliyun's yum source
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Copy to two other machines

 scp /etc/yum.repos.d/kubernetes.repo   k8s-node1:/etc/yum.repos.d/kubernetes.repo

 scp /etc/yum.repos.d/kubernetes.repo   k8s-node2:/etc/yum.repos.d/kubernetes.repo
  • Install
yum makecache fast -y

yum install -y kubelet kubeadm kubectl ipvsadm

# Note: this installs the latest version by default

To install an older version, append the specific version number:

yum install -y kubelet-1.22.2-0.x86_64 kubeadm-1.22.2-0.x86_64 kubectl-1.22.2-0.x86_64 ipvsadm  

start kubelet

systemctl daemon-reload

systemctl enable kubelet && systemctl restart kubelet
systemctl status kubelet

(kubelet will report an error on each node at this stage; ignore it and proceed with initializing the master node.)

  • Configure master node initialization
  • kubeadm init --help shows the usage of each parameter of the command

Execute initialization on the master node (worker nodes do not need to execute it)

Parameter details:

apiserver-advertise-address specifies the IP of the apiserver, that is, the IP of the master node

image-repository sets the image repository to the domestic Alibaba Cloud repository

kubernetes-version sets the k8s version, consistent with the kubeadm version installed earlier

service-cidr sets the virtual IP range used by Services; set it like this for now

pod-network-cidr sets the Pod network range; set it like this for now

cri-socket sets the CRI to cri-dockerd

To view the version:

kubeadm  version


kubeadm init --apiserver-advertise-address=192.168.0.35 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.27.2 --service-cidr=10.168.0.0/12 --pod-network-cidr=10.244.0.0/16 --cri-socket unix:///var/run/cri-dockerd.sock  --ignore-preflight-errors=all

After executing the initialization command, check the exit status; 0 means success:

echo $?


The output at the end of initialization contains the join command for the worker nodes; save it for later:

kubeadm join 192.168.0.35:6443 --token 62kh2k.7lfpud7ridya1it7 \
	--discovery-token-ca-cert-hash sha256:b55066a01216999577c2260bad6349ab8e293bff58ec9ea041b2c7c7bb51913e
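The token above is only valid for 24 hours by default. If it has expired by the time a node joins, a fresh join command can be printed on the master:

```shell
# Run on the master if the original bootstrap token has expired:
kubeadm token create --print-join-command
```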

After the initialization is complete, operate according to the command prompt

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

If you are using the root user, you can run this command instead:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You can now view the master node.

6. Configure the flannel network plug-in

Once the master node is up, k8s prompts us to configure the Pod network. Implementing the Pod network on a k8s system relies on third-party plug-ins, of which there are many; here we use flannel.

cd ~ && mkdir flannel && cd flannel
wget https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

Check the image versions in the manifest, then pull them with docker on all nodes
vim kube-flannel.yml
All nodes pull the images that will be used:

docker pull docker.io/flannel/flannel:v0.22.0

docker pull docker.io/flannel/flannel-cni-plugin:v1.1.2

Apply the manifest:

kubectl apply -f kube-flannel.yml

Check the status of the master node

kubectl  get nodes

When the node goes from NotReady to Ready, our flannel deployment is complete.
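If you want to script the wait for this transition, the node table can be parsed; ready_count is a hypothetical helper that counts nodes whose STATUS column is Ready.

```shell
# ready_count: count Ready nodes in `kubectl get nodes` output from stdin.
ready_count() {
  awk 'NR > 1 && $2 == "Ready" { n++ } END { print n + 0 }'
}
# Example: kubectl get nodes | ready_count
```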

7. Node node joins the cluster operation

Node1 node operation

kubeadm join 192.168.0.35:6443 --token 62kh2k.7lfpud7ridya1it7 \
	--discovery-token-ca-cert-hash sha256:b55066a01216999577c2260bad6349ab8e293bff58ec9ea041b2c7c7bb51913e --cri-socket unix:///var/run/cri-dockerd.sock

Check the status again; the two worker nodes have also joined successfully.
Look at the running status of flannel

 kubectl get pods --namespace kube-flannel   

So far the construction is complete.


Origin blog.csdn.net/m0_46400195/article/details/131070214