Installing a Kubernetes v1.26.1 cluster on CentOS 7 with kubeadm

Installing a k8s cluster the kubeadm way

1. Prepare the machines

Host            Description
192.168.0.11    master node; needs internet access; official minimum is 2 CPU cores and 2 GB RAM
192.168.0.12    worker node 1 (worker01); needs internet access; official minimum is 2 CPU cores and 2 GB RAM
192.168.0.13    worker node 2 (worker02); needs internet access; official minimum is 2 CPU cores and 2 GB RAM

2. Server environment configuration

2.1 Disable the firewall (all nodes)

Stop firewalld and prevent it from starting at boot:

systemctl stop firewalld
systemctl disable firewalld

2.2 Disable SELinux (all nodes)

#Set SELINUX=disabled in /etc/selinux/config
vim /etc/selinux/config
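
If you prefer not to edit the file by hand, the same change can be made non-interactively (a small sketch; setenforce 0 only turns SELinux off until the next reboot, and the sed edit assumes the current value is enforcing):

#Disable SELinux for the current session
setenforce 0
#Permanently set SELINUX=disabled in the config file
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config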

2.3 Disable the swap partition (all nodes)

The change takes effect after rebooting the server.

vim /etc/fstab						#Permanently disable swap: delete or comment out the swap device mount line in /etc/fstab
#/dev/mapper/centos-swap swap                    swap    defaults        0 0
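
Swap can also be turned off immediately without waiting for a reboot (a small sketch; the sed line comments out every fstab entry containing the word swap, so double-check the file afterwards):

#Disable swap for the current session
swapoff -a
#Comment out swap entries in /etc/fstab
sed -ri 's/.*swap.*/#&/' /etc/fstab
#Verify swap is off (the Swap line should show 0)
free -h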

2.4 Upgrade the CentOS 7 kernel (all nodes)

The 3.10.x kernel that ships with CentOS 7.x has known bugs that make Docker and Kubernetes unstable.
Reference: upgrading the CentOS system kernel
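
If the referenced article is not at hand, a common approach is to install a newer long-term-support kernel from ELRepo (a sketch only, assuming the ELRepo repository is reachable; package names and the boot-entry index may differ on your system):

#Import the ELRepo key and enable the repository
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
yum -y install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm
#Install the long-term-support kernel from the elrepo-kernel repo
yum -y --enablerepo=elrepo-kernel install kernel-lt
#Boot the newly installed kernel by default, then reboot
grub2-set-default 0
grub2-mkconfig -o /boot/grub2/grub.cfg
reboot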

2.5 Set the hostnames (all nodes)

cat >> /etc/hosts <<EOF
192.168.0.11 master
192.168.0.12 worker01
192.168.0.13 worker02
EOF
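
The block above only adds name resolution; to actually rename each machine, also run the matching hostnamectl command on each node (a sketch using the names from the hosts file above):

hostnamectl set-hostname master			#on 192.168.0.11
hostnamectl set-hostname worker01		#on 192.168.0.12
hostnamectl set-hostname worker02		#on 192.168.0.13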

2.6 Time synchronization (all nodes)

yum -y install ntp
systemctl start ntpd
systemctl enable ntpd
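
To confirm synchronization is actually happening, query the NTP peers; an asterisk in front of a server marks the current sync source:

ntpq -p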

3. Install Docker (all nodes)

Install Docker following this guide: installing docker-ce on CentOS
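
If that guide is unavailable, one common way to install docker-ce is from the Alibaba Cloud mirror (a sketch; the exact docker-ce version you get depends on the repository at install time):

#Add the Aliyun docker-ce repository and install docker
yum -y install yum-utils
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum -y install docker-ce docker-ce-cli containerd.io
systemctl enable --now docker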

Configure Docker:

#registry-mirrors configures Docker registry mirrors
#exec-opts sets the cgroup driver to systemd, which is what kubelet expects
cat <<EOF > /etc/docker/daemon.json
{
    "registry-mirrors": [
        "http://hub-mirror.c.163.com",
        "https://docker.mirrors.ustc.edu.cn",
        "https://registry.docker-cn.com"
    ],
    "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

#Restart docker and check that the setting took effect
systemctl restart docker
docker info | grep -i "Cgroup Driver"

4. Install cri-dockerd (all nodes)

#Download the latest rpm from https://github.com/Mirantis/cri-dockerd/releases and upload it to each server
rpm -ivh cri-dockerd-0.3.1-3.el7.x86_64.rpm

#Edit the ExecStart line in /usr/lib/systemd/system/cri-docker.service
vim /usr/lib/systemd/system/cri-docker.service
ExecStart=/usr/bin/cri-dockerd --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.7

systemctl daemon-reload
systemctl enable --now cri-docker
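
Before moving on, it is worth confirming that cri-dockerd is running and that its socket exists, since this socket path is passed to kubeadm later:

systemctl status cri-docker --no-pager
ls -l /var/run/cri-dockerd.sock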

5. Configure the Alibaba Cloud yum repository for Kubernetes (all nodes)

The x86_64 at the end of the baseurl must match your architecture; run uname -m to check it. For example, for x86_64 the baseurl can be: https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
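
You can verify that the repository is usable before installing anything:

yum makecache fast
yum repolist | grep -i kubernetes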

6. Install kubeadm, kubelet and kubectl with yum (all nodes)

Install kubeadm, kubelet and kubectl on all three machines (kubeadm and kubectl are command-line tools; kubelet is the system service).

#Remove any previous installation
yum -y remove kubelet kubeadm kubectl

#List the kubeadm versions available from yum; we install 1.26.1 here (omitting the version installs the latest)
yum list --showduplicates | grep kubeadm
#Install kubeadm, kubelet and kubectl
yum -y install kubelet-1.26.1 kubeadm-1.26.1 kubectl-1.26.1
#Enable kubelet at boot (no need to start it now; it cannot run yet, and kubeadm init on the master will start it automatically)
systemctl enable kubelet
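
A quick check that the tools were installed at the expected version:

kubeadm version
kubectl version --client
kubelet --version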

7. Initialize the control plane on the master node

The command below may report errors; see the companion article on common Kubernetes errors and make the corresponding fixes.

# kubeadm init --help shows the available flags and their usage

#Run the initialization on the master node only (not on the worker nodes)
#apiserver-advertise-address  the IP the API server advertises, i.e. the master node's IP
#image-repository  pull the control-plane images from the Alibaba Cloud mirror instead of the default registry
#kubernetes-version  the k8s version, matching the kubeadm version installed in step 6
#service-cidr  the address range used for Service (ClusterIP) virtual IPs
#pod-network-cidr  the address range used for Pod IPs; it must match the CNI plugin's configuration (flannel defaults to 10.244.0.0/16)
#cri-socket  tell kubeadm to use cri-dockerd as the container runtime interface

kubeadm init \
--apiserver-advertise-address=192.168.0.11 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.26.1 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16 \
--cri-socket unix:///var/run/cri-dockerd.sock \
--ignore-preflight-errors=all

If the kubeadm init command above fails, reset kubeadm and clean up the leftover containers and images before retrying:

#Reset kubeadm
kubeadm reset -f

#Remove unused Docker containers and images
docker system prune -f

When you see output like the following, the initialization has succeeded:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.11:6443 --token xw8o4d.ly5o9kxgbodtrykw \
        --discovery-token-ca-cert-hash sha256:2fbb2be8829dd90f789b13269f2ef4d8de6a39bc568c61e3a6a00ea3c95efd94

Run the commands suggested in that output on the master node (just copy them as printed):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=/etc/kubernetes/admin.conf
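
A quick sanity check that kubectl can now reach the new API server (at this point only the master is listed, and it stays NotReady until the network plugin is deployed in step 9):

kubectl cluster-info
kubectl get nodes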

8. Join the worker nodes to the k8s cluster

#Append --cri-socket at the end so the join uses cri-dockerd
kubeadm join 192.168.0.11:6443 --token xw8o4d.ly5o9kxgbodtrykw \
        --discovery-token-ca-cert-hash sha256:2fbb2be8829dd90f789b13269f2ef4d8de6a39bc568c61e3a6a00ea3c95efd94 \
        --cri-socket unix:///var/run/cri-dockerd.sock
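
If the token has expired or the join command was lost, a fresh one can be printed on the master with the command below; remember to append the --cri-socket flag shown above before running it on a worker:

kubeadm token create --print-join-command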

If the join command fails, fix whatever the error message reports, for example:

[preflight] Running pre-flight checks
        [WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
        [ERROR Port-10250]: Port 10250 is in use
        [ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

Reset kubeadm, delete the leftover configuration files and stop kubelet, then run the join command again:

kubeadm reset --cri-socket unix:///var/run/cri-dockerd.sock


rm -f /etc/kubernetes/kubelet.conf
rm -f /etc/kubernetes/pki/ca.crt
systemctl stop kubelet

9. Deploy the Pod network (master node)

kubectl get nodes shows every node as NotReady because no CNI network plugin has been deployed yet:

[root@master ~]# kubectl get nodes
NAME       STATUS     ROLES           AGE   VERSION
master     NotReady   control-plane   58m   v1.26.1
worker01   NotReady   <none>          37m   v1.26.1
worker02   NotReady   <none>          25m   v1.26.1
#Deploy the pod network from the master node
#After the worker nodes join, kubectl get nodes on the master still shows NotReady because no CNI network plugin has been
#deployed yet; the kubeadm init output in step 7 already told us to deploy a pod network. Pod networking in k8s is provided
#by third-party plugins, of which there are dozens; the best known include flannel, calico, canal and kube-router. A simple,
#easy-to-use choice is the flannel project originally from CoreOS.

#The command below applies the pod network manifest online. Because raw.githubusercontent.com is hosted abroad, the download
#may fail; one workaround is to look up the domain's IP at http://ip.tool.chinaz.com/, add the mapping to /etc/hosts, and
#retry the command a few times until it succeeds.
[root@master ~]# kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml					
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
[root@master ~]# kubectl get pods -n kube-system						#check the pod status
NAME                             READY   STATUS     RESTARTS   AGE
coredns-7f6cbbb7b8-bm2gl         0/1     Pending    0          86m
coredns-7f6cbbb7b8-frq8l         0/1     Pending    0          86m
etcd-master                      1/1     Running    1          87m
kube-apiserver-master            1/1     Running    1          87m
kube-controller-manager-master   1/1     Running    1          87m
kube-flannel-ds-5rwkt            0/1     Init:1/2   0          2m13s
kube-flannel-ds-9fqkl            1/1     Running    0          2m13s
kube-flannel-ds-bvgh4            1/1     Running    0          2m13s
kube-proxy-8vmqg                 1/1     Running    0          59m
kube-proxy-ll9hw                 1/1     Running    0          86m
kube-proxy-zndg7                 1/1     Running    0          59m
kube-scheduler-master            1/1     Running    1          87m
# Checked again after restarting the server
[root@master ~]# kubectl get nodes										#the pod network is in place and every node is now Ready
NAME       STATUS     ROLES           AGE   VERSION
master     Ready      control-plane   58m   v1.26.1
worker01   Ready      <none>          37m   v1.26.1
worker02   Ready      <none>          25m   v1.26.1
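
If the online apply keeps failing because raw.githubusercontent.com is unreachable, an alternative is to download kube-flannel.yml on a machine that can reach it, copy the file to the master, and apply the local copy:

kubectl apply -f kube-flannel.yml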

10. Test the k8s cluster

Create a pod in k8s to verify that it runs normally:

[root@master ~]# kubectl create deployment httpd --image=httpd					#create an httpd deployment as a test
deployment.apps/httpd created
[root@master ~]# kubectl expose deployment httpd --port=80 --type=NodePort		#expose it on port 80; other ports may be blocked by the firewall
service/httpd exposed
[root@master ~]# kubectl get pod,svc											#check the pod and the exposed service
NAME                         READY   STATUS    RESTARTS   AGE
pod/httpd-757fb56c8d-w42l5   1/1     Running   0          39s

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
service/httpd        NodePort    10.102.83.215   <none>        80:30176/TCP     26s			#30176 is the externally mapped port (the NodePort)
service/kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP          112m
[root@master ~]# 
#As a beginner, don't worry too much about these commands for now; just use port 80, since other ports may be blocked by the firewall and the page would not load

Test in a web browser: either the master node's IP or a worker node's IP works, with port 30176. If the page loads, the k8s cluster is deployed and the network is working.
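
The same check can be done from the command line on any node, using the NodePort from the output above; the default Apache httpd welcome page should come back:

curl http://192.168.0.11:30176			#should return the "It works!" default page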

Source: blog.csdn.net/weixin_39190162/article/details/128785555