Deploying Kubernetes v1.24.17 (with the Flannel CNI plugin)

1. Preparing the Kubernetes cluster environment

Recommended reading:
https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/install-kubeadm/

1.1 Environment requirements

Environment:
	Hardware: 2 CPU cores, 4 GB RAM
	Disk: 50 GB or more
	Operating system: Ubuntu 22.04.4 LTS
	IPs and hostnames:
		10.0.0.241 master241
		10.0.0.242 worker242
		10.0.0.243 worker243
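
Hostnames and name resolution are assumed to be configured roughly as follows (a minimal sketch; run on every node, adjusting set-hostname per node):

hostnamectl set-hostname master241   # use worker242 / worker243 on the other nodes
cat >> /etc/hosts <<EOF
10.0.0.241 master241
10.0.0.242 worker242
10.0.0.243 worker243
EOF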

1.2 Disable swap

swapoff -a && sysctl -w vm.swappiness=0  # disable swap for the current boot
sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab  # comment out swap entries so it stays disabled after reboot
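
To confirm swap is really off, for example:

free -h | grep -i swap    # the Swap line should show 0B everywhere
swapon --show             # prints nothing when no swap device is active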

1.3 Make sure each node has a unique MAC address and product_uuid

ifconfig  eth0  | grep ether | awk '{print $2}'
cat /sys/class/dmi/id/product_uuid 

1.4 Check network connectivity

ping baidu.com -c 10   # verify outbound Internet access
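
The nodes should also be able to reach each other directly; for example, from master241:

ping -c 3 10.0.0.242
ping -c 3 10.0.0.243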

1.5 Allow iptables to inspect bridged traffic

modprobe bridge
modprobe br_netfilter
cat <<EOF | tee /etc/modules-load.d/k8s.conf
bridge
br_netfilter
EOF

cat <<EOF | tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system
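
To verify that the modules are loaded and the settings took effect:

lsmod | grep -E 'bridge|br_netfilter'
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward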

1.6 Make sure the required ports are free

Reference: https://kubernetes.io/zh-cn/docs/reference/networking/ports-and-protocols/

Check that the ports required by the components on the master and worker nodes are not already in use.
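
For example, ss can confirm that nothing is already listening on the control-plane ports (6443, 2379-2380, 10250, 10257, 10259) or on the worker ports (10250, plus the NodePort range 30000-32767); empty output means the ports are free:

ss -tlnp | grep -E ':(6443|2379|2380|10250|10257|10259)\b'   # on the master
ss -tlnp | grep -E ':(10250)\b'                              # on the workers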

1.7 Install containerd

(Installation steps omitted.)
[root@master241 ~]# ctr version
Client:
  Version:  v1.6.36
  Revision: 88c3d9bc5b5a193f40b7c14fa996d23532d6f956
  Go version: go1.22.7

Server:
  Version:  v1.6.36
  Revision: 88c3d9bc5b5a193f40b7c14fa996d23532d6f956
  UUID: 40e0c4d0-7d11-45af-bcd4-e390d85c9954
[root@master241 ~]# 
[root@master241 ~]# ctr ns ls
NAME LABELS 
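
Since the containerd installation itself is skipped above, here is only a sketch of a typical kubeadm-friendly configuration (assuming containerd 1.6.x with its default config.toml): switch runc to the systemd cgroup driver and point the pause (sandbox) image at a reachable registry.

containerd config default > /etc/containerd/config.toml
# use the systemd cgroup driver, matching the kubelet default on systemd hosts
sed -i 's#SystemdCgroup = false#SystemdCgroup = true#' /etc/containerd/config.toml
# pull the pause image from the Aliyun mirror instead of registry.k8s.io
sed -ri 's#(sandbox_image = ").*(")#\1registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6\2#' /etc/containerd/config.toml
systemctl restart containerd && systemctl enable containerd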

1.8 Install kubeadm, kubelet and kubectl on all nodes

1.8.1 Configure the package repository on all nodes

apt-get update && apt-get install -y apt-transport-https
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add - 
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
apt-get update

1.8.2 Check the Kubernetes versions available from the repository

[root@master241 ~]# apt-cache madison kubeadm
   kubeadm |  1.28.2-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm |  1.28.1-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm |  1.28.0-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   ...
   kubeadm | 1.23.17-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm | 1.23.16-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm | 1.23.15-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm | 1.23.14-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   ...

1.8.3 Install kubelet, kubeadm and kubectl on all nodes

apt-get -y install kubelet=1.24.17-00 kubeadm=1.24.17-00 kubectl=1.24.17-00
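
Optionally, pin the packages so a later apt upgrade does not move the cluster to a different version:

apt-mark hold kubelet kubeadm kubectl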

1.8.4 Check the component versions

[root@worker242 ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.17", GitCommit:"953be8927218ec8067e1af2641e540238ffd7576", GitTreeState:"clean", BuildDate:"2023-02-22T13:33:14Z", GoVersion:"go1.19.6", Compiler:"gc", Platform:"linux/amd64"}
[root@worker242 ~]# 
[root@worker242 ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.17", GitCommit:"953be8927218ec8067e1af2641e540238ffd7576", GitTreeState:"clean", BuildDate:"2023-02-22T13:34:27Z", GoVersion:"go1.19.6", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
[root@worker242 ~]# 
[root@worker242 ~]# kubelet --version
Kubernetes v1.23.17
[root@worker242 ~]# 


Tip:
	Check the other two nodes as well, to make sure the version you installed matches the one above!
	
Reference:
https://kubernetes.io/zh/docs/tasks/tools/install-kubectl-linux/

1.9 Check the time zone

[root@master241 ~]# ln -svf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime 
'/etc/localtime' -> '/usr/share/zoneinfo/Asia/Shanghai'
[root@master241 ~]# 
[root@master241 ~]# ll /etc/localtime
lrwxrwxrwx 1 root root 33 Apr  7 17:34 /etc/localtime -> /usr/share/zoneinfo/Asia/Shanghai
[root@master241 ~]# 
[root@master241 ~]# date -R
Mon, 07 Apr 2025 17:34:34 +0800
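
Consistent time across nodes also matters for TLS certificates and etcd; a quick check (chrony is assumed as the NTP client, adjust if you use systemd-timesyncd):

timedatectl    # "System clock synchronized: yes" is what you want
# apt-get -y install chrony && systemctl enable --now chrony   # if no NTP client is running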

2. Initializing the Kubernetes control plane with kubeadm

2.1 Pre-load the images

[root@master241 ~]# ctr -n k8s.io i ls | awk 'NR>=1{print $1}' | grep google_containers | grep -v sha256
registry.aliyuncs.com/google_containers/coredns:v1.8.6
registry.aliyuncs.com/google_containers/etcd:3.5.6-0
registry.aliyuncs.com/google_containers/kube-apiserver:v1.24.17
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.24.17
registry.aliyuncs.com/google_containers/kube-proxy:v1.24.17
registry.aliyuncs.com/google_containers/kube-scheduler:v1.24.17
registry.aliyuncs.com/google_containers/pause:3.7
registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6
How to export these images:
[root@master241 ~]# ctr -n k8s.io i export master-v1.24.17.tar.gz `ctr -n k8s.io i ls | awk 'NR>=1{print $1}' | grep google_containers | grep -v sha256`
[root@master241 ~]# 
[root@master241 ~]# ll -h master-v1.24.17.tar.gz 
-rw-r--r-- 1 root root 226M Apr  7 17:46 master-v1.24.17.tar.gz
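
If the other nodes cannot pull from the registry directly, the exported tarball can be copied over and imported with ctr (a sketch; paths and SSH access are assumptions):

scp master-v1.24.17.tar.gz worker242:/root/
scp master-v1.24.17.tar.gz worker243:/root/
# then on each node
ctr -n k8s.io i import master-v1.24.17.tar.gz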

2.2 Initialize the master node with kubeadm

[root@master241 ~]# kubeadm init --kubernetes-version=v1.24.17 --image-repository registry.aliyuncs.com/google_containers  --pod-network-cidr=10.100.0.0/16 --service-cidr=10.200.0.0/16  --service-dns-domain=dezyan.com
...

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.0.241:6443 --token 2tw3l6.zoei48h2bawc7t5z \
	--discovery-token-ca-cert-hash sha256:da5e232386796fa2378189fce4881e7fe5a8a864b8c68b3f74e7859e1cf4fd2c 
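
The bootstrap token in the join command above expires after 24 hours by default; if it has expired, a fresh join command can be generated on the master:

kubeadm token create --print-join-command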

3. Copy the admin kubeconfig to manage the cluster

[root@master241 ~]# mkdir -p $HOME/.kube
[root@master241 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master241 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config	

4. Verify that the control-plane components are healthy

[root@master241 ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
controller-manager   Healthy   ok                              
scheduler            Healthy   ok                              
etcd-0               Healthy   {"health":"true","reason":""}   
[root@master241 ~]# 
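
Since ComponentStatus is deprecated, the API server health endpoints can also be queried directly as an alternative check:

kubectl get --raw='/readyz?verbose'
kubectl get --raw='/livez?verbose'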

5. Check the nodes

[root@master241 ~]# kubectl get no -o wide
NAME        STATUS     ROLES           AGE     VERSION    INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION       CONTAINER-RUNTIME
master241   NotReady   control-plane   9m14s   v1.24.17   10.0.0.241    <none>        Ubuntu 22.04.4 LTS   5.15.0-119-generic   containerd://1.6.36

6. Joining the worker nodes with kubeadm

6.1 Pre-load the images

[root@worker242 ~]# ctr -n k8s.io i ls | awk 'NR>=1{print $1}' | grep google_containers | grep -v sha256
registry.aliyuncs.com/google_containers/kube-proxy:v1.24.17
registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6

[root@worker243 ~]# ctr -n k8s.io i ls | awk 'NR>=1{print $1}' | grep google_containers | grep -v sha256
registry.aliyuncs.com/google_containers/kube-proxy:v1.24.17
registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6

6.2 Join the worker nodes to the cluster

[root@worker242 ~]# kubeadm join 10.0.0.241:6443 --token 2tw3l6.zoei48h2bawc7t5z --discovery-token-ca-cert-hash sha256:da5e232386796fa2378189fce4881e7fe5a8a864b8c68b3f74e7859e1cf4fd2c

[root@worker243 ~]# kubeadm join 10.0.0.241:6443 --token 2tw3l6.zoei48h2bawc7t5z --discovery-token-ca-cert-hash sha256:da5e232386796fa2378189fce4881e7fe5a8a864b8c68b3f74e7859e1cf4fd2c

6.3 Verify that the worker nodes joined successfully

[root@master241 ~]# kubectl get no -o wide
NAME        STATUS     ROLES           AGE     VERSION    INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION       CONTAINER-RUNTIME
master241   NotReady   control-plane   19m     v1.24.17   10.0.0.241    <none>        Ubuntu 22.04.4 LTS   5.15.0-119-generic   containerd://1.6.36
worker242   NotReady   <none>          7m58s   v1.24.17   10.0.0.242    <none>        Ubuntu 22.04.4 LTS   5.15.0-119-generic   containerd://1.6.36
worker243   NotReady   <none>          7s      v1.24.17   10.0.0.243    <none>        Ubuntu 22.04.4 LTS   5.15.0-119-generic   containerd://1.6.36

7. Deploying the Flannel CNI plugin

7.1 Pre-load the images

7.2 Download the manifest and change the pod network CIDR

wget https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

The manifest's default network is 10.244.0.0/16; change it to match the --pod-network-cidr passed to kubeadm init (10.100.0.0/16 here):

[root@master241 ~]# sed -i '/16/s#244#100#' kube-flannel.yml 
[root@master241 ~]# 
[root@master241 ~]# grep 16 kube-flannel.yml 
      "Network": "10.100.0.0/16",

7.3 Install Flannel

[root@master241 ~]# kubectl apply -f kube-flannel.yml 

7.4 Check the pods

[root@master241 ~]# kubectl get pods -A
NAMESPACE      NAME                                READY   STATUS    RESTARTS   AGE
kube-flannel   kube-flannel-ds-5g7mw               1/1     Running   0          7m56s
kube-flannel   kube-flannel-ds-bxjjq               1/1     Running   0          4s
kube-flannel   kube-flannel-ds-rx97h               1/1     Running   0          7m56s
kube-system    coredns-74586cf9b6-8rm7n            1/1     Running   0          43m
kube-system    coredns-74586cf9b6-zd76c            1/1     Running   0          43m
kube-system    etcd-master241                      1/1     Running   0          43m
kube-system    kube-apiserver-master241            1/1     Running   0          43m
kube-system    kube-controller-manager-master241   1/1     Running   0          43m
kube-system    kube-proxy-kkqsb                    1/1     Running   0          32m
kube-system    kube-proxy-njdqc                    1/1     Running   0          43m
kube-system    kube-proxy-tv98r                    1/1     Running   0          24m
kube-system    kube-scheduler-master241            1/1     Running   0          43m
[root@master241 ~]# 
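
Once the Flannel DaemonSet pods are Running on every node, the nodes should change from NotReady to Ready:

kubectl get no   # all three nodes should now report STATUS Ready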

7.5 Create test pods to verify that the network plugin works

[root@master241 ~]# cat test-cni.yaml
apiVersion: v1
kind: Pod
metadata:
  name: xiuxian-v1
spec:
  nodeName: worker242
  containers:
  - image: registry.cn-hangzhou.aliyuncs.com/yinzhengjie-k8s/apps:v1 
    name: xiuxian

---

apiVersion: v1
kind: Pod
metadata:
  name: xiuxian-v2
spec:
  nodeName: worker243
  containers:
  - image: registry.cn-hangzhou.aliyuncs.com/yinzhengjie-k8s/apps:v2
    name: xiuxian
    
    
[root@master241 ~]# kubectl apply -f  test-cni.yaml
pod/xiuxian-v1 created
pod/xiuxian-v2 created

[root@master241 ~]# kubectl get pods -o wide
NAME         READY   STATUS    RESTARTS   AGE     IP           NODE        NOMINATED NODE   READINESS GATES
xiuxian-v1   1/1     Running   0          4m17s   10.100.1.4   worker242   <none>           <none>
xiuxian-v2   1/1     Running   0          4s      10.100.2.3   worker243   <none>           <none>
[root@master241 ~]# 
[root@master241 ~]# curl  10.100.1.4
[root@master241 ~]# curl  10.100.2.3 

Source: blog.csdn.net/dingzy1/article/details/147062698