k8s node deployment (v1.13.1)

System environment:

node OS: CentOS-7-x86_64-DVD-1908.iso

node IP address: 192.168.1.204

node hostname (make sure the node's hostname differs from the master's): k8s.node03

Goal: install a k8s node on this machine and join it to the existing cluster

The steps are as follows:

1. Install basic tools

yum install vim
yum install lrzsz
yum install docker
systemctl start docker
systemctl enable docker
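
Before moving on, a quick check of my own (not in the original post) that docker is actually up:

# verify the docker daemon is running and see its version
systemctl is-active docker
docker info | grep -i 'server version'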

2. Check the system time on the node. If it differs from the master's, change the node's time to match the master's (mine is basically in sync with the master, so I skip this). For how to change the time, see here.

[root@k8s ~]# date
Sun Oct 20 03:08:01 EDT 2019
[root@k8s ~]# 
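
If the clocks do drift apart, one common way to keep them in sync is chrony, which ships with CentOS 7. This is only a sketch of mine, not part of the original steps, and the manual date shown is just an example:

yum install -y chrony
systemctl enable chronyd
systemctl start chronyd
chronyc makestep                  # force an immediate clock correction
# or set the time by hand to match the master, e.g.:
# date -s "2019-10-20 03:08:01"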

3. Stop the firewall. If this is a public-facing host, configure its network security group instead and open only the necessary ports.

systemctl stop firewalld
systemctl disable firewalld
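
If you would rather keep firewalld than disable it, a rough sketch (my assumption, not part of the original steps) of opening the standard worker-node ports looks like this:

firewall-cmd --permanent --add-port=10250/tcp         # kubelet API
firewall-cmd --permanent --add-port=30000-32767/tcp   # NodePort service range
firewall-cmd --reload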

4. Disable SELinux

setenforce 0

Edit the file with vim /etc/selinux/config so that it reads:

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected. 
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted
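
A quick check of my own that SELinux is no longer enforcing:

getenforce    # Permissive right after setenforce 0, Disabled after a reboot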

5. Create the k8s sysctl configuration file /etc/sysctl.d/k8s.conf

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

6. Run the following commands to make the changes take effect.

modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
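
Optionally (my addition, not in the original post), confirm the values took effect and make sure br_netfilter loads again after a reboot:

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf   # persist the module across reboots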

7. Add the Aliyun Kubernetes yum repository

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[k8s]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum makecache
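
A quick check of my own that the repo works and the pinned version is actually available:

yum repolist | grep -i kubernetes
yum list kubelet --showduplicates | grep 1.13.1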

8. Install kubelet, kubectl, and kubeadm (when installing the pinned version (1.13.1), install these three packages in the order below)

yum install -y kubelet-1.13.1
yum install kubectl-1.13.1
yum install kubeadm-1.13.1
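
The post does not show it, but it is usually worth enabling kubelet here so it comes back after a reboot (kubeadm join will start it later):

systemctl enable kubelet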

9. Check the installed versions of kubelet, kubectl, and kubeadm

[root@k8s ~]# yum list installed|grep kube
kubeadm.x86_64                      1.13.1-0                           @k8s     
kubectl.x86_64                      1.13.1-0                           @k8s     
kubelet.x86_64                      1.13.1-0                           @k8s     
kubernetes-cni.x86_64               0.6.0-0                            @k8s 
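
The binaries can also confirm the version themselves (a small check of my own):

kubelet --version
kubeadm version -o short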

10. Pull the required docker images. Because of network restrictions this normally needs a proxy; if the node's network can reach k8s.gcr.io directly, this step can be skipped.

10.1 This is how the base docker images on my master node were obtained. You could run the same commands again on the node, but to save download time I package the images on the master and send them straight to the node.

docker pull docker.io/mirrorgooglecontainers/kube-apiserver-amd64:v1.13.1
docker tag docker.io/mirrorgooglecontainers/kube-apiserver-amd64:v1.13.1 k8s.gcr.io/kube-apiserver:v1.13.1
docker pull docker.io/mirrorgooglecontainers/kube-controller-manager-amd64:v1.13.1
docker tag docker.io/mirrorgooglecontainers/kube-controller-manager-amd64:v1.13.1 k8s.gcr.io/kube-controller-manager:v1.13.1
docker pull docker.io/mirrorgooglecontainers/kube-scheduler-amd64:v1.13.1
docker tag docker.io/mirrorgooglecontainers/kube-scheduler-amd64:v1.13.1 k8s.gcr.io/kube-scheduler:v1.13.1
docker pull docker.io/mirrorgooglecontainers/kube-proxy-amd64:v1.13.1
docker tag docker.io/mirrorgooglecontainers/kube-proxy-amd64:v1.13.1 k8s.gcr.io/kube-proxy:v1.13.1
docker pull docker.io/mirrorgooglecontainers/pause-amd64:3.1
docker tag docker.io/mirrorgooglecontainers/pause-amd64:3.1 k8s.gcr.io/pause:3.1
docker pull docker.io/mirrorgooglecontainers/etcd-amd64:3.2.24
docker tag docker.io/mirrorgooglecontainers/etcd-amd64:3.2.24 k8s.gcr.io/etcd:3.2.24
docker pull docker.io/coredns/coredns:1.2.6
docker tag docker.io/coredns/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6
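
The same pulls and retags can be written as a loop; this is just my condensed rewrite of the commands above, with coredns handled separately because it comes from a different mirror:

for img in kube-apiserver-amd64:v1.13.1 kube-controller-manager-amd64:v1.13.1 \
           kube-scheduler-amd64:v1.13.1 kube-proxy-amd64:v1.13.1 \
           pause-amd64:3.1 etcd-amd64:3.2.24; do
    docker pull docker.io/mirrorgooglecontainers/${img}
    # retag as k8s.gcr.io/<name-without-amd64>:<version>
    docker tag docker.io/mirrorgooglecontainers/${img} k8s.gcr.io/${img%%-amd64*}:${img#*:}
done
docker pull docker.io/coredns/coredns:1.2.6
docker tag docker.io/coredns/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6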

10.2 [master node] Save all of the master's required docker images into an archive and copy it to the node. If you installed a version other than 1.13.1, check the tags against the output of docker images before saving; do not get a tag wrong.

docker save k8s.gcr.io/kube-proxy:v1.13.1 k8s.gcr.io/coredns:1.2.6 k8s.gcr.io/kube-controller-manager:v1.13.1 k8s.gcr.io/kube-apiserver:v1.13.1 k8s.gcr.io/kube-scheduler:v1.13.1 k8s.gcr.io/etcd:3.2.24 k8s.gcr.io/pause:3.1 -o k8s.1.13.1.tar
scp k8s.1.13.1.tar 192.168.1.204:~/

10.3 [node] Restore the docker images from the archive

cd ~
docker load -i k8s.1.13.1.tar
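
A quick check of my own that the images were restored on the node:

docker images | grep k8s.gcr.io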

11. Disable swap

swapoff -a && sed -i '/ swap / s/^/#/' /etc/fstab
echo "vm.swappiness=0">>/etc/sysctl.d/k8s.conf
sysctl -p /etc/sysctl.d/k8s.conf
echo 'Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --fail-swap-on=false"'>>/etc/systemd/system/kubelet.service.d/10-kubeadm.conf
systemctl daemon-reload
systemctl restart kubelet
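
To confirm swap is really off (my addition):

free -m        # the Swap: line should show 0 total
swapon -s      # prints nothing when swap is off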

12. Join the cluster

kubeadm join 192.168.1.201:6443 --token 6xnc86.n3ftiy9cu9wuyl5a --discovery-token-ca-cert-hash sha256:6dbcec4d2e20e8936e8d74714a194fa838cc6544f98bde41cd766b69b7a4fc12
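
If the token from the master has expired (tokens last 24 hours by default), run this on the master to print a fresh join command; this is my addition, not part of the original steps:

kubeadm token create --print-join-command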

13. [master node] Check that the node was deployed successfully; k8s.node03 has been added:

[root@localhost ~]# kubectl get nodes
NAME                    STATUS   ROLES    AGE     VERSION
k8s.node01              Ready    <none>   6h27m   v1.13.1
k8s.node02              Ready    <none>   6h38m   v1.13.1
k8s.node03              Ready    <none>   61s     v1.13.1
localhost.localdomain   Ready    master   7h32m   v1.13.1
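
Optionally (my addition), inspect the new node and its system pods from the master:

kubectl describe node k8s.node03
kubectl get pods -n kube-system -o wide | grep k8s.node03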

Reposted from www.cnblogs.com/tu13/p/centos_k8s_node.html