Deploying a highly available multi-master k8s cluster with kubeadm

For deploying an external etcd cluster, see: https://www.cnblogs.com/zhangmingcheng/p/13625664.html

For setting up the Nginx+Keepalived cluster, see: https://www.cnblogs.com/zhangmingcheng/p/13646907.html

Articles found online on this topic tend to be convoluted or riddled with errors, so this post walks through a multi-master k8s cluster deployment in detail, verified in practice, to make it easy for beginners to follow.

I. Environment: only a minimal setup is used here; for production, three master nodes are recommended

1. The etcd cluster is deployed on the master nodes

2. kubeadm initializes the masters through the VIP

master1   192.168.200.92

master2   192.168.200.93

node1   192.168.200.95

vip 192.168.200.201


Note: this deployment runs on local virtual machines. If you deploy on Alibaba Cloud instead, its servers do not support VIPs; use an SLB for load balancing in that case.

II. Server initialization

1. Environment initialization

For convenience, everything is scripted; run the script on every machine.

#!/bin/bash
# Disable the firewall
systemctl stop firewalld
systemctl disable firewalld

# Disable SELinux
sed -i 's#SELINUX=enforcing#SELINUX=disabled#' /etc/selinux/config
setenforce 0

# Set the hostname (adjust per machine: k8s-master1 / k8s-master2 / k8s-node1)
hostnamectl set-hostname k8s-master2

# Configure name resolution
cat >> /etc/hosts << EOF
192.168.200.92    k8s-master1
192.168.200.93    k8s-master2
192.168.200.95    k8s-node1
EOF

# Disable swap
swapoff -a
sed -i 's#\/dev/mapper/centos-swap#\#/dev/mapper/centos-swap#' /etc/fstab

# Install Docker
yum install -y vim ntpdate wget epel-release
ntpdate ntp1.aliyun.com
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
yum -y install docker-ce-18.06.1.ce-3.el7
systemctl enable docker && systemctl start docker
docker --version

# Pass bridged IPv4 traffic to iptables chains
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

# Configure the Kubernetes yum repo
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
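
A quick sanity check after the script finishes (a minimal sketch; every command here is either built in or installed by the script above):

getenforce                  # should report Permissive (Disabled after a reboot)
free -m | grep -i swap      # the Swap line should show 0 total
systemctl is-active docker  # should print "active"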

2. Load the ipvs modules:

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
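
You can verify that the modules actually loaded:

lsmod | grep -e ip_vs -e nf_conntrack_ipv4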

3. Install keepalived and nginx on the master nodes, since the VIP is needed. The installation itself is straightforward and omitted here; follow the link at the top of this post.

The nginx and keepalived configurations are shown below.

cat /etc/nginx/nginx.conf

user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

stream {
    log_format proxy '$remote_addr $remote_port - [$time_local] $status $protocol '
                     '"$upstream_addr" "$upstream_bytes_sent" "$upstream_connect_time"';

    access_log /var/log/nginx/nginx-proxy.log proxy;

    upstream kubernetes_lb {
        server 192.168.200.92:6443 weight=5 max_fails=3 fail_timeout=30s;
        server 192.168.200.93:6443 weight=5 max_fails=3 fail_timeout=30s;
    }

    server {
        listen 7443;
        proxy_connect_timeout 30s;
        proxy_timeout 30s;
        proxy_pass kubernetes_lb;
    }
}
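
Before relying on the proxy, validate the configuration and confirm the listener (if nginx -t rejects the stream block, your nginx build may lack the stream module; on the EPEL build this usually means installing the nginx-mod-stream package, which is an assumption about your build):

nginx -t                 # validate the configuration syntax
systemctl restart nginx
ss -lntp | grep 7443     # the stream proxy should be listening on 7443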

cat /etc/keepalived/keepalived.conf

global_defs {
    notification_email {
        [email protected]
    }
    notification_email_from [email protected]
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_1
}

vrrp_instance VI_1 {
    state BACKUP
    interface eno16777736   # NIC device name; change to match your own interface
    virtual_router_id 88
    advert_int 1
    priority 50
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.200.201/24  # this is the virtual IP (VIP)
    }
}

Note: the configuration above comes from one master node only; the configuration on each master is not identical and must be adjusted accordingly, as sketched below.
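
For reference, on the other master typically only these values differ (an illustrative sketch, assuming master1 is the preferred VIP holder; pick priorities to suit your setup):

    router_id LVS_2    # unique per node
    state BACKUP       # or MASTER on the preferred node if you want preemption
    priority 100       # must differ between masters, e.g. 100 on master1 vs. 50 on master2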

4. Install the k8s components; run on all nodes

yum install -y kubelet-1.16.3 kubeadm-1.16.3 kubectl-1.16.3
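
kubeadm expects the kubelet service to be enabled (its preflight checks warn otherwise); there is no need to start it by hand, kubeadm will (re)start it during init/join:

systemctl enable kubelet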

III. k8s deployment

Write the init configuration file.

1. vim kubeadm-init.yaml

apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.200.92
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master1
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
controlPlaneEndpoint: "192.168.200.201:7443"
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers   # in mainland China, images must be pulled from this mirror
kind: ClusterConfiguration
kubernetesVersion: v1.16.3
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"

Note: pay close attention to the YAML formatting in this step; most failures copied from online articles come from ignored indentation, which breaks everything that follows.
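
A quick way to catch indentation mistakes before handing the file to kubeadm (a sketch assuming PyYAML is available with the system Python, which is typical on CentOS 7):

python -c 'import yaml; list(yaml.safe_load_all(open("kubeadm-init.yaml"))); print("YAML OK")'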

2. Pull the images

kubeadm config images pull --config kubeadm-init.yaml   # apiVersion or similar errors here usually mean the YAML file is malformed

A successful run looks like the following (the two "unknown field" warnings were caused by the misplaced "key" and "scheduler" fields; with the corrected file above they should no longer appear):
[root@k8s-master1 home]# kubeadm config images pull --config kubeadm-init.yaml
W0609 15:15:34.678610   20911 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:"kubeadm.k8s.io", Version:"v1beta2", Kind:"InitConfiguration"}: error unmarshaling JSON: while decoding JSON: json: unknown field "key"
W0609 15:15:34.679801   20911 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:"kubeadm.k8s.io", Version:"v1beta2", Kind:"ClusterConfiguration"}: error unmarshaling JSON: while decoding JSON: json: unknown field "scheduler"
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.16.3
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.16.3
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.16.3
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.16.3
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.3.15-0
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.6.2
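
If you only want to see which images the configuration resolves to, without downloading them:

kubeadm config images list --config kubeadm-init.yaml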

3. Cluster initialization

kubeadm init --config kubeadm-init.yaml

On success, the output ends with the join commands:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities 
and service account keys on each node and then running the following as root:

  kubeadm join 192.168.200.201:7443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:cac5554bf90fe63c73e4d025dbd94f64f70612fbc470b3cd894cfabf74e0cc43 \
    --control-plane 	  

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.200.201:7443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:cac5554bf90fe63c73e4d025dbd94f64f70612fbc470b3cd894cfabf74e0cc43

Run the commands as prompted:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

At this point you can already inspect the node status with kubectl get nodes.
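
It is also worth confirming that the control-plane components are healthy:

kubectl cluster-info
kubectl get cs     # shorthand for componentstatuses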

4. Copy the certificates to the other master nodes

USER=root
CONTROL_PLANE_IPS="k8s-master2"   # list every additional master here
for host in ${CONTROL_PLANE_IPS}; do
    ssh "${USER}"@$host "mkdir -p /etc/kubernetes/pki/etcd"
    scp /etc/kubernetes/pki/ca.* "${USER}"@$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/sa.* "${USER}"@$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/front-proxy-ca.* "${USER}"@$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/etcd/ca.* "${USER}"@$host:/etc/kubernetes/pki/etcd/
    scp /etc/kubernetes/admin.conf "${USER}"@$host:/etc/kubernetes/
done
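
This loop assumes passwordless root SSH from master1 to the other masters. Alternatively (kubeadm 1.15+), kubeadm can distribute the certificates itself:

kubeadm init phase upload-certs --upload-certs   # prints a certificate key
# then append to the control-plane join command:
#   --certificate-key <key>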

5. Join the new master with the control-plane join command printed by init above

  kubeadm join 192.168.200.201:7443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:cac5554bf90fe63c73e4d025dbd94f64f70612fbc470b3cd894cfabf74e0cc43 \
    --control-plane 
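
After the join succeeds, repeat the kubeconfig setup on the new master so kubectl works there as well:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config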

6. Join the worker node

  kubeadm join 192.168.200.201:7443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:cac5554bf90fe63c73e4d025dbd94f64f70612fbc470b3cd894cfabf74e0cc43 
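
The token ttl was set to 24h in kubeadm-init.yaml; if it has expired by the time a node joins, generate a fresh join command on any master:

kubeadm token create --print-join-command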

7. Deploy the pod network

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml
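
Watch the flannel and coredns pods come up; nodes only turn Ready once the network is running:

kubectl get pods --all-namespaces -o wide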

Now the nodes can be checked:

[root@k8s-master2 ~]# kubectl get nodes
NAME                STATUS   ROLES    AGE   VERSION
k8s-master2         Ready    master   14m   v1.16.3
k8s-node1           Ready    <none>   55s   v1.16.3
k8s-master1         Ready    master   18m   v1.16.3
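
Finally, a simple HA smoke test (a sketch; values assume the VIP 192.168.200.201 from this setup): stop keepalived on the master currently holding the VIP and confirm the API stays reachable:

systemctl stop keepalived      # on the master currently holding the VIP
ip a | grep 192.168.200.201    # on the other master: the VIP should have moved here
kubectl get nodes              # still works, since admin.conf points at the VIP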

Reposted from blog.csdn.net/yuemancanyang/article/details/117754821