Deploying Highly Available k8s 1.9.2 with kubeadm

Installing highly available k8s 1.9.2:

Node information:
Hostname        IP               Notes
docker09        10.211.121.9     master and etcd
docker10        10.211.121.10    master and etcd
docker22        10.211.121.22    master and etcd
vip-keepalive   10.211.121.102   VIP for high availability

I. System initialization
1. Configure the yum repositories:
sudo rpm -Uvh http://nginx.org/packages/centos/7/noarch/RPMS/nginx-release-centos-7-0.el7.ngx.noarch.rpm
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system

# disable swap (kubelet refuses to start with swap enabled)
swapoff -a
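Note that swapoff -a only disables swap until the next reboot; to keep it off permanently, the swap entry in /etc/fstab should be commented out as well. A sketch, run against a sample fstab under /tmp so it can be dry-run anywhere (the device names are examples):

```shell
# Build a sample fstab and comment out its swap line; on a real node run the
# same sed against /etc/fstab (it leaves a .bak backup behind).
printf '%s\n' \
  '/dev/mapper/centos-root /    xfs  defaults 0 0' \
  '/dev/mapper/centos-swap swap swap defaults 0 0' > /tmp/fstab.sample
sed -i.bak '/[[:space:]]swap[[:space:]]/s/^/#/' /tmp/fstab.sample
grep '^#' /tmp/fstab.sample
```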

2. Upgrade the OS kernel (a newer kernel works better with the overlay2 storage driver used below)
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
yum --enablerepo=elrepo-kernel install kernel-ml-devel kernel-ml -y
grub2-set-default 0
reboot

3. Configure SSH mutual trust between the nodes (omitted)

II. Install Docker (the highest Docker version k8s currently supports is 17.03)
# install docker

sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
sudo yum makecache fast

Installing a specific version:
https://yq.aliyun.com/articles/110806

yum install -y https://mirrors.aliyun.com/docker-ce/linux/centos/7/x86_64/stable/Packages/docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm
yum install -y docker-ce-17.03.2.ce-1.el7.centos

# work around an undersized /var partition by relocating Docker's data directory;
# do this before Docker first starts, otherwise /var/lib/docker already exists
# and the symlink would land inside it
mkdir -p /data0/docker/var && ln -s /data0/docker/var /var/lib/docker
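If Docker has already run once, /var/lib/docker will exist and a plain ln -s would put the link *inside* that directory instead of replacing it; ln -sfn avoids that. A sketch under /tmp so it is safe to dry-run (the real paths are /data0/docker/var and /var/lib/docker):

```shell
# Demonstrate relocating Docker's data dir with ln -sfn, which replaces an
# existing symlink instead of descending into it.
root=/tmp/docker-relocate-demo
mkdir -p "$root/data0/docker/var" "$root/var/lib"
ln -sfn "$root/data0/docker/var" "$root/var/lib/docker"
readlink "$root/var/lib/docker"
```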

# private registry and registry mirror
# (write the file with a heredoc; echo "..." would strip the inner quotes
# and produce invalid JSON)
mkdir -p /etc/docker/
cat <<EOF > /etc/docker/daemon.json
{
  "storage-driver": "overlay2",
  "storage-opts": [ "overlay2.override_kernel_check=true" ],
  "registry-mirrors": ["https://pej3ico7.mirror.aliyuncs.com"],
  "insecure-registries": ["10.211.121.26:5000","10.211.121.9:5000"],
  "live-restore": false
}
EOF

systemctl enable docker && systemctl start docker
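A syntax error in /etc/docker/daemon.json stops dockerd from starting at all, so the JSON is worth validating before starting Docker. A sketch against a scratch copy, assuming python3 is available (on stock CentOS 7 use python instead):

```shell
# json.tool exits non-zero on malformed JSON, so this doubles as a lint step.
cat <<'EOF' > /tmp/daemon.json
{
  "storage-driver": "overlay2",
  "storage-opts": [ "overlay2.override_kernel_check=true" ],
  "registry-mirrors": ["https://pej3ico7.mirror.aliyuncs.com"],
  "insecure-registries": ["10.211.121.26:5000", "10.211.121.9:5000"],
  "live-restore": false
}
EOF
python3 -m json.tool < /tmp/daemon.json > /dev/null && echo "valid JSON"
```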

III. Install k8s
Official high-availability deployment documentation:
https://kubernetes.io/docs/setup/independent/high-availability/

Before you begin:
1. Due to network restrictions, the packages and images cannot be downloaded directly, so fetch them offline; they have already been prepared at the link below. The better fix is still to sort out ××× access, otherwise all sorts of problems follow; message me privately if you need a proxy:
Link: https://pan.baidu.com/s/1dzQyiq  password: dyvi

# install kubelet, kubectl and kubernetes-cni
cd k8s192 && yum localinstall -y *.rpm

# load the images
for i in *.tar; do docker load < $i; done

2. Change the kubelet cgroup driver to cgroupfs (it must match the driver Docker uses), i.e. the line should read Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs":
sed -i 's/systemd/cgroupfs/g' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
systemctl daemon-reload

3. Command completion:
yum install -y bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc

Setting up an HA etcd cluster
https://kubernetes.io/docs/setup/independent/high-availability/
1. Create the etcd CA certs, following the official documentation.
2. Run etcd
Use the systemd option, i.e. deploy etcd directly on the hosts rather than in containers. Downloading the etcd packages also requires ××× .
The official document has a pitfall: in
ExecStart=/usr/local/bin/etcd --name ${PEER_NAME}
the PEER_NAME variable must match the <etcd0> member name below, otherwise startup fails:
--initial-cluster <etcd0>=https://<etcd0-ip-address>:2380,<etcd1>=https://<etcd1-ip-address>:2380,<etcd2>=https://<etcd2-ip-address>:2380 \
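For concreteness, a sketch of the matching unit fragment for docker09, assuming PEER_NAME and PRIVATE_IP come from an /etc/etcd.env environment file as in the official guide (member names and IPs are taken from the node table above; all other etcd flags are elided):

```
# /etc/systemd/system/etcd.service (fragment, docker09)
# /etc/etcd.env sets PEER_NAME=docker09 and PRIVATE_IP=10.211.121.9;
# the --name value must equal the "docker09" key in --initial-cluster.
EnvironmentFile=/etc/etcd.env
ExecStart=/usr/local/bin/etcd --name ${PEER_NAME} \
    --initial-advertise-peer-urls https://${PRIVATE_IP}:2380 \
    --initial-cluster docker09=https://10.211.121.9:2380,docker10=https://10.211.121.10:2380,docker22=https://10.211.121.22:2380 \
    --initial-cluster-state new
```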

When the first etcd node starts, it waits for the other members; etcd only becomes healthy once the other nodes are up.

Check the etcd status; the following output indicates a healthy cluster:
etcdctl --endpoints=https://10.211.121.9:2379 --ca-file=/etc/kubernetes/pki/etcd/ca.pem --cert-file=/etc/kubernetes/pki/etcd/server.pem --key-file=/etc/kubernetes/pki/etcd/server-key.pem cluster-health

member 6ebda37987af36d is healthy: got healthy result from https://10.211.121.22:2379
member 15f530c6e1580621 is healthy: got healthy result from https://10.211.121.9:2379
member a43675f5f779e638 is healthy: got healthy result from https://10.211.121.10:2379
cluster is healthy

3. Set up the master load balancer

The on-site option is used here: keepalived running on all 3 nodes.
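The keepalived configuration itself is not shown above; a minimal sketch for the first master follows. The interface name, priorities, and password are assumptions to adapt; the VIP comes from the node table:

```
! /etc/keepalived/keepalived.conf (sketch, master0)
vrrp_instance VI_1 {
    state MASTER            ! BACKUP on the other two masters
    interface eth0          ! assumption: adjust to the real NIC
    virtual_router_id 51
    priority 100            ! lower (e.g. 90, 80) on the other masters
    authentication {
        auth_type PASS
        auth_pass k8s-vip   ! assumption: pick your own secret
    }
    virtual_ipaddress {
        10.211.121.102      ! the VIP
    }
}
```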

4. Run kubeadm init on master0
Initialize the master node with kubeadm.
# just in case, reset kubeadm and clean up anything left over from a previous installation
kubeadm reset
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/

Create /etc/kubernetes/config.yaml:

apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
api:
  advertiseAddress: 10.211.121.102
etcd:
  endpoints:
  - https://10.211.121.9:2379
  - https://10.211.121.10:2379
  - https://10.211.121.22:2379
  caFile: /etc/kubernetes/pki/etcd/ca.pem
  certFile: /etc/kubernetes/pki/etcd/client.pem
  keyFile: /etc/kubernetes/pki/etcd/client-key.pem
kubernetesVersion: 1.9.2
networking:
  podSubnet: 10.244.0.0/16
apiServerCertSANs:
- 10.211.121.9
- 10.211.121.10
- 10.211.121.22
- docker09
- docker10
- docker22
apiServerExtraArgs:
  endpoint-reconciler-type: lease

advertiseAddress: 10.211.121.102 is the VIP.
podSubnet: 10.244.0.0/16 must be the same subnet the flannel manifest deploys, otherwise pod networking fails.
kubernetesVersion must be pinned explicitly, otherwise kubeadm tries to resolve the version remotely and ultimately fails pulling images.

Run:
kubeadm init --config=/etc/kubernetes/config.yaml

Sync the certificates generated on master0 to the other master nodes (run this on each of the other masters), then, in /etc/kubernetes/pki, remove the apiserver cert and key so kubeadm regenerates them for each node:
scp -r root@<master0-ip-address>:/etc/kubernetes/pki /etc/kubernetes/
rm apiserver.*

5. On the other master nodes, run the same init:
kubeadm init --config=/etc/kubernetes/config.yaml

6. Install the network add-on
The flannel overlay network is used here.
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml

Or Calico:
kubectl apply -f https://docs.projectcalico.org/v2.0/getting-started/kubernetes/installation/hosted/kubeadm/calico.yaml

A stateless deployment can be used to verify the cluster:
https://kubernetes.io/docs/tasks/run-application/run-stateless-application-deployment/

7. For testing, make the masters schedulable
By default, to keep the masters secure, application pods are not scheduled onto them. Remove that restriction with (the trailing "-" removes the taint):
kubectl taint nodes --all node-role.kubernetes.io/master-

That completes the deployment: a cluster with 3 master nodes.

kubectl get node

NAME STATUS ROLES AGE VERSION
docker09 Ready master 33m v1.9.2
docker10 NotReady master 31m v1.9.2
docker22 Ready master 30m v1.9.2

References:
https://kubernetes.io/docs/setup/independent/high-availability/
https://www.kubernetes.org.cn/3808.html


Reposted from blog.51cto.com/devops9527/2117906