Deploying a Kubernetes (v1.15.2) Cluster with kubeadm

A single control-plane k8s cluster is quite sufficient for all kinds of non-production workloads, especially cloud-native application development, testing, and hands-on learning, even though it is not an HA deployment. This article records the installation of a single control-plane K8S cluster, as groundwork for moving applications to the cloud and to cloud-native architectures.

If resources are sufficient (ten or more servers: three for the API servers, three for etcd storage, at least three as worker nodes, and one as a load balancer), you can deploy a high-availability cluster with multiple control plane nodes. The following high-availability cluster topology is shown for reference:

(Figure: kubeadm HA topology with external etcd)

The setup procedure differs, but the overall approach is similar; refer to the official installation documentation, which is clearly written and easy to follow.

1. Preparation

For hardware, 4 CPU cores or more and 8GB RAM or more are recommended, with Ubuntu 16.04+ or CentOS 7+ as the operating system and normal network connectivity among all servers. One server acts as the control plane node and the rest as worker nodes; here we prepare four worker nodes. The details are as follows:

Name    CPU   RAM   IP             OS             Software installed                  Role
CPN-1   4     8GB   10.163.10.6    Ubuntu 18.04   docker, kubeadm, kubelet, kubectl   Control Plane Node
WN-1    4     8GB   10.163.10.7    Ubuntu 18.04   docker, kubeadm, kubelet            Worker Node
WN-2    4     8GB   10.163.10.8    Ubuntu 18.04   docker, kubeadm, kubelet            Worker Node
WN-3    4     8GB   10.163.10.9    Ubuntu 18.04   docker, kubeadm, kubelet            Worker Node
WN-4    4     8GB   10.163.10.10   Ubuntu 18.04   docker, kubeadm, kubelet            Worker Node

Installing Docker

K8S supports several container runtimes; here we choose Docker as the runtime and install it on all node servers first. The highest Docker version fully supported by the current latest Kubernetes release (v1.15.2) is v18.06, so v18.06 is what we install here.
Refer to the official documentation.

# Remove old Docker versions
$ sudo apt-get remove docker docker-engine docker.io containerd runc

# Update the apt package index
$ sudo apt-get update

# Install prerequisite packages
$ sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common

# Add Docker's official GPG key
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

# Add the stable apt repository
$ sudo add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) \
   stable"

# Install Docker CE
$ sudo apt-get update
$ sudo apt-get install docker-ce=18.06.3~ce~3-0~ubuntu docker-ce-cli=18.06.3~ce~3-0~ubuntu containerd.io

If installing from the official repository is slow due to your network environment, you can install from the Aliyun mirror repository instead; the steps are as follows:

# Step 1: Install required system tools
sudo apt-get update
sudo apt-get -y install apt-transport-https ca-certificates curl software-properties-common
# Step 2: Install the GPG key
curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
# Step 3: Add the repository info
sudo add-apt-repository "deb [arch=amd64] http://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
# Step 4: Update and install Docker CE
sudo apt-get -y update

# Pick a version to install; here we choose 18.06.3
apt-cache madison docker-ce
# sudo apt-get -y install docker-ce=[version]
sudo apt-get -y install docker-ce=18.06.3~ce~3-0~ubuntu

Follow-up configuration

1. Add the current user to the "docker" group

$ sudo usermod -aG docker $USER
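
The group change takes effect at your next login; to apply it in the current session and confirm docker runs without sudo:

$ newgrp docker
$ docker info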

2. Set the cgroup driver to systemd

# Create the file /etc/docker/daemon.json with the following content:
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
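
Equivalently, the file can be created from the shell (note this overwrites any existing /etc/docker/daemon.json, so merge by hand if you already have one):

cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF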

3. Restart the service to apply the configuration

sudo systemctl daemon-reload
sudo systemctl restart docker.service

4. Check that the configuration took effect

docker info | grep Cgroup

# ECHO ------
Cgroup Driver: systemd

Disabling swap

sudo swapoff -a && sudo sed -i 's/^.*swap/#&/g' /etc/fstab
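
You can confirm swap is off; the totals should read 0:

free -h | grep -i swap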

Installing kubelet, kubeadm, and kubectl

Due to network restrictions, a direct apt-get install may fail, so configure a mirror repository first.

1. Configure the Aliyun Kubernetes mirror repository

$ sudo apt-get update && sudo apt-get install -y apt-transport-https

$ curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -

2. Create the file /etc/apt/sources.list.d/kubernetes.list with the following content:

deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
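
If you prefer not to edit the file by hand, the same line can be written with a single command:

echo "deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list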

3. Install kubelet, kubeadm, and kubectl

$ sudo apt-get update
$ sudo apt-get install -y kubelet kubeadm kubectl
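
This installs the newest versions available in the repository, which may be ahead of v1.15.2. To pin the exact versions this article targets (assuming the mirror carries the usual 1.15.2-00 package revision), install and hold them explicitly:

$ sudo apt-get install -y kubelet=1.15.2-00 kubeadm=1.15.2-00 kubectl=1.15.2-00
# Prevent apt-get upgrade from moving these packages unexpectedly
$ sudo apt-mark hold kubelet kubeadm kubectl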

4. Enable kubelet to start at boot

$ sudo systemctl enable kubelet

2. Deploying the Control Plane Node

The process uses a series of Docker images hosted in Google's image registry. You can verify whether these images can be pulled normally over your network with the kubeadm config images pull command. In a mainland China network environment, nine times out of ten you cannot connect directly; instead, download the images from another registry and re-tag them so that the relevant pods can start.
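
To run that connectivity check up front:

kubeadm config images pull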

Preparing the images

List the images needed during installation with:

kubeadm config images list

# ECHO ------
k8s.gcr.io/kube-apiserver:v1.15.2
k8s.gcr.io/kube-controller-manager:v1.15.2
k8s.gcr.io/kube-scheduler:v1.15.2
k8s.gcr.io/kube-proxy:v1.15.2
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1

Here we pull copies of the images from mirrorgooglecontainers on Docker Hub, re-tag them, and then remove the copies; the script is as follows:

images=(kube-proxy:v1.15.2 kube-scheduler:v1.15.2 kube-controller-manager:v1.15.2 kube-apiserver:v1.15.2 etcd:3.3.10 pause:3.1)
for imageName in ${images[@]} ; do
  # Pull from the Docker Hub mirror, re-tag to the name kubeadm expects, then drop the mirror tag.
  # Note: the target names must match `kubeadm config images list` exactly (no -amd64 suffix).
  docker pull mirrorgooglecontainers/$imageName
  docker tag mirrorgooglecontainers/$imageName k8s.gcr.io/$imageName
  docker rmi mirrorgooglecontainers/$imageName
done

mirrorgooglecontainers does not include coredns; we can pull it separately from another location:

docker pull coredns/coredns:1.3.1
docker tag coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
docker rmi coredns/coredns:1.3.1
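
At this point you can sanity-check that the local tags cover everything in the kubeadm list above:

docker images | grep k8s.gcr.io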

Initializing the control plane node

The control plane node is the machine where the control plane components run, including etcd (the cluster database) and the API server (which the kubectl CLI talks to).

A pod network add-on must be installed so that pods in the cluster can communicate with each other, and it must be deployed before any applications; CoreDNS will not start up before a network is installed. kubeadm only supports Container Network Interface (CNI) based networks. Several projects provide Kubernetes pod networks using CNI, and some of them also support Network Policy. See the networking add-ons page for a complete list.

Also note that the pod network must not overlap with any host network, as this can cause problems. If you find a conflict between your network plugin's preferred pod network and your host network, pass the --pod-network-cidr flag to kubeadm init to configure the pod network, and modify the corresponding value in the network plugin's YAML.

Here I choose the Calico network. According to the Calico documentation, we need to pass --pod-network-cidr=192.168.0.0/16 to kubeadm init. Now run kubeadm init <args>:

kubeadm init \
    --kubernetes-version v1.15.2 \
    --apiserver-advertise-address=10.163.10.6 \
    --pod-network-cidr=192.168.0.0/16

If everything goes well, the installation succeeds and output like the following is printed:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.163.10.6:6443 --token xxxxxx.xxxxxxxxxxxxxxxx \
    --discovery-token-ca-cert-hash sha256:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

Following the prompts, run these commands in order:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
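
Alternatively, if you are the root user, you can skip the copy and point kubectl directly at the admin kubeconfig:

export KUBECONFIG=/etc/kubernetes/admin.conf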

Make a note of the kubeadm join *** line in the output; it is needed later when adding worker nodes to the cluster, so copy it and keep it somewhere safe.

Installing the network

At this point, kubectl get pods --all-namespaces should show the CoreDNS pods in the Pending state; they only move to Running after a network is installed. We choose Calico to provide the pod network. The network component itself runs as a k8s application; install it with the command below.

kubectl apply -f https://docs.projectcalico.org/v3.8/manifests/calico.yaml

After installing the pod network, confirm that it is working by checking that the CoreDNS pods are Running in the output of kubectl get pods --all-namespaces:

kubectl get pods --all-namespaces

# ECHO ----
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-7bd78b474d-vmq2w   1/1     Running   0          4m57s
kube-system   calico-node-2cwtx                          1/1     Running   0          4m57s
kube-system   coredns-5c98db65d4-gv2j6                   1/1     Running   0          10m
kube-system   coredns-5c98db65d4-n6lpj                   1/1     Running   0          10m
kube-system   etcd-vm-10-13-ubuntu                       1/1     Running   0          8m54s
kube-system   kube-apiserver-vm-10-13-ubuntu             1/1     Running   0          9m10s
kube-system   kube-controller-manager-vm-10-13-ubuntu    1/1     Running   0          9m3s
kube-system   kube-proxy-qbk66                           1/1     Running   0          10m
kube-system   kube-scheduler-vm-10-13-ubuntu             1/1     Running   0          9m8s

Pods take time to start; please be patient.

3. Joining Worker Nodes

Once the CoreDNS pods are up and running, we can add worker nodes to the cluster. Each worker node server needs docker, kubeadm, and kubelet installed; refer to the control plane node deployment steps above.

Pulling the images

A worker node server needs to run at least two pods, using the kube-proxy and pause images. As before, these cannot be pulled directly from k8s.gcr.io, so pull and re-tag them in advance with the following commands:

images=(kube-proxy:v1.15.2 pause:3.1)
for imageName in ${images[@]} ; do
  docker pull mirrorgooglecontainers/$imageName  
  docker tag mirrorgooglecontainers/$imageName k8s.gcr.io/$imageName  
  docker rmi mirrorgooglecontainers/$imageName
done

Joining the cluster

After control plane initialization completes, run the join command it provided on each worker node; the format is:

kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>

The --token and --discovery-token-ca-cert-hash values in this command appear in the output printed when the control plane node finished deploying; copy them from there.

You can list tokens by running kubeadm token list on the control plane node. Tokens expire after 24 hours; to create a new one, use kubeadm token create.
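
If you would rather not assemble the token and hash by hand, kubeadm can create a fresh token and print the complete join command in one step:

# Prints a ready-to-run "kubeadm join ..." line
kubeadm token create --print-join-command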

You can obtain the --discovery-token-ca-cert-hash value with the following command:

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
   openssl dgst -sha256 -hex | sed 's/^.* //'

Note that if you need to re-run kubeadm join, first delete the node on the control plane node with kubectl delete node <node-name>, then run kubeadm reset on the worker node to clean it up.
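
For example, for a worker node named wn-1 (a placeholder; substitute the actual node name), the cleanup looks like this:

# On the control plane node: remove the node object
kubectl delete node wn-1

# On the worker node: undo the changes made by kubeadm join
sudo kubeadm reset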

After a node runs the join command, you can watch pod startup progress from the control plane node with watch kubectl get pods --all-namespaces -o wide and observe the pods scheduled on the new node; once they start normally, the join has succeeded and the node status becomes Ready. Repeat these steps to join each worker node to the cluster.

Checking worker node status

After worker nodes join the cluster, their status switches from NotReady to Ready as the pods on them come up; this takes time, so be patient. Once all nodes have joined, view their status with:

kubectl get nodes -o wide

# ECHO ------
NAME              STATUS   ROLES    AGE    VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
vm-10-6-ubuntu    Ready    master   9h     v1.15.2   10.163.10.13   <none>        Ubuntu 18.04.1 LTS   4.15.0-54-generic   docker://18.6.3
vm-10-7-ubuntu    Ready    <none>   9h     v1.15.2   10.163.10.12   <none>        Ubuntu 18.04.1 LTS   4.15.0-54-generic   docker://18.6.3
vm-10-8-ubuntu    Ready    <none>   9h     v1.15.2   10.163.10.9    <none>        Ubuntu 18.04.1 LTS   4.15.0-54-generic   docker://18.6.3
vm-10-9-ubuntu    Ready    <none>   8h     v1.15.2   10.163.10.7    <none>        Ubuntu 18.04.1 LTS   4.15.0-54-generic   docker://18.6.3
vm-10-10-ubuntu   Ready    <none>   120m   v1.15.2   10.163.10.2    <none>        Ubuntu 18.04.1 LTS   4.15.0-54-generic   docker://18.6.3

4. Installing the Dashboard

The dashboard is not installed along with the cluster and must be deployed separately; install it with the following command:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta3/aio/deploy/recommended.yaml

Note the dashboard version here: not every dashboard release is fully compatible with every k8s release. The official docs publish a compatibility table covering Kubernetes 1.11 through 1.15, marking each version as either fully supported or subject to breaking changes between Kubernetes API versions, in which case some features might not work correctly in the Dashboard. Check that table for the release you deploy.

By default, the Dashboard is deployed with a minimal RBAC configuration. Currently, the Dashboard only supports logging in with a Bearer Token. You can follow the official guide to create a sample user.
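
A minimal sketch of such a sample user, following the pattern of that guide (the admin-user name is an example, and granting cluster-admin is only appropriate for test clusters):

# Create a service account in the namespace the dashboard manifest uses
kubectl create serviceaccount admin-user -n kubernetes-dashboard

# Bind it to the cluster-admin role (test clusters only)
kubectl create clusterrolebinding admin-user \
  --clusterrole=cluster-admin \
  --serviceaccount=kubernetes-dashboard:admin-user

# Print the Bearer Token to paste into the dashboard login screen
kubectl -n kubernetes-dashboard describe secret \
  $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')

With kubectl proxy running on the control plane node, the dashboard should then be reachable at http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/.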

As for how to use the dashboard, I will set aside time later to write a detailed introduction.

5. Conclusion

We now have a single control-plane k8s cluster with four worker nodes. This article only walks through the deployment process; cluster management involves a great many k8s concepts and a large body of domain knowledge. The official documentation covers all of these concepts and operations in detail, so read more and practice more.

Finally, I wish you all good health, smooth work, and good luck.
