Installing Kubernetes 1.13 with kubeadm


Introduction

kubeadm is the official Kubernetes tool for quickly installing a Kubernetes cluster. It is updated alongside every Kubernetes release, and each release adjusts some of kubeadm's cluster-configuration practices, so experimenting with kubeadm is a good way to learn the upstream project's latest best practices for cluster configuration.

Preparing the Environment

Two CentOS 7 virtual machines were created with VMware Workstation on Windows:
192.168.112.38 k8s-node1 (4 cores, 4 GB RAM)
192.168.112.39 k8s-node2 (4 cores, 8 GB RAM)

  • Add host entries to /etc/hosts
...
192.168.112.38 k8s-node1
192.168.112.39 k8s-node2
  • Disable the firewall
systemctl stop firewalld
systemctl disable firewalld
  • Disable SELinux
setenforce 0

vi /etc/selinux/config
SELINUX=disabled
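
The same edit can be made non-interactively; a small sketch, assuming the file currently contains SELINUX=enforcing:

sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config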

Create the file /etc/sysctl.d/k8s.conf with the following content:

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1

Apply the changes:

sysctl -p /etc/sysctl.d/k8s.conf
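
If sysctl reports that the net.bridge.* keys do not exist, the br_netfilter kernel module has not been loaded yet; loading it (and making it persist across reboots) is a reasonable extra step:

modprobe br_netfilter
echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf
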
  • Disable swap
    Turn off swap on the running system:
swapoff -a

Edit /etc/fstab and comment out the swap mount so swap is not re-enabled on reboot:

UUID=37417d9b-de8d-4d91-bd86-4dfd46e89128 /                       xfs     defaults        1 1
UUID=db9ade39-3952-4ff1-9aeb-3d95ea4dac59 /boot                   xfs     defaults        1 2
UUID=4ee6483c-6a9a-43f8-be86-70046c996a96 /home                   xfs     defaults        1 2
#UUID=6072c1ae-6d61-45dc-9d24-c4ad78ec4445 swap                    swap    defaults        0 0

Then add the following line to /etc/sysctl.d/k8s.conf:

vm.swappiness=0

Run sysctl -p /etc/sysctl.d/k8s.conf to apply the change.
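
If swap cannot be turned off on a particular node, kubelet can instead be told to tolerate it. This is only a hedged alternative to the approach above; once kubelet is installed (later in this article), a sketch of the override in /etc/sysconfig/kubelet would look like:

# /etc/sysconfig/kubelet -- allow kubelet to start with swap enabled (sketch)
KUBELET_EXTRA_ARGS=--fail-swap-on=false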

Install Docker on All Nodes

Kubernetes has used the CRI (Container Runtime Interface) since version 1.6. The default container runtime is still Docker, accessed through the dockershim CRI implementation built into the kubelet.

Install the prerequisites and add the Docker yum repository:

yum install -y xfsprogs yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

List the available Docker versions:

[root@k8s-node1 ~]# yum list docker-ce.x86_64  --showduplicates |sort -r
Loaded plugins: fastestmirror, langpacks
Installed Packages
Available Packages
 * updates: ftp.cuhk.edu.hk
Loading mirror speeds from cached hostfile
 * extras: ftp.cuhk.edu.hk
docker-ce.x86_64            3:18.09.0-3.el7                    docker-ce-stable 
docker-ce.x86_64            18.06.1.ce-3.el7                   docker-ce-stable 
docker-ce.x86_64            18.06.1.ce-3.el7                   @docker-ce-stable
docker-ce.x86_64            18.06.0.ce-3.el7                   docker-ce-stable 
docker-ce.x86_64            18.03.1.ce-1.el7.centos            docker-ce-stable 
docker-ce.x86_64            18.03.0.ce-1.el7.centos            docker-ce-stable 
docker-ce.x86_64            17.12.1.ce-1.el7.centos            docker-ce-stable 
docker-ce.x86_64            17.12.0.ce-1.el7.centos            docker-ce-stable 
docker-ce.x86_64            17.09.1.ce-1.el7.centos            docker-ce-stable 
docker-ce.x86_64            17.09.0.ce-1.el7.centos            docker-ce-stable 
docker-ce.x86_64            17.06.2.ce-1.el7.centos            docker-ce-stable 
docker-ce.x86_64            17.06.1.ce-1.el7.centos            docker-ce-stable 
docker-ce.x86_64            17.06.0.ce-1.el7.centos            docker-ce-stable 
docker-ce.x86_64            17.03.3.ce-1.el7                   docker-ce-stable 
docker-ce.x86_64            17.03.2.ce-1.el7.centos            docker-ce-stable 
docker-ce.x86_64            17.03.1.ce-1.el7.centos            docker-ce-stable 
docker-ce.x86_64            17.03.0.ce-1.el7.centos            docker-ce-stable 

Install Docker 18.06.1 on each node:

yum makecache fast

yum install -y --setopt=obsoletes=0 docker-ce-18.06.1.ce-3.el7

systemctl start docker
systemctl enable docker
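
The kubelet and Docker need to agree on a cgroup driver; kubeadm typically detects Docker's driver automatically, but it is worth checking what Docker reports. The daemon.json snippet below, which switches Docker to the systemd driver, is optional and only a sketch, not something this walkthrough depends on:

docker info | grep -i 'cgroup driver'

# optional: force the systemd cgroup driver (sketch)
cat <<EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker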

Deploying Kubernetes with kubeadm

Install kubeadm and kubelet on All Nodes

Install kubeadm and kubelet on each node. First, add the Kubernetes yum repository:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
        https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

Test that the repository is reachable:

curl https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64

Refresh the yum cache and install the packages:

yum makecache fast
yum install -y kubelet kubeadm kubectl
...
Installed:
  kubeadm.x86_64 0:1.13.0-0          kubectl.x86_64 0:1.13.0-0          kubelet.x86_64 0:1.13.0-0

Dependency Installed:
  cri-tools.x86_64 0:1.12.0-0        kubernetes-cni.x86_64 0:0.6.0-0    socat.x86_64 0:1.7.3.2-2.el7

Complete!

The output shows that three dependencies were installed as well: cri-tools, kubernetes-cni, and socat:

  • Upstream bumped the CNI dependency to version 0.6.0 back in Kubernetes 1.9, and 1.13 still uses that version
  • socat is a dependency of kubelet
  • cri-tools is the command-line tool for the CRI (Container Runtime Interface)
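
Note that yum install without an explicit version pulls whatever release is newest at the time. To reproduce the versions shown above exactly, the packages can be pinned; a sketch:

yum install -y kubelet-1.13.0-0 kubeadm-1.13.0-0 kubectl-1.13.0-0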

Initialize the Cluster with kubeadm init

Enable the kubelet service on every node so it starts at boot:

systemctl enable kubelet.service

Next, initialize the cluster with kubeadm. k8s-node1 will serve as the Master Node; run the following command on it:

kubeadm init --kubernetes-version=v1.13.0 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.112.38

Because we will use flannel as the Pod network add-on, the command above passes --pod-network-cidr=10.244.0.0/16.
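
The same settings can also be expressed as a kubeadm configuration file instead of command-line flags; a sketch using the v1beta1 config API shipped with kubeadm 1.13 (the file name kubeadm-config.yaml is arbitrary):

cat <<EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.112.38
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.13.0
networking:
  podSubnet: 10.244.0.0/16
EOF

kubeadm init --config kubeadm-config.yaml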
The output of kubeadm init:

[init] Using Kubernetes version: v1.13.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-node1 localhost] and IPs [192.168.112.38 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-node1 localhost] and IPs [192.168.112.38 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-node1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.112.38]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 28.002313 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-node1" as an annotation
[mark-control-plane] Marking the node k8s-node1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-node1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: lauqjh.zhdjhnwldjnjd2qm
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.112.38:6443 --token lauqjh.zhdjhnwldjnjd2qm --discovery-token-ca-cert-hash sha256:31a376d3d61b84b69637e6b8b54f76dc0b2b9af91d54923c63b1ef40a0f7e99a

The command for joining nodes to the cluster is: kubeadm join 192.168.112.38:6443 --token lauqjh.zhdjhnwldjnjd2qm --discovery-token-ca-cert-hash sha256:31a376d3d61b84b69637e6b8b54f76dc0b2b9af91d54923c63b1ef40a0f7e99a

As the output suggests, configure kubectl access for a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
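
If you are working as root, as in this walkthrough, pointing KUBECONFIG at the admin kubeconfig also works:

export KUBECONFIG=/etc/kubernetes/admin.conf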

Check the cluster status:

kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok                   
controller-manager   Healthy   ok                   
etcd-0               Healthy   {"health": "true"}  

Install the Pod Network

Next, install the flannel network add-on:

[root@k8s-node1 .kube]# mkdir -p ~/k8s/
[root@k8s-node1 .kube]# cd
[root@k8s-node1 ~]# cd k8s/
[root@k8s-node1 k8s]# ll
total 0
[root@k8s-node1 k8s]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
--2018-12-05 15:17:14--  https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.64.133, 151.101.128.133, 151.101.192.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.64.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 10599 (10K) [text/plain]
Saving to: 'kube-flannel.yml'

100%[==========================================================>] 10,599      --.-K/s   in 0.001s

2018-12-05 15:17:14 (14.8 MB/s) - 'kube-flannel.yml' saved [10599/10599]

[root@k8s-node1 k8s]# kubectl apply -f  kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created

Check the result:

[root@k8s-node1 k8s]# kubectl get ds -l app=flannel -n kube-system
NAME                      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                     AGE
kube-flannel-ds-amd64     1         1         1       1            1           beta.kubernetes.io/arch=amd64     2m5s
kube-flannel-ds-arm       0         0         0       0            0           beta.kubernetes.io/arch=arm       2m5s
kube-flannel-ds-arm64     0         0         0       0            0           beta.kubernetes.io/arch=arm64     2m5s
kube-flannel-ds-ppc64le   0         0         0       0            0           beta.kubernetes.io/arch=ppc64le   2m5s
kube-flannel-ds-s390x     0         0         0       0            0           beta.kubernetes.io/arch=s390x     2m5s

Use kubectl get pod --all-namespaces -o wide to make sure all Pods are in the Running state.

[root@k8s-node1 k8s]# kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                                READY   STATUS    RESTARTS   AGE     IP               NODE        NOMINATED NODE   READINESS GATES
kube-system   coredns-86c58d9df4-7nf4d            1/1     Running   0          13m     10.244.0.3       k8s-node1   <none>           <none>
kube-system   coredns-86c58d9df4-s7s2n            1/1     Running   0          13m     10.244.0.2       k8s-node1   <none>           <none>
kube-system   etcd-k8s-node1                      1/1     Running   0          13m     192.168.112.38   k8s-node1   <none>           <none>
kube-system   kube-apiserver-k8s-node1            1/1     Running   0          13m     192.168.112.38   k8s-node1   <none>           <none>
kube-system   kube-controller-manager-k8s-node1   1/1     Running   0          12m     192.168.112.38   k8s-node1   <none>           <none>
kube-system   kube-flannel-ds-amd64-r4r8g         1/1     Running   0          6m34s   192.168.112.38   k8s-node1   <none>           <none>
kube-system   kube-proxy-dttsr                    1/1     Running   0          13m     192.168.112.38   k8s-node1   <none>           <none>
kube-system   kube-scheduler-k8s-node1            1/1     Running   0          13m     192.168.112.38   k8s-node1   <none>           <none>

Allow the Master Node to Run Workloads

In a cluster initialized with kubeadm, Pods are not scheduled onto the Master Node for security reasons; in other words, the Master Node does not run workloads. This is because the current master, k8s-node1, carries the node-role.kubernetes.io/master:NoSchedule taint:

[root@k8s-node1 k8s]# kubectl describe node k8s-node1 | grep Taint
Taints:             node-role.kubernetes.io/master:NoSchedule

Since this is a test environment, remove the taint so that k8s-node1 can run workloads:

[root@k8s-node1 k8s]# kubectl taint nodes k8s-node1 node-role.kubernetes.io/master-
node "k8s-node1" untaintednode/k8s-node1 untainted

Add a Node to the Kubernetes Cluster

Now add the host k8s-node2 to the Kubernetes cluster:

[root@k8s-node2 ~]# kubeadm join 192.168.112.38:6443 --token lauqjh.zhdjhnwldjnjd2qm --discovery-token-ca-cert-hash sha256:31a376d3d61b84b69637e6b8b54f76dc0b2b9af91d54923c63b1ef40a0f7e99a
[preflight] Running pre-flight checks
[discovery] Trying to connect to API Server "192.168.112.38:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.112.38:6443"
[discovery] Requesting info from "https://192.168.112.38:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.112.38:6443"
[discovery] Successfully established connection with API Server "192.168.112.38:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-node2" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

Run the following on the master node to list the nodes in the cluster:

[root@k8s-node1 k8s]# kubectl get node
NAME        STATUS   ROLES         AGE   VERSION
k8s-node1   Ready    master        86m   v1.13.0
k8s-node2   Ready    <none>        56m   v1.13.0
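
The bootstrap token printed by kubeadm init is valid for 24 hours by default. If it has expired by the time another node needs to join, a fresh join command can be generated on the master; a sketch:

kubeadm token create --print-join-command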
