Setting up a Kubernetes v1.11.0 cluster with kubeadm

Copyright notice: this is an original post by the author (blog: https://blog.csdn.net/lianjoke0); please do not repost without permission. Original article: https://blog.csdn.net/lianjoke0/article/details/82115068

1. Preparation

This setup uses at least three servers or VMs, each with at least 2 CPU cores and 4 GB of RAM, arranged as one master node and two worker nodes. The whole installation is done on Ubuntu 16.04.1 and covers kubeadm, the basic Kubernetes cluster, and the flannel network. The nodes are:

Role      Hostname     Internal IP     Public IP
Master    k8s-master   172.17.0.11     111.231.79.97
node      k8s-node1    172.17.0.17     118.25.99.129
node      k8s-node2    172.17.0.15     122.152.206.67

  • Operating system: Ubuntu 16.04.1 on all hosts

  • Host roles: as listed in the table above

2. Install kubeadm on all nodes

  • Configure the apt sources

Use the Aliyun mirrors for both the Ubuntu system packages and the Kubernetes packages. The GPG-related warnings shown below can be ignored (or see the key-import sketch after the warning output).

$ cat /etc/apt/sources.list
# Ubuntu system package sources
deb http://mirrors.aliyun.com/ubuntu/ xenial main restricted
deb http://mirrors.aliyun.com/ubuntu/ xenial-updates main restricted
deb http://mirrors.aliyun.com/ubuntu/ xenial universe
deb http://mirrors.aliyun.com/ubuntu/ xenial-updates universe
deb http://mirrors.aliyun.com/ubuntu/ xenial multiverse
deb http://mirrors.aliyun.com/ubuntu/ xenial-updates multiverse
deb http://mirrors.aliyun.com/ubuntu/ xenial-backports main restricted universe multiverse
# kubeadm and Kubernetes component sources
deb https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial main
$ sudo apt-get update
W: GPG error: https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 6A030B21BA07F4FB
W: The repository 'https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial InRelease' is not signed.
N: Data from such a repository can't be authenticated and is therefore potentially dangerous to use.
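
If you would rather not ignore the GPG warning (or rely on --allow-unauthenticated later), you can try importing the repository key first. A minimal sketch, assuming the Aliyun mirror serves the same apt-key.gpg file as the upstream Kubernetes apt repository:

$ curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
$ sudo apt-get update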
  • Install Docker

ubuntu@k8s-master:~$ sudo apt-get install docker.io
Reading package lists... Done
Building dependency tree       
Reading state information... Done
docker.io is already the newest version (17.03.2-0ubuntu2~16.04.1).
0 upgraded, 0 newly installed, 0 to remove and 222 not upgraded.
  • Install kubeadm, kubelet, and kubectl

ubuntu@k8s-master:~$ sudo apt-get install -y kubelet kubeadm kubectl --allow-unauthenticated
Reading package lists... Done
Building dependency tree
Reading state information... Done
kubeadm is already the newest version (1.11.1-00).
kubectl is already the newest version (1.11.1-00).
kubelet is already the newest version (1.11.1-00).
0 upgraded, 0 newly installed, 0 to remove and 222 not upgraded.
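
Optionally, pin the three packages so that a routine apt-get upgrade does not move the cluster to a newer, untested version:

$ sudo apt-mark hold kubelet kubeadm kubectl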

3. Initialize the master node with kubeadm

3.1 Configure a proxy to reach gcr.io

The official Kubernetes images are hosted on gcr.io (Google Container Registry), which is unreachable without a proxy from within mainland China. Even if you use a community mirror hosted on GitHub instead of the official registry, a proxy still speeds up the downloads considerably. The steps below set one up.

  • Install shadowsocks

shadowsocks is written in Python, so install Python first, then the pip package manager, and finally shadowsocks itself:

$ sudo apt-get install python
$ sudo apt-get install python-pip
$ sudo pip install shadowsocks
  • Configure shadowsocks

Create a configuration file named shadowsocks.json and fill in your server details:

ubuntu@k8s-master:~$ cat shadowsocks.json 
{
  "server": "your server",
  "server_port": your server port,
  "local_address": "127.0.0.1",
  "local_port": 1080,
  "password": "your passwd",
  "timeout": 600,
  "method": "aes-256-cfb"
}
  • Start shadowsocks

$ sudo sslocal -c shadowsocks.json &
  • Configure a global proxy

After starting the shadowsocks service you still cannot reach blocked sites, because shadowsocks provides a SOCKS5 proxy and each client has to be told to use it. To route the whole system through shadowsocks, set up a global HTTP proxy with polipo.

1. Install polipo

$ sudo apt-get install polipo

2. Configure polipo

$ vim /etc/polipo/config
logSyslog = true
logFile = /var/log/polipo/polipo.log
proxyAddress = "0.0.0.0"
 
socksParentProxy = "127.0.0.1:1080"
socksProxyType = socks5
 
chunkHighMark = 50331648
objectHighMark = 16384
 
serverMaxSlots = 64
serverSlots = 16
serverSlots1 = 32

3. Restart polipo

$ /etc/init.d/polipo restart

4. Configure the HTTP proxy for the shell

$ export http_proxy="http://127.0.0.1:8123/"

5. Test whether the proxy works; if you get a response, the global proxy is set up correctly

$ curl www.google.com

Note: after the server reboots, the following two commands must be run again (see the persistence sketch below):

$ sudo sslocal -c shadowsocks.json &
$ export http_proxy="http://127.0.0.1:8123/"
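
One way to avoid retyping these after every reboot (a sketch; the config path assumes shadowsocks.json lives in the ubuntu user's home directory) is to persist the proxy variable in the shell profile and keep a small helper script for sslocal:

$ echo 'export http_proxy="http://127.0.0.1:8123/"' >> ~/.bashrc
$ cat > ~/start-proxy.sh <<'EOF'
#!/bin/bash
# restart the local shadowsocks client after a reboot
sudo sslocal -c /home/ubuntu/shadowsocks.json &
EOF
$ chmod +x ~/start-proxy.sh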

3.2 Manually download the Kubernetes images

A community-maintained mirror on GitHub periodically syncs the latest images from gcr.io:

https://github.com/anjia0532/gcr.io_mirror/tree/master/google-containers

Name mapping to use when pulling the images:

k8s.gcr.io/{image}:{tag} <==> gcr.io/google-containers/{image}:{tag} <==> anjia0532/google-containers.{image}:{tag}

  • Images required for a Kubernetes v1.11.0 cluster

(As of 2018-07-29 the latest release was v1.11.1, but its images had not yet been pushed to the mirror and could not be pulled, so v1.11.0 is used here.)

k8s.gcr.io/kube-apiserver-amd64:v1.11.0
k8s.gcr.io/kube-controller-manager-amd64:v1.11.0
k8s.gcr.io/kube-scheduler-amd64:v1.11.0
k8s.gcr.io/kube-proxy-amd64:v1.11.0
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd-amd64:3.2.18
k8s.gcr.io/coredns:1.1.3
  • Pull them using the mirror names

sudo docker pull anjia0532/google-containers.kube-apiserver-amd64:v1.11.0
sudo docker pull anjia0532/google-containers.kube-controller-manager-amd64:v1.11.0
sudo docker pull anjia0532/google-containers.kube-scheduler-amd64:v1.11.0
sudo docker pull anjia0532/google-containers.kube-proxy-amd64:v1.11.0
sudo docker pull anjia0532/google-containers.pause:3.1
sudo docker pull anjia0532/google-containers.etcd-amd64:3.2.18
sudo docker pull anjia0532/google-containers.coredns:1.1.3
  • Re-tag the images to their official names

docker tag anjia0532/google-containers.kube-apiserver-amd64:v1.11.0 k8s.gcr.io/kube-apiserver-amd64:v1.11.0
docker tag anjia0532/google-containers.kube-controller-manager-amd64:v1.11.0 k8s.gcr.io/kube-controller-manager-amd64:v1.11.0
docker tag anjia0532/google-containers.kube-scheduler-amd64:v1.11.0 k8s.gcr.io/kube-scheduler-amd64:v1.11.0
docker tag anjia0532/google-containers.kube-proxy-amd64:v1.11.0 k8s.gcr.io/kube-proxy-amd64:v1.11.0
docker tag anjia0532/google-containers.pause:3.1 k8s.gcr.io/pause:3.1
docker tag anjia0532/google-containers.etcd-amd64:3.2.18 k8s.gcr.io/etcd-amd64:3.2.18
docker tag anjia0532/google-containers.coredns:1.1.3 k8s.gcr.io/coredns:1.1.3
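
The pull and tag commands above can also be scripted; a minimal sketch that relies on the same name mapping:

$ cat > pull-k8s-images.sh <<'EOF'
#!/bin/bash
# Pull each required image from the anjia0532 mirror and re-tag it as k8s.gcr.io/<image>:<tag>
images="kube-apiserver-amd64:v1.11.0 kube-controller-manager-amd64:v1.11.0 kube-scheduler-amd64:v1.11.0 kube-proxy-amd64:v1.11.0 pause:3.1 etcd-amd64:3.2.18 coredns:1.1.3"
for img in $images; do
  sudo docker pull "anjia0532/google-containers.${img}"
  sudo docker tag "anjia0532/google-containers.${img}" "k8s.gcr.io/${img}"
done
EOF
$ bash pull-k8s-images.sh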

3.3 Initialize the master with kubeadm

$ sudo kubeadm init --kubernetes-version=1.11.0 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=172.17.0.11

[init] Using Kubernetes version: v1.11.0
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
	[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[preflight] Starting the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [ 172.17.0.11]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [ubuntu-master] and IPs [172.17.0.11]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 28.003828 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node ubuntu-master as master by adding a label and a taint
[markmaster] Master ubuntu-master tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: rw4enn.mvk547juq7qi2b5f
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 172.17.0.11:6443 --token 54f68j.arg4usxkj5qyoaca

--kubernetes-version pins the version to 1.11.0; otherwise kubeadm defaults to the latest release, 1.11.1.

--pod-network-cidr sets the pod network to 10.244.0.0/16. Do not change this value: it has to match the network defined in the flannel YAML applied later; if you do change it, change both.

--apiserver-advertise-address is the address the API server advertises; use the internal IP.

3.4 Configure kubectl

kubectl is the command-line tool for managing a Kubernetes cluster, and it was installed on all nodes earlier. After the master has been initialized, a little configuration is needed before kubectl can be used. It is recommended to run kubectl as a regular Linux user (running it as root causes some problems), so here it is configured for the ubuntu user.

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

For convenience, enable command completion for kubectl:

echo "source <(kubectl completion bash)" >> ~/.bashrc

4. Install the flannel network plugin

4.1 Download the image

Download the image on k8s-master (the proxy is configured only on k8s-master):

$ sudo docker pull quay.io/coreos/flannel:v0.10.0-amd64

4.2 Save the images and copy them to each node

The images to save are k8s.gcr.io/pause, quay.io/coreos/flannel, and k8s.gcr.io/kube-proxy-amd64:

# k8s-master
$ sudo docker images
$ sudo docker save da86e6ba6ca1 f0fad859c909 1d3d7afd77d1 > node.tar
$ scp node.tar [email protected]:/tmp
$ scp node.tar [email protected]:/tmp
# k8s-node1
$ sudo docker load < /tmp/node.tar
$ sudo docker tag da86e6ba6ca1 k8s.gcr.io/pause:3.1
$ sudo docker tag f0fad859c909 quay.io/coreos/flannel:v0.10.0-amd64
$ sudo docker tag 1d3d7afd77d1  k8s.gcr.io/kube-proxy-amd64:v1.11.0
# k8s-node2
$ sudo docker load < /tmp/node.tar
$ sudo docker tag da86e6ba6ca1 k8s.gcr.io/pause:3.1
$ sudo docker tag f0fad859c909 quay.io/coreos/flannel:v0.10.0-amd64
$ sudo docker tag 1d3d7afd77d1  k8s.gcr.io/kube-proxy-amd64:v1.11.0
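
Saving by image ID, as above, drops the repository names and tags, which is why every node has to re-tag after loading. A small variation (same images, saved by name) keeps the tags, so the docker tag steps on the nodes can be skipped:

# k8s-master
$ sudo docker save k8s.gcr.io/pause:3.1 quay.io/coreos/flannel:v0.10.0-amd64 k8s.gcr.io/kube-proxy-amd64:v1.11.0 -o node.tar
# k8s-node1 / k8s-node2
$ sudo docker load < /tmp/node.tar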

4.3 Create the flannel pods from kube-flannel.yml

# k8s-master
$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
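
flannel runs as a DaemonSet, so applying the manifest once on the master is enough; a flannel pod is then scheduled onto every node automatically. A quick way to watch them come up:

$ kubectl get pods -n kube-system -o wide | grep flannel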

5. Join the nodes to the cluster

5.1 Look up the token on k8s-master

$ sudo kubeadm token list
TOKEN                     TTL         EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
54f68j.arg4usxkj5qyoaca   <invalid>   2018-07-28T18:08:18+08:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token
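
The <invalid> TTL above means the default bootstrap token has already expired (tokens are valid for 24 hours by default). If that happens, generate a fresh one on the master; to my knowledge kubeadm can also print the complete join command:

$ sudo kubeadm token create
$ sudo kubeadm token create --print-join-command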

5.2 Join the nodes to the cluster

# k8s-node1
$ kubeadm join --token 54f68j.arg4usxkj5qyoaca 172.17.0.11:6443 --discovery-token-unsafe-skip-ca-verification
# k8s-node2
$ kubeadm join --token 54f68j.arg4usxkj5qyoaca 172.17.0.11:6443 --discovery-token-unsafe-skip-ca-verification
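
--discovery-token-unsafe-skip-ca-verification tells kubeadm not to verify the master's CA certificate. A safer variant passes the CA certificate hash instead; a sketch, assuming the default CA location on the master:

# on k8s-master: compute the hash of the CA public key
$ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
# on each node: join using that hash
$ kubeadm join 172.17.0.11:6443 --token 54f68j.arg4usxkj5qyoaca --discovery-token-ca-cert-hash sha256:<hash from the command above>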

I did not capture the output of a successful join at the time; the screenshot in the original post was taken from the reference material listed at the end. You can confirm the join from the master with kubectl get nodes (section 5.3).

5.3 Check pod and cluster status

When all pods are in the Running state, the cluster is ready:

$ kubectl get pod --all-namespaces
NAMESPACE     NAME                                 READY     STATUS    RESTARTS   AGE
kube-system   coredns-78fcdf6894-jrh8p             1/1       Running   0          1d
kube-system   coredns-78fcdf6894-kh65f             1/1       Running   0          1d
kube-system   etcd-k8s-master                      1/1       Running   0          1d
kube-system   kube-apiserver-k8s-master            1/1       Running   0          1d
kube-system   kube-controller-manager-k8s-master   1/1       Running   0          1d
kube-system   kube-flannel-ds-amd64-6kwlk          1/1       Running   0          1d
kube-system   kube-flannel-ds-amd64-8thxm          1/1       Running   0          1d
kube-system   kube-flannel-ds-amd64-f2rjt          1/1       Running   0          1d
kube-system   kube-proxy-h5k4v                     1/1       Running   0          1d
kube-system   kube-proxy-hsh5b                     1/1       Running   0          1d
kube-system   kube-proxy-lp55r                     1/1       Running   0          1d
kube-system   kube-scheduler-k8s-master            1/1       Running   0          1d

A pod stuck in Pending, ContainerCreating, or ImagePullBackOff is not ready; check its events to find the cause:

$ kubectl describe pod kube-flannel-ds-amd64-f2rjt -n kube-system

Check the status of all nodes. (The VERSION column shows v1.11.1 rather than v1.11.0 because it reports the kubelet version, and the installed kubelet package is 1.11.1-00; the control-plane images are still v1.11.0.)

$ kubectl get nodes
NAME         STATUS    ROLES     AGE       VERSION
k8s-master   Ready     master    1d        v1.11.1
k8s-node1    Ready     <none>    1d        v1.11.1
k8s-node2    Ready     <none>    1d        v1.11.1
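
As a final smoke test you can schedule something onto the workers; with kubectl v1.11, kubectl run still creates a Deployment (the command is deprecated in later releases):

$ kubectl run nginx --image=nginx --replicas=2
$ kubectl get pods -o wide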

References

https://www.kubernetes.org.cn/3895.html

https://mp.weixin.qq.com/s?__biz=MzIwMTM5MjUwMg==&mid=2653588195&idx=1&sn=ed00d0e19feb417b41f4d0d4b7af86de&chksm=8d3082faba470bec9b2be2a2c98b44a52f9b8f90be48a54f6212d2c43ef4a54593497cd12024&scene=21#wechat_redirect

https://mp.weixin.qq.com/s?__biz=MzIwMTM5MjUwMg==&mid=2653588210&idx=1&sn=b198ad2c5be463fb3f252fd375e18fff&chksm=8d3082ebba470bfd5ee1d343dddea7397f5c90b684d2510e5985ffb0efa1db420d605de49258&scene=21#wechat_redirect

http://www.3gcomet.com/?p=1980
