Deploying Kubernetes 1.15.2 with kubeadm --- a guide around the pitfalls

K8S is extremely popular and there are plenty of tutorials online, but the frustrating part is that the official releases move too fast: many tutorials lag behind the current version and are full of pits that beginners are not always able to climb out of.

For every student who wants to make progress, deploying K8S is a mountain to climb; working through a deployment from scratch gives you a real feel for the various components and what they do. I hope this note-style tutorial can lay a thick foundation stone on that road for myself and my classmates. Whatever your reason for reading this, it shows you want to explore and improve, and when someone seeks knowledge there is no reason not to share my own modest experience.

Note: thanks to the cnblogs (博客园) author whose article much of this content is adapted from; I am just standing on the shoulders of giants.

docker: the container runtime that Kubernetes depends on
kubelet: the core Kubernetes agent; one runs on every node and is responsible for managing pods and their lifecycle on that node
kubectl: the Kubernetes command-line client; only needed on the master
kubeadm: the bootstrap tool used to initialize a k8s cluster

I. Environment preparation on all nodes

1. Software versions

software      version
kubernetes    v1.15.2
CentOS        CentOS 7.6.1809 minimal
Docker        v18.09
flannel       0.10.0

2. Node planning

IP             Role         Hostname
192.168.3.10   k8s master   master
192.168.3.11   k8s node01   slaver1
192.168.3.12   k8s node02   slaver2

I will not draw a network diagram here. I assume your virtual machines are all ready, the firewall and selinux have been turned off, all three nodes are in each other's hosts file, the hostnames have been set, and the network between them works; a minimal sketch of these prerequisites follows.
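
In case it is useful, here is a minimal sketch of those prerequisite steps, using the hostnames and IPs from the planning table (my addition; run the matching hostnamectl line on each node):

# turn off the firewall and selinux (all nodes)
systemctl stop firewalld && systemctl disable firewalld
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config

# set the hostname (the matching one on each node)
hostnamectl set-hostname master      # slaver1 / slaver2 on the workers

# add all three nodes to /etc/hosts (all nodes)
cat >> /etc/hosts <<EOF
192.168.3.10 master
192.168.3.11 slaver1
192.168.3.12 slaver2
EOF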

As for proxies or mirror acceleration for docker image pulls: to keep this tutorial simple I assume you are in a network environment where any image can be pulled freely (if not, please refer to other tutorials or pull and retag the images manually).

My test machines are located in Hong Kong, so there is no need to configure a proxy or a mirror accelerator.

Make iptables process bridged traffic:

Create the file /etc/sysctl.d/k8s.conf with the following content:

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1

Apply the changes:

modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf

Disable system swap

Starting with Kubernetes 1.8, swap must be disabled; with the default configuration the kubelet will not start if swap is on. Option one: relax this requirement with the kubelet startup flag --fail-swap-on=false (a sketch follows below). Option two: disable swap on the system.
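
If you go with option one, the flag can be passed through the kubelet extra-args file once kubelet is installed later in this guide (a sketch under that assumption; note that kubeadm init would then also need --ignore-preflight-errors=Swap):

echo 'KUBELET_EXTRA_ARGS="--fail-swap-on=false"' > /etc/sysconfig/kubelet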

swapoff -a

Modify the /etc/fstab file to comment out the swap auto-mount entry, then use free -m to confirm swap is off.

# comment out the swap partition
[root@localhost /]# sed -i 's/.*swap.*/#&/' /etc/fstab

[root@localhost /]# free -m
              total        used        free      shared  buff/cache   available
Mem:            962         154         446           6         361         612
Swap:             0           0           0

To keep the kernel away from swap permanently, set vm.swappiness to 0:

echo "vm.swappiness = 0">> /etc/sysctl.conf

Execute the following script on the Kubernetes nodes (node01 and node02):

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
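
These modules are for kube-proxy's IPVS mode. If you plan to actually switch kube-proxy to IPVS, the userspace tools are worth installing too (my addition, not required for the rest of this tutorial):

yum install -y ipset ipvsadm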

Install docker

The docker version is critical to K8S success, and many friends fall into this pit (some live in it without realizing it). The highest docker version supported by K8S 1.15.2 is 18.09, so we install the highest minor release of that version, 18.09.8.

Following docker's official installation tutorial:

First clean out any docker packages that come with CentOS:

yum remove docker \
                  docker-client \
                  docker-client-latest \
                  docker-common \
                  docker-latest \
                  docker-latest-logrotate \
                  docker-logrotate \
                  docker-engine
yum install -y yum-utils \
  device-mapper-persistent-data \
  lvm2

Add docker's official yum repository:

yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo

The install command format given by the official docs:

yum install docker-ce-<VERSION_STRING> docker-ce-cli-<VERSION_STRING> containerd.io

List the versions available in the repository, sorted from highest to lowest:

yum list docker-ce.x86_64  --showduplicates |sort -r

Here we install 18.09.8:

yum install docker-ce-18.09.8 docker-ce-cli-18.09.8 containerd.io -y

Start docker and enable it on boot:

systemctl start docker ; systemctl enable docker

Change docker's cgroup driver to systemd

On Linux distributions that use systemd as the init system, using systemd as docker's cgroup driver keeps the nodes more stable when resources are tight, so change docker's cgroup driver to systemd on every node.

Create or modify /etc/docker/daemon.json:

{
    "exec-opts": ["native.cgroupdriver=systemd"]
}

Restart docker:

systemctl restart docker

docker info | grep Cgroup
Cgroup Driver: systemd

Install kubelet, kubeadm and kubectl

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
        https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

Install:

yum -y install kubelet kubeadm kubectl

The above command installs the latest versions by default, which at the time of writing are 1.15.2.
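
The kubeadm docs also recommend enabling the kubelet service at this point; it will keep restarting until kubeadm init hands it a configuration, which is expected (this line is my addition):

systemctl enable kubelet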

Pull the docker images required by kubeadm init

Before initializing the cluster, you can use kubeadm config images pull on each node to pull the docker images k8s requires in advance.

[root@localhost /]# kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.15.2
k8s.gcr.io/kube-controller-manager:v1.15.2
k8s.gcr.io/kube-scheduler:v1.15.2
k8s.gcr.io/kube-proxy:v1.15.2
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1

You could pull them one by one by hand, but here is a tip: prefix each line of the list with docker pull, which saves writing a script or typing the names manually:

kubeadm config images list |awk '{print "docker pull " $0}'
docker pull k8s.gcr.io/kube-apiserver:v1.15.2
docker pull k8s.gcr.io/kube-controller-manager:v1.15.2
docker pull k8s.gcr.io/kube-scheduler:v1.15.2
docker pull k8s.gcr.io/kube-proxy:v1.15.2
docker pull k8s.gcr.io/pause:3.1
docker pull k8s.gcr.io/etcd:3.3.10
docker pull k8s.gcr.io/coredns:1.3.1

Copy the output and run it directly.
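
If you would rather not copy and paste, you can also pipe the generated commands straight into a shell, or simply let kubeadm pull everything itself (both alternatives are my suggestion):

kubeadm config images list | awk '{print "docker pull " $0}' | sh
kubeadm config images pull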

Initialize the cluster with kubeadm

To initialize the cluster with kubeadm, execute the following command on the master:

kubeadm init --kubernetes-version=v1.15.2 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12

After initialization succeeds, pay attention to this part of the output:

[Screenshot: kubeadm init success output, including the kubeconfig instructions and the kubeadm join command for worker nodes]

Follow the prompts to set up the kubeconfig for your user:

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
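
The init output also ends with a kubeadm join command for the worker nodes. It looks roughly like the following (the token and hash are placeholders; take the real values from your own init output). Save it for later, since we add the nodes only after the master is fully configured:

kubeadm join 192.168.3.10:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>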

Check the cluster status and make sure the components are healthy:

[root@master /]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-0               Healthy   {"health":"true"}

[Screenshot: kubectl get pods -n kube-system, with the coredns pods still Pending]

Because no pod network has been deployed yet, coredns stays pending and waits to start.

Configure the pod network

A pod network add-on must be installed so that pods can communicate with each other. The network has to be deployed before any applications are deployed and before kube-dns starts; kubeadm only supports CNI-based networks.

There are many pod network plugins to choose from, such as Calico, Canal, Flannel, Romana and Weave Net. Because we initialized with --pod-network-cidr=10.244.0.0/16, we use the flannel plugin.

Install it:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml

Once this is applied, you can clearly watch coredns gradually getting scheduled as flannel is installed and initialized:
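
To watch this live, a command like the following works (my suggestion; press Ctrl-C to stop watching):

kubectl get pods -n kube-system -o wide -w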

[Screenshot: kube-system pods, with coredns moving to Running after flannel comes up]

Don't rush to add the worker nodes; finish configuring the master first.

With the base deployment done, let's deploy the dashboard; having only a command line and no web UI feels like something is missing.

Install the Kubernetes Dashboard

The docker image for the k8s dashboard is:

k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1

Prepare the dashboard image first. There is no way around it: if you are in mainland China you have to take a detour, find the image elsewhere and retag it.

docker pull k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
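
If k8s.gcr.io is not reachable from your network, the usual workaround is to pull from a mirror registry and retag; a sketch with a placeholder mirror path (substitute a registry you trust):

docker pull <mirror-registry>/kubernetes-dashboard-amd64:v1.10.1
docker tag <mirror-registry>/kubernetes-dashboard-amd64:v1.10.1 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1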

Install it. This is the current URL of the manifest; many of the pits you see online exist simply because this file path has changed:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml

Change the Service to NodePort:

Because the Service is of type ClusterIP, for convenience we can change it to NodePort with the following command:

kubectl patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}' -n kube-system

Check the port:

[root@master ~]# kubectl get svc -n kube-system
NAME                   TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
kube-dns               ClusterIP  10.96.0.10       <none>        53/UDP,53/TCP   6h
kubernetes-dashboard   NodePort   10.107.238.193   <none>        443:32238/TCP   167m

Log in to the web UI at: https://192.168.3.10:32238

[Screenshot: Kubernetes Dashboard login page]

Configure login permissions

The Dashboard supports two authentication methods, Kubeconfig and Token. To simplify configuration, we grant the Dashboard's default user admin privileges through the file dashboard-admin.yaml.

[root@master ~]# cat dashboard-admin.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system

The key point here is the system ServiceAccount named kubernetes-dashboard that this binding refers to; remember it, it will be of great use later.

Apply it and fetch the token:

kubectl apply -f dashboard-admin.yaml
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep kubernetes-dashboard| awk '{print $1}')

The output looks similar to the block below; paste the token into the login page and confirm.

Name:         admin-user-token-xln5d
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: 54801c01-01e2-11e9-857e-00505689640f
Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJ
lLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXhsbjVkIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3V
udC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI1NDgwMWMwMS0wMWUyLTE
xZTktODU3ZS0wMDUwNTY4OTY0MGYiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.MbGeROnjA9f1AhBO8v2GuHC1ihVk1UcxpM8lYk
IC_P9Mx9jVZcM56zr4jcxjSnIuZPZxIznLJIIo7TRY94Ya2VujzF2lpDxlNYF2hWY3Ss9d5LqRtP3JsJNZVLmxvYIGnmnGyVCEikCM6W44WPu-S

At this point you are basically logged in, but it is not over yet. To get an intuitive view of the resource usage of every pod and container, you need to install the basic visual monitoring add-ons natively supported by K8S, namely the heapster + influxdb + grafana combination.

Heapster itself is a Kubernetes application and is simple to deploy, but unlike the dashboard it does not come up with a single apply. Because it is not a default K8S system application, it is not allowed to be scheduled onto the master node by default. So, to deploy heapster onto the master node, we must first lift that restriction:

kubectl taint node master node-role.kubernetes.io/master-

To be safe, manually pull the required images first, otherwise the services will sit waiting to start:

docker pull k8s.gcr.io/heapster-amd64:v1.5.4
docker pull k8s.gcr.io/heapster-grafana-amd64:v5.0.4
docker pull k8s.gcr.io/heapster-influxdb-amd64:v1.5.2

With that, the three musketeers are in place.

Clone heapster from the official repository and install it:

git clone https://github.com/kubernetes/heapster.git
kubectl apply -f heapster/deploy/kube-config/influxdb/
kubectl apply -f heapster/deploy/kube-config/rbac/heapster-rbac.yaml

Once it is deployed, restore the master-only state and forbid scheduling other pods onto the master node:

kubectl taint node master node-role.kubernetes.io/master=:NoSchedule
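
You can confirm the taint is back in place (my addition):

kubectl describe node master | grep -i taint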

Run these commands to check the deployment:

kubectl get pod -n kube-system |grep -e heapster -e monitor
kubectl get deploy -n kube-system |grep -e heapster -e monitor
kubectl get svc -n kube-system |grep -e heapster -e monitor

[Screenshot: heapster and monitoring pods, deployments and services running in kube-system]

Change the Service to NodePort to expose grafana for external access:

kubectl patch svc monitoring-grafana -p '{"spec":{"type":"NodePort"}}' -n kube-system
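
Find the NodePort that was assigned in the same way as before (my addition):

kubectl get svc monitoring-grafana -n kube-system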

[Screenshot: monitoring-grafana Service now of type NodePort]

Good, that counts as deployed. You might think the basic master setup ended that easily, but more pits are waving at you.

After waiting and refreshing the web page for ages, the monitoring charts still do not show up. Opening the heapster container's logs reveals that its API requests are being rejected, so we modify heapster's deployment file:

# in heapster.yaml
- --source=kubernetes:https://kubernetes.default

# change it to
- --source=kubernetes:https://kubernetes.default?useServiceAccount=true&kubeletHttps=true&kubeletPort=10250&insecure=true

There is another place to modify. Remember the ServiceAccount I told you to keep in mind back when we deployed the admin binding? This is where it has far-reaching effects: the heapster deployment needs to use that account.

[Screenshot: heapster deployment spec with the service account set]
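
For reference, a minimal sketch of what the relevant part of the heapster deployment spec would look like, assuming (my reading, not a verbatim copy of the screenshot) that the account being reused is the kubernetes-dashboard ServiceAccount from the earlier binding:

    spec:
      serviceAccountName: kubernetes-dashboard   # assumption: the account bound to cluster-admin earlier
      containers:
      - name: heapster
        command:
        - /heapster
        - --source=kubernetes:https://kubernetes.default?useServiceAccount=true&kubeletHttps=true&kubeletPort=10250&insecure=true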

You can make this edit in the dashboard's web editor, but it is a bit unfriendly: perfectly good yaml gets mangled by it, so you may as well go straight to the command line:

kubectl edit deployment heapster  -n kube-system

After the update, K8S destroys the original container and automatically creates a new one, and the monitoring charts finally appear:

[Screenshots: heapster pod re-created; the dashboard now shows CPU and memory usage graphs]

Finally, the dashboard's token login timeout: the default setting is itself a pit. The official default of 900 s, i.e. 15 minutes, is simply painful for anyone doing experiments.

Parameter   Default      Description
token-ttl   15 minutes   Expiration time (in seconds) of JWE tokens generated by the dashboard. Default: 15 min. 0 - never expires.

Modify the kubernetes-dashboard deployment directly; the Dashboard token expiration time can be set with the token-ttl argument:

ports:
- containerPort: 8443
  protocol: TCP
args:
  - --auto-generate-certificates
  - --token-ttl=43200
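
The edit itself can be made from the command line (my suggestion of the command; it opens the deployment spec containing the args shown above):

kubectl edit deployment kubernetes-dashboard -n kube-system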

If your docker and docker-cli versions do not quite match, you may fall into yet another pit: being unable to open a shell into containers.

kubectl exec -it <pod-name> -n kube-system sh (also tried /bin/sh, /bin/bash, bash): every variant complains that the path does not exist or cannot be executed. No solution for now.

Later, you will also find that container logs use the UTC time zone, which is another pit; climb out of that one yourself.

If you are interested in grafana, go to the official site, download a K8S dashboard template, import it directly, and play around at your leisure!



Source: blog.51cto.com/kingda/2429579