Install Kubernetes using kubeadm

1. Environment preparation

Machine       Hardware requirements   IP
k8s-master    4 CPU / 8 GB RAM        192.168.1.220
k8s-node      4 CPU / 8 GB RAM        192.168.1.221

2. Basic environment setup

1. Docker installation

Install Docker on all nodes first (Docker environment setup is covered in a separate article).

2. System settings (disable firewalld, SELinux, and swap; run on all nodes by default)

systemctl stop firewalld && systemctl disable firewalld
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
swapoff -a
# Comment out the line whose filesystem type is swap
vim /etc/fstab
#/dev/mapper/centos-swap swap                    swap    defaults        0 0
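
If you prefer a non-interactive edit, a sed one-liner along these lines (a sketch; double-check your /etc/fstab layout first) comments out any active swap entry:

# Comment out every uncommented fstab line whose type is swap
sed -ri '/^[^#].*\sswap\s/s/^/#/' /etc/fstab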

Host name setting (k8s-master):

hostname k8s-master && hostnamectl set-hostname k8s-master

Host name setting (k8s-node):

hostname k8s-node && hostnamectl set-hostname k8s-node

Start Docker and enable it on boot (all nodes):

systemctl start docker && systemctl enable docker
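
Optionally, set Docker's cgroup driver to systemd, which the Kubernetes documentation recommends for kubeadm clusters. This step is an addition to the original walkthrough, but it avoids a common kubeadm preflight warning:

# Switch Docker to the systemd cgroup driver and restart it
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker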

cat >> /etc/hosts <<EOF
192.168.1.220  k8s-master
192.168.1.221  k8s-node
EOF

cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
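
If sysctl reports that the net.bridge.* keys do not exist, the br_netfilter kernel module is probably not loaded yet; loading it (and persisting the load across reboots) should fix this:

# Load br_netfilter now and on every boot, then re-apply the sysctls
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
sysctl --system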

3. Install k8s

1. Install kubectl, kubelet, and kubeadm (run on all nodes)

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

yum makecache fast
yum install -y kubectl kubelet kubeadm
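
This installs the latest packages from the repo. To pin the packages to a specific release instead (v1.19.2 here, matching the images listed below; adjust as needed) and make sure kubelet starts on boot:

# Pin the package versions and enable kubelet (kubeadm will start it)
yum install -y kubelet-1.19.2 kubeadm-1.19.2 kubectl-1.19.2
systemctl enable kubelet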

2. Initialize the master node (run on k8s-master)

# kubeadm pulls its images from k8s.gcr.io by default, which is unreachable from mainland China,
# so point it at the Aliyun registry via --image-repository
kubeadm init --apiserver-advertise-address=192.168.1.220 --image-repository registry.aliyuncs.com/google_containers --service-cidr=10.10.0.0/16 --pod-network-cidr=10.122.0.0/16

Note: you can pin the Kubernetes version with --kubernetes-version (e.g. --kubernetes-version=1.18.0). During init the following images are downloaded, which takes about 5-10 minutes:

k8s.gcr.io/kube-apiserver:v1.19.2
k8s.gcr.io/kube-controller-manager:v1.19.2
k8s.gcr.io/kube-scheduler:v1.19.2
k8s.gcr.io/kube-proxy:v1.19.2
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0

Alternatively, pre-pull the images from the Aliyun mirror and re-tag them with the following script:

for i in $(kubeadm config images list); do
  # Strip the k8s.gcr.io/ prefix, pull from the Aliyun mirror, then re-tag
  imageName=${i#k8s.gcr.io/}
  docker pull registry.aliyuncs.com/google_containers/$imageName
  docker tag registry.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
  docker rmi registry.aliyuncs.com/google_containers/$imageName
done

On success, output like the following is printed:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.220:6443 --token l12ocz.b4bm0a7kunh3yrbg \
    --discovery-token-ca-cert-hash sha256:8db373ea1f61ddaab29aa61b566c185aeccd9e35aab744ed6814cd33f9115a5e
# Copy admin.conf so kubectl can reach the cluster
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# The master shows NotReady because the CoreDNS pods cannot start until a network add-on is installed
kubectl get pod --all-namespaces
kubectl get nodes
# Print the join command (useful if the original token has expired)
kubeadm token create --print-join-command

3. Join the worker node (run on k8s-node)

kubeadm join 192.168.1.220:6443 --token l12ocz.b4bm0a7kunh3yrbg \
    --discovery-token-ca-cert-hash sha256:8db373ea1f61ddaab29aa61b566c185aeccd9e35aab744ed6814cd33f9115a5e

4. Bash command completion (all nodes)

yum -y install bash-completion
source <(kubectl completion bash)
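
source <(...) only lasts for the current shell session; to make completion persistent (assuming bash is the login shell), append it to ~/.bashrc:

echo 'source <(kubectl completion bash)' >> ~/.bashrc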

5. Install network components (run on k8s-master)

Common network add-ons: Flannel, Canal, Calico, Weave. Calico is used here.

# Install Calico
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
# Uninstall Calico if needed:
# kubectl delete -f https://docs.projectcalico.org/manifests/calico.yaml
kubectl get pod --all-namespaces
kubectl get nodes

# The result looks like this:
[root@k8s-master ~]# kubectl get pod --all-namespaces
NAMESPACE     NAME                                      READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-8f59968d4-cl7n5   1/1     Running   4          9m40s
kube-system   calico-node-8xkjs                         0/1     Running   0          9m40s
kube-system   calico-node-h9vft                         0/1     Running   0          9m40s
kube-system   coredns-6d56c8448f-w9tp6                  1/1     Running   1          63m
kube-system   coredns-6d56c8448f-z6jnm                  1/1     Running   1          63m
kube-system   etcd-k8s-master                           1/1     Running   0          63m
kube-system   kube-apiserver-k8s-master                 1/1     Running   0          63m
kube-system   kube-controller-manager-k8s-master        1/1     Running   0          63m
kube-system   kube-proxy-7t644                          1/1     Running   0          63m
kube-system   kube-proxy-b845t                          1/1     Running   0          55m
kube-system   kube-scheduler-k8s-master                 1/1     Running   0          63m
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   63m   v1.19.2
k8s-node     Ready    <none>   55m   v1.19.2

Note: if the machine previously ran k8s or Rancher, clean up the leftover network components first.

For example, to clean up flannel network plug-in residue:

ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/
rm -f /etc/cni/net.d/*

Problem encountered: with Calico's default configuration, the real network interface sometimes cannot be found, and the calico-node containers report the following error:

Readiness probe failed: calico/node is not ready: BIRD is not ready: BGP not established with x.x.x.x

Solution: run ifconfig to find the real NIC name on CentOS (it usually starts with ens or enp), then change Calico's IP auto-detection method accordingly.

ifconfig
vim calico.yaml
# Search for "k8s,bgp" and append the following:
        - name: IP_AUTODETECTION_METHOD
          value: "interface=enp.*"   # use ens.* or enp.* depending on your actual NIC name

# The modified section then looks like this:
        - name: CLUSTER_TYPE
          value: "k8s,bgp"
        - name: IP_AUTODETECTION_METHOD
          value: "interface=enp.*"
        - name: IP
          value: autodetect
        - name: CALICO_IPV4POOL_IPIP
          value: Always
kubectl apply -f calico.yaml
kubectl get pod --all-namespaces

4. Install kubernetes-dashboard

1. Download the yaml file and modify it

wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc7/aio/deploy/recommended.yaml
vim recommended.yaml

The modified part is shown below (the Service type is changed to NodePort so the dashboard is reachable from outside the cluster):

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30443
  selector:
    k8s-app: kubernetes-dashboard

---

2. Create and start the service

kubectl create -f recommended.yaml
kubectl get svc -n kubernetes-dashboard

Web access:

https://192.168.1.220:30443/
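
Note: the dashboard serves a self-signed certificate by default, so the browser will show a security warning that you have to accept before the login page appears.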

3. Get the default token

kubectl describe secrets -n kubernetes-dashboard kubernetes-dashboard  | grep token | awk 'NR==3{print $2}'

eyJhbGciOiJSUzI1NiIsImtpZCI6ImZzLUJ3TS1LdGx1S0FCR3VWd1Z2SmlXSUlyalNzWlBITHo2WVlSQTl4Y0EifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtdG9rZW4tcG1sMm0iLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZWZkMmZmNmItODg5NC00NGZmLThiNDctODg1Yjc0MDMzODk5Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmRhc2hib2FyZCJ9.BysUZfx9yHr4JT0oAIdarjndZLf2f9vjBlz9nyNtWoTUUYk_D-MbYfjFonWy5s1ZyIAfhFUB3Q89bXVbBA7L57eSO-K-zFwxiZPKOpJrmIC73FQYNWgkCWSAEC-0wn4-Z602wGll1EkL0AHLu8ntg8QoKH_ERS3rsouOvfaEXCc59QwwTet8gc2Kucx2YDdeP4wUOY5o67IoiNlHPglzxE-N98ifTircnbhJuvrIzX2ZuCKTkNtBIrnUQBriwBswcJjQPzwFBnHikeC7UcwB8JqqgbZ9koGOaNe8ywPTM3MFehr5RbLtKanGuaRFcG1KBU6FjalS4iYqNLlFawXh-A

Enter the token on the web login page. After logging in, the "little bell" in the upper right corner will report insufficient permissions.

Reason: the official default grants minimal permissions, so we create a ServiceAccount bound to the cluster-admin role.

# Check the dashboard logs
kubectl get pods --all-namespaces
kubectl logs -f -n kubernetes-dashboard kubernetes-dashboard-5bc6d86cfd-7n99b

4. Create a privileged ServiceAccount token

  • Method 1: Write a yaml file and apply it
vim dashboard.yaml
# Content:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard
  namespace: kubernetes-dashboard
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: dashboard
subjects:
  - kind: ServiceAccount
    name: dashboard
    namespace: kubernetes-dashboard
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
kubectl create -f dashboard.yaml
kubectl describe secrets -n kubernetes-dashboard dashboard  | grep token | awk 'NR==3{print $2}'

The command prints a long JWT token like the one shown above.

  • Method 2: Use kubectl create commands directly
kubectl create serviceaccount k8s-sa -n kubernetes-dashboard
kubectl create clusterrolebinding k8s-sa-cluster-admin --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:k8s-sa
kubectl describe secrets -n kubernetes-dashboard k8s-sa | grep token | awk 'NR==3{print $2}'

Just log in with the new Token!

5. Create a login config file based on the token

secret=$(kubectl describe secrets -n kubernetes-dashboard k8s-sa  | grep token | awk 'NR==3{print $2}')
echo $secret
kubectl config set-cluster kubernetes --server=https://192.168.1.220:6443 --kubeconfig=./k8s-sa.conf
kubectl config set-credentials k8s-sa --token="$secret" --kubeconfig=./k8s-sa.conf
kubectl config set-context k8s-sa@kubernetes --cluster=kubernetes --user=k8s-sa --kubeconfig=./k8s-sa.conf
kubectl config use-context k8s-sa@kubernetes  --kubeconfig=./k8s-sa.conf
kubectl config view --kubeconfig=./k8s-sa.conf
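
To sanity-check the file (the cluster entry above carries no CA certificate, so TLS verification has to be skipped for this quick test):

kubectl --kubeconfig=./k8s-sa.conf --insecure-skip-tls-verify get nodes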

Just log in with the config file!

