[Cloud Native] K8S Binary Build 1

1. Environment deployment

Cluster        Node        IP               Components
k8s cluster    master01    192.168.243.100  kube-apiserver, kube-controller-manager, kube-scheduler, etcd
k8s cluster    master02    192.168.243.104
k8s cluster    node01      192.168.243.102  kubelet, kube-proxy, docker
k8s cluster    node02      192.168.243.103  kubelet, kube-proxy, docker
etcd cluster   node 1      192.168.243.100  etcd
etcd cluster   node 2      192.168.243.102  etcd
etcd cluster   node 3      192.168.243.103  etcd

Note: the command examples below use 192.168.80.10 (master01), 192.168.80.11 (node01), and 192.168.80.12 (node02).

1.1 Operating system initialization

# Disable the firewall
systemctl stop firewalld
systemctl disable firewalld
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

# Disable SELinux
setenforce 0
sed -i 's/enforcing/disabled/' /etc/selinux/config

# Disable swap
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab 

# Set the hostnames according to the plan
hostnamectl set-hostname master01
hostnamectl set-hostname node01
hostnamectl set-hostname node02

# Add hosts entries on the master
cat >> /etc/hosts << EOF
192.168.80.10 master01
192.168.80.11 node01
192.168.80.12 node02
EOF

# Adjust kernel parameters
cat > /etc/sysctl.d/k8s.conf << EOF
# Enable bridge mode so that bridged traffic is passed to the iptables chains
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
# Disable the IPv6 protocol
net.ipv6.conf.all.disable_ipv6=1
net.ipv4.ip_forward=1
EOF

sysctl --system

# Time synchronization
yum install ntpdate -y
ntpdate time.windows.com


2. Deploy etcd cluster

  • etcd is an open-source project started by the CoreOS team in June 2013. Its goal is to build a highly available distributed key-value database. Internally, etcd uses the Raft protocol as its consensus algorithm, and it is written in Go.

As a service discovery system, etcd has the following features:
Simple: easy to install and configure, provides an HTTP API for interaction, and is easy to use
Secure: supports SSL certificate verification
Fast: a single instance supports 2k+ read operations per second
Reliable: uses the Raft algorithm to achieve availability and consistency of distributed system data

  • By default, etcd uses port 2379 for its client HTTP API and port 2380 for peer communication (both ports have been officially reserved for etcd by IANA, the Internet Assigned Numbers Authority). In other words, etcd serves clients on port 2379 by default and uses port 2380 for internal communication between the servers in the cluster.
  • In a production environment, etcd is generally deployed as a cluster. Because of etcd's leader election mechanism, an odd number of nodes (at least 3) is required.

2.1 Prepare the certificate-issuing environment

  • CFSSL is a PKI/TLS toolkit open-sourced by CloudFlare. It includes a command-line tool and an HTTP API service for signing, verifying, and bundling TLS certificates, and it is written in Go.
  • CFSSL generates certificates from configuration files, so before self-signing you need to produce the JSON-format configuration files it understands. CFSSL provides convenient commands to generate these configuration files.
  • Here CFSSL is used to provide etcd with TLS certificates. It can sign three types of certificates:
    1. Client certificate: carried by a client when connecting to a server, used by the server to verify the client's identity, e.g. kube-apiserver accessing etcd.
    2. Server certificate: carried by a server when serving clients, used by the client to verify the server's identity, e.g. etcd providing its service externally.
    3. Peer certificate: used when peers connect to each other, e.g. verification and communication between etcd nodes.
    Here the same set of certificates is used for all three purposes.

Operate on the master01 node

Prepare the cfssl certificate generation tool

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -O /usr/local/bin/cfssl
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -O /usr/local/bin/cfssljson
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -O /usr/local/bin/cfssl-certinfo

chmod +x /usr/local/bin/cfssl*

cfssl: the command-line tool for issuing certificates
cfssljson: converts the JSON output produced by cfssl into separate certificate files
cfssl-certinfo: displays certificate information
cfssl-certinfo -cert <certificate name>		# view certificate information
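
If you want to write the JSON configuration files from scratch rather than use the prepared scripts, cfssl can also emit editable templates, for example:

cfssl print-defaults config > ca-config.json    # template for the CA signing policy
cfssl print-defaults csr > ca-csr.json          # template for a certificate signing request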

Generate Etcd certificate

mkdir /opt/k8s
cd /opt/k8s/

# Upload etcd-cert.sh and etcd.sh to the /opt/k8s/ directory
chmod +x etcd-cert.sh etcd.sh

# Create the directory used to generate the CA certificate, etcd server certificate, and private keys
mkdir /opt/k8s/etcd-cert
mv etcd-cert.sh etcd-cert/
cd /opt/k8s/etcd-cert/
./etcd-cert.sh			# Generate the CA certificate, etcd server certificate, and private keys

ls
ca-config.json  ca-csr.json  ca.pem        server.csr       server-key.pem
ca.csr          ca-key.pem   etcd-cert.sh  server-csr.json  server.pem
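
The contents of etcd-cert.sh are not reproduced here; below is a minimal sketch of the steps such a script typically performs. The profile name "www" and the subject fields are assumptions, and the hosts list must contain the IPs of all etcd nodes:

# ca-config.json: signing policy of the self-built CA (expiry and profile name are assumptions)
cat > ca-config.json <<EOF
{
  "signing": {
    "default": { "expiry": "87600h" },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": ["signing", "key encipherment", "server auth", "client auth"]
      }
    }
  }
}
EOF

# ca-csr.json: signing request for the CA itself
cat > ca-csr.json <<EOF
{
  "CN": "etcd",
  "key": { "algo": "rsa", "size": 2048 },
  "names": [{ "C": "CN", "L": "Beijing", "ST": "Beijing" }]
}
EOF

# Generate the self-signed CA certificate and key (ca.pem, ca-key.pem)
cfssl gencert -initca ca-csr.json | cfssljson -bare ca

# server-csr.json: etcd server certificate request; hosts must list every etcd node IP
cat > server-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": ["192.168.80.10", "192.168.80.11", "192.168.80.12"],
  "key": { "algo": "rsa", "size": 2048 },
  "names": [{ "C": "CN", "L": "Beijing", "ST": "Beijing" }]
}
EOF

# Sign the etcd server certificate with the CA (server.pem, server-key.pem)
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server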

# Upload etcd-v3.4.9-linux-amd64.tar.gz to the /opt/k8s directory and start the etcd service
cd /opt/k8s/
tar zxvf etcd-v3.4.9-linux-amd64.tar.gz
ls etcd-v3.4.9-linux-amd64
Documentation  etcd  etcdctl  README-etcdctl.md  README.md  READMEv2-etcdctl.md
------------------------------------------------------------------------------------------
etcd is the startup command of the etcd service; it can be followed by various startup parameters
etcdctl provides the command-line client for operating the etcd service
------------------------------------------------------------------------------------------

# Create the directories used to store the etcd configuration files, command files, and certificates
mkdir -p /opt/etcd/{cfg,bin,ssl}

cd /opt/k8s/etcd-v3.4.9-linux-amd64/
mv etcd etcdctl /opt/etcd/bin/
cp /opt/k8s/etcd-cert/*.pem /opt/etcd/ssl/

cd /opt/k8s/
./etcd.sh etcd01 192.168.80.10 etcd02=https://192.168.80.11:2380,etcd03=https://192.168.80.12:2380
# The script will hang here waiting for the other nodes to join. All three etcd services need to be started; if only one of them is started, it will stay blocked until all etcd nodes in the cluster are up. This can be ignored.

# You can open another terminal to check whether the etcd process is running
ps -ef | grep etcd

# Copy the etcd certificates, command files, and service unit file to the other two etcd cluster nodes
scp -r /opt/etcd/ [email protected]:/opt/
scp -r /opt/etcd/ [email protected]:/opt/
scp /usr/lib/systemd/system/etcd.service [email protected]:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/etcd.service [email protected]:/usr/lib/systemd/system/
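
The etcd.sh script generates both /opt/etcd/cfg/etcd (shown in the next step) and the systemd unit /usr/lib/systemd/system/etcd.service. A minimal sketch of what such a generated unit usually looks like, assuming the directory layout above (the actual script may differ):

[Unit]
Description=Etcd Server
After=network.target

[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd
ExecStart=/opt/etcd/bin/etcd \
  --cert-file=/opt/etcd/ssl/server.pem \
  --key-file=/opt/etcd/ssl/server-key.pem \
  --trusted-ca-file=/opt/etcd/ssl/ca.pem \
  --peer-cert-file=/opt/etcd/ssl/server.pem \
  --peer-key-file=/opt/etcd/ssl/server-key.pem \
  --peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target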


Operate on the node01 and node02 nodes

// On the node01 node
vim /opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd02"											# modify
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.80.11:2380"			# modify
ETCD_LISTEN_CLIENT_URLS="https://192.168.80.11:2379"		# modify

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.80.11:2380"		# modify
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.80.11:2379"				# modify
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.80.10:2380,etcd02=https://192.168.80.11:2380,etcd03=https://192.168.80.12:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

# Start the etcd service
systemctl start etcd
systemctl enable etcd
systemctl status etcd

// On the node02 node
vim /opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd03"											# modify
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.80.12:2380"			# modify
ETCD_LISTEN_CLIENT_URLS="https://192.168.80.12:2379"		# modify

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.80.12:2380"		# modify
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.80.12:2379"				# modify
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.80.10:2380,etcd02=https://192.168.80.11:2380,etcd03=https://192.168.80.12:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

# Start the etcd service
systemctl start etcd
systemctl enable etcd
systemctl status etcd

# Check the health of the etcd cluster
ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.80.10:2379,https://192.168.80.11:2379,https://192.168.80.12:2379" endpoint health --write-out=table

------------------------------------------------------------------------------------------
--cacert: verify the certificate of the TLS-enabled etcd server using this CA certificate
--cert: identify the HTTPS client using this TLS certificate file
--key: identify the HTTPS client using this TLS key file
--endpoints: comma-separated list of endpoints in the cluster
endpoint health: check the health of the etcd cluster
------------------------------------------------------------------------------------------

# List the etcd cluster members
ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.80.10:2379,https://192.168.80.11:2379,https://192.168.80.12:2379" --write-out=table member list
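
You can also check which member is currently the leader (the table includes the version, database size, leader flag, raft term, and raft index):

ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.80.10:2379,https://192.168.80.11:2379,https://192.168.80.12:2379" endpoint status --write-out=table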



3. Deploy the docker engine

// Deploy the docker engine on all node nodes
yum install -y yum-utils device-mapper-persistent-data lvm2 
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo 
yum install -y docker-ce docker-ce-cli containerd.io

systemctl start docker.service
systemctl enable docker.service 

vim /etc/docker/daemon.json
{
  "registry-mirrors": ["https://6ijb8ubo.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "500m",
    "max-file": "3"
  }
}
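
Reload and restart docker so the new configuration takes effect, then verify that the cgroup driver is systemd:

systemctl daemon-reload
systemctl restart docker.service

docker info | grep -i "cgroup driver"    # expected: Cgroup Driver: systemd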


4. Deploy the Master component

4.1 Operate on the master01 node

# Upload master.zip and k8s-cert.sh to the /opt/k8s directory and unzip master.zip
cd /opt/k8s/
unzip master.zip
chmod +x *.sh

# Create the kubernetes working directory
mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}

# Create the directory used to generate the CA certificate and the certificates and private keys of the related components
mkdir /opt/k8s/k8s-cert
mv /opt/k8s/k8s-cert.sh /opt/k8s/k8s-cert
cd /opt/k8s/k8s-cert/
./k8s-cert.sh				# Generate the CA certificate and the certificates and private keys of the related components

ls *pem
admin-key.pem  apiserver-key.pem  ca-key.pem  kube-proxy-key.pem  
admin.pem      apiserver.pem      ca.pem      kube-proxy.pem
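
The k8s-cert.sh script is not reproduced here. Its key part is the apiserver certificate request, whose hosts (SAN) list must cover every address the apiserver is reached by: the first IP of the service cluster IP range, 127.0.0.1, the master addresses, and the kubernetes service DNS names. A sketch under those assumptions (the profile name and subject fields are placeholders):

cat > apiserver-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "10.0.0.1",
    "127.0.0.1",
    "192.168.80.10",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": { "algo": "rsa", "size": 2048 },
  "names": [{ "C": "CN", "L": "Beijing", "ST": "Beijing", "O": "k8s", "OU": "System" }]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes apiserver-csr.json | cfssljson -bare apiserver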

# Copy the CA certificate and the apiserver certificates and private keys to the ssl subdirectory of the kubernetes working directory
cp ca*pem apiserver*pem /opt/kubernetes/ssl/

# Upload kubernetes-server-linux-amd64.tar.gz to the /opt/k8s/ directory and unpack the kubernetes archive
cd /opt/k8s/
tar zxvf kubernetes-server-linux-amd64.tar.gz

# Copy the key master component binaries to the bin subdirectory of the kubernetes working directory
cd /opt/k8s/kubernetes/server/bin
cp kube-apiserver kubectl kube-controller-manager kube-scheduler /opt/kubernetes/bin/
ln -s /opt/kubernetes/bin/* /usr/local/bin/

# Create the bootstrap token authentication file. The apiserver loads it on startup, which effectively creates this user in the cluster; it can then be authorized with RBAC
cd /opt/k8s/
vim token.sh
#!/bin/bash
# Take the first 16 random bytes, output them in hexadecimal, and strip the spaces
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
# Generate the token.csv file in the format: token,user name,UID,user group
cat > /opt/kubernetes/cfg/token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF

chmod +x token.sh
./token.sh

cat /opt/kubernetes/cfg/token.csv

# Once the binaries, token, and certificates are ready, start the apiserver service
cd /opt/k8s/
./apiserver.sh 192.168.80.10 https://192.168.80.10:2379,https://192.168.80.11:2379,https://192.168.80.12:2379

#检查进程是否启动成功
ps aux | grep kube-apiserver

netstat -natp | grep 6443   # The secure port 6443 receives HTTPS requests and handles authentication based on the token file, client certificates, etc.
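
apiserver.sh takes the local IP and the etcd endpoint list as arguments and writes the startup options to /opt/kubernetes/cfg/kube-apiserver plus a systemd unit. An abridged sketch of the options such a script typically generates (the service cluster IP range is an assumption, and the actual script may include additional flags):

KUBE_APISERVER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--etcd-servers=https://192.168.80.10:2379,https://192.168.80.11:2379,https://192.168.80.12:2379 \
--bind-address=192.168.80.10 \
--secure-port=6443 \
--advertise-address=192.168.80.10 \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-bootstrap-token-auth=true \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--tls-cert-file=/opt/kubernetes/ssl/apiserver.pem \
--tls-private-key-file=/opt/kubernetes/ssl/apiserver-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"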


# Start the scheduler service
cd /opt/k8s/
./scheduler.sh
ps aux | grep kube-scheduler

# Start the controller-manager service
./controller-manager.sh
ps aux | grep kube-controller-manager

# Generate the kubeconfig file that kubectl uses to connect to the cluster
./admin.sh

# Bind the default cluster-admin cluster role to authorize kubectl to access the cluster
kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous

# Check the status of the cluster components with the kubectl tool
kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-2               Healthy   {"health":"true"}   
etcd-1               Healthy   {"health":"true"}   
etcd-0               Healthy   {"health":"true"}  

# Check version information
kubectl version


5. Deploy Worker Node components

Operate on all node nodes

# Create the kubernetes working directory
mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}

# Upload node.zip to the /opt directory and unzip it to obtain kubelet.sh and proxy.sh
cd /opt/
unzip node.zip
chmod +x kubelet.sh proxy.sh


Operate on the master01 node

# Copy kubelet and kube-proxy to the node nodes
cd /opt/k8s/kubernetes/server/bin
scp kubelet kube-proxy [email protected]:/opt/kubernetes/bin/
scp kubelet kube-proxy [email protected]:/opt/kubernetes/bin/

# Upload the kubeconfig.sh file to the /opt/k8s/kubeconfig directory to generate the bootstrap kubeconfig file that kubelet uses when first joining the cluster, as well as the kube-proxy.kubeconfig file
# A kubeconfig file contains cluster parameters (CA certificate, API Server address), client parameters (the certificates and private keys generated above), and cluster context parameters (cluster name, user name). Kubernetes components (such as kubelet and kube-proxy) can switch between clusters and connect to the apiserver by specifying different kubeconfig files at startup.
mkdir /opt/k8s/kubeconfig

cd /opt/k8s/kubeconfig
chmod +x kubeconfig.sh
./kubeconfig.sh 192.168.80.10 /opt/k8s/k8s-cert/
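
A sketch of how kubeconfig.sh typically builds bootstrap.kubeconfig with kubectl config; kube-proxy.kubeconfig is built the same way using the kube-proxy certificate instead of the token (the variable names here are assumptions):

APISERVER=192.168.80.10
SSL_DIR=/opt/k8s/k8s-cert
BOOTSTRAP_TOKEN=$(awk -F, '{print $1}' /opt/kubernetes/cfg/token.csv)

# Cluster parameters: CA certificate and apiserver address
kubectl config set-cluster kubernetes \
  --certificate-authority=${SSL_DIR}/ca.pem \
  --embed-certs=true \
  --server=https://${APISERVER}:6443 \
  --kubeconfig=bootstrap.kubeconfig

# Client parameters: the bootstrap token from token.csv
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

# Context parameters: bind the cluster and the user together
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

kubectl config use-context default --kubeconfig=bootstrap.kubeconfig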

# Copy the bootstrap.kubeconfig and kube-proxy.kubeconfig configuration files to the node nodes
scp bootstrap.kubeconfig kube-proxy.kubeconfig [email protected]:/opt/kubernetes/cfg/
scp bootstrap.kubeconfig kube-proxy.kubeconfig [email protected]:/opt/kubernetes/cfg/

# RBAC authorization: allow the kubelet-bootstrap user to issue CSR certificate requests
kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
  • The kubelet uses the TLS Bootstrapping mechanism to register itself with the kube-apiserver automatically, which is very useful when there are many nodes or when capacity is scaled out automatically later.
    After the apiserver enables TLS authentication, the kubelet component of a node can only communicate with the apiserver using a valid certificate issued by the CA. When there are many nodes, signing these certificates by hand is very cumbersome, so Kubernetes introduces the TLS bootstrapping mechanism to issue client certificates automatically: the kubelet applies for a certificate from the apiserver as a low-privilege user, and the kubelet certificate is signed dynamically by the apiserver.
  • On its first startup, the kubelet issues its first CSR request using the user token and the apiserver CA certificate loaded from bootstrap.kubeconfig. This token is pre-built into token.csv on the apiserver node, and its identity is the kubelet-bootstrap user in the system:kubelet-bootstrap group. For this first CSR request to succeed (i.e. not be rejected by the apiserver with a 401), a ClusterRoleBinding must be created first that binds the kubelet-bootstrap user to the built-in ClusterRole system:node-bootstrapper (listed by kubectl get clusterroles), so that it has permission to initiate a CSR request.
  • The certificates issued during TLS bootstrapping are actually signed by the kube-controller-manager component, which means their validity period is controlled by kube-controller-manager. It provides the --experimental-cluster-signing-duration parameter to set the validity of the signed certificates; the default is 8760h0m0s, commonly changed to 87600h0m0s, i.e. certificates signed through TLS bootstrapping are valid for 10 years (see the sketch after this list).
  • In other words, the kubelet authenticates with a token the first time it accesses the API Server. Once that passes, the Controller Manager generates a certificate for the kubelet, and subsequent access uses that certificate for authentication.
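
For reference, the certificate-signing part of the kube-controller-manager options generated by controller-manager.sh usually looks roughly like this (an assumed excerpt, not the full option list):

--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \
--experimental-cluster-signing-duration=87600h0m0s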


Operate on the node01 node

# Start the kubelet service
cd /opt/
./kubelet.sh 192.168.80.11
ps aux | grep kubelet
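
kubelet.sh takes the node IP as an argument and writes /opt/kubernetes/cfg/kubelet plus a systemd unit. An abridged sketch of the options such a script typically generates (the config file names and the pause image address are assumptions):

KUBELET_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--hostname-override=192.168.80.11 \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet-config.yml \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"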


Operate on the master01 node and approve the CSR request

# Check the CSR request issued by the kubelet on node01; Pending means it is waiting for the cluster to sign a certificate for the node
kubectl get csr
NAME                                                   AGE  SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-duiobEzQ0R93HsULoS9NT9JaQylMmid_nBF3Ei3NtFE   12s  kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending

# Approve the CSR request
kubectl certificate approve node-csr-duiobEzQ0R93HsULoS9NT9JaQylMmid_nBF3Ei3NtFE

# Approved,Issued means the CSR request has been authorized and the certificate issued
kubectl get csr
NAME                                                   AGE  SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-duiobEzQ0R93HsULoS9NT9JaQylMmid_nBF3Ei3NtFE   2m5s kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued

# Check the nodes; since the network plugin has not been deployed yet, the node will be NotReady
kubectl get node
NAME            STATUS     ROLES    AGE    VERSION
192.168.80.11   NotReady   <none>   108s   v1.20.11

# Allow the master to automatically approve node CSRs (auto-discover node nodes)
kubectl create clusterrolebinding node-autoapprove-bootstrap --clusterrole=system:certificates.k8s.io:certificatesigningrequests:nodeclient --user=kubelet-bootstrap 

kubectl create clusterrolebinding node-autoapprove-certificate-rotation --clusterrole=system:certificates.k8s.io:certificatesigningrequests:selfnodeclient --user=kubelet-bootstrap


Operate on the node01 node

# Load the ip_vs modules
for i in $(ls /usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs|grep -o "^[^.]*");do echo $i; /sbin/modinfo -F filename $i >/dev/null 2>&1 && /sbin/modprobe $i;done
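
Verify that the modules are loaded:

lsmod | grep ip_vs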

# Start the kube-proxy service
cd /opt/
./proxy.sh 192.168.80.11
ps aux | grep kube-proxy
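
proxy.sh writes /opt/kubernetes/cfg/kube-proxy and a systemd unit for kube-proxy. An abridged sketch of the options such a script typically generates (the cluster CIDR is an assumption; newer scripts may move these settings into a kube-proxy config YAML instead):

KUBE_PROXY_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--hostname-override=192.168.80.11 \
--cluster-cidr=10.244.0.0/16 \
--proxy-mode=ipvs \
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"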


Origin blog.csdn.net/wang_dian1/article/details/132064531