Installing Kubernetes from Binaries

Master:192.168.11.220

Node1:192.168.11.221

Node2:192.168.11.222

I. Create the CA certificates and keys required by the cluster

To secure the cluster, the Kubernetes components use x509 certificates to encrypt and authenticate their communication. The CA (Certificate Authority) is a self-signed root certificate that is used to sign every other certificate created later. Here we use cfssl, CloudFlare's PKI toolkit, to create all of the certificates.

1) Install the cfssl toolset
[root@k8s-master ~]# mkdir -p /opt/k8s/work && cd /opt/k8s/work
[root@k8s-master work]# curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
[root@k8s-master work]# curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
[root@k8s-master work]# curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo
[root@k8s-master work]# chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/local/bin/cfssl-certinfo
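
A quick optional sanity check that the tools were downloaded and are executable:
[root@k8s-master work]# which cfssl cfssljson cfssl-certinfo
[root@k8s-master work]# cfssl version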

2) Create the root certificate (CA)
The CA certificate is shared by every node in the cluster; it only needs to be created once, and all certificates created afterwards are signed by it.
2.1) Create the configuration file
The CA configuration file defines the profiles (usage scenarios) of the root certificate and their parameters (usages, expiry, server authentication, client authentication, encryption, and so on); when signing other certificates later, a specific profile must be specified.
[root@k8s-master work]# cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}
EOF

Configuration notes:
signing: the certificate can be used to sign other certificates; the generated ca.pem will contain CA=TRUE;
server auth: a client can use this certificate to verify the certificate presented by a server;
client auth: a server can use this certificate to verify the certificate presented by a client;

2.2) Create the certificate signing request (CSR) file
[root@k8s-master work]# cat > ca-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "GuangDong",
      "L": "GuangDong",
    }
  ]
}
EOF

Configuration notes:
CN: Common Name. kube-apiserver extracts this field from the certificate and uses it as the requesting user name (User Name); browsers use this field to check whether a website is legitimate.

2.3) Generate the CA certificate and private key
[root@k8s-master work]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
[root@k8s-master work]# ls ca*
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem
[root@k8s-master work]# mkdir -p /etc/kubernetes/cert
[root@k8s-master work]# cp ca*.pem ca-config.json /etc/kubernetes/cert
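
To confirm that the root certificate matches the signing usage described above (CA=TRUE), you can optionally inspect it, for example with openssl or cfssl-certinfo:
[root@k8s-master work]# openssl x509 -in ca.pem -noout -text | grep -A1 "Basic Constraints"
[root@k8s-master work]# cfssl-certinfo -cert ca.pem
The Basic Constraints section of the output should show CA:TRUE.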

#Distribute the certificate and private key to all nodes
[root@k8s-master work]# for node_all_ip in 192.168.11.221 192.168.11.222
do
   echo ">>> ${node_all_ip}"
   ssh root@${node_all_ip} "mkdir -p /etc/kubernetes/cert"
   scp ca*.pem ca-config.json root@${node_all_ip}:/etc/kubernetes/cert
done
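
Optionally, verify that the files actually landed on each node:
[root@k8s-master work]# for node_all_ip in 192.168.11.221 192.168.11.222
do
   ssh root@${node_all_ip} "ls /etc/kubernetes/cert"
done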

II. Deploy the etcd cluster

1) Download the etcd binaries
[root@k8s-master work]# wget https://github.com/etcd-io/etcd/releases/download/v3.4.3/etcd-v3.4.3-linux-amd64.tar.gz
[root@k8s-master work]# tar -xvf etcd-v3.4.3-linux-amd64.tar.gz
[root@k8s-master work]# mkdir -p /opt/k8s/bin
[root@k8s-master work]# cp etcd-v3.4.3-linux-amd64/etcd* /opt/k8s/bin/
[root@k8s-master work]# chmod +x /opt/k8s/bin/*
[root@k8s-master work]# for node_all_ip in 192.168.11.221 192.168.11.222
do
   echo ">>> ${node_all_ip}"
   ssh root@${node_all_ip} "mkdir -p /opt/k8s/bin"
   scp etcd-v3.4.3-linux-amd64/etcd* root@${node_all_ip}:/opt/k8s/bin/
   ssh root@${node_all_ip} "chmod +x /opt/k8s/bin/*"
done
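
A quick optional check that the binaries run on every node and report the expected version:
[root@k8s-master work]# /opt/k8s/bin/etcd --version
[root@k8s-master work]# for node_all_ip in 192.168.11.221 192.168.11.222
do
   echo ">>> ${node_all_ip}"
   ssh root@${node_all_ip} "/opt/k8s/bin/etcd --version"
done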

2) Create the etcd certificate and private key
Create the certificate signing request:
[root@k8s-master work]# cat > etcd-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "192.168.11.220",
    "192.168.11.221",
    "192.168.11.222"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "GuangDong",
      "L": "GuangDong"
    }
  ]
}
EOF

Configuration notes:
hosts: specifies the list of etcd node IPs or domain names authorized to use this certificate; the IPs of all three etcd cluster nodes must be listed here;

2.1) Generate the certificate and private key
[root@k8s-master work]# cfssl gencert -ca=/opt/k8s/work/ca.pem \
   -ca-key=/opt/k8s/work/ca-key.pem \
   -config=/opt/k8s/work/ca-config.json \
   -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
[root@k8s-master work]# ls etcd*pem
etcd-key.pem  etcd.pem
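
Per the hosts note above, the three node IPs should appear as Subject Alternative Names in the signed certificate; one optional way to verify this is with openssl:
[root@k8s-master work]# openssl x509 -in etcd.pem -noout -text | grep -A1 "Subject Alternative Name"
The output should list IP Address:192.168.11.220, 192.168.11.221 and 192.168.11.222.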

[root@k8s-master work]# mkdir -p /etc/etcd/cert
[root@k8s-master work]# cp etcd*.pem /etc/etcd/cert/
[root@k8s-master work]# for node_all_ip in 192.168.11.221 192.168.11.222
do
   echo ">>> ${node_all_ip}"
   ssh root@${node_all_ip} "mkdir -p /etc/etcd/cert"
   scp etcd*.pem root@${node_all_ip}:/etc/etcd/cert/
done

#Create a script that generates the etcd systemd unit from a template
[root@k8s-master work]# cat etcd.sh
#!/bin/bash
#example: ./etcd.sh etcd01 192.168.11.220 etcd01=https://192.168.11.220:2380,etcd02=https://192.168.11.221:2380,etcd03=https://192.168.11.222:2380

NODE_ETCD_NAME=$1
NODE_ETCD_IP=$2
ETCD_NODES=$3
ETCD_DATA_DIR=/data/k8s/etcd/data
ETCD_WAL_DIR=/data/k8s/etcd/wal

if [ ! -d "${ETCD_DATA_DIR}" ] || [ ! -d "${ETCD_WAL_DIR}" ];then
  mkdir -p ${ETCD_DATA_DIR} ${ETCD_WAL_DIR}
fi

cat > /etc/systemd/system/etcd.service <<EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos
   
[Service]
Type=notify
WorkingDirectory=${ETCD_DATA_DIR}
ExecStart=/opt/k8s/bin/etcd \\
  --data-dir=${ETCD_DATA_DIR} \\
  --wal-dir=${ETCD_WAL_DIR} \\
  --name=${NODE_ETCD_NAME} \\
  --cert-file=/etc/etcd/cert/etcd.pem \\
  --key-file=/etc/etcd/cert/etcd-key.pem \\
  --trusted-ca-file=/etc/kubernetes/cert/ca.pem \\
  --peer-cert-file=/etc/etcd/cert/etcd.pem \\
  --peer-key-file=/etc/etcd/cert/etcd-key.pem \\
  --peer-trusted-ca-file=/etc/kubernetes/cert/ca.pem \\
  --peer-client-cert-auth \\
  --client-cert-auth \\
  --listen-peer-urls=https://${NODE_ETCD_IP}:2380 \\
  --initial-advertise-peer-urls=https://${NODE_ETCD_IP}:2380 \\
  --listen-client-urls=https://${NODE_ETCD_IP}:2379,http://127.0.0.1:2379 \\
  --advertise-client-urls=https://${NODE_ETCD_IP}:2379 \\
  --initial-cluster-token=etcd-cluster-0 \\
  --initial-cluster=${ETCD_NODES} \\
  --initial-cluster-state=new \\
  --auto-compaction-mode=periodic \\
  --auto-compaction-retention=1 \\
  --max-request-bytes=33554432 \\
  --quota-backend-bytes=6442450944 \\
  --heartbeat-interval=250 \\
  --election-timeout=2000
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
   
[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable etcd
systemctl restart etcd

#Start etcd. The command will appear to hang; this is because etcd needs to complete leader election before it can run normally, so run the script on the other two nodes as well, remembering to change the $1 and $2 arguments
[root@k8s-master work]# ./etcd.sh etcd01 192.168.11.220 etcd01=https://192.168.11.220:2380,etcd02=https://192.168.11.221:2380,etcd03=https://192.168.11.222:2380
for node_all_ip in 192.168.11.221 192.168.11.222 
do
  echo ">>> ${node_all_ip}"
  scp /opt/k8s/work/etcd.sh root@${node_all_ip}:/opt/k8s/
done

node1: sh /opt/k8s/etcd.sh etcd02 192.168.11.221 etcd01=https://192.168.11.220:2380,etcd02=https://192.168.11.221:2380,etcd03=https://192.168.11.222:2380
node2: sh /opt/k8s/etcd.sh etcd03 192.168.11.222 etcd01=https://192.168.11.220:2380,etcd02=https://192.168.11.221:2380,etcd03=https://192.168.11.222:2380
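
Before running the health check below, it can help to confirm that the etcd service is active on every node; if a node reports failed, inspect its logs with journalctl -u etcd:
[root@k8s-master work]# systemctl is-active etcd
[root@k8s-master work]# for node_all_ip in 192.168.11.221 192.168.11.222
do
   echo ">>> ${node_all_ip}"
   ssh root@${node_all_ip} "systemctl is-active etcd"
done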

#After the script has been run on all nodes, check that every member is healthy
[root@k8s-master work]# ETCDCTL_API=3 /opt/k8s/bin/etcdctl --endpoints="https://192.168.11.220:2379,https://192.168.11.221:2379,https://192.168.11.222:2379" \
 --cacert=/etc/kubernetes/cert/ca.pem \
 --cert=/etc/etcd/cert/etcd.pem \
 --key=/etc/etcd/cert/etcd-key.pem endpoint health
Output:
https://192.168.11.221:2379 is healthy: successfully committed proposal: took = 25.077164ms
https://192.168.11.220:2379 is healthy: successfully committed proposal: took = 38.10606ms
https://192.168.11.222:2379 is healthy: successfully committed proposal: took = 38.785388ms
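
You can also list the cluster members (optional); all three nodes should show up as started:
[root@k8s-master work]# ETCDCTL_API=3 /opt/k8s/bin/etcdctl --endpoints="https://192.168.11.220:2379" \
 --cacert=/etc/kubernetes/cert/ca.pem \
 --cert=/etc/etcd/cert/etcd.pem \
 --key=/etc/etcd/cert/etcd-key.pem member list -w table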

#Check which node is the current leader of the etcd cluster
[root@k8s-master work]# for node_all_ip in 192.168.11.220 192.168.11.221 192.168.11.222
do
ETCDCTL_API=3 /opt/k8s/bin/etcdctl -w table --cacert=/etc/kubernetes/cert/ca.pem \
--cert=/etc/etcd/cert/etcd.pem --key=/etc/etcd/cert/etcd-key.pem \
--endpoints=https://${node_all_ip}:2379 endpoint status
done
+-----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|          ENDPOINT           |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+-----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://192.168.11.220:2379 | a6bbfb193c776e5c |   3.4.3 |   25 kB |      true |      false |       458 |         21 |                 21 |        |
+-----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://192.168.11.221:2379 | 7b37d21aaf69f7d2 |   3.4.3 |   20 kB |     false |      false |       458 |         21 |                 21 |        |
+-----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://192.168.11.222:2379 | 711ad351fe31c699 |   3.4.3 |   20 kB |     false |      false |       458 |         21 |                 21 |        |
+-----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
As the output above shows, the current leader node is 192.168.11.220.

#If you see the following error, check whether the clocks on all servers are synchronized:
rejected connection from "192.168.11.220:58360" (error "tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid", ServerName "")
#If etcd runs into connection failures and the member fails to come up, change --initial-cluster-state=new to existing in the etcd unit file and restart; the service should then recover
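
A rough way to check for clock drift (the usual cause of the certificate error above) is simply to compare the time on all nodes; if they differ by more than a few seconds, configure chrony or ntpd:
[root@k8s-master work]# date; for node_all_ip in 192.168.11.221 192.168.11.222
do
   ssh root@${node_all_ip} "date"
done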

Reposted from www.cnblogs.com/pzb-shadow/p/12594217.html