Part 3 (Deploying a k8s cluster from binaries --- Flannel network and keepalived + haproxy high availability)

In this article we set up the Flannel network so that Docker containers on different hosts can talk to each other, and deploy keepalived + haproxy to provide high availability (HA) for the Kubernetes cluster.
Deployed servers:
master1 192.168.206.31
master2 192.168.206.32
master3 192.168.206.33
node1 192.168.206.41
node2 192.168.206.42
node3 192.168.206.43
VIP : 192.168.206.30
ha1 192.168.206.36
ha2 192.168.206.37

1. Generate TLS certificates for the Flannel network

Flannel is installed on every cluster node; the certificate steps below are performed once on k8s-master1.
1. Create the certificate signing request

cat > flanneld-csr.json <<EOF
{
  "CN": "flanneld",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Zhejiang",
      "L": "hangzhou",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

This certificate is only used as a client certificate (flanneld presents it when talking to etcd over TLS), so the hosts field is left empty.

2. Generate the certificate and private key:

cfssl gencert -ca=/data/ssl/ca.pem \
  -ca-key=/data/ssl/ca-key.pem \
  -config=/data/ssl/ca-config.json \
  -profile=kubernetes flanneld-csr.json | cfssljson -bare flanneld

Create a directory to store the certificates:
mkdir -p /opt/kubernetes/ssl/flannel

Copy the certificates to this path on all 3 masters and all 3 nodes (the flanneld unit below also expects the CA certificate here):
cp flanneld*.pem /opt/kubernetes/ssl/flannel
cp /data/ssl/ca.pem /opt/kubernetes/ssl/flannel
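Optionally, before distributing the files you can sanity-check the generated certificate with openssl (a quick check, not part of the original steps; paths assume you ran cfssl in the current directory):

# confirm the subject/validity and that the certificate chains to our CA
openssl x509 -noout -subject -issuer -dates -in flanneld.pem
openssl verify -CAfile /data/ssl/ca.pem flanneld.pem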

2. Deploy Flannel
1. Download and install Flannel

wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz
tar -xzvf flannel-v0.10.0-linux-amd64.tar.gz
cp {flanneld,mk-docker-opts.sh} /opt/kubernetes/bin/

2. Write the network segment information to etcd
The following two commands only need to be run once, on any one member of the etcd cluster; they create the Flannel network segment that will later be carved into per-node docker subnets.

etcdctl --ca-file=/opt/kubernetes/ssl/etcd/ca.pem --cert-file=/opt/kubernetes/ssl/etcd/etcd.pem --key-file=/opt/kubernetes/ssl/etcd/etcd-key.pem mkdir /opt/kubernetes/network
etcdctl --ca-file=/opt/kubernetes/ssl/etcd/ca.pem --cert-file=/opt/kubernetes/ssl/etcd/etcd.pem --key-file=/opt/kubernetes/ssl/etcd/etcd-key.pem mk /opt/kubernetes/network/config '{"Network":"172.30.0.0/16","SubnetLen":24,"Backend":{"Type":"vxlan"}}'
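To confirm the configuration was written, you can read the key back with the same etcdctl v2 flags (optional check):

etcdctl --ca-file=/opt/kubernetes/ssl/etcd/ca.pem --cert-file=/opt/kubernetes/ssl/etcd/etcd.pem --key-file=/opt/kubernetes/ssl/etcd/etcd-key.pem get /opt/kubernetes/network/config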

3. Create the systemd unit file

cat > /etc/systemd/system/flanneld.service << EOF
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
ExecStart=/opt/kubernetes/bin/flanneld \
  -etcd-cafile=/opt/kubernetes/ssl/flannel/ca.pem \
  -etcd-certfile=/opt/kubernetes/ssl/flannel/flanneld.pem \
  -etcd-keyfile=/opt/kubernetes/ssl/flannel/flanneld-key.pem \
  -etcd-endpoints=https://192.168.206.31:2379,https://192.168.206.32:2379,https://192.168.206.33:2379 \
  -etcd-prefix=/opt/kubernetes/network
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
EOF

The mk-docker-opts.sh script writes the Pod subnet assigned to flanneld into /run/flannel/docker; when docker starts later, it uses the values in this file to configure the docker0 bridge.
flanneld communicates with the other nodes over the interface that carries the system default route. On machines with more than one network interface (for example, an internal and a public one), you can pin the interface with the -iface=enpxx option, as sketched below.
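A minimal sketch of the adjusted ExecStart, assuming ens33 (the interface used elsewhere in this setup) is the one flanneld should bind; only the added last option differs from the unit above:

ExecStart=/opt/kubernetes/bin/flanneld \
  -etcd-cafile=/opt/kubernetes/ssl/flannel/ca.pem \
  -etcd-certfile=/opt/kubernetes/ssl/flannel/flanneld.pem \
  -etcd-keyfile=/opt/kubernetes/ssl/flannel/flanneld-key.pem \
  -etcd-endpoints=https://192.168.206.31:2379,https://192.168.206.32:2379,https://192.168.206.33:2379 \
  -etcd-prefix=/opt/kubernetes/network \
  -iface=ens33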

4. Start flanneld and enable it to start at boot.

systemctl daemon-reload
systemctl enable flanneld
systemctl start flanneld

5. View the subnet information assigned by flannel

[root@k8s-master1 ~]# cat /run/flannel/docker 
DOCKER_OPT_BIP="--bip=172.30.94.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=true"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.30.94.1/24 --ip-masq=true --mtu=1450"

[root@k8s-master1 ~]# cat /run/flannel/subnet.env 
FLANNEL_NETWORK=172.30.0.0/16
FLANNEL_SUBNET=172.30.94.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=false

/run/flannel/docker holds the subnet options flannel passes to docker; /run/flannel/subnet.env contains flannel's overall network range as well as the subnet assigned to this node.

6. Verify that the flannel network is working

[root@k8s-master1 ~]# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:6e:7f:49 brd ff:ff:ff:ff:ff:ff
    inet 192.168.206.31/24 brd 192.168.206.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::bbd4:6d75:22b1:e631/64 scope link tentative noprefixroute dadfailed 
       valid_lft forever preferred_lft forever
    inet6 fe80::129b:129d:71ca:5d94/64 scope link tentative noprefixroute dadfailed 
       valid_lft forever preferred_lft forever
    inet6 fe80::1b37:c32:6cc4:be75/64 scope link tentative noprefixroute dadfailed 
       valid_lft forever preferred_lft forever
3: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default 
    link/ether de:b1:04:6f:d6:57 brd ff:ff:ff:ff:ff:ff
    inet 172.30.65.0/32 brd 172.30.65.0 scope global flannel.1
       valid_lft forever preferred_lft forever
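A simple cross-host check (a sketch; 172.30.21.0 is a placeholder, replace it with the flannel.1 address of another node, taken from `ip add` on that node or from the subnet list in etcd):

# list the subnets flannel has handed out
etcdctl --ca-file=/opt/kubernetes/ssl/etcd/ca.pem --cert-file=/opt/kubernetes/ssl/etcd/etcd.pem --key-file=/opt/kubernetes/ssl/etcd/etcd-key.pem ls /opt/kubernetes/network/subnets

# ping another node's flannel.1 address across the vxlan overlay
ping -c 3 172.30.21.0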

3. Install Docker and configure it for the Flannel network
1. Install Docker on all nodes

yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# Install a specific version; here we install 18.06
yum list docker-ce --showduplicates | sort -r
yum install -y docker-ce-18.06.1.ce-3.el7

systemctl start docker && systemctl enable docker

2. Configure Docker to use the Flannel network; do this on every Docker node.

[root@k8s-master1 ~]# vi /etc/systemd/system/multi-user.target.wants/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
EnvironmentFile=/run/flannel/docker
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
#TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
# restart the docker process if it exits prematurely
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target

3. Restart Docker to apply the configuration.

systemctl daemon-reload
systemctl restart docker
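After the restart, docker0 should come up inside this node's flannel subnet; a quick way to confirm (optional check):

# docker0's address should match FLANNEL_SUBNET in /run/flannel/subnet.env
ip addr show docker0
cat /run/flannel/subnet.env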

4. View the network status of all cluster hosts

etcdctl --ca-file=/opt/kubernetes/ssl/etcd/ca.pem --cert-file=/opt/kubernetes/ssl/etcd/etcd.pem --key-file=/opt/kubernetes/ssl/etcd/etcd-key.pem ls /opt/kubernetes/network/subnets

4. Deploy keepalived + haproxy for high availability
Deployment servers:
ha1 192.168.206.36
ha2 192.168.206.37
1. Install haproxy on both HA nodes (ha1 and ha2).

yum install -y haproxy

cat <<EOF > /etc/haproxy/haproxy.cfg
global
    log         127.0.0.1 local2

    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
defaults
    mode                    tcp
    log                     global
    retries                 3
    timeout connect         10s
    timeout client          1m
    timeout server          1m
frontend  k8s-api
    bind        *:6443
    bind        *:443
    mode        tcp
    option      tcplog
    default_backend k8s-api
backend k8s-api
    mode        tcp
    option      tcplog
    option      tcp-check
    balance     roundrobin
    default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
    server master1 192.168.206.31:6443 check
    server master2 192.168.206.32:6443 check
    server master3 192.168.206.33:6443 check
EOF
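Before starting the service you can ask haproxy to validate the file; -c only checks the configuration and exits:

haproxy -c -f /etc/haproxy/haproxy.cfg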

2. Start haproxy on both HA nodes.

systemctl start haproxy
systemctl status haproxy
systemctl enable haproxy
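To confirm the frontend is actually listening on ports 6443 and 443, a quick check with ss (part of iproute2):

ss -lntp | grep haproxy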

3. Install keepalived on both HA nodes.

yum install -y keepalived

cat <<EOF > /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id LVS_K8S
}

vrrp_instance VI_1 {
    state MASTER              # MASTER on ha1; set BACKUP on ha2
    interface ens33
    virtual_router_id 51
    priority 100              # 100 on the MASTER; use 50 on the BACKUP (ha2)
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.206.30/24
    }
}
EOF

4. Start keepalived on both HA nodes.

systemctl restart keepalived
systemctl status keepalived
systemctl enable keepalived

After it starts, check that the VIP is bound on the primary node, or shut down the primary HA node and confirm that the VIP fails over to the backup, for example:
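A minimal check sequence (a sketch; the first command runs on the current MASTER, ha1 in this setup):

# the VIP should be bound on the MASTER's ens33
ip addr show ens33 | grep 192.168.206.30

# simulate a failure on the MASTER and confirm the VIP moves to the other node
systemctl stop keepalived                   # on ha1
ip addr show ens33 | grep 192.168.206.30    # on ha2, the VIP should now appear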


Source: blog.51cto.com/14033037/2552447