K8s high-availability cluster load-balancer VIP (HAProxy + keepalived)
Why use a load-balancer VIP (HAProxy + keepalived)?
- When initializing the k8s masters, a load balancer can forward apiserver requests to different machines, so that the failure of one server does not make all nodes unavailable.
- Without a load balancer, you create the first k8s master and then join the others to it. If the first master goes down, the masters joined later become unavailable, because every request is still sent to the first master's apiserver.
- Since we have keepalived, why do we also need HAProxy? Can't keepalived create the VIP (virtual IP) by itself? It can, if keepalived runs on the same machines as k8s (both must then exist on every master), and whichever machine holds the VIP receives all the traffic. But the machine holding the VIP may be up at the network level while its k8s apiserver is unusable, and keepalived alone will not route around that. In practice there may also be dedicated load-balancer servers holding the VIP, not co-located with k8s.
- Why not use the IP of a k8s Service? Because a Service IP may not be reachable from machines outside the cluster, it is assigned more or less at random, and every Service gets a different one. So when exposing the cluster externally, a VIP (virtual IP address) is used instead.
Installation and configuration
- Install and configure HAProxy (run on all three servers)
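The commands below reference several shell variables that must be set before the heredocs are written out (the heredocs are unquoted, so the variables are expanded at creation time). The values here are examples matching the IPs used later in this guide; adjust them to your environment:

```shell
# Example values only -- adjust to your environment.
export MASTER_1_IP=192.168.80.81   # first k8s master
export MASTER_2_IP=192.168.80.82   # second k8s master
export MASTER_3_IP=192.168.80.83   # third k8s master
export VIP_IP=192.168.80.100       # virtual IP managed by keepalived
export INTERFACE_NAME=eth0         # NIC that will carry the VIP (check with `ip addr`)
```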
mkdir -p /etc/kubernetes/
cat > /etc/kubernetes/haproxy.cfg << EOF
global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4096
    user        haproxy
    group       haproxy
    daemon
    stats socket /var/lib/haproxy/stats

defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option                  http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

frontend kube-apiserver
    mode            tcp
    bind            *:9443
    option          tcplog
    default_backend kube-apiserver

listen stats
    mode          http
    bind          *:8888
    stats auth    admin:password
    stats refresh 5s
    stats realm   HAProxy\ Statistics
    stats uri     /stats
    log           127.0.0.1 local3 err

backend kube-apiserver
    mode    tcp
    balance roundrobin
    server k8s-master1 $MASTER_1_IP:6443 check
    server k8s-master2 $MASTER_2_IP:6443 check
    server k8s-master3 $MASTER_3_IP:6443 check
EOF
cat /etc/kubernetes/haproxy.cfg
docker run \
-d \
--name k8s-haproxy \
--net=host \
--restart=always \
-v /etc/kubernetes/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro \
haproxytech/haproxy-debian:2.3
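Once the container is up, it is worth confirming that the config it was given actually lists all three apiserver backends. One way is a small parsing sketch (the `list_backends` helper is hypothetical, not part of the original guide; it is a pure text filter so the logic can be exercised against captured config text):

```shell
#!/bin/bash
# Print the "server <name> <address>" entries declared in a haproxy.cfg.
list_backends() {
  awk '/^[[:space:]]*server[[:space:]]/ { print $2, $3 }' "$1"
}

# Against the file generated above (guarded so the sketch is harmless elsewhere):
[ -f /etc/kubernetes/haproxy.cfg ] && list_backends /etc/kubernetes/haproxy.cfg || true
```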
- Install and configure keepalived (first machine: 192.168.80.81)
cat > /etc/kubernetes/keepalived.conf << EOF
! Configuration File for keepalived
global_defs {
    router_id LVS_1
}

vrrp_script checkhaproxy {
    script "/usr/bin/check-haproxy.sh"
    interval 2
    weight -30
}

vrrp_instance VI_1 {
    state MASTER
    interface $INTERFACE_NAME
    virtual_router_id 51
    priority 100
    advert_int 1

    virtual_ipaddress {
        $VIP_IP/24 dev $INTERFACE_NAME
    }

    authentication {
        auth_type PASS
        auth_pass <password>
    }

    track_script {
        checkhaproxy
    }
}
EOF
cat > /etc/kubernetes/check-haproxy.sh << EOF
#!/bin/bash
# exit 0 if something is listening on HAProxy's port 9443
count=\`netstat -apn | grep 9443 | wc -l\`
if [ \$count -gt 0 ]; then
    exit 0
else
    exit 1
fi
EOF
# make the script executable, otherwise keepalived cannot run it
chmod +x /etc/kubernetes/check-haproxy.sh
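Note that `grep 9443` in the script above matches those digits anywhere in the `netstat` output (PIDs, ephemeral ports), not only a listener on port 9443. A tighter match is sketched below as a standalone filter function, so the logic can be tested against captured output (an illustrative variant, not part of the original script):

```shell
#!/bin/bash
# Succeed (exit 0) if the socket listing on stdin shows a LISTEN socket
# on the given port; feed it `ss -ltn` (or `netstat -ltn`) output.
port_listening() {
  local port="$1"
  grep LISTEN | grep -Eq "[:.]${port}[[:space:]]"
}

# Live check (prints a status either way, so it is safe to run anywhere):
ss -ltn 2>/dev/null | port_listening 9443 && echo "haproxy up" || echo "haproxy down"
```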
# these two files will be copied to the other two nodes with scp
cat /etc/kubernetes/keepalived.conf
cat /etc/kubernetes/check-haproxy.sh
docker run \
-d \
--name k8s-keepalived \
--restart=always \
--net=host \
--cap-add=NET_ADMIN \
--cap-add=NET_BROADCAST \
--cap-add=NET_RAW \
-v /etc/kubernetes/keepalived.conf:/container/service/keepalived/assets/keepalived.conf \
-v /etc/kubernetes/check-haproxy.sh:/usr/bin/check-haproxy.sh \
osixia/keepalived:2.0.20 \
--copy-service
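After the container starts, the VIP (192.168.80.100 in this guide) should be attached to the chosen interface on whichever node currently holds the VRRP MASTER role. A quick check; the match is factored into a filter function (a hypothetical helper) so it can be tested against captured `ip -o addr` output:

```shell
#!/bin/bash
# Succeed if the `ip -o addr` text on stdin carries the given address.
has_vip() { grep -Fq " ${1}/"; }

# Live check (prints a status either way):
ip -o addr 2>/dev/null | has_vip 192.168.80.100 && echo "this node holds the VIP" || echo "VIP not here"
```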
# make sure /root/.ssh exists so the other nodes can push their public keys
mkdir -p /root/.ssh/
- Install and configure keepalived (second machine: 192.168.80.82)
# press Enter at every prompt to accept the defaults
ssh-keygen -t rsa
# note: this overwrites any existing authorized_keys on the target
scp -P 22 /root/.ssh/id_rsa.pub root@$MASTER_1_IP:/root/.ssh/authorized_keys
scp -P 22 root@$MASTER_1_IP:/etc/kubernetes/keepalived.conf /etc/kubernetes/
scp -P 22 root@$MASTER_1_IP:/etc/kubernetes/check-haproxy.sh /etc/kubernetes/
sudo sed -i "s#router_id LVS_1#router_id LVS_2#g" /etc/kubernetes/keepalived.conf
sudo sed -i "s#state MASTER#state BACKUP#g" /etc/kubernetes/keepalived.conf
docker run \
-d \
--name k8s-keepalived \
--restart=always \
--net=host \
--cap-add=NET_ADMIN \
--cap-add=NET_BROADCAST \
--cap-add=NET_RAW \
-v /etc/kubernetes/keepalived.conf:/container/service/keepalived/assets/keepalived.conf \
-v /etc/kubernetes/check-haproxy.sh:/usr/bin/check-haproxy.sh \
osixia/keepalived:2.0.20 \
--copy-service
- Install and configure keepalived (third machine: 192.168.80.83)
# press Enter at every prompt to accept the defaults
ssh-keygen -t rsa
# note: this overwrites any existing authorized_keys on the target
scp -P 22 /root/.ssh/id_rsa.pub root@$MASTER_1_IP:/root/.ssh/authorized_keys
scp -P 22 root@$MASTER_1_IP:/etc/kubernetes/keepalived.conf /etc/kubernetes/
scp -P 22 root@$MASTER_1_IP:/etc/kubernetes/check-haproxy.sh /etc/kubernetes/
sudo sed -i "s#router_id LVS_1#router_id LVS_3#g" /etc/kubernetes/keepalived.conf
sudo sed -i "s#state MASTER#state BACKUP#g" /etc/kubernetes/keepalived.conf
docker run \
-d \
--name k8s-keepalived \
--restart=always \
--net=host \
--cap-add=NET_ADMIN \
--cap-add=NET_BROADCAST \
--cap-add=NET_RAW \
-v /etc/kubernetes/keepalived.conf:/container/service/keepalived/assets/keepalived.conf \
-v /etc/kubernetes/check-haproxy.sh:/usr/bin/check-haproxy.sh \
osixia/keepalived:2.0.20 \
--copy-service
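How failover works with this configuration: all three nodes keep `priority 100` (the `sed` edits change only `router_id` and `state`), but when check-haproxy.sh fails on a node, `weight -30` lowers that node's effective VRRP priority to 70, so a healthy peer still at 100 takes over the VIP. After the edits, each node should differ only in `router_id` and `state`; a quick way to confirm (`vrrp_identity` is a hypothetical helper, not from the original guide):

```shell
#!/bin/bash
# Print the fields that distinguish the nodes from a keepalived.conf ($1).
vrrp_identity() { grep -E 'router_id|state|priority' "$1"; }

# Guarded so the sketch is harmless on machines without the file:
[ -f /etc/kubernetes/keepalived.conf ] && vrrp_identity /etc/kubernetes/keepalived.conf || true
```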
- Visit either of the following addresses (username: admin, password: password):
  http://192.168.80.81:8888/stats
  http://192.168.80.82:8888/stats
- Test the VIP (install a different Nginx version on each of the three machines so the responses are easy to tell apart; the containers are deleted after the test)
- Install Nginx 1.23.1 (first machine: 192.168.80.81)
docker run \
--restart=always \
-itd \
--privileged=true \
-p 6443:80 \
-v /etc/localtime:/etc/localtime \
--name nginx nginx:1.23.1
- Install Nginx 1.23.2 (second machine: 192.168.80.82)
docker run \
--restart=always \
-itd \
--privileged=true \
-p 6443:80 \
-v /etc/localtime:/etc/localtime \
--name nginx nginx:1.23.2
- Install Nginx 1.23.3 (third machine: 192.168.80.83)
docker run \
--restart=always \
-itd \
--privileged=true \
-p 6443:80 \
-v /etc/localtime:/etc/localtime \
--name nginx nginx:1.23.3
- Visit any of http://192.168.80.81:8888/stats , http://192.168.80.82:8888/stats , http://192.168.80.83:8888/stats , or http://192.168.80.100:8888/stats ; you can see that k8s-master1, k8s-master2, and k8s-master3 (now backed by nginx) are all up.
- Visit http://192.168.80.81:6443/xuxiaowei , http://192.168.80.82:6443/xuxiaowei , and http://192.168.80.83:6443/xuxiaowei ; the responses show that the three machines run different Nginx versions.
- Visit http://192.168.80.100:9443/xuxiaowei (the VIP); here the reported nginx version is 1.23.3, i.e. the request was served by the third machine.
- Shut down the third machine (192.168.80.83) and visit http://192.168.80.100:9443/xuxiaowei again; the reported nginx version is now 1.23.1, i.e. the first machine.
- Shut down the first machine (192.168.80.81) as well and visit http://192.168.80.100:9443/xuxiaowei ; the reported nginx version is now 1.23.2, i.e. the second machine.
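The browser checks above can also be scripted. This sketch hits the VIP and extracts the nginx version from the `Server` response header; the parsing is a pure filter so it can be tested against captured headers (VIP 192.168.80.100 as configured above; assumes curl is installed):

```shell
#!/bin/bash
# Pull "1.23.3" out of a header dump containing "Server: nginx/1.23.3".
nginx_version() { awk -F'nginx/' '/^[Ss]erver:/ { print $2 }' | tr -d '\r'; }

# Each request may hit a different backend because of "balance roundrobin".
for i in 1 2 3; do
  curl -s -o /dev/null -D - http://192.168.80.100:9443/xuxiaowei 2>/dev/null | nginx_version
done
```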
- Stop and delete the nginx test containers (run on all three servers)
docker stop nginx
docker rm nginx