Three working modes supported by kube-proxy

userspace mode

In userspace mode, kube-proxy creates a listening port on the node for each Service. A request is first sent to the Service's Cluster IP and is then redirected by iptables rules to the port kube-proxy listens on. kube-proxy then selects a backend Pod according to its load-balancing (LB) algorithm and establishes a connection to it to forward the request.

In this mode, kube-proxy acts as a layer-4 load balancer. Because kube-proxy runs in userspace, forwarding involves extra data copies between kernel space and user space; the mode is stable but relatively inefficient.
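
Which mode kube-proxy is actually running in on a node can be checked directly. A minimal check, assuming kube-proxy exposes its metrics endpoint on the default port 10249 of the node:

# Query the proxy mode reported by kube-proxy (prints userspace, iptables or ipvs)
[root@k8s-master ~]# curl -s 127.0.0.1:10249/proxyMode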

iptables mode

In iptables mode, kube-proxy creates iptables rules for each Pod behind a Service. A request is first sent to the Service's Cluster IP and is then forwarded directly to a specific Pod by those iptables rules.

In this mode, kube-proxy only watches for Service changes and generates the latest iptables rules; it no longer acts as a layer-4 load balancer itself. This is more efficient than userspace mode, but it cannot provide flexible LB strategies: if one of the Pods becomes abnormal, iptables will still forward traffic to it and will not retry against another Pod.
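
As a rough illustration of what kube-proxy generates in this mode, the NAT chains it maintains can be listed on any node (KUBE-SERVICES, KUBE-SVC-* and KUBE-SEP-* are the chain names used by current kube-proxy versions):

# List the Service entry chain kube-proxy maintains in the nat table
[root@k8s-master ~]# iptables -t nat -L KUBE-SERVICES -n | head -n 20
# Each Service jumps to a KUBE-SVC-XXXX chain, which in turn jumps to one
# KUBE-SEP-XXXX chain per backend Pod using probability-based matching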

ipvs mode

The ipvs mode is similar to the iptables mode: kube-proxy watches Service and Pod changes and creates the corresponding ipvs rules. Compared with iptables, ipvs forwards traffic more efficiently and supports more LB algorithms.
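
The scheduling algorithm used by ipvs can also be selected in the kube-proxy configuration. A minimal sketch of the relevant fields in the kube-proxy ConfigMap (rr is the default; lc and sh are other valid ipvs schedulers):

# Excerpt from the kube-proxy ConfigMap (kubectl edit cm kube-proxy -n kube-system)
ipvs:
  scheduler: "rr"     # rr = round robin, lc = least connections, sh = source hashing
mode: "ipvs"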

Turn on ipvs mode

1. Edit the kube-proxy ConfigMap and set mode to "ipvs"
[root@k8s-master ~]# kubectl edit cm kube-proxy -n kube-system
44     mode: "ipvs"			# around line 44
configmap/kube-proxy edited
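
Note: ipvs mode requires the ipvs kernel modules to be available on every node; if they are missing, kube-proxy may fall back to iptables mode. A minimal check/load sketch (the exact conntrack module name can vary with the kernel version):

# Load the ipvs-related kernel modules and confirm they are present
[root@k8s-master ~]# modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack
[root@k8s-master ~]# lsmod | grep ip_vs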

2. Recreate all kube-proxy pods
# -l selects the pods by label
[root@k8s-master ~]# kubectl delete pod -l k8s-app=kube-proxy -n kube-system
pod "kube-proxy-4lvxw" deleted
pod "kube-proxy-lkm8r" deleted
pod "kube-proxy-whmll" deleted

3. Check the generated ipvs rules
[root@k8s-master ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.81.210:30376 rr
TCP  192.168.122.1:30376 rr
TCP  10.96.0.1:443 rr
  -> 192.168.81.210:6443          Masq    1      0          0         
TCP  10.96.0.10:53 rr
  -> 10.244.0.18:53               Masq    1      0          0         
  -> 10.244.0.19:53               Masq    1      0          0         
TCP  10.96.0.10:9153 rr
  -> 10.244.0.18:9153             Masq    1      0          0         
  -> 10.244.0.19:9153             Masq    1      0          0         
TCP  10.101.187.80:80 rr
TCP  10.102.142.245:80 rr
  -> 10.244.1.198:8080            Masq    1      0          0         
TCP  10.103.231.226:80 rr persistent 10800
  -> 10.244.1.202:8080            Masq    1      0          0         
TCP  10.111.44.36:443 rr
  -> 192.168.81.230:443           Masq    1      1          0         
TCP  10.111.227.155:80 rr persistent 10800
TCP  10.244.0.0:30376 rr
TCP  10.244.0.1:30376 rr
TCP  127.0.0.1:30376 rr
TCP  172.17.0.1:30376 rr
UDP  10.96.0.10:53 rr
  -> 10.244.0.18:53               Masq    1      0          0         
  -> 10.244.0.19:53               Masq    1      0          0 

Take one of the ipvs rules above as an example:

[root@k8s-master ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.96.0.10:53 rr
  -> 10.244.0.18:53               Masq    1      0          0         
  -> 10.244.0.19:53               Masq    1      0          0   
  • 10.96.0.10:53 is the Service IP; rr means round robin; the 10.244.0.18 and 10.244.0.19 entries below it are the Pod IP addresses
  • When this Service IP is accessed, kube-proxy forwards the request to one of the backend Pods in round-robin order according to the ipvs rules (see the quick check below)
  • This set of rules is generated on every node in the cluster, so any node in the cluster can reach the internal Pods
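
As a quick sanity check that requests really are balanced round robin, you can send a few requests to the Service IP and then watch the per-backend counters (10.96.0.10 is the cluster DNS Service from the listing above; this assumes dig is installed on the node):

# Send a few DNS queries to the Service IP, then inspect the ipvs counters
[root@k8s-master ~]# for i in 1 2 3 4; do dig @10.96.0.10 kubernetes.default.svc.cluster.local +short; done
[root@k8s-master ~]# ipvsadm -Ln --stats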
