Escape the "I can only deploy clusters" series - Istio service deployment and traditional traffic scheduling

Table of contents

1. Service Mesh

1. What is a service mesh

2. Open source implementation

2. Istio service deployment

1. Install Istio

2. Install Istio components

3. Traffic flow in the traditional mode

1. Scenario 1

2. Resource list

3. Hands-on implementation

4. Analyzing the default traffic scheduling mechanism

1. Detailed walkthrough of cluster traffic scheduling rules

2. Summary


1. Service Mesh

1. What is a service mesh

The purpose of a service mesh is to solve the inter-service communication and governance problems that appear once a system is split into microservices, and to provide a general service governance solution.

Sidecar refers to the sidecar pattern in software architecture. The pattern is inspired by the sidecar in everyday life: attaching a sidecar to a two-wheeled motorcycle to extend the existing vehicle with new functions.

        

The essence of the pattern is the decoupling of the data plane (business logic) from the control plane: the motorcycle driver focuses on riding the route, while the navigator in the sidecar focuses on the surroundings, the map, and navigation.

A service mesh focuses on handling communication between services. It is responsible for building a stable and reliable service communication infrastructure and making the whole architecture more cloud native. In engineering terms, a service mesh is essentially a set of lightweight service proxies deployed alongside the application services, transparent to those services.
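To make the sidecar idea concrete, the sketch below shows the general shape of a pod in which a lightweight proxy container runs next to the application container and handles its traffic. The image names and ports are placeholders for illustration only; in a real Istio mesh the sidecar is injected automatically rather than written by hand.

apiVersion: v1
kind: Pod
metadata:
  name: demo-app
spec:
  containers:
  - name: app                         # business container: only implements business logic
    image: example/app:latest         # placeholder image
    ports:
    - containerPort: 8080
  - name: proxy-sidecar               # sidecar container: proxies the pod's inbound/outbound traffic
    image: envoyproxy/envoy:v1.15.0   # placeholder proxy image
    ports:
    - containerPort: 15001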

2. Open source implementation

The first-generation service meshes: Linkerd and Envoy

Linkerd, written in Scala, was the industry's first open source service mesh solution; its author William Morgan is an evangelist and practitioner of the service mesh idea. Envoy is written in C++11 and, both in theory and in practice, performs better than Linkerd. These two open source implementations are centered on the sidecar, and most of their focus is on being a good proxy while providing some common control plane functions. However, once you deploy a large number of sidecars in containers, managing and controlling those sidecars becomes a big challenge in itself. Thus the second generation of service mesh was born.

The second-generation service mesh: Istio

Istio is an open source project developed jointly by Google, IBM, and Lyft. It is currently the most mainstream service mesh solution and the de facto standard for second-generation service meshes.

2. Istio service deployment

1. Install Istio

https://istio.io/latest/docs/setup/getting-started/

Download Istio

The download contains the installation files, samples, and the istioctl command-line tool.

  1. Visit the Istio release page to download the installation file for your operating system. On macOS or Linux systems, you can also download the latest version of Istio with the following command:

    $ wget https://github.com/istio/istio/releases/download/1.7.3/istio-1.7.3-linux-amd64.tar.gz
  2. Extract the archive and change into the directory of the Istio package. For example, if the package is istio-1.7.3:

    $ tar zxf istio-1.7.3-linux-amd64.tar.gz
    $ ll istio-1.7.3
    drwxr-x---  2 root root    22 Sep 27 08:33 bin
    -rw-r--r--  1 root root 11348 Sep 27 08:33 LICENSE
    drwxr-xr-x  6 root root    66 Sep 27 08:33 manifests
    -rw-r-----  1 root root   756 Sep 27 08:33 manifest.yaml
    -rw-r--r--  1 root root  5756 Sep 27 08:33 README.md
    drwxr-xr-x 20 root root   330 Sep 27 08:33 samples
    drwxr-x---  3 root root   133 Sep 27 08:33 tools
  3. Copy the istioctl client to a directory on your PATH:

    $ cp bin/istioctl /bin/
  4. Configure command auto-completion

    The istioctl auto-completion file is located in the tools directory. Enable tab completion by copying the istioctl.bash file into your home directory and sourcing it (to keep it across sessions, also source it from your .bashrc; see the snippet after this step):

    $ cp tools/istioctl.bash ~
    $ source ~/istioctl.bash
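To keep the completion available in new shells, you can also source the file from your .bashrc (a minimal sketch; adjust the path if you copied istioctl.bash somewhere else):

$ echo 'source ~/istioctl.bash' >> ~/.bashrc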

2. Install Istio components

https://istio.io/latest/zh/docs/setup/install/istioctl/#display-the-configuration-of-a-profile

Install directly using istioctl:

$ istioctl install --set profile=demo
✔ Istio core installed
✔ Istiod installed
✔ Egress gateways installed
✔ Ingress gateways installed
✔ Installation complete
$ kubectl -n istio-system get po
NAME                                    READY   STATUS    RESTARTS   AGE
istio-egressgateway-7bf76dd59-n9t5l     1/1     Running   0          77s
istio-ingressgateway-586dbbc45d-xphjb   1/1     Running   0          77s
istiod-6cc5758d8c-pz28m                 1/1     Running   0          84s

Istio provides several built-in configuration profiles for different environments:

# View the available profiles
$ istioctl profile list
# We are using the demo profile, which installs all components
# Generate the corresponding Kubernetes manifests:
$ istioctl manifest generate --set profile=demo > istio-kubernetes-manifest.yaml
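The page linked above also shows how to inspect what a profile actually configures. For example, the demo profile's configuration can be dumped and compared before installing (a quick sketch using the istioctl profile subcommands documented there):

# Show the full configuration of the demo profile
$ istioctl profile dump demo
# Show how the demo profile differs from the default profile
$ istioctl profile diff default demo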

Uninstall

$ istioctl manifest generate --set profile=demo | kubectl delete -f -

3. Traffic flow in the traditional mode

1. Scenario 1

The front-end service front-tomcat accesses the back-end service bill-service; by default, requests are randomly dispatched 50/50 to the two back-end Deployments bill-service-v1 and bill-service-v2.

2. Resource list

front-tomcat-dpl-v1.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: front-tomcat
    version: v1
  name: front-tomcat-v1
  namespace: istio-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: front-tomcat
      version: v1
  template:
    metadata:
      labels:
        app: front-tomcat
        version: v1
    spec:
      containers:
      - image: consol/tomcat-7.0:latest
        name: front-tomcat

bill-service-dpl-v1.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    service: bill-service
    version: v1
  name: bill-service-v1
  namespace: istio-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      service: bill-service
      version: v1
  template:
    metadata:
      labels:
        service: bill-service
        version: v1
    spec:
      containers:
      - image: nginx:alpine
        name: bill-service
        command: ["/bin/sh", "-c", "echo 'this is bill-service-v1'>/usr/share/nginx/html/index.html;nginx -g 'daemon off;'"]

bill-service-dpl-v2.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    service: bill-service
    version: v2
  name: bill-service-v2
  namespace: istio-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      service: bill-service
      version: v2
  template:
    metadata:
      labels:
        service: bill-service
        version: v2
    spec:
      containers:
      - image: nginx:alpine
        name: bill-service
        command: ["/bin/sh", "-c", "echo 'hello, this is bill-service-v2'>/usr/share/nginx/html/index.html;nginx -g 'daemon off;'"]

bill-service-svc.yaml

apiVersion: v1
kind: Service
metadata:
  labels:
    service: bill-service
  name: bill-service
  namespace: istio-demo
spec:
  ports:
  - name: http
    port: 9999
    protocol: TCP
    targetPort: 80
  selector:
    service: bill-service
  type: ClusterIP

3. Hands-on implementation

$ kubectl create namespace istio-demo
$ kubectl apply -f front-tomcat-dpl-v1.yaml
$ kubectl apply -f bill-service-dpl-v1.yaml
$ kubectl apply -f bill-service-dpl-v2.yaml
$ kubectl apply -f bill-service-svc.yaml
[root@k8s-master demo]# kubectl -n istio-demo exec front-tomcat-v1-7f8c94c6c8-lfz5m --  curl -s bill-service:9999
this is bill-service-v1
[root@k8s-master demo]# kubectl -n istio-demo exec front-tomcat-v1-7f8c94c6c8-lfz5m --  curl -s bill-service:9999
hello, this is bill-service-v2
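Repeating the request several times makes the roughly even split visible. A quick loop (reusing the front-tomcat pod name from this cluster, which will differ in yours) counts how often each backend answers:

$ for i in $(seq 1 10); do kubectl -n istio-demo exec front-tomcat-v1-7f8c94c6c8-lfz5m -- curl -s bill-service:9999; done | sort | uniq -c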

4. Analyzing the default traffic scheduling mechanism

1. Detailed walkthrough of cluster traffic scheduling rules

We already know that, by default, requests are split roughly 50/50 between the v1 and v2 pods. How does Kubernetes implement this default scheduling? Let's explain it at the network level.

In short: curl bill-service:9999 --> curl <svc-ip>:9999 --> check the container's routing table (route -n) --> no specific match, so the default (0.0.0.0) route sends the traffic to the 10.244.1.1 bridge --> 10.244.1.1 (cni0) sits on the host --> the host runs the kube-proxy component, which maintains iptables rules --> iptables-save | grep <svc-ip> finds the service chain --> iptables-save | grep <chain> --> reveals the rules that forward to each of the two pod addresses with 50% probability.

1. When executing kubectl -n istio-demo exec front-tomcat-v1-7f8c94c6c8-lfz5m -- curl -s bill-service:9999, the DNS configured inside the container resolves the name bill-service to the ClusterIP of the Service, so curl is really requesting the Service address.

# The cluster DNS resolves the service name (shown here from inside the bill-service pod)
[root@k8s-master demo]# kubectl -n istio-demo exec -it bill-service-v1-765cb46975-hmtmd sh
/ # nslookup bill-service
Server:		10.1.0.10
...
Address: 10.1.122.241

[root@k8s-master demo]# kubectl get svc -n istio-demo 
NAME           TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
bill-service   ClusterIP   10.1.122.241   <none>        9999/TCP   66m

curl -s bill-service:9999 is equivalent to curl -s 10.1.122.241:9999
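The resolution goes through the cluster DNS configured in the pod: the nameserver shown by nslookup above (10.1.0.10) and the search domains that let the short name bill-service resolve can be confirmed by reading the pod's resolv.conf (using the pod name from this cluster, which will differ in yours):

$ kubectl -n istio-demo exec bill-service-v1-765cb46975-hmtmd -- cat /etc/resolv.conf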

2. The default (0.0.0.0) route forwards the traffic to the 10.244.1.1 bridge

[root@k8s-master demo]# kubectl -n istio-demo exec -it bill-service-v1-765cb46975-hmtmd sh
/ # route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         10.244.1.1      0.0.0.0         UG    0      0        0 eth0
10.244.0.0      10.244.1.1      255.255.0.0     UG    0      0        0 eth0
10.244.1.0      *               255.255.255.0   U     0      0        0 eth0

No route matches 10.1.122.241, so the traffic follows the default (0.0.0.0) route to the gateway 10.244.1.1.

3. The host has no matching route either, and the traffic is handled by iptables rules

# 10.244.1.1 is actually an address on the host (the cni0 bridge), so at this point the traffic has moved from inside the container to the host
[root@k8s-node2 ~]# ip a | grep -e 10.244.1.1
    inet 10.244.1.1/24 brd 10.244.1.255 scope global cni0

# The host's routing table has no matching rule for the ClusterIP either

# The host runs the kube-proxy component, which maintains iptables rules. Even though there is no direct host route, the traffic is intercepted by iptables; let's check whether iptables has matching rules
[root@k8s-node2 ~]# iptables-save | grep 10.1.122.241
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.1.122.241/32 -p tcp -m comment --comment "istio-demo/bill-service:http cluster IP" -m tcp --dport 9999 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.1.122.241/32 -p tcp -m comment --comment "istio-demo/bill-service:http cluster IP" -m tcp --dport 9999 -j KUBE-SVC-PK4BNTKC2JYVE7B2

4. Following the iptables chain finds the two backend addresses, each chosen with 50% probability

# The rule for the Service IP jumps to the KUBE-SVC-PK4BNTKC2JYVE7B2 chain
[root@k8s-node2 ~]# iptables-save | grep 10.1.122.241
-A KUBE-SERVICES -d 10.1.122.241/32 -p tcp -m comment --comment "istio-demo/bill-service:http cluster IP" -m tcp --dport 9999 -j KUBE-SVC-PK4BNTKC2JYVE7B2

# 50% of the traffic is forwarded to KUBE-SEP-OIO7GYZLNGRLZLYD; the remaining traffic falls through to KUBE-SEP-OXS2CP2Q2RMPFLD5. Let's see which backends they point to
[root@k8s-node2 ~]# iptables-save | grep KUBE-SVC-PK4BNTKC2JYVE7B2
-A KUBE-SVC-PK4BNTKC2JYVE7B2 -m comment --comment "istio-demo/bill-service:http" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-OIO7GYZLNGRLZLYD
-A KUBE-SVC-PK4BNTKC2JYVE7B2 -m comment --comment "istio-demo/bill-service:http" -j KUBE-SEP-OXS2CP2Q2RMPFLD5

# Look up the IP behind KUBE-SEP-OIO7GYZLNGRLZLYD
[root@k8s-node2 ~]# iptables-save | grep KUBE-SEP-OIO7GYZLNGRLZLYD
-A KUBE-SEP-OIO7GYZLNGRLZLYD -s 10.244.1.168/32 -m comment --comment "istio-demo/bill-service:http" -j KUBE-MARK-MASQ

# Look up the IP behind KUBE-SEP-OXS2CP2Q2RMPFLD5
[root@k8s-node2 ~]# iptables-save | grep KUBE-SEP-OXS2CP2Q2RMPFLD5
-A KUBE-SEP-OXS2CP2Q2RMPFLD5 -s 10.244.2.74/32 -m comment --comment "istio-demo/bill-service:http" -j KUBE-MARK-MASQ

# The two chains correspond exactly to the two bill-service pod IPs
[root@k8s-node2 ~]# kubectl get po -n istio-demo  -owide
NAME                               READY   STATUS    RESTARTS   AGE   IP             NODE        NOMINATED NODE   READINESS GATES
bill-service-v1-765cb46975-hmtmd   1/1     Running   0          89m   10.244.1.168   k8s-node2   <none>           <none>
bill-service-v2-6854775ffc-9n6jv   1/1     Running   0          87m   10.244.2.74    k8s-node1   <none>           <none>
front-tomcat-v1-7f8c94c6c8-lfz5m   1/1     Running   0          89m   10.244.1.169   k8s-node2   <none>           <none>
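As a side note on how kube-proxy builds these probabilities (a hypothetical illustration, not output captured from this cluster): with n endpoints it emits one statistic rule per endpoint, giving the first rule probability 1/n, the next 1/(n-1), and so on, with the last rule catching whatever remains, so the overall split stays even. For a Service with three endpoints the chain would look roughly like this:

# hypothetical chain and endpoint names, shown only to illustrate the pattern
-A KUBE-SVC-EXAMPLE -m statistic --mode random --probability 0.33333333333 -j KUBE-SEP-AAAAAAAAAAAAAAAA
-A KUBE-SVC-EXAMPLE -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-BBBBBBBBBBBBBBBB
-A KUBE-SVC-EXAMPLE -j KUBE-SEP-CCCCCCCCCCCCCCCC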

2. Summary

When the front-tomcat container executes curl bill-service:9999, DNS resolution turns the request into curl -s 10.1.122.241:9999. Following the container's routing table (route -n), the traffic takes the default route to the 10.244.1.1 bridge. That bridge address lives on the host, but the host has no route for the ClusterIP either; the cluster network is instead scheduled by iptables rules maintained by the kube-proxy component. Walking through iptables-save step by step reveals the path: Service IP --> 50% probability rules --> the two bill-service pod IPs.
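The whole trace can be reproduced in a few commands on a node where kube-proxy runs in iptables mode (a minimal sketch, assuming the namespace and Service name used in this article):

# 1. Get the ClusterIP of the Service
$ SVC_IP=$(kubectl -n istio-demo get svc bill-service -o jsonpath='{.spec.clusterIP}')
# 2. Find the KUBE-SVC chain that the ClusterIP rule jumps to
$ SVC_CHAIN=$(iptables-save | grep -- "-d $SVC_IP/32" | grep -o 'KUBE-SVC-[A-Z0-9]*' | head -1)
# 3. List the per-endpoint rules and their probabilities
$ iptables-save | grep -- "-A $SVC_CHAIN "
# 4. Compare the endpoint IPs with the pod IPs
$ kubectl -n istio-demo get po -o wide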
