Implementing multi-cluster traffic management based on Istio

This article is shared from the Huawei Cloud Community post "Implementing Multi-Cluster Traffic Management Based on Istio" by the author "You can make a friend".

1. Background

Service governance for heterogeneous infrastructure such as multi-cloud and hybrid cloud is one of the scenarios Istio focuses on supporting. To improve service availability and avoid vendor lock-in, enterprises usually deploy applications in multiple clusters across multiple regions, or even across multi-cloud and hybrid cloud environments, and multi-cluster deployment has gradually become the preferred choice for enterprise applications. As a result, more and more users have a strong demand for cross-cluster service governance. Against this background, Istio, as the de facto standard in the service mesh field, has launched a variety of multi-cluster management solutions.

2. Introduction

Currently Istio supports 4 multi-cluster models.

  1. Flat network, single control plane model
  2. Flat network, multi-control plane model
  3. Non-flat network, single control plane model
  4. Non-flat network, multi-control plane model

A multi-cluster single control plane model means that multiple clusters share the same Istio control plane. A multi-cluster multi-control plane model means that each cluster independently runs its own Istio control plane. In either model, each Istio control plane (istiod) must connect to the kube-apiserver of all clusters and list-watch the Service, Endpoint, Pod, and Node objects of every cluster in order to control service access within and across clusters, but it only watches the Istio API objects (VirtualService, DestinationRule, Gateway) of its main cluster.

Depending on whether the inter-cluster network is flat, Istio further subdivides each control plane model:

  • Flat network: the container networks of the clusters are connected (for example via VPN), and Pods can reach each other directly across clusters.
  • Non-flat network: the container networks of the clusters are isolated from each other; Pods cannot reach each other directly across clusters, so cross-cluster access must go through an east-west gateway.

When choosing an Istio multi-cluster model for a production environment, you need to decide based on your actual scenario. If the network between clusters is flat, you can choose a flat network model; if the networks are isolated, choose a non-flat network model. If the cluster scale is small, the single control plane model is sufficient; for larger scale, the multi-control plane model is preferable.

This document uses the non-flat network multi-control plane model for the installation walkthrough. The non-flat network multi-control plane model has the following characteristics:

  1. Different clusters do not need to be on one large network; that is, the container networks do not need to be connected at layer 3, and cross-cluster service access is forwarded through the Istio east-west gateway.
  2. There is no restriction on the Pod address range or Service address range of each Kubernetes cluster; they may overlap with those of other clusters, and different clusters do not interfere with each other.
  3. The sidecars of each Kubernetes cluster connect only to the Istio control plane of that cluster, which makes communication more efficient.
  4. Istiod only watches the Istio configuration (VirtualService, DestinationRule, Gateway) of its main cluster, so these resources must be redundantly replicated to the other clusters.
  5. Service access within the same cluster: Pods connect directly. Cross-cluster service access: the DNS proxy resolves the service domain names of other clusters and, because the cluster networks are isolated from each other, the traffic is relayed through the east-west gateway of the remote cluster.

3. ClusterMesh environment construction

Build two clusters, cluster1 and cluster2, install an Istio control plane on each cluster, and set both as primary clusters. Cluster cluster1 is on the network1 network, and cluster cluster2 is on the network2 network.

3.1 Prerequisites

The environment for this setup is as follows: the Kubernetes clusters are built with Kind v0.19.0, the Kubernetes version is 1.27.3, and the Istio version is 1.20.1.


Before building the k8s clusters, make sure docker, kubectl, and kind are installed on the Linux node.

Download the istioctl binary:

curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.20.1 TARGET_ARCH=x86_64 sh -
Add the istioctl client to the PATH.
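For example, assuming the download script extracted the release into an istio-1.20.1 directory under the current working directory:

export PATH="$PWD/istio-1.20.1/bin:$PATH"
istioctl version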

3.2 Kubernetes cluster installation

The installation script for the cluster1 and cluster2 clusters is as follows:

#!/usr/bin/env bash
# create-cluster.sh
# This script handles the creation of multiple clusters using kind and the
# ability to create and configure an insecure container registry.

set -o xtrace
set -o errexit
set -o nounset
set -o pipefail

# shellcheck source=util.sh
NUM_CLUSTERS="${NUM_CLUSTERS:-2}"
KIND_IMAGE="${KIND_IMAGE:-}"
KIND_TAG="${KIND_TAG:-v1.27.3@sha256:3966ac761ae0136263ffdb6cfd4db23ef8a83cba8a463690e98317add2c9ba72}"
OS="$(uname)"
function create-clusters() {
  local num_clusters=${1}

  local image_arg=""
  if [[ "${KIND_IMAGE}" ]]; then
    image_arg="--image=${KIND_IMAGE}"
  elif [[ "${KIND_TAG}" ]]; then
    image_arg="--image=kindest/node:${KIND_TAG}"
  fi
  for i in $(seq "${num_clusters}"); do
    kind create cluster --name "cluster${i}" "${image_arg}"
    fixup-cluster "${i}"
    echo

  done
}

function fixup-cluster() {
  local i=${1} # cluster num

  if [ "$OS" != "Darwin" ];then
    # Set container IP address as kube API endpoint in order for clusters to reach kube API servers in other clusters.
    local docker_ip
    docker_ip=$(docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' "cluster${i}-control-plane")
    kubectl config set-cluster "kind-cluster${i}" --server="https://${docker_ip}:6443"
  fi

  # Simplify context name
  kubectl config rename-context "kind-cluster${i}" "cluster${i}"
}
echo "Creating ${NUM_CLUSTERS} clusters"
create-clusters "${NUM_CLUSTERS}"
kubectl config use-context cluster1

echo "Kind CIDR is $(docker network inspect -f '{{$map := index .IPAM.Config 0}}{{index $map "Subnet"}}' kind)"

echo "Complete"

During the cluster installation above, so that istiod can access the kube-apiserver of the other cluster, the cluster's API server address is set to the IP address of the control-plane node. Because the clusters are deployed with kind, the control-plane nodes of the two clusters are in fact Docker containers running on the same host.


Confirm that cluster1 and cluster2 are ready.
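For example, the readiness of both clusters can be checked with:

kubectl get nodes --context=cluster1
kubectl get nodes --context=cluster2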


3.3 Use MetalLB to allocate External IP to the gateway

Because kind is used to deploy multiple clusters, creating the Istio north-south gateway and east-west gateway requires LoadBalancer Services, which in turn need external IPs. Here MetalLB is used to allocate and announce the LB IP addresses.
MetalLB is deployed in L2 mode, using the node subnet of the kind clusters: 172.18.0.0/16.

MetalLB configuration manifest for cluster1: metallb-config-1.yaml

### for cluster1 
##Configure IPAddressPool for allocation of lbip addresses. In L2 mode, the ippool address and the worker node can be in the same subnet. 
apiVersion: metallb.io/v1beta1 
kind: IPAddressPool 
metadata: 
  name: first-pool 
  namespace: metallb-system 
spec: 
  addresses: 
    - 172.18.1.230-172.18.1.240 
---
##Configure L2Advertisement for address announcement 
apiVersion: metallb.io/v1beta1 
kind: L2Advertisement 
metadata: 
  name: first-adv 
  namespace: metallb-system 
spec: 
  ipAddressPools: 
    - first-pool

MetalLB configuration manifest for cluster2: metallb-config-2.yaml

### for cluster2 
##Configure IPAddressPool for allocation of lbip addresses. In L2 mode, the ippool address and the worker node can be in the same subnet. 
apiVersion: metallb.io/v1beta1 
kind: IPAddressPool 
metadata: 
  name: second-pool 
  namespace: metallb-system 
spec: 
  addresses: 
    - 172.18.1.241-172.18.1.252 
---
##Configure L2Advertisement for address announcement 
apiVersion: metallb.io/v1beta1 
kind: L2Advertisement 
metadata: 
  name: second-adv 
  namespace: metallb-system 
spec: 
  ipAddressPools: 
    - second-pool

Install MetalLB using the following script:

#!/usr/bin/env bash 

set -o xtrace 
set -o errexit 
set -o nounset 
set -o pipefail 

NUM_CLUSTERS="${NUM_CLUSTERS:-2}" 
for i in $(seq "${NUM_CLUSTERS}"); do 
  echo "Starting metallb deployment in cluster${i}" 
  kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.10/config/manifests/metallb-native.yaml --context "cluster${i}" 
  kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)" --context "cluster${i}" 
  ## Wait for the MetalLB workloads to come up; otherwise creating the IPAddressPool/L2Advertisement below will fail. Increase the wait time if needed. 
  sleep 10 
  kubectl apply -f ./metallb-config-${i}.yaml --context "cluster${i}" 
  echo "----" 
done

Confirm the MetalLB deployment status.
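For example, the controller and speaker pods should be Running in both clusters:

kubectl get pods -n metallb-system --context=cluster1
kubectl get pods -n metallb-system --context=cluster2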


Confirm IPAddressPool information:
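For example:

kubectl get ipaddresspool -n metallb-system --context=cluster1
kubectl get ipaddresspool -n metallb-system --context=cluster2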


3.4 Configure trust between clusters with a shared root CA

To support secure cross-cluster mTLS communication, the multi-control plane model requires that the control plane (istiod) of each cluster uses an intermediate CA certificate issued by the same root CA, so that Citadel can issue workload certificates that support cross-cluster mutual TLS authentication.
The Istio east-west gateway (used for cross-cluster access) relies on SNI-based routing: it automatically routes a request to the cluster corresponding to the SNI value of the TLS handshake. For this reason, cross-network access in a non-flat network requires that all traffic be TLS encrypted.

Insert the certificates and keys into each cluster. The script is as follows (it must be placed in the top-level directory of the Istio installation package):

#!/usr/bin/env bash 

set -o xtrace 
#set -o errexit 
set -o nounset 
set -o pipefail 
NUM_CLUSTERS="${NUM_CLUSTERS:-2}" 
##Create a directory in the top directory of the istio installation package. To store certificates and keys 
mkdir -p certs 
pushd certs 

##Generate root certificates and keys 
make -f ../tools/certs/Makefile.selfsigned.mk root-ca 

for i in $(seq "${NUM_CLUSTERS}" ); do 
  ##For each cluster, generate an intermediate certificate and key for Istio CA 
  make -f ../tools/certs/Makefile.selfsigned.mk "cluster${i}-cacerts" 
  ##For each cluster, create the istio-system namespace 
  kubectl create namespace istio-system --context "cluster${i}" 
  ##For each cluster, add the network identifier by labeling the istio-system namespace with the topology.istio.io/network label 
  kubectl --context="cluster${i}" label namespace istio-system topology.istio.io/network="network${i}" 
  ##For each cluster, label the control-plane node with region and availability-zone labels so that Istio can implement locality failover and locality load balancing 
  kubectl --context="cluster${i}" label node "cluster${i}-control-plane" topology.kubernetes.io/region="region${i}" 
  kubectl --context="cluster${i}" label node "cluster${i}-control-plane" topology.kubernetes.io/zone="zone${i}" 
  ##In each cluster, create the private secret cacerts using all the input files ca-cert.pem, ca-key.pem, root-cert.pem and cert-chain.pem 
  kubectl delete secret cacerts -n istio-system --context "cluster${i}" 
  kubectl create secret generic cacerts -n istio-system --context "cluster${i}" \ 
      --from-file="cluster${i}/ca-cert.pem" \ 
      --from-file="cluster${i}/ca-key.pem" \ 
      --from-file="cluster${i}/root-cert.pem" \ 
      --from-file="cluster${i}/cert-chain.pem" 
  echo "----" 
done
Executing the script will generate files such as root certificates and intermediate certificates.



3.5 Istio service mesh installation

Install the multi-control plane Istio mesh on the cluster1 and cluster2 clusters.

Set cluster1 as a primary cluster by executing the following command in the Istio installation directory:

cat <<EOF > cluster1.yaml 
apiVersion: install.istio.io/v1alpha1 
kind: IstioOperator 
spec: 
  values: 
    global: 
      meshID: mesh1 
      multiCluster: # Enable multi-cluster configuration 
        clusterName: cluster1 # Specify the k8s cluster name 
      network: network1 # Specify the network identifier 
      logging: 
        level: debug 
EOF

Set cluster2 as a primary cluster by executing the following command in the Istio installation directory:

cat <<EOF > cluster2.yaml 
apiVersion: install.istio.io/v1alpha1 
kind: IstioOperator 
spec: 
  values: 
    global: 
      meshID: mesh2 
      multiCluster: # Enable multi-cluster configuration 
        clusterName: cluster2 # Specify the k8s cluster name 
      network: network2 # Specify the network identifier 
      logging: 
        level: debug 
EOF
Write an automated installation script:
#!/usr/bin/env bash

set -o xtrace
set -o errexit
set -o nounset
set -o pipefail

OS="$(uname)"
NUM_CLUSTERS="${NUM_CLUSTERS:-2}"

for i in $(seq "${NUM_CLUSTERS}"); do

echo "Starting istio deployment in cluster${i}"

istioctl install --force --context="cluster${i}" -f "cluster${i}.yaml"

echo "Generate eastwest gateway in cluster${i}"

## Install the east-west gateway in each cluster.
bash samples/multicluster/gen-eastwest-gateway.sh \
--mesh "mesh${i}" --cluster "cluster${i}" --network "network${i}" | \
istioctl --context="cluster${i}" install -y -f -

echo

done

Execute the script to install and deploy istio


Wait for a while for the installation to complete


You can see that the external IP used by the gateways in each cluster comes from the address pool configured in MetalLB.
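For example, listing the gateway Services shows their EXTERNAL-IP values (allocated by MetalLB from the pools configured above):

kubectl get svc -n istio-system --context=cluster1
kubectl get svc -n istio-system --context=cluster2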

3.6 Expose services on the east-west gateway

Because the clusters are on different networks, we need to expose all services (*.local) on the east-west gateway of both clusters. Although this gateway is public on the Internet, the services behind it can only be accessed by services with a trusted mTLS certificate, as if they were on the same network. Apply the following Gateway configuration to expose the services in both clusters:

apiVersion: networking.istio.io/v1beta1 
kind: Gateway 
metadata: 
  name: cross-network-gateway 
spec: 
  selector: 
    istio: eastwestgateway # Dedicated gateway for east-west traffic 
  servers: 
    - port: 
        number: 15443 # Already declared when the east-west gateway was installed 
        name: tls 
        protocol: TLS 
      tls: 
        mode: AUTO_PASSTHROUGH # The east-west gateway works in TLS AUTO_PASSTHROUGH mode 
      hosts: 
        - "*.local" # Expose all services

Apply the above Gateway configuration in each cluster separately:
kubectl -n istio-system --context=cluster${i} apply -f samples/multicluster/expose-services.yaml

3.7 Configure the secret so that istiod can access the remote cluster apiserver

The istiod in each k8s cluster needs to list-watch the kube-apiserver of the other clusters. To enable this, use each cluster's credentials to create a Secret object in the other clusters, allowing Istio to access the remote Kubernetes apiserver.

#!/usr/bin/env bash

set -o xtrace
set -o errexit
set -o nounset
set -o pipefail
OS="$(uname)"
NUM_CLUSTERS="${NUM_CLUSTERS:-2}"

for i in $(seq "${NUM_CLUSTERS}"); do
  for j in $(seq "${NUM_CLUSTERS}"); do
    if [ "$i" -ne "$j" ]
    then
      echo "Enable Endpoint Discovery between cluster${i} and cluster${j}"

      if [ "$OS" == "Darwin" ]
      then
        # Set container IP address as kube API endpoint in order for clusters to reach kube API servers in other clusters.
        docker_ip=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' "cluster${i}-control-plane")
        istioctl create-remote-secret \
        --context="cluster${i}" \
        --server="https://${docker_ip}:6443" \
        --name="cluster${i}" | \
          kubectl apply --validate=false --context="cluster${j}" -f -
      else
        istioctl create-remote-secret \
          --context="cluster${i}" \
          --name="cluster${i}" | \
          kubectl apply --validate=false --context="cluster${j}" -f -
      fi
    fi
  done
done

Execute the above script: the remote secret is created.
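A quick way to verify this is to list the secrets in the istio-system namespace; the secrets created by istioctl create-remote-secret should be named istio-remote-secret-<cluster name>:

kubectl get secrets -n istio-system --context=cluster1 | grep istio-remote-secret
kubectl get secrets -n istio-system --context=cluster2 | grep istio-remote-secret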


Check the istiod logs to confirm that the remote cluster is now being watched.
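One way to spot-check this is to grep the istiod logs for messages about the added remote cluster (the exact log text may vary between Istio versions):

kubectl logs -n istio-system deployment/istiod --context=cluster1 | grep -i "remote"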


4. Istio multi-cluster traffic management practice


Create a sample namespace in each cluster and enable automatic sidecar injection:
kubectl create --context=cluster1 namespace sample
kubectl create --context=cluster2 namespace sample

kubectl label --context=cluster1 namespace sample \
    istio-injection=enabled
kubectl label --context=cluster2 namespace sample \
    istio-injection=enabled

Deploy the helloworld Service in both clusters:
kubectl apply --context=cluster1 \
    -f samples/helloworld/helloworld.yaml \
    -l service=helloworld -n sample
kubectl apply --context=cluster2 \
    -f samples/helloworld/helloworld.yaml \
    -l service=helloworld -n sample

Deploy different versions of services in different clusters

Deploy the application helloworld-v1 to cluster1:
kubectl apply --context=cluster1 \
    -f samples/helloworld/helloworld.yaml \
    -l version=v1 -n sample
Deploy the application helloworld-v2 to cluster2:
kubectl apply --context=cluster2 \
    -f samples/helloworld/helloworld.yaml \
    -l version=v2 -n sample
Deploy test client
kubectl apply --context=cluster1 \
    -f samples/sleep/sleep.yaml -n sample
kubectl apply --context=cluster2 \
    -f samples/sleep/sleep.yaml -n sample

Confirm that the workload instances are deployed successfully and the sidecars have been injected.
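For example, each pod should report 2/2 containers ready (the application container plus the injected sidecar):

kubectl get pods -n sample --context=cluster1
kubectl get pods -n sample --context=cluster2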


4.1 Verify cross-cluster traffic

Use the sleep pod to repeatedly call the HelloWorld service. To confirm that cross-cluster load balancing works as expected, call the HelloWorld service from every cluster.

Send a request to the service HelloWorld from the Sleep pod in cluster1
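A typical way to do this, following the Istio multi-cluster verification steps (swap the context to cluster2 to test the other direction; the responses should alternate between helloworld-v1 and helloworld-v2):

kubectl exec --context=cluster1 -n sample -c sleep \
    "$(kubectl get pod --context=cluster1 -n sample -l app=sleep -o jsonpath='{.items[0].metadata.name}')" \
    -- curl -sS helloworld.sample:5000/hello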


Send a request to the service HelloWorld from the Sleep pod in cluster2


4.2 Verify access through the gateway

Access the helloworld service through the ingress gateway.

Create the Istio VirtualService and Gateway resources. The configuration manifest is as follows:

# helloworld-gateway.yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: helloworld-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "*"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: helloworld
spec:
  hosts:
    - "*"
  gateways:
    - helloworld-gateway
  http:
    - match:
        - uri:
            exact: /hello
      route:
        - destination:
            host: helloworld
            port:
              number: 5000

Note: This configuration needs to be applied to both clusters

The access effect is as follows:
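A minimal check, assuming the default istio-ingressgateway Service created by the installation, is to curl the /hello route through the gateway's external IP:

export INGRESS_IP=$(kubectl get svc istio-ingressgateway -n istio-system --context=cluster1 -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -s "http://${INGRESS_IP}/hello"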


4.3 Verify regional load balancing

For more fine-grained control over traffic, use a DestinationRule to configure the weight distribution so that traffic originating in region1 is split 80% to region1 and 20% to region2, and traffic originating in region2 is split 80% to region2 and 20% to region1.

# locality-lb-weight.yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: helloworld
  namespace: sample
spec:
  host: helloworld.sample.svc.cluster.local
  trafficPolicy:
    connectionPool:
      http:
        maxRequestsPerConnection: 1
    loadBalancer:
      simple: ROUND_ROBIN
      localityLbSetting:
        enabled: true
        distribute:
          - from: region1/*
            to:
              "region1/*": 80
              "region2/*": 20
          - from: region2/*
            to:
              "region2/*": 80
              "region1/*": 20
    outlierDetection:
      consecutive5xxErrors: 1
      interval: 1s
      baseEjectionTime: 1m

Note: This configuration needs to be applied to both clusters

Send a request to the service HelloWorld from cluster1 through the gateway


Send a request to the service HelloWorld from cluster2 through the gateway


4.4 Verify regional failover

When service instances are deployed across multiple regions/zones, if the instances in one region/zone become unavailable, traffic can be shifted to instances in other regions/zones. This locality failover keeps the service highly available.

# locality-lb-failover.yaml 
apiVersion: networking.istio.io/v1beta1 
kind: DestinationRule 
metadata: 
  name: helloworld 
  namespace: sample 
spec: 
  host: helloworld.sample.svc.cluster.local 
  trafficPolicy: 
    connectionPool: 
      http: 
        maxRequestsPerConnection: 1 # Turn off HTTP Keep-Alive and force each HTTP request to use a new connection policy 
    loadBalancer: 
      simple: ROUND_ROBIN 
      localityLbSetting: # Regional load balancing configuration, after turning on outlier detection, it is enabled by default. 
        enabled: true      
        failover: # Regional failover strategy 
          - from: region1   
            to: region2 
          - from: region2 
            to: region1 
    outlierDetection: 
      consecutive5xxErrors: 1 # 1 consecutive 5xx error 
      interval: 1s # Detection interval 1s 
      baseEjectionTime: 1m # Basic eviction time 1m

Note: This configuration needs to be applied to both clusters

Send a request to the service HelloWorld from cluster1 through the gateway


Simulate a failure by manually making the helloworld v1 instance in the cluster1 cluster fail.
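One simple way to simulate this (an assumption for illustration, not necessarily how the screenshots below were produced) is to remove the v1 instances in cluster1 so that no healthy endpoints remain in region1:

# Hypothetical failure simulation: scale the v1 deployment in cluster1 down to zero replicas
kubectl --context=cluster1 -n sample scale deployment helloworld-v1 --replicas=0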


Access the service again: outlier detection takes effect and triggers failover. The version in the response is now always v2, which means we are reaching the helloworld service in region2; locality failover is working.


Note that failover only takes over when all instances in the current region are unavailable; only then is traffic transferred to the other region. Otherwise, traffic is sent to the remaining available instances within the current region.

5. Remarks

References are as follows:

  1. istio open source community (installation instructions for cross-network multi-primary architecture):  https://istio.io/latest/zh/docs/setup/install/multicluster/multi-primary_multi-network/

  2. kind installation cluster script reference: https://github.com/cnych/multi-cluster-istio-kind/tree/main/kind-create 

  3. Multi-cluster certificate management reference: https://istio.io/latest/zh/docs/tasks/security/cert-management/plugin-ca-cert/


 
