Elastic HPA Experiment Based on KEDA and WorkloadSpread

This article documents an experiment with elastic HPA based on KEDA and WorkloadSpread.

Prepare the Cluster

This article uses kind to create a four-node cluster:

Install the kind binary on a Linux amd64 machine:

# curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.12.0/kind-linux-amd64
# chmod +x ./kind
# mv ./kind /usr/bin/kind

Initialize the cluster:

# cat << EOF > cluster.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker
EOF
# kind create cluster --config cluster.yaml
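
Once the cluster is created, a quick sanity check (not part of the original write-up) is to confirm that the control-plane node and the three workers are all Ready:

# kubectl get nodes
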
Install KEDA
# helm repo add kedacore https://kedacore.github.io/charts
# helm repo update
# kubectl create namespace keda
# helm install keda kedacore/keda --namespace keda
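
Optionally confirm the KEDA operator Pods are running before continuing:

# kubectl get pods -n keda
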
Install OpenKruise
# helm repo add openkruise https://openkruise.github.io/charts/
# helm repo update
# helm install kruise openkruise/kruise --version 1.0.1 --set featureGates="WorkloadSpread=true"
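
OpenKruise installs into the kruise-system namespace by default; optionally confirm its controller Pods are up:

# kubectl get pods -n kruise-system
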
Install Ingress-Nginx-Controller
# kubectl create ns ingress-nginx
# cat << EOF | kubectl apply -f -
apiVersion: v1
data:
  allow-snippet-annotations: "true"
  http-snippet: |
    server {
      listen 8080;
      server_name _ ;
      location /stub_status {
        stub_status on;
      }

      location / {
        return 404;
      }
    }
kind: ConfigMap
metadata:
  annotations:
    meta.helm.sh/release-name: ingress-nginx
    meta.helm.sh/release-namespace: ingress-nginx
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    helm.sh/chart: ingress-nginx-4.0.13
  name: ingress-nginx-controller
  namespace: ingress-nginx
EOF
# cat << EOF > values.yaml
controller:
  containerPort:
    http: 80
    https: 443
    status: 8080
EOF
# helm upgrade --install ingress-nginx ingress-nginx --repo https://kubernetes.github.io/ingress-nginx --namespace ingress-nginx --values values.yaml
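
Optionally wait for the controller to roll out (the deployment name below follows the chart's defaults for this release name):

# kubectl rollout status deployment/ingress-nginx-controller -n ingress-nginx

Next, expose the controller's stub_status port through a dedicated Service:
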
# cat << EOF | kubectl apply -f -
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx-controller-8080
  namespace: ingress-nginx
spec:
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  type:  ClusterIP
  ports:
  - name: myapp
    port:  8080
    targetPort: status
EOF
Install Nginx-Prometheus-Exporter
# cat << EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ingress-nginx-exporter
  namespace: ingress-nginx
  labels:
    app: ingress-nginx-exporter
spec:
  selector:
    matchLabels:
      app: ingress-nginx-exporter
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: ingress-nginx-exporter
    spec:
      containers:
      - image: nginx/nginx-prometheus-exporter:0.10
        imagePullPolicy: IfNotPresent
        args:
        - -nginx.scrape-uri=http://ingress-nginx-controller-8080.ingress-nginx.svc.cluster.local:8080/stub_status
        name: main
        ports:
        - name: http
          containerPort: 9113
          protocol: TCP
        resources:
          limits:
            cpu: "200m"
            memory: "256Mi"
EOF
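
The exporter serves Prometheus metrics on port 9113. As an optional check (not in the original article), port-forward to the Deployment in a second terminal and confirm that the nginx_* metrics are present:

# kubectl port-forward deploy/ingress-nginx-exporter -n ingress-nginx 9113:9113
# curl -s http://localhost:9113/metrics | grep nginx_http_requests_total
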
Install Prometheus-Operator
# helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
# helm repo update
# helm install kube-prometheus-stack-1640678515 prometheus-community/kube-prometheus-stack --namespace prometheus --create-namespace
# cat << EOF | kubectl apply -f -
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  labels:
    release: kube-prometheus-stack-1640678515
  name: ingress-nginx-monitor
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app: ingress-nginx-exporter
  podMetricsEndpoints:
  - port: http
EOF
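
After a scrape interval or two, Prometheus should be collecting the exporter's metrics. An optional way to confirm this is to port-forward the Prometheus Service (its truncated name also appears in the ScaledObject later) and query the metric that will drive scaling:

# kubectl port-forward svc/kube-prometheus-stack-1640-prometheus -n prometheus 9090:9090
# curl -s 'http://localhost:9090/api/v1/query?query=nginx_http_requests_total'
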
Verify the Environment
# kubectl run --rm -i --tty busybox --image=odise/busybox-curl --restart=Never -- sh
/ # curl -L http://ingress-nginx-controller-8080.ingress-nginx.svc.cluster.local:8080/stub_status
Active connections: 1 
server accepts handled requests
 182 182 181 
Reading: 0 Writing: 1 Waiting: 0 

If the curl command above returns output similar to this, the stub_status endpoint is working correctly.

View the Grafana Dashboard

# kubectl port-forward service/kube-prometheus-stack-1640678515-grafana -n prometheus 30080:80
Forwarding from 127.0.0.1:30080 -> 3000
Forwarding from [::1]:30080 -> 3000

Open http://localhost:30080 in a browser; the username is admin and the password is prom-operator.

Deploy the Test Service
# cat << EOF  | kubectl apply -f -
apiVersion: apps.kruise.io/v1alpha1
kind: CloneSet
metadata:
  name: hello-web
  namespace: ingress-nginx
  labels:
    app: hello-web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
      - name: hello-web
        image: zhangsean/hello-web
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: "1"
            memory: "256Mi"
          limits:
            cpu: "2"
            memory: "512Mi"
---
kind: Service
apiVersion: v1
metadata:
  name: hello-web
  namespace: ingress-nginx
spec:
  type: ClusterIP
  selector:
    app: hello-web
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-web
  namespace: ingress-nginx
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hello-web
            port:
              number: 80
  ingressClassName: nginx
EOF
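
Before wiring up autoscaling, it can help to confirm that the test service answers through the ingress controller (this is the same in-cluster address the load test hits later):

# kubectl run --rm -i --tty busybox --image=odise/busybox-curl --restart=Never -- sh
/ # curl -sI http://ingress-nginx-controller.ingress-nginx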

Label the nodes:

# kubectl label node kind-worker2 resource-pool=fixed
# kubectl label node kind-worker3 resource-pool=elastic
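
Optionally verify the labels:

# kubectl get nodes -L resource-pool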

Deploy the WorkloadSpread

# cat << EOF | kubectl apply -f -
apiVersion: apps.kruise.io/v1alpha1
kind: WorkloadSpread
metadata:
  name: workloadspread-demo
  namespace: ingress-nginx
spec:
  targetRef:
    apiVersion: apps.kruise.io/v1alpha1
    kind: CloneSet
    name: hello-web
  subsets:
    - name: subset-a
      requiredNodeSelectorTerm:
        matchExpressions:
          - key: resource-pool
            operator: In
            values:
              - fixed
      maxReplicas: 3
    - name: subset-b
      requiredNodeSelectorTerm:
        matchExpressions:
          - key: resource-pool
            operator: In
            values:
              - elastic
  scheduleStrategy:
    type: Adaptive
EOF

Pods are scheduled to subset-a (the fixed pool) first; once its maxReplicas of 3 is exceeded, additional Pods spill over to subset-b (the elastic pool).
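
To see how replicas are currently spread, inspect the WorkloadSpread status and the node each Pod landed on (the exact status fields may vary by OpenKruise version):

# kubectl get workloadspread workloadspread-demo -n ingress-nginx -o yaml
# kubectl get pods -n ingress-nginx -l app=hello-web -o wide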

Deploy the ScaledObject
# cat << EOF | kubectl apply -f -
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: ingress-nginx-scaledobject
  namespace: ingress-nginx
spec:
  maxReplicaCount: 10
  minReplicaCount: 1
  pollingInterval: 10
  cooldownPeriod:  2
  advanced:
    horizontalPodAutoscalerConfig:
      behavior:
        scaleDown:
          stabilizationWindowSeconds: 10
  scaleTargetRef:
    apiVersion: apps.kruise.io/v1alpha1
    kind: CloneSet
    name: hello-web
  triggers:
  - type: prometheus
    metadata:
      serverAddress: http://kube-prometheus-stack-1640-prometheus.prometheus:9090/
      metricName: nginx_http_requests_total 
      query: sum(rate(nginx_http_requests_total[3m]))
      threshold: '50'
EOF
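
KEDA turns the ScaledObject into an HPA (named keda-hpa-<scaledobject-name> by default). Optionally verify that both exist and that the ScaledObject reports READY:

# kubectl get scaledobject,hpa -n ingress-nginx
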
Load Test

Run a load test with the go-stress-testing tool (https://github.com/link1st/go-stress-testing/releases):

# kubectl run --rm -i --tty busybox --image=health/go-stress-testing --restart=Never -- sh
sh-4.2# go-stress-testing -c 100 -n 10000 -u http://ingress-nginx-controller.ingress-nginx

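While the load test runs, you can watch replicas fill the fixed pool first and then spill onto the elastic node, and watch the HPA react to the nginx_http_requests_total rate:

# kubectl get pods -n ingress-nginx -l app=hello-web -o wide -w
# kubectl get hpa -n ingress-nginx -w
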
Stop the load test and watch the Pod count decrease.
Ref: https://openkruise.io/zh/docs/best-practices/elastic-deployment

Reposted from blog.csdn.net/shida_csdn/article/details/123635205