This article is shared from the Huawei Cloud Community article "Understanding the K8s Image Cache Manager kube-fledged" by Shanhe.
We know that when a pod is scheduled in Kubernetes, the node it lands on must first pull the container's image. In several scenarios, waiting for that pull is a problem:

- The application needs to start and/or scale quickly. For example, applications doing real-time data processing must scale out rapidly when data volumes explode.
- The images are large and span multiple versions; node storage is limited, and unneeded images have to be cleaned up dynamically.
- Serverless functions often need to react to incoming events immediately and start containers within a fraction of a second.
- IoT applications running on edge devices need to tolerate intermittent network connectivity to the image registry.
- Images must be pulled from a private registry and you cannot give everyone access to pull from it, so the images need to be made available directly on the cluster nodes.
- A cluster administrator or operator needs to upgrade an application and wants to verify in advance whether the new image can be pulled successfully.
kube-fledged is a Kubernetes operator that creates and manages a cache of container images directly on the worker nodes of a Kubernetes cluster. It lets users define a list of images and the worker nodes onto which those images should be cached (i.e. pre-pulled). As a result, application pods can start almost immediately, since no image has to be fetched from the registry.

kube-fledged provides a CRUD API to manage the life cycle of the image cache, and supports several configurable parameters so you can tailor it to your needs.
Kubernetes has a built-in image garbage collection mechanism: the kubelet on each node periodically checks whether disk usage has reached a certain threshold (configurable through flags), and once that threshold is crossed it automatically deletes all unused images from the node.

kube-fledged therefore implements an automatic, periodic refresh mechanism. If an image in the cache is deleted by the kubelet's garbage collector, the next refresh cycle pulls it back into the cache, which keeps the image cache up to date.
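For reference, the kubelet thresholds mentioned above live in the KubeletConfiguration. A minimal sketch (the percentage values here are illustrative, not recommendations):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Image GC starts once disk usage exceeds the high threshold and
# reclaims space until usage drops below the low threshold.
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
```

It is exactly this reclamation that can evict a cached image, which is why kube-fledged's refresh cycle matters.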
Design Flow
https://github.com/senthilrch/kube-fledged/blob/master/docs/kubefledged-architecture.png
Deploy kube-fledged
Deploying with Helm
┌──[[email protected]]-[~/ansible]
└─$mkdir kube-fledged
┌──[[email protected]]-[~/ansible]
└─$cd kube-fledged
┌──[[email protected]]-[~/ansible/kube-fledged]
└─$export KUBEFLEDGED_NAMESPACE=kube-fledged
┌──[[email protected]]-[~/ansible/kube-fledged]
└─$kubectl create namespace ${KUBEFLEDGED_NAMESPACE}
namespace/kube-fledged created
┌──[[email protected]]-[~/ansible/kube-fledged]
└─$helm repo add kubefledged-charts https://senthilrch.github.io/kubefledged-charts/
"kubefledged-charts" has been added to your repositories
┌──[[email protected]]-[~/ansible/kube-fledged]
└─$helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "kubefledged-charts" chart repository
...Successfully got an update from the "kubescape" chart repository
...Successfully got an update from the "rancher-stable" chart repository
...Successfully got an update from the "skm" chart repository
...Successfully got an update from the "openkruise" chart repository
...Successfully got an update from the "awx-operator" chart repository
...Successfully got an update from the "botkube" chart repository
Update Complete. ⎈Happy Helming!⎈
┌──[[email protected]]-[~/ansible/kube-fledged]
└─$helm install --verify kube-fledged kubefledged-charts/kube-fledged -n ${KUBEFLEDGED_NAMESPACE} --wait
During the actual deployment it turned out that, due to network problems, the chart could not be downloaded, so the deployment was done with YAML instead (make deploy-using-yaml).
Deploying with YAML files
┌──[[email protected]]-[~/ansible/kube-fledged]
└─$git clone https://github.com/senthilrch/kube-fledged.git
Cloning into 'kube-fledged'...
remote: Enumerating objects: 10613, done.
remote: Counting objects: 100% (1501/1501), done.
remote: Compressing objects: 100% (629/629), done.
remote: Total 10613 (delta 845), reused 1357 (delta 766), pack-reused 9112
Receiving objects: 100% (10613/10613), 34.58 MiB | 7.33 MiB/s, done.
Resolving deltas: 100% (4431/4431), done.
┌──[[email protected]]-[~/ansible/kube-fledged]
└─$ls
kube-fledged
┌──[[email protected]]-[~/ansible/kube-fledged]
└─$cd kube-fledged/
┌──[[email protected]]-[~/ansible/kube-fledged/kube-fledged]
└─$make deploy-using-yaml
kubectl apply -f deploy/kubefledged-namespace.yaml
On the first deployment, the images could not be pulled:
┌──[[email protected]]-[~]
└─$kubectl get all -n kube-fledged
NAME                                               READY   STATUS                  RESTARTS         AGE
pod/kube-fledged-controller-df69f6565-drrqg        0/1     CrashLoopBackOff        35 (5h59m ago)   21h
pod/kube-fledged-webhook-server-7bcd589bc4-b7kg2   0/1     Init:CrashLoopBackOff   35 (5h58m ago)   21h
pod/kubefledged-controller-55f848cc67-7f4rl        1/1     Running                 0                21h
pod/kubefledged-webhook-server-597dbf4ff5-l8fbh    0/1     Init:CrashLoopBackOff   34 (6h ago)      21h

NAME                                  TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/kube-fledged-webhook-server   ClusterIP   10.100.194.199   <none>        3443/TCP   21h
service/kubefledged-webhook-server    ClusterIP   10.101.191.206   <none>        3443/TCP   21h

NAME                                          READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/kube-fledged-controller       0/1     1            0           21h
deployment.apps/kube-fledged-webhook-server   0/1     1            0           21h
deployment.apps/kubefledged-controller        0/1     1            0           21h
deployment.apps/kubefledged-webhook-server    0/1     1            0           21h

NAME                                                     DESIRED   CURRENT   READY   AGE
replicaset.apps/kube-fledged-controller-df69f6565        1         1         0       21h
replicaset.apps/kube-fledged-webhook-server-7bcd589bc4   1         1         0       21h
replicaset.apps/kubefledged-controller-55f848cc67        1         1         0       21h
replicaset.apps/kubefledged-webhook-server-597dbf4ff5    1         1         0       21h
┌──[[email protected]]-[~]
└─$
First, find the images that need to be pulled:
┌──[[email protected]]-[~/ansible/kube-fledged/kube-fledged/deploy]
└─$cat *.yaml | grep image:
        - image: senthilrch/kubefledged-controller:v0.10.0
        - image: senthilrch/kubefledged-webhook-server:v0.10.0
        - image: senthilrch/kubefledged-webhook-server:v0.10.0
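The grep above can be taken one step further: de-duplicate the image references so each one is pulled only once. A small sketch (the temp manifest below is a stand-in for the real files under deploy/):

```shell
# Stand-in for the deploy/ directory, with the same image lines found above
workdir=$(mktemp -d)
cat > "$workdir/demo.yaml" <<'EOF'
      containers:
      - image: senthilrch/kubefledged-controller:v0.10.0
      - image: senthilrch/kubefledged-webhook-server:v0.10.0
      - image: senthilrch/kubefledged-webhook-server:v0.10.0
EOF
# Keep only the image reference (last field) and drop duplicates
images=$(grep -h 'image:' "$workdir"/*.yaml | awk '{print $NF}' | sort -u)
echo "$images"
```

The resulting list can be fed to `docker pull` in a loop, or to an ansible batch as done below.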
Rather than pulling them node by node, use ansible to pull them on all worker nodes in a batch:
┌──[[email protected]]-[~/ansible]
└─$ansible k8s_node -m shell -a "docker pull docker.io/senthilrch/kubefledged-cri-client:v0.10.0" -i host.yaml
Pull the other related images the same way. Once this is done, all the pods are in a normal state:
┌──[[email protected]]-[~/ansible]
└─$kubectl -n kube-fledged get all
NAME                                               READY   STATUS    RESTARTS   AGE
pod/kube-fledged-controller-df69f6565-wdb4g        1/1     Running   0          13h
pod/kube-fledged-webhook-server-7bcd589bc4-j8xxp   1/1     Running   0          13h
pod/kubefledged-controller-55f848cc67-klxlm        1/1     Running   0          13h
pod/kubefledged-webhook-server-597dbf4ff5-ktbsh    1/1     Running   0          13h

NAME                                  TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/kube-fledged-webhook-server   ClusterIP   10.100.194.199   <none>        3443/TCP   36h
service/kubefledged-webhook-server    ClusterIP   10.101.191.206   <none>        3443/TCP   36h

NAME                                          READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/kube-fledged-controller       1/1     1            1           36h
deployment.apps/kube-fledged-webhook-server   1/1     1            1           36h
deployment.apps/kubefledged-controller        1/1     1            1           36h
deployment.apps/kubefledged-webhook-server    1/1     1            1           36h

NAME                                                     DESIRED   CURRENT   READY   AGE
replicaset.apps/kube-fledged-controller-df69f6565        1         1         1       36h
replicaset.apps/kube-fledged-webhook-server-7bcd589bc4   1         1         1       36h
replicaset.apps/kubefledged-controller-55f848cc67        1         1         1       36h
replicaset.apps/kubefledged-webhook-server-597dbf4ff5    1         1         1       36h
Verify successful installation
┌──[[email protected]]-[~/ansible/kube-fledged/kube-fledged]
└─$kubectl get pods -n kube-fledged -l app=kubefledged
NAME                                          READY   STATUS    RESTARTS   AGE
kubefledged-controller-55f848cc67-klxlm       1/1     Running   0          16h
kubefledged-webhook-server-597dbf4ff5-ktbsh   1/1     Running   0          16h
┌──[[email protected]]-[~/ansible/kube-fledged/kube-fledged]
└─$kubectl get imagecaches -n kube-fledged
No resources found in kube-fledged namespace.
Using kubefledged
Create image cache object
Create an image cache object based on the demo file:
┌──[[email protected]]-[~/ansible/kube-fledged/kube-fledged]
└─$cd deploy/
┌──[[email protected]]-[~/ansible/kube-fledged/kube-fledged/deploy]
└─$cat kubefledged-imagecache.yaml
---
apiVersion: kubefledged.io/v1alpha2
kind: ImageCache
metadata:
  # Name of the image cache. A cluster can have multiple image cache objects
  name: imagecache1
  # The kubernetes namespace to be used for this image cache. You can choose a different namespace as per your preference
  namespace: kube-fledged
  labels:
    app: kubefledged
    kubefledged: imagecache
spec:
  # The "cacheSpec" field allows a user to define a list of images and onto which worker nodes those images should be cached (i.e. pre-pulled).
  cacheSpec:
  # Specifies a list of images (nginx:1.23.1) with no node selector, hence these images will be cached in all the nodes in the cluster
  - images:
    - ghcr.io/jitesoft/nginx:1.23.1
  # Specifies a list of images (cassandra:v7 and etcd:3.5.4-0) with a node selector, hence these images will be cached only on the nodes selected by the node selector
  - images:
    - us.gcr.io/k8s-artifacts-prod/cassandra:v7
    - us.gcr.io/k8s-artifacts-prod/etcd:3.5.4-0
    nodeSelector:
      tier: backend
  # Specifies a list of image pull secrets to pull images from private repositories into the cache
  imagePullSecrets:
  - name: myregistrykey
The images in the official demo cannot be pulled from here, so they need to be changed:
┌──[[email protected]]-[~/ansible/kube-fledged/kube-fledged/deploy]
└─$docker pull us.gcr.io/k8s-artifacts-prod/cassandra:v7
Error response from daemon: Get "https://us.gcr.io/v2/": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
┌──[[email protected]]-[~/ansible/kube-fledged/kube-fledged/deploy]
└─$
To test the node selector, we look up a node's labels and cache an image on that node only:
┌──[[email protected]]-[~/ansible/kube-fledged/kube-fledged/deploy]
└─$kubectl get nodes --show-labels
Since we pull images from a public registry, no imagePullSecrets object is needed:
┌──[[email protected]]-[~/ansible/kube-fledged/kube-fledged/deploy]
└─$vim kubefledged-imagecache.yaml
The modified yaml file:

- Adds a liruilong/my-busybox:latest image cache for all nodes
- Adds an image cache (liruilong/hikvision-sdk-config-ftp:latest) restricted by the label selector kubernetes.io/hostname: vms105.liruilongs.github.io
┌──[[email protected]]-[~/ansible/kube-fledged/kube-fledged/deploy]
└─$cat kubefledged-imagecache.yaml
---
apiVersion: kubefledged.io/v1alpha2
kind: ImageCache
metadata:
  # Name of the image cache. A cluster can have multiple image cache objects
  name: imagecache1
  # The kubernetes namespace to be used for this image cache. You can choose a different namespace as per your preference
  namespace: kube-fledged
  labels:
    app: kubefledged
    kubefledged: imagecache
spec:
  # The "cacheSpec" field allows a user to define a list of images and onto which worker nodes those images should be cached (i.e. pre-pulled).
  cacheSpec:
  # Specifies a list of images with no node selector, hence these images will be cached in all the nodes in the cluster
  - images:
    - liruilong/my-busybox:latest
  # Specifies a list of images with a node selector, hence these images will be cached only on the nodes selected by the node selector
  - images:
    - liruilong/hikvision-sdk-config-ftp:latest
    nodeSelector:
      kubernetes.io/hostname: vms105.liruilongs.github.io
  # Specifies a list of image pull secrets to pull images from private repositories into the cache
  #imagePullSecrets:
  #- name: myregistrykey
┌──[[email protected]]-[~/ansible/kube-fledged/kube-fledged/deploy]
└─$
Creating it directly reports an error:
┌──[[email protected]]-[~/ansible/kube-fledged/kube-fledged/deploy]
└─$kubectl create -f kubefledged-imagecache.yaml
Error from server (InternalError): error when creating "kubefledged-imagecache.yaml": Internal error occurred: failed calling webhook "validate-image-cache.kubefledged.io": failed to call webhook: Post "https://kubefledged-webhook-server.kube-fledged.svc:3443/validate-image-cache?timeout=1s": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubefledged.io")
┌──[[email protected]]-[~/ansible/kube-fledged/kube-fledged/deploy]
└─$kubectl get imagecaches -n kube-fledged
No resources found in kube-fledged namespace.
┌──[[email protected]]-[~/ansible/kube-fledged/kube-fledged/deploy]
└─$
The solution is to delete the kube-fledged objects and redeploy. I found the explanation in an issue on the project: https://github.com/senthilrch/kube-fledged/issues/76

It appears the webhook CA bundle is hardcoded: when the init-server starts, a new CA bundle is generated and the webhook configuration is patched with it. When another deployment occurs, the original CA bundle is re-applied, and webhook requests start failing until the webhook component is restarted so that the bundle is patched again.
┌──[[email protected]]-[~/ansible/kube-fledged/kube-fledged]
└─$make remove-kubefledged-and-operator
# Remove kubefledged
kubectl delete -f deploy/kubefledged-operator/deploy/crds/charts.helm.kubefledged.io_v1alpha2_kubefledged_cr.yaml
error: resource mapping not found for name: "kube-fledged" namespace: "kube-fledged" from "deploy/kubefledged-operator/deploy/crds/charts.helm.kubefledged.io_v1alpha2_kubefledged_cr.yaml": no matches for kind "KubeFledged" in version "charts.helm.kubefledged.io/v1alpha2"
ensure CRDs are installed first
┌──[[email protected]]-[~/ansible/kube-fledged/kube-fledged]
└─$make deploy-using-yaml
kubectl apply -f deploy/kubefledged-namespace.yaml
namespace/kube-fledged created
kubectl apply -f deploy/kubefledged-crd.yaml
customresourcedefinition.apiextensions.k8s.io/imagecaches.kubefledged.io unchanged
....................
kubectl rollout status deployment kubefledged-webhook-server -n kube-fledged --watch
Waiting for deployment "kubefledged-webhook-server" rollout to finish: 0 of 1 updated replicas are available...
deployment "kubefledged-webhook-server" successfully rolled out
kubectl get pods -n kube-fledged
NAME                                          READY   STATUS    RESTARTS   AGE
kubefledged-controller-55f848cc67-76c4v       1/1     Running   0          112s
kubefledged-webhook-server-597dbf4ff5-56h6z   1/1     Running   0          66s
Re-create the cache object; this time it succeeds:
┌──[[email protected]]-[~/ansible/kube-fledged/kube-fledged/deploy]
└─$kubectl create -f kubefledged-imagecache.yaml
imagecache.kubefledged.io/imagecache1 created
┌──[[email protected]]-[~/ansible/kube-fledged/kube-fledged/deploy]
└─$kubectl get imagecaches -n kube-fledged
NAME          AGE
imagecache1   10s
┌──[[email protected]]-[~/ansible/kube-fledged/kube-fledged/deploy]
└─$
View the image cache currently being managed:
┌──[[email protected]]-[~/ansible/kube-fledged] └─$kubectl get imagecaches imagecache1 -n kube-fledged -o json { "apiVersion": "kubefledged.io/v1alpha2", "kind": "ImageCache", "metadata": { "creationTimestamp": "2024-03-01T15:08:42Z", "generation": 83, "labels": { "app": "kubefledged", "kubefledged": "imagecache" }, "name": "imagecache1", "namespace": "kube-fledged", "resourceVersion": "20169836", "uid": "3a680a57-d8ab-444f-b9c9-4382459c5c72" }, "spec": { "cacheSpec": [ { "images": [ "liruilong/my-busybox:latest" ] }, { "images": [ "liruilong/hikvision-sdk-config-ftp:latest" ], "nodeSelector": { "kubernetes.io/hostname": "vms105.liruilongs.github.io" } } ] }, "status": { "completionTime": "2024-03-02T01:06:47Z", "message": "All requested images pulled succesfully to respective nodes", "reason": "ImageCacheRefresh", "startTime": "2024-03-02T01:05:33Z", "status": "Succeeded" } } ┌──[[email protected]]-[~/ansible/kube-fledged] └─$
Verify via ansible
┌──[[email protected]]-[~/ansible]
└─$ansible all -m shell -a "docker images | grep liruilong/my-busybox" -i host.yaml
192.168.26.102 | CHANGED | rc=0 >>
liruilong/my-busybox   latest   497b83a63aad   11 months ago   1.24MB
192.168.26.101 | CHANGED | rc=0 >>
liruilong/my-busybox   latest   497b83a63aad   11 months ago   1.24MB
192.168.26.103 | CHANGED | rc=0 >>
liruilong/my-busybox   latest   497b83a63aad   11 months ago   1.24MB
192.168.26.105 | CHANGED | rc=0 >>
liruilong/my-busybox   latest   497b83a63aad   11 months ago   1.24MB
192.168.26.100 | CHANGED | rc=0 >>
liruilong/my-busybox   latest   497b83a63aad   11 months ago   1.24MB
192.168.26.106 | CHANGED | rc=0 >>
liruilong/my-busybox   latest   497b83a63aad   11 months ago   1.24MB
┌──[[email protected]]-[~/ansible]
└─$
┌──[[email protected]]-[~/ansible]
└─$ansible all -m shell -a "docker images | grep liruilong/hikvision-sdk-config-ftp" -i host.yaml
192.168.26.102 | FAILED | rc=1 >>
non-zero return code
192.168.26.100 | FAILED | rc=1 >>
non-zero return code
192.168.26.103 | FAILED | rc=1 >>
non-zero return code
192.168.26.105 | CHANGED | rc=0 >>
liruilong/hikvision-sdk-config-ftp   latest   a02cd03b4342   4 months ago   830MB
192.168.26.101 | FAILED | rc=1 >>
non-zero return code
192.168.26.106 | FAILED | rc=1 >>
non-zero return code
┌──[[email protected]]-[~/ansible]
└─$
Turn on automatic refresh
┌──[[email protected]]-[~/ansible]
└─$kubectl annotate imagecaches imagecache1 -n kube-fledged kubefledged.io/refresh-imagecache=
imagecache.kubefledged.io/imagecache1 annotated
┌──[[email protected]]-[~/ansible]
└─$
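The kubectl annotate command above is equivalent to carrying this annotation on the ImageCache object itself. A sketch (assuming, as the command suggests, that the empty-valued annotation is what triggers the refresh):

```yaml
apiVersion: kubefledged.io/v1alpha2
kind: ImageCache
metadata:
  name: imagecache1
  namespace: kube-fledged
  annotations:
    # An empty-valued annotation asks the controller to refresh this cache
    kubefledged.io/refresh-imagecache: ""
```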
Add image cache
Add a new image (liruilong/jdk1.8_191:latest) to the cache and check the object:
┌──[[email protected]]-[~/ansible] └─$kubectl get imagecaches.kubefledged.io -n kube-fledged imagecache1 -o json { "apiVersion": "kubefledged.io/v1alpha2", "kind": "ImageCache", "metadata": { "creationTimestamp": "2024-03-01T15:08:42Z", "generation": 92, "labels": { "app": "kubefledged", "kubefledged": "imagecache" }, "name": "imagecache1", "namespace": "kube-fledged", "resourceVersion": "20175233", "uid": "3a680a57-d8ab-444f-b9c9-4382459c5c72" }, "spec": { "cacheSpec": [ { "images": [ "liruilong/my-busybox:latest", "liruilong/jdk1.8_191:latest" ] }, { "images": [ "liruilong/hikvision-sdk-config-ftp:latest" ], "nodeSelector": { "kubernetes.io/hostname": "vms105.liruilongs.github.io" } } ] }, "status": { "completionTime": "2024-03-02T01:43:32Z", "message": "All requested images pulled succesfully to respective nodes", "reason": "ImageCacheUpdate", "startTime": "2024-03-02T01:40:34Z", "status": "Succeeded" } } ┌──[[email protected]]-[~/ansible] └─$
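The edit that produced this spec touches only the selector-less image list. As a cacheSpec fragment:

```yaml
spec:
  cacheSpec:
  # Appending a second image to the selector-less list caches it on all nodes
  - images:
    - liruilong/my-busybox:latest
    - liruilong/jdk1.8_191:latest
  # The selector-bound list is unchanged
  - images:
    - liruilong/hikvision-sdk-config-ftp:latest
    nodeSelector:
      kubernetes.io/hostname: vms105.liruilongs.github.io
```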
Confirm via ansible: the first run shows the image has not arrived yet, and a short while later it has been pulled on every node.
┌──[[email protected]]-[~/ansible]
└─$ansible all -m shell -a "docker images | grep liruilong/jdk1.8_191" -i host.yaml
192.168.26.101 | FAILED | rc=1 >>
non-zero return code
192.168.26.100 | FAILED | rc=1 >>
non-zero return code
192.168.26.102 | FAILED | rc=1 >>
non-zero return code
192.168.26.103 | FAILED | rc=1 >>
non-zero return code
192.168.26.105 | FAILED | rc=1 >>
non-zero return code
192.168.26.106 | FAILED | rc=1 >>
non-zero return code
┌──[[email protected]]-[~/ansible]
└─$ansible all -m shell -a "docker images | grep liruilong/jdk1.8_191" -i host.yaml
192.168.26.101 | CHANGED | rc=0 >>
liruilong/jdk1.8_191   latest   17dbd4002a8c   5 years ago   170MB
192.168.26.102 | CHANGED | rc=0 >>
liruilong/jdk1.8_191   latest   17dbd4002a8c   5 years ago   170MB
192.168.26.100 | CHANGED | rc=0 >>
liruilong/jdk1.8_191   latest   17dbd4002a8c   5 years ago   170MB
192.168.26.103 | CHANGED | rc=0 >>
liruilong/jdk1.8_191   latest   17dbd4002a8c   5 years ago   170MB
192.168.26.105 | CHANGED | rc=0 >>
liruilong/jdk1.8_191   latest   17dbd4002a8c   5 years ago   170MB
192.168.26.106 | CHANGED | rc=0 >>
liruilong/jdk1.8_191   latest   17dbd4002a8c   5 years ago   170MB
┌──[[email protected]]-[~/ansible]
└─$
Delete image cache
┌──[[email protected]]-[~/ansible] └─$kubectl edit imagecaches imagecache1 -n kube-fledged imagecache.kubefledged.io/imagecache1 edited ┌──[[email protected]]-[~/ansible] └─$kubectl get imagecaches.kubefledged.io -n kube-fledged imagecache1 -o json { "apiVersion": "kubefledged.io/v1alpha2", "kind": "ImageCache", "metadata": { "creationTimestamp": "2024-03-01T15:08:42Z", "generation": 94, "labels": { "app": "kubefledged", "kubefledged": "imagecache" }, "name": "imagecache1", "namespace": "kube-fledged", "resourceVersion": "20175766", "uid": "3a680a57-d8ab-444f-b9c9-4382459c5c72" }, "spec": { "cacheSpec": [ { "images": [ "liruilong/jdk1.8_191:latest" ] }, { "images": [ "liruilong/hikvision-sdk-config-ftp:latest" ], "nodeSelector": { "kubernetes.io/hostname": "vms105.liruilongs.github.io" } } ] }, "status": { "message": "Image cache is being updated. Please view the status after some time", "reason": "ImageCacheUpdate", "startTime": "2024-03-02T01:48:03Z", "status": "Processing" } }
Confirming via ansible, you can see that the corresponding image cache has been cleared on both the master nodes and the worker nodes:
┌──[[email protected]]-[~/ansible]
└─$ansible all -m shell -a "docker images | grep liruilong/my-busybox" -i host.yaml
192.168.26.102 | CHANGED | rc=0 >>
liruilong/my-busybox   latest   497b83a63aad   11 months ago   1.24MB
192.168.26.101 | CHANGED | rc=0 >>
liruilong/my-busybox   latest   497b83a63aad   11 months ago   1.24MB
192.168.26.105 | FAILED | rc=1 >>
non-zero return code
192.168.26.100 | CHANGED | rc=0 >>
liruilong/my-busybox   latest   497b83a63aad   11 months ago   1.24MB
192.168.26.103 | FAILED | rc=1 >>
non-zero return code
192.168.26.106 | FAILED | rc=1 >>
non-zero return code
┌──[[email protected]]-[~/ansible]
└─$ansible all -m shell -a "docker images | grep liruilong/my-busybox" -i host.yaml
192.168.26.105 | FAILED | rc=1 >>
non-zero return code
192.168.26.102 | FAILED | rc=1 >>
non-zero return code
192.168.26.103 | FAILED | rc=1 >>
non-zero return code
192.168.26.101 | FAILED | rc=1 >>
non-zero return code
192.168.26.100 | FAILED | rc=1 >>
non-zero return code
192.168.26.106 | FAILED | rc=1 >>
non-zero return code
┌──[[email protected]]-[~/ansible]
└─$
Note that to clear all image caches in a list, the images array needs to be written as a single empty string ("").
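In yaml form, the cacheSpec after this edit looks as follows; the single empty-string entry in the first list tells the controller to delete all images it previously cached for that list:

```yaml
spec:
  cacheSpec:
  # An empty-string entry clears every image previously cached by this list
  - images:
    - ""
  # The selector-bound list is left in place
  - images:
    - liruilong/hikvision-sdk-config-ftp:latest
    nodeSelector:
      kubernetes.io/hostname: vms105.liruilongs.github.io
```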
┌──[[email protected]]-[~/ansible] └─$kubectl edit imagecaches imagecache1 -n kube-fledged imagecache.kubefledged.io/imagecache1 edited ┌──[[email protected]]-[~/ansible] └─$ansible all -m shell -a "docker images | grep liruilong/jdk1.8_191" -i host.yaml 192.168.26.102 | FAILED | rc=1 >> non-zero return code 192.168.26.101 | FAILED | rc=1 >> non-zero return code 192.168.26.100 | FAILED | rc=1 >> non-zero return code 192.168.26.105 | FAILED | rc=1 >> non-zero return code 192.168.26.103 | FAILED | rc=1 >> non-zero return code 192.168.26.106 | FAILED | rc=1 >> non-zero return code ┌──[[email protected]]-[~/ansible] └─$kubectl get imagecaches.kubefledged.io -n kube-fledged imagecache1 -o json { "apiVersion": "kubefledged.io/v1alpha2", "kind": "ImageCache", "metadata": { "creationTimestamp": "2024-03-01T15:08:42Z", "generation": 98, "labels": { "app": "kubefledged", "kubefledged": "imagecache" }, "name": "imagecache1", "namespace": "kube-fledged", "resourceVersion": "20176849", "uid": "3a680a57-d8ab-444f-b9c9-4382459c5c72" }, "spec": { "cacheSpec": [ { "images": [ "" ] }, { "images": [ "liruilong/hikvision-sdk-config-ftp:latest" ], "nodeSelector": { "kubernetes.io/hostname": "vms105.liruilongs.github.io" } } ] }, "status": { "completionTime": "2024-03-02T01:52:16Z", "message": "All cached images succesfully deleted from respective nodes", "reason": "ImageCacheUpdate", "startTime": "2024-03-02T01:51:47Z", "status": "Succeeded" } } ┌──[[email protected]]-[~/ansible] └─$
If you instead try to delete the list by commenting out the corresponding entries in the yaml file:
┌──[[email protected]]-[~/ansible/kube-fledged/kube-fledged/deploy]
└─$cat kubefledged-imagecache.yaml
---
apiVersion: kubefledged.io/v1alpha2
kind: ImageCache
metadata:
  # Name of the image cache. A cluster can have multiple image cache objects
  name: imagecache1
  # The kubernetes namespace to be used for this image cache. You can choose a different namespace as per your preference
  namespace: kube-fledged
  labels:
    app: kubefledged
    kubefledged: imagecache
spec:
  # The "cacheSpec" field allows a user to define a list of images and onto which worker nodes those images should be cached (i.e. pre-pulled).
  cacheSpec:
  # Specifies a list of images with no node selector, hence these images will be cached in all the nodes in the cluster
  #- images:
  #  - liruilong/my-busybox:latest
  # Specifies a list of images with a node selector, hence these images will be cached only on the nodes selected by the node selector
  - images:
    - liruilong/hikvision-sdk-config-ftp:latest
    nodeSelector:
      kubernetes.io/hostname: vms105.liruilongs.github.io
  # Specifies a list of image pull secrets to pull images from private repositories into the cache
  #imagePullSecrets:
  #- name: myregistrykey
┌──[[email protected]]-[~/ansible/kube-fledged/kube-fledged/deploy]
└─$
then the following error is reported:
┌──[[email protected]]-[~/ansible/kube-fledged/kube-fledged/deploy]
└─$kubectl edit imagecaches imagecache1 -n kube-fledged
error: imagecaches.kubefledged.io "imagecache1" could not be patched: admission webhook "validate-image-cache.kubefledged.io" denied the request: Mismatch in no. of image lists
You can run `kubectl replace -f /tmp/kubectl-edit-4113815075.yaml` to try this update again.
Parts of this post reference the following links.
© The copyright of the reference links in this article belongs to the original author. If there is any infringement, please inform us. If you agree with it, don’t be stingy with stars:)
https://github.com/senthilrch/kube-fledged