Getting started with k8s: kube-prometheus-stack all-in-one setup (Grafana + Prometheus)

Series of articles

Chapter 1: ✨Getting started with k8s: Bare-metal deployment of a k8s cluster
Chapter 2: ✨Getting started with k8s: Deploying applications to a k8s cluster
✨Getting started with k8s: Storage
Chapter 6: ✨Getting started with k8s: Configuring a StorageClass to dynamically provision local disk space with NFS
Chapter 7: ✨Getting started with k8s: Configuring ConfigMap & Secret
Chapter 8: ✨Getting started with k8s: Building MySQL with Helm
Chapter 9: ✨Getting started with k8s: Installing kubernetes-dashboard
Chapter 10: ✨Getting started with k8s: kube-prometheus-stack all-in-one setup (Grafana + Prometheus)

Artifact Hub chart page: https://artifacthub.io/packages/helm/prometheus-community/kube-prometheus-stack

1. Introduction

kube-prometheus-stack installs the kube-prometheus stack: a collection of Kubernetes manifests, Grafana dashboards, and Prometheus rules, combined with documentation and scripts, to provide easy-to-operate, end-to-end Kubernetes cluster monitoring with Prometheus using the Prometheus Operator.

2. Installation

1. Helm installation

Install kube-prometheus-stack with Helm:

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install prometheus-community/kube-prometheus-stack --generate-name
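
If you prefer a fixed release name and a dedicated namespace instead of --generate-name, a minimal sketch (the release name and namespace below are illustrative choices, not from the original article):

# Illustrative: install the chart as release "kube-prometheus-stack" into its own namespace
helm install kube-prometheus-stack prometheus-community/kube-prometheus-stack \
  --namespace monitoring --create-namespace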
2. YAML manifest installation

The Helm installation above may fail for network reasons. As an alternative, you can download the kube-prometheus project from GitHub and install it from its manifests.

GitHub download address: https://github.com/prometheus-operator/kube-prometheus

Create the kube-prometheus stack from the manifests folder:

# Create the CRDs and the monitoring namespace first
kubectl apply --server-side -f manifests/setup
# Wait until the ServiceMonitor CRD has been registered by the API server
until kubectl get servicemonitors --all-namespaces ; do date; sleep 1; echo ""; done
# Then create the monitoring components themselves
kubectl apply -f manifests/

To delete the kube-prometheus stack created from the manifests folder, use the following command:

kubectl delete --ignore-not-found=true -f manifests/ -f manifests/setup
3. View resources

If the manifests apply successfully, a monitoring namespace is created, along with the required Services, StatefulSets, Deployments, Secrets, ConfigMaps, and so on (the figure below is a screenshot taken after the error in the next step was resolved).

[Screenshot: resources created in the monitoring namespace]
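
A quick way to check the same thing from the command line (a minimal sketch; the namespace name follows the kube-prometheus defaults):

# Confirm the namespace exists and list the workloads created in it
kubectl get namespace monitoring
kubectl get deployments,statefulsets,daemonsets,services -n monitoring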

4. Fixing the Error: ImagePullBackOff

The error is shown below: the image pull failed.
[Screenshot: pods stuck in ImagePullBackOff]

List all resources in the monitoring namespace with kubectl get all -n monitoring; two workloads fail because their images cannot be pulled.
[Screenshot: kubectl get all -n monitoring output showing the two failing images]
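
To see exactly which images are failing and where the pods are scheduled, something like the following should work (a sketch; the actual pod names will differ in your cluster):

# Show pod status and node placement for the failing pods
kubectl get pods -n monitoring -o wide | grep -i imagepull
# Inspect one failing pod to see the exact image reference it is trying to pull
kubectl describe pod <failing-pod-name> -n monitoring | grep -i image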
Solution

  • Method 1: Pull the images from Docker Hub and use docker tag to retag them with the same names as the images that failed to pull. The images must exist on the node the pod is scheduled on, i.e. whichever node runs the pod needs the image locally (see the sketch after this list for finding that node and for a CLI alternative to Method 2).
# Pull the images from Docker Hub
docker pull willdockerhub/prometheus-adapter:v0.9.1
docker pull bitnami/kube-state-metrics:2.5.0
# Retag them to match the original image names
docker tag willdockerhub/prometheus-adapter:v0.9.1 k8s.gcr.io/prometheus-adapter/prometheus-adapter:v0.9.1
docker tag bitnami/kube-state-metrics:2.5.0 k8s.gcr.io/kube-state-metrics/kube-state-metrics:v2.5.0
  • Method 2: Modify the workload configuration in kubernetes-dashboard (change the image to one that can be pulled) and restart it.
    [Screenshot: editing the image in kubernetes-dashboard]
    If you have not installed kubernetes-dashboard, refer to my previous article, or simply change the image in the YAML manifest and redeploy.
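
A couple of command-line helpers for the two methods above (a sketch; the Deployment and container names follow the kube-prometheus defaults, so verify them with kubectl get deploy -n monitoring before running this):

# Method 1: find out which node each failing pod is scheduled on,
# so you know where to pull and retag the images
kubectl get pods -n monitoring -o wide | grep -Ei 'adapter|state-metrics'

# Method 2 (CLI alternative to the dashboard): point the Deployments
# at the mirror images directly
kubectl -n monitoring set image deployment/prometheus-adapter \
  prometheus-adapter=willdockerhub/prometheus-adapter:v0.9.1
kubectl -n monitoring set image deployment/kube-state-metrics \
  kube-state-metrics=bitnami/kube-state-metrics:2.5.0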
5. Installation successful

After the restart, all resources start successfully, as shown below.

[Screenshot: all pods in the monitoring namespace are Running]
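
To confirm this from the command line (a sketch):

# Watch the pods until everything reports Running
kubectl get pods -n monitoring -w
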
Once the installation above is complete, all Services are of type ClusterIP, so everything is reachable only from inside the cluster. Next, change the Service type to NodePort so the services can be reached through the physical nodes.

3. Access test

1. Grafana physical node access

In kubernetes-dashboard, modify the grafana Service configuration: change the type to NodePort and set the physical node port nodePort: 13000 (see the sketch below for a CLI alternative).
[Screenshot: grafana Service edited to NodePort 13000 in kubernetes-dashboard]
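
If you prefer the command line over the dashboard, a hedged sketch using kubectl patch (the Service and port names follow the kube-prometheus defaults; note that a nodePort of 13000 is outside the default 30000-32767 range, so the kube-apiserver's --service-node-port-range must allow it):

# Expose grafana on a fixed NodePort
kubectl -n monitoring patch svc grafana -p \
  '{"spec": {"type": "NodePort", "ports": [{"name": "http", "port": 3000, "nodePort": 13000}]}}'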

Access test: http://192.168.25.100:13000/ (default username and password: admin/admin).
[Screenshot: Grafana login page]

2. Prometheus physical node access

In kubernetes-dashboard, modify the prometheus-k8s Service configuration: change the type to NodePort and set the physical node port nodePort: 19090 (or patch it from the CLI as shown below).
[Screenshot: prometheus-k8s Service edited to NodePort 19090 in kubernetes-dashboard]
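
The equivalent kubectl patch sketch (the prometheus-k8s Service exposes port 9090 under the name web in the kube-prometheus defaults; the same NodePort range caveat applies):

# Expose Prometheus on a fixed NodePort
kubectl -n monitoring patch svc prometheus-k8s -p \
  '{"spec": {"type": "NodePort", "ports": [{"name": "web", "port": 9090, "nodePort": 19090}]}}'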

Access test http://192.168.25.100:19090/
[Screenshot: Prometheus web UI]

Origin blog.csdn.net/qq_41538097/article/details/125564711