In Kubernetes, it is easy to end up deploying dozens or hundreds of microservices. Multiple versions and multiple replicas of these microservices produce even more pods.
With so many pods, how can they be organized efficiently? If they are organized poorly, managing the microservices becomes chaotic and disorganized.
This is where labels come in.
label
Labels are a simple but powerful feature of K8S for organizing resources, including pods
A label is an arbitrary key-value pair attached to a resource; resources can then be selected by exactly that label
In other words, the label key is arbitrary and can be defined by yourself, as long as it is unique within the resource
For example
We can define 2 labels to organize the messy pods above
- app
Identifies which application the pod belongs to
- rel
Identifies the version of the application running in the pod, which could for example be one of:
- stable
- beta
- canary
By organizing microservices with the labels above, we can easily view exactly the pods we expect to see through their labels
Write a demo
Reuse the previous xmt-kubia.yaml file, renamed to xmt-kubia-labels.yaml
Plus 2 custom labels:
- xmt_create: auto
- rel: stable
apiVersion: v1
kind: Pod
metadata:
  name: xmt-kubia-labels
  labels:
    xmt_create: auto
    rel: stable
spec:
  containers:
  - image: xiaomotong888/xmtkubia
    name: xmtkubia-labels
    ports:
    - containerPort: 8080
      protocol: TCP
Run the pod with
kubectl create -f xmt-kubia-labels.yaml
Then check the actual labels
View labels
kubectl get pod --show-labels
View specific labels as dedicated columns
kubectl get po -L rel,xmt_create
Add a new label to the pod
kubectl label pod xmt-kubia-labels key=value
Modify an existing label (this requires --overwrite)
kubectl label pod xmt-kubia-labels rel=dev --overwrite
label selector
From the case above, labels alone may not seem very useful. In practice, labels are meant to be used together with label selectors
A label selector lets us find the set of pods carrying a specific label, and then operate on that set
A label selector is a criterion that filters resources by whether they carry a specific label with a specific value, as follows:
- Contains (or does not contain) a label with a specific key
- Contains a label with a specific key and value
- Contains a label with a specific key, but whose value differs from the one we specified
Using a label selector
List pods whose label matches key=value
kubectl get po -l key=value
kubectl get po -l rel=dev
List pods containing a label
kubectl get po -l rel
To list pods that do not contain a certain label, prefix the key with an exclamation mark, so the condition is written !rel. Because the bash shell would otherwise interpret the exclamation mark, the condition must be wrapped in single quotes:
kubectl get po -l '!rel'
Inequality is also supported after -l, written key!=value, for example:
kubectl get po -l 'rel!=dev'
Use in: kubectl get po -l 'rel in (dev,prod)'
Use notin: kubectl get po -l 'rel notin (dev,prod)'
Using multiple conditions in label selector
kubectl get pod -l <condition1>,<condition2>,...
The label selector not only lists the filtered pods; we can also perform an operation on every pod in the subset. For example, deleting all pods in a subset can be written as:
kubectl delete pod -l xx=xx
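To combine multiple conditions concretely, here is a sketch using the two labels from the demo above (this assumes the pod still carries rel=stable):

```shell
# select only pods that carry both labels at once
kubectl get po -l 'xmt_create=auto,rel=stable'
```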
Labels can be used to categorize worker nodes
When we create pods, requirements like this come up: some pods demand high CPU performance, so they should be deployed to nodes with good performance. Done naively, this strongly couples the application to the infrastructure
That is not the K8S way. The idea in K8S is that the application is hidden from the actual infrastructure, and newly created pods are scheduled to arbitrary nodes
To meet the requirement anyway, we can do it through labels
Label the node
As mentioned earlier, labels are not limited to pods; they can actually be attached to any resource in K8S
For example, we can add a label such as gpu=true to a node to mark that the node has a GPU. We will use minikube to demonstrate
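The commands for that might look like this (the node name minikube is assumed from the demo environment):

```shell
# mark the node as having a GPU
kubectl label node minikube gpu=true

# verify: show the gpu label as a dedicated column for every node
kubectl get node -L gpu
```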
Schedule the pod to the specified node
Continuing the demo above, create a new xmt-kubia-gpu.yaml file and add a node selector, nodeSelector, under spec, selecting nodes with the label gpu: "true"
Then run kubectl create -f xmt-kubia-gpu.yaml
and the created pod will be scheduled onto a node carrying the label gpu: "true"
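The full manifest is not shown above; a minimal sketch, reusing the image from the earlier demo, might look like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: xmt-kubia-gpu
spec:
  nodeSelector:
    gpu: "true"          # only nodes labeled gpu=true are eligible
  containers:
  - image: xiaomotong888/xmtkubia
    name: xmtkubia-gpu
    ports:
    - containerPort: 8080
      protocol: TCP
```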
In general, we should think about scheduling in terms of logical groups of nodes matching certain criteria via label selectors, not individual machines
By the way:
If we really want to pin a pod to one specific node, that is also possible. Check the node's labels with kubectl get node --show-labels
We can see a label whose key is kubernetes.io/hostname and whose value is the hostname of the node, so we can use this label to schedule the pod onto that exact node
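Such a pinned selector would be a fragment like the following, where the hostname value (minikube here, matching the demo cluster) depends on your environment:

```yaml
spec:
  nodeSelector:
    kubernetes.io/hostname: minikube   # pins the pod to this exact node
```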
This approach carries a risk: if the specified node goes offline, the pod becomes unschedulable. It is technically feasible, but not recommended
Annotations
Annotations are similar to labels, but unlike labels they cannot be used to group (select) objects
Annotations can hold more information and help us understand a resource's role; they also take the form of key-value pairs
Add and modify annotations
kubectl annotate pod podName key=value
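For example, on the demo pod from above, with a made-up annotation key (mycompany.com/note is purely illustrative):

```shell
# attach a free-form annotation to the pod
kubectl annotate pod xmt-kubia-labels mycompany.com/note="created by hand"

# annotations appear in the describe output
kubectl describe pod xmt-kubia-labels
```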
Namespaces
Resources in different namespaces can have the same name
View namespace
kubectl get ns
View the pods in the specified namespace
kubectl get pod --namespace xxx
Create a namespace
- Using a command
kubectl create namespace xxx
- Using a YAML file
Write a yaml file and create it with kubectl create
apiVersion: v1
kind: Namespace
metadata:
  name: test-ns
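Note that namespace names must be valid DNS labels (lowercase letters, digits, and hyphens), so a name like test_ns with an underscore would be rejected; test-ns works. Assuming the manifest is saved as test-ns.yaml (the filename is an assumption), usage might look like:

```shell
# create the namespace from the manifest
kubectl create -f test-ns.yaml

# create the earlier demo pod inside that namespace
kubectl create -f xmt-kubia-labels.yaml -n test-ns

# list only the pods in that namespace
kubectl get pod -n test-ns
```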
Delete a namespace
kubectl delete ns xxx
Delete all pods in the current namespace, but keep the namespace itself
kubectl delete pod --all
Note: if there is an RC (ReplicationController) in the namespace, the deleted pods will be recreated. The RC monitors the number of pod replicas and, whenever it drops below the configured count, creates new ones dynamically
Delete all resources in the namespace
kubectl delete all --all
That's all for today; this is what I have learned, and if anything is off, please correct me
Friends, your likes, follows, and bookmarks are my motivation to keep sharing and to improve quality
Technology is open, and our mindset should be even more open. Embrace change, live in the sun, and strive to move forward
I am A Bing Yunyuan; welcome to like, follow, and bookmark. See you next time~
For more, you can check Zero Sound's live broadcast at 8 o'clock every night: https://ke.qq.com/course/417774