k8s Resources: ResourceQuota


Resource limits:

Kubernetes provides two mechanisms for limiting resources: ResourceQuota and LimitRange.

ResourceQuota limits the aggregate resource consumption of an entire namespace, while LimitRange constrains each individual object (such as a Pod or container) within a namespace.
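For contrast, here is a minimal LimitRange sketch; the default values are assumptions chosen for illustration, not taken from the original:

apiVersion: v1
kind: LimitRange
metadata:
  name: container-limits
spec:
  limits:
  - type: Container
    defaultRequest:      # injected into containers that omit resource requests
      cpu: 100m
      memory: 64Mi
    default:             # injected into containers that omit resource limits
      cpu: 200m
      memory: 128Mi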

ResourceQuota:

Configures the total amount of resources a namespace may consume.

A resource quota can manage compute resources (CPU and memory), storage resources, and the number of resource objects.
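A ResourceQuota is a namespaced object: create it in the target namespace, then inspect consumption with kubectl describe. A minimal workflow sketch, assuming a namespace named quota-demo (the name is an assumption):

[root@master01 ~]# kubectl create namespace quota-demo
[root@master01 ~]# kubectl apply -f compute-resources.yaml -n quota-demo
[root@master01 ~]# kubectl describe resourcequota compute-resources -n quota-demo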

Commonly used quota types:

compute resource quota

storage resource quota

object count quota

Compute resource quota:

requests.cpu (sum of CPU requests across all pods)

requests.memory (sum of memory requests across all pods)

limits.cpu (sum of CPU limits across all pods)

limits.memory (sum of memory limits across all pods)

cpu (same as requests.cpu), memory (same as requests.memory)

Storage resource quota:

requests.storage (sum of storage requests across all PVCs)

persistentvolumeclaims (total number of PVCs in the namespace)

<storage-class-name>.storageclass.storage.k8s.io/requests.storage (sum of storage requests for PVCs of the given StorageClass)

<storage-class-name>.storageclass.storage.k8s.io/persistentvolumeclaims (number of PVCs of the given StorageClass)

requests.ephemeral-storage (sum of local ephemeral-storage requests across all pods)

limits.ephemeral-storage (sum of local ephemeral-storage limits across all pods)

Object count quota:

Limits how many objects of a given kind may exist in the namespace, e.g. pods, services, services.loadbalancers, services.nodeports, configmaps, secrets, persistentvolumeclaims, resourcequotas (see the object-counts example below).

Quota Scopes:

A quota may declare scopes so that it only tracks matching pods: BestEffort (pods with BestEffort QoS, i.e. no requests or limits), NotBestEffort, Terminating (pods with spec.activeDeadlineSeconds set), NotTerminating, and PriorityClass (matched through scopeSelector). Each scope is shown in the examples below.

Examples:

[root@master01 compute-resources]# cat compute-resources.yaml 
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-resources
spec:
  hard:
    requests.cpu: "0.1"
    requests.memory: 100Mi
    limits.cpu: "0.2"
    limits.memory: 200Mi
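With this quota in place, every new pod in the namespace must declare CPU and memory requests and limits, otherwise the API server rejects it. A pod sketch that fits within the quota above (the name and image are assumptions for illustration):

apiVersion: v1
kind: Pod
metadata:
  name: quota-demo-pod            # hypothetical name
spec:
  containers:
  - name: app
    image: nginx                  # any image; nginx is only an example
    resources:
      requests:
        cpu: 50m                  # 0.05 cores, within the 0.1-core requests.cpu quota
        memory: 50Mi
      limits:
        cpu: 100m
        memory: 100Mi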
[root@master01 storage]# cat storage.yaml 
apiVersion: v1
kind: ResourceQuota
metadata:
  name: storage-resources
spec:
  hard:
    requests.storage: 200Mi
    requests.ephemeral-storage: 1Mi
    limits.ephemeral-storage: 1Mi
    nfs-sc.storageclass.storage.k8s.io/requests.storage: 100Mi
    nfs-sc.storageclass.storage.k8s.io/persistentvolumeclaims: 1
    persistentvolumeclaims: 2
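A PVC sketch that would be charged against both the namespace-wide requests.storage and the nfs-sc class-specific limits above (the claim name is an assumption):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: quota-demo-pvc            # hypothetical name
spec:
  storageClassName: nfs-sc        # counted under nfs-sc.storageclass.storage.k8s.io/...
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 50Mi               # within the 200Mi total and the 100Mi nfs-sc quota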
[root@master01 object-counts]# cat object-counts.yaml 
apiVersion: v1
kind: ResourceQuota
metadata:
  name: object-counts
spec:
  hard:
    persistentvolumeclaims: 1
    services.loadbalancers: 1
    services.nodeports: 1
    configmaps: 1
    pods: 1
    resourcequotas: 1
    services: 1
    secrets: 1
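Besides these built-in names, object counts can also be expressed with the generic count/<resource>.<group> syntax, which works for any namespaced resource, including resources outside the core API group. A sketch:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: generic-counts
spec:
  hard:
    count/deployments.apps: 2     # at most two Deployments in the namespace
    count/jobs.batch: 1           # at most one Job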
[root@master01 best-effort]# cat best-effort.yaml 
apiVersion: v1
kind: ResourceQuota
metadata:
  name: best-effort
spec:
  hard:
    pods: "2"
  scopes:
  - BestEffort
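The BestEffort scope restricts the quota to pods whose QoS class is BestEffort, i.e. pods that declare no requests or limits at all; with this scope, only the pods count may be limited. A pod sketch that this quota would count (name and image are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: best-effort-pod           # hypothetical name
spec:
  containers:
  - name: app
    image: nginx
    # no resources section at all, so the pod's QoS class is BestEffort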
[root@master01 not-best-effort]# cat not-best-effort.yaml 
apiVersion: v1
kind: ResourceQuota
metadata:
  name: not-best-effort
spec:
  hard:
    pods: "2"
    requests.cpu: "0.1"
    requests.memory: 100Mi
    limits.cpu: "0.2"
    limits.memory: 200Mi
  scopes:
  - NotBestEffort
[root@master01 termination]# cat termination.yaml 
apiVersion: v1
kind: ResourceQuota
metadata:
  name: termination
spec:
  hard:
    requests.cpu: "0.1"
    requests.memory: 100Mi
    limits.cpu: "0.2"
    limits.memory: 200Mi
  scopes:
  - Terminating
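The Terminating scope matches pods that have spec.activeDeadlineSeconds set; NotTerminating (next example) matches pods that do not. A pod sketch that this quota would track (values are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: terminating-pod           # hypothetical name
spec:
  activeDeadlineSeconds: 600      # this field places the pod in the Terminating scope
  containers:
  - name: app
    image: busybox
    command: ["sleep", "300"]
    resources:                    # required, since the quota also limits compute resources
      requests:
        cpu: 50m
        memory: 50Mi
      limits:
        cpu: 100m
        memory: 100Mi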
[root@master01 notterminating]# cat notterminating.yaml 
apiVersion: v1
kind: ResourceQuota
metadata:
  name: notterminating
spec:
  hard:
    requests.cpu: "0.1"
    requests.memory: 100Mi
    limits.cpu: "0.2"
    limits.memory: 200Mi
  scopes:
  - NotTerminating
[root@master01 priority-class]# cat priority-class.yaml 
apiVersion: v1
kind: ResourceQuota
metadata:
  name: priority-high
spec:
  hard:
    cpu: "0.1"
    memory: 100Mi
    pods: "2"
  scopeSelector:
    matchExpressions:
    - operator: In
      scopeName: PriorityClass
      values: ["high"]

Reposted from blog.csdn.net/hxpjava1/article/details/103954451