K8s PV and PVC (dynamic)

Storage Class (StorageClass)
When users need storage, they bind a PV through a PVC. With static provisioning alone, several situations cause problems (in short, k8s cannot manage this by itself):

  • A PVC request may not match any existing PV
  • When PVC demand is high, PVs have to be created frequently
  • User needs cannot be determined in advance, and usually change with the environment

The StorageClass exists to provide dynamic provisioning of PVs. It also serves as group management for PVs: the administrator can define classes by tier or capability to distinguish their intended use.

The static PV and PVC in the previous article left me with a big doubt: k8s could not free the administrator's hands. The StorageClass achieves exactly that, very similar to OpenStack's Cinder: PVs can be created and bound automatically according to PVC requests, liberating the administrator.

StorageClass fields

Command view: kubectl explain StorageClass

  • metadata.name: this is the name a PVC references in its storageClassName field, which is very important; matching is done by the class name
  • provisioner: the supplier; a storage class relies on a provisioner's storage plug-in to actually create the storage (similar to a storage driver that implements volume creation)

Not all storage types have a built-in provisioner in k8s, but external provisioners that follow the specification are also supported. [figure: table of supported provisioners]

  • parameters: parameters the class passes to the storage volume; each provisioner accepts different parameters
  • reclaimPolicy: reclaim policy, Delete (default) or Retain (how the PV is handled after its PVC is deleted)
  • mountOptions: mount -o option parameters (hard, ro, soft, etc.)
  • volumeBindingMode: whether the PVC is bound immediately after creation or waits for Pod scheduling. The consideration is that local volumes are tied to a node's storage, and the back-end storage network may not be reachable from every node (delayed binding lets the scheduler filter again based on Pod placement)
    • Immediate: bind immediately (default)
    • WaitForFirstConsumer: delayed binding

This ensures that the PVC binding decision also evaluates any other constraints the Pod may have, such as node resource requirements, node selectors, Pod affinity, and Pod anti-affinity.

  • allowVolumeExpansion: enables dynamic expansion of PVCs of this class
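A minimal sketch tying the fields above together in one manifest; the class name, provisioner name, and parameters here are illustrative placeholders, not a specific plug-in:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-storage             # referenced by a PVC's storageClassName
provisioner: example.com/nfs     # must match the deployed provisioner's name
reclaimPolicy: Retain            # Delete is the default
mountOptions:
  - hard
volumeBindingMode: WaitForFirstConsumer  # delay binding until Pod scheduling
allowVolumeExpansion: true       # allow PVCs of this class to grow
parameters:
  type: ssd                      # provisioner-specific key/value pairs
```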

Example: implementing dynamic PV with NFS (NFS itself has no in-tree dynamic provisioner, so an external one is used)

  1. Create the provisioner, deployed as a Deployment; it takes over storage volume management (what the administrator used to do manually is handed to a program)
  2. RBAC authorization, because the provisioner needs to access the cluster
  3. Create storage class
  4. Create PVC
  5. Create Pod test

The complete YAML files are at: https://github.com/kubernetes-retired/external-storage/tree/master/nfs-client/deploy

1. Provisioner creation

Only your own NFS server address and path need to be modified.
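A sketch of the provisioner Deployment, based on the deploy files in the linked repo; the NFS server address and path are placeholders that you must change to your own:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs      # the storage class must reference this name
            - name: NFS_SERVER
              value: 192.168.1.100       # placeholder: your NFS server address
            - name: NFS_PATH
              value: /data/nfs           # placeholder: your exported path
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.1.100        # must match NFS_SERVER above
            path: /data/nfs
```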

2. RBAC authorization

You can modify the namespace according to your needs
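An abridged sketch of the RBAC manifest from the repo (the leader-election Role/RoleBinding is omitted here for brevity); adjust the namespace as needed:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: default               # adjust to your namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default             # must match the ServiceAccount's namespace
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
```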

3. Create a storage class

provisioner: must match the name the provisioner Deployment registers via its env, so the class can call it
archiveOnDelete: "false" means that when the PVC is deleted, the PV and its data are deleted as well; "true" is the opposite (the data directory is archived instead), similar to a reclaim policy
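The storage class itself is short; a sketch following the repo's class.yaml (the class name is arbitrary, and the provisioner value must equal the PROVISIONER_NAME env set in the Deployment):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs        # must equal the Deployment's PROVISIONER_NAME env
parameters:
  archiveOnDelete: "false"         # "true" would archive the data directory instead of deleting it
```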

4. Create PVC

annotations: used here to select the storage class

Note that storageClassName is not specified on the PVC here. A PVC can select the storage class in two ways: first via storageClassName, second via annotations (the first is usual).
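A sketch of the test PVC following the repo's test-claim.yaml, using the annotation form; the class name must match your StorageClass:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
  annotations:
    # legacy way of selecting the class; spec.storageClassName is the usual form
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
```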

5. Create a Pod test

claimName specifies the name of the PVC defined above
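A test Pod sketch in the spirit of the repo's test-pod.yaml; the image and the touched file are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
    - name: test-pod
      image: busybox:1.32
      command: ["/bin/sh", "-c", "touch /mnt/SUCCESS && sleep 3600"]
      volumeMounts:
        - name: nfs-pvc
          mountPath: /mnt            # the dynamically provisioned volume appears here
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim        # name of the PVC created above
```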

Observing the results

1. First make sure the provisioner Pod is running successfully. Failures are usually image-download problems; DNS may need to be changed to 8.8.8.8.
2. The PV has been created and bound automatically. The default reclaim policy is Delete.
3. Viewing the mount inside the container, a separate subdirectory was created for it automatically.
4. Deleting the PVC also deletes the PV.
5. Modifying the reclaim policy

  1. Create the storage class with the reclaimPolicy field specified
  2. For a PV that already exists, use the command: kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'

6. Enabling the default storage class (two conditions are required)
Once enabled, a PVC that specifies neither storageClassName nor an annotation will be assigned the default storage class automatically.

  • kube-apiserver needs the DefaultStorageClass admission controller enabled (--admission-control=DefaultStorageClass)
  • kubectl patch storageclass <class-name> -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
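Once a default class is set, a PVC like the following sketch (no storageClassName and no annotation; the name and size are illustrative) will use it automatically:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: default-claim     # no class specified: the default storage class is applied
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```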

Thinking 1
You can see that the capacity mounted in the container has not changed; the resource I requested in the PVC was 1M. Compared with the static version: there, the PV was created by me, which meant the NFS path had to be prepared in advance, and the capacity was likewise defined by me.

The difference in the resources field between static and dynamic

  • Static: used to match and bind a PV
  • Dynamic: the size of the volume when it is created (creation is done by the storage type's plug-in, without the administrator)

NFS is a bit special: you cannot specify the size, because an NFS volume's size follows the exported path. If you switch to Ceph RBD you will see the capacity really is 1 megabyte, because the size of the automatically created image is taken from the resources field.

Dynamic PV in this form already meets our normal needs; users only need to choose the storage type and size (the class represents the storage type, such as SSD or Ceph storage).

Thinking 2

The storage class can act as a fallback: a PVC first tries to match and bind an existing static PV, and only when there is no suitable one does dynamic provisioning create and bind a new PV.

Reference: Admission controllers
Reference: Storage
Reference: Book "Kubernetes Advanced Practice", Ma Yongliang

Origin: blog.csdn.net/yangshihuz/article/details/113135054