K8s No-Brainer Series (7) - NFS Storage (Dynamic Provisioning)

1. Why we need dynamic provisioning

A few rough thoughts, not necessarily authoritative. At first it just sounds like a nice-to-have; once you use it, the benefits become obvious.

  1. Preparing a PV by hand for every storage request (PVC) is very cumbersome.

  2. When there are many requests of the same type, you urgently need a controlled, dynamic solution: a PVC is submitted, and a PV is created and bound automatically.

  3. Storage reclamation needs fine-grained control so volumes can be reused — guaranteeing capacity while also keeping the data safe.
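For contrast, this is what the static approach looks like: every volume needs a hand-written PV like the sketch below (the PV name and export path here are illustrative placeholders, not part of this series' setup), plus a matching PVC on top of it.

```yaml
# A statically provisioned NFS PV -- one of these per volume, written by hand.
# Dynamic provisioning generates objects like this for you on demand.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: manual-nfs-pv-001        # illustrative name
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.56.4         # example NFS server
    path: /data/nfs/manual-001   # example export path, pre-created by hand
```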

2. Understanding StorageClass

2.1 Basic terms

  • pv - PersistentVolume (persistent storage volume)
  • pvc - PersistentVolumeClaim (persistent storage volume claim)
  • pvp - Persistent Volume Provisioner (the component that supplies persistent volumes)
  • cluster role - a role for cluster-wide operations; literally just a "role", but one whose permissions extend across the whole cluster

2.2 How it works

Schematic

  1. The administrator creates a Provisioner (supplier), which is responsible for producing PV (persistent volume) instances in response to requests.

  2. On behalf of a PVC (persistent volume claim), the StorageClass asks the Provisioner to issue a PV instance and binds it to the claim.

  3. The Pod references the PVC and, through it, gets access to the backing PV instance.
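The binding above is driven purely by names. A minimal sketch of the name chain (all names here are illustrative, not the ones used later in this article):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-sc                # <-- referenced by the PVC below
provisioner: example/provisioner  # <-- must equal the provisioner's PROVISIONER_NAME
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  storageClassName: example-sc    # <-- selects the StorageClass above
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 1Gi
```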

3. Preparation

| Material | Content | Remark |
| --- | --- | --- |
| NFS server | 192.168.56.4 | one machine |
| Storage directory | /data/nfs/db-svc-dynamic-volume | used to store the data |
| k8s cluster | set up earlier in the No-Brainer series | version 1.16.4 |

3.1 YAML concepts involved

| Name | Explanation | Remark |
| --- | --- | --- |
| StorageClass | storage class | see the k8s API docs |
| ServiceAccount | service account | |
| Role/ClusterRole | role / cluster role | |
| RoleBinding/ClusterRoleBinding | binds a role to an account | |

3.2 Define the service account and permissions

Define a service account, which is responsible for requesting resources from the cluster.
Then define a "cluster role" and a "role" and bind them to the service account. The YAML therefore has five parts:

  1. the service account
  2. the cluster role
  3. the binding between the cluster role and the service account
  4. the role
  5. the binding between the role and the service account

nfs-storage-rbac.yaml


apiVersion: v1
# Define the service account
kind: ServiceAccount
metadata:
  # The name should be self-explanatory; this account serves the database service
  # pattern: {purpose}-svc-{volume-type}-account
  name: db-svc-nfs-account
  namespace: default
---
# Define the cluster role and declare its permission list -- note it is all storage-related
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  # matches db-svc-nfs-account
  name: db-svc-nfs-cluster-role
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
# After defining the role, bind the ServiceAccount to the ClusterRole
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: db-svc-nfs-account-cluster-role-bind
subjects:  # an array: one "role" can be bound to multiple accounts
  - kind: ServiceAccount
    # account name, see the ServiceAccount's name
    name: db-svc-nfs-account
    namespace: default
roleRef:
  kind: ClusterRole
  # role name, see the ClusterRole's name
  name: db-svc-nfs-cluster-role
  apiGroup: rbac.authorization.k8s.io
---
# Permissions the ServiceAccount may use specifically when binding PVCs to PVs
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: db-svc-nfs-role
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
# Bind the ServiceAccount to the Role
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: db-svc-nfs-account-role-bind
subjects:
  - kind: ServiceAccount
    # account name, see the ServiceAccount's name
    name: db-svc-nfs-account
    namespace: default
roleRef:
  kind: Role
  # role name, see the Role's name
  name: db-svc-nfs-role
  apiGroup: rbac.authorization.k8s.io
$kubectl create -f nfs-storage-rbac.yaml
serviceaccount/db-svc-nfs-account created
clusterrole.rbac.authorization.k8s.io/db-svc-nfs-cluster-role created
clusterrolebinding.rbac.authorization.k8s.io/db-svc-nfs-account-cluster-role-bind created
role.rbac.authorization.k8s.io/db-svc-nfs-role created
rolebinding.rbac.authorization.k8s.io/db-svc-nfs-account-role-bind created

3.3 Creating StorageClass

nfs-storage-class.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: db-svc-nfs-storage-class
# remember this name -- the provisioner's PROVISIONER_NAME must match it exactly
provisioner: db-svc-nfs-provistioner  
parameters:
  archiveOnDelete: "false"
reclaimPolicy: Retain
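As a side note, a StorageClass can also be marked as the cluster default, so that PVCs which omit storageClassName still get dynamic volumes. This annotation is standard Kubernetes but is not part of the original manifest above:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: db-svc-nfs-storage-class
  annotations:
    # makes this class the default for PVCs that omit storageClassName
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: db-svc-nfs-provistioner
parameters:
  archiveOnDelete: "false"
reclaimPolicy: Retain
```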

3.4 Creating NFS Provisioner

nfs-provisioner.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: db-svc-nfs-provisioner
  labels:
    app: db-svc-nfs-provisioner
  namespace: default  
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: db-svc-nfs-provisioner
  template:
    metadata:
      labels:
        app: db-svc-nfs-provisioner
    spec:
      # use the service account created above to create PVs
      serviceAccountName: db-svc-nfs-account
      containers:
      - name: nfs-client-provisioner
        image: quay.io/external_storage/nfs-client-provisioner:latest
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env: #------ there is a subtlety here, see 3.7
        - name: PROVISIONER_NAME
          value: db-svc-nfs-provistioner  
        - name: NFS_SERVER
          value: 192.168.56.4
        - name: NFS_PATH  
          value: /data/nfs/db-svc-dynamic-volume
      volumes:
        - name: nfs-client-root  # must match the volumeMounts name above
          nfs: #----- there is a subtlety here, see 3.7
            server: 192.168.56.4  
            path: /data/nfs/db-svc-dynamic-volume
$kubectl create -f nfs-provisioner.yaml

3.5 Creating Persistent Volume Claim

nfs-dynamic-pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: db-svc-nfs-storage-class
$kubectl create -f nfs-dynamic-pvc.yaml

* Note: if the system reports the following error, it means you already created this PVC while following another part of the No-Brainer series:

Error from server (AlreadyExists): error when creating "nfs-dynamic-pvc.yaml": persistentvolumeclaims "mysql-pvc" already exists

Delete it with the following command:

$kubectl delete pvc mysql-pvc

3.6 Check that everything is working

$kubectl get pv,pvc
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS               REASON   AGE
persistentvolume/pvc-e95e6bfb-aeba-4a96-b8eb-90c0db607ad9   1Gi        RWO            Retain           Bound    default/mysql-pvc   db-svc-nfs-storage-class            17m

NAME                              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS               AGE
persistentvolumeclaim/mysql-pvc   Bound    pvc-e95e6bfb-aeba-4a96-b8eb-90c0db607ad9   1Gi        RWO            db-svc-nfs-storage-class   17m
  1. A PV named persistentvolume/pvc-e95e6bfb-aeba-4a96-b8eb-90c0db607ad9 has appeared

  2. mysql-pvc shows status 'Bound'
  3. The system is up and running!
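To actually consume the claim, a Pod mounts mysql-pvc like any other PVC. A minimal sketch — the Pod name, image, and password handling are assumptions for illustration, not part of the original article:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mysql-demo                     # illustrative name
spec:
  containers:
    - name: mysql
      image: mysql:5.7                 # assumed image for illustration
      env:
        - name: MYSQL_ROOT_PASSWORD
          value: "change-me"           # demo only; use a Secret in practice
      volumeMounts:
        - name: data
          mountPath: /var/lib/mysql    # MySQL's default data directory
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: mysql-pvc           # the PVC created in 3.5
```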

3.7 One more round of "why"

Why do the NFS settings appear twice in the Provisioner's manifest?

A few words about the nfs-client-provisioner image.

The image's source code is on GitHub; reading it leads to these conclusions:

  1. It talks to the cluster's API server and performs the actual creation of the PV.

  2. During PV creation it needs the NFS_SERVER and NFS_PATH variables, whose values are written into the new PV:

    $kubectl describe pv pvc-e95e6bfb-aeba-4a96-b8eb-90c0db607ad9
    Name:            pvc-e95e6bfb-aeba-4a96-b8eb-90c0db607ad9
    Labels:          <none>
    Annotations:     pv.kubernetes.io/provisioned-by: db-svc-nfs-provistioner
    Finalizers:      [kubernetes.io/pv-protection]
    StorageClass:    db-svc-nfs-storage-class
    Status:          Bound
    Claim:           default/mysql-pvc
    Reclaim Policy:  Retain
    Access Modes:    RWO
    VolumeMode:      Filesystem
    Capacity:        1Gi
    Node Affinity:   <none>
    Message:
    Source:
     Type:      NFS (an NFS mount that lasts the lifetime of a pod)
     Server:    192.168.56.4  <-------- here
     Path:      /data/nfs/db-svc-dynamic-volume/default-mysql-pvc-pvc-e95e6bfb-aeba-4a96-b8eb-90c0db607ad9 
     ReadOnly:  false
    Events:        <none>
  3. The image also needs to mount the NFS volume itself because its code must carry out the reclaimPolicy actions the cluster issues for a PV, and those concrete file operations are performed through this mount.


Origin www.cnblogs.com/smokelee/p/12445155.html