Kubernetes Setup, Part 14: Local and Network Volumes

mkdir volume
cd volume/

I. emptyDir

An emptyDir volume is a temporary directory tied to the Pod's lifecycle: it is deleted together with the Pod when the Pod object is removed. It sees little use; typical cases are file sharing between containers of the same Pod, or temporary container storage such as the scratch directory of a data-caching system.
1. Create a Redis Pod for testing: vim edir.yaml

[root@k8s-master-101 volume]# cat edir.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: redis-pod
spec:
  containers:
  - image: redis
    name: test-container
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {}

2. Create it:

[root@k8s-master-101 volume]# kubectl create -f edir.yaml
pod/redis-pod created

[root@k8s-master-101 volume]# kubectl get pods              
NAME        READY   STATUS    RESTARTS   AGE
redis-pod   1/1     Running   0          24s

3. Enter the container and check the mounts:

[root@k8s-node1-102 ~]# kubectl exec -it redis-pod bash
root@redis-pod:/data# df -h
Filesystem      Size  Used Avail Use% Mounted on
overlay          18G  8.7G  9.1G  50% /
tmpfs            64M     0   64M   0% /dev
tmpfs           910M     0  910M   0% /sys/fs/cgroup
/dev/sda3        18G  8.7G  9.1G  50% /data
shm              64M     0   64M   0% /dev/shm
tmpfs           910M   12K  910M   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs           910M     0  910M   0% /proc/acpi
tmpfs           910M     0  910M   0% /proc/scsi
tmpfs           910M     0  910M   0% /sys/firmware
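
If the cache should live in RAM rather than on the node's disk, emptyDir can also be backed by tmpfs via medium: Memory. A minimal sketch (the sizeLimit value is just an illustrative cap):

apiVersion: v1
kind: Pod
metadata:
  name: redis-mem-pod
spec:
  containers:
  - image: redis
    name: test-container
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir:
      medium: Memory    # tmpfs; usage counts against the container's memory
      sizeLimit: 128Mi  # illustrative limit; the Pod is evicted if exceeded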

II. hostPath

A hostPath volume mounts a file or directory from the worker node's filesystem into a Pod. Since you cannot be sure which node a Pod will be scheduled onto, the host path has to exist on every node.

1. Create a Pod that mounts the host's /tmp directory at /test-pod inside the container:
vim hostpath.yaml

[root@k8s-master-101 volume]# cat hostpath.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - image: nginx
    name: test-container
    volumeMounts:
    - mountPath: /test-pod
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /tmp
      type: Directory

2. Create it:

[root@k8s-master-101 volume]# kubectl create -f hostpath.yaml 
pod/test-pod created
[root@k8s-master-101 volume]# kubectl get pod
NAME       READY   STATUS    RESTARTS   AGE
test-pod   1/1     Running   0          15s

3. Enter the container. It now has a /test-pod directory, with the host's /tmp mounted on it:

[root@k8s-master-101 volume]# kubectl exec -it test-pod bash
root@test-pod:/# cd /test-pod/
root@test-pod:/test-pod# ls
systemd-private-09bddda8d47b4f3498c0933c0c18d9ee-bolt.service-kjuIRd
systemd-private-09bddda8d47b4f3498c0933c0c18d9ee-chronyd.service-7pChC5
systemd-private-09bddda8d47b4f3498c0933c0c18d9ee-colord.service-2l7RwW
systemd-private-09bddda8d47b4f3498c0933c0c18d9ee-cups.service-QxhL5E
systemd-private-09bddda8d47b4f3498c0933c0c18d9ee-rtkit-daemon.service-maUgOU
systemd-private-0df4fa4b61a047a9b761217b8f14113d-bolt.service-pJTKa8
systemd-private-0df4fa4b61a047a9b761217b8f14113d-chronyd.service-clWKhL
systemd-private-0df4fa4b61a047a9b761217b8f14113d-colord.service-YUQpzB
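
The type field controls how the path is validated: Directory (used above) requires the directory to already exist on the node, while DirectoryOrCreate creates it first if it is missing. A sketch of that variant (the /data/test-pod path is an arbitrary example):

  volumes:
  - name: test-volume
    hostPath:
      path: /data/test-pod      # created on the node if absent
      type: DirectoryOrCreate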

III. NFS volumes

1. On the NFS server set up earlier (10.0.0.31), create the export directory and a test page:
mkdir -p /opt/nfs/data
cd /opt/nfs/data
echo "NFS volume mounted successfully!" > index.html

[root@nfs01 ~]# cat /etc/exports
/opt/nfs/data 10.0.0.0/24(rw,no_root_squash)
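
If /etc/exports is edited while the NFS server is already running, reload the export table and verify it from a client (standard nfs-utils commands):

exportfs -ra              # re-export everything listed in /etc/exports
showmount -e 10.0.0.31    # list the exports a client can see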

2. On the master, mount the NFS server's /opt/nfs/data directory at /usr/share/nginx/html inside the nginx containers.
vim nfs-deployment.yaml

[root@k8s-master-101 volume]# cat nfs-deployment.yaml 
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - name: wwwroot
          mountPath: /usr/share/nginx/html
        ports:
        - containerPort: 80
      volumes:
      - name: wwwroot
        nfs:
          server: 10.0.0.31
          path: /opt/nfs/data
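
Note: Deployments under the extensions/v1beta1 API used above were removed in Kubernetes 1.16. On a current cluster the same manifest needs apiVersion: apps/v1 plus an explicit selector; roughly:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx        # apps/v1 requires a selector matching the template labels
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - name: wwwroot
          mountPath: /usr/share/nginx/html
        ports:
        - containerPort: 80
      volumes:
      - name: wwwroot
        nfs:
          server: 10.0.0.31
          path: /opt/nfs/data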

3. Create the Deployment:

[root@k8s-master-101 volume]# kubectl create -f nfs-deployment.yaml 
deployment.extensions/nginx-deployment created
[root@k8s-master-101 volume]# kubectl get pods           
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-577c68c7f5-84rhr   1/1     Running   0          48s
nginx-deployment-577c68c7f5-98lq9   1/1     Running   0          48s
nginx-deployment-577c68c7f5-kr759   1/1     Running   0          48s

[root@k8s-master-101 volume]# kubectl exec -it nginx-deployment-577c68c7f5-84rhr bash
root@nginx-deployment-577c68c7f5-84rhr:/# cd /usr/share/nginx/html/
root@nginx-deployment-577c68c7f5-84rhr:/usr/share/nginx/html# cat index.html 
NFS volume mounted successfully!

4. Expose the Deployment with a Service, then access the Service from a node; df -h there shows the NFS mount:

[root@k8s-master-101 volume]# kubectl expose deployment nginx-deployment --port=80 --target-port=80 --name=nginx 
service/nginx exposed
[root@k8s-master-101 volume]# kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.10.10.1     <none>        443/TCP   21d
nginx        ClusterIP   10.10.10.184   <none>        80/TCP    6s

[root@k8s-node1-102 ~]# curl 10.10.10.184:80
NFS volume mounted successfully!
[root@k8s-node1-102 ~]# df -h | grep nfs 
10.0.0.31:/opt/nfs/data   18G  3.9G   14G   22% /var/lib/kubelet/pods/1b4e85d0-51c9-11e9-9b95-000c293313a6/volumes/kubernetes.io~nfs/wwwroot
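
The mount shows up on the node because kubelet mounts the NFS export on the host and bind-mounts it into the container. For that to work, the NFS client tools must be installed on every node:

yum -y install nfs-utils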

IV. GlusterFS volumes

1. Provision two new machines, 10.0.0.104 and 10.0.0.105, and install GlusterFS on both:

yum -y install centos-release-gluster
yum -y install glusterfs glusterfs-fuse glusterfs-server
systemctl enable glusterd
systemctl start glusterd

2. From node 1, add the other server to the pool and check the peer status:

gluster peer probe 10.0.0.105   

[root@glusterfs-node1-104 ~]# gluster peer status
Number of Peers: 1

Hostname: glusterfs-node2
Uuid: 98c336a0-ac10-49ee-9332-c9c1911aca75
State: Peer in Cluster (Connected)
Other names:
10.0.0.105

Hostnames can be used here instead of IPs, as long as they are defined in /etc/hosts, for example:
gluster peer probe glusterfs-node2
3. On both Gluster nodes, create the brick directory:

mkdir -p /opt/glusterfs/data

4. Create a replicated volume named gv0:

gluster volume create gv0 replica 2 10.0.0.104:/opt/glusterfs/data/gv0  \
10.0.0.105:/opt/glusterfs/data/gv0 force

5. Create a second volume, gv1, for use later:

gluster volume create gv1 replica 2 10.0.0.104:/opt/glusterfs/data/gv1  \
10.0.0.105:/opt/glusterfs/data/gv1 force
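
force is needed here because the bricks live on the root partition, and because a two-way replica triggers GlusterFS's split-brain warning. For production use, the GlusterFS documentation recommends three-way replication or an arbiter brick; a sketch, assuming a third node at 10.0.0.106:

gluster volume create gv0 replica 3 arbiter 1 \
10.0.0.104:/opt/glusterfs/data/gv0 \
10.0.0.105:/opt/glusterfs/data/gv0 \
10.0.0.106:/opt/glusterfs/data/gv0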

6. Check the volume info:

[root@glusterfs-node1-104 ~]# gluster volume info
 
Volume Name: gv0
Type: Replicate
Volume ID: 44b1c272-66ac-40d2-be20-fc6504f08386
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 10.0.0.104:/opt/glusterfs/data/gv0
Brick2: 10.0.0.105:/opt/glusterfs/data/gv0
Options Reconfigured:
performance.client-io-threads: off
nfs.disable: on
transport.address-family: inet
 
Volume Name: gv1
Type: Replicate
Volume ID: 5256b07e-870d-4040-a334-85b3b36a78bb
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 10.0.0.104:/opt/glusterfs/data/gv1
Brick2: 10.0.0.105:/opt/glusterfs/data/gv1
Options Reconfigured:
performance.client-io-threads: off
nfs.disable: on
transport.address-family: inet

7. On the Kubernetes nodes, install the client tools (yum -y install glusterfs glusterfs-fuse), or the volumes cannot be mounted.
A volume also has to be started before it can be mounted; mounting kept failing at first because the volumes had not been started:

gluster volume start gv0
gluster volume start gv1
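
gluster volume status confirms that the bricks and self-heal daemons are online before you attempt any mounts:

gluster volume status gv0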

8. Test: on glusterfs-node2, mount gv0 at /mnt, then create a file in it:

[root@glusterfs-node2-105 ~]# mount -t glusterfs 10.0.0.104:/gv0 /mnt/
[root@glusterfs-node2-105 ~]# cd /mnt/
[root@glusterfs-node2-105 mnt]# echo "GlusterFS mounted successfully!" >index.html

9. The new file appears in the brick directory on both nodes, because the volume was created with two replicas:

[root@glusterfs-node1-104 ~]# cat /opt/glusterfs/data/gv0/index.html 
GlusterFS mounted successfully!

[root@glusterfs-node2-105 mnt]# cat /opt/glusterfs/data/gv0/index.html 
GlusterFS mounted successfully!

10. On the Kubernetes master, create an Endpoints object listing both GlusterFS server addresses. The port value is required by the Endpoints schema but is not used by the GlusterFS volume plugin.
vim glusterfs-endpoints.json

[root@k8s-master-101 volume]# cat glusterfs-endpoints.json 
{
  "kind": "Endpoints",
  "apiVersion": "v1",
  "metadata": {
    "name": "glusterfs-cluster"
  },
  "subsets": [
    {
      "addresses": [
        {
          "ip": "10.0.0.104"
        }
      ],
      "ports": [
        {
          "port": 1
        }
      ]
    },
    {
      "addresses": [
        {
          "ip": "10.0.0.105"
        }
      ],
      "ports": [
        {

          "port": 1
        }
      ]
    }
  ]
}
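
The same object in YAML, if you prefer to keep every manifest in one format (a direct translation of the JSON above):

apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster
subsets:
- addresses:
  - ip: 10.0.0.104
  ports:
  - port: 1
- addresses:
  - ip: 10.0.0.105
  ports:
  - port: 1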

11. Create the Endpoints object and check it:

[root@k8s-master-101 volume]# kubectl create -f glusterfs-endpoints.json
endpoints/glusterfs-cluster created
[root@k8s-master-101 volume]# kubectl get ep
NAME                ENDPOINTS                   AGE
glusterfs-cluster   10.0.0.104:1,10.0.0.105:1   2s
kubernetes          10.0.0.101:6443             21d

12. Create the matching Service. Note that it has no selector: Kubernetes does not manage Endpoints for a selector-less Service, so it binds to the manually created Endpoints object of the same name.
vim glusterfs-service.json

[root@k8s-master-101 volume]# cat glusterfs-service.json 
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "glusterfs-cluster"
  },
  "spec": {
    "ports": [
      {"port": 1}
    ]
  }
}

13. Create the GlusterFS Service, matching port 1. Do not delete this Service; Pods reach the GlusterFS servers' volumes through it. I deleted it by mistake later and could no longer create PVs.

[root@k8s-master-101 volume]# kubectl create -f glusterfs-service.json
service/glusterfs-cluster created
[root@k8s-master-101 volume]# kubectl get svc
NAME                TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
glusterfs-cluster   ClusterIP   10.10.10.231   <none>        1/TCP     7s

14. Test mounting a GlusterFS volume from Kubernetes.
Create a Deployment that sets the endpoints to glusterfs-cluster and the volume to gv0, mounted at /usr/share/nginx/html in the containers.

vim glusterfs-deployment.yaml

[root@k8s-master-101 volume]# cat glusterfs-deployment.yaml    
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - name: glusterfsvol
          mountPath: /usr/share/nginx/html
        ports:
        - containerPort: 80
      volumes:
      - name: glusterfsvol
        glusterfs:
          endpoints: glusterfs-cluster
          path: gv0
          readOnly: false

15. Create it:

[root@k8s-master-101 volume]# kubectl create -f glusterfs-deployment.yaml
deployment.extensions/nginx-deployment created

[root@k8s-master-101 volume]# kubectl get pod
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-7549cc478b-8jhkp   1/1     Running   0          26s
nginx-deployment-7549cc478b-lk4zd   1/1     Running   0          26s
nginx-deployment-7549cc478b-tqmz6   1/1     Running   0          26s

16. Enter a container to confirm the mount succeeded and inspect it:

[root@k8s-master-101 volume]# kubectl exec -it nginx-deployment-7549cc478b-8jhkp bash
root@nginx-deployment-7549cc478b-8jhkp:/# cat /usr/share/nginx/html/index.html 
GlusterFS mounted successfully!
root@nginx-deployment-7549cc478b-8jhkp:/# mount | grep gluster
10.0.0.104:gv0 on /usr/share/nginx/html type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)

17. Losing one GlusterFS node has no impact, because the volume is replicated across both nodes. (The mount above names only 10.0.0.104, but the FUSE client merely fetches the volume layout from that server and then talks to all bricks directly, so it survives the loss of either server.) If the containers are deleted, the files in the volume remain. Persistent Volumes (PVs), covered later, can define whether the data in the volume is kept or deleted after the containers are gone.

18. To recap: first create a volume (e.g. gv0) on the GlusterFS servers, then create an Endpoints object in the Kubernetes cluster pointing at the GlusterFS server IPs, together with a matching Service. When defining volumes in a Deployment, or when creating a PV, you can then reference the endpoints name plus the volume name, as below, and mount the volume into containers:

      volumes:
      - name: glusterfsvol
        glusterfs:
          endpoints: glusterfs-cluster
          path: gv0
          readOnly: false
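
A PersistentVolume wrapping the same kind of volume would look roughly like this sketch, using the legacy in-tree glusterfs source and the gv1 volume created earlier (the capacity and reclaim policy are illustrative choices):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster-pv-gv1
spec:
  capacity:
    storage: 5Gi                         # declared size; GlusterFS does not enforce it
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain  # keep the data after the claim is released
  glusterfs:
    endpoints: glusterfs-cluster         # the Endpoints/Service pair created above
    path: gv1
    readOnly: false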

Reproduced from blog.csdn.net/qq_41475058/article/details/88886897