Using Ceph 15 RBD block storage with Kubernetes 1.19


I. Ceph cluster operations

# Create the RBD pool

# Create the storage pool, specifying the pg and pgp counts; pgp controls how data in the PGs is
# grouped for placement and is normally set equal to pg
ceph osd pool create kubernetes 128 128


# Enable the RBD application on the pool
ceph osd pool application enable kubernetes rbd

# Initialize the pool with the rbd tool
rbd pool init -p kubernetes
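As an optional sanity check, you can confirm that the pool exists and is tagged for RBD before moving on:

# Confirm the pool carries the rbd application tag
ceph osd pool application get kubernetes

# The pool should be usable by the rbd tool (an empty list is expected at this point)
rbd ls -p kubernetes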

# Create a CephX user for Kubernetes


# List the pools (the new kubernetes pool should appear)
ceph osd pool ls



# View the admin key
ceph auth get client.admin 2>&1 | grep "key = " | awk '{print $3}'

# Create the user on Ceph (plain-caps variant)
ceph auth add client.kubernetes mon 'allow r' osd 'allow rwx pool=kubernetes'

# Or, preferably, create the pool-access key with the rbd profiles. Only one of these two commands
# is needed (get-or-create fails with a caps mismatch if the user already exists with different
# caps), and the profile-based caps are what the output further below shows.
ceph auth get-or-create client.kubernetes mon 'profile rbd' osd 'profile rbd pool=kubernetes' mgr 'profile rbd pool=kubernetes'

# Get cluster information

# Dump the monitor map; the fsid and the mon addresses are needed for the ceph-csi ConfigMap later
ceph mon dump

#######################################################################
fsid 83baa63b-c421-480a-be24-0e2c59a70e17

min_mon_release 15 (octopus)

0: [v2:192.168.100.201:3300/0,v1:192.168.100.201:6789/0] mon.vm-201
1: [v2:192.168.100.202:3300/0,v1:192.168.100.202:6789/0] mon.vm-202
2: [v2:192.168.100.203:3300/0,v1:192.168.100.203:6789/0] mon.vm-203
#######################################################################
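The fsid and the monitor v1 addresses (port 6789) from this dump are exactly what the ceph-csi ConfigMap below needs. If you only want the fsid, it can also be printed on its own:

# Print just the cluster fsid (same value as in the mon dump above)
ceph fsid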


# Check the created user
ceph auth get client.kubernetes
#######################################################################
exported keyring for client.kubernetes
[client.kubernetes]
	key = AQD7QJxhQ4xJARAAHbBdXZ43xxSiTRscbynLWA==
	caps mgr = "profile rbd pool=kubernetes"
	caps mon = "profile rbd"
	caps osd = "profile rbd pool=kubernetes"
#######################################################################
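When filling in the Kubernetes Secret later it is handy to print only the key, which ceph auth get-key does:

# Print only the key for client.kubernetes (used as userKey in the CSI Secret below)
ceph auth get-key client.kubernetes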

II. Kubernetes cluster operations

1. Install ceph-common on all nodes

# Install Python 3 (required by cephadm)
yum install python3 -y


# Download cephadm
wget https://github.com/ceph/ceph/raw/v15.2.17/src/cephadm/cephadm
# Make it executable
chmod +x cephadm

# Point yum at the Ceph 15.2.17 (Octopus) repository
./cephadm add-repo --release 15.2.17

# Install ceph-common
yum install ceph-common -y
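A quick way to confirm the client tools landed on every node (the exact point release may differ depending on the repository state):

# Verify the Ceph client tools are installed
ceph --version
rbd --version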

2. Install the ceph-csi plugin in Kubernetes

1) csi-config-map.yaml. The clusterID is the cluster fsid and the monitors are the mon v1 addresses (port 6789) reported by ceph mon dump above.


cat > /root/1-csi-config-map.yaml << EOF
apiVersion: v1
kind: ConfigMap
data:
  config.json: |-
    [
      {
        "clusterID": "83baa63b-c421-480a-be24-0e2c59a70e17", 
        "monitors": [
          "192.168.100.201:6789",
          "192.168.100.202:6789",
          "192.168.100.203:6789"
        ]
      }
    ]
metadata:
  name: ceph-csi-config
EOF

kubectl apply -f /root/1-csi-config-map.yaml
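Depending on the ceph-csi release, the provisioner and nodeplugin manifests may also mount a ConfigMap named ceph-config containing a minimal ceph.conf. The sketch below mirrors the upstream deploy/ceph-conf.yaml (the file name 1b-ceph-config-map.yaml is arbitrary); check it against the manifests you actually deploy:

cat > /root/1b-ceph-config-map.yaml << EOF
apiVersion: v1
kind: ConfigMap
data:
  ceph.conf: |
    [global]
    auth_cluster_required = cephx
    auth_service_required = cephx
    auth_client_required = cephx
  # keyring is a required key and its value should be empty
  keyring: |
metadata:
  name: ceph-config
EOF

kubectl apply -f /root/1b-ceph-config-map.yaml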


2) csi-rbd-secret.yaml. userID is the CephX user created earlier (without the client. prefix) and userKey is its key as shown by ceph auth get client.kubernetes.


cat > /root/2-csi-rbd-secret.yaml << EOF
apiVersion: v1
kind: Secret
metadata:
  name: csi-rbd-secret
stringData:
  userID: kubernetes
  userKey: AQD7QJxhQ4xJARAAHbBdXZ43xxSiTRscbynLWA==
EOF

kubectl apply -f /root/2-csi-rbd-secret.yaml
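An optional check that the Secret holds the expected key (the output should match ceph auth get-key client.kubernetes):

# Decode the userKey stored in the Secret
kubectl get secret csi-rbd-secret -o jsonpath='{.data.userKey}' | base64 -d; echo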


3) Deploy the ceph-csi v3.4.0 plugin, starting with the provisioner RBAC (csi-provisioner-rbac.yaml).


cat > /root/3-csi-provisioner-rbac.yaml << EOF
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rbd-csi-provisioner

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-external-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", &#

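The RBAC manifest above is cut off; the complete csi-provisioner-rbac.yaml ships with the ceph-csi v3.4.0 release and, together with the other plugin manifests, can be pulled straight from the upstream tag instead of being retyped. The paths below are assumed from the ceph-csi deploy/rbd/kubernetes layout, so verify them against the tag you use:

# Fetch and apply the ceph-csi v3.4.0 RBAC and plugin manifests from upstream
wget https://raw.githubusercontent.com/ceph/ceph-csi/v3.4.0/deploy/rbd/kubernetes/csi-provisioner-rbac.yaml
wget https://raw.githubusercontent.com/ceph/ceph-csi/v3.4.0/deploy/rbd/kubernetes/csi-nodeplugin-rbac.yaml
wget https://raw.githubusercontent.com/ceph/ceph-csi/v3.4.0/deploy/rbd/kubernetes/csi-rbdplugin-provisioner.yaml
wget https://raw.githubusercontent.com/ceph/ceph-csi/v3.4.0/deploy/rbd/kubernetes/csi-rbdplugin.yaml

kubectl apply -f csi-provisioner-rbac.yaml
kubectl apply -f csi-nodeplugin-rbac.yaml
kubectl apply -f csi-rbdplugin-provisioner.yaml
kubectl apply -f csi-rbdplugin.yaml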

Reposted from blog.csdn.net/qq_35583325/article/details/131556878