Deploying rook (ceph cluster) on kubernetes


Overview

Storage initialization
Deploying rook

Storage initialization

Common disk layouts for storage

All-SSD layout (recommended)

Ceph data on a dedicated SSD, journal on a separate dedicated SSD (unclear whether rook supports this)
Ceph data and journal on the same physical SSD (supported by rook)

Mixed SSD and SATA disks (recommended)

Ceph data on SATA disks, journal on SSD (recommended, and in theory supported by rook)

All SATA disks (worst io performance)

Ceph data on SATA disks, journal on a separate dedicated SATA disk (not supported)
Ceph data and journal on the same SATA disk (supported, worst performance)

Storage requirements

Apart from the system disk, each data disk is presented as its own single-disk RAID 0 (r0) volume
Initialize each disk as follows (example: /dev/sdb):

dd if=/dev/zero of=/dev/sdb bs=1M count=100    # wipe the first 100 MB to clear old partition tables and fs signatures
sync
parted -s /dev/sdb mklabel gpt                 # write a fresh gpt label
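
When several data disks need to be prepared, a small loop saves repetition. This is only a sketch: the device names sdb through sdf are the ones used on the nodes in this article and must be adapted to your own hardware.

for dev in /dev/sd{b,c,d,e,f}; do
    dd if=/dev/zero of="$dev" bs=1M count=100   # clear old partition tables / fs signatures
    sync
    parted -s "$dev" mklabel gpt                # write a fresh gpt label
done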

The storage nodes must already have joined the kubernetes cluster:

[root@ns-yun-020065 ceph]# kubectl get node
NAME                            STATUS   ROLES    AGE     VERSION
ns-storage-020100.vclound.com   Ready    <none>   11d     v1.13.3
ns-storage-020101.vclound.com   Ready    <none>   11d     v1.13.3
ns-storage-020102.vclound.com   Ready    <none>   11d     v1.13.3
ns-storage-020104.vclound.com   Ready    <none>   3d23h   v1.13.3
ns-yun-020065.vclound.com       Ready    master   14d     v1.13.3
ns-yun-020066.vclound.com       Ready    <none>   14d     v1.13.3
ns-yun-020067.vclound.com       Ready    <none>   14d     v1.13.3

Downloading the rook docker images

The following images must be pulled on every storage node.

rook/ceph:master

This image is used throughout ceph cluster initialization: generating keys and deploying the mon, mgr, osd, and so on.
For details, see the source at https://github.com/rook/rook

ceph/ceph:v13.2.2-20181023

The ceph software itself.
Available ceph image tags are listed at https://hub.docker.com/r/ceph/ceph/tags/

A typical docker pull looks like the following:

[root@ns-storage-020104 ~]# docker pull rook/ceph:master
master: Pulling from rook/ceph
aeb7866da422: Already exists
a759c546a14c: Already exists
f350ad2d857b: Already exists
d3fad71e21f3: Pull complete
Digest: sha256:08dcf99f3761246bba3946c3c3558b2298e4ae7cdc3c4d590ed20812ec4afd99
Status: Downloaded newer image for rook/ceph:master

[root@ns-storage-020104 ~]# docker pull ceph/ceph:v13.2.2-20181023
v13.2.2-20181023: Pulling from ceph/ceph
Digest: sha256:d534e57377bfa0f1e2222c690c99b3b951eb9c52e067264aab147d5f3815cb1c
Status: Image is up to date for ceph/ceph:v13.2.2-20181023
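
After pulling, confirm on each storage node that both images are present locally:

docker images | grep -E 'rook/ceph|ceph/ceph'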

Installing rook-operator

Purpose

Exchanges messages with the k8s api server
There is exactly one operator pod in the whole k8s cluster
Drives the deployment, scheduling and monitoring of the entire ceph-rook stack
During ceph bootstrap it performs key tasks such as generating the mon key and the user auth keys
The operator also watches the health of the ceph pods

Deployment

Get the yaml file

https://github.com/rook/rook/blob/master/cluster/examples/kubernetes/ceph/operator.yaml
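
The file can be downloaded directly via the corresponding raw URL (the path mirrors the blob link above):

wget https://raw.githubusercontent.com/rook/rook/master/cluster/examples/kubernetes/ceph/operator.yaml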

Apply it:

[root@ns-yun-020065 ceph]# kubectl apply -f operator.yaml
namespace/rook-ceph-system created
customresourcedefinition.apiextensions.k8s.io/cephclusters.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephfilesystems.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephnfses.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephobjectstores.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephobjectstoreusers.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephblockpools.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/volumes.rook.io created
clusterrole.rbac.authorization.k8s.io/rook-ceph-cluster-mgmt created
role.rbac.authorization.k8s.io/rook-ceph-system created
clusterrole.rbac.authorization.k8s.io/rook-ceph-global created
clusterrole.rbac.authorization.k8s.io/rook-ceph-mgr-cluster created
serviceaccount/rook-ceph-system created
rolebinding.rbac.authorization.k8s.io/rook-ceph-system created
clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-global created
deployment.apps/rook-ceph-operator created

Pods created automatically by k8s:

[root@ns-yun-020065 ceph]# kubectl get pod -n rook-ceph-system
NAME                                 READY   STATUS    RESTARTS   AGE
rook-ceph-agent-4xxmx                1/1     Running   0          36s
rook-ceph-agent-6wlqg                1/1     Running   0          36s
rook-ceph-agent-bxng8                1/1     Running   0          36s
rook-ceph-agent-ffp6j                1/1     Running   0          36s
rook-ceph-agent-mbxn2                1/1     Running   0          36s
rook-ceph-agent-vq5q2                1/1     Running   0          36s
rook-ceph-operator-b996864dd-wtzt4   1/1     Running   0          37s
rook-discover-5lhwt                  1/1     Running   0          36s
rook-discover-7jq6c                  1/1     Running   0          36s
rook-discover-ggp6z                  1/1     Running   0          36s
rook-discover-lvgrq                  1/1     Running   0          36s
rook-discover-pfclj                  1/1     Running   0          36s
rook-discover-pmvvh                  1/1     Running   0          36s
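
To confirm that every node got exactly one agent pod and one discover pod, check the node placement with the wide output:

kubectl -n rook-ceph-system get pod -o wide | grep -E 'agent|discover'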

Pod descriptions

rook-discover

Every node runs a rook-discover pod
By default it rescans the local devices every 60 minutes
It writes the discovered device information into a kubernetes configmap (see the commands below)
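
The discovered device information can be inspected directly. In the rook version used here the configmaps follow the local-device-<node-name> naming pattern; treat that as version-dependent and verify the exact name with the first command:

kubectl -n rook-ceph-system get configmap
kubectl -n rook-ceph-system get configmap local-device-ns-storage-020104.vclound.com -o yaml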

rook-ceph-agent

Every node runs a rook-ceph-agent pod
It handles kubernetes volume attach and mount events

rook-ceph-operator

Exactly one rook-ceph-operator pod runs in the kubernetes cluster
It manages storage resources within kubernetes
It manages and monitors the ceph cluster pods; its log is the first place to look when troubleshooting, as shown below
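
If the cluster bring-up in the next section stalls, follow the operator log (the deployment name is the one created by operator.yaml above):

kubectl -n rook-ceph-system logs deploy/rook-ceph-operator -f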

Creating the rook ceph cluster

Get the yaml file

https://github.com/rook/rook/blob/master/cluster/examples/kubernetes/ceph/cluster.yaml

Reference yaml configuration (annotated inline with # comments):

apiVersion: v1
kind: Namespace
metadata:
  name: rook-ceph
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rook-ceph-osd
  namespace: rook-ceph
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rook-ceph-mgr
  namespace: rook-ceph
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: rook-ceph-osd
  namespace: rook-ceph
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: [ "get", "list", "watch", "create", "update", "delete" ]
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: rook-ceph-mgr-system
  namespace: rook-ceph
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - get
  - list
  - watch
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: rook-ceph-mgr
  namespace: rook-ceph
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - batch
  resources:
  - jobs
  verbs:
  - get
  - list
  - watch
  - create
  - update
  - delete
- apiGroups:
  - ceph.rook.io
  resources:
  - "*"
  verbs:
  - "*"
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: rook-ceph-cluster-mgmt
  namespace: rook-ceph
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: rook-ceph-cluster-mgmt
subjects:
- kind: ServiceAccount
  name: rook-ceph-system
  namespace: rook-ceph-system
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: rook-ceph-osd
  namespace: rook-ceph
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: rook-ceph-osd
subjects:
- kind: ServiceAccount
  name: rook-ceph-osd
  namespace: rook-ceph
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: rook-ceph-mgr
  namespace: rook-ceph
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: rook-ceph-mgr
subjects:
- kind: ServiceAccount
  name: rook-ceph-mgr
  namespace: rook-ceph
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: rook-ceph-mgr-system
  namespace: rook-ceph-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: rook-ceph-mgr-system
subjects:
- kind: ServiceAccount
  name: rook-ceph-mgr
  namespace: rook-ceph
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: rook-ceph-mgr-cluster
  namespace: rook-ceph
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: rook-ceph-mgr-cluster
subjects:
- kind: ServiceAccount
  name: rook-ceph-mgr
  namespace: rook-ceph
---
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: ceph/ceph:v13.2.2-20181023            # ceph version to deploy
    allowUnsupported: false
  dataDirHostPath: /var/lib/rook
  mon:
    count: 3                                     # number of ceph mons; 3 recommended, no more than 9
    allowMultiplePerNode: true
  dashboard:
    enabled: true                                # ceph dashboard; set to false to disable
  network:
    hostNetwork: false                           # using the host network for ceph is not recommended
  rbdMirroring:
    workers: 0
  resources:
  storage:
    useAllNodes: false                           # create ceph only on the nodes listed below
    useAllDevices: false                         # create ceph only on the devices listed below
    deviceFilter:
    location:
    config:
      storeType: bluestore                       # filestore or bluestore
      databaseSizeMB: "2048"
      journalSizeMB: "2048"
      osdsPerDevice: "1"                         # one osd per device is recommended
    nodes:
    - name: "ns-storage-020100.vclound.com"      <- node name
      location: rack=rack1                       <- crush map 定义
      resources:                                 <- 资源限制
        limits:
          cpu: "1000m"
          memory: "8192Mi"
        requests:
          cpu: "1000m"
          memory: "8192Mi"
      devices:
      - FullPath: ""
        name: "sdb"                              <- 定义使用 sdb 作一个独立 osd (其他同理, 不累赘描述)
        config: null
      - FullPath: ""
        name: "sdc"
        config: null
      - FullPath: ""
        name: "sdd"
        config: null
      - FullPath: ""
        name: "sde"
        config: null
      - FullPath: ""
        name: "sdf"
        config: null
    - name: "ns-storage-020101.vclound.com"
      location: rack=rack2
      resources:
        limits:
          cpu: "1000m"
          memory: "8192Mi"
        requests:
          cpu: "1000m"
          memory: "8192Mi"
      devices:
      - FullPath: ""
        name: "sdb"
        config: null
      - FullPath: ""
        name: "sdc"
        config: null
      - FullPath: ""
        name: "sde"
        config: null
      - FullPath: ""
        name: "sdf"
        config: null
      - FullPath: ""
        name: "sdd"
        config: null
    - name: "ns-storage-020102.vclound.com"
      location: rack=rack3
      resources:
        limits:
          cpu: "1000m"
          memory: "8192Mi"
        requests:
          cpu: "1000m"
          memory: "8192Mi"
      devices:
      - FullPath: ""
        name: "sdb"
        config: null
      - FullPath: ""
        name: "sdc"
        config: null
      - FullPath: ""
        name: "sdd"
        config: null
      - FullPath: ""
        name: "sde"
        config: null
      - FullPath: ""
        name: "sdf"
        config: null
    - name: "ns-storage-020104.vclound.com"
      location: rack=rack3
      resources:
        limits:
          cpu: "1000m"
          memory: "8192Mi"
        requests:
          cpu: "1000m"
          memory: "8192Mi"
      devices:
      - FullPath: ""
        name: "sdb"
        config: null
      - FullPath: ""
        name: "sdc"
        config: null
      - FullPath: ""
        name: "sdd"
        config: null
      - FullPath: ""
        name: "sde"
        config: null
      - FullPath: ""
        name: "sdf"
        config: null

Create the ceph/rook cluster by applying the edited file (saved locally here as cluster4.yaml):

kubectl apply -f cluster4.yaml
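
The mon, mgr and osd pods are then created in the rook-ceph namespace; progress can be watched with:

kubectl -n rook-ceph get pod -w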

toolbox

All of its functionality comes from the rook/ceph:master image
It can be scheduled on any kubernetes node
Its main purpose is to expose the ceph cli for routine ceph maintenance

Deployment (toolbox.yaml lives in the same examples directory as operator.yaml and cluster.yaml):

[root@ns-yun-020065 ceph]# kubectl apply -f toolbox.yaml
deployment.apps/rook-ceph-tools created
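
Once the toolbox pod is running, open a shell inside it to reach the ceph cli. The app=rook-ceph-tools label below is the one set by the upstream toolbox.yaml; adjust it if your copy differs:

TOOLS_POD=$(kubectl -n rook-ceph get pod -l app=rook-ceph-tools -o jsonpath='{.items[0].metadata.name}')
kubectl -n rook-ceph exec -it "$TOOLS_POD" -- bash
# inside the pod
ceph status
ceph osd tree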

Defining the crush map

It is defined through cluster.yaml (the location: rack=... entries above)
It can also be managed from the command line inside the toolbox pod (examples below)
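
From the toolbox, the crush hierarchy built from the rack=... locations can be inspected, and buckets can be moved by hand if needed; the move command is only an illustration:

ceph osd tree                                                   # show racks, hosts and osds
ceph osd crush move ns-storage-020104.vclound.com rack=rack3    # example: (re)place a host under a rack bucket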
