Ceph Container Deployment (Nautilus)

References
https://blog.csdn.net/wylfengyujiancheng/article/details/90576421

https://blog.csdn.net/u014534808/article/details/109159160

https://hub.docker.com/r/ceph/daemon

1. Node planning

192.168.8.0/24

192.168.8.11 node1 admin (runs mon, osd, mgr)
192.168.8.12 node2       (runs mon, osd)
192.168.8.13 node3       (runs mon, osd)

2. Configure /etc/hosts (on every node)

192.168.8.11 node1
192.168.8.12 node2
192.168.8.13 node3
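
Step 5 below relies on hostname resolution and passwordless SSH between the nodes. A minimal key-setup sketch, assuming everything is run as root (adjust the user if yours differs):

# On node1: generate a key pair and push the public key to the other nodes
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
ssh-copy-id root@node2
ssh-copy-id root@node3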

Install Docker on all nodes

# Install required system utilities
yum install -y yum-utils device-mapper-persistent-data lvm2
# Add the Docker CE yum repository
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Point the repository at the Aliyun mirror
sed -i 's+download.docker.com+mirrors.aliyun.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo
# Install Docker CE
yum -y install docker-ce

# Docker daemon configuration
vim /etc/docker/daemon.json
{
  "storage-driver": "overlay2",
  "insecure-registries": ["registry.access.redhat.com","quay.io","harbor.od.com"],
  "registry-mirrors": ["https://q2gr04ke.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "live-restore": true
}

# Start and enable the Docker service
systemctl start docker
systemctl enable docker
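
A quick check that the daemon picked up the storage and cgroup settings:

docker info | grep -E 'Storage Driver|Cgroup Driver'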

3. Pull the image (on every node)

The image tag selects the release: luminous (12.2.12) corresponds to Ceph 3, nautilus (14.2.x) to Ceph 4.

docker pull ceph/daemon:latest-nautilus
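
Verify the image is present on each node before proceeding:

docker images ceph/daemon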

4. Start the first mon on the admin node (node1)

docker run -d --net=host --name=mon --restart=always -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph -e MON_IP=192.168.8.11,192.168.8.12,192.168.8.13 -e CEPH_PUBLIC_NETWORK=192.168.8.0/24 ceph/daemon:latest-nautilus mon

Check the mon's listening ports (3300 is the v2 messenger port, 6789 the legacy v1 port), e.g. with ss -tlnp:

192.168.8.11:3300                  *:*                   users:(("ceph-mon"
192.168.8.11:6789                  *:*                   users:(("ceph-mon"

5. Copy the configuration and data files to the other two nodes

This step is critical. If mon is started on the other nodes without first copying the configuration and data files generated when mon was set up on the admin node, each node will bootstrap its own separate Ceph cluster, giving three one-node clusters instead of three mons in one cluster. (Hostnames and passwordless SSH were configured earlier, so scp can be used directly.)

If the OSD disk uses a filesystem other than XFS (for example ext4, whose extended-attribute limits restrict object name lengths), add the following to the configuration file:
vim /etc/ceph/ceph.conf

osd max object name len = 256
osd max object namespace len = 64

Then push the configuration and data files to the other nodes:

scp -r /etc/ceph node2:/etc/
scp -r /etc/ceph node3:/etc/

scp -r /var/lib/ceph node2:/var/lib/
scp -r /var/lib/ceph node3:/var/lib/
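
The ceph/daemon containers typically run the daemons as the ceph user; uid/gid 167 is the standard Ceph id, though this is an assumption here (check your image). If the mons on node2/node3 fail with permission errors after the copy, resetting ownership may help:

# ceph inside the container is typically uid/gid 167; adjust if your image differs
chown -R 167:167 /var/lib/ceph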

6. Start mon on the other nodes using the same command as in step 4, adjusting the IPs where your values differ

node2:

docker run -d --net=host --name=mon --restart=always -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph -e MON_IP=192.168.8.11,192.168.8.12,192.168.8.13 -e CEPH_PUBLIC_NETWORK=192.168.8.0/24 ceph/daemon:latest-nautilus mon

node3:

docker run -d --net=host --name=mon --restart=always -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph -e MON_IP=192.168.8.11,192.168.8.12,192.168.8.13 -e CEPH_PUBLIC_NETWORK=192.168.8.0/24 ceph/daemon:latest-nautilus mon
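
Once all three mons are running, it is worth confirming they formed a single quorum rather than three separate clusters:

# All nodes should report the same fsid and a quorum of node1,node2,node3
docker exec mon ceph mon stat
docker exec mon ceph -s | grep -E 'id:|mon:'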

7. Format and mount the OSD directory (node1)

mkfs.xfs /dev/sdb
mkdir /osd0
mount /dev/sdb  /osd0

# Get the filesystem UUID for /etc/fstab
blkid /dev/sdb
/dev/sdb: UUID="7858c245-076b-41f1-bc8a-6ae7dc2abae2" TYPE="xfs"

vim /etc/fstab

UUID=7858c245-076b-41f1-bc8a-6ae7dc2abae2 /osd0                       xfs     defaults        0 0
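
A quick check that the fstab entry is valid before any reboot:

mount -a          # re-reads /etc/fstab; silent on success
findmnt /osd0     # shows the device and filesystem backing /osd0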

If a dedicated disk is available, the osd_ceph_disk mode can be used instead: no formatting is needed, just pass the device name via OSD_DEVICE=/dev/sdb (see step 8).

8. Start the OSD service

Create the bootstrap-osd keyring (run on all three mon nodes):

docker exec -it mon ceph auth get client.bootstrap-osd -o /var/lib/ceph/bootstrap-osd/ceph.keyring

osd_directory mode:

docker run -d --net=host \
--name=osd \
--restart=always \
--privileged=true \
--pid=host \
-v /etc/ceph:/etc/ceph \
-v /var/lib/ceph:/var/lib/ceph \
-v /dev:/dev \
-v /osd0:/var/lib/ceph/osd \
ceph/daemon:latest-nautilus osd_directory
================
If a dedicated disk is available, use the osd_ceph_disk mode: no formatting is needed, just pass the device name via OSD_DEVICE=/dev/sdb

docker run -d --net=host \
--name=osd \
--restart=always \
--pid=host \
--privileged=true \
-v /etc/ceph:/etc/ceph \
-v /var/lib/ceph/:/var/lib/ceph/ \
-v /dev/:/dev/ \
-v /run/udev/:/run/udev/ \
-e OSD_DEVICE=/dev/sdb \
ceph/daemon:latest-nautilus osd_ceph_disk

================
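
Whichever mode is used, confirm the OSD came up and registered with the cluster:

docker logs osd | tail          # OSD bootstrap log
docker exec mon ceph osd stat   # e.g. "1 osds: 1 up, 1 in"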

9. Set up the OSDs on the other nodes following steps 7 and 8

node2:

mkfs.xfs /dev/sdb
mkdir /osd0
mount /dev/sdb  /osd0
echo "UUID=8c25af73-0862-4a80-b5b6-41179b340891 /osd0                       xfs     defaults        0 0" >> /etc/fstab
docker run -d --net=host \
--name=osd \
--restart=always \
--privileged=true \
--pid=host \
-v /etc/ceph:/etc/ceph \
-v /var/lib/ceph:/var/lib/ceph \
-v /dev:/dev \
-v /osd0:/var/lib/ceph/osd \
ceph/daemon:latest-nautilus osd_directory

node3:

mkfs.xfs /dev/sdb
mkdir /osd0
mount /dev/sdb  /osd0

echo "UUID=812c6668-f0f0-410a-8fe8-82bb347ad94a /osd0                       xfs     defaults        0 0" >> /etc/fstab
docker run -d --net=host \
--name=osd \
--restart=always \
--privileged=true \
--pid=host \
-v /etc/ceph:/etc/ceph \
-v /var/lib/ceph:/var/lib/ceph \
-v /dev:/dev \
-v /osd0:/var/lib/ceph/osd \
ceph/daemon:latest-nautilus osd_directory

10. Start mgr on node1

docker run -d --net=host \
--restart=always \
-v /etc/ceph:/etc/ceph \
-v /var/lib/ceph/:/var/lib/ceph/ \
ceph/daemon:latest-nautilus mgr

11. Create a pool

docker exec mon ceph osd pool create rbd 64
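
Nautilus also ships a PG autoscaler that can manage the PG count chosen above (64). Optionally enable it on the pool; a sketch:

docker exec mon ceph osd pool set rbd pg_autoscale_mode on
docker exec mon ceph osd pool ls detail   # verify the pool settings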

12. Configure the CRUSH map. Adjust the 0.15 weight according to the number of OSDs; the weights should sum to no more than 1.

docker exec mon ceph osd crush add osd.0 0.15 host=admin
docker exec mon ceph osd crush add osd.1 0.15 host=admin

Check the OSD tree:

docker exec mon ceph osd tree

13. Update the CRUSH map so all host buckets belong to root default

docker exec mon ceph osd crush move node1 root=default
docker exec mon ceph osd crush move node2 root=default
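
If the OSD tree also shows a node3 host bucket (this depends on how the OSDs registered above), move it the same way:

docker exec mon ceph osd crush move node3 root=default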

14. Check the cluster status

docker exec mon ceph -s

Check the Ceph version:

docker exec mon rpm -qa |grep ceph
ceph-mgr-14.2.16-0.el7.x86_64
ceph-base-14.2.16-0.el7.x86_64
ceph-osd-14.2.16-0.el7.x86_64
ceph-radosgw-14.2.16-0.el7.x86_64
ceph-grafana-dashboards-14.2.16-0.el7.noarch
ceph-mgr-k8sevents-14.2.16-0.el7.noarch
python-ceph-argparse-14.2.16-0.el7.x86_64
ceph-iscsi-3.4-1.el7.noarch
ceph-mgr-dashboard-14.2.16-0.el7.noarch
ceph-mon-14.2.16-0.el7.x86_64
ceph-release-1-1.el7.noarch
ceph-selinux-14.2.16-0.el7.x86_64
ceph-mgr-diskprediction-local-14.2.16-0.el7.noarch
nfs-ganesha-ceph-2.8.1.2-0.1.el7.x86_64
ceph-fuse-14.2.16-0.el7.x86_64
ceph-mds-14.2.16-0.el7.x86_64
python-cephfs-14.2.16-0.el7.x86_64
ceph-mgr-rook-14.2.16-0.el7.noarch
libcephfs2-14.2.16-0.el7.x86_64
ceph-common-14.2.16-0.el7.x86_64

15. Test the cluster
Test image creation and object listing against the block-storage pool; only if these succeed can the cluster installation be considered successful.

docker exec mon rbd create rbd/test-image --size 100M
docker exec mon rbd info rbd/test-image
docker exec mon rados -p rbd ls
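
Once the checks pass, the test image can be removed:

docker exec mon rbd rm rbd/test-image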

List the pools:

docker exec mon ceph osd pool ls

List the images in the pool:

docker exec mon rbd ls --pool rbd

Ceph warning: application not enabled on 1 pool(s)

docker exec mon ceph health detail
HEALTH_WARN application not enabled on 1 pool(s)
POOL_APP_NOT_ENABLED application not enabled on 1 pool(s)
    application not enabled on pool 'rbd'
    use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications

Fix: the pool created above holds RBD images, so enable the rbd application, as the warning itself suggests:

ceph osd pool application enable <pool-name> <app-name>
docker exec mon ceph osd pool application enable rbd rbd
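
The warning should clear once the application is enabled:

docker exec mon ceph health   # expect HEALTH_OK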

Source: blog.csdn.net/wuxingge/article/details/113882037