Installing a two-node Ceph 12 Luminous cluster on CentOS 7

1. Installation environment

CentOS 7

172.29.236.181    node1

172.29.236.182    node2

Ceph 12 Luminous

2. System configuration

vi /etc/hosts

172.29.236.181    node1

172.29.236.182    node2

Disable the firewall

systemctl stop firewalld

systemctl disable firewalld

Disable SELinux

vi /etc/selinux/config

SELINUX=disabled
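SELINUX=disabled only takes effect after a reboot. As a small optional check (not in the original steps), you can also turn enforcement off in the running session and verify the mode:

setenforce 0      # switch the running system to permissive mode
getenforce        # reports Permissive now, Disabled after the reboot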

Rename the NIC to eth0 and configure a static IP

vi /etc/default/grub

GRUB_CMDLINE_LINUX="net.ifnames=0 biosdevname=0"

sudo grub2-mkconfig -o /boot/grub2/grub.cfg

reboot

vi /etc/sysconfig/network-scripts/ifcfg-eth0

BOOTPROTO="static"
IPADDR=172.29.236.181

NETMASK=255.255.255.0

GATEWAY=172.29.236.1

Comment out the IPv6-related lines.
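For reference, the full ifcfg-eth0 on node1 might look roughly like the sketch below (node2 uses 172.29.236.182; the TYPE/DEVICE/NAME/ONBOOT keys and the restart command are additions to make the sketch self-contained):

TYPE=Ethernet
DEVICE=eth0
NAME=eth0
ONBOOT=yes
BOOTPROTO=static
IPADDR=172.29.236.181
NETMASK=255.255.255.0
GATEWAY=172.29.236.1

Apply the change with:

systemctl restart network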

Configure DNS

vi /etc/resolv.conf

nameserver 172.29.236.1

3. Install NTP

yum install ntp ntpdate ntp-doc
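After installing the packages, make sure ntpd is enabled and running on both nodes so the clocks stay in sync (Ceph monitors are sensitive to clock skew). A minimal sketch:

systemctl enable ntpd
systemctl start ntpd
ntpq -p      # lists the upstream servers ntpd is synchronizing against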

4. Add the Ceph yum repository

vi /etc/yum.repos.d/ceph.repo

[ceph]
name=ceph
baseurl=https://mirrors.aliyun.com/ceph/rpm-luminous/el7/x86_64/
enabled=1
gpgcheck=0

[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/noarch/
enabled=1
gpgcheck=0

[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/SRPMS
enabled=1
gpgcheck=0

5. Install epel-release and pip

yum clean all

yum makecache

yum install epel-release

yum install python-pip

pip install distribute

6. Add a user with passwordless sudo

useradd ceph

passwd ceph

visudo

Below the line root    ALL=(ALL)    ALL, add:

ceph    ALL=(ALL)    NOPASSWD: ALL

%wheel ALL=(ALL)    NOPASSWD: ALL
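On CentOS 7 the sudoers file may also contain Defaults requiretty, which breaks ceph-deploy because it runs sudo over a non-interactive SSH session. If that line is present, a common workaround (from the ceph-deploy preflight checklist) is to add an exception for the ceph user in visudo:

Defaults:ceph !requiretty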

7. Allow the deploy host to SSH to the other nodes without a password

su ceph

ssh-keygen -t rsa -P ''

ssh-copy-id ceph@node1

ssh-copy-id ceph@node2
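Optionally, add an ~/.ssh/config on the deploy host so that ceph-deploy logs in to the nodes as the ceph user without needing --username every time. A sketch using the hostnames above:

Host node1
    Hostname node1
    User ceph
Host node2
    Hostname node2
    User ceph

Then restrict its permissions: chmod 600 ~/.ssh/config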

8. Install ceph-deploy

yum install ceph-deploy

Create a directory for the cluster configuration files (ceph-deploy writes ceph.conf and the keyrings into the directory it is run from)
mkdir  /home/ceph/

cd  /home/ceph/

9. Install the Ceph cluster

ceph-deploy new node1

Edit ceph.conf and append:

osd pool default size=2
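After ceph-deploy new node1 and the edit above, ceph.conf should look roughly like the sketch below. The fsid is generated for your cluster, and the public network line is an optional addition (not in the original steps) that helps when the hosts have more than one interface:

[global]
fsid = <generated uuid>
mon_initial_members = node1
mon_host = 172.29.236.181
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd pool default size = 2
public network = 172.29.236.0/24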

ceph-deploy install node1 node2

Run sudo ceph -v to check that Ceph was installed successfully.

10. Create the monitor and OSDs

ceph-deploy mon create-initial

sudo ceph-disk prepare /dev/sdb    (run this on every node, against the disk that will become an OSD)

Distribute ceph.conf and ceph.client.admin.keyring:

ceph-deploy admin node1 node2

Distribute ceph.bootstrap-osd.keyring:

cp /home/ceph/ceph.bootstrap-osd.keyring /etc/ceph

scp /etc/ceph/ceph.bootstrap-osd.keyring  node2:/home/ceph

sudo ceph-disk activate /dev/sdb1 --activate-key /etc/ceph/ceph.bootstrap-osd.keyring

Do the same on node2 (copy the keyring from /home/ceph into /etc/ceph first, then run ceph-disk activate).
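To confirm that both disks came up as OSDs, sudo ceph osd tree should show one OSD under each host, both with STATUS up. The output will look roughly like this (the weights depend on your disk sizes):

ID CLASS WEIGHT  TYPE NAME      STATUS REWEIGHT PRI-AFF
-1       0.01949 root default
-3       0.00975     host node1
 0   hdd 0.00975         osd.0      up  1.00000 1.00000
-5       0.00975     host node2
 1   hdd 0.00975         osd.1      up  1.00000 1.00000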

11. Create the mgr

Run sudo ceph -s and confirm that all OSDs are in the normal up and in state.

ceph-deploy mgr create node1

sudo systemctl start ceph-mgr@node1

sudo systemctl enable ceph.target

The Ceph cluster is now fully deployed; run sudo ceph -s to confirm that everything is healthy.
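On this two-node setup, a healthy sudo ceph -s should report something roughly like the following (the cluster id and exact figures will differ):

  cluster:
    id:     <fsid>
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum node1
    mgr: node1(active)
    osd: 2 osds: 2 up, 2 in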

12. Test block storage

Create a new pool instead of using the default rbd pool

ceph osd pool create test 128
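The 128 here is the placement group (PG) count for the pool. A common rule of thumb is (number of OSDs x 100) / replica count, rounded up to the next power of two; with 2 OSDs and osd pool default size = 2 that gives 200 / 2 = 100, which rounds up to 128, so the value used above is a reasonable choice for this small cluster.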

Create an image

rbd create --size 10G disk01 --pool test

List the RBD images
rbd ls test

Check the image's features

rbd info --pool test disk01

The kernel RBD client does not support some of these features, so disable them and keep only layering

rbd --pool test feature disable disk01 exclusive-lock object-map fast-diff deep-flatten

Map disk01 to a local block device

rbd map --pool test disk01

Format the block device

mkfs.xfs /dev/rbd0

Mount rbd0 on a local directory

mount /dev/rbd0 /mnt
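As a quick sanity check (not in the original write-up), the mounted filesystem should show roughly 10G and be writable:

df -hT /mnt                                        # should show /dev/rbd0 as a ~10G xfs filesystem
dd if=/dev/zero of=/mnt/testfile bs=1M count=100   # small test write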

At this point, checking the cluster status shows HEALTH_WARN.

Run ceph health detail

Following its hint, run ceph osd pool application enable test rbd

The cluster status is healthy again.

13. Import a file as an RBD image (the file is imported into the pool; use rbd ls test to list all images)

rbd -p test import 7.tar 8.tar
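Here 7.tar is a local file and 8.tar becomes the name of the new image in the test pool. As a rough verification sketch (filenames below are illustrative, and assume the import preserved the file size exactly), you can export the image back out and compare checksums:

rbd -p test info 8.tar                 # image size should match the size of 7.tar
rbd -p test export 8.tar ./8.tar.out   # write the image back to a local file
md5sum 7.tar 8.tar.out                 # the two checksums should normally match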

14. Delete an image

rbd rm pool/image

For example: rbd rm test/disk01
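Note that an image which is still mapped and mounted (as disk01 is after step 12) cannot be removed cleanly; rbd rm will complain that the image still has watchers. Unmount and unmap it first:

umount /mnt
rbd unmap /dev/rbd0
rbd rm test/disk01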

15. Use the block device from QEMU

qemu-img info rbd:test/disk01

qemu-img create -f raw rbd:test/disk02 10G

qemu-img resize rbd:test/disk02 20G
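QEMU can also attach such an image directly to a virtual machine through its built-in rbd driver. A minimal sketch, assuming a qemu-system-x86_64 binary is available, cephx authentication is enabled (the default here) and the admin user is used; the memory size and other VM options are placeholders:

qemu-system-x86_64 -m 1024 \
  -drive format=raw,file=rbd:test/disk02:id=admin:conf=/etc/ceph/ceph.conf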

Original post: https://blog.csdn.net/greatyoulv/article/details/80039589

Please credit the original source when reposting.

