CentOS Ceph Cluster Setup (Single Node)

Software environment:

- CentOS 7 x64

Ceph versions:

- ceph-deploy v1.5.37

- ceph version 10.2.9 (Jewel)

Step 1. Set the hostname (i.e., the node name)

1) Remove any old HOSTNAME entry from /etc/sysconfig/network and add the new one:

sed -i '/HOSTNAME/d' /etc/sysconfig/network

echo "HOSTNAME=<hostname>" >> /etc/sysconfig/network

cat /etc/sysconfig/network

2) Map the node's IP address to its hostname in /etc/hosts:

echo "<IP-address> <hostname>" >> /etc/hosts

cat /etc/hosts
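As a concrete example, assuming the node's IP is 192.168.1.100 (a made-up address; substitute your own) and the hostname is cydb:

echo "192.168.1.100 cydb" >> /etc/hosts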

3) Apply the hostname for the current session and verify it:

hostname cydb

hostname -f

Reboot so the change is fully applied.
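Alternatively, on CentOS 7 hostnamectl sets the hostname both immediately and persistently in one step, avoiding the reboot; shown here with the example hostname cydb:

hostnamectl set-hostname cydb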

Step 2. Configure SSH

ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa

(-t rsa: generate an RSA key pair; -P '': empty passphrase; -f: where to save the key)

ssh-copy-id root@<hostname>
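To confirm key-based login works (no password prompt should appear), run a one-off command over SSH; cydb is the example hostname from step 1:

ssh root@cydb hostname -f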

Step 3. Configure the firewall

(Open port 6789 for the MON and ports 6800-7100 for the OSDs.)

firewall-cmd --zone=public --add-port=6789/tcp --permanent

firewall-cmd --zone=public --add-port=6800-7100/tcp --permanent

firewall-cmd --reload

firewall-cmd --zone=public --list-all
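If the rules took effect, the --list-all output should include a ports line like the following (surrounding fields omitted):

  ports: 6789/tcp 6800-7100/tcp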

Step 4. Disable SELinux

setenforce 0

sed -i 's/SELINUX.*=.*enforcing/SELINUX=disabled/g' /etc/selinux/config

cat /etc/selinux/config

You should see SELINUX=disabled.
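You can also confirm the runtime state with getenforce, which should print Permissive now (and Disabled after a reboot):

getenforce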

Step 5. Install the EPEL repository

yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm

Step 6. Add the ceph.repo file (register the Ceph package source with yum)

sudo vim /etc/yum.repos.d/ceph.repo

Write the following:

[ceph-noarch]

name=Ceph noarch packages

baseurl=https://download.ceph.com/rpm-jewel/el7/noarch

enabled=1

gpgcheck=1

type=rpm-md

gpgkey=https://download.ceph.com/keys/release.asc
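To confirm yum picked up the new repository, rebuild the metadata cache and look for it in the repo list (a quick sanity check, not part of the original steps):

sudo yum makecache

yum repolist | grep -i ceph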

Step 7. Install ceph-deploy (update the package index first)

sudo yum update

sudo yum install ceph-deploy

Verify the installation: ceph-deploy --help

Step 8. Deploy the cluster

mkdir /opt/ceph-cluster

cd /opt/ceph-cluster

ceph-deploy new <node-name>

(If you ls the current directory at this point, you will see a Ceph configuration file, a monitor keyring, and a log file.)

echo "osd crush chooseleaf type = 0" >> ceph.conf   # let CRUSH place replicas across OSDs on a single host

echo "osd pool default size = 1" >> ceph.conf       # keep a single replica; fine for a one-node test cluster

echo "osd journal size = 100" >> ceph.conf          # 100 MB journal, sized for the small 2 GB volumes below

cat ceph.conf
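After these edits, ceph.conf should look roughly like this; the fsid, mon_initial_members, and mon_host values are generated by ceph-deploy new and will differ on your machine:

[global]
fsid = <generated-uuid>
mon_initial_members = <node-name>
mon_host = <node-IP>
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd crush chooseleaf type = 0
osd pool default size = 1
osd journal size = 100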

Step 9. Install Ceph

Point ceph-deploy at a closer mirror (the official overseas source keeps timing out):

export CEPH_DEPLOY_REPO_URL=http://mirrors.163.com/ceph/rpm-jewel/el7

export CEPH_DEPLOY_GPG_URL=http://mirrors.163.com/ceph/keys/release.asc

Install Ceph:

ceph-deploy install <node-name>

If this errors out, run yum remove -y ceph-release and then install Ceph again.

Verify the installation: ceph --version

Step 10. Deploy the monitor

ceph-deploy mon create-initial

Check the cluster status: ceph -s

(It will report HEALTH_ERR at this point; the cluster stays unhealthy until OSDs are added.)

Step 11. Deploy two OSDs

1) Prepare two block devices (physical disks or LVM volumes). Here we use LVM logical volumes, backed by a loop device over a file image:

dd if=/dev/zero of=ceph-volumes.img bs=1M count=8192 oflag=direct

sgdisk -g --clear ceph-volumes.img

sudo vgcreate ceph-volumes $(sudo losetup --show -f ceph-volumes.img)

sudo lvcreate -L2G -nceph0 ceph-volumes

sudo lvcreate -L2G -nceph1 ceph-volumes

sudo mkfs.xfs -f /dev/ceph-volumes/ceph0

sudo mkfs.xfs -f /dev/ceph-volumes/ceph1

mkdir -p /srv/ceph/{osd0,osd1,mon0,mds0}

sudo mount /dev/ceph-volumes/ceph0 /srv/ceph/osd0

sudo mount /dev/ceph-volumes/ceph1 /srv/ceph/osd1

This creates two virtual disks, ceph0 and ceph1, mounted at /srv/ceph/osd0 and /srv/ceph/osd1 respectively.
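A quick sanity check (not in the original steps) that both logical volumes exist and are mounted:

sudo lvs ceph-volumes

df -h /srv/ceph/osd0 /srv/ceph/osd1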

2) Prepare the two OSDs:

ceph-deploy osd prepare monster:/srv/ceph/osd0

ceph-deploy osd prepare monster:/srv/ceph/osd1

3) Activate the two OSDs:

ceph-deploy osd activate monster:/srv/ceph/osd0

ceph-deploy osd activate monster:/srv/ceph/osd1

Possible error: RuntimeError: Failed to execute command: ceph-disk -v activate --mark-init upstart --mount /srv/ceph/osd0

Fix: run sudo chown ceph:ceph /srv/ceph/osd0 (the OSD daemon runs as the ceph user), then activate again.
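Putting the fix together for both directories, since osd1 hits the same permission problem, and re-running the activation in one go:

sudo chown ceph:ceph /srv/ceph/osd0 /srv/ceph/osd1

ceph-deploy osd activate monster:/srv/ceph/osd0 monster:/srv/ceph/osd1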

Step 12. Push the admin keyring to the node(s)

ceph-deploy admin monster

Verification:

Ceph installation status: ceph -s

Watch cluster activity live: ceph -w

Ceph monitor quorum status: ceph quorum_status --format json-pretty

ceph mon stat

ceph osd stat

ceph osd tree (shows the CRUSH hierarchy)

ceph pg stat

ceph auth list (the cluster's authentication keys)
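Once both OSDs are up and in, ceph -s should report something like the following (IDs and addresses will differ; the layout is approximate for Jewel):

    cluster <fsid>
     health HEALTH_OK
     monmap e1: 1 mons at {monster=<node-IP>:6789/0}
     osdmap e10: 2 osds: 2 up, 2 in
      pgmap v25: 64 pgs, 1 pools; 64 active+clean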

If you run into trouble and want to start over, the following commands clear the configuration:

ceph-deploy purgedata {ceph-node} [{ceph-node}]

ceph-deploy forgetkeys

The following command removes the Ceph packages as well:

ceph-deploy purge {ceph-node} [{ceph-node}]

After that, Ceph must be reinstalled.

Hope this helps.

  


Reposted from www.cnblogs.com/strivegys/p/9171373.html