Ceph Cluster Deployment and Installation

Hardware Environment

  1. Four nodes in total: ceph-master, ceph-node01, ceph-node02, ceph-node03
  2. Three 50G disks prepared on each node for use by Ceph
[root@localhost ~]# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0   20G  0 disk
├─sda1   8:1    0  300M  0 part /boot
├─sda2   8:2    0    2G  0 part [SWAP]
└─sda3   8:3    0 17.7G  0 part /
sdb      8:16   0   50G  0 disk
sdc      8:32   0   50G  0 disk
sdd      8:48   0   50G  0 disk
sr0     11:0    1 1024M  0 rom

Environment Initialization

  1. Disable SELinux
    sed -i "/^SELINUX=/s/enforcing/disabled/" /etc/selinux/config
    setenforce 0
    getenforce    # verify: should report Permissive now, Disabled after a reboot

  2. Disable the firewall
    systemctl stop firewalld && systemctl disable firewalld

  3. Configure NTP for time synchronization

yum -y install ntpdate ntp
ntpdate cn.ntp.org.cn
systemctl restart ntpd  && systemctl enable ntpd
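
If the monitor clocks drift apart, ceph -s will later flag a clock-skew warning (it shows up in the output further down), so it is worth confirming on each node that ntpd is actually syncing. A minimal check, using the same server as above:
# Run on each node; the selected upstream peer is marked with '*'
ntpq -p
# If the offset is still large, force a one-shot sync before restarting ntpd
systemctl stop ntpd && ntpdate cn.ntp.org.cn && systemctl start ntpd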
  4. Configure sudo so that it does not require a tty
sed -i 's/^Defaults.*requiretty/#&/' /etc/sudoers
  5. Configure the hosts file on every node
cat /etc/hosts
192.168.47.144 ceph-node01
192.168.47.145 ceph-node02
192.168.47.146 ceph-node03
192.168.47.147 ceph-master
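
The same entries have to exist in /etc/hosts on all four machines. A minimal sketch for pushing the file out from ceph-master (this assumes root SSH access is still available at this stage; the IP list matches the entries above):
# Hypothetical helper, run from ceph-master: copy /etc/hosts to the other nodes
for ip in 192.168.47.144 192.168.47.145 192.168.47.146; do
    scp /etc/hosts root@$ip:/etc/hosts
done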

Ceph Environment Deployment

  1. Configure the yum repository
$ cat /etc/yum.repos.d/ceph.repo
[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.aliyun.com/ceph/rpm-mimic/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc

[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-mimic/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc

[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-mimic/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc

# Refresh the yum cache after adding the repo file
yum clean all; yum makecache fast
  2. Install the dependency packages on every node
yum install -y yum-utils && yum-config-manager --add-repo https://dl.fedoraproject.org/pub/epel/7/x86_64/ && yum install --nogpgcheck -y epel-release && rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 && rm -f /etc/yum.repos.d/dl.fedoraproject.org*
  3. Create the cephadm user on each node and grant it passwordless sudo
useradd cephadm
echo 'kl' | passwd --stdin cephadm
echo 'cephadm ALL = (root) NOPASSWD:ALL' | sudo tee /etc/sudoers.d/cephadm
chmod 0440 /etc/sudoers.d/cephadm
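
These four commands need to run on every node, not only on the master. If root SSH access between the nodes is still available, a sketch like the following (the node list is an assumption based on the hosts file above) avoids repeating them by hand:
# Hypothetical helper: create the cephadm user with passwordless sudo on each node
for node in ceph-master ceph-node01 ceph-node02 ceph-node03; do
    ssh root@$node "useradd cephadm; echo 'kl' | passwd --stdin cephadm; \
        echo 'cephadm ALL = (root) NOPASSWD:ALL' > /etc/sudoers.d/cephadm; \
        chmod 0440 /etc/sudoers.d/cephadm"
done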
  4. On the admin node, configure passwordless SSH from the cephadm user to every node
su - cephadm
ssh-keygen -t rsa -P ''
ssh-copy-id cephadm@ceph-master
ssh-copy-id cephadm@ceph-node01
ssh-copy-id cephadm@ceph-node02
ssh-copy-id cephadm@ceph-node03
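
Optionally, an SSH client config on the admin node lets ceph-deploy log in to each host as cephadm without passing a username every time. A small sketch (host names match the hosts file above):
# As cephadm on the admin node
cat >> ~/.ssh/config <<'EOF'
Host ceph-master ceph-node01 ceph-node02 ceph-node03
    User cephadm
EOF
chmod 600 ~/.ssh/config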
  5. Install ceph-deploy on the admin node
sudo yum install -y ceph-deploy python-pip
  6. Install the Ceph packages on the master node and on all node machines
sudo yum install -y ceph ceph-radosgw

# Once this step finishes, run the following command to check that Ceph installed successfully
ceph --version
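
With passwordless SSH already in place, a quick loop from the admin node confirms the packages landed everywhere (node names as above):
for node in ceph-master ceph-node01 ceph-node02 ceph-node03; do
    echo -n "$node: "; ssh cephadm@$node ceph --version
done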
  7. Create the Ceph cluster
    For each of the following steps, pay attention to (1) which user runs the command and (2) on which nodes it is run.
# Run on the ceph-master node
# Create the cluster definition
# using the cephadm user
$ mkdir my-cluster
$ cd my-cluster
$ ceph-deploy new ceph-master ceph-node01 ceph-node02 ceph-node03
# Generate the keyring files
$ ceph-deploy mon create-initial
# Push the configuration and the client.admin keyring to the remote hosts
# Whenever the Ceph configuration file changes, this command can be used to push it to all nodes again
$ ceph-deploy admin ceph-master ceph-node01 ceph-node02 ceph-node03
# Run this on every node as root
# ceph.client.admin.keyring is the keyring file required by the ceph command-line tools
# On any node where the cephadm user will run the command-line tools, cephadm needs read access to this file, so this step is required
# Without it, the ceph command cannot be run outside of sudo
$ setfacl -m u:cephadm:r /etc/ceph/ceph.client.admin.keyring

# ceph -s              # check the cluster status
[root@ceph-master ~]# ceph -s
  cluster:
    id:     7d949a14-5045-46b3-b3a3-79cbb3f26129
    health: HEALTH_WARN
            clock skew detected on mon.ceph-node02, mon.ceph-master

  services:
    mon: 3 daemons, quorum ceph-node01,ceph-node02,ceph-master
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0B
    usage:   0B used, 0B / 0B avail
    pgs:
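
The HEALTH_WARN above is exactly the clock-skew problem the NTP step was meant to prevent. Re-syncing time on the flagged monitor nodes usually clears it after a minute or two (a sketch; run it on the nodes named in the warning):
# On each monitor flagged for clock skew
sudo systemctl stop ntpd
sudo ntpdate cn.ntp.org.cn
sudo systemctl start ntpd
# Then re-check from the admin node
ceph -s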
  8. Configure the OSDs
# Wipe the disks on the OSD nodes that will be used as OSD devices
# Create the OSDs
# Check that the device names below match your own disks
# This step is still run from the my-cluster directory as the cephadm user
for dev in /dev/sdb /dev/sdc /dev/sdd
do
ceph-deploy disk zap ceph-master $dev
ceph-deploy osd create ceph-master --data $dev
ceph-deploy disk zap ceph-node01 $dev
ceph-deploy osd create ceph-node01 --data $dev
ceph-deploy disk zap ceph-node02 $dev
ceph-deploy osd create ceph-node02 --data $dev
ceph-deploy disk zap ceph-node03 $dev
ceph-deploy osd create ceph-node03 --data $dev
done

After the OSDs have been deployed successfully:

[cephadm@ceph-master my-cluster]$ ceph osd tree
ID CLASS WEIGHT  TYPE NAME            STATUS REWEIGHT PRI-AFF
-1       0.57477 root default
-3       0.14369     host ceph-master
 0   hdd 0.04790         osd.0            up  1.00000 1.00000
 4   hdd 0.04790         osd.4            up  1.00000 1.00000
 8   hdd 0.04790         osd.8            up  1.00000 1.00000
-5       0.14369     host ceph-node01
 1   hdd 0.04790         osd.1            up  1.00000 1.00000
 5   hdd 0.04790         osd.5            up  1.00000 1.00000
 9   hdd 0.04790         osd.9            up  1.00000 1.00000
-7       0.14369     host ceph-node02
 2   hdd 0.04790         osd.2            up  1.00000 1.00000
 6   hdd 0.04790         osd.6            up  1.00000 1.00000
10   hdd 0.04790         osd.10           up  1.00000 1.00000
-9       0.14369     host ceph-node03
 3   hdd 0.04790         osd.3            up  1.00000 1.00000
 7   hdd 0.04790         osd.7            up  1.00000 1.00000
11   hdd 0.04790         osd.11           up  1.00000 1.00000
  9. Deploy the mgr daemons
 ceph-deploy mgr create ceph-master ceph-node01 ceph-node02 ceph-node03
  10. Enable the dashboard module for the web UI
ceph mgr module enable dashboard

Open http://192.168.47.144:7000/ to check whether the deployment is complete.
[Screenshot: Ceph deployment status in the dashboard]
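
Depending on the Ceph release, the dashboard may need a bit more configuration before it answers on that address, and the default port is not always 7000. A sketch of the usual extra steps on Mimic (the address, port, and credentials below are example values only):
# Bind the dashboard to an address/port of your choice (example values)
ceph config set mgr mgr/dashboard/server_addr 0.0.0.0
ceph config set mgr mgr/dashboard/server_port 7000
# Mimic's dashboard also expects a certificate and a login user
ceph dashboard create-self-signed-cert
ceph dashboard set-login-credentials admin admin123
# Restart the module so the settings take effect
ceph mgr module disable dashboard
ceph mgr module enable dashboard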

Questions

  1. How can the yum install -y ceph step be made faster?
    If you know a good way, please leave a comment; much appreciated :)

  2. What does the ceph -s command do?
    Answer: it shows the cluster status.

[root@ceph-master cephadm]# ceph -s
  cluster:
    id:     edf2dff4-adf1-4274-8a17-367231d7d60d
    health: HEALTH_OK										# cluster health status

  services:
    mon: 3 daemons, quorum ceph-node03,ceph-master,ceph-node01
    mgr: ceph-master(active), standbys: ceph-node03, ceph-node01
    osd: 9 osds: 9 up, 9 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0B
    usage:   9.04GiB used, 441GiB / 450GiB avail			# space used and space remaining in the cluster
    pgs:

Reproduced from blog.csdn.net/u012720518/article/details/105460851