Changing the mon IP addresses in Ceph (Luminous)

Overview

The current Ceph network layout is as follows:

public network  = xx.xxx.208.0/22    <external network, used to talk to clients>
cluster network = xxx.xx.128.0/21    <internal network, used for OSD-to-OSD traffic>

Goal

Swap the two networks:

public network  = xxx.xx.128.0/21    <external network, used to talk to clients>
cluster network = xx.xxx.208.0/22    <internal network, used for OSD-to-OSD traffic>

Notes

When creating the cluster or adding nodes, remember to format the journal disks (/dev/sdk1-5); skipping this triggers a bug
The whole Ceph cluster has to be taken offline for this change
Back up the original ceph mon configuration beforehand so you can roll back (a minimal sketch follows)
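
A minimal backup sketch for the last point; the /root/ceph-mon-backup path is an arbitrary choice, not part of the original procedure:

mkdir -p /root/ceph-mon-backup
# keep a pristine copy of the config file and of the current monmap while the mons are still up
cp /etc/ceph/ceph.conf /root/ceph-mon-backup/ceph.conf.orig
ceph mon getmap -o /root/ceph-mon-backup/monmap.orig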

Ceph environment

Topology

[root@ns-ceph-208214 ceph]# ceph osd tree
ID CLASS WEIGHT    TYPE NAME               STATUS REWEIGHT PRI-AFF
-1       240.00000 root default
-2        60.00000     host ns-ceph-208213
 0   hdd   6.00000         osd.0               up  1.00000 1.00000
 1   hdd   6.00000         osd.1               up  1.00000 1.00000
 2   hdd   6.00000         osd.2               up  1.00000 1.00000
 3   hdd   6.00000         osd.3               up  1.00000 1.00000
 4   hdd   6.00000         osd.4               up  1.00000 1.00000
 5   hdd   6.00000         osd.5               up  1.00000 1.00000
 6   hdd   6.00000         osd.6               up  1.00000 1.00000
 7   hdd   6.00000         osd.7               up  1.00000 1.00000
 8   hdd   6.00000         osd.8               up  1.00000 1.00000
 9   hdd   6.00000         osd.9               up  1.00000 1.00000
-3        60.00000     host ns-ceph-208214
10   hdd   6.00000         osd.10              up  1.00000 1.00000
11   hdd   6.00000         osd.11              up  1.00000 1.00000
12   hdd   6.00000         osd.12              up  1.00000 1.00000
13   hdd   6.00000         osd.13              up  1.00000 1.00000
14   hdd   6.00000         osd.14              up  1.00000 1.00000
15   hdd   6.00000         osd.15              up  1.00000 1.00000
16   hdd   6.00000         osd.16              up  1.00000 1.00000
17   hdd   6.00000         osd.17              up  1.00000 1.00000
18   hdd   6.00000         osd.18              up  1.00000 1.00000
19   hdd   6.00000         osd.19              up  1.00000 1.00000
-4        60.00000     host ns-ceph-208215
20   hdd   6.00000         osd.20              up  1.00000 1.00000
21   hdd   6.00000         osd.21              up  1.00000 1.00000
22   hdd   6.00000         osd.22              up  1.00000 1.00000
23   hdd   6.00000         osd.23              up  1.00000 1.00000
24   hdd   6.00000         osd.24              up  1.00000 1.00000
25   hdd   6.00000         osd.25              up  1.00000 1.00000
26   hdd   6.00000         osd.26              up  1.00000 1.00000
27   hdd   6.00000         osd.27              up  1.00000 1.00000
28   hdd   6.00000         osd.28              up  1.00000 1.00000
29   hdd   6.00000         osd.29              up  1.00000 1.00000
-5        60.00000     host ns-ceph-208216
30   hdd   6.00000         osd.30              up  1.00000 1.00000
31   hdd   6.00000         osd.31              up  1.00000 1.00000
32   hdd   6.00000         osd.32              up  1.00000 1.00000
33   hdd   6.00000         osd.33              up  1.00000 1.00000
34   hdd   6.00000         osd.34              up  1.00000 1.00000
35   hdd   6.00000         osd.35              up  1.00000 1.00000
36   hdd   6.00000         osd.36              up  1.00000 1.00000
37   hdd   6.00000         osd.37              up  1.00000 1.00000
38   hdd   6.00000         osd.38              up  1.00000 1.00000
39   hdd   6.00000         osd.39              up  1.00000 1.00000

Cluster health

[root@ns-ceph-208214 ceph]# ceph -s
    cluster:
        id:     xxxxxxx-4229-4dec-9e75-46f665bc4620
        health: HEALTH_OK
    services:
        mon: 3 daemons, quorum ns-ceph-208214,ns-ceph-208215,ns-ceph-208216
        mgr: openstack(active)
        osd: 40 osds: 40 up, 40 in
    data:
        pools:   1 pools, 2048 pgs
        objects: 2702 objects, 10740 MB
        usage:   55426 MB used, 218 TB / 218 TB avail
        pgs:     2048 active+clean

Check the current network configuration

mon configuration

[root@ns-ceph-208213 ~]# ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show  | grep -E 'mon_host|mon_initial_members'
        "mon_host": "xx.xxx.208.214,xx.xxx.208.215,xx.xxx.208.216",
        "mon_initial_members": "ns-ceph-208214,ns-ceph-208215,ns-ceph-208216",

Public and cluster network configuration

[root@ns-ceph-208213 ~]# ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show  | grep network
        "cluster_network": "xxx.xx.128.0/21",
        "public_network": "xx.xxx.208.0/22",

Address configuration

[root@ns-ceph-208213 ~]# ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show  | grep xx.xxx.
        "mon_host": "xx.xxx.208.214,xx.xxx.208.215,xx.xxx.208.216",
        "public_addr": "xx.xxx.208.213:0/0",
        "public_network": "xx.xxx.208.0/22",
[root@ns-ceph-208213 ~]# ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show  | grep xxx.xx
        "cluster_addr": "xxx.xx.132.213:0/0",
        "cluster_network": "xxx.xx.128.0/21",

Steps to change the mon IP addresses

Make sure the ceph mons are healthy and back up the existing mon configuration (a quick check is sketched below)
Export the mon map and modify it
Shut down the whole cluster and inject the modified mon map
Update the ceph configuration file
Restart the cluster
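
A quick pre-flight check for the first step, run from any mon node (standard ceph CLI calls, output omitted):

ceph -s         # overall status should report HEALTH_OK
ceph mon stat   # all three mons should be listed and in quorum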

Export the ceph mon map

[root@ns-ceph-208214 ~]# ceph mon getmap -o /tmp/changemon       <- remember to back up this file

[root@ns-ceph-208214 ~]# monmaptool --print /tmp/changemon
monmaptool: monmap file /tmp/changemon
epoch 1
fsid xxxxxxx-4229-4dec-9e75-46f665bc4620
last_changed 2018-05-14 10:33:50.570920
created 2018-05-14 10:33:50.570920
0: xx.xxx.208.214:6789/0 mon.ns-ceph-208214
1: xx.xxx.208.215:6789/0 mon.ns-ceph-208215
2: xx.xxx.208.216:6789/0 mon.ns-ceph-208216

Modify the mon map

Remove the existing mons

[root@ns-ceph-208214 ~]# monmaptool --rm ns-ceph-208214 --rm ns-ceph-208215 --rm ns-ceph-208216 /tmp/changemon
monmaptool: monmap file /tmp/changemon
monmaptool: removing ns-ceph-208214
monmaptool: removing ns-ceph-208215
monmaptool: removing ns-ceph-208216
monmaptool: writing epoch 1 to /tmp/changemon (0 monitors)

Add the mons back with the new addresses

[root@ns-ceph-208214 ~]# monmaptool --add ns-ceph-208214 xxx.xx.132.214:6789 --add ns-ceph-208215 xxx.xx.132.215:6789 --add ns-ceph-208216 xxx.xx.132.216:6789 /tmp/changemon
monmaptool: monmap file /tmp/changemon
monmaptool: writing epoch 1 to /tmp/changemon (3 monitors)

Verify the new map

[root@ns-ceph-208214 ~]# monmaptool --print /tmp/changemon
monmaptool: monmap file /tmp/changemon
epoch 1
fsid xxxxxxx-4229-4dec-9e75-46f665bc4620
last_changed 2018-05-14 10:33:50.570920
created 2018-05-14 10:33:50.570920
0: xxx.xx.132.214:6789/0 mon.ns-ceph-208214
1: xxx.xx.132.215:6789/0 mon.ns-ceph-208215
2: xxx.xx.132.216:6789/0 mon.ns-ceph-208216

Inject the new mon map

Make sure the ceph cluster is fully stopped; check the ceph-osd, ceph-mon and ceph-mgr services (as sketched below)
Then inject the new map on each mon node
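
For the first point, a minimal sketch for stopping and verifying the daemons on every node, assuming the standard systemd targets shipped with Luminous:

systemctl stop ceph-mgr.target ceph-osd.target ceph-mon.target
systemctl is-active ceph-mgr.target ceph-osd.target ceph-mon.target   # expect "inactive" for each
ps -ef | grep '[c]eph-'                                               # no ceph daemons should be left running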

[root@ns-ceph-208214 ~]# ceph-mon -i ns-ceph-208214 --inject-monmap /tmp/changemon
[root@ns-ceph-208215 ~]# ceph-mon -i ns-ceph-208215 --inject-monmap /tmp/changemon
[root@ns-ceph-208216 ~]# ceph-mon -i ns-ceph-208216 --inject-monmap /tmp/changemon

Update the configuration file

Remember to back up the /etc/ceph/ceph.conf file first
Update the mon addresses and swap the public/cluster network subnets:

mon initial members = ns-ceph-208214,ns-ceph-208215,ns-ceph-208216
mon host = xxx.xx.132.214,xxx.xx.132.215,xxx.xx.132.216
public network = xxx.xx.128.0/21
cluster network = xx.xxx.208.0/22
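
A quick sanity check after editing; the backup path below is the one assumed in the notes section above:

diff -u /root/ceph-mon-backup/ceph.conf.orig /etc/ceph/ceph.conf
grep -E 'mon host|public network|cluster network' /etc/ceph/ceph.conf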

Restart the cluster

Sync /etc/ceph/ceph.conf to all mon and osd nodes (omitted; a sketch follows)
Start the ceph-mon, ceph-osd and ceph-mgr services (omitted; see the same sketch)
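
A minimal sketch covering both omitted steps, assuming password-less ssh from the admin node and the standard systemd targets (host names taken from the osd tree above):

for h in ns-ceph-208213 ns-ceph-208214 ns-ceph-208215 ns-ceph-208216; do
    scp /etc/ceph/ceph.conf ${h}:/etc/ceph/ceph.conf
done
# then, on every node, bring the daemons back in this order
systemctl start ceph-mon.target
systemctl start ceph-osd.target
systemctl start ceph-mgr.target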

Verification

Ceph cluster health

[root@ns-ceph-208216 ~]# ceph -s
    cluster:
        id:     xxxxxxx-4229-4dec-9e75-46f665bc4620
        health: HEALTH_OK
    services:
        mon: 3 daemons, quorum ns-ceph-208214,ns-ceph-208215,ns-ceph-208216
        mgr: openstack(active)
        osd: 40 osds: 40 up, 40 in
    data:
        pools:   1 pools, 2048 pgs
        objects: 2702 objects, 10740 MB
        usage:   36747 MB used, 218 TB / 218 TB avail
        pgs:     2048 active+clean
    io:
        client:   98 B/s rd, 0 op/s rd, 0 op/s wr
        recovery: 1974 kB/s, 0 objects/s

Check the ceph network configuration

[root@ns-ceph-208213 ~]# ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show  | grep -E 'mon_host|mon_initial_members'
        "mon_host": "xxx.xx.132.214,xxx.xx.132.215,xxx.xx.132.216",
        "mon_initial_members": "ns-ceph-208214,ns-ceph-208215,ns-ceph-208216",

[root@ns-ceph-208213 ~]# ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show  | grep network
        "cluster_network": "xx.xxx.208.0/22",
        "public_network": "xxx.xx.128.0/21",

[root@ns-ceph-208213 ~]# ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show  | grep xx.xxx.
        "cluster_addr": "xx.xxx.208.213:0/0",
        "cluster_network": "xx.xxx.208.0/22",

[root@ns-ceph-208213 ~]# ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show  | grep xxx.xx
        "mon_host": "xxx.xx.132.214,xxx.xx.132.215,xxx.xx.132.216",
        "public_addr": "xxx.xx.132.213:0/0",
        "public_network": "xxx.xx.128.0/21",

Verify that the ceph data, etc. are all intact [omitted]

Reposted from blog.csdn.net/signmem/article/details/80312345