ceph-deploy osd activate xxx bluestore ERROR


Adding an OSD with BlueStore on Ceph Luminous 12.2.0 fails with the following error:


[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.38): /usr/bin/ceph-deploy --overwrite-conf --ceph-conf /etc/ceph/ceph.conf osd activate node-2:/dev/disk/by-partuuid/3ed2c4be-9183-4a97-a87e-2a9bda4510b3:/dev/disk/by-partuuid/8fb82b3e-0e9f-4b4e-ac2f-24aa99c428c5
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : True
[ceph_deploy.cli][INFO  ]  subcommand                    : activate
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x1705290>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x1698de8>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : /etc/ceph/ceph.conf
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  disk                          : [('node-2', '/dev/disk/by-partuuid/3ed2c4be-9183-4a97-a87e-2a9bda4510b3', '/dev/disk/by-partuuid/8fb82b3e-0e9f-4b4e-ac2f-24aa99c428c5')]
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks node-2:/dev/disk/by-partuuid/3ed2c4be-9183-4a97-a87e-2a9bda4510b3:/dev/disk/by-partuuid/8fb82b3e-0e9f-4b4e-ac2f-24aa99c428c5
[node-2][DEBUG ] connected to host: node-2 
[node-2][DEBUG ] detect platform information from remote host
[node-2][DEBUG ] detect machine type
[node-2][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.3.1611 Core
[ceph_deploy.osd][DEBUG ] activating host node-2 disk /dev/disk/by-partuuid/3ed2c4be-9183-4a97-a87e-2a9bda4510b3
[ceph_deploy.osd][DEBUG ] will use init type: systemd
[node-2][DEBUG ] find the location of an executable
[node-2][INFO  ] Running command: /usr/sbin/ceph-disk -v activate --mark-init systemd --mount /dev/disk/by-partuuid/3ed2c4be-9183-4a97-a87e-2a9bda4510b3
[node-2][WARNIN] main_activate: path = /dev/disk/by-partuuid/3ed2c4be-9183-4a97-a87e-2a9bda4510b3
[node-2][WARNIN] get_dm_uuid: get_dm_uuid /dev/disk/by-partuuid/3ed2c4be-9183-4a97-a87e-2a9bda4510b3 uuid path is /sys/dev/block/65:37/dm/uuid
[node-2][WARNIN] command: Running command: /usr/sbin/blkid -o udev -p /dev/disk/by-partuuid/3ed2c4be-9183-4a97-a87e-2a9bda4510b3
[node-2][WARNIN] command: Running command: /sbin/blkid -p -s TYPE -o value -- /dev/disk/by-partuuid/3ed2c4be-9183-4a97-a87e-2a9bda4510b3
[node-2][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[node-2][WARNIN] mount: Mounting /dev/disk/by-partuuid/3ed2c4be-9183-4a97-a87e-2a9bda4510b3 on /var/lib/ceph/tmp/mnt.xOcLiw with options rw,noexec,nodev,noatime,nodiratime,nobarrier,logbsize=256k
[node-2][WARNIN] command_check_call: Running command: /usr/bin/mount -t xfs -o rw,noexec,nodev,noatime,nodiratime,nobarrier,logbsize=256k -- /dev/disk/by-partuuid/3ed2c4be-9183-4a97-a87e-2a9bda4510b3 /var/lib/ceph/tmp/mnt.xOcLiw
[node-2][WARNIN] command: Running command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.xOcLiw
[node-2][WARNIN] activate: Cluster uuid is 3d292754-fcdc-4144-8120-c2883f0ff0a3
[node-2][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[node-2][WARNIN] activate: Cluster name is ceph
[node-2][WARNIN] activate: OSD uuid is 153f5c9c-4259-41dc-b883-57b4c1ce9f3b
[node-2][WARNIN] activate: OSD id is 44
[node-2][WARNIN] activate: Initializing OSD...
[node-2][WARNIN] command_check_call: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/tmp/mnt.xOcLiw/activate.monmap
[node-2][WARNIN] got monmap epoch 1
[node-2][WARNIN] command_check_call: Running command: /usr/bin/ceph-osd --cluster ceph --mkfs -i 44 --monmap /var/lib/ceph/tmp/mnt.xOcLiw/activate.monmap --osd-data /var/lib/ceph/tmp/mnt.xOcLiw --osd-uuid 153f5c9c-4259-41dc-b883-57b4c1ce9f3b --setuser ceph --setgroup ceph
[node-2][WARNIN] 2017-08-30 13:45:29.602611 7fdb4c382d00 -1 bdev(0x7fdb5746b000 /var/lib/ceph/tmp/mnt.xOcLiw/block.wal) _aio_start io_setup(2) failed with EAGAIN; try increasing /proc/sys/fs/aio-max-nr
[node-2][WARNIN] 2017-08-30 13:45:29.602671 7fdb4c382d00 -1 bluestore(/var/lib/ceph/tmp/mnt.xOcLiw) _open_db add block device(/var/lib/ceph/tmp/mnt.xOcLiw/block.wal) returned: (11) Resource temporarily unavailable
[node-2][WARNIN] 2017-08-30 13:45:30.359153 7fdb4c382d00 -1 bluestore(/var/lib/ceph/tmp/mnt.xOcLiw) mkfs failed, (11) Resource temporarily unavailable
[node-2][WARNIN] 2017-08-30 13:45:30.359189 7fdb4c382d00 -1 OSD::mkfs: ObjectStore::mkfs failed with error (11) Resource temporarily unavailable
[node-2][WARNIN] 2017-08-30 13:45:30.359347 7fdb4c382d00 -1  ** ERROR: error creating empty object store in /var/lib/ceph/tmp/mnt.xOcLiw: (11) Resource temporarily unavailable
[node-2][WARNIN] mount_activate: Failed to activate
[node-2][WARNIN] unmount: Unmounting /var/lib/ceph/tmp/mnt.xOcLiw
[node-2][WARNIN] command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.xOcLiw
[node-2][WARNIN] Traceback (most recent call last):
[node-2][WARNIN]   File "/usr/sbin/ceph-disk", line 9, in <module>
[node-2][WARNIN]     load_entry_point('ceph-disk==1.0.0', 'console_scripts', 'ceph-disk')()
[node-2][WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 5704, in run
[node-2][WARNIN]     main(sys.argv[1:])
[node-2][WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 5655, in main
[node-2][WARNIN]     args.func(args)
[node-2][WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 3759, in main_activate
[node-2][WARNIN]     reactivate=args.reactivate,
[node-2][WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 3522, in mount_activate
[node-2][WARNIN]     (osd_id, cluster) = activate(path, activate_key_template, init)
[node-2][WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 3699, in activate
[node-2][WARNIN]     keyring=keyring,
[node-2][WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 3151, in mkfs
[node-2][WARNIN]     '--setgroup', get_ceph_group(),
[node-2][WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 558, in command_check_call
[node-2][WARNIN]     return subprocess.check_call(arguments)
[node-2][WARNIN]   File "/usr/lib64/python2.7/subprocess.py", line 542, in check_call
[node-2][WARNIN]     raise CalledProcessError(retcode, cmd)
[node-2][WARNIN] subprocess.CalledProcessError: Command '['/usr/bin/ceph-osd', '--cluster', 'ceph', '--mkfs', '-i', u'44', '--monmap', '/var/lib/ceph/tmp/mnt.xOcLiw/activate.monmap', '--osd-data', '/var/lib/ceph/tmp/mnt.xOcLiw', '--osd-uuid', u'153f5c9c-4259-41dc-b883-57b4c1ce9f3b', '--setuser', 'ceph', '--setgroup', 'ceph']' returned non-zero exit status 1
[node-2][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: /usr/sbin/ceph-disk -v activate --mark-init systemd --mount /dev/disk/by-partuuid/3ed2c4be-9183-4a97-a87e-2a9bda4510b3

Checking the values of /proc/sys/fs/aio-max-nr (65535) and /proc/sys/fs/aio-nr (62053) pointed at the cause.
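Both counters can also be read with sysctl; the values below are the ones observed on this node just before the failure:

# sysctl fs.aio-max-nr fs.aio-nr
fs.aio-max-nr = 65535
fs.aio-nr = 62053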

It turned out a node could only take 14 OSDs at most: every OSD added raises aio-nr by 4096.

With 14 OSDs in place, aio-nr was already at 62053, so one more OSD (another 4096 AIO contexts) would push it past the aio-max-nr limit.
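Before adding another OSD you can check whether there is still enough headroom. A small sketch, assuming the ~4096-contexts-per-OSD increment observed above:

# awk 'NR==1{max=$1} NR==2{if (max-$1 >= 4096) print "enough aio headroom for one more OSD"; else print "raise fs.aio-max-nr first"}' /proc/sys/fs/aio-max-nr /proc/sys/fs/aio-nr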


So raising the aio-max-nr limit resolves the problem:

# sysctl fs.aio-max-nr=1048576
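Note that sysctl only changes the running kernel; to keep the new limit across reboots it also needs to be persisted, for example in /etc/sysctl.conf (a drop-in file under /etc/sysctl.d/ works the same way):

# echo 'fs.aio-max-nr = 1048576' >> /etc/sysctl.conf
# sysctl -p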


