8 OpenStack Ussuri Nova Compute Node Deployment - CentOS 8

Nova's main functions are:
1 Instance lifecycle management
2 Compute resource management
3 Network and authentication management
4 REST-style API
5 Asynchronous, eventually consistent communication
6 Hypervisor transparency: supports Xen, XenServer/XCP, KVM, UML, VMware vSphere and Hyper-V

8.1 Deploying and Configuring nova - ALL Compute

#Install the repository package needed by kvm & qemu

wget http://rpmfind.net/linux/centos/8.2.2004/extras/x86_64/os/Packages/centos-release-gluster6-1.0-1.el8.noarch.rpm
rpm -ivh centos-release-gluster6-1.0-1.el8.noarch.rpm
yum makecache

#Install the package

yum install openstack-nova-compute -y

#Back up the nova configuration

cp /etc/nova/nova.conf /etc/nova/nova.conf.bak
egrep -v "^$|^#" /etc/nova/nova.conf.bak >/etc/nova/nova.conf
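The two commands above can be wrapped in a small reusable helper (the function name is mine, purely illustrative). Note that the `egrep` filter drops blank lines and full-line comments, but keeps trailing comments appended to option lines.

```shell
# Illustrative helper (name is mine): back up a config file, then keep
# only its effective lines -- blank lines and full-line comments dropped.
strip_config() {
  local conf="$1"
  cp "$conf" "$conf.bak"                 # keep the original for reference
  egrep -v '^$|^#' "$conf.bak" > "$conf" # effective settings only
}
```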

#Edit the nova configuration file, adding the following settings under the corresponding sections
#vim /etc/nova/nova.conf


[DEFAULT]
# ...
my_ip = 172.16.1.163
transport_url = rabbit://rabbitmq:rabbitmq.123@controller160:5672,rabbitmq:rabbitmq.123@controller161:5672,rabbitmq:rabbitmq.123@controller162:5672

[api]
# ...
auth_strategy = keystone

[keystone_authtoken]
# ...
www_authenticate_uri = http://controller168:5000
auth_url = http://controller168:5000
memcached_servers = controller160:11211,controller161:11211,controller162:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = nova.123

[vnc]
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
#Clients without a hosts entry for "controller168" cannot resolve the name, so use the IP address instead
novncproxy_base_url = http://172.16.1.168:6080/vnc_auto.html

[glance]
# ...
api_servers = http://controller168:9292

[oslo_concurrency]
# ...
lock_path = /var/lib/nova/tmp

[placement]
# ...
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller168:5000/v3
username = placement
password = placement.123

[libvirt]
# ...
virt_type = qemu

#Set this to qemu or kvm according to what the hardware actually supports
#Run "egrep -c '(vmx|svm)' /proc/cpuinfo" to check whether the host supports hardware acceleration: a value of 1 or greater means supported, 0 means not supported
#Use "kvm" when hardware acceleration is supported, otherwise use "qemu"
#Virtual machines generally do not support hardware acceleration
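The check described above can be scripted; here is a minimal sketch (the helper name is mine) that returns the appropriate virt_type based on the CPU flags:

```shell
# Illustrative helper (name is mine): choose virt_type from CPU flags.
# vmx = Intel VT-x, svm = AMD-V; either one means KVM acceleration works.
pick_virt_type() {
  local count
  # egrep -c exits non-zero when there is no match, so guard with || true
  count=$(egrep -c '(vmx|svm)' "${1:-/proc/cpuinfo}" || true)
  if [ "$count" -gt 0 ]; then
    echo kvm
  else
    echo qemu
  fi
}
```

For example, `pick_virt_type` on a bare-metal host with VT-x enabled prints `kvm`, while inside most virtual machines it prints `qemu`.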

#Restart the nova services and enable them at boot:

systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service
systemctl status libvirtd.service openstack-nova-compute.service

8.2 Verifying the nova-compute Service - controller160

#Load the admin credentials

source adminrc.sh

#List the compute services

openstack compute service list

#Output

[root@controller160 ~]# openstack compute service list
+----+----------------+---------------+----------+---------+-------+----------------------------+
| ID | Binary         | Host          | Zone     | Status  | State | Updated At                 |
+----+----------------+---------------+----------+---------+-------+----------------------------+
| 13 | nova-conductor | controller160 | internal | enabled | up    | 2020-06-21T05:16:19.000000 |
| 19 | nova-scheduler | controller160 | internal | enabled | up    | 2020-06-21T05:16:20.000000 |
| 20 | nova-conductor | controller162 | internal | enabled | up    | 2020-06-21T05:16:16.000000 |
| 23 | nova-scheduler | controller162 | internal | enabled | up    | 2020-06-21T05:16:17.000000 |
| 25 | nova-conductor | controller161 | internal | enabled | up    | 2020-06-21T05:16:18.000000 |
| 26 | nova-scheduler | controller161 | internal | enabled | up    | 2020-06-21T05:16:20.000000 |
| 47 | nova-compute   | compute163    | nova     | enabled | up    | 2020-06-21T05:16:14.000000 |
| 48 | nova-compute   | compute164    | nova     | enabled | up    | 2020-06-21T05:16:17.000000 |
+----+----------------+---------------+----------+---------+-------+----------------------------+
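When verifying a larger deployment, it helps to flag any service that is not "up" instead of scanning the table by eye. A hedged sketch (the helper name is mine) that parses the saved table output:

```shell
# Illustrative helper (name is mine): print Binary and Host for any row
# of `openstack compute service list` whose State column is not "up".
find_down_services() {
  # keep only table rows (contain '|'), skip the header, strip padding
  # from the State column ($7), and report rows that are not "up"
  awk -F'|' '/\|/ && $2 !~ /ID/ { gsub(/ /,"",$7); if ($7 != "up") print $3, $4 }' "$1"
}
```

An empty result means every nova service in the table is up.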

#Discover the compute nodes:

su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova

#Output

[root@controller160 ~]# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting computes from cell 'cell1': 3c43b8cf-140d-401f-8b1c-4140ddad20a3
Checking host mapping for compute host 'compute163': f4021a77-37d8-4325-84a4-b47b0443aa02
Creating host mapping for compute host 'compute163': f4021a77-37d8-4325-84a4-b47b0443aa02
Checking host mapping for compute host 'compute164': f76880fb-90b7-47fc-8105-68f8f3e08914
Creating host mapping for compute host 'compute164': f76880fb-90b7-47fc-8105-68f8f3e08914
Found 2 unmapped computes in cell: 3c43b8cf-140d-401f-8b1c-4140ddad20a3

#To discover compute nodes automatically, add the following to nova.conf
vim /etc/nova/nova.conf

[scheduler]
discover_hosts_in_cells_interval = 300
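In a deployment script, this edit can be made idempotent so repeated runs do not duplicate the option. A minimal sketch (the helper name is mine; it naively assumes nova.conf has no [scheduler] section yet -- tools such as crudini handle existing sections more robustly):

```shell
# Illustrative helper (name is mine): append the auto-discovery interval
# to nova.conf only when it is not already configured.
set_discover_interval() {
  local conf="$1" interval="${2:-300}"
  if ! grep -q '^discover_hosts_in_cells_interval' "$conf"; then
    printf '\n[scheduler]\ndiscover_hosts_in_cells_interval = %s\n' "$interval" >> "$conf"
  fi
}
```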

#Run the status check; every check should report Success

nova-status upgrade check

#Output

+------------------------------------+
| Upgrade Check Results              |
+------------------------------------+
| Check: Cells v2                    |
| Result: Success                    |
| Details: None                      |
+------------------------------------+
| Check: Placement API               |
| Result: Success                    |
| Details: None                      |
+------------------------------------+
| Check: Ironic Flavor Migration     |
| Result: Success                    |
| Details: None                      |
+------------------------------------+
| Check: Cinder API                  |
| Result: Success                    |
| Details: None                      |
+------------------------------------+
| Check: Policy Scope-based Defaults |
| Result: Success                    |
| Details: None                      |
+------------------------------------+

At this point, the nova-compute service is fully deployed. If you find any problems, please let me know so I can correct them, thanks!

8.x Issues Encountered During Deployment

eg1. Check whether the node supports hardware acceleration
egrep -c '(vmx|svm)' /proc/cpuinfo
If the output is 0, change the following setting in /etc/nova/nova.conf to
virt_type = qemu
If the output is 1 or greater, change it to
virt_type = kvm

eg2.Error:
 Problem: package openstack-nova-compute-1:21.0.0-1.el8.noarch requires qemu-kvm-core >= 3.1.0, but none of the providers can be installed
  - package qemu-kvm-core-15:2.12.0-99.module_el8.2.0+320+13f867d7.x86_64 requires glusterfs-api >= 3.6.0, but none of the providers can be installed
  - package qemu-kvm-core-15:4.1.0-23.el8.1.x86_64 requires glusterfs-api >= 3.6.0, but none of the providers can be installed
  - package qemu-kvm-core-15:4.2.0-19.el8.x86_64 requires glusterfs-api >= 3.6.0, but none of the providers can be installed
  - cannot install the best candidate for the job
  - nothing provides glusterfs-libs(x86-64) = 6.0-20.el8 needed by glusterfs-api-6.0-20.el8.x86_64
  - nothing provides glusterfs(x86-64) = 6.0-20.el8 needed by glusterfs-api-6.0-20.el8.x86_64
  - nothing provides glusterfs-client-xlators(x86-64) = 6.0-20.el8 needed by glusterfs-api-6.0-20.el8.x86_64
(try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)
Solution:
wget http://rpmfind.net/linux/centos/8.2.2004/extras/x86_64/os/Packages/centos-release-gluster6-1.0-1.el8.noarch.rpm
rpm -ivh centos-release-gluster6-1.0-1.el8.noarch.rpm
yum makecache

Reposted from blog.csdn.net/caiyqn/article/details/106858187