Manual Deployment of OpenStack Rocky (Two-Node)
Current cloud infrastructure service inventory:
[root@controller ~]# openstack compute service list
+----+------------------+------------+----------+---------+-------+----------------------------+
| ID | Binary | Host | Zone | Status | State | Updated At |
+----+------------------+------------+----------+---------+-------+----------------------------+
| 1 | nova-scheduler | controller | internal | enabled | up | 2019-05-08T09:51:29.000000 |
| 2 | nova-consoleauth | controller | internal | enabled | up | 2019-05-08T09:51:22.000000 |
| 3 | nova-conductor | controller | internal | enabled | up | 2019-05-08T09:51:27.000000 |
| 6 | nova-compute | controller | nova | enabled | up | 2019-05-08T09:51:24.000000 |
| 7 | nova-compute | compute | nova | enabled | up | 2019-05-08T09:51:24.000000 |
+----+------------------+------------+----------+---------+-------+----------------------------+
[root@controller ~]# openstack volume service list
+------------------+-----------------+------+---------+-------+----------------------------+
| Binary | Host | Zone | Status | State | Updated At |
+------------------+-----------------+------+---------+-------+----------------------------+
| cinder-scheduler | controller | nova | enabled | up | 2019-05-08T09:51:40.000000 |
| cinder-volume | controller@lvm | nova | enabled | up | 2019-05-08T09:51:43.000000 |
| cinder-volume | controller@ceph | nova | enabled | up | 2019-05-08T09:51:39.000000 |
| cinder-backup | controller | nova | enabled | up | 2019-05-08T09:51:42.000000 |
+------------------+-----------------+------+---------+-------+----------------------------+
[root@controller ~]# openstack network agent list
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| 41925586-9119-4709-bc23-4668433bd413 | Metadata agent | controller | None | :-) | UP | neutron-metadata-agent |
| 43281ac1-7699-4a81-a5b6-d4818f8cf8f9 | Open vSwitch agent | controller | None | :-) | UP | neutron-openvswitch-agent |
| b815e569-c85d-4a37-84ea-7bdc5fe5653c | DHCP agent | controller | nova | :-) | UP | neutron-dhcp-agent |
| d1ef7214-d26c-42c8-ba0b-2a1580a44446 | L3 agent | controller | nova | :-) | UP | neutron-l3-agent |
| f55311fc-635c-4985-ae6b-162f3fa8f886 | Open vSwitch agent | compute | None | :-) | UP | neutron-openvswitch-agent |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
[root@controller ~]# openstack catalog list
+-----------+-----------+------------------------------------------------------------------------+
| Name | Type | Endpoints |
+-----------+-----------+------------------------------------------------------------------------+
| nova | compute | RegionOne |
| | | internal: http://controller:8774/v2.1 |
| | | RegionOne |
| | | admin: http://controller:8774/v2.1 |
| | | RegionOne |
| | | public: http://controller:8774/v2.1 |
| | | |
| cinderv2 | volumev2 | RegionOne |
| | | admin: http://controller:8776/v2/a2b55e37121042a1862275a9bc9b0223 |
| | | RegionOne |
| | | public: http://controller:8776/v2/a2b55e37121042a1862275a9bc9b0223 |
| | | RegionOne |
| | | internal: http://controller:8776/v2/a2b55e37121042a1862275a9bc9b0223 |
| | | |
| neutron | network | RegionOne |
| | | internal: http://controller:9696 |
| | | RegionOne |
| | | admin: http://controller:9696 |
| | | RegionOne |
| | | public: http://controller:9696 |
| | | |
| glance | image | RegionOne |
| | | admin: http://controller:9292 |
| | | RegionOne |
| | | public: http://controller:9292 |
| | | RegionOne |
| | | internal: http://controller:9292 |
| | | |
| keystone | identity | RegionOne |
| | | admin: http://controller:5000/v3/ |
| | | RegionOne |
| | | public: http://controller:5000/v3/ |
| | | RegionOne |
| | | internal: http://controller:5000/v3/ |
| | | |
| placement | placement | RegionOne |
| | | internal: http://controller:8778 |
| | | RegionOne |
| | | public: http://controller:8778 |
| | | RegionOne |
| | | admin: http://controller:8778 |
| | | |
| cinderv3 | volumev3 | RegionOne |
| | | internal: http://controller:8776/v3/a2b55e37121042a1862275a9bc9b0223 |
| | | RegionOne |
| | | admin: http://controller:8776/v3/a2b55e37121042a1862275a9bc9b0223 |
| | | RegionOne |
| | | public: http://controller:8776/v3/a2b55e37121042a1862275a9bc9b0223 |
| | | |
+-----------+-----------+------------------------------------------------------------------------+
BareMetal Node:
[root@localhost ~]# cat /etc/hosts
...
172.18.22.231 controller
172.18.22.232 compute
172.18.22.233 baremetal
[root@baremetal ~]# cat /etc/chrony.conf | grep -v ^# | grep -v ^$
server controller iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
logdir /var/log/chrony
[root@baremetal ~]# systemctl enable chronyd.service
[root@baremetal ~]# systemctl start chronyd.service
[root@baremetal ~]# chronyc sources
210 Number of sources = 1
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^? controller 3 6 3 0 -9548us[-9548us] +/- 37ms
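NOTE: The ^? state means chrony has not yet selected/validated the source; once synchronization settles it should change to ^*. Sync status can also be checked with:
[root@baremetal ~]# chronyc tracking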
yum install centos-release-openstack-rocky -y
yum upgrade -y
yum install python-openstackclient -y
yum install openstack-selinux -y
NOTE: Pay attention to which node each step is executed on.
openstack service create --name ironic --description "Ironic baremetal provisioning service" baremetal
openstack user create --domain default --password-prompt ironic
openstack role add --project service --user ironic admin
openstack endpoint create --region RegionOne baremetal admin http://baremetal:6385
openstack endpoint create --region RegionOne baremetal public http://baremetal:6385
openstack endpoint create --region RegionOne baremetal internal http://baremetal:6385
openstack catalog list
yum install openstack-ironic-api openstack-ironic-conductor python-ironicclient -y
# /etc/ironic/ironic.conf
[DEFAULT]
my_ip=172.18.22.233
transport_url = rabbit://openstack:fanguiju@controller
auth_strategy = keystone
state_path = /var/lib/ironic
debug = True
[api]
port = 6385
[conductor]
automated_clean = false
clean_callback_timeout = 1800
rescue_callback_timeout = 1800
soft_power_off_timeout = 600
power_state_change_timeout = 30
power_failure_recovery_interval = 300
[database]
connection=mysql+pymysql://ironic:fanguiju@controller/ironic?charset=utf8
[dhcp]
dhcp_provider = neutron
[neutron]
auth_type = password
auth_url = http://controller:5000
username=ironic
password=fanguiju
project_name=service
project_domain_id=default
user_domain_id=default
region_name = RegionOne
valid_interfaces=public
[glance]
url = http://controller:9292
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = glance
password = fanguiju
[cinder]
region_name = RegionOne
project_domain_id = default
user_domain_id = default
project_name = service
password = fanguiju
username = ironic
auth_url = http://controller:5000
auth_type = password
[service_catalog]
region_name = RegionOne
project_domain_id = default
user_domain_id = default
project_name = service
password = fanguiju
username = ironic
auth_url = http://controller:5000
auth_type = password
[keystone_authtoken]
auth_type=password
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
username=ironic
password=fanguiju
project_name=service
project_domain_name=default
user_domain_name=default
NOTE: In this article, ironic-api and ironic-conductor run on the same node.
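Create the database on the controller. Open a MySQL shell first (root credentials per your environment):
mysql -u root -p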
CREATE DATABASE ironic CHARACTER SET utf8;
GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' IDENTIFIED BY 'fanguiju';
GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' IDENTIFIED BY 'fanguiju';
ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema
systemctl enable openstack-ironic-api openstack-ironic-conductor
systemctl start openstack-ironic-api openstack-ironic-conductor
systemctl status openstack-ironic-api openstack-ironic-conductor
[root@controller ~]# openstack baremetal driver list
+---------------------+----------------+
| Supported driver(s) | Active host(s) |
+---------------------+----------------+
| ipmi | baremetal |
+---------------------+----------------+
NOTE: The nova-compute service on this node acts only as the management and scheduling layer for bare metal, so nested virtualization support is not required, e.g.:
[root@baremetal ~]# egrep -c '(vmx|svm)' /proc/cpuinfo
0
yum install openstack-nova-compute -y
# /etc/nova/nova.conf
[DEFAULT]
my_ip = 172.18.22.233
transport_url = rabbit://openstack:fanguiju@controller
debug = True
use_neutron = true
compute_driver=ironic.IronicDriver
firewall_driver=nova.virt.firewall.NoopFirewallDriver
reserved_host_cpus=0
reserved_host_memory_mb=0
reserved_host_disk_mb=0
update_resources_interval=10
cpu_allocation_ratio=1.0
ram_allocation_ratio=1.0
disk_allocation_ratio=1.0
bandwidth_poll_interval=-1
[ironic]
api_retry_interval = 5
api_max_retries = 300
auth_type=password
auth_url=http://controller:5000/v3
project_name=service
username=ironic
password=fanguiju
project_domain_name=default
user_domain_name=default
[glance]
api_servers = http://controller:9292
[cinder]
os_region_name = RegionOne
[neutron]
url = http://controller:9696
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = fanguiju
[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = fanguiju
systemctl enable openstack-nova-compute.service
systemctl start openstack-nova-compute.service
systemctl status openstack-nova-compute.service
[root@controller ~]# nova-manage cell_v2 discover_hosts --by-service
[root@controller ~]# nova-manage cell_v2 list_cells
+-------+--------------------------------------+------------------------------------+-------------------------------------------------+----------+
| Name | UUID | Transport URL | Database Connection | Disabled |
+-------+--------------------------------------+------------------------------------+-------------------------------------------------+----------+
| cell0 | 00000000-0000-0000-0000-000000000000 | none:/ | mysql+pymysql://nova:****@controller/nova_cell0 | False |
| cell1 | 51e0592e-b622-4365-814c-98ce96bcce7b | rabbit://openstack:****@controller | mysql+pymysql://nova:****@controller/nova | False |
+-------+--------------------------------------+------------------------------------+-------------------------------------------------+----------+
[root@controller ~]# nova-manage cell_v2 list_hosts
+-----------+--------------------------------------+------------+
| Cell Name | Cell UUID | Hostname |
+-----------+--------------------------------------+------------+
| cell1 | 51e0592e-b622-4365-814c-98ce96bcce7b | baremetal |
| cell1 | 51e0592e-b622-4365-814c-98ce96bcce7b | compute |
| cell1 | 51e0592e-b622-4365-814c-98ce96bcce7b | controller |
+-----------+--------------------------------------+------------+
[root@controller ~]# openstack compute service list
+----+------------------+------------+----------+---------+-------+----------------------------+
| ID | Binary | Host | Zone | Status | State | Updated At |
+----+------------------+------------+----------+---------+-------+----------------------------+
| 1 | nova-scheduler | controller | internal | enabled | up | 2019-05-08T11:28:57.000000 |
| 2 | nova-consoleauth | controller | internal | enabled | up | 2019-05-08T11:28:55.000000 |
| 3 | nova-conductor | controller | internal | enabled | up | 2019-05-08T11:28:58.000000 |
| 6 | nova-compute | controller | nova | enabled | up | 2019-05-08T11:28:54.000000 |
| 7 | nova-compute | compute | nova | enabled | up | 2019-05-08T11:28:56.000000 |
| 8 | nova-compute | baremetal | nova | enabled | up | 2019-05-08T11:28:58.000000 |
+----+------------------+------------+----------+---------+-------+----------------------------+
# /etc/nova/nova.conf (controller)
[filter_scheduler]
track_instance_changes=False
systemctl restart openstack-nova-scheduler
In the Flat network model, all physical servers (bare metal nodes and OpenStack nodes) sit on the same flat network; no switch is needed, or only a transparent one. The physical network is pre-configured by the operations team, and in the Flat model Neutron is responsible only for providing DHCP.
In the VLAN network model, Neutron can take over the physical switches via Networking Generic Switch. On that basis, both the Provisioning Network and the Tenant Networks attach to the physical switches as VLAN networks (over the Physical Network), and Neutron controls the switching of the physical switch port configuration. For example: during deployment, the BM node's access port is on the Provisioning Network VLAN; after deployment completes, it is switched to the Tenant Network VLAN.
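As a hedged illustration only (not used in this article's Flat-based environment), a Networking Generic Switch ML2 configuration looks roughly like this; the switch name, device_type, address, and credentials are placeholders for your own hardware:
# /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
mechanism_drivers = openvswitch,genericswitch
[genericswitch:switch01]
device_type = netmiko_cisco_ios
ip = <switch-ip>
username = <username>
password = <password>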
NOTE: This article's environment already has a working base OpenStack deployment, including a pre-configured Flat physical network. The essential requirement here is a base network configuration that can provide the Provisioning Network, so decide according to your own situation whether to execute the following configuration.
yum install openstack-neutron-openvswitch ipset -y
# /etc/neutron/plugins/ml2/openvswitch_agent.ini
[ovs]
datapath_type = system
bridge_mappings = provider:br-provider
[agent]
l2_population = True
[securitygroup]
firewall_driver = openvswitch
systemctl enable openvswitch
systemctl start openvswitch
systemctl status openvswitch
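# Create the provider bridge and attach the physical provider NIC (ens192 in this environment)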
ovs-vsctl add-br br-provider
ovs-vsctl add-port br-provider ens192
systemctl enable neutron-openvswitch-agent.service
systemctl start neutron-openvswitch-agent.service
systemctl status neutron-openvswitch-agent.service
[root@baremetal ~]# ovs-vsctl show
52fd6a40-ed6b-460c-8af9-8b13239a9ad5
Manager "ptcp:6640:127.0.0.1"
is_connected: true
Bridge br-provider
Controller "tcp:127.0.0.1:6633"
is_connected: true
fail_mode: secure
Port br-provider
Interface br-provider
type: internal
Port "ens192"
Interface "ens192"
Port phy-br-provider
Interface phy-br-provider
type: patch
options: {peer=int-br-provider}
Bridge br-int
Controller "tcp:127.0.0.1:6633"
is_connected: true
fail_mode: secure
Port int-br-provider
Interface int-br-provider
type: patch
options: {peer=phy-br-provider}
Port br-int
Interface br-int
type: internal
ovs_version: "2.10.1"
[root@controller ~]# openstack network agent list
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| 02ac17a4-9a27-4dd6-b11f-a6eada895432 | Open vSwitch agent | baremetal | None | :-) | UP | neutron-openvswitch-agent |
...
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
# /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2_type_flat]
flat_networks = provider,external
[ml2_type_vlan]
network_vlan_ranges = provider1:1:1000
[ml2_type_vxlan]
vni_ranges = 1:1000
systemctl restart neutron-server
With the Flat network interface, the bare-metal ports created used to stay in the DOWN state forever, although the bare-metal OS deployment still succeeded and worked normally. The networking-baremetal project addresses exactly this abnormal port-state problem: it integrates the Networking service and the Bare Metal service more deeply, and besides fixing bare-metal port state transitions it also provides Routed Networks support.
PS: Routed Networks & Multi-Segments
The networking-baremetal ML2 mechanism driver is a Neutron ML2 mechanism driver mainly used to fake the Neutron port attach action so that the port state stays healthy. Strictly speaking it is optional, because the Ironic driver in Nova allows port binding to fail. By analogy:
VM: bind the Neutron port to the tap device on the compute node
BM: fake-attach the Neutron port
yum install python2-networking-baremetal -y
# /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = local,flat,vlan,vxlan
tenant_network_types = vxlan
extension_drivers = port_security
mechanism_drivers = openvswitch,l2population,baremetal
systemctl restart neutron-server
The ironic-neutron-agent works together with the networking-baremetal ML2 mechanism driver.
yum install -y python2-ironic-neutron-agent
# /etc/neutron/plugins/ml2/ironic_neutron_agent.ini
[DEFAULT]
debug = true
[agent]
log_agent_heartbeats = true
[ironic]
project_domain_name = default
project_name = service
user_domain_name = default
password = fanguiju
username = ironic
auth_url = http://controller:5000/v3
auth_type = password
region_name = RegionOne
systemctl enable ironic-neutron-agent
systemctl start ironic-neutron-agent
systemctl status ironic-neutron-agent
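Verify from the controller that the new agent has registered on the baremetal host (output elided):
[root@controller ~]# openstack network agent list | grep ironic-neutron-agent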
As described above, we create a Flat-type Provisioning Network. This network is essentially a provider network: it reaches the bare metal servers through the physical switches, so the servers can obtain an IP address and the PXE server information from the network's DHCP service. The subnet must therefore have DHCP enabled.
openstack network create --project admin provisioning-net-1 --share --provider-network-type flat --provider-physical-network provider
openstack subnet create provisioning-subnet-1 --network provisioning-net-1 \
--subnet-range 172.18.22.0/24 --ip-version 4 --gateway 172.18.22.1 \
--allocation-pool start=172.18.22.237,end=172.18.22.240 --dhcp
In this environment, the Provisioning Network and the Cleaning Network are merged into one.
[root@controller ~]# openstack network list
+--------------------------------------+--------------------+--------------------------------------+
| ID | Name | Subnets |
+--------------------------------------+--------------------+--------------------------------------+
| 3e8d84ab-9d6e-4194-b8c0-4a14807cf8ed | ext_net | 8792cf1d-51e8-49b7-80ae-656226c440e6 |
| b90fce07-0f32-4ba5-a1fd-a8e5e00f9c65 | provisioning-net-1 | 67327a38-4dd1-41bb-99cc-2be0bd2de00a |
| be8ca1f5-f243-4640-b7e1-4107fe16dd70 | vxlan-net-1000 | 85c68fdd-85f7-4f19-9538-ff82b5c8c5f0 |
+--------------------------------------+--------------------+--------------------------------------+
# /etc/ironic/ironic.conf
[neutron]
cleaning_network = b90fce07-0f32-4ba5-a1fd-a8e5e00f9c65
systemctl restart openstack-ironic-conductor
# /etc/ironic/ironic.conf
[DEFAULT]
...
enabled_network_interfaces=noop,flat,neutron
default_network_interface=flat
[neutron]
...
cleaning_network = b90fce07-0f32-4ba5-a1fd-a8e5e00f9c65
cleaning_network_security_groups = b9ce73bb-58c1-44f6-91cf-f66d5f55f57f
provisioning_network = b90fce07-0f32-4ba5-a1fd-a8e5e00f9c65
provisioning_network_security_groups = b9ce73bb-58c1-44f6-91cf-f66d5f55f57f
NOTE: The “provisioning” and “cleaning” networks may be the same network or distinct networks. To ensure that communication between the Bare Metal service and the deploy ramdisk works, it is important to ensure that security groups are disabled for these networks, or that the default security groups allow:
DHCP
TFTP
egress port used for the Bare Metal service (6385 by default)
ingress port used for ironic-python-agent (9999 by default)
if using iSCSI deploy, the ingress port used for iSCSI (3260 by default)
if using Direct deploy, the egress port used for the Object Storage service (typically 80 or 443)
if using iPXE, the egress port used for the HTTP server running on the ironic-conductor nodes (typically 80).
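As a minimal, hedged sketch of the alternative to disabling security groups, the ports listed above can be opened on the security group referenced earlier (b9ce73bb-...); adjust the set of rules to your deploy method:
openstack security group rule create --protocol udp --dst-port 67:68 <sg-id>  # DHCP
openstack security group rule create --protocol udp --dst-port 69 <sg-id>     # TFTP
openstack security group rule create --protocol tcp --dst-port 9999 <sg-id>   # ironic-python-agent
openstack security group rule create --protocol tcp --dst-port 3260 <sg-id>   # iSCSI deploy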
Restart the services:
systemctl restart openstack-ironic-api
systemctl restart openstack-ironic-conductor
$ virtualenv dib
$ source dib/bin/activate
(dib) $ pip install diskimage-builder
Official documentation: Building or downloading a deploy ramdisk image
Official documentation: Installing Ironic Python Agent
NOTE: We have no customization requirements here, so we download the prebuilt CoreOS images directly.
wget https://tarballs.openstack.org/ironic-python-agent/coreos/files/coreos_production_pxe.vmlinuz
wget https://tarballs.openstack.org/ironic-python-agent/coreos/files/coreos_production_pxe_image-oem.cpio.gz
glance image-create --name deploy-vmlinuz --visibility public --disk-format aki --container-format aki < coreos_production_pxe.vmlinuz
glance image-create --name deploy-initrd --visibility public --disk-format ari --container-format ari < coreos_production_pxe_image-oem.cpio.gz
[root@baremetal deploy_images]# openstack image list
+--------------------------------------+----------------+--------+
| ID | Name | Status |
+--------------------------------------+----------------+--------+
| d18923bd-86fc-4f77-b5e8-976d3b1c367c | cirros_raw | active |
| 6000a17f-0ab7-418a-990c-2009a59c3392 | deploy-initrd | active |
| e650d33b-8fad-42f7-948c-5c12526bcd07 | deploy-vmlinuz | active |
+--------------------------------------+----------------+--------+
# Enable cloud-init support
# Set up a login account
# The cloud-init-datasources and devuser elements consume the
# DIB_CLOUD_INIT_DATASOURCES and DIB_DEV_USER_* variables below
$ DIB_CLOUD_INIT_DATASOURCES="ConfigDrive, OpenStack" \
DIB_DEV_USER_USERNAME=root \
DIB_DEV_USER_PWDLESS_SUDO=yes \
DIB_DEV_USER_PASSWORD=fanguiju \
disk-image-create \
centos7 \
cloud-init-datasources \
devuser \
dhcp-all-interfaces \
baremetal \
grub2 \
-o my-image
$ ls
my-image.d my-image.initrd my-image.qcow2 my-image.vmlinuz
The partition image command creates my-image.qcow2, my-image.vmlinuz and my-image.initrd files. The grub2 element in the partition image creation command is only needed if local boot will be used to deploy my-image.qcow2, otherwise the images my-image.vmlinuz and my-image.initrd will be used for PXE booting after deploying the bare metal with my-image.qcow2.
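A hedged example of the local-boot case mentioned above: set the boot_option capability on the node (and a matching property on the flavor) so the deployed my-image.qcow2 boots from disk via grub2 instead of PXE; <node> and the flavor name bm-flavor are placeholders:
openstack baremetal node set <node> --property capabilities="boot_option:local"
openstack flavor set bm-flavor --property capabilities:boot_option="local"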
glance image-create --name my-image.vmlinuz --visibility public --disk-format aki --container-format aki < my-image.vmlinuz
glance image-create --name my-image.initrd --visibility public --disk-format ari --container-format ari < my-image.initrd
export MY_VMLINUZ_UUID=$(openstack image list | awk '/my-image.vmlinuz/ { print $2 }')
export MY_INITRD_UUID=$(openstack image list | awk '/my-image.initrd/ { print $2 }')
glance image-create --name my-image --visibility public --disk-format qcow2 --container-format bare --property kernel_id=$MY_VMLINUZ_UUID --property ramdisk_id=$MY_INITRD_UUID < my-image.qcow2
(dib) [root@baremetal user_images]# openstack image list
+--------------------------------------+------------------+--------+
| ID | Name | Status |
+--------------------------------------+------------------+--------+
| d18923bd-86fc-4f77-b5e8-976d3b1c367c | cirros_raw | active |
| 6000a17f-0ab7-418a-990c-2009a59c3392 | deploy-initrd | active |
| e650d33b-8fad-42f7-948c-5c12526bcd07 | deploy-vmlinuz | active |
| 5e756d4d-b4e9-43a9-9e49-d530c72a7674 | my-image | active |
| 24c9d142-3589-420a-b59c-f70e04575dbe | my-image.initrd | active |
| 3bf6aaa0-58b6-4037-803a-43ee6d8937c4 | my-image.vmlinuz | active |
+--------------------------------------+------------------+--------+
Configure the Bare Metal provisioning drivers according to the specific vendor and hardware of your bare-metal fleet. Ironic supports a great many driver types; see the official documentation for details.
Official documentation: Set up the drivers for the Bare Metal service
Common combinations (an enrollment sketch follows the list):
pxe + ipmi: hardware controlled via IPMI, deployment performed via PXE
pxe + drac: hardware controlled via DRAC, deployment performed via PXE
pxe + ilo: hardware controlled via iLO, deployment performed via PXE
pxe + iboot: hardware controlled via iBoot, deployment performed via PXE
pxe + ssh: hardware controlled via SSH, deployment performed via PXE
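For instance, a hedged sketch of enrolling a node with the pxe + ipmi combination (the BMC address and credentials match the ipmitool check further below; the node name is a placeholder):
openstack baremetal node create --driver ipmi \
  --boot-interface pxe \
  --deploy-interface iscsi \
  --driver-info ipmi_address=172.18.22.106 \
  --driver-info ipmi_username=admin \
  --driver-info ipmi_password=admin \
  --name bm-node-1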
Configuration:
# /etc/ironic/ironic.conf
[DEFAULT]
...
enabled_hardware_types = ipmi,redfish
# boot
enabled_boot_interfaces = pxe
# console
enabled_console_interfaces = ipmitool-socat,no-console
# deploy
enabled_deploy_interfaces = direct,iscsi
# inspect
enabled_inspect_interfaces = inspector
# management
enabled_management_interfaces = ipmitool,redfish
# power
enabled_power_interfaces = ipmitool,redfish
# raid
enabled_raid_interfaces = agent
# vendor
enabled_vendor_interfaces = ipmitool,no-vendor
# storage
enabled_storage_interfaces = cinder,noop
# network
enabled_network_interfaces = flat,neutron
systemctl restart openstack-ironic-conductor
[root@controller ~]# openstack baremetal driver list
+---------------------+----------------+
| Supported driver(s) | Active host(s) |
+---------------------+----------------+
| ipmi | baremetal |
| redfish | baremetal |
+---------------------+----------------+
# /etc/ironic/ironic.conf
[ipmi]
retry_timeout=60
[pxe]
ipxe_enabled = False
pxe_append_params = nofb nomodeset vga=normal console=ttyS0 systemd.journald.forward_to_console=yes
tftp_root=/tftpboot
tftp_server=172.18.22.233
systemctl restart openstack-ironic-conductor
Install ipmitool on the Ironic Conductor node.
yum install ipmitool -y
# ipmitool -I lanplus -H <address> -U <username> -P <password> chassis power status
[root@baremetal ~]# ipmitool -I lanplus -H 172.18.22.106 -U admin -P admin chassis power status
Chassis Power is on
Configure a TFTP server on the Ironic Conductor node.
sudo mkdir -p /tftpboot
sudo chown -R ironic /tftpboot
sudo yum install tftp-server syslinux-tftpboot xinetd -y
# /etc/xinetd.d/tftp
service tftp
{
protocol = udp
port = 69
socket_type = dgram
wait = yes
user = root
server = /usr/sbin/in.tftpd
server_args = -v -v -v -v -v --map-file /tftpboot/map-file /tftpboot
disable = no
flags = IPv4
}
sudo cp /usr/share/syslinux/pxelinux.0 /tftpboot
# If whole disk images need to be deployed via PXE-netboot, copy the chain.c32 image to /tftpboot to support it
sudo cp /usr/share/syslinux/chain.c32 /tftpboot/
echo 're ^(/tftpboot/) /tftpboot/\2' > /tftpboot/map-file
echo 're ^/tftpboot/ /tftpboot/' >> /tftpboot/map-file
echo 're ^(^/) /tftpboot/\1' >> /tftpboot/map-file
echo 're ^([^/]) /tftpboot/\1' >> /tftpboot/map-file
sudo systemctl enable xinetd
sudo systemctl restart xinetd
sudo systemctl status xinetd
# Server side
[root@baremetal ~]# echo 'test tftp' > /tftpboot/aaa
# Client side
[root@controller ~]# tftp baremetal -c get aaa
[root@controller ~]# cat aaa
test tftp
NOTE: Make sure the bare metal node is configured to boot in UEFI boot mode and that the boot device is set to network/PXE.
sudo yum install grub2-efi shim -y
sudo cp /boot/efi/EFI/centos/shim.efi /tftpboot/bootx64.efi
sudo cp /boot/efi/EFI/centos/grubx64.efi /tftpboot/grubx64.efi
$ GRUB_DIR=/tftpboot/EFI/centos
$ sudo mkdir -p $GRUB_DIR
$ cat $GRUB_DIR/grub.cfg
set default=master
set timeout=5
set hidden_timeout_quiet=false
menuentry "master" {
configfile /tftpboot/$net_default_mac.conf
}
$ sudo chmod 644 $GRUB_DIR/grub.cfg
openstack baremetal node set <node> --property capabilities='boot_mode:uefi'
With the iSCSI deploy method, the Ironic Conductor node acts as the iSCSI client and writes the image onto the target disk, so the qemu-img and iscsiadm command-line tools must be installed.
yum install qemu-img -y
yum install iscsi-initiator-utils -y
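A quick sanity check that both tools are available (version output varies):
qemu-img --version
iscsiadm --version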