OpenStack is both a community and a project. It provides an operating platform and toolset for deploying clouds, giving organizations scalable, flexible cloud computing.
As an open-source cloud computing management platform, OpenStack combines several major components, including nova, cinder, neutron, glance, keystone, and horizon, to perform its work. OpenStack supports almost all types of cloud environments; the project's goal is a cloud management platform that is simple to deploy, massively scalable, feature-rich, and standardized. OpenStack delivers an Infrastructure-as-a-Service (IaaS) solution through a set of complementary services, each of which offers an API for integration.
The official openEuler 22.03-LTS-SP1 repositories include OpenStack Train. After configuring the yum sources, users can deploy OpenStack by following this document.
Deployment Notes
1. This document is based on:
https://openeuler.gitee.io/openstack/install/openEuler-22.03-LTS-SP1/OpenStack-train/#openstack
2. Deployment table
Node | IP | Main services |
---|---|---|
controller | 192.168.57.30, 10.0.10.30 | nova,neutron,cinder; keystone,glance,placement,horizon; mysql, rabbitmq, memcache, etcd |
compute1 | 192.168.57.31, 10.0.10.31 | nova,neutron,cinder |
compute2 | 192.168.57.32, 10.0.10.32 | nova,neutron,cinder |
OS version used: openEuler-22.03-LTS-SP1
3. Node environment configuration
SELinux must be disabled on every node:
vi /etc/sysconfig/selinux
Change SELINUX=enforcing to SELINUX=disabled
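To apply the change without rebooting, SELinux can also be switched to permissive mode for the current session (the permanent setting above still takes effect on the next boot):
setenforce 0
getenforce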
OpenStack supports multiple deployment topologies. This document covers both the ALL in One and the Distributed deployment modes, with the following conventions:
ALL in One mode:
Ignore all suffixes described below.
Distributed mode:
A `(CTL)` suffix means the configuration or command applies only to the `controller node`.
A `(CPT)` suffix means the configuration or command applies only to `compute nodes`.
A `(STG)` suffix means the configuration or command applies only to `storage nodes`.
Anything without a suffix applies to both the `controller node` and `compute nodes`.
Note: the conventions above apply to the services configured in the sections that follow.
Enable the OpenStack Train yum repository
yum update
yum install openstack-release-train
yum clean all && yum makecache
Note: if EPOL is not enabled in your YUM sources, configure it as well and make sure it is present, as shown below:
vi /etc/yum.repos.d/openEuler.repo
[EPOL]
name=EPOL
baseurl=http://repo.openeuler.org/openEuler-22.03-LTS-SP1/EPOL/main/$basearch/
enabled=1
gpgcheck=1
gpgkey=http://repo.openeuler.org/openEuler-22.03-LTS-SP1/OS/$basearch/RPM-GPG-KEY-openEuler
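To confirm that the repositories are now visible to yum, you can list them and check that EPOL appears (an optional check):
yum repolist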
Set hostnames and host mappings
Set the hostname of each node:
hostnamectl set-hostname controller (CTL)
hostnamectl set-hostname compute (CPT)
Assuming the controller node's IP is 10.0.10.30 and the compute nodes' IPs are 10.0.10.31 and 10.0.10.32 (if present), add the following to /etc/hosts:
10.0.10.30 controller
10.0.10.31 compute1
10.0.10.32 compute2
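A quick connectivity check from each node helps catch typos in /etc/hosts early (optional):
ping -c 2 controller
ping -c 2 compute1
ping -c 2 compute2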
Install the packages:
yum install mariadb mariadb-server python3-PyMySQL
Create and edit the /etc/my.cnf.d/openstack.cnf file:
vim /etc/my.cnf.d/openstack.cnf
[mysqld]
bind-address = 10.0.10.30
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
Note: set bind-address to the controller node's management IP address.
Start the database service and enable it at boot:
systemctl enable mariadb.service
systemctl start mariadb.service
Set the database root password (optional):
mysql_secure_installation
Note: just follow the prompts.
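You can confirm that the database is reachable with the root password you just set (an optional check):
mysql -u root -p -e "SELECT VERSION();"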
Install the packages:
yum install rabbitmq-server
Start the RabbitMQ service and enable it at boot:
systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service
Add the openstack user:
rabbitmqctl add_user openstack RABBIT_PASS
Note: replace RABBIT_PASS with the password you want for the openstack user.
Grant the openstack user configure, write, and read permissions:
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
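To verify that the user and its permissions were created as intended (an optional check):
rabbitmqctl list_users
rabbitmqctl list_permissions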
Install the packages:
yum install memcached python3-memcached
Edit the /etc/sysconfig/memcached file:
vim /etc/sysconfig/memcached
OPTIONS="-l 127.0.0.1,::1,controller"
Start the Memcached service and enable it at boot:
systemctl enable memcached.service
systemctl start memcached.service
Note: after the service starts, you can run memcached-tool controller stats to confirm that it is up and serving; controller can be replaced with the controller node's management IP address.
Create the keystone database and grant privileges:
mysql -u root -p
MariaDB [(none)]> CREATE DATABASE keystone;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
MariaDB [(none)]> exit
Note: replace KEYSTONE_DBPASS with the password you want for the keystone database.
Install the packages:
yum install openstack-keystone httpd mod_wsgi
Configure keystone:
vim /etc/keystone/keystone.conf
[database]
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
[token]
provider = fernet
Explanation:
In the [database] section, configure the database connection.
In the [token] section, configure the token provider.
Note: replace KEYSTONE_DBPASS with the keystone database password.
Populate the database:
su -s /bin/sh -c "keystone-manage db_sync" keystone
Initialize the Fernet key repositories:
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
Bootstrap the Identity service:
keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
--bootstrap-admin-url http://controller:5000/v3/ \
--bootstrap-internal-url http://controller:5000/v3/ \
--bootstrap-public-url http://controller:5000/v3/ \
--bootstrap-region-id RegionOne
Note: replace ADMIN_PASS with the password you want for the admin user.
Configure the Apache HTTP server:
vim /etc/httpd/conf/httpd.conf
ServerName controller
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
Explanation: set the ServerName option to reference the controller node.
Note: if the ServerName entry does not exist, create it.
Start the Apache HTTP service:
systemctl enable httpd.service
systemctl start httpd.service
Create the environment variable script:
cat << EOF >> ~/.admin-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
Note: replace ADMIN_PASS with the admin user's password.
Create the domain, projects, users, and roles in turn; python3-openstackclient must be installed first:
yum install python3-openstackclient
Load the environment variables:
source ~/.admin-openrc
Create the project service; the domain default was already created by keystone-manage bootstrap:
openstack domain create --description "An Example Domain" example
openstack project create --domain default --description "Service Project" service
Create a (non-admin) project myproject, user myuser, and role myrole, then add the role myrole to myproject and myuser:
openstack project create --domain default --description "Demo Project" myproject
openstack user create --domain default --password-prompt myuser
openstack role create myrole
openstack role add --project myproject --user myuser myrole
Verification
Unset the temporary OS_AUTH_URL and OS_PASSWORD environment variables:
source ~/.admin-openrc
unset OS_AUTH_URL OS_PASSWORD
Request a token as the admin user:
openstack --os-auth-url http://controller:5000/v3 \
--os-project-domain-name Default --os-user-domain-name Default \
--os-project-name admin --os-username admin token issue
Request a token as the myuser user:
openstack --os-auth-url http://controller:5000/v3 \
--os-project-domain-name Default --os-user-domain-name Default \
--os-project-name myproject --os-username myuser token issue
Create the database, service credentials, and API endpoints
Create the database:
mysql -u root -p
MariaDB [(none)]> CREATE DATABASE glance;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> exit
Note: replace GLANCE_DBPASS with the password you want for the glance database.
Create the service credentials:
source ~/.admin-openrc
openstack user create --domain default --password-prompt glance
openstack role add --project service --user glance admin
openstack service create --name glance --description "OpenStack Image" image
Create the Image service API endpoints:
openstack endpoint create --region RegionOne image public http://controller:9292
openstack endpoint create --region RegionOne image internal http://controller:9292
openstack endpoint create --region RegionOne image admin http://controller:9292
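The three endpoints can be verified immediately (an optional check):
openstack endpoint list --service image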
Install the packages:
yum install openstack-glance
Configure glance:
vim /etc/glance/glance-api.conf
[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = GLANCE_PASS
[paste_deploy]
flavor = keystone
[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
Explanation:
In the [database] section, configure the database connection.
In the [keystone_authtoken] and [paste_deploy] sections, configure the Identity service access.
In the [glance_store] section, configure the local filesystem store and the image file location.
Note:
Replace GLANCE_DBPASS with the glance database password.
Replace GLANCE_PASS with the glance user's password.
Populate the database:
su -s /bin/sh -c "glance-manage db_sync" glance
Start the service:
systemctl enable openstack-glance-api.service
systemctl start openstack-glance-api.service
Verification
Download an image:
source ~/.admin-openrc
wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
Note: if your environment runs on the Kunpeng (aarch64) architecture, download the aarch64 image instead; the image cirros-0.5.2-aarch64-disk.img has been tested.
Upload the image to the Image service:
openstack image create --disk-format qcow2 --container-format bare \
--file cirros-0.4.0-x86_64-disk.img --public cirros
Confirm the upload and verify the image attributes:
openstack image list
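For more detail than the list view, you can also inspect the uploaded image's attributes (optional):
openstack image show cirros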
Create the database, service credentials, and API endpoints
Create the database:
Access the database as the root user, create the placement database, and grant privileges:
mysql -u root -p
MariaDB [(none)]> CREATE DATABASE placement;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \
IDENTIFIED BY 'PLACEMENT_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \
IDENTIFIED BY 'PLACEMENT_DBPASS';
MariaDB [(none)]> exit
Note: replace PLACEMENT_DBPASS with the password you want for the placement database.
source ~/.admin-openrc
Run the following commands to create the placement service credentials: create the placement user, add the 'admin' role to the user 'placement', and create the Placement API service entry:
openstack user create --domain default --password-prompt placement
openstack role add --project service --user placement admin
openstack service create --name placement --description "Placement API" placement
Create the Placement API service endpoints:
openstack endpoint create --region RegionOne placement public http://controller:8778
openstack endpoint create --region RegionOne placement internal http://controller:8778
openstack endpoint create --region RegionOne placement admin http://controller:8778
Installation and configuration
Install the packages:
yum install openstack-placement-api
Configure placement:
Edit the /etc/placement/placement.conf file:
In the [placement_database] section, configure the database connection.
In the [api] and [keystone_authtoken] sections, configure the Identity service access.
vim /etc/placement/placement.conf
[placement_database]
# ...
connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement
[api]
# ...
auth_strategy = keystone
[keystone_authtoken]
# ...
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = PLACEMENT_PASS
Replace PLACEMENT_DBPASS with the placement database password, and PLACEMENT_PASS with the placement user's password.
Populate the database:
su -s /bin/sh -c "placement-manage db sync" placement
Restart the httpd service:
systemctl restart httpd
Verification
Run the status check:
source ~/.admin-openrc
placement-status upgrade check
Install osc-placement and list the available resource classes and traits:
yum install python3-osc-placement
openstack --os-placement-api-version 1.2 resource class list --sort-column name
openstack --os-placement-api-version 1.6 trait list --sort-column name
Create the databases, service credentials, and API endpoints
Create the databases:
mysql -u root -p (CTL)
MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> exit
Note: replace NOVA_DBPASS with the password you want for the nova databases.
source ~/.admin-openrc (CTL)
Create the nova service credentials:
openstack user create --domain default --password-prompt nova (CTL)
openstack role add --project service --user nova admin (CTL)
openstack service create --name nova --description "OpenStack Compute" compute (CTL)
Create the nova API endpoints:
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1 (CTL)
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1 (CTL)
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1 (CTL)
Install the packages:
yum install openstack-nova-api openstack-nova-conductor (CTL)
yum install openstack-nova-novncproxy openstack-nova-scheduler (CTL)
yum install openstack-nova-compute (CPT)
Note: on the arm64 architecture, the following command is also required:
yum install edk2-aarch64 (CPT)
Configure nova:
vim /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
my_ip = 10.0.0.1
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
compute_driver=libvirt.LibvirtDriver (CPT)
instances_path = /var/lib/nova/instances/ (CPT)
lock_path = /var/lib/nova/tmp (CPT)
[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api (CTL)
[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova (CTL)
[api]
auth_strategy = keystone
[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = NOVA_PASS
[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html (CPT)
[glance]
api_servers = http://controller:9292
[oslo_concurrency]
lock_path = /var/lib/nova/tmp (CTL)
[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = PLACEMENT_PASS
[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true (CTL)
metadata_proxy_shared_secret = METADATA_SECRET (CTL)
Explanation:
In the [DEFAULT] section, enable the compute and metadata APIs, configure the RabbitMQ message queue access, set my_ip, and enable the Networking service (neutron).
In the [api_database] and [database] sections, configure the database connections.
In the [api] and [keystone_authtoken] sections, configure the Identity service access.
In the [vnc] section, enable and configure the remote console access.
In the [glance] section, configure the Image service API address.
In the [oslo_concurrency] section, configure the lock path.
In the [placement] section, configure the Placement service access.
Note:
Replace RABBIT_PASS with the openstack account's password in RabbitMQ.
Set my_ip to the controller node's management IP address.
Replace NOVA_DBPASS with the nova database password.
Replace NOVA_PASS with the nova user's password.
Replace PLACEMENT_PASS with the placement user's password.
Replace NEUTRON_PASS with the neutron user's password.
Replace METADATA_SECRET with a suitable metadata proxy secret.
Additional
Determine whether the machine supports hardware acceleration for virtual machines (x86 architecture):
egrep -c '(vmx|svm)' /proc/cpuinfo (CPT)
If the command returns 0, hardware acceleration is not supported, and libvirt must be configured to use QEMU instead of KVM:
vim /etc/nova/nova.conf (CPT)
[libvirt]
virt_type = qemu
If the command returns 1 or more, hardware acceleration is supported and virt_type can be set to kvm.
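For reference, a minimal sketch of that setting (kvm is also nova's default virt_type, so the section may simply be left out):
[libvirt]
virt_type = kvm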
Note: on the arm64 architecture, the following commands are also required on the compute nodes:
mkdir -p /usr/share/AAVMF
chown nova:nova /usr/share/AAVMF
ln -s /usr/share/edk2/aarch64/QEMU_EFI-pflash.raw \
/usr/share/AAVMF/AAVMF_CODE.fd
ln -s /usr/share/edk2/aarch64/vars-template-pflash.raw \
/usr/share/AAVMF/AAVMF_VARS.fd
vim /etc/libvirt/qemu.conf
nvram = ["/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd",
         "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw:/usr/share/edk2/aarch64/vars-template-pflash.raw"]
In addition, when the ARM deployment environment is itself nested virtualization, configure libvirt as follows:
[libvirt]
virt_type = qemu
cpu_mode = custom
cpu_model = cortex-a72
Populate the databases
Populate the nova-api database:
su -s /bin/sh -c "nova-manage api_db sync" nova (CTL)
Register the cell0 database:
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova (CTL)
Create the cell1 cell:
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova (CTL)
Populate the nova database:
su -s /bin/sh -c "nova-manage db sync" nova (CTL)
Verify that cell0 and cell1 are registered correctly:
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova (CTL)
Add the compute nodes to the OpenStack cluster:
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova (CTL)
Start the services:
systemctl enable \ (CTL)
openstack-nova-api.service \
openstack-nova-scheduler.service \
openstack-nova-conductor.service \
openstack-nova-novncproxy.service
systemctl start \ (CTL)
openstack-nova-api.service \
openstack-nova-scheduler.service \
openstack-nova-conductor.service \
openstack-nova-novncproxy.service
systemctl enable libvirtd.service openstack-nova-compute.service (CPT)
systemctl start libvirtd.service openstack-nova-compute.service (CPT)
Verification
source ~/.admin-openrc (CTL)
List the service components to verify that each process started and registered successfully:
openstack compute service list (CTL)
List the API endpoints in the Identity service to verify connectivity with it:
openstack catalog list (CTL)
List images in the Image service to verify connectivity with it:
openstack image list (CTL)
Check that the cells are working and that the other prerequisites are in place:
nova-status upgrade check (CTL)
Create the database, service credentials, and API endpoints
Create the database:
mysql -u root -p (CTL)
MariaDB [(none)]> CREATE DATABASE neutron;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> exit
Note: replace NEUTRON_DBPASS with the password you want for the neutron database.
source ~/.admin-openrc (CTL)
Create the neutron service credentials:
openstack user create --domain default --password-prompt neutron (CTL)
openstack role add --project service --user neutron admin (CTL)
openstack service create --name neutron --description "OpenStack Networking" network (CTL)
Create the Networking service API endpoints:
openstack endpoint create --region RegionOne network public http://controller:9696 (CTL)
openstack endpoint create --region RegionOne network internal http://controller:9696 (CTL)
openstack endpoint create --region RegionOne network admin http://controller:9696 (CTL)
Install the packages:
yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset openstack-neutron-ml2 (CTL)
yum install openstack-neutron-linuxbridge ebtables ipset (CPT)
Configure neutron:
Main configuration:
vim /etc/neutron/neutron.conf
[database]
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron (CTL)
[DEFAULT]
core_plugin = ml2 (CTL)
service_plugins = router (CTL)
allow_overlapping_ips = true (CTL)
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true (CTL)
notify_nova_on_port_data_changes = true (CTL)
api_workers = 3 (CTL)
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = NEUTRON_PASS
[nova]
auth_url = http://controller:5000 (CTL)
auth_type = password (CTL)
project_domain_name = Default (CTL)
user_domain_name = Default (CTL)
region_name = RegionOne (CTL)
project_name = service (CTL)
username = nova (CTL)
password = NOVA_PASS (CTL)
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
Explanation:
In the [database] section, configure the database connection.
In the [DEFAULT] section, enable the ML2 plug-in and the router plug-in, allow overlapping IP addresses, and configure the RabbitMQ message queue access.
In the [DEFAULT] and [keystone_authtoken] sections, configure the Identity service access.
In the [DEFAULT] and [nova] sections, configure Networking to notify Compute of network topology changes.
In the [oslo_concurrency] section, configure the lock path.
Note:
Replace NEUTRON_DBPASS with the neutron database password.
Replace RABBIT_PASS with the openstack account's password in RabbitMQ.
Replace NEUTRON_PASS with the neutron user's password.
Replace NOVA_PASS with the nova user's password.
Configure the ML2 plug-in:
vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider
[ml2_type_vxlan]
vni_ranges = 1:1000
[securitygroup]
enable_ipset = true
Create a symbolic link at /etc/neutron/plugin.ini:
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
Note:
In the [ml2] section, enable flat, VLAN, and VXLAN networks, enable the linuxbridge and l2population mechanism drivers, and enable the port security extension driver.
In the [ml2_type_flat] section, configure the flat network as a provider virtual network.
In the [ml2_type_vxlan] section, configure the VXLAN network identifier range.
In the [securitygroup] section, enable ipset.
Supplement:
The L2 configuration can be adjusted to your needs; this document uses provider networks + linuxbridge.
Configure the Linux bridge agent:
vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
[vxlan]
enable_vxlan = true
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = true
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Explanation:
In the [linux_bridge] section, map the provider virtual network to the physical network interface.
In the [vxlan] section, enable VXLAN overlay networks, configure the IP address of the physical network interface that handles overlay traffic, and enable layer-2 population.
In the [securitygroup] section, enable security groups and configure the Linux bridge iptables firewall driver.
Note:
Replace PROVIDER_INTERFACE_NAME with the physical network interface.
Replace OVERLAY_INTERFACE_IP_ADDRESS with the management IP address of the node being configured.
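Per the upstream Networking install guide, the Linux bridge agent also requires bridged traffic to traverse iptables; before starting the agents, you can load the br_netfilter kernel module and confirm that both sysctl values below are 1 (a sketch):
modprobe br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables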
Configure the layer-3 agent:
vim /etc/neutron/l3_agent.ini (CTL)
[DEFAULT]
interface_driver = linuxbridge
Explanation: in the [DEFAULT] section, set the interface driver to linuxbridge.
Configure the DHCP agent:
vim /etc/neutron/dhcp_agent.ini (CTL)
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
Explanation: in the [DEFAULT] section, configure the linuxbridge interface driver and the Dnsmasq DHCP driver, and enable isolated metadata.
Configure the metadata agent:
vim /etc/neutron/metadata_agent.ini (CTL)
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET
Explanation: in the [DEFAULT] section, configure the metadata host and the shared secret.
Note: replace METADATA_SECRET with a suitable metadata proxy secret.
Configure nova:
vim /etc/nova/nova.conf
[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true (CTL)
metadata_proxy_shared_secret = METADATA_SECRET (CTL)
Explanation: in the [neutron] section, configure the access parameters, enable the metadata proxy, and configure the secret.
Note:
Replace NEUTRON_PASS with the neutron user's password.
Replace METADATA_SECRET with a suitable metadata proxy secret.
Populate the database:
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
Restart the Compute API service:
systemctl restart openstack-nova-api.service
Start the Networking services:
systemctl enable neutron-server.service neutron-linuxbridge-agent.service \ (CTL)
neutron-dhcp-agent.service neutron-metadata-agent.service \
neutron-l3-agent.service
systemctl restart neutron-server.service neutron-linuxbridge-agent.service \ (CTL)
neutron-dhcp-agent.service neutron-metadata-agent.service \
neutron-l3-agent.service
systemctl enable neutron-linuxbridge-agent.service (CPT)
systemctl restart neutron-linuxbridge-agent.service openstack-nova-compute.service (CPT)
Verification
Verify that the neutron agents started successfully:
openstack network agent list
Create the database, service credentials, and API endpoints
Create the database:
mysql -u root -p
MariaDB [(none)]> CREATE DATABASE cinder;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
IDENTIFIED BY 'CINDER_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
IDENTIFIED BY 'CINDER_DBPASS';
MariaDB [(none)]> exit
Note: replace CINDER_DBPASS with the password you want for the cinder database.
source ~/.admin-openrc
Create the cinder service credentials:
openstack user create --domain default --password-prompt cinder
openstack role add --project service --user cinder admin
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
Create the Block Storage service API endpoints:
openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
Install the packages:
yum install openstack-cinder-api openstack-cinder-scheduler (CTL)
yum install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils openstack-cinder-volume openstack-cinder-backup (STG)
Prepare the storage device; the following is only an example:
pvcreate /dev/vdb
vgcreate cinder-volumes /dev/vdb
vim /etc/lvm/lvm.conf
devices {
...
filter = [ "a/vdb/", "r/.*/"]
Explanation: in the devices section, add a filter to accept the /dev/vdb device and reject all other devices.
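Before continuing, you can confirm that the physical volume and the volume group were created (an optional check):
pvs
vgs cinder-volumes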
Prepare NFS:
mkdir -p /root/cinder/backup
cat << EOF >> /etc/exports
/root/cinder/backup 192.168.1.0/24(rw,sync,no_root_squash,no_all_squash)
EOF
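After writing the export entry, reload the NFS export table and confirm that the share is visible (a sketch; assumes the nfs-server service is already running):
exportfs -r
showmount -e localhost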
Configure cinder:
vim /etc/cinder/cinder.conf
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
my_ip = 10.0.0.11
enabled_backends = lvm (STG)
backup_driver=cinder.backup.drivers.nfs.NFSBackupDriver (STG)
backup_share=HOST:PATH (STG)
[database]
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = cinder
password = CINDER_PASS
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver (STG)
volume_group = cinder-volumes (STG)
iscsi_protocol = iscsi (STG)
iscsi_helper = tgtadm (STG)
Explanation:
In the [database] section, configure the database connection.
In the [DEFAULT] section, configure the RabbitMQ message queue access and my_ip.
In the [DEFAULT] and [keystone_authtoken] sections, configure the Identity service access.
In the [oslo_concurrency] section, configure the lock path.
Note:
Replace CINDER_DBPASS with the cinder database password.
Replace RABBIT_PASS with the openstack account's password in RabbitMQ.
Set my_ip to the controller node's management IP address.
Replace CINDER_PASS with the cinder user's password.
Replace HOST:PATH with the NFS host IP and the shared path.
Populate the database:
su -s /bin/sh -c "cinder-manage db sync" cinder (CTL)
Configure nova:
vim /etc/nova/nova.conf (CTL)
[cinder]
os_region_name = RegionOne
Restart the Compute API service:
systemctl restart openstack-nova-api.service
Start the cinder services:
systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service (CTL)
systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service (CTL)
systemctl enable rpcbind.service nfs-server.service tgtd.service iscsid.service \ (STG)
openstack-cinder-volume.service \
openstack-cinder-backup.service
systemctl start rpcbind.service nfs-server.service tgtd.service iscsid.service \ (STG)
openstack-cinder-volume.service \
openstack-cinder-backup.service
Note: when cinder attaches volumes via tgtadm, edit /etc/tgt/tgtd.conf with the following content so that tgtd can discover the iSCSI targets of cinder-volume:
include /var/lib/cinder/volumes/*
Verification
source ~/.admin-openrc
openstack volume service list
Install the packages:
yum install openstack-dashboard
Modify the configuration file
Modify the variables:
vim /etc/openstack-dashboard/local_settings
OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*', ]
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': 'controller:11211',
}
}
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "member"
WEBROOT = '/dashboard'
POLICY_FILES_PATH = "/etc/openstack-dashboard"
OPENSTACK_API_VERSIONS = {
"identity": 3,
"image": 2,
"volume": 3,
}
Restart the httpd service:
systemctl restart httpd.service memcached.service
Verification: open a browser, navigate to http://HOSTIP/dashboard/, and log in to horizon.
Note: replace HOSTIP with the controller node's management plane IP address.