Based on the official installation guide.
IP addresses and hostnames of the hosts:
10.2.1.10 openstack   controller node
10.2.1.11 openstack1  compute node
10.2.1.12 openstack2  block storage node
10.2.1.13 openstack3  block storage node
OpenStack environment preparation:
1) First check whether the hosts support hardware virtualization. If the following command produces any output, virtualization is supported; if it prints nothing, there is no point in going any further.
grep -E '(svm|vmx)' /proc/cpuinfo
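If the command returns nothing on the compute node, instances there cannot use hardware acceleration; in that case the official guide suggests forcing QEMU emulation in /etc/nova/nova.conf on the compute node (this is applied later, during the nova compute-node step):
[libvirt]
# only needed when the grep above finds no vmx/svm flags
virt_type = qemu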
2) Edit /etc/hosts on all four hosts:
10.2.1.10 openstack
10.2.1.11 openstack1
10.2.1.12 openstack2
10.2.1.13 openstack3
10.2.1.10 controller
3) Stop the firewall and disable SELinux on all four hosts:
systemctl stop iptables firewalld
Edit /etc/selinux/config and set SELINUX=disabled.
4) The default yum repositories that ship with the hosts are sufficient.
5) Install chrony on all four hosts and edit /etc/chrony.conf:
yum -y install chrony
On the controller, keep the default server lines and add "allow 10.2.1.0/24" so the other nodes may sync from it; on openstack1/2/3, replace the server lines with:
server 10.2.1.10 iburst
Finally, restart chronyd and enable it at boot:
systemctl restart chronyd
systemctl enable chronyd
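To confirm that time synchronization works, the following check can be run on any node (chronyc is installed with chrony); the controller should appear with an asterisk once it is selected as the time source:
chronyc sources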
6) On all four hosts, install the Train release repository, update the packages, and install the OpenStack client:
yum -y install centos-release-openstack-train
yum -y upgrade
yum -y update
yum -y install python-openstackclient
7) On the controller node, install and configure the database:
yum -y install mariadb mariadb-server python2-PyMySQL
vi /etc/my.cnf.d/openstack.cnf
[mysqld]
bind-address = 10.2.1.10
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
systemctl start mariadb
systemctl enable mariadb
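The official guide also recommends securing the database service, including setting a database root password, by running the bundled script:
mysql_secure_installation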
8) On the controller node, install and configure rabbitmq-server:
yum -y install rabbitmq-server
systemctl start rabbitmq-server
systemctl enable rabbitmq-server
Add the openstack user (RABBIT_PASS is the password placeholder referenced by the service configuration files later on):
rabbitmqctl add_user openstack RABBIT_PASS
Allow configuration, write, and read access for the openstack user:
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
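A quick sanity check of the message queue setup (both commands are part of rabbitmqctl):
rabbitmqctl list_users
rabbitmqctl list_permissions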
9) On the controller node, install and configure memcached:
yum -y install memcached python-memcached
Configure the service to use the management IP address of the controller node, so that the other nodes can reach it over the management network:
vi /etc/sysconfig/memcached
Change the OPTIONS line to:
OPTIONS="-l 127.0.0.1,::1,controller"
Finally, start and enable the service:
systemctl start memcached
systemctl enable memcached
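After the restart, memcached should be listening on the controller's management address as well as loopback; this can be confirmed with ss:
ss -lntp | grep 11211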
10) On the controller node, install and configure the etcd service:
yum -y install etcd
Replace localhost with the controller node's IP:
sed -i "s@localhost@10\.2\.1\.10@g" /etc/etcd/etcd.conf
vi /etc/etcd/etcd.conf
#[Member]
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://10.2.1.10:2380"
ETCD_LISTEN_CLIENT_URLS="http://10.2.1.10:2379"
ETCD_NAME="controller"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.2.1.10:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://10.2.1.10:2379"
ETCD_INITIAL_CLUSTER="controller=http://10.2.1.10:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
ETCD_INITIAL_CLUSTER_STATE="new"
Start the service:
systemctl start etcd
systemctl enable etcd
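A basic health check, assuming the default v2 etcdctl API shipped with the CentOS 7 etcd package:
etcdctl --endpoints=http://10.2.1.10:2379 cluster-health
etcdctl --endpoints=http://10.2.1.10:2379 member list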
OpenStack service configuration:
1) Keystone (Identity service) configuration
Configure the Identity service (keystone) on the controller node.
Use the database access client to connect to the database server as root and create the database:
mysql -uroot -p
MariaDB [(none)]> create database keystone;
Grant proper access to the keystone database (KEYSTONE_DBPASS is the database password placeholder; it must not contain spaces, since it is embedded in the connection URI below):
MariaDB [(none)]> grant all privileges on keystone.* to keystone@'localhost' identified by 'KEYSTONE_DBPASS';
MariaDB [(none)]> grant all privileges on keystone.* to keystone@'%' identified by 'KEYSTONE_DBPASS';
Install keystone:
yum -y install openstack-keystone httpd mod_wsgi
If this step fails with an error like the following:
Package: mod_wsgi-3.4-18.el7.x86_64 (base)
Requires: httpd-mmn = 20120211x8664
You could try using --skip-broken to work around the problem
** Found 1 pre-existing rpmdb problem(s), 'yum check' output follows:
python2-requests-2.21.0-3.el7.noarch has missing requires of python2-urllib3 >= ('0', '1.21.1', None)
resolve it as follows:
yum -y install httpd
If the yum repositories do not provide an httpd package, run these commands instead:
yum -y install httpd-tools libapr-1.so.0 libaprutil-1.so.0 mailcap
rpm -ivh http://mirror.centos.org/centos/7/os/x86_64/Packages/httpd-2.4.6-90.el7.centos.x86_64.rpm
vi /etc/keystone/keystone.conf
[database]
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
# only this line needs to change in the [database] section
[token]
provider = fernet
# only this line needs to change in the [token] section
Populate the Identity service database:
su -s /bin/sh -c "keystone-manage db_sync" keystone
Initialize the Fernet key repositories:
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
Bootstrap the Identity service (ADMIN_PASS is the admin password placeholder):
keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
--bootstrap-admin-url http://controller:5000/v3/ \
--bootstrap-internal-url http://controller:5000/v3/ \
--bootstrap-public-url http://controller:5000/v3/ \
--bootstrap-region-id RegionOne
Configure the Apache HTTP server:
vi /etc/httpd/conf/httpd.conf
ServerName controller
Create a link to the /usr/share/keystone/wsgi-keystone.conf file:
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
Note on SSL: a secure deployment should configure the web server to use SSL or run behind an SSL terminator.
Finalize the installation:
Start the Apache HTTP service and configure it to start when the system boots:
systemctl enable httpd
systemctl start httpd
Configure the administrative account by setting the appropriate environment variables:
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS   # same password used for keystone-manage bootstrap above
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
Create a new domain:
openstack domain create --description "An Example Domain" example
Create the service project:
openstack project create --domain default --description "Service Project" service
Create the myproject project:
openstack project create --domain default --description "Demo Project" myproject
Create the myuser user:
openstack user create --domain default --password-prompt myuser
Create the myrole role:
openstack role create myrole
Add the myrole role to the myproject project and myuser user:
openstack role add --project myproject --user myuser myrole
Verify operation of the Identity service. First unset the temporary OS_AUTH_URL and OS_PASSWORD variables:
unset OS_AUTH_URL OS_PASSWORD
As the admin user, request an authentication token:
openstack --os-auth-url http://controller:5000/v3 --os-project-domain-name Default --os-user-domain-name Default --os-project-name admin --os-username admin token issue
As the myuser user created in the previous step, request an authentication token:
openstack --os-auth-url http://controller:5000/v3 --os-project-domain-name Default --os-user-domain-name Default --os-project-name myproject --os-username myuser token issue
Create and edit the admin-openrc file with the following content:
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
Create and edit the demo-openrc file with the following content:
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=myproject
export OS_USERNAME=myuser
export OS_PASSWORD=DEMO_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
Load the admin-openrc file to populate the environment variables with the location of the Identity service and the admin project and user credentials:
. admin-openrc
Request an authentication token:
openstack token issue
2) Glance (Image service) configuration
Install and configure the Image service on the controller node. For simplicity, this configuration stores images on the local file system.
Create the glance database and grant access to it:
mysql -u root -p
MariaDB [(none)]> CREATE DATABASE glance;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'GLANCE_DBPASS';
Source the admin credentials to gain access to admin-only CLI commands:
. admin-openrc
Create the glance user:
openstack user create --domain default --password-prompt glance
Add the admin role to the glance user and service project:
openstack role add --project service --user glance admin
Create the glance service entity:
openstack service create --name glance --description "OpenStack Image" image
Create the Image service API endpoints:
openstack endpoint create --region RegionOne image public http://controller:9292
openstack endpoint create --region RegionOne image internal http://controller:9292
openstack endpoint create --region RegionOne image admin http://controller:9292
Install the package:
yum -y install openstack-glance
vi /etc/glance/glance-api.conf
[database]
# change only this line in [database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
[keystone_authtoken]
# change/add all of the following lines in [keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = GLANCE_PASS
[paste_deploy]
# change only this line in [paste_deploy]
flavor = keystone
[glance_store]
# change these three lines in [glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
Populate the Image service database:
su -s /bin/sh -c "glance-manage db_sync" glance
Start the Image service and configure it to start when the system boots:
systemctl enable openstack-glance-api
systemctl start openstack-glance-api
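To verify the Image service, the official guide downloads a small CirrOS image and registers it (run on the controller with the admin credentials sourced):
. admin-openrc
wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
openstack image create "cirros" --file cirros-0.4.0-x86_64-disk.img --disk-format qcow2 --container-format bare --public
openstack image list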
3) Placement service configuration
Install and configure the Placement service on the controller node.
Create the placement database and grant access to it:
mysql -u root -p
MariaDB [(none)]> CREATE DATABASE placement;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY 'PLACEMENT_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY 'PLACEMENT_DBPASS';
Source the admin credentials to gain access to admin-only CLI commands:
. admin-openrc
Create a placement service user with a PLACEMENT_PASS of your choice:
openstack user create --domain default --password-prompt placement
Add the placement user to the service project with the admin role:
openstack role add --project service --user placement admin
Create the Placement API entry in the service catalog:
openstack service create --name placement --description "Placement API" placement
Create the Placement API service endpoints:
openstack endpoint create --region RegionOne placement public http://controller:8778
openstack endpoint create --region RegionOne placement internal http://controller:8778
openstack endpoint create --region RegionOne placement admin http://controller:8778
Install and configure the service:
yum -y install openstack-placement-api
vi /etc/placement/placement.conf
[placement_database]
# ...
connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement
[api]
# ...
auth_strategy = keystone
[keystone_authtoken]
# ...
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = PLACEMENT_PASS
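Note: a commonly reported problem with the CentOS 7 Train packages is that the bundled /etc/httpd/conf.d/00-placement-api.conf does not grant Apache access to the Placement WSGI script, which results in HTTP 403 errors from the Placement API. If that happens, adding the following inside the VirtualHost block (before restarting httpd) usually resolves it:
<Directory /usr/bin>
  <IfVersion >= 2.4>
    Require all granted
  </IfVersion>
  <IfVersion < 2.4>
    Order allow,deny
    Allow from all
  </IfVersion>
</Directory>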
Populate the placement database and restart the httpd service:
su -s /bin/sh -c "placement-manage db sync" placement
systemctl restart httpd
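The installation can be checked with the status command provided by the placement package:
placement-status upgrade check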
4) Nova (Compute service) configuration
Configure the Compute service on the controller node.
Create the nova_api, nova, and nova_cell0 databases and grant access to them:
mysql -uroot -p
MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
Source the admin credentials to gain access to admin-only CLI commands:
. admin-openrc
Create the nova user:
openstack user create --domain default --password-prompt nova
Add the admin role to the nova user:
openstack role add --project service --user nova admin
Create the nova service entity:
openstack service create --name nova --description "OpenStack Compute" compute
Create the Compute API service endpoints:
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
Install and configure the services:
yum -y install openstack-nova-api openstack-nova-conductor openstack-nova-novncproxy openstack-nova-scheduler
vi /usr/share/nova/nova-dist.conf
[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova
vi /etc/nova/nova.conf
[DEFAULT]
# ...
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
my_ip = 10.2.1.10
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api_database]
# ...
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api
[database]
# ...
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova
[api]
# ...
auth_strategy = keystone
[keystone_authtoken]
# ...
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = NOVA_PASS
[vnc]
enabled = true
# ...
server_listen = $my_ip
server_proxyclient_address = $my_ip
[glance]
# ...
api_servers = http://controller:9292
[oslo_concurrency]
# ...
lock_path = /var/lib/nova/tmp
[placement]
# ...
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = PLACEMENT_PASS
Populate the nova-api database:
su -s /bin/sh -c "nova-manage api_db sync" nova
Register the cell0 database:
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
Create the cell1 cell:
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
Populate the nova database:
su -s /bin/sh -c "nova-manage db sync" nova
Verify that nova cell0 and cell1 are registered correctly:
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova
Start the Compute services and configure them to start when the system boots:
systemctl enable openstack-nova-api openstack-nova-scheduler openstack-nova-conductor openstack-nova-novncproxy
systemctl start openstack-nova-api openstack-nova-scheduler openstack-nova-conductor openstack-nova-novncproxy
######################################################################################
Configure the Compute service on the compute node (openstack1).
Install and configure the service:
yum -y install openstack-nova-compute
vi /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:RABBIT_PASS@controller
my_ip = 10.2.1.11
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api]
# ...
auth_strategy = keystone
[keystone_authtoken]
# ...
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = NOVA_PASS
[vnc]
# ...
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html
[glance]
# ...
api_servers = http://controller:9292
[oslo_concurrency]
# ...
lock_path = /var/lib/nova/tmp
[placement]
# ...
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = PLACEMENT_PASS
Start the services and enable them at boot:
systemctl enable libvirtd openstack-nova-compute
systemctl start libvirtd openstack-nova-compute
Run the following commands on the controller node.
Source the admin credentials to enable admin-only CLI commands, then confirm the compute host shows up in the database:
. admin-openrc
openstack compute service list --service nova-compute
Discover compute hosts:
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
5) Neutron (Networking service) configuration
Configure the Networking service on the controller node.
Create the database and grant access to it:
mysql -u root -p
MariaDB [(none)]> CREATE DATABASE neutron;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'NEUTRON_DBPASS';
Source the admin credentials to gain access to admin-only CLI commands:
. admin-openrc
Create the neutron user:
openstack user create --domain default --password-prompt neutron
Add the admin role to the neutron user:
openstack role add --project service --user neutron admin
Create the neutron service entity:
openstack service create --name neutron --description "OpenStack Networking" network
Create the Networking service API endpoints:
openstack endpoint create --region RegionOne network public http://controller:9696
openstack endpoint create --region RegionOne network internal http://controller:9696
openstack endpoint create --region RegionOne network admin http://controller:9696
Install and configure the networking components (this guide uses self-service/VXLAN networks with the Linux bridge agent):
yum -y install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables
vi /etc/neutron/neutron.conf
[database]
# ...
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
[DEFAULT]
# ...
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[keystone_authtoken]
# ...
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS
[nova]
# ...
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = NOVA_PASS
[oslo_concurrency]
# ...
lock_path = /var/lib/neutron/tmp
vi /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
# ...
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security
[ml2_type_flat]
# ...
flat_networks = provider
[ml2_type_vxlan]
# ...
vni_ranges = 1:1000
[securitygroup]
# ...
enable_ipset = true
vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
# ens33 is the name of the provider (physical) network interface; adjust to match your NIC
physical_interface_mappings = provider:ens33
[vxlan]
enable_vxlan = true
local_ip = 10.2.1.10
l2_population = true
[securitygroup]
# ...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
vi /etc/sysctl.conf
#...
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
sysctl -p
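If sysctl complains that the net.bridge keys are unknown, the br_netfilter kernel module is probably not loaded yet; loading it (and persisting it across reboots) before re-running sysctl -p should fix that:
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf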
vi /etc/neutron/l3_agent.ini
[DEFAULT]
# ...
interface_driver = linuxbridge
vi /etc/neutron/dhcp_agent.ini
[DEFAULT]
# ...
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
vi /etc/neutron/metadata_agent.ini
nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET
vi /etc/nova/nova.conf
[neutron]
# ...
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true
metadata_proxy_shared_secret = METADATA_SECRET
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
Populate the database:
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
Restart the Compute API service:
systemctl restart openstack-nova-api.service
Start the Networking services and configure them to start when the system boots:
systemctl enable neutron-server neutron-linuxbridge-agent neutron-dhcp-agent neutron-l3-agent neutron-metadata-agent
systemctl start neutron-server neutron-l3-agent neutron-linuxbridge-agent neutron-dhcp-agent neutron-metadata-agent
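To verify, list the agents on the controller; the metadata, DHCP, L3, and Linux bridge agents should all show as alive:
. admin-openrc
openstack network agent list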
######################################################################################
Configure the Networking service on the compute node (openstack1).
The compute node handles connectivity and security groups for instances.
yum -y install openstack-neutron-linuxbridge ebtables ipset
Configure the common networking components, including the authentication mechanism, message queue, and plug-in:
vi /etc/neutron/neutron.conf
[DEFAULT]
# ...
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
[keystone_authtoken]
# ...
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS
[oslo_concurrency]
# ...
lock_path = /var/lib/neutron/tmp
The Linux bridge agent builds the layer-2 (bridging and switching) virtual networking infrastructure for instances and handles security groups:
vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:ens33
[vxlan]
enable_vxlan = true
local_ip = 10.2.1.11
l2_population = true
[securitygroup]
# ...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
sysctl configuration:
vi /etc/sysctl.conf
#...
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
Apply the settings (as on the controller, the br_netfilter module may need to be loaded first):
sysctl -p
Configure the Compute service on this node to use the Networking service:
vi /etc/nova/nova.conf
[neutron]
# ...
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
Restart the Compute service, then start the Linux bridge agent and enable it at boot:
systemctl restart openstack-nova-compute
systemctl enable neutron-linuxbridge-agent
systemctl start neutron-linuxbridge-agent
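Back on the controller, the agent list should now also include a Linux bridge agent running on openstack1:
openstack network agent list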
6) Horizon (Dashboard) configuration
Install and configure the Dashboard on the controller node.
Install the package and configure it:
yum -y install openstack-dashboard
vi /etc/openstack-dashboard/local_settings
OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*', 'two.example.com']  # hosts allowed to access the dashboard; '*' allows all
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': 'controller:11211',
}
}
WEBROOT = '/dashboard/'
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_API_VERSIONS = {
"identity": 3,
"image": 2,
"volume": 3,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
OPENSTACK_NEUTRON_NETWORK = {
...
'enable_router': False,
'enable_quotas': False,
'enable_distributed_router': False,
'enable_ha_router': False,
'enable_lb': False,
'enable_firewall': False,
'enable_vpn': False,
'enable_fip_topology_check': False,
}
TIME_ZONE = "PRC"
vi /etc/httpd/conf.d/openstack-dashboard.conf
WSGIApplicationGroup %{GLOBAL}
Fix file ownership and restart the services:
chown -R root:apache /etc/openstack-dashboard/
systemctl restart httpd memcached
Verify operation of the dashboard.
Access the dashboard with a web browser at http://controller/dashboard.
Authenticate using the admin or demo user and the default domain credentials.
The registry of available dashboard plug-ins is listed at:
https://docs.openstack.org/horizon/train/install/plugin-registry.html
7) Cinder (Block Storage service) configuration
Install and configure the Block Storage service on the controller node.
Create the database and grant access to it:
mysql -u root -p
MariaDB [(none)]> CREATE DATABASE cinder;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'CINDER_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'CINDER_DBPASS';
Source the admin credentials to gain access to admin-only CLI commands:
. admin-openrc
Create a cinder user:
openstack user create --domain default --password-prompt cinder
Add the admin role to the cinder user:
openstack role add --project service --user cinder admin
Create the cinderv2 and cinderv3 service entities:
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
Create the Block Storage service API endpoints:
openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
Install the package and edit the configuration file:
yum -y install openstack-cinder
vi /etc/cinder/cinder.conf
[database]
# ...
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
[DEFAULT]
# ...
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
my_ip = 10.2.1.10
[keystone_authtoken]
# ...
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = CINDER_PASS
[oslo_concurrency]
# ...
lock_path = /var/lib/cinder/tmp
Populate the Block Storage database:
su -s /bin/sh -c "cinder-manage db sync" cinder
Configure Compute to use Block Storage by editing /etc/nova/nova.conf:
vi /etc/nova/nova.conf
[cinder]
os_region_name = RegionOne
Restart the services and enable them at boot:
systemctl restart openstack-nova-api
systemctl enable openstack-cinder-api openstack-cinder-scheduler
systemctl start openstack-cinder-api openstack-cinder-scheduler
######################################################################################
Install and configure the Block Storage service on the storage nodes (openstack2 and openstack3).
Install the supporting utility packages:
yum -y install lvm2 device-mapper-persistent-data
systemctl enable lvm2-lvmetad
systemctl start lvm2-lvmetad
Create the LVM physical volume and volume group:
pvcreate /dev/sdb
vgcreate cinder-volumes /dev/sdb
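The new physical volume and volume group can be confirmed with the standard LVM reporting tools:
pvs /dev/sdb
vgs cinder-volumes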
Edit the LVM configuration so that only the relevant devices are scanned (keep "a/sda/" in the filter only if the operating system disk also uses LVM):
vi /etc/lvm/lvm.conf
devices {
#...
filter = [ "a/sda/", "a/sdb/", "r/.*/"]
Install the packages and edit the configuration file (my_ip is 10.2.1.12 on openstack2 and 10.2.1.13 on openstack3):
yum -y install openstack-cinder targetcli python-keystone
vi /etc/cinder/cinder.conf
[database]
# ...
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
[DEFAULT]
# ...
transport_url = rabbit://openstack:RABBIT_PASS@controller
enabled_backends = lvm
glance_api_servers = http://controller:9292
my_ip = 10.2.1.12
auth_strategy = keystone
[keystone_authtoken]
# ...
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = CINDER_PASS
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
target_protocol = iscsi
target_helper = lioadm
[oslo_concurrency]
# ...
lock_path = /var/lib/cinder/tmp
Start the Block Storage volume service and its dependencies and configure them to start when the system boots:
systemctl enable openstack-cinder-volume target
systemctl start openstack-cinder-volume target
Optionally, install and configure the backup service (cinder-backup). Note that the Swift backup driver configured below requires an Object Storage (swift) deployment, which this guide does not cover; backup_swift_url must point to the Object Storage endpoint (shown by "openstack catalog show object-store"), not to the cinder API.
yum -y install openstack-cinder
vi /etc/cinder/cinder.conf
[DEFAULT]
# ...
backup_driver = cinder.backup.drivers.swift.SwiftBackupDriver
backup_swift_url = SWIFT_URL
Start the service and enable it at boot:
systemctl enable openstack-cinder-backup
systemctl start openstack-cinder-backup
######################################################################################
Verify the Block Storage (cinder) service on the controller node:
. admin-openrc
openstack volume service list
This completes the minimal OpenStack installation; the sections below cover optional services.
######################################################################################
8) Orchestration service (heat) configuration
Install and configure the Orchestration service on the controller node.
Create the database and grant access to it:
mysql -u root -p
CREATE DATABASE heat;
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'HEAT_DBPASS';
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'HEAT_DBPASS';
Source the admin credentials to gain access to admin-only CLI commands:
. admin-openrc
Create the heat user and add the admin role to it:
openstack user create --domain default --password-prompt heat
openstack role add --project service --user heat admin
Create the heat and heat-cfn service entities:
openstack service create --name heat --description "Orchestration" orchestration
openstack service create --name heat-cfn --description "Orchestration" cloudformation
Create the Orchestration service API endpoints:
openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1
openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1
openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1
Create the heat domain, which contains projects and users for stacks:
openstack domain create --description "Stack projects and users" heat
Create the heat_domain_admin user to manage projects and users in the heat domain:
openstack user create --domain heat --password-prompt heat_domain_admin
Add the admin role to the heat_domain_admin user in the heat domain to enable administrative stack-management privileges for that user:
openstack role add --domain heat --user heat_domain_admin admin
Create the heat_stack_owner role:
openstack role create heat_stack_owner
Add the heat_stack_owner role to the service project and heat user to enable stack management by the heat user:
openstack role add --project service --user heat heat_stack_owner
Create the heat_stack_user role:
openstack role create heat_stack_user
Install the packages and edit the configuration files:
yum -y install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine
vi /usr/share/heat/heat-dist.conf
[DEFAULT]
sql_connection = mysql+pymysql://heat:HEAT_DBPASS@controller/heat
log_dir = /var/log/heat
use_stderr = False
[keystone_authtoken]
auth_host = controller
auth_port = 35357
auth_protocol = http
auth_uri = http://controller:5000/v2.0
vi /etc/heat/heat.conf
[database]
connection = mysql+pymysql://heat:HEAT_DBPASS@controller/heat
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
heat_metadata_server_url = http://controller:8000
heat_waitcondition_server_url = http://controller:8000/v1/waitcondition
stack_domain_admin = heat_domain_admin
stack_domain_admin_password = HEAT_DOMAIN_PASS
stack_user_domain_name = heat
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = heat
password = HEAT_PASS
[trustee]
auth_type = password
auth_url = http://controller:5000
username = heat
password = HEAT_PASS
user_domain_name = default
[clients_keystone]
auth_uri = http://controller:5000
Populate the Orchestration database:
su -s /bin/sh -c "heat-manage db_sync" heat
Start the Orchestration services and configure them to start when the system boots:
systemctl enable openstack-heat-api openstack-heat-api-cfn openstack-heat-engine
systemctl start openstack-heat-api openstack-heat-api-cfn openstack-heat-engine
Verify the Orchestration service:
. admin-openrc
openstack orchestration service list
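As an additional smoke test (assuming the Networking service configured earlier is healthy), a trivial stack can be created from a minimal HOT template and then removed; the template and stack names here are arbitrary, and the delete command asks for confirmation:
cat > demo-template.yml << 'EOF'
heat_template_version: 2015-10-15
description: Minimal template used only to confirm that heat can create stacks
resources:
  demo_net:
    type: OS::Neutron::Net
    properties:
      name: heat-demo-net
EOF
openstack stack create -t demo-template.yml demo-stack
openstack stack list
openstack stack delete demo-stack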
9) Murano (Application Catalog) service configuration
Install and configure the Murano service on the controller node.
1. Install the Murano API
Create the database and grant access to it:
mysql -u root -p
MariaDB [(none)]> CREATE DATABASE murano;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON murano.* TO 'murano'@'localhost' IDENTIFIED BY 'MURANO_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON murano.* TO 'murano'@'%' IDENTIFIED BY 'MURANO_DBPASS';
Source the admin credentials to gain access to admin-only CLI commands:
. admin-openrc
Create the murano user, add the admin role to it, and create the service entity:
openstack user create --domain default --password-prompt murano
openstack role add --project service --user murano admin
openstack service create --name murano --description "Application Catalog" application-catalog
Create the Application Catalog service API endpoints:
openstack endpoint create --region RegionOne application-catalog public http://controller:8082
openstack endpoint create --region RegionOne application-catalog internal http://controller:8082
openstack endpoint create --region RegionOne application-catalog admin http://controller:8082
Install the packages and edit the configuration file:
yum -y install openstack-murano-engine openstack-murano-api
vi /etc/murano/murano.conf
[DEFAULT]
debug = true
verbose = true
rabbit_host = 10.2.1.10
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
# use the default RabbitMQ virtual host; no separate vhost was created above
rabbit_virtual_host = /
driver = messagingv2
[database]
connection = mysql+pymysql://murano:MURANO_DBPASS@controller/murano
[keystone]
auth_url = http://controller:5000/v3
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_host = 10.2.1.10
auth_port = 5000
auth_protocol = http
admin_tenant_name = admin
admin_user = admin
admin_password = ADMIN_PASS
[murano]
url = http://controller:8082
[rabbitmq]
host = 10.2.1.10
login = openstack
password = RABBIT_PASS
virtual_host = /
[networking]
default_dns = 8.8.8.8
[oslo_messaging_rabbit]
rabbit_host=10.2.1.10
rabbit_port=5672
rabbit_hosts=10.2.1.10:5672
rabbit_use_ssl=False
rabbit_userid=openstack
rabbit_password=RABBIT_PASS
rabbit_virtual_host=/
rabbit_ha_queues=False
Populate the database:
su -s /bin/sh -c "murano-db-manage upgrade" murano
Start the services and enable them at boot:
systemctl start murano-api murano-engine
systemctl enable murano-api murano-engine
2. Install the Murano Dashboard
Install the package, adjust the dashboard settings, and restart the web server:
yum -y install openstack-murano-ui
vi /etc/openstack-dashboard/local_settings
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': 'murano-dashboard.sqlite',
}
}
SESSION_ENGINE = 'django.contrib.sessions.backends.db'
systemctl restart httpd
3. Verify the service:
. admin-openrc
openstack service list | grep application-catalog