Install OpenStack Train. OpenStack releases are named alphabetically from A to Z; the Train release dates from 2019-10-16.
Versions from Victoria onward require CentOS 8 or newer (see the OpenStack release notes).
Configure the NIC with a static IP address and make sure the following parameters are set:
DEVICE=INTERFACE_NAME
TYPE=Ethernet
ONBOOT="yes"
BOOTPROTO="none"
#sed -i 's/static/none/g' /etc/sysconfig/network-scripts/ifcfg-eth0
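For reference, a complete static configuration for the controller's management NIC might look like the sketch below (an assumption that the NIC is named eth0; the gateway and DNS values are reused from the eth1 example later in this guide and should be adjusted to your own network):
# /etc/sysconfig/network-scripts/ifcfg-eth0 (example for the controller)
DEVICE=eth0
TYPE=Ethernet
ONBOOT="yes"
BOOTPROTO="none"
IPADDR=10.0.19.133
PREFIX=24
GATEWAY=10.0.19.254
DNS1=202.96.209.5
# on CentOS 7, restart the network service to apply the change
systemctl restart network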
# Set the hostname on each of the three nodes
hostnamectl set-hostname controller
hostnamectl set-hostname compute-1
hostnamectl set-hostname compute-2
Configure name resolution
# Add the following entries to /etc/hosts on all three nodes
10.0.19.133 controller
10.0.19.134 compute-1
10.0.19.135 compute-2
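As a quick sanity check (a sketch; run it on each node), confirm that all three hostnames resolve and are reachable:
for h in controller compute-1 compute-2; do
    ping -c 1 "$h" >/dev/null && echo "$h OK" || echo "$h FAILED"
done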
Disable the firewall and SELinux, then reboot
systemctl stop firewalld
systemctl disable firewalld
vim /etc/sysconfig/selinux
SELINUX=disabled
reboot
Chrony (controller node)
yum install chrony -y
vim /etc/chrony.conf  (add the following)
server NTP_SERVER iburst        # NTP_SERVER is a placeholder for your upstream NTP server
allow 10.0.19.0/24              # must cover the subnet the other nodes connect from
systemctl enable chronyd.service
systemctl start chronyd.service
Chrony (compute nodes)
yum install chrony -y
vim /etc/chrony.conf  (add the following)
server controller iburst
and comment out the default server N.centos.pool.ntp.org iburst lines
systemctl enable chronyd.service
systemctl start chronyd.service
[root@controller ~]# chronyc sources
210 Number of sources = 4
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^? ntp6.flashdance.cx 2 7 3 86 +4000us[+2661us] +/- 154ms
^+ ntp1.as200552.net 2 6 377 26 -11ms[ -11ms] +/- 134ms
^* ntppool2.time.nl 1 6 377 27 +13ms[ +12ms] +/- 113ms
^+ ntp1.vmar.se 2 6 377 25 +8357us[+8357us] +/- 150ms
[root@compute-1 ~]# chronyc sources
210 Number of sources = 1
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^? controller 0 7 0 - +0ns[ +0ns] +/- 0ns
[root@compute-2 ~]# chronyc sources
210 Number of sources = 1
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^? controller 0 6 0 - +0ns[ +0ns] +/- 0ns
yum install centos-release-openstack-train -y
# On CentOS 8 only, also run the following command
# yum config-manager --set-enabled powertools
# Update the packages
yum upgrade -y
# Install the OpenStack client
yum install python-openstackclient -y
# Install openstack-selinux
yum install openstack-selinux -y
yum install mariadb mariadb-server python2-PyMySQL -y
Create and edit /etc/my.cnf.d/openstack.cnf
[mysqld]
bind-address = 10.0.19.133
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
Enable and start the database service
systemctl enable mariadb.service
systemctl start mariadb.service
Secure the database installation
mysql_secure_installation
NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
SERVERS IN PRODUCTION USE! PLEASE READ EACH STEP CAREFULLY!
In order to log into MariaDB to secure it, we'll need the current
password for the root user. If you've just installed MariaDB, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.
Enter current password for root (enter for none): # just press Enter
OK, successfully used password, moving on...
Setting the root password ensures that nobody can log into the MariaDB
root user without the proper authorisation.
Set root password? [Y/n] y
New password: # set the password to Mss123456
Re-enter new password:
Password updated successfully!
Reloading privilege tables..
... Success!
By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them. This is intended only for testing, and to make the installation
go a bit smoother. You should remove them before moving into a
production environment.
Remove anonymous users? [Y/n] y
... Success!
Normally, root should only be allowed to connect from 'localhost'. This
ensures that someone cannot guess at the root password from the network.
Disallow root login remotely? [Y/n] y
... Success!
By default, MariaDB comes with a database named 'test' that anyone can
access. This is also intended only for testing, and should be removed
before moving into a production environment.
Remove test database and access to it? [Y/n] y
- Dropping test database...
... Success!
- Removing privileges on test database...
... Success!
Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.
Reload privilege tables now? [Y/n] y
... Success!
Cleaning up...
All done! If you've completed all of the above steps, your MariaDB
installation should now be secure.
Thanks for using MariaDB!
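To confirm that the new root password works, a quick check (enter Mss123456 at the prompt):
mysql -u root -p -e "SHOW DATABASES;"
# the default databases (information_schema, mysql, performance_schema) should be listed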
Message queue: RabbitMQ (credentials openstack:openstack)
# Install the package
yum install rabbitmq-server -y
# Enable and start the service
systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service
# Add the openstack user (password openstack)
rabbitmqctl add_user openstack openstack
# Grant the user configure, write, and read permissions
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
# Set the user role
rabbitmqctl set_user_tags openstack administrator
# Enable the management web plugin
rabbitmq-plugins enable rabbitmq_management
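A few hedged checks that the user and the management plugin are in place (the management UI normally listens on port 15672 and can be opened at http://controller:15672 with the openstack/openstack credentials):
rabbitmqctl list_users            # should include: openstack  [administrator]
rabbitmqctl list_permissions      # openstack should have ".*" ".*" ".*" on the default vhost
ss -tnlp | grep 15672             # management web UI listener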
yum -y install memcached python-memcached
Edit /etc/sysconfig/memcached
OPTIONS="-l 127.0.0.1,::1,controller"
Enable and start the service
systemctl enable memcached.service
systemctl start memcached.service
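A quick check (a sketch) that memcached is listening on the expected addresses:
ss -tnlp | grep 11211
# if the memcached-tool helper shipped with the package is available, stats can be queried directly:
memcached-tool controller:11211 stats | head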
yum -y install etcd
Edit the etcd configuration: vim /etc/etcd/etcd.conf
[root@controller ~]# cat /etc/etcd/etcd.conf |grep -v '^#'
#[Member]
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://10.0.19.133:2380"
ETCD_LISTEN_CLIENT_URLS="http://10.0.19.133:2379"
ETCD_NAME="controller"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.0.19.133:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://10.0.19.133:2379"
ETCD_INITIAL_CLUSTER="default=http://10.0.19.133:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
ETCD_INITIAL_CLUSTER_STATE="new"
Enable and start etcd
systemctl start etcd
systemctl enable etcd
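A hedged health check for etcd (the etcd client on CentOS 7 defaults to the v2 API; with ETCDCTL_API=3 the equivalent is "etcdctl endpoint health"):
etcdctl --endpoints=http://10.0.19.133:2379 cluster-health
etcdctl --endpoints=http://10.0.19.133:2379 member list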
$ mysql -u root -p
# enter the MariaDB root password (Mss123456) at the prompt
MariaDB [(none)]> CREATE DATABASE keystone;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
IDENTIFIED BY 'Mss123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
IDENTIFIED BY 'Mss123456';
Exit the database client.
yum install openstack-keystone httpd mod_wsgi -y
If the installation fails with the following error:
Error: Package: python2-qpid-proton-0.26.0-2.el7.x86_64 (centos-openstack-train)
Requires: qpid-proton-c(x86-64) = 0.26.0-2.el7
Available: qpid-proton-c-0.14.0-2.el7.x86_64 (extras)
qpid-proton-c(x86-64) = 0.14.0-2.el7
Available: qpid-proton-c-0.26.0-2.el7.x86_64 (centos-openstack-train)
qpid-proton-c(x86-64) = 0.26.0-2.el7
Installing: qpid-proton-c-0.36.0-1.el7.x86_64 (epel)
qpid-proton-c(x86-64) = 0.36.0-1.el7
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest
the workaround is:
yum -y install python2-qpid-proton-0.26.0-2.el7.x86_64
then re-run the installation command above.
Edit the Keystone configuration (the keystone database password is Mss123456)
vim /etc/keystone/keystone.conf
……
[database]
connection = mysql+pymysql://keystone:Mss123456@controller/keystone
……
[token]
provider = fernet
……
su -s /bin/sh -c "keystone-manage db_sync" keystone
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
Bootstrap the Identity service (admin password Mss123456):
keystone-manage bootstrap --bootstrap-password Mss123456 \
--bootstrap-admin-url http://controller:5000/v3/ \
--bootstrap-internal-url http://controller:5000/v3/ \
--bootstrap-public-url http://controller:5000/v3/ \
--bootstrap-region-id RegionOne
vim /etc/httpd/conf/httpd.conf
# Add
ServerName controller
# Create the symlink
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
# Enable and start httpd
systemctl enable httpd.service
systemctl start httpd.service
Set the admin environment variables
export OS_USERNAME=admin
export OS_PASSWORD=Mss123456
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
1) Create a domain named example
openstack domain create --description "An Example Domain" example
2) Create a project named service
openstack project create --domain default \
--description "Service Project" service
3) Create the myproject project
openstack project create --domain default \
--description "Demo Project" myproject
4) Create the myuser user:
openstack user create --domain default \
--password-prompt myuser
5) Create the myrole role and assign it to myuser in the myproject project:
openstack role create myrole
openstack role add --project myproject --user myuser myrole
1) Unset the temporary OS_AUTH_URL and OS_PASSWORD environment variables:
unset OS_AUTH_URL OS_PASSWORD
2) As the admin user, request an authentication token:
openstack --os-auth-url http://controller:5000/v3 \
--os-project-domain-name Default --os-user-domain-name Default \
--os-project-name admin --os-username admin token issue
3) As the myuser user, request an authentication token:
openstack --os-auth-url http://controller:5000/v3 \
--os-project-domain-name Default --os-user-domain-name Default \
--os-project-name myproject --os-username myuser token issue
Create an admin-openrc file with the following contents:
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=Mss123456
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
Create a demo-openrc file with the following contents:
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=myproject
export OS_USERNAME=myuser
export OS_PASSWORD=Mss123456
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
# Load the admin environment variables
. admin-openrc
# Request an authentication token
openstack token issue
# Load the demo environment variables
. demo-openrc
# Request an authentication token
openstack token issue
1) Create and configure the glance database
mysql -u root -p
MariaDB [(none)]> CREATE DATABASE glance;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
IDENTIFIED BY 'Mss123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
IDENTIFIED BY 'Mss123456';
# Load the admin credentials
. admin-openrc
# Create the glance user
openstack user create --domain default --password-prompt glance
# Add the glance user to the service project with the admin role
openstack role add --project service --user glance admin
# Create the glance service entity
openstack service create --name glance \
--description "OpenStack Image" image
# Create the Image service API endpoints
openstack endpoint create --region RegionOne \
image public http://controller:9292
openstack endpoint create --region RegionOne \
image internal http://controller:9292
openstack endpoint create --region RegionOne \
image admin http://controller:9292
yum install openstack-glance -y
Configure the API service
vim /etc/glance/glance-api.conf
[database]
# ...
connection = mysql+pymysql://glance:Mss123456@controller/glance
[keystone_authtoken]
# ...
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = Mss123456
[paste_deploy]
# ...
flavor = keystone
[glance_store]
# ...
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
su -s /bin/sh -c "glance-manage db_sync" glance
systemctl enable openstack-glance-api.service
systemctl start openstack-glance-api.service
# Download the test image
wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
# Upload it to the Image service
glance image-create --name "cirros" \
--file cirros-0.4.0-x86_64-disk.img \
--disk-format qcow2 --container-format bare \
--visibility=public
# Confirm the upload and verify the image attributes
glance image-list
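Optionally, confirm that the image data actually landed in the file store configured above (a quick check; the cirros 0.4.0 image is roughly 12 MB):
openstack image show cirros              # status should be "active"
ls -lh /var/lib/glance/images/           # one file named after the image ID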
1) Create the database
mysql -u root -p
MariaDB [(none)]> CREATE DATABASE placement;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \
IDENTIFIED BY 'Mss123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \
IDENTIFIED BY 'Mss123456';
2) Create the placement user, service, and API endpoints
# Load the admin credentials
. admin-openrc
# Create the placement user
openstack user create --domain default --password-prompt placement
# Grant the admin role
openstack role add --project service --user placement admin
# Create the Placement API service entity
openstack service create --name placement \
--description "Placement API" placement
# Create the API endpoints
openstack endpoint create --region RegionOne \
placement public http://controller:8778
openstack endpoint create --region RegionOne \
placement internal http://controller:8778
openstack endpoint create --region RegionOne \
placement admin http://controller:8778
3) Install the service
yum install openstack-placement-api -y
Configure the service
vim /etc/placement/placement.conf
[placement_database]
# ...
connection = mysql+pymysql://placement:Mss123456@controller/placement
[api]
# ...
auth_strategy = keystone
[keystone_authtoken]
# ...
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = Mss123456
su -s /bin/sh -c "placement-manage db sync" placement
systemctl restart httpd
placement-status upgrade check
# The following verification commands have not been verified here
# pip3 install osc-placement
# openstack --os-placement-api-version 1.2 resource class list --sort-column name
# openstack --os-placement-api-version 1.6 trait list --sort-column name
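A simpler hedged check is to query the Placement API root directly, which should return a small JSON document listing the supported microversions:
curl http://controller:8778
# e.g. {"versions": [{"id": "v1.0", ... }]}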
1) Create the databases
mysql -u root -p
# Create the databases
MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;
# Create the nova user and grant privileges
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
IDENTIFIED BY 'Mss123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
IDENTIFIED BY 'Mss123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
IDENTIFIED BY 'Mss123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
IDENTIFIED BY 'Mss123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
IDENTIFIED BY 'Mss123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
IDENTIFIED BY 'Mss123456';
2) Create the nova user, service, and API endpoints
# Load the admin credentials
. admin-openrc
# Create the nova user
openstack user create --domain default --password-prompt nova
# Grant the admin role
openstack role add --project service --user nova admin
# Create the nova service entity
openstack service create --name nova \
--description "OpenStack Compute" compute
# Create the Compute API service endpoints
openstack endpoint create --region RegionOne \
compute public http://controller:8774/v2.1
openstack endpoint create --region RegionOne \
compute internal http://controller:8774/v2.1
openstack endpoint create --region RegionOne \
compute admin http://controller:8774/v2.1
3) Install the packages
yum install openstack-nova-api openstack-nova-conductor \
openstack-nova-novncproxy openstack-nova-scheduler -y
Edit the configuration
vim /etc/nova/nova.conf
[DEFAULT]
# ...
enabled_apis = osapi_compute,metadata
[api_database]
# ...
connection = mysql+pymysql://nova:Mss123456@controller/nova_api
[database]
# ...
connection = mysql+pymysql://nova:Mss123456@controller/nova
[DEFAULT]
# ...
transport_url = rabbit://openstack:openstack@controller:5672/
[api]
# ...
auth_strategy = keystone
[keystone_authtoken]
# ...
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = Mss123456
[DEFAULT]
# ...
my_ip = 10.0.19.133
[DEFAULT]
# ...
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[vnc]
enabled = true
# ...
server_listen = $my_ip
server_proxyclient_address = $my_ip
[glance]
# ...
api_servers = http://controller:9292
[oslo_concurrency]
# ...
lock_path = /var/lib/nova/tmp
[placement]
# ...
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = Mss123456
4) Populate the nova-api database
su -s /bin/sh -c "nova-manage api_db sync" nova
5) Register the cell0 database, create the cell1 cell, and populate the nova database
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
su -s /bin/sh -c "nova-manage db sync" nova
6) Verify that nova cell0 and cell1 are registered correctly
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova
7) Enable and start the services
systemctl enable \
openstack-nova-api.service \
openstack-nova-scheduler.service \
openstack-nova-conductor.service \
openstack-nova-novncproxy.service
systemctl start \
openstack-nova-api.service \
openstack-nova-scheduler.service \
openstack-nova-conductor.service \
openstack-nova-novncproxy.service
1) Install the package
yum install openstack-nova-compute -y
2) Edit the configuration
vim /etc/nova/nova.conf
[DEFAULT]
# ...
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:openstack@controller
my_ip = 10.0.19.134    # use this compute node's own IP address
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api]
# ...
auth_strategy = keystone
[keystone_authtoken]
# ...
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = Mss123456
[vnc]
# ...
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html
[glance]
# ...
api_servers = http://controller:9292
[oslo_concurrency]
# ...
lock_path = /var/lib/nova/tmp
[placement]
# ...
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = Mss123456
3) Determine whether the compute node supports hardware acceleration
egrep -c '(vmx|svm)' /proc/cpuinfo
If the command returns 0, the node does not support hardware acceleration and you must set the following in /etc/nova/nova.conf:
[libvirt]
virt_type = qemu
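If you are configuring several compute nodes, the check and fallback can be scripted; a minimal sketch assuming the crudini tool is available (yum install crudini -y):
# fall back to plain QEMU emulation when no VT-x/AMD-V flags are present
if [ "$(egrep -c '(vmx|svm)' /proc/cpuinfo)" -eq 0 ]; then
    crudini --set /etc/nova/nova.conf libvirt virt_type qemu
fi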
4) Enable and start the services
systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service
1) Add the compute nodes to the cell database (run on the controller node)
# Load the admin credentials
. admin-openrc
# Confirm that the compute hosts are registered in the database
openstack compute service list --service nova-compute
# Discover the compute hosts
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
# Optionally, enable automatic host discovery
vim /etc/nova/nova.conf
[scheduler]
discover_hosts_in_cells_interval = 300
2) Verify the services
# List the compute service components and confirm each process launched successfully
openstack compute service list
# List the API endpoints in the service catalog
openstack catalog list
# List images to confirm connectivity with the Image service
openstack image list
# Check that the cells and the Placement API are working
nova-status upgrade check
If the check fails with the following error:
[root@controller ~]# nova-status upgrade check
Error:
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/nova/cmd/status.py", line 398, in main
ret = fn(*fn_args, **fn_kwargs)
File "/usr/lib/python2.7/site-packages/oslo_upgradecheck/upgradecheck.py", line 102, in check
result = func(self)
File "/usr/lib/python2.7/site-packages/nova/cmd/status.py", line 164, in _check_placement
versions = self._placement_get("/")
File "/usr/lib/python2.7/site-packages/nova/cmd/status.py", line 154, in _placement_get
return client.get(path, raise_exc=True).json()
File "/usr/lib/python2.7/site-packages/keystoneauth1/adapter.py", line 386, in get
return self.request(url, 'GET', **kwargs)
File "/usr/lib/python2.7/site-packages/keystoneauth1/adapter.py", line 248, in request
return self.session.request(url, method, **kwargs)
File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line 961, in request
raise exceptions.from_response(resp, method, url)
Forbidden: Forbidden (HTTP 403)
the fix is:
vim /etc/httpd/conf.d/00-placement-api.conf
Append the following to the end of the file:
<Directory /usr/bin>
   <IfVersion >= 2.4>
      Require all granted
   </IfVersion>
   <IfVersion < 2.4>
      Order allow,deny
      Allow from all
   </IfVersion>
</Directory>
Restart the httpd service
systemctl restart httpd
1) Create the database
mysql -u root -p
MariaDB [(none)]> CREATE DATABASE neutron;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
IDENTIFIED BY 'Mss123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
IDENTIFIED BY 'Mss123456';
2) Create the neutron user
. admin-openrc
# Create the neutron user and grant the admin role
openstack user create --domain default --password-prompt neutron
openstack role add --project service --user neutron admin
# Create the neutron service entity
openstack service create --name neutron \
--description "OpenStack Networking" network
# Create the network API endpoints
openstack endpoint create --region RegionOne \
network public http://controller:9696
openstack endpoint create --region RegionOne \
network internal http://controller:9696
openstack endpoint create --region RegionOne \
network admin http://controller:9696
3) Install the neutron services (using the self-service network deployment option)
yum install openstack-neutron openstack-neutron-ml2 \
openstack-neutron-linuxbridge ebtables -y
4) Edit the configuration file
vim /etc/neutron/neutron.conf
[database]
# ...
connection = mysql+pymysql://neutron:Mss123456@controller/neutron
[DEFAULT]
# ...
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:openstack@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[keystone_authtoken]
# ...
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = Mss123456
[nova]
# ...
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = Mss123456
[oslo_concurrency]
# ...
lock_path = /var/lib/neutron/tmp
5) Configure the ML2 (Layer 2) plug-in
vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
# ...
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security
[ml2_type_flat]
# ...
flat_networks = provider
[ml2_type_vxlan]
# ...
vni_ranges = 1:1000
[securitygroup]
# ...
enable_ipset = true
6) Configure the Linux bridge agent
The controller node needs a second NIC; for example:
[root@controller network-scripts]# cat /etc/sysconfig/network-scripts/ifcfg-eth1
TYPE=Ethernet
BOOTPROTO=none
NAME=eth1
DEVICE=eth1
ONBOOT=yes
IPADDR=10.0.19.138
PREFIX=24
GATEWAY=10.0.19.254
DNS1=202.96.209.5
DNS2=202.96.209.133
vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:eth1    # name of the second (provider) NIC
[vxlan]
enable_vxlan = true
local_ip = 10.0.19.133
l2_population = true
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
7) Set the required kernel parameters
vim /etc/sysctl.conf
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
# Load the br_netfilter module
modprobe br_netfilter
# Apply the kernel parameters from the configuration file
sysctl -p
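Note that modprobe only loads br_netfilter for the current boot; to have it loaded automatically after a reboot, a common approach on CentOS 7 is a modules-load.d entry (do the same on the compute nodes):
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
# systemd-modules-load then loads the module at every boot, so the sysctl settings keep applying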
8) Configure the Layer 3 (L3) agent
vim /etc/neutron/l3_agent.ini
[DEFAULT]
# ...
interface_driver = linuxbridge
9) Configure the DHCP agent
vim /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
10) Configure the metadata agent
vim /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = Mss123456
11) Configure the Compute service to use the Networking service
vim /etc/nova/nova.conf
[neutron]
# ...
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = Mss123456
service_metadata_proxy = true
metadata_proxy_shared_secret = Mss123456
12) Create the symlink
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
13) Populate the database
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
14) Restart the Compute API service
systemctl restart openstack-nova-api.service
15) Enable and start the Networking services
systemctl enable neutron-server.service \
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service
systemctl start neutron-server.service \
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service
systemctl enable neutron-l3-agent.service
systemctl start neutron-l3-agent.service
1) Install the components
yum install openstack-neutron-linuxbridge ebtables ipset -y
2) Edit the configuration file
vim /etc/neutron/neutron.conf
[DEFAULT]
transport_url = rabbit://openstack:openstack@controller
auth_strategy = keystone
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = Mss123456
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
3) Configure the Linux bridge agent
vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:eth0    # this node's provider NIC
[vxlan]
enable_vxlan = true
local_ip = 10.0.19.134    # this node's own IP address
l2_population = true
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
vim /etc/sysctl.conf
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
modprobe br_netfilter
sysctl -p
4) Configure the Compute service to use the Networking service
vim /etc/nova/nova.conf
[neutron]
# ...
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = Mss123456
5) Enable and start the services
systemctl restart openstack-nova-compute.service
systemctl enable neutron-linuxbridge-agent.service
systemctl start neutron-linuxbridge-agent.service
6) Verification (run on the controller node)
. admin-openrc
# Verify that neutron-server started successfully
openstack extension list --network
# List the agents to verify the network agents launched successfully
openstack network agent list
# If your compute nodes appear in the output below, the setup succeeded; if not, check /var/log/neutron/linuxbridge-agent.log
[root@controller ~]# openstack network agent list
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| 244101b3-8905-4999-b415-151fc67cb875 | Linux bridge agent | controller | None | :-) | UP | neutron-linuxbridge-agent |
| 2c1b081a-b132-4091-9b32-519eb3e52629 | Metadata agent | controller | None | :-) | UP | neutron-metadata-agent |
| 54dacfca-8aa9-4bcf-a771-88468b872801 | DHCP agent | controller | nova | :-) | UP | neutron-dhcp-agent |
| 5cb00320-c954-40a2-adbd-1dfd3c9ed04d | Linux bridge agent | compute-1 | None | :-) | UP | neutron-linuxbridge-agent |
| b16c5b44-f98a-443f-8393-9d62ba69d279 | L3 agent | controller | nova | :-) | UP | neutron-l3-agent |
| fcb6fb5f-c509-473f-b636-af99d7bbc34d | Linux bridge agent | compute-2 | None | :-) | UP | neutron-linuxbridge-agent |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
1) Install the package
yum install openstack-dashboard -y
2) Edit the configuration
vim /etc/openstack-dashboard/local_settings
# Point the dashboard at the controller node
OPENSTACK_HOST = "controller"
# Allow all hosts to access the dashboard
ALLOWED_HOSTS = ['*']
# Configure memcached session storage
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': 'controller:11211',
}
}
# Enable the Identity API version 3
OPENSTACK_KEYSTONE_URL = "http://%s/identity/v3" % OPENSTACK_HOST
# Enable multi-domain support
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
# Configure the API versions
OPENSTACK_API_VERSIONS = {
"identity": 3,
"image": 2,
"volume": 3,
}
# Set Default as the default domain
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
# Set user as the default role
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
# Set the time zone
TIME_ZONE = "Asia/Shanghai"
OPENSTACK_NEUTRON_NETWORK = {
...
'enable_router': False,
'enable_quotas': False,
'enable_distributed_router': False,
'enable_ha_router': False,
'enable_lb': False,
'enable_firewall': False,
'enable_vpn': False,
'enable_fip_topology_check': False,
}
# This setting is required; without it the dashboard page may fail to load
WEBROOT = '/dashboard/'
3) Edit the httpd configuration
vim /etc/httpd/conf.d/openstack-dashboard.conf
# Add
WSGIApplicationGroup %{GLOBAL}
4) Restart the httpd and memcached services
systemctl restart httpd.service memcached.service
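A quick check that the dashboard is being served (a 200 response or a redirect to the login page is expected):
curl -I http://controller/dashboard/
# then log in from a browser at http://controller/dashboard/ with the admin account (domain Default, password Mss123456)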
1) Create the database
mysql -u root -p
MariaDB [(none)]> CREATE DATABASE cinder;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
IDENTIFIED BY 'Mss123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
IDENTIFIED BY 'Mss123456';
2) Configure the user and service
. admin-openrc
# Create the cinder user
openstack user create --domain default --password-prompt cinder
# Add the admin role to the cinder user
openstack role add --project service --user cinder admin
# Create the cinderv2 and cinderv3 service entities
openstack service create --name cinderv2 \
--description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3 \
--description "OpenStack Block Storage" volumev3
# Create the Block Storage service API endpoints
openstack endpoint create --region RegionOne \
volumev2 public http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne \
volumev2 internal http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne \
volumev2 admin http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne \
volumev3 public http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne \
volumev3 internal http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne \
volumev3 admin http://controller:8776/v3/%\(project_id\)s
3) Install and configure the packages
yum install openstack-cinder -y
vim /etc/cinder/cinder.conf
[database]
connection = mysql+pymysql://cinder:Mss123456@controller/cinder
[DEFAULT]
transport_url = rabbit://openstack:openstack@controller
auth_strategy = keystone
my_ip = 10.0.19.133
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = Mss123456
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
4) Populate the database
su -s /bin/sh -c "cinder-manage db sync" cinder
5) Configure Compute to use Block Storage
vim /etc/nova/nova.conf
[cinder]
os_region_name = RegionOne
6) Enable and start the services
# Restart the Compute API service
systemctl restart openstack-nova-api.service
# Enable and start the Block Storage services
systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
1) Install and start the LVM packages
yum install lvm2 device-mapper-persistent-data -y
systemctl enable lvm2-lvmetad.service
systemctl start lvm2-lvmetad.service
2) Create the LVM physical volume /dev/sdb
pvcreate /dev/sdb
3) Create the LVM volume group cinder-volumes
vgcreate cinder-volumes /dev/sdb
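Verify that the physical volume and volume group were created (a quick check):
pvs /dev/sdb              # should show /dev/sdb assigned to cinder-volumes
vgs cinder-volumes        # free space should roughly match the disk size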
4) Configure LVM device filtering
vim /etc/lvm/lvm.conf
devices {
...
filter = [ "a/sdb/", "r/.*/"]
5) Install the packages
yum install targetcli python-keystone -y
# openstack-cinder is already installed because the controller and storage roles share this node; if the storage node is separate, also run: yum install openstack-cinder -y
6) Configure the service
vim /etc/cinder/cinder.conf
[database]
connection = mysql+pymysql://cinder:Mss123456@controller/cinder
[DEFAULT]
transport_url = rabbit://openstack:openstack@controller
auth_strategy = keystone
my_ip = 10.0.19.133
enabled_backends = lvm
glance_api_servers = http://controller:9292
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = Mss123456
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
target_protocol = iscsi
target_helper = lioadm
[oslo_concurrency]
# ...
lock_path = /var/lib/cinder/tmp
7) Enable and start the services
systemctl enable openstack-cinder-volume.service target.service
systemctl start openstack-cinder-volume.service target.service
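To verify the Block Storage services from the controller (a sketch; in this co-located setup the volume service host is typically shown as controller@lvm):
. admin-openrc
openstack volume service list
# both cinder-scheduler and cinder-volume should report state "up"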
8) Install and configure the backup service (optional)
# openstack-cinder is already installed on this node; if the backup service runs on a separate node, install it there with the command below
yum install openstack-cinder -y
Edit the configuration
vim /etc/cinder/cinder.conf
[DEFAULT]
# ...
backup_driver = cinder.backup.drivers.swift.SwiftBackupDriver
backup_swift_url = SWIFT_URL
# Replace SWIFT_URL with the URL of the object-store API endpoint, obtained with:
openstack catalog show object-store
Enable and start the backup service
systemctl enable openstack-cinder-backup.service
systemctl start openstack-cinder-backup.service