Installing OpenStack Wallaby

This guide installs OpenStack Wallaby. OpenStack releases are named in alphabetical order, A through Z.

Releases from Victoria onward require CentOS 8 or later (see the OpenStack release notes).

References

  • 1. https://docs.openstack.org/install-guide/openstack-services.html
  • 2. https://docs.openstack.org/liberty/zh_CN/install-guide-rdo/

0. Preparation

  • 10.0.19.170 (controller; second NIC 10.0.19.173)
  • 10.0.19.171 (compute-1)
  • 10.0.19.172 (compute-2)

Configure a static IP address on each host and make sure the following parameters are set:

DEVICE=INTERFACE_NAME
TYPE=Ethernet
ONBOOT="yes"
BOOTPROTO="none"

#sed -i 's/static/none/g' /etc/sysconfig/network-scripts/ifcfg-eth0 
# systemctl restart NetworkManager.service 
# Set the hostname on each of the three nodes
hostnamectl set-hostname controller
hostnamectl set-hostname compute-1
hostnamectl set-hostname compute-2

Configure name resolution

# Add the following entries to /etc/hosts (on all three nodes)
10.0.19.170 controller
10.0.19.171 compute-1
10.0.19.172 compute-2
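
As a quick optional sanity check (a minimal sketch assuming the hosts entries above), confirm that the nodes can resolve and reach each other by name:

# Run on the controller, and repeat from each compute node if desired
for host in controller compute-1 compute-2; do
    ping -c 2 "$host" >/dev/null && echo "$host reachable" || echo "$host UNREACHABLE"
done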

Disable the firewall and SELinux, then reboot

systemctl stop firewalld
systemctl disable firewalld
vim /etc/sysconfig/selinux
SELINUX=disabled
# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
reboot

1. Base environment

1.1 Install chrony on the controller node

dnf install chrony -y
vim /etc/chrony.conf    # add the following (replace NTP_SERVER with a suitable upstream NTP server)
server NTP_SERVER iburst
allow 10.0.19.0/24

systemctl enable chronyd.service
systemctl start chronyd.service

1.2 Install chrony on the compute nodes

dnf install chrony -y
vim /etc/chrony.conf    # add the following line
server controller iburst
Also comment out the default upstream server lines (e.g. server 0.centos.pool.ntp.org iburst).

systemctl enable chronyd.service
systemctl start chronyd.service

1.3 Run the following command on the controller node

[root@controller ~]# chronyc sources
210 Number of sources = 4
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^? ntp6.flashdance.cx            2   7     3    86  +4000us[+2661us] +/-  154ms
^+ ntp1.as200552.net             2   6   377    26    -11ms[  -11ms] +/-  134ms
^* ntppool2.time.nl              1   6   377    27    +13ms[  +12ms] +/-  113ms
^+ ntp1.vmar.se                  2   6   377    25  +8357us[+8357us] +/-  150ms

1.4 Run the following command on the compute nodes

[root@compute-1 ~]# chronyc sources
210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^? controller                    0   7     0     -     +0ns[   +0ns] +/-    0ns

[root@compute-2 ~]# chronyc sources
210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^? controller                    0   6     0     -     +0ns[   +0ns] +/-    0ns

1.5 Install the OpenStack repositories (on all three nodes)

dnf install centos-release-openstack-wallaby -y
# On CentOS 8 only, also run the following command
dnf config-manager --set-enabled powertools

# Update packages
dnf upgrade -y
# Install the OpenStack client
dnf install python3-openstackclient -y
# Install the openstack-selinux policy package
dnf install openstack-selinux -y

1.6 Install the SQL database (controller node)

  • Credentials: root / Mss123456
dnf install mariadb mariadb-server python3-PyMySQL -y    # on CentOS 8 the PyMySQL package is python3-PyMySQL

Create and edit /etc/my.cnf.d/openstack.cnf

[mysqld]
bind-address = 10.0.19.170

default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8

Enable and start the database service

systemctl enable mariadb.service
systemctl start mariadb.service

Secure the database

  • The root credentials are root / Mss123456
mysql_secure_installation
NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
      SERVERS IN PRODUCTION USE!  PLEASE READ EACH STEP CAREFULLY!

In order to log into MariaDB to secure it, we'll need the current
password for the root user.  If you've just installed MariaDB, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.

Enter current password for root (enter for none): 		# just press Enter
OK, successfully used password, moving on...

Setting the root password ensures that nobody can log into the MariaDB
root user without the proper authorisation.

Set root password? [Y/n] y
New password: 				# set the root password (Mss123456 in this guide)
Re-enter new password: 
Password updated successfully!
Reloading privilege tables..
 ... Success!


By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them.  This is intended only for testing, and to make the installation
go a bit smoother.  You should remove them before moving into a
production environment.

Remove anonymous users? [Y/n] y
 ... Success!

Normally, root should only be allowed to connect from 'localhost'.  This
ensures that someone cannot guess at the root password from the network.

Disallow root login remotely? [Y/n] y
 ... Success!

By default, MariaDB comes with a database named 'test' that anyone can
access.  This is also intended only for testing, and should be removed
before moving into a production environment.

Remove test database and access to it? [Y/n] y
 - Dropping test database...
 ... Success!
 - Removing privileges on test database...
 ... Success!

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

Reload privilege tables now? [Y/n] y
 ... Success!

Cleaning up...

All done!  If you've completed all of the above steps, your MariaDB
installation should now be secure.

Thanks for using MariaDB!

1.7 Install the RabbitMQ message queue (controller node)

  • RabbitMQ credentials: openstack / openstack
  • Web UI: http://10.0.19.170:15672/#/
# Install
dnf install rabbitmq-server -y

# Enable and start
systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service

# Add the openstack user
rabbitmqctl add_user openstack openstack

# Grant configure, write, and read permissions
rabbitmqctl set_permissions openstack ".*" ".*" ".*"

# Assign the administrator role
rabbitmqctl set_user_tags openstack administrator

# Enable the management web plugin
rabbitmq-plugins enable rabbitmq_management
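
Optionally, the user, permissions, and plugin state can be checked with standard rabbitmqctl / rabbitmq-plugins commands (a sketch; output formats vary slightly between RabbitMQ versions):

# The openstack user should be listed with the [administrator] tag
rabbitmqctl list_users
# Show the permissions granted on the default vhost "/"
rabbitmqctl list_permissions -p /
# Confirm the management plugin is enabled (it serves http://10.0.19.170:15672/)
rabbitmq-plugins list | grep rabbitmq_management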

1.8 Install the memcached service (controller node)

dnf -y install memcached python3-memcached

Edit /etc/sysconfig/memcached

sed -i 's/::1/::1,controller/g' /etc/sysconfig/memcached
#OPTIONS="-l 127.0.0.1,::1,controller"

Enable and start

systemctl enable memcached.service
systemctl start memcached.service
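
An optional sanity check that memcached is listening on the addresses configured above (a sketch; the nc utility here is assumed to come from the nmap-ncat package):

# The service should be active and listening on port 11211
systemctl is-active memcached.service
ss -tlnp | grep 11211
# Query basic stats over the memcached text protocol
printf 'stats\nquit\n' | nc controller 11211 | head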

1.9 Install the etcd service (controller node)

dnf -y install etcd

Edit /etc/etcd/etcd.conf

[root@controller ~]# cat /etc/etcd/etcd.conf |grep -v '^#'
#[Member]
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://10.0.19.170:2380"
ETCD_LISTEN_CLIENT_URLS="http://10.0.19.170:2379"
ETCD_NAME="controller"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.0.19.170:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://10.0.19.170:2379"
ETCD_INITIAL_CLUSTER="controller=http://10.0.19.170:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
ETCD_INITIAL_CLUSTER_STATE="new"

Enable and start etcd

systemctl start etcd
systemctl enable etcd
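
Optionally verify the single-member cluster is healthy (a sketch; etcdctl ships with the etcd package, and ETCDCTL_API=3 selects the v3 API):

ETCDCTL_API=3 etcdctl --endpoints=http://10.0.19.170:2379 member list
ETCDCTL_API=3 etcdctl --endpoints=http://10.0.19.170:2379 endpoint health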

2. Install the Keystone identity service (controller node)

2.1 Connect to the database server as root using the database client:

$ mysql -u root -p

2.2 Create the keystone database:

MariaDB [(none)]> CREATE DATABASE keystone;

2.3 Grant proper access to the keystone database:

  • The keystone database user's password is Mss123456
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
IDENTIFIED BY 'Mss123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
IDENTIFIED BY 'Mss123456';

Exit the database client.
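
Optionally confirm the grants work by logging in as the keystone user over the network (a sketch assuming the Mss123456 password set above):

# Should connect without error and list the keystone database
mysql -u keystone -pMss123456 -h controller -e 'SHOW DATABASES;'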

2.4 Install the Keystone packages

dnf install openstack-keystone httpd python3-mod_wsgi  -y

If you encounter the following dependency error:

Error: Package: python2-qpid-proton-0.26.0-2.el7.x86_64 (centos-openstack-train)
           Requires: qpid-proton-c(x86-64) = 0.26.0-2.el7
           Available: qpid-proton-c-0.14.0-2.el7.x86_64 (extras)
               qpid-proton-c(x86-64) = 0.14.0-2.el7
           Available: qpid-proton-c-0.26.0-2.el7.x86_64 (centos-openstack-train)
               qpid-proton-c(x86-64) = 0.26.0-2.el7
           Installing: qpid-proton-c-0.36.0-1.el7.x86_64 (epel)
               qpid-proton-c(x86-64) = 0.36.0-1.el7
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest

the workaround is:

dnf -y install python2-qpid-proton-0.26.0-2.el7.x86_64

then re-run the installation command above.

2.5 Edit the configuration file

  • Database password: Mss123456
vim /etc/keystone/keystone.conf
……
[database]
connection = mysql+pymysql://keystone:Mss123456@controller/keystone
……
[token]
provider = fernet
……

2.6 Populate the keystone database schema

su -s /bin/sh -c "keystone-manage db_sync" keystone

2.7 Initialize the Fernet key repositories

keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

2.8 Bootstrap the identity service

  • The admin password is Mss123456
keystone-manage bootstrap --bootstrap-password Mss123456 \
  --bootstrap-admin-url http://controller:5000/v3/ \
  --bootstrap-internal-url http://controller:5000/v3/ \
  --bootstrap-public-url http://controller:5000/v3/ \
  --bootstrap-region-id RegionOne

2.9 Configure the Apache service

vim  /etc/httpd/conf/httpd.conf
# Add
ServerName controller

# Create a symlink
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/

# Enable and start httpd
systemctl enable httpd.service
systemctl start httpd.service
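
At this point the Identity API should answer on port 5000. A quick unauthenticated check (the version document is public, so no token is required):

# Expect HTTP 200 and a JSON version document
curl -s -o /dev/null -w '%{http_code}\n' http://controller:5000/v3/
curl -s http://controller:5000/v3/ | python3 -m json.tool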

Set the administrator environment variables

export OS_USERNAME=admin
export OS_PASSWORD=Mss123456
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3

2.10 Create domains, projects, users, and roles

1) Create a domain named example

openstack domain create --description "An Example Domain" example

2) Create a project named service

openstack project create --domain default \
  --description "Service Project" service

3) Create the myproject project

openstack project create --domain default \
  --description "Demo Project" myproject

4) Create the myuser user:

  • Credentials: myuser / Mss123456
openstack user create --domain default \
  --password-prompt myuser

5) Create the myrole role:

openstack role create myrole
6) Add the myrole role to the myproject project and the myuser user:
openstack role add --project myproject --user myuser myrole

2.11 Verify the operations

1) Unset the temporary OS_AUTH_URL and OS_PASSWORD environment variables:

unset OS_AUTH_URL OS_PASSWORD
2) As the admin user, request an authentication token:
  • Use the admin password: Mss123456
openstack --os-auth-url http://controller:5000/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name admin --os-username admin token issue
3) As the myuser user created earlier, request an authentication token:
  • Use the myuser password: Mss123456
openstack --os-auth-url http://controller:5000/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name myproject --os-username myuser token issue

2.12 Create the OpenStack client environment scripts

1) Create and edit the admin-openrc file with the following content:
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=Mss123456
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
2) Create and edit the demo-openrc file with the following content:
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=myproject
export OS_USERNAME=myuser
export OS_PASSWORD=Mss123456
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
3) Verify the scripts
# Load the environment variables
. admin-openrc
# Request a token
openstack token issue


# Load the environment variables
. demo-openrc
# Request a token
openstack token issue

3. Install the Glance image service (controller node)

1) Create and configure the glance database

mysql -u root -p

MariaDB [(none)]> CREATE DATABASE glance;

MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
  IDENTIFIED BY 'Mss123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
  IDENTIFIED BY 'Mss123456';
2) Create the Keystone credentials for the Glance image service
  • Credentials: glance / Mss123456
# Load the environment variables
. admin-openrc
# Create the glance user
openstack user create --domain default --password-prompt glance
# Add the glance user to the service project with the admin role
openstack role add --project service --user glance admin
# Create a service entity named glance
openstack service create --name glance \
  --description "OpenStack Image" image

3) Create the Image API endpoints
openstack endpoint create --region RegionOne \
  image public http://controller:9292
  
openstack endpoint create --region RegionOne \
  image internal http://controller:9292
  
openstack endpoint create --region RegionOne \
  image admin http://controller:9292
4) Install the Glance packages
dnf install openstack-glance -y

Configure the API

vim /etc/glance/glance-api.conf

[database]
connection = mysql+pymysql://glance:Mss123456@controller/glance

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = Mss123456

[paste_deploy]
flavor = keystone

[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
5) Populate the Image service database
su -s /bin/sh -c "glance-manage db_sync" glance
6) Enable and start the service
systemctl enable openstack-glance-api.service
systemctl start openstack-glance-api.service
7) Verify the installation
# Download a test image
wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img

# Upload it to the Image service
glance image-create --name "cirros" \
  --file cirros-0.4.0-x86_64-disk.img \
  --disk-format qcow2 --container-format bare \
  --visibility=public

# Confirm the upload and check the image attributes
glance image-list

4. Install Placement (controller node)

1) Create the database

mysql -u root -p

MariaDB [(none)]> CREATE DATABASE placement;

MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \
  IDENTIFIED BY 'Mss123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \
  IDENTIFIED BY 'Mss123456';
2) Configure the user and endpoints
  • Credentials: placement / Mss123456
# Load the environment variables
. admin-openrc
# Create the placement user
openstack user create --domain default --password-prompt placement
# Grant the admin role
openstack role add --project service --user placement admin
# Create the Placement API service entity
openstack service create --name placement \
  --description "Placement API" placement
# Create the endpoints
openstack endpoint create --region RegionOne \
  placement public http://controller:8778
  
openstack endpoint create --region RegionOne \
  placement internal http://controller:8778

openstack endpoint create --region RegionOne \
  placement admin http://controller:8778

3) Install the packages

dnf install openstack-placement-api -y

Configure the service

vim /etc/placement/placement.conf

[placement_database]
connection = mysql+pymysql://placement:Mss123456@controller/placement

[api]
auth_strategy = keystone

[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = Mss123456
4) Populate the placement database
su -s /bin/sh -c "placement-manage db sync" placement
5) Restart the httpd service
systemctl restart httpd
6) Verify the installation
[root@controller ~]# placement-status upgrade check
+----------------------------------------------------------------------+
| Upgrade Check Results                                                |
+----------------------------------------------------------------------+
| Check: Missing Root Provider IDs                                     |
| Result: Success                                                      |
| Details: None                                                        |
+----------------------------------------------------------------------+
| Check: Incomplete Consumers                                          |
| Result: Success                                                      |
| Details: None                                                        |
+----------------------------------------------------------------------+
| Check: Policy File JSON to YAML Migration                            |
| Result: Failure                                                      |
| Details: Your policy file is JSON-formatted which is deprecated. You |
|   need to switch to YAML-formatted file. Use the                     |
|   ``oslopolicy-convert-json-to-yaml`` tool to convert the            |
|   existing JSON-formatted files to YAML in a backwards-              |
|   compatible manner: https://docs.openstack.org/oslo.policy/         |
|   latest/cli/oslopolicy-convert-json-to-yaml.html.                   |
+----------------------------------------------------------------------+

# Verification commands
pip3 install osc-placement
openstack --os-placement-api-version 1.2 resource class list --sort-column name
openstack --os-placement-api-version 1.6 trait list --sort-column name

7) At this point you may hit the following error:

Expecting value: line 1 column 1 (char 0)

Fix: add the following block inside the <VirtualHost> section of /etc/httpd/conf.d/00-placement-api.conf:

<Directory /usr/bin>
   <IfVersion >= 2.4>
      Require all granted
   </IfVersion>
   <IfVersion < 2.4>
      Order allow,deny
      Allow from all
   </IfVersion>
</Directory>

Restart httpd:

systemctl restart httpd

Verify again:

[root@controller ~]# openstack --os-placement-api-version 1.2 resource class list --sort-column name
+----------------------------------------+
| name                                   |
+----------------------------------------+
| DISK_GB                                |
...

[root@controller ~]# openstack --os-placement-api-version 1.6 trait list --sort-column name
+---------------------------------------+
| name                                  |
+---------------------------------------+
| COMPUTE_ACCELERATORS                  |
...

5. Install the Nova compute service (controller node)

1) Create the databases

mysql -u root -p
# Create the databases
MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;
# Create the user and grant privileges
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
  IDENTIFIED BY 'Mss123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
  IDENTIFIED BY 'Mss123456';

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
  IDENTIFIED BY 'Mss123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
  IDENTIFIED BY 'Mss123456';

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
  IDENTIFIED BY 'Mss123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
  IDENTIFIED BY 'Mss123456';
2) Create the Nova credentials
  • The nova user's password is Mss123456
# Load the environment variables
. admin-openrc
# Create the nova user
openstack user create --domain default --password-prompt nova
# Grant the admin role
openstack role add --project service --user nova admin
# Create the nova service entity
openstack service create --name nova \
  --description "OpenStack Compute" compute
# Create the Compute API service endpoints
openstack endpoint create --region RegionOne \
  compute public http://controller:8774/v2.1
openstack endpoint create --region RegionOne \
  compute internal http://controller:8774/v2.1
openstack endpoint create --region RegionOne \
  compute admin http://controller:8774/v2.1
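
Optionally list the endpoints just created to confirm all three interfaces are registered (assumes admin-openrc is sourced):

# Expect public, internal, and admin endpoints for the compute service
openstack endpoint list --service compute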

3) Install the packages

dnf install openstack-nova-api openstack-nova-conductor \
  openstack-nova-novncproxy openstack-nova-scheduler -y

Edit the configuration

vim /etc/nova/nova.conf

[DEFAULT]
enabled_apis = osapi_compute,metadata
my_ip = 10.0.19.170
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
transport_url = rabbit://openstack:openstack@controller:5672/

[api_database]
connection = mysql+pymysql://nova:Mss123456@controller/nova_api

[database]
connection = mysql+pymysql://nova:Mss123456@controller/nova

[api]
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = Mss123456

[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = Mss123456

log_dir=/var/log/nova

4) Populate the nova-api database

su -s /bin/sh -c "nova-manage api_db sync" nova

5) Register the cell0 database and create the cell1 cell

su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova

su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova

su -s /bin/sh -c "nova-manage db sync" nova

6) Verify that nova cell0 and cell1 are registered correctly

su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova

7) Enable and start the services

systemctl enable \
    openstack-nova-api.service \
    openstack-nova-scheduler.service \
    openstack-nova-conductor.service \
    openstack-nova-novncproxy.service
systemctl start \
    openstack-nova-api.service \
    openstack-nova-scheduler.service \
    openstack-nova-conductor.service \
    openstack-nova-novncproxy.service

5.1 Install the Nova compute service (compute nodes)

1) Install the packages

dnf install openstack-nova-compute -y

2) Edit the configuration file

vim /etc/nova/nova.conf

[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:openstack@controller
my_ip = 10.0.19.171                         # use this compute node's own IP (10.0.19.171 or 10.0.19.172)
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api]
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = Mss123456

[vnc]
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = Mss123456

log_dir=/var/log/nova
#compute_driver=libvirt.LibvirtDriver

3) Check whether the compute node supports hardware acceleration

egrep -c '(vmx|svm)' /proc/cpuinfo
If this returns 0, the node does not support hardware acceleration and you need to set the following in /etc/nova/nova.conf:
[libvirt]
virt_type = qemu

4) Enable and start the services

systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service
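
A quick local check that both services came up (a sketch; if openstack-nova-compute fails to start, its log usually explains why):

# Both should report "active"
systemctl is-active libvirtd.service openstack-nova-compute.service
# Inspect the log if nova-compute did not start
tail -n 50 /var/log/nova/nova-compute.log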

5.2 Install the Nova compute service, continued (controller node)

1) Register and discover the compute hosts

# Load the environment variables
. admin-openrc
# Confirm the compute hosts are present in the database
openstack compute service list --service nova-compute
# Discover the compute hosts
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
# Optionally enable automatic host discovery
vim /etc/nova/nova.conf

[scheduler]
discover_hosts_in_cells_interval = 300

2) Verify the services

# List the running compute service components
openstack compute service list
# List the API endpoints in the service catalog
openstack catalog list
# List images to verify connectivity with the Image service
openstack image list
# Check that the cells and the Placement API are working
nova-status upgrade check

If the command fails with the following error:

[root@controller ~]# nova-status upgrade check
Error:
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/nova/cmd/status.py", line 398, in main
    ret = fn(*fn_args, **fn_kwargs)
  File "/usr/lib/python2.7/site-packages/oslo_upgradecheck/upgradecheck.py", line 102, in check
    result = func(self)
  File "/usr/lib/python2.7/site-packages/nova/cmd/status.py", line 164, in _check_placement
    versions = self._placement_get("/")
  File "/usr/lib/python2.7/site-packages/nova/cmd/status.py", line 154, in _placement_get
    return client.get(path, raise_exc=True).json()
  File "/usr/lib/python2.7/site-packages/keystoneauth1/adapter.py", line 386, in get
    return self.request(url, 'GET', **kwargs)
  File "/usr/lib/python2.7/site-packages/keystoneauth1/adapter.py", line 248, in request
    return self.session.request(url, method, **kwargs)
  File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line 961, in request
    raise exceptions.from_response(resp, method, url)
Forbidden: Forbidden (HTTP 403)

vim /etc/httpd/conf.d/00-placement-api.conf 
 
append the following block at the end of the file:

<Directory /usr/bin>
   <IfVersion >= 2.4>
      Require all granted
   </IfVersion>
   <IfVersion < 2.4>
      Order allow,deny
      Allow from all
   </IfVersion>
</Directory>
 
Restart the httpd service:
systemctl restart httpd

6. Install the Neutron networking service (controller node)

1) Create the database

mysql -u root -p
 
MariaDB [(none)]> CREATE DATABASE neutron;
 
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
  IDENTIFIED BY 'Mss123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
  IDENTIFIED BY 'Mss123456';

2) Create the neutron user

  • The neutron user's password is Mss123456
. admin-openrc
# Create the neutron user and grant the admin role
openstack user create --domain default --password-prompt neutron
openstack role add --project service --user neutron admin
# Create the neutron service entity
openstack service create --name neutron \
  --description "OpenStack Networking" network
# Create the network API endpoints
openstack endpoint create --region RegionOne \
  network public http://controller:9696
  
openstack endpoint create --region RegionOne \
  network internal http://controller:9696  
  
openstack endpoint create --region RegionOne \
  network admin http://controller:9696

3) Install the Neutron packages (deploying the self-service network option)

dnf install openstack-neutron openstack-neutron-ml2 \
  openstack-neutron-linuxbridge ebtables -y

4) Edit the configuration file

vim /etc/neutron/neutron.conf

[database]
connection = mysql+pymysql://neutron:Mss123456@controller/neutron

[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:openstack@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = Mss123456

[nova]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = Mss123456

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

5) Edit the ML2 (layer 2) plug-in configuration

vim /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
enable_ipset = true

6) Configure the Linux bridge agent

The controller node needs a second network interface, for example:

[root@controller network-scripts]# cat /etc/sysconfig/network-scripts/ifcfg-eth1
TYPE=Ethernet
BOOTPROTO=none
NAME=eth1
DEVICE=eth1
ONBOOT=yes
IPADDR=10.0.19.173
PREFIX=24
GATEWAY=10.0.19.254
DNS1=202.96.209.5
DNS2=202.96.209.133
vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]
physical_interface_mappings = provider:eth1        # name of the second (provider) NIC

[vxlan]
enable_vxlan = true
local_ip = 10.0.19.170
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

7) Adjust kernel parameters

vim /etc/sysctl.conf

net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1

# Load the br_netfilter module
modprobe br_netfilter
# Apply the kernel parameters from the configuration file
sysctl -p
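
Optionally confirm the bridge netfilter parameters took effect (both should print 1):

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables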

8) Configure the layer 3 agent

vim /etc/neutron/l3_agent.ini

[DEFAULT]
interface_driver = linuxbridge

9) Configure the DHCP agent

vim /etc/neutron/dhcp_agent.ini

[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true

10) Configure the metadata agent

vim /etc/neutron/metadata_agent.ini

[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = Mss123456

11) Configure the Compute service to use the Networking service

vim /etc/nova/nova.conf

[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = Mss123456
service_metadata_proxy = true
metadata_proxy_shared_secret = Mss123456

12) Create a symlink

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

13) Populate the database schema

su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

14) Restart the Compute API service

systemctl restart openstack-nova-api.service

15) Enable and start the networking services

systemctl enable neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service
systemctl start neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service
systemctl enable neutron-l3-agent.service
systemctl start neutron-l3-agent.service

7. Install the Neutron networking service (compute nodes)

1) Install the components

dnf install openstack-neutron-linuxbridge ebtables ipset -y

2) Edit the configuration file

vim /etc/neutron/neutron.conf

[DEFAULT]
transport_url = rabbit://openstack:openstack@controller
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = Mss123456

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

3) Configure the Linux bridge agent

vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]
physical_interface_mappings = provider:eth0   # this node's provider NIC

[vxlan]
enable_vxlan = true
local_ip = 10.0.19.171                        # this node's own IP address
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

vim /etc/sysctl.conf

net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1

modprobe br_netfilter
sysctl -p

4) Configure the Compute service to use the Networking service

vim /etc/nova/nova.conf

[neutron]
# ...
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = Mss123456

5) Enable and start the services

systemctl restart openstack-nova-compute.service
systemctl enable neutron-linuxbridge-agent.service
systemctl start neutron-linuxbridge-agent.service

6) Verify (run on the controller node)

. admin-openrc
# Verify that neutron-server started successfully
openstack extension list --network
# List the agents to verify that the network agents started successfully
openstack network agent list
# If your compute nodes appear in the output below, the setup succeeded; if not, check /var/log/neutron/linuxbridge-agent.log
[root@controller ~]# openstack network agent list
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host       | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| 244101b3-8905-4999-b415-151fc67cb875 | Linux bridge agent | controller | None              | :-)   | UP    | neutron-linuxbridge-agent |
| 2c1b081a-b132-4091-9b32-519eb3e52629 | Metadata agent     | controller | None              | :-)   | UP    | neutron-metadata-agent    |
| 54dacfca-8aa9-4bcf-a771-88468b872801 | DHCP agent         | controller | nova              | :-)   | UP    | neutron-dhcp-agent        |
| 5cb00320-c954-40a2-adbd-1dfd3c9ed04d | Linux bridge agent | compute-1  | None              | :-)   | UP    | neutron-linuxbridge-agent |
| b16c5b44-f98a-443f-8393-9d62ba69d279 | L3 agent           | controller | nova              | :-)   | UP    | neutron-l3-agent          |
| fcb6fb5f-c509-473f-b636-af99d7bbc34d | Linux bridge agent | compute-2  | None              | :-)   | UP    | neutron-linuxbridge-agent |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+

Note
If the command above does not detect the compute nodes' network agents and the log shows:

CRITICAL neutron [-] Unhandled error: oslo_privsep.daemon.FailedToDropPrivileges: privsep helper command exited non-zero (1)

the fix is a simple SELinux boolean change followed by a daemon restart:

setsebool os_neutron_dac_override on
systemctl restart neutron-linuxbridge-agent.service

8. Install the Horizon dashboard service (controller node)

1) Install the components

dnf install openstack-dashboard -y

2) Edit the configuration

vim /etc/openstack-dashboard/local_settings

# Point the dashboard at the OpenStack services on the controller node
OPENSTACK_HOST = "controller"
# Allow all hosts to access the dashboard
ALLOWED_HOSTS = ['*']

# Configure the memcached session storage
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

CACHES = {
    'default': {
         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
         'LOCATION': 'controller:11211',
    }
}
# Enable the Identity API version 3
OPENSTACK_KEYSTONE_URL = "http://%s:5000/identity/v3" % OPENSTACK_HOST

# Enable multi-domain support
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

# Configure the API versions
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 3,
}

# Configure Default as the default domain
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"

# Configure user as the default role
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

# Configure the time zone
TIME_ZONE = "Asia/Shanghai"


OPENSTACK_NEUTRON_NETWORK = {
    'enable_router': False,
    'enable_quotas': False,
    'enable_distributed_router': False,
    'enable_ha_router': False,
    'enable_lb': False,
    'enable_firewall': False,
    'enable_vpn': False,
    'enable_fip_topology_check': False,
}

# This line must be added, otherwise the dashboard page may fail to load
WEBROOT = '/dashboard/'

3) Edit the httpd configuration

vim /etc/httpd/conf.d/openstack-dashboard.conf

# Add
WSGIApplicationGroup %{GLOBAL}

4) Restart the httpd and memcached services

systemctl restart httpd.service memcached.service
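
Before opening a browser, you can optionally check from the command line that the dashboard URL is being served (a 200 response or a redirect to the login page both indicate the WSGI application is working):

# Expect 200 or 302 rather than 403/500
curl -s -o /dev/null -w '%{http_code}\n' http://10.0.19.170/dashboard/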

5) Log in at http://10.0.19.170/dashboard/

If you get a 403 error, check the log /var/log/httpd/error_log:

client denied by server configuration: /usr/share/openstack-dashboard/openstack_dashboard/wsgi

Because this release does not ship /usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi, create the file manually, following what older releases provided:

[root@controller wsgi]# cat django.wsgi 
# Copyright (c) 2017 OpenStack Foundation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""
WSGI config for openstack_dashboard project.
"""

import logging
import os
import sys

from django.core.wsgi import get_wsgi_application

# Add this file path to sys.path in order to import settings
sys.path.insert(0, os.path.normpath(os.path.join(
    os.path.dirname(os.path.realpath(__file__)), '../..')))
os.environ['DJANGO_SETTINGS_MODULE'] = 'openstack_dashboard.settings'
sys.stdout = sys.stderr

logging.warning(
    "Use of this 'djano.wsgi' file has been deprecated since the Rocky "
    "release in favor of 'wsgi.py' in the 'openstack_dashboard' module. This "
    "file is a legacy naming from before Django 1.4 and an importable "
    "'wsgi.py' is now the default. This file will be removed in the T release "
    "cycle."
)

application = get_wsgi_application()

Then restart the services and log in again: systemctl restart httpd.service memcached.service

9. Install the Cinder block storage service (controller node)

1) Create the database

mysql -u root -p

MariaDB [(none)]> CREATE DATABASE cinder;

MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
  IDENTIFIED BY 'Mss123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
  IDENTIFIED BY 'Mss123456';

2) Configure the user and endpoints

  • The cinder user's password is Mss123456
. admin-openrc
# Create the cinder user
openstack user create --domain default --password-prompt cinder
# Add the admin role to the cinder user
openstack role add --project service --user cinder admin

# Create the cinderv2 and cinderv3 service entities
openstack service create --name cinderv2 \
  --description "OpenStack Block Storage" volumev2
  
openstack service create --name cinderv3 \
  --description "OpenStack Block Storage" volumev3  

# Create the Block Storage API endpoints
openstack endpoint create --region RegionOne \
  volumev2 public http://controller:8776/v2/%\(project_id\)s

openstack endpoint create --region RegionOne \
  volumev2 internal http://controller:8776/v2/%\(project_id\)s

openstack endpoint create --region RegionOne \
  volumev2 admin http://controller:8776/v2/%\(project_id\)s
  
openstack endpoint create --region RegionOne \
  volumev3 public http://controller:8776/v3/%\(project_id\)s

openstack endpoint create --region RegionOne \
  volumev3 internal http://controller:8776/v3/%\(project_id\)s

openstack endpoint create --region RegionOne \
  volumev3 admin http://controller:8776/v3/%\(project_id\)s

3) Install and configure the software

dnf install openstack-cinder -y


vim /etc/cinder/cinder.conf

[database]
connection = mysql+pymysql://cinder:Mss123456@controller/cinder

[DEFAULT]
transport_url = rabbit://openstack:openstack@controller
auth_strategy = keystone
my_ip = 10.0.19.170

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = Mss123456

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

4) Populate the database

su -s /bin/sh -c "cinder-manage db sync" cinder

5) Configure Compute to use Block Storage

vim /etc/nova/nova.conf

[cinder]
os_region_name = RegionOne

6) Enable and start the services

# Restart the Compute API service
systemctl restart openstack-nova-api.service
# Enable and start the Cinder services
systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service

10. Install and configure the storage node (storage node)

  • The storage node would normally be a separate host; due to resource constraints it is installed on the controller node here.

1) Install the software

dnf install lvm2 device-mapper-persistent-data -y

systemctl enable lvm2-lvmetad.service
systemctl start lvm2-lvmetad.service

2) Create the LVM physical volume on /dev/sdb

pvcreate /dev/sdb

3) Create the LVM volume group cinder-volumes

vgcreate cinder-volumes /dev/sdb
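
Optionally confirm the physical volume and volume group exist before continuing (standard LVM reporting commands):

# /dev/sdb should be listed as a PV belonging to the cinder-volumes VG
pvs
vgs cinder-volumes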

4) Configure LVM

  • If the system already uses LVM for other disks, add those devices to the filter as well.
vim /etc/lvm/lvm.conf

devices {
...
filter = [ "a/sdb/", "r/.*/"]

5) Install the packages

dnf install openstack-cinder targetcli
# openstack-cinder was already installed above on this host; if the storage node is separate from the controller, also run: dnf install openstack-cinder -y

6) Configure the service

vim /etc/cinder/cinder.conf

[database]
connection = mysql+pymysql://cinder:Mss123456@controller/cinder

[DEFAULT]
transport_url = rabbit://openstack:openstack@controller
auth_strategy = keystone
my_ip = 10.0.19.170
enabled_backends = lvm
glance_api_servers = http://controller:9292

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = Mss123456

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
target_protocol = iscsi
target_helper = lioadm

[oslo_concurrency]
# ...
lock_path = /var/lib/cinder/tmp

7) Enable and start the services

systemctl enable openstack-cinder-volume.service target.service
systemctl start openstack-cinder-volume.service target.service
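
From the controller, an optional check that the scheduler and the lvm volume backend registered correctly (assumes admin-openrc is sourced):

. admin-openrc
# Expect cinder-scheduler on controller and cinder-volume on controller@lvm, both up
openstack volume service list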

8) Install and configure the backup service (optional)

# Already installed on this host; if the backup service runs on a separate node, install it with: dnf install openstack-cinder -y
dnf install openstack-cinder -y

Configure

vim /etc/cinder/cinder.conf

[DEFAULT]
# ...
backup_driver = cinder.backup.drivers.swift.SwiftBackupDriver
backup_swift_url = SWIFT_URL

# Replace SWIFT_URL with the URL of the Object Storage service; it can be found with:
openstack catalog show object-store

Enable and start the service

systemctl enable openstack-cinder-backup.service
systemctl start openstack-cinder-backup.service
