A Detailed Guide to Deploying the Community OpenStack Queens Release (with per-node component lists appended)
I. Deployment Environment
Operating system:
CentOS 7
Kernel version:
[root@controller ~]# uname -m
x86_64
[root@controller ~]# uname -r
3.10.0-693.21.1.el7.x86_64
Node layout and NIC configuration
controller node
[root@controller ~]# ip a
1: lo:
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
compute node
[root@compute ~]# ip a
1: lo:
Cinder storage node
1: lo:
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
Note: this deployment builds a community OpenStack Queens environment by hand on three physical nodes.
II. OpenStack Overview
OpenStack is an open-source cloud computing platform that supports all types of cloud environments. The project aims for simplicity, massive scalability, and a rich feature set.
OpenStack provides an Infrastructure-as-a-Service (IaaS) solution through a set of complementary services. Each service exposes an application programming interface (API) to facilitate this integration.
This article walks through deploying the major OpenStack services using a functional example architecture suitable for new OpenStack users with sufficient Linux experience. It is a minimal environment intended for learning only.
III. OpenStack Architecture Overview
1. Conceptual architecture
The following diagram shows the relationships among the OpenStack services:
2. Logical architecture
The following diagram shows the most common, but not the only possible, architecture for an OpenStack cloud:
To design, deploy, and configure OpenStack, you must understand the logical architecture.
As shown in the conceptual architecture, OpenStack consists of several independent parts called OpenStack services. All services authenticate through the Keystone identity service.
Individual services interact with each other through public APIs, except where privileged administrator commands are required.
Internally, each OpenStack service is composed of several processes. All services have at least one API process that listens for API requests, preprocesses them, and passes them on to other parts of the service. With the exception of the Identity service, the actual work is done by separate processes.
For communication between the processes of one service, an AMQP message broker is used. The service's state is stored in a database. When deploying and configuring an OpenStack cloud, you can choose among several message broker and database solutions, such as RabbitMQ, MySQL, MariaDB, and SQLite.
Users can access OpenStack through the web-based user interface implemented by the Horizon dashboard, through command-line clients, and by issuing API requests with tools such as browser plug-ins or curl. For applications, several SDKs are available. Ultimately, all of these access methods issue REST API calls to the various OpenStack services.
IV. Deploying the OpenStack Component Services
Deployment prerequisites (run the following commands on all nodes)
1. Configure the node NIC IPs (omitted; a minimal example sketch follows)
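For illustration only, a static-IP NIC configuration on CentOS 7 might look like the sketch below; the interface name ens33, the addresses, and the gateway are placeholder assumptions and must be replaced with your own values.
cat > /etc/sysconfig/network-scripts/ifcfg-ens33 <<'EOF'
TYPE=Ethernet
BOOTPROTO=static          # static addressing instead of DHCP
NAME=ens33                # assumed interface name
DEVICE=ens33
ONBOOT=yes                # bring the interface up at boot
IPADDR=10.71.11.12        # e.g. the controller management IP used in this guide
NETMASK=255.255.255.0
GATEWAY=10.71.11.254      # assumed gateway
DNS1=114.114.114.114
EOF
systemctl restart network   # apply the new configuration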
2. Set the hostname
hostnamectl set-hostname <hostname>
bash ## make the setting take effect immediately
3. Configure name resolution: edit /etc/hosts and add the following entries
10.71.11.12 controller
10.71.11.13 compute
10.71.11.14 cinder
4. Verify network connectivity
Run on the controller node:
[root@controller ~]# ping -c 4 openstack.org
PING openstack.org (162.242.140.107) 56(84) bytes of data.
Run on the compute node:
[root@compute ~]# ping -c 4 openstack.org
PING openstack.org (162.242.140.107) 56(84) bytes of data.
5. Configure the Aliyun yum repository
Back up the original repo file:
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
Download the Aliyun repo file:
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
or:
curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
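Optionally, refresh the yum metadata cache after switching repositories:
yum clean all        # drop cached metadata from the old repositories
yum makecache fast   # rebuild the cache from the Aliyun mirrors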
6. Install the NTP time service (all nodes)
## controller node ##
Install the package:
yum install chrony -y
Edit /etc/chrony.conf to configure the controller as the time synchronization server:
server controller iburst ## all nodes synchronize time from the controller node
allow 10.71.11.0/24 ## subnet allowed to synchronize time
Set the NTP service to start at boot:
systemctl enable chronyd.service
systemctl start chronyd.service
Other nodes
Install the package:
yum install chrony -y
Configure all other nodes to synchronize time from the controller:
vi /etc/chrony.conf
server controller iburst
Restart the NTP service (omitted)
Verify the time synchronization service
Run on the controller node:
[root@controller ~]# chronyc sources
210 Number of sources = 4
MS Name/IP address Stratum Poll Reach LastRx Last sample
^* time4.aliyun.com 2 10 377 1015 +115us[ +142us] +/- 14ms
The MS column should show an asterisk (*) for the server that NTP is currently synchronized with.
Run on the other nodes:
[root@compute ~]# chronyc sources
210 Number of sources = 4
MS Name/IP address Stratum Poll Reach LastRx Last sample
^* leontp.ccgs.wa.edu.au 1 10 377 752
[root@cinder ~]# chronyc sources
210 Number of sources = 4
MS Name/IP address Stratum Poll Reach LastRx Last sample
^+ 61-216-153-104.HINET-IP.> 3 10 377 748 -3373us[-
Note: clock drift is a common problem in day-to-day operations and can cause split-brain in clustered services.
Installing and configuring the OpenStack services
Note: unless stated otherwise, run the following on all nodes.
1. Download and install the OpenStack (Queens) package repository
yum install centos-release-openstack-queens -y
2. Upgrade the packages on all nodes
yum upgrade
3. Install the OpenStack client (controller and compute nodes)
yum install python-openstackclient -y
4. Install openstack-selinux
yum install openstack-selinux -y
Install the database (controller node)
Most OpenStack services use an SQL database to store information; the database usually runs on the controller node. This guide uses MariaDB or MySQL.
Install the packages:
yum install mariadb mariadb-server python2-PyMySQL -y
Edit /etc/my.cnf.d/mariadb-server.cnf and make the following changes
[root@controller ~]# vim /etc/my.cnf.d/mariadb-server.cnf
#
# These groups are read by MariaDB server.
[server]
# this is only for the mysqld standalone daemon
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
log-error=/var/log/mariadb/mariadb.log
pid-file=/var/run/mariadb/mariadb.pid
bind-address = 192.168.10.102
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
Note: bind-address should be the management IP of the controller node.
Set the service to start at boot:
systemctl enable mariadb.service
systemctl start mariadb.service
Secure the database service by running the mysql_secure_installation script (password: 123456).
[root@controller ~]# mysql_secure_installation
Thanks for using MariaDB!
Install and configure RabbitMQ on the controller node
1. Install the message queue component:
yum install rabbitmq-server -y
2. Set the service to start at boot:
systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service
3. Add the openstack user (password: openstack):
rabbitmqctl add_user openstack openstack
4. Grant the openstack user configure, write, and read permissions:
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
5. Enable the RabbitMQ web management plugin (controller node, optional)
/usr/lib/rabbitmq/bin/rabbitmq-plugins list ## check which plugins are installed
/usr/lib/rabbitmq/bin/rabbitmq-plugins enable rabbitmq_management ## enable the rabbitmq_management plugin
systemctl restart rabbitmq-server.service
The web management UI is then available at http://192.168.0.17:15672 (the controller IP on port 15672).
If you cannot log in, give the openstack user the administrator tag:
rabbitmqctl set_user_tags openstack administrator
rabbitmqctl list_users ## check the user tags
Install the Memcached caching service (controller node)
Note: the Identity service authentication mechanism uses Memcached to cache tokens. The memcached service usually runs on the controller node. For production deployments, we recommend protecting it with a combination of firewalling, authentication, and encryption.
1. Install the packages:
yum install memcached python-memcached -y
2. Edit /etc/sysconfig/memcached
vim /etc/sysconfig/memcached
OPTIONS="-l 10.71.11.12,::1,controller"
3. Set the service to start at boot:
systemctl enable memcached.service
systemctl start memcached.service
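Optionally, confirm that memcached is listening on the configured addresses and port 11211:
ss -tnlp | grep 11211        # should show memcached bound to 10.71.11.12:11211 and ::1:11211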
Install the Etcd service (controller node)
1. Install the package:
yum install etcd -y
2. Edit /etc/etcd/etcd.conf
vim /etc/etcd/etcd.conf
Modify the following options (the complete [Member] and [Clustering] sections are shown below): ETCD_INITIAL_CLUSTER, ETCD_INITIAL_ADVERTISE_PEER_URLS, ETCD_ADVERTISE_CLIENT_URLS, ETCD_LISTEN_CLIENT_URLS
#[Member]
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://10.71.11.12:2380"
ETCD_LISTEN_CLIENT_URLS="http://10.71.11.12:2379"
ETCD_NAME="controller"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.71.11.12:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://10.71.11.12:2379"
ETCD_INITIAL_CLUSTER="controller=http://10.71.11.12:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
ETCD_INITIAL_CLUSTER_STATE="new"
3. Set the service to start at boot:
systemctl enable etcd
systemctl start etcd
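Optionally, verify that the single-member etcd cluster is healthy (the endpoint assumes the controller management IP 10.71.11.12):
etcdctl --endpoints=http://10.71.11.12:2379 cluster-health   # should report the member as healthy
etcdctl --endpoints=http://10.71.11.12:2379 member list      # should list the 'controller' member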
Install the Keystone identity component (controller node)
1. Create the keystone database and grant access
mysql -u root -p
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'keystone';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '123456';
2. Install and configure the components
yum install openstack-keystone httpd mod_wsgi -y
3. Edit /etc/keystone/keystone.conf
[database]    # around line 737
connection = mysql+pymysql://keystone:123456@controller/keystone
[token]    # around line 2878
provider = fernet
4. Populate the keystone database
su -s /bin/sh -c "keystone-manage db_sync" keystone
5. Initialize the Fernet key repositories
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
6. Bootstrap the identity service
keystone-manage bootstrap --bootstrap-password 123456 --bootstrap-admin-url http://controller:35357/v3/ --bootstrap-internal-url http://controller:5000/v3/ --bootstrap-public-url http://controller:5000/v3/ --bootstrap-region-id RegionOne
Configure the Apache HTTP server
1. Edit /etc/httpd/conf/httpd.conf and set the ServerName option:
ServerName controller
2. Create a link to the /usr/share/keystone/wsgi-keystone.conf file:
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
3. Set the service to start at boot:
systemctl enable httpd.service
systemctl restart httpd.service
Starting the service fails:
[root@controller ~]# systemctl start httpd.service
After investigation, the failure is caused by SELinux.
Workaround: disable SELinux
[root@controller ~]# vi /etc/selinux/config
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
SELINUXTYPE=targeted
Restart the service again and the error is gone:
[root@controller ~]# systemctl enable httpd.service;systemctl start httpd.service
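Note: editing /etc/selinux/config only takes effect after a reboot; to switch SELinux to permissive mode immediately, you can also run:
setenforce 0        # switch SELinux to permissive for the running system
getenforce          # should now report Permissive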
4. Configure the administrative account environment variables
export OS_USERNAME=admin
export OS_PASSWORD=123456
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
Create a domain, projects, users, and roles
1. Create a domain
openstack domain create --description "Domain" example
[root@controller ~]# openstack domain create --description "Domain" example
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Domain |
| enabled | True |
| id | 199658b1d0234c3cb8785c944aa05780 |
| name | example |
| tags | [] |
+-------------+----------------------------------+
2. Create the service project
openstack project create --domain default --description "Service Project" service
[root@controller ~]# openstack project create --domain default --description "Service Project" service
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Service Project                  |
| domain_id   | default                          |
| enabled     | True                             |
| id          | 03e700ff43e44b29b97365bac6c7d723 |
| is_domain   | False                            |
| name        | service                          |
| parent_id   | default                          |
| tags        | []                               |
+-------------+----------------------------------+
3. Create the demo project
openstack project create --domain default --description "Demo Project" demo
[root@controller ~]# openstack project create --domain default --description "Demo Project" demo
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Demo Project |
| domain_id | default |
| enabled | True |
| id | 61f8c9005ca84477b5bdbf485be1a546 |
| is_domain | False |
| name | demo |
| parent_id | default |
| tags | [] |
+-------------+----------------------------------+
4. Create the demo user (password: demo)
openstack user create --domain default --password-prompt demo
[root@controller ~]# openstack user create --domain default --password-prompt demo
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | fa794c034a53472c827a94e6a6ad12c1 |
| name | demo |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
5. Create the user role
openstack role create user
[root@controller ~]# openstack role create user
+-----------+----------------------------------+
| Field | Value |
+-----------+----------------------------------+
| domain_id | None |
| id | 15ea413279a74770b79630b75932a596 |
| name | user |
+-----------+----------------------------------+
6. Add the user role to the demo project and user
openstack role add --project demo --user demo user
Note: this command produces no output when it succeeds.
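Optionally, double-check the assignment with the role assignment list command (the --names flag is available in recent python-openstackclient releases):
openstack role assignment list --project demo --user demo --names   # should show the 'user' role on project demo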
Verify the identity service operation
1. Unset the temporary environment variables
unset OS_AUTH_URL OS_PASSWORD
2. Request an authentication token as the admin user (password: 123456)
[root@controller ~]# unset OS_AUTH_URL OS_PASSWORD
[root@controller ~]# openstack --os-auth-url http://controller:35357/v3 \
--os-project-domain-name Default --os-user-domain-name Default \
--os-project-name admin --os-username admin token issue
Password:
(a table containing the issued token, its expiry, project_id, and user_id is returned)
3. Request an authentication token as the demo user (password: demo)
[root@controller ~]# openstack --os-auth-url http://controller:5000/v3 \
--os-project-domain-name Default --os-user-domain-name Default \
--os-project-name demo --os-username demo token issue
Password:
(a similar table with the demo user's token is returned)
Create the OpenStack client environment scripts
1. Create the admin-openrc script (vim admin-openrc)
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=123456
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
2. Create the demo-openrc script (vim demo-openrc)
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=demo
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
3. Use the scripts: grant them execute permission if desired, source one of them (for example . admin-openrc), and request a token to verify:
[root@controller ~]# openstack token issue
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires | 2018-04-01T08:17:29+0000 |
| id | gAAAAABawIeJ0z-3R2ltY6ublCGqZX80AIi4tQUxqEpw0xvPsFP9BLV8ALNsB2B7bsVivGB14KvhUncdoRl_G2ng5BtzVKAfzHyB-OxwiXeqAttkpQsuLCDKRHd3l-K6wRdaDqfNm-D1QjhtFoxHOTotOcjtujBHF12uP49TjJtl1Rrd6uVDk0g |
| project_id | 4205b649750d4ea68ff5bea73de0faae |
| user_id | 475b31138acc4cc5bb42ca64af418963 |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
Install the Glance image service (controller node)
1. Create the glance database and grant access
mysql -u root -p
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'glance';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY '123456';
2. Source the admin credentials and create the service credentials
. admin-openrc
Create the glance user (password: 123456):
[root@controller ~]# openstack user create --domain default --password-prompt glance
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | dd2363d365624c998dfd788b13e1282b |
| name | glance |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
Add the admin role to the glance user and the service project:
openstack role add --project service --user glance admin
Note: this command produces no output.
Create the glance service entity:
[root@controller ~]# openstack service create --name glance --description "OpenStack Image" image
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Image |
| enabled | True |
| id | 5927e22c745449869ff75b193ed7d7c6 |
| name | glance |
| type | image |
+-------------+----------------------------------+
3. Create the Image service API endpoints
[root@controller ~]# openstack endpoint create --region RegionOne image public http://controller:9292
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 0822449bf80f4f6897be5e3240b6bfcc |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 5927e22c745449869ff75b193ed7d7c6 |
| service_name | glance |
| service_type | image |
| url | http://controller:9292 |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne image internal http://controller:9292
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | f18ae583441b4d118526571cdc204d8a |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 5927e22c745449869ff75b193ed7d7c6 |
| service_name | glance |
| service_type | image |
| url | http://controller:9292 |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne image admin http://controller:9292
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 79eadf7829274b1b9beb2bfb6be91992 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 5927e22c745449869ff75b193ed7d7c6 |
| service_name | glance |
| service_type | image |
| url | http://controller:9292 |
+--------------+----------------------------------+
Install and configure the Glance components
1. Install the package:
yum install openstack-glance -y
2. Edit /etc/glance/glance-api.conf
[database]    # around line 1924
connection = mysql+pymysql://glance:glance@controller/glance
[keystone_authtoken]    # around line 3472
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = 123456
[paste_deploy]
flavor = keystone
[glance_store]    # around line 2039
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
3. Edit /etc/glance/glance-registry.conf
[database]    # around line 1170
connection = mysql+pymysql://glance:glance@controller/glance
[keystone_authtoken]    # around line 1285
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = 123456
[paste_deploy]    # around line 2272
flavor = keystone
4. Populate the Image service database
su -s /bin/sh -c "glance-manage db_sync" glance
Set the Image services to start at boot:
systemctl enable openstack-glance-api.service openstack-glance-registry.service
systemctl start openstack-glance-api.service openstack-glance-registry.service
Verify the Image service operation
Use CirrOS, a small Linux image that helps you test your OpenStack deployment, to verify the operation of the Image service.
For more information about how to download and build images, see the OpenStack Virtual Machine Image Guide: https://docs.openstack.org/image-guide/
For information about how to manage images, see the OpenStack End User Guide: https://docs.openstack.org/queens/user/
1. Source the admin credentials and download the image
. admin-openrc
wget http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img
2. Upload the image
Upload the image to the Image service using the QCOW2 disk format, the bare container format, and public visibility so that all projects can access it:
[root@controller ~]# openstack image create "cirros" --file cirros-0.3.5-x86_64-disk.img --disk-format qcow2 --container-format bare --public
(a table with the new image's properties is returned)
3. Check the uploaded image
[root@controller ~]# openstack image list
+--------------------------------------+--------+--------+
| ID | Name | Status |
+--------------------------------------+--------+--------+
| 916faa2b-e292-46e0-bfe4-0f535069a1a0 | cirros | active |
+--------------------------------------+--------+--------+
Note: for the full set of Glance configuration options, see https://docs.openstack.org/glance/queens/configuration/index.html
Install and configure the Compute service on the controller node
1. Create the nova_api, nova, and nova_cell0 databases
mysql -u root -p
CREATE DATABASE nova_api;
CREATE DATABASE nova;
CREATE DATABASE nova_cell0;
Grant access to the databases:
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'controller' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'controller' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'controller' IDENTIFIED BY 'nova';
2. Create the nova user (password: 123456)
[root@controller ~]# . admin-openrc
[root@controller ~]# openstack user create --domain default --password-prompt nova
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | 8e72103f5cc645669870a630ffb25065 |
| name | nova |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
3. Add the admin role to the nova user
openstack role add --project service --user nova admin
4. Create the nova service entity
[root@controller ~]# openstack service create --name nova --description "OpenStack Compute" compute
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Compute |
| enabled | True |
| id | 9f8f8d8cb8e542b09694bee6016cc67c |
| name | nova |
| type | compute |
+-------------+----------------------------------+
5. Create the Compute API service endpoints
[root@controller ~]# openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | cf260d5a56344c728840e2696f44f9bc |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 9f8f8d8cb8e542b09694bee6016cc67c |
| service_name | nova |
| service_type | compute |
| url | http://controller:8774/v2.1 |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | f308f29a78e04b888c7418e78c3d6a6d |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 9f8f8d8cb8e542b09694bee6016cc67c |
| service_name | nova |
| service_type | compute |
| url | http://controller:8774/v2.1 |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 022d96fa78de4b73b6212c09f13d05be |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 9f8f8d8cb8e542b09694bee6016cc67c |
| service_name | nova |
| service_type | compute |
| url | http://controller:8774/v2.1 |
+--------------+----------------------------------+
Create a placement service user (password: 123456)
[root@controller ~]# openstack user create --domain default --password-prompt placement
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | fa239565fef14492ba18a649deaa6f3c |
| name | placement |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
6. Add the placement user to the service project with the admin role
openstack role add --project service --user placement admin
7. Create the Placement API service entry in the service catalog
[root@controller ~]# openstack service create --name placement --description "Placement API" placement
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Placement API |
| enabled | True |
| id | 32bb1968c08747ccb14f6e4a20cd509e |
| name | placement |
| type | placement |
+-------------+----------------------------------+
8. Create the Placement API service endpoints
[root@controller ~]# openstack endpoint create --region RegionOne placement public http://controller:8778
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | b856962188484f4ba6fad500b26b00ee |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 32bb1968c08747ccb14f6e4a20cd509e |
| service_name | placement |
| service_type | placement |
| url | http://controller:8778 |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne placement internal http://controller:8778
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 62e5a3d82a994f048a8bb8ddd1adc959 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 32bb1968c08747ccb14f6e4a20cd509e |
| service_name | placement |
| service_type | placement |
| url | http://controller:8778 |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne placement admin http://controller:8778
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | f12f81ff7b72416aa5d035b8b8cc2605 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 32bb1968c08747ccb14f6e4a20cd509e |
| service_name | placement |
| service_type | placement |
| url | http://controller:8778 |
+--------------+----------------------------------+
Install and configure the Compute components
1. Install the packages:
yum install openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler openstack-nova-placement-api
2. Edit /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:openstack@controller
my_ip = 10.71.11.12
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api_database]
connection = mysql+pymysql://nova:nova@controller/nova_api
[database]
connection = mysql+pymysql://nova:nova@controller/nova
[api]
auth_strategy = keystone
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = 123456
[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
[glance]
api_servers = http://controller:9292
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = 123456
3. Due to a bug in the package, add the following to /etc/httpd/conf.d/00-nova-placement-api.conf:
<Directory /usr/bin>
   <IfVersion >= 2.4>
      Require all granted
   </IfVersion>
   <IfVersion < 2.4>
      Order allow,deny
      Allow from all
   </IfVersion>
</Directory>
4. Restart the httpd service
systemctl restart httpd
5. Populate the nova-api database
su -s /bin/sh -c "nova-manage api_db sync" nova
Populating the database fails:
[root@controller ~]# su -s /bin/sh -c "nova-manage api_db sync" nova
Traceback (most recent call last):
File "/usr/bin/nova-manage", line 10, in
sys.exit(main())
File "/usr/lib/python2.7/site-packages/nova/cmd/manage.py", line 1597, in main
config.parse_args(sys.argv)
File "/usr/lib/python2.7/site-packages/nova/config.py", line 52, in parse_args
default_config_files=default_config_files)
File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 2502, in call
else sys.argv[1:])
File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 3166, in _parse_cli_opts
return self._parse_config_files()
File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 3183, in _parse_config_files
ConfigParser._parse_file(config_file, namespace)
File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 1950, in _parse_file
raise ConfigFileParseError(pe.filename, str(pe))
oslo_config.cfg.ConfigFileParseError: Failed to parse /etc/nova/nova.conf: at /etc/nova/nova.conf:8, No ':' or '=' found in assignment: '/etc/nova/nova.conf'
Based on the error, comment out line 8 of /etc/nova/nova.conf; this resolves the failure.
[root@controller ~]# su -s /bin/sh -c "nova-manage api_db sync" nova
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
exception.NotSupportedWarning
6. Register the cell0 database
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
exception.NotSupportedWarning
7. Create the cell1 cell
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
exception.NotSupportedWarning
6c689e8c-3e13-4e6d-974c-c2e4e22e510b
8. Populate the nova database
[root@controller ~]# su -s /bin/sh -c "nova-manage db sync" nova
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
exception.NotSupportedWarning
/usr/lib/python2.7/site-packages/pymysql/cursors.py:165: Warning: (1831, u'Duplicate index block_device_mapping_instance_uuid_virtual_name_device_name_idx
. This is deprecated and will be disallowed in a future release.')
result = self._query(query)
/usr/lib/python2.7/site-packages/pymysql/cursors.py:165: Warning: (1831, u'Duplicate index uniq_instances0uuid
. This is deprecated and will be disallowed in a future release.')
result = self._query(query)
9. Verify that the nova cell0 and cell1 cells are registered correctly
[root@controller ~]# nova-manage cell_v2 list_cells
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
exception.NotSupportedWarning
+-------+--------------------------------------+------------------------------------+-------------------------------------------------+
| Name | UUID | Transport URL | Database Connection |
+-------+--------------------------------------+------------------------------------+-------------------------------------------------+
| cell0 | 00000000-0000-0000-0000-000000000000 | none:/ | mysql+pymysql://nova:****@controller/nova_cell0 |
| cell1 | 6c689e8c-3e13-4e6d-974c-c2e4e22e510b | rabbit://openstack:****@controller | mysql+pymysql://nova:****@controller/nova |
+-------+--------------------------------------+------------------------------------+-------------------------------------------------+
10. Set the services to start at boot:
systemctl enable openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl restart openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
Install and configure the Compute service on the compute node
1. Install the package:
yum install openstack-nova-compute -y
2. Edit /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:123456@controller
my_ip = 10.71.11.13
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api]
auth_strategy = keystone
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = 123456
[vnc]
enabled = True
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html
[glance]
api_servers = http://controller:9292
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = 123456
3. Set the services to start at boot:
systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service
Note: if the nova-compute service fails to start, check /var/log/nova/nova-compute.log. You may see messages such as:
2018-04-01 12:03:43.362 18612 INFO os_vif [-] Loaded VIF plugins: ovs, linux_bridge
2018-04-01 12:03:43.431 18612 WARNING oslo_config.cfg [-]
The error "AMQP server on controller:5672 is unreachable" likely indicates that the firewall on the controller node is blocking access to port 5672. Open port 5672 on the controller node and then restart the nova-compute service on the compute node.
Flush the firewall rules on the controller:
[root@controller ~]# iptables -F
[root@controller ~]# iptables -X
[root@controller ~]# iptables -Z
Restarting the compute service now succeeds.
4. Add the compute node to the cell database (controller node)
Check how many compute nodes are registered in the database:
[root@controller ~]# . admin-openrc
[root@controller ~]# openstack compute service list --service nova-compute
+----+--------------+---------+------+---------+-------+----------------------------+
| ID | Binary | Host | Zone | Status | State | Updated At |
+----+--------------+---------+------+---------+-------+----------------------------+
| 8 | nova-compute | compute | nova | enabled | up | 2018-04-01T22:24:14.000000 |
+----+--------------+---------+------+---------+-------+----------------------------+
5. Discover the compute nodes
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
exception.NotSupportedWarning
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting compute nodes from cell 'cell1': 6c689e8c-3e13-4e6d-974c-c2e4e22e510b
Found 1 unmapped computes in cell: 6c689e8c-3e13-4e6d-974c-c2e4e22e510b
Checking host mapping for compute host 'compute': 32861a0d-894e-4af9-a57c-27662d27e6bd
Creating host mapping for compute host 'compute': 32861a0d-894e-4af9-a57c-27662d27e6b
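If you add more compute nodes later, run the discover_hosts command again. Alternatively, you can set a discovery interval in the [scheduler] section of /etc/nova/nova.conf on the controller so that new hosts are registered automatically (300 seconds is only an example value):
[scheduler]
discover_hosts_in_cells_interval = 300   # periodically discover new compute hosts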
Verify the Compute service operation on the controller node
1. List the service components
[root@controller ~]# . admin-openrc
[root@controller ~]# openstack compute service list
+----+------------------+----------------+----------+---------+-------+----------------------------+
| ID | Binary           | Host           | Zone     | Status  | State | Updated At                 |
+----+------------------+----------------+----------+---------+-------+----------------------------+
|  1 | nova-consoleauth | controller     | internal | enabled | up    | 2018-04-01T22:25:29.000000 |
|  2 | nova-conductor   | controller     | internal | enabled | up    | 2018-04-01T22:25:33.000000 |
|  3 | nova-scheduler   | controller     | internal | enabled | up    | 2018-04-01T22:25:30.000000 |
|  6 | nova-conductor   | ansible-server | internal | enabled | up    | 2018-04-01T22:25:55.000000 |
|  7 | nova-scheduler   | ansible-server | internal | enabled | up    | 2018-04-01T22:25:59.000000 |
|  8 | nova-compute     | compute        | nova     | enabled | up    | 2018-04-01T22:25:34.000000 |
|  9 | nova-consoleauth | ansible-server | internal | enabled | up    | 2018-04-01T22:25:57.000000 |
+----+------------------+----------------+----------+---------+-------+----------------------------+
2. List API endpoints in the Identity service to verify connectivity with the Identity service:
[root@controller ~]# openstack catalog list
+-----------+-----------+-----------------------------------------+
| Name      | Type      | Endpoints                               |
+-----------+-----------+-----------------------------------------+
| placement | placement | RegionOne                               |
|           |           |   internal: http://controller:8778     |
|           |           | RegionOne                               |
|           |           |   public: http://controller:8778       |
|           |           | RegionOne                               |
|           |           |   admin: http://controller:8778        |
|           |           |                                         |
| keystone  | identity  | RegionOne                               |
|           |           |   public: http://controller:5000/v3/   |
|           |           | RegionOne                               |
|           |           |   admin: http://controller:35357/v3/   |
|           |           | RegionOne                               |
|           |           |   internal: http://controller:5000/v3/ |
|           |           |                                         |
| glance    | image     | RegionOne                               |
|           |           |   public: http://controller:9292       |
|           |           | RegionOne                               |
|           |           |   admin: http://controller:9292        |
|           |           | RegionOne                               |
|           |           |   internal: http://controller:9292     |
|           |           |                                         |
| nova      | compute   | RegionOne                               |
|           |           |   admin: http://controller:8774/v2.1   |
|           |           | RegionOne                               |
|           |           |   public: http://controller:8774/v2.1  |
|           |           | RegionOne                               |
|           |           |   internal: http://controller:8774/v2.1|
+-----------+-----------+-----------------------------------------+
3. List images
[root@controller ~]# openstack image list
+--------------------------------------+--------+--------+
| ID | Name | Status |
+--------------------------------------+--------+--------+
| 916faa2b-e292-46e0-bfe4-0f535069a1a0 | cirros | active |
+--------------------------------------+--------+--------+
4. Check that the cells and the Placement API are working properly
[root@controller ~]# nova-status upgrade check
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
exception.NotSupportedWarning
Option "os_region_name" from group "placement" is deprecated. Use option "region-name" from group "placement".
+---------------------------+
| Upgrade Check Results |
+---------------------------+
| Check: Cells v2 |
| Result: Success |
| Details: None |
+---------------------------+
| Check: Placement API |
| Result: Success |
| Details: None |
+---------------------------+
| Check: Resource Providers |
| Result: Success |
| Details: None |
+---------------------------+
More Nova information: https://docs.openstack.org/nova/queens/admin/index.html
Install and configure the Neutron networking service on the controller node
1. Create the neutron database and grant access
mysql -u root -p
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'neutron';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '123456';
2. Create the service credentials
. admin-openrc
Create the neutron user (password: 123456):
openstack user create --domain default --password-prompt neutron
Add the admin role to the neutron user:
openstack role add --project service --user neutron admin
Create the neutron service entity:
openstack service create --name neutron --description "OpenStack Networking" network
3. Create the Networking service API endpoints
openstack endpoint create --region RegionOne network public http://controller:9696
openstack endpoint create --region RegionOne network internal http://controller:9696
openstack endpoint create --region RegionOne network admin http://controller:9696
Configure the networking options (controller node)
1. Install the components:
yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables
2. Configure the server component: edit /etc/neutron/neutron.conf
[database]
connection = mysql+pymysql://neutron:123456@controller/neutron
[DEFAULT]
auth_strategy = keystone
core_plugin = ml2
service_plugins =
transport_url = rabbit://openstack:openstack@controller
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 123456
[nova]
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = 123456
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
Configure the Modular Layer 2 (ML2) plug-in
Edit /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,vlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider
[ml2_type_vxlan]
vni_ranges = 1:1000
[securitygroup]
enable_ipset = true
Configure the Linux bridge agent
Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:ens37
[vxlan]
enable_vxlan = false ## if set to true, also configure the two lines below
l2_population = true
local_ip = 192.168.10.18
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
[root@controller ~]# vim /etc/neutron/l3_agent.ini
interface_driver = linuxbridge
Configure the DHCP agent
Edit /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
Configure the metadata agent
Edit /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = 123456
Configure the Compute service to use the Networking service
Edit /etc/nova/nova.conf
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 123456
service_metadata_proxy = true
metadata_proxy_shared_secret = 123456
Finalize the installation
1. Create a symbolic link for the plug-in configuration
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
2. Populate the database
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
3. Restart the Compute API service
systemctl restart openstack-nova-api.service
4. Set the Networking services to start at boot:
systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
Configure the Networking service on the compute node
1. Install the components:
yum -y install openstack-neutron-linuxbridge ebtables ipset
2. Configure the common component
Edit /etc/neutron/neutron.conf
[DEFAULT]
auth_strategy = keystone
transport_url = rabbit://openstack:123456@controller
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 123456
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
Configure the networking options
1. Configure the Linux bridge agent: edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:ens6f0
[vxlan]
enable_vxlan = false
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Configure the Compute service to use the Networking service
Edit /etc/nova/nova.conf
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 123456
Finalize the installation
1. Restart the Compute service
systemctl restart openstack-nova-compute.service
2. Set the Linux bridge agent to start at boot:
systemctl enable neutron-linuxbridge-agent.service
systemctl start neutron-linuxbridge-agent.service
Verification (controller node)
[root@controller ~]# source admin-openrc
[root@controller ~]# openstack extension list --network
[root@controller ~]# openstack network agent list
Install the Horizon dashboard on the controller node
1. Install the package:
yum install openstack-dashboard -y
Edit /etc/openstack-dashboard/local_settings
OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*']
Configure the memcached session storage:
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': 'controller:11211',
}
}
Enable the Identity API version 3:
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
Enable support for domains:
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
Configure the API versions:
OPENSTACK_API_VERSIONS = {
"identity": 3,
"image": 2,
"volume": 2,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
Disable support for layer-3 networking services:
OPENSTACK_NEUTRON_NETWORK = {
'enable_router': False,
'enable_quotas': False,
'enable_distributed_router': False,
'enable_ha_router': False,
'enable_lb': False,
'enable_firewall': False,
'enable_***': False,
'enable_fip_topology_check': False,
}
To prevent the server from returning HTTP 500 errors, add the following line:
[root@controller ~]# vim /etc/httpd/conf.d/openstack-dashboard.conf
WSGIApplicationGroup %{GLOBAL}
2. Finalize the installation: restart the web server and the session storage
systemctl restart httpd.service memcached.service
Open http://10.71.11.12/dashboard in a browser to access the OpenStack web UI and log in with:
domain: default
user: admin
password: 123456
Install and configure Cinder on the controller node
The following commands (taken from the controller's shell history) create the cinder service credentials, endpoints, and services:
mysql -u root -p123456
source admin-openrc
openstack user create --domain default --password-prompt cinder
openstack role add --project service --user cinder admin
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
yum install openstack-cinder python-keystone -y
vim /etc/cinder/cinder.conf
su -s /bin/sh -c "cinder-manage db sync" cinder
mysql -uroot -p123456 -e "use cinder;show tables;"
vim /etc/nova/nova.conf
systemctl restart openstack-nova-api.service
systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
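The history above edits /etc/cinder/cinder.conf and /etc/nova/nova.conf without showing their contents. A minimal sketch of what those steps typically contain, assuming the passwords, RabbitMQ credentials, and IPs used elsewhere in this guide, is:
# Create the cinder database and grants (inside the 'mysql -u root -p123456' session)
CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY '123456';

# /etc/cinder/cinder.conf on the controller (the 'vim /etc/cinder/cinder.conf' step)
[DEFAULT]
transport_url = rabbit://openstack:openstack@controller
auth_strategy = keystone
my_ip = 10.71.11.12
[database]
connection = mysql+pymysql://cinder:123456@controller/cinder
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = 123456
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

# /etc/nova/nova.conf on the controller (the 'vim /etc/nova/nova.conf' step)
[cinder]
os_region_name = RegionOne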
Install and configure the Cinder storage node
This section describes how to install and configure a storage node for the Block Storage service. For simplicity, this configuration uses one storage node with an empty local block storage device.
The service provisions logical volumes on this device using the LVM driver and provides them to instances over the iSCSI transport. You can follow these instructions with minor modifications to horizontally scale your environment with additional storage nodes.
1. Install the supporting packages
Install LVM:
yum install lvm2 device-mapper-persistent-data
Set the LVM service to start at boot:
systemctl enable lvm2-lvmetad.service
systemctl restart lvm2-lvmetad.service
2. Create the LVM physical volume on /dev/sdb1
[root@cinder ~]# pvcreate /dev/sdb1
Device /dev/sdb not found (or ignored by filtering).
Workaround:
Edit /etc/lvm/lvm.conf, find the global_filter line, and configure it as follows:
global_filter = [ "a|.*/|","a|sdb1|"]
Run pvcreate again; the problem is resolved.
[root@cinder ~]# pvcreate /dev/sdb1
Physical volume "/dev/sdb1" successfully created.
3. Create the cinder-volumes volume group
[root@cinder ~]# vgcreate cinder-volumes /dev/sdb1
Volume group "cinder-volumes" successfully created
4. Install and configure the components
Install the packages:
yum install openstack-cinder targetcli python-keystone -y
Edit /etc/cinder/cinder.conf
[DEFAULT]
transport_url = rabbit://openstack:123456@controller
auth_strategy = keystone
my_ip = 10.71.11.14
enabled_backends = lvm
glance_api_servers = http://controller:9292
[database]
connection = mysql+pymysql://cinder:123456@controller/cinder
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = 123456
In the [lvm] section, configure the LVM back end with the LVM driver, the cinder-volumes volume group, the iSCSI protocol, and the appropriate iSCSI service. If the [lvm] section does not exist, create it:
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
Set the storage services to start at boot:
systemctl enable openstack-cinder-volume.service target.service
systemctl restart openstack-cinder-volume.service target.service
Verify from the controller node
source admin-openrc
openstack volume service list
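Optionally, run an end-to-end check by creating a small test volume from the controller (the volume name test-vol is arbitrary):
openstack volume create --size 1 test-vol   # should end up in status 'available'
openstack volume list
openstack volume delete test-vol            # clean up afterwards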
V. Logging in to the Dashboard
The community Queens web UI shows three top-level sections:
• Project
• Admin
• Identity
VI. Uploading an image from the command line
1. Copy the raw ISO image to the controller node
2. Upload the raw ISO image, declaring the qcow2 disk format
[root@controller ~]# openstack image create --disk-format qcow2 --container-format bare --public --file /root/CentOS-7-x86_64-Minimal-1708.iso CentOS-7-x86_64
3. Check the resulting image (commands sketched below)
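For example, to inspect the result (the image name CentOS-7-x86_64 matches the upload command above):
openstack image list                      # the new image should appear with status 'active'
openstack image show CentOS-7-x86_64      # detailed properties: size, disk_format, visibility, checksum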
VII. Creating a virtual machine
1. Create the network
. admin-openrc
openstack network create --share --external --provider-physical-network provider --provider-network-type flat provider
Parameters:
--share  allow all projects to use the virtual network
--external  define the virtual network as external (use --internal for an internal network)
--provider-physical-network provider and --provider-network-type flat  connect to the flat provider network
2. Create a subnet
openstack subnet create --network provider --allocation-pool start=10.71.11.50,end=10.71.11.60 --dns-nameserver 114.114.114.114 --gateway 10.71.11.254 --subnet-range 10.71.11.0/24 provider
3. Create a flavor
openstack flavor create --id 1 --vcpus 4 --ram 128 --disk 1 m2.nano
4. Generate a key pair on the controller node; before launching an instance, add the public key to the Compute service
. demo-openrc
ssh-keygen -q -N ""
openstack keypair create --public-key ~/.ssh/id_rsa.pub liukey
5. Add a security group rule to allow ICMP (ping)
openstack security group rule create --proto icmp default
6. Allow secure shell (SSH) access
openstack security group rule create --proto tcp --dst-port 22 default
7. List flavors
openstack flavor list
8. List available images
9. List networks
10. List security groups
11. Launch the instance
12. Check the instance status (commands for steps 8-12 are sketched below)
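A sketch of the commands for steps 8 to 12, assuming the names created earlier in this guide (cirros, m2.nano, provider, default, liukey); the instance name test-vm is an arbitrary choice:
. demo-openrc
openstack image list                  # 8. available images
openstack network list                # 9. networks
openstack security group list         # 10. security groups
openstack server create --flavor m2.nano --image cirros --network provider --security-group default --key-name liukey test-vm   # 11. launch the instance
openstack server list                 # 12. the instance should reach status ACTIVE
openstack console url show test-vm    # optional: VNC console URL for the instance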
Components installed on the controller node:
yum install centos-release-openstack-queens -y
yum install python-openstackclient -y
yum install openstack-selinux -y
yum install mariadb mariadb-server python2-PyMySQL -y
yum install rabbitmq-server -y
yum install memcached python-memcached -y
yum install etcd -y
yum install openstack-keystone httpd mod_wsgi -y
yum install openstack-glance -y
yum install openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler openstack-nova-placement-api
yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables
yum install openstack-dashboard -y
yum install openstack-cinder -y
Components installed on the compute node:
yum install centos-release-openstack-queens -y
yum install python-openstackclient -y
yum install openstack-selinux -y
yum install openstack-nova-compute
yum install openstack-neutron-linuxbridge ebtables ipset
yum -y install libvirt* ## install this first, otherwise openstack-nova-compute fails to install
yum install -y openstack-nova-compute
Components installed on the storage node:
yum install centos-release-openstack-queens -y
yum -y install lvm2 openstack-cinder targetcli python-keystone
Connecting to an instance from a client with VNC
[root@192 ~]# yum -y install vnc
[root@192 ~]# yum -y install vncview
[root@192 ~]# vncviewer 192.168.0.19:5901