2. OpenStack Mitaka Deployment

2.1 Environment Setup

This lab was deployed on VMware virtual machines running CentOS 7.6. For the Mitaka installation workflow you can also refer to the official OpenStack documentation.

Node (hostname)   CPU virtualization   Specs   IP
controller        enabled              4C4G    192.168.117.130
compute           enabled              4C4G    192.168.117.131

2.1.1 Set the hostname (configure on all nodes)

  • controller node
[root@localhost ~]# hostnamectl set-hostname controller
  • compute node
[root@localhost ~]# hostnamectl set-hostname compute

2.1.2 Hosts file resolution (configure on all nodes)

  • controller node
[root@controller ~]# vim /etc/hosts
##########
192.168.117.130 controller
192.168.117.131 compute
##########
  • compute node
[root@compute ~]# vim /etc/hosts
##########
192.168.117.130 controller
192.168.117.131 compute
##########
  • Verify
[root@controller ~]# ping compute
[root@compute ~]# ping controller

2.1.3 Yum repository configuration (configure on all nodes)

In production you can set up a dedicated yum mirror server instead of pointing every node at the public vault.

  • controller node
[root@controller ~]# vim /etc/yum.repos.d/openstack-mitaka.repo
##########
[openstack]
name=openstack
baseurl=http://vault.centos.org/7.2.1511/cloud/x86_64/openstack-mitaka
enabled=1
gpgcheck=0
##########
[root@controller ~]# yum clean all
[root@controller ~]# yum makecache
  • compute node
[root@compute ~]# vim /etc/yum.repos.d/openstack-mitaka.repo
##########
[openstack]
name=openstack
baseurl=http://vault.centos.org/7.2.1511/cloud/x86_64/openstack-mitaka
enabled=1
gpgcheck=0
##########
[root@compute ~]# yum clean all
[root@compute ~]# yum makecache
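
To confirm the repository is usable, a quick check (run on either node):
[root@compute ~]# yum repolist enabled | grep -i openstack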

2.1.4 NTP configuration (configure on all nodes)

NTP time synchronization is essential; in production it is best to run a dedicated time source server.

1. controller node configuration

  • Install the chrony package
[root@controller ~]# yum install -y chrony
  • Edit the chrony (NTP) configuration file
[root@controller ~]# vim /etc/chrony.conf
##########
server ntp6.aliyun.com iburst
allow 192.168.0.0/16
########## 

  • Start the NTP service and enable it at boot
[root@controller ~]# systemctl enable chronyd.service
[root@controller ~]# systemctl start chronyd.service

2. compute node configuration

  • Install the chrony package
[root@compute ~]# yum install -y chrony
  • Edit the chrony (NTP) configuration file
[root@compute ~]# vim /etc/chrony.conf
##########
server controller iburst
########## 
  • Start the NTP service and enable it at boot
[root@compute ~]# systemctl enable chronyd.service
[root@compute ~]# systemctl start chronyd.service
  • Verify
[root@compute ~]# chronyc sources

2.1.5 OpenStack package installation (perform the following on all nodes)

1. controller node

  • Install the OpenStack client
[root@controller ~]# yum install -y python-openstackclient 
  • Install the openstack-selinux package so that security policies for OpenStack services are managed automatically
[root@controller ~]# yum install -y openstack-selinux

2. compute node

  • Install the OpenStack client
[root@compute ~]# yum install -y python-openstackclient 
  • Install the openstack-selinux package so that security policies for OpenStack services are managed automatically
[root@compute ~]# yum install -y openstack-selinux

2.1.6 Install the database on the controller node

  • Install the packages
[root@controller ~]# yum install -y mariadb mariadb-server python2-PyMySQL
  • Create and edit /etc/my.cnf.d/openstack.cnf
    In the [mysqld] section, set bind-address to the controller's management IP address so that other nodes can reach the database over the management network, then enable some useful options and the UTF-8 character set:
[root@controller ~]# vim /etc/my.cnf.d/openstack.cnf
##########
[mysqld]
bind-address = 192.168.117.130
default-storage-engine = innodb
innodb_file_per_table
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
##########
  • Start the database service and enable it at boot
[root@controller ~]# systemctl start mariadb.service
[root@controller ~]# systemctl enable mariadb.service
  • Secure the database installation (if you skip this step, the database sync steps later on will fail)
[root@controller ~]# mysql_secure_installation
##########
Enter current password for root (enter for none): (press Enter)
Set root password? [Y/n] n
Remove anonymous users? [Y/n] y
Disallow root login remotely? [Y/n] y
Remove test database and access to it? [Y/n] y
Reload privilege tables now? [Y/n] y
##########
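
To confirm the bind-address took effect, a quick check that mysqld is listening on the management IP:
[root@controller ~]# ss -tnlp | grep 3306
#expect 192.168.117.130:3306 in the output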

2.1.7 Install the message queue (RabbitMQ) on the controller node

  • Install the package
[root@controller ~]# yum install -y rabbitmq-server
  • Start the message queue service and enable it at boot
[root@controller ~]# systemctl enable rabbitmq-server.service
[root@controller ~]# systemctl start rabbitmq-server.service
  • Add the openstack user
[root@controller ~]# rabbitmqctl add_user openstack RABBIT_PASS

If this command reports an error (typically because the RabbitMQ node name cannot be resolved), the fix is:

#add the environment variable and restart the service
[root@controller ~]# export HOSTNAME=controller
[root@controller ~]# rabbitmq-server -detached
  • Grant the openstack user configure, write, and read permissions
[root@controller ~]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
  • Enable the RabbitMQ management plugin
[root@controller ~]# rabbitmq-plugins enable rabbitmq_management
  • Verify in a browser: default user guest, default password guest
    5672: client (AMQP) access port
    25672: inter-node cluster port
    15672: management web UI port
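
As a command-line sanity check (a sketch, assuming the management plugin is active; on stock RabbitMQ the guest account can only log in from localhost):
[root@controller ~]# curl -s -u guest:guest http://localhost:15672/api/overview
[root@controller ~]# rabbitmqctl list_users
[root@controller ~]# rabbitmqctl list_permissions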

2.1.8 Install the caching service (memcached) on the controller node

  • Install the packages
[root@controller ~]# yum install -y memcached python-memcached
  • Modify OPTIONS in /etc/sysconfig/memcached (otherwise hosts on other networks cannot reach the service)
    Note: this step is missing from the official openstack-mitaka documentation; it is an easy trap to fall into
[root@controller ~]# vim /etc/sysconfig/memcached 
##########
OPTIONS="-l 192.168.117.0,::1,controller"
##########
  • Start the memcached service and enable it at boot.
[root@controller ~]# systemctl enable memcached.service
[root@controller ~]# systemctl start memcached.service
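
To confirm memcached answers on the configured address, a quick check (memcached-tool ships with the memcached package):
[root@controller ~]# memcached-tool 192.168.117.130:11211 stats | head -5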

2.2 Deploy keystone (identity service) on the controller node

Keystone provides:

  • Authentication: account and password management
  • Authorization: permission management
  • Service catalog: records endpoint information for the other services so that clients can find them
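
The service catalog is what lets clients discover API endpoints by service type; once keystone is deployed and admin-openrc (created in 2.2.7) has been sourced, it can be inspected like this:
[root@controller ~]# openstack catalog list
[root@controller ~]# openstack endpoint list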

2.2.1 Create the database and grant privileges

  • Create the keystone database
MariaDB [(none)]> create database keystone;
  • Grant proper access to the keystone database
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'KEYSTONE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'KEYSTONE_DBPASS';
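
To verify the grants, a quick login test as the keystone user (the same pattern applies to the glance, nova, and neutron grants later on):
[root@controller ~]# mysql -h controller -u keystone -pKEYSTONE_DBPASS -e 'use keystone;'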

2.2.2 Install keystone

  • Install the packages
[root@controller ~]# yum install -y openstack-keystone httpd mod_wsgi
  • Edit the /etc/keystone/keystone.conf configuration file
[root@controller ~]# vim /etc/keystone/keystone.conf
##########
[DEFAULT]
…omitted…
admin_token = ADMIN_TOKEN

[database]
…omitted…
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone

[token]
…omitted…
provider = fernet
##########
  • Sync the database (this creates the tables in the keystone database)
[root@controller ~]# su -s /bin/sh -c "keystone-manage db_sync" keystone
  • Initialize Fernet keys (this generates the fernet-keys directory under /etc/keystone):
[root@controller ~]# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone

2.2.3 Configure the Apache service

  • Edit /etc/httpd/conf/httpd.conf and set the ServerName option to the controller node:
    This is a small optimization that makes httpd start faster
[root@controller ~]# vim /etc/httpd/conf/httpd.conf 
##########
ServerName controller
##########
  • Create the file /etc/httpd/conf.d/wsgi-keystone.conf with the following content
[root@controller ~]# vim /etc/httpd/conf.d/wsgi-keystone.conf
##########
Listen 5000
Listen 35357

<VirtualHost *:5000>
    WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-public
    WSGIScriptAlias / /usr/bin/keystone-wsgi-public
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    ErrorLogFormat "%{cu}t %M"
    ErrorLog /var/log/httpd/keystone-error.log
    CustomLog /var/log/httpd/keystone-access.log combined
    <Directory /usr/bin>
        Require all granted
    </Directory>
</VirtualHost>

<VirtualHost *:35357>
    WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-admin
    WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    ErrorLogFormat "%{cu}t %M"
    ErrorLog /var/log/httpd/keystone-error.log
    CustomLog /var/log/httpd/keystone-access.log combined
    <Directory /usr/bin>
        Require all granted
    </Directory>
</VirtualHost>
##########
  • Start the service
[root@controller ~]# systemctl enable httpd.service
[root@controller ~]# systemctl start httpd.service

2.2.4 Create the service entity and API endpoints

  • Configure the authentication token (must match the admin_token in /etc/keystone/keystone.conf)
[root@controller ~]# export OS_TOKEN=ADMIN_TOKEN
  • Configure the endpoint URL
[root@controller ~]# export OS_URL=http://controller:35357/v3
  • Configure the identity API version
[root@controller ~]# export OS_IDENTITY_API_VERSION=3
  • Create the service entity for the identity service
[root@controller ~]# openstack service create --name keystone --description "OpenStack Identity" identity
  • Create the identity service API endpoints
[root@controller ~]# openstack endpoint create --region RegionOne identity public http://controller:5000/v3
[root@controller ~]# openstack endpoint create --region RegionOne identity internal http://controller:5000/v3
[root@controller ~]# openstack endpoint create --region RegionOne identity admin http://controller:35357/v3

2.2.5 Create a domain, projects (tenants), users, and roles

  • Create the default domain
[root@controller ~]# openstack domain create --description "Default Domain" default
  • Create the admin project
[root@controller ~]# openstack project create --domain default --description "Admin Project" admin
  • Create the admin user with password ADMIN_PASS
[root@controller ~]# openstack user create --domain default --password ADMIN_PASS admin
  • Create the admin role
[root@controller ~]# openstack role create admin
  • Add the admin role to the admin project and user
[root@controller ~]# openstack role add --project admin --user admin admin
  • Create the service project
[root@controller ~]# openstack project create --domain default --description "Service Project" service

2.2.6 Verify operation

  • Unset the OS_TOKEN and OS_URL environment variables
[root@controller ~]# unset OS_TOKEN OS_URL
  • As the admin user, request an authentication token
[root@controller ~]# openstack --os-auth-url http://controller:35357/v3 \
>   --os-project-domain-name default --os-user-domain-name default \
>   --os-project-name admin --os-username admin token issue
##########
Password: ADMIN_PASS
##########

2.2.7 Create OpenStack client environment scripts

  • Create and edit the admin-openrc file and add the following content
[root@controller ~]# vim admin-openrc
##########
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
##########
  • Source the admin-openrc file to load the environment variables
[root@controller ~]# source admin-openrc 

Note: add this command to the .bashrc file so the environment variables are loaded automatically after a reboot, or put the exports into /etc/profile.
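
For example (assuming admin-openrc sits in root's home directory):
[root@controller ~]# echo 'source ~/admin-openrc' >> ~/.bashrc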

  • Verify that keystone is working
[root@controller ~]# openstack token issue

2.3 Deploy glance (image service) on the controller node

The OpenStack image service consists of two components:

  • glance-api (accepts image API calls, such as image discovery, download, and upload)
  • glance-registry (stores, processes, and retrieves image metadata, such as size and type)

2.3.1 Create the database, service credentials, and API endpoints

  • Create the glance database
MariaDB [(none)]> create database glance;
  • Grant proper access to the glance database
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'GLANCE_DBPASS';
  • Create the glance user with password GLANCE_PASS (note: this differs slightly from the official documentation)
[root@controller ~]# openstack user create --domain default --password GLANCE_PASS glance
  • Add the admin role to the glance user and service project
[root@controller ~]# openstack role add --project service --user glance admin

If this command errors out, the cause is that the service project from keystone step 2.2.5 was never created.

  • Create the glance service entity
[root@controller ~]# openstack service create --name glance --description "OpenStack Image" image
  • Create the image service API endpoints
[root@controller ~]# openstack endpoint create --region RegionOne image public http://controller:9292
[root@controller ~]# openstack endpoint create --region RegionOne image internal http://controller:9292
[root@controller ~]# openstack endpoint create --region RegionOne image admin http://controller:9292

2.3.2 Install and configure glance

  • Install the packages
[root@controller ~]# yum install -y openstack-glance
  • Edit /etc/glance/glance-api.conf and complete the following actions
[root@controller ~]# vim /etc/glance/glance-api.conf
##########
[database]
…omitted…
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

[keystone_authtoken]
…omitted…
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = GLANCE_PASS

[paste_deploy]
…omitted…
flavor = keystone

[glance_store]
…omitted…
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
##########
  • Edit /etc/glance/glance-registry.conf and complete the following actions
[root@controller ~]# vim /etc/glance/glance-registry.conf 
##########
[database]
…omitted…
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

[keystone_authtoken]
…omitted…
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = GLANCE_PASS

[paste_deploy]
…omitted…
flavor = keystone
##########
  • Populate the image service database
[root@controller ~]# su -s /bin/sh -c "glance-manage db_sync" glance
  • Start the image services and enable them at boot
[root@controller ~]# systemctl enable openstack-glance-api.service   openstack-glance-registry.service
[root@controller ~]# systemctl start openstack-glance-api.service   openstack-glance-registry.service

2.3.3 Verify operation

  • Download the source image
[root@controller ~]# wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
  • Upload the image to the image service using the QCOW2 disk format and make it publicly visible
[root@controller ~]# openstack image create "cirros"   --file cirros-0.3.4-x86_64-disk.img   --disk-format qcow2 --container-format bare   --public
  • Confirm the upload and verify the image attributes
[root@controller ~]# openstack image list
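
You can also inspect the image metadata and the backing file (with the file backend configured above, the image lands under /var/lib/glance/images/):
[root@controller ~]# openstack image show cirros
[root@controller ~]# ls -lh /var/lib/glance/images/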

2.4 Deploy nova (compute service) on the controller node

The nova service consists of the following components:

  • nova-api: accepts and responds to end-user compute API calls; manages the instance lifecycle
  • nova-api-metadata: accepts metadata requests sent by instances
  • nova-compute: actually manages the virtual machines (nova-compute calls libvirt)
  • nova-scheduler: the nova scheduler (picks the most suitable nova-compute host on which to create an instance)
  • nova-conductor: updates instance state in the database on behalf of nova-compute
  • nova-network: provides networking services
  • nova-novncproxy: provides a web-based VNC client
  • nova-consoleauth: authorizes tokens for VNC access
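
On CentOS these components map onto systemd units named by the RDO packages installed below; once they are installed, a quick look:
[root@controller ~]# systemctl list-units 'openstack-nova-*' --no-pager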

2.4.1 Create the databases, service credentials, and API endpoints

  • Create the databases
MariaDB [(none)]> create database nova_api;
MariaDB [(none)]> create database nova;
  • Grant proper access to the databases
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost'    IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%'    IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost'    IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%'    IDENTIFIED BY 'NOVA_DBPASS';
  • Create the nova user
[root@controller ~]# openstack user create --domain default   --password NOVA_PASS nova
  • Add the admin role to the nova user
[root@controller ~]# openstack role add --project service --user nova admin
  • Create the nova service entity
[root@controller ~]# openstack service create --name nova   --description "OpenStack Compute" compute
  • Create the compute service API endpoints
[root@controller ~]# openstack endpoint create --region RegionOne   compute public http://controller:8774/v2.1/%\(tenant_id\)s 
[root@controller ~]# openstack endpoint create --region RegionOne   compute internal http://controller:8774/v2.1/%\(tenant_id\)s
[root@controller ~]# openstack endpoint create --region RegionOne   compute admin http://controller:8774/v2.1/%\(tenant_id\)s

2.4.2 Install and configure nova

  • Install the packages
[root@controller ~]# yum install -y openstack-nova-api openstack-nova-conductor   openstack-nova-console openstack-nova-novncproxy   openstack-nova-scheduler
  • Edit /etc/nova/nova.conf and complete the following actions
[root@controller ~]# vim /etc/nova/nova.conf 
##########
[DEFAULT]
…omitted…
enabled_apis = osapi_compute,metadata
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 192.168.117.130
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api_database]
…omitted…
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api

[database]
…omitted…
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova

[oslo_messaging_rabbit]
…omitted…
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS

[keystone_authtoken]
…omitted…
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = NOVA_PASS

[vnc]
…omitted…
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip

[glance]
…omitted…
api_servers = http://controller:9292

[oslo_concurrency]
…omitted…
lock_path = /var/lib/nova/tmp
##########
  • Sync the compute databases
[root@controller ~]# su -s /bin/sh -c "nova-manage api_db sync" nova
[root@controller ~]# su -s /bin/sh -c "nova-manage db sync" nova
  • Start the compute services and enable them at boot
[root@controller ~]# systemctl enable openstack-nova-api.service   openstack-nova-consoleauth.service openstack-nova-scheduler.service   openstack-nova-conductor.service openstack-nova-novncproxy.service
[root@controller ~]# systemctl start openstack-nova-api.service   openstack-nova-consoleauth.service openstack-nova-scheduler.service   openstack-nova-conductor.service openstack-nova-novncproxy.service

2.4.3 Verify

[root@controller ~]# nova service-list

2.5 Deploy nova (compute service) on the compute node

2.5.1 Install and configure nova

  • Install the openstack-nova-compute package (it calls libvirt to create virtual machines)
[root@compute ~]# yum install -y openstack-nova-compute
  • Edit /etc/nova/nova.conf and complete the following actions
[root@compute ~]# vim /etc/nova/nova.conf
##########
[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
my_ip = 192.168.117.131

[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = NOVA_PASS

[vnc]
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp
##########
  • Check whether the compute node supports hardware acceleration for virtual machines (see the check after this list)
[root@compute ~]# egrep -c '(vmx|svm)' /proc/cpuinfo
#if the return value is one or greater, no extra configuration is needed
#if the return value is zero, edit /etc/nova/nova.conf as follows:
[libvirt]
…omitted…
virt_type = qemu
  • Start the compute service and its dependencies and enable them at boot (if they fail to start, check the configuration file)
[root@compute ~]# systemctl enable libvirtd.service openstack-nova-compute.service
[root@compute ~]# systemctl start libvirtd.service openstack-nova-compute.service
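
Since this lab runs on VMware VMs with CPU virtualization enabled (see 2.1), the egrep count should be non-zero; a quick check that the KVM modules are loaded:
[root@compute ~]# lsmod | grep kvm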

2.5.2 Verify (run on the controller node)

[root@controller ~]# nova service-list
  • The compute node's nova-compute service should now appear in the list.

2.6 Deploy neutron (networking service) on the controller node

  • neutron-server: listens on port 9696; accepts and responds to external network management requests
  • neutron-linuxbridge-agent: creates and manages the Linux bridges
  • neutron-dhcp-agent: hands out IP addresses
  • neutron-metadata-agent: works with the nova metadata API to support instance customization
  • L3-agent: implements layer-3 networking, e.g. VXLAN (network layer)

2.6.1 Create the database, service credentials, and API endpoints

  • Create the database and grant proper access
MariaDB [(none)]> create database neutron;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost'    IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%'    IDENTIFIED BY 'NEUTRON_DBPASS';
  • Create the neutron user
[root@controller ~]# openstack user create --domain default --password NEUTRON_PASS neutron
  • Add the admin role to the neutron user
[root@controller ~]# openstack role add --project service --user neutron admin
  • Create the neutron service entity
[root@controller ~]# openstack service create --name neutron   --description "OpenStack Networking" network
  • Create the networking service API endpoints
[root@controller ~]# openstack endpoint create --region RegionOne   network public http://controller:9696
[root@controller ~]# openstack endpoint create --region RegionOne   network internal http://controller:9696
[root@controller ~]# openstack endpoint create --region RegionOne   network admin http://controller:9696

2.6.2 Provider network configuration (layer 2)

  • Install the components
[root@controller ~]# yum install -y openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables
  • Edit /etc/neutron/neutron.conf and complete the following actions
[root@controller ~]# vim /etc/neutron/neutron.conf 
##########
[DEFAULT]
core_plugin = ml2
service_plugins =
rpc_backend = rabbit
auth_strategy = keystone
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True

[database]
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron

[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS

[nova]
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = NOVA_PASS

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
##########
  • Edit /etc/neutron/plugins/ml2/ml2_conf.ini and complete the following actions
[root@controller ~]# vim /etc/neutron/plugins/ml2/ml2_conf.ini 
##########
[ml2]
type_drivers = flat,vlan
tenant_network_types =
mechanism_drivers = linuxbridge
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[securitygroup]
enable_ipset = True
##########
  • Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini and complete the following actions
[root@controller ~]# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini 
##########
[linux_bridge]
physical_interface_mappings = provider:ens33

[vxlan]
enable_vxlan = False

[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
##########
  • Edit /etc/neutron/dhcp_agent.ini and complete the following actions
[root@controller ~]# vim /etc/neutron/dhcp_agent.ini 
##########
[DEFAULT]
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True
##########

2.6.3 Configure the metadata agent

  • Edit /etc/neutron/metadata_agent.ini and complete the following actions
[root@controller ~]# vim /etc/neutron/metadata_agent.ini 
##########
[DEFAULT]
nova_metadata_ip = controller
metadata_proxy_shared_secret = METADATA_SECRET
##########

2.6.4 Configure the compute (nova) service to use the networking service

  • Edit /etc/nova/nova.conf and complete the following actions
[root@controller ~]# vim /etc/nova/nova.conf 
##########
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS

service_metadata_proxy = True
metadata_proxy_shared_secret = METADATA_SECRET
##########

2.6.5 Finalize the installation

  • Link the networking initialization script
[root@controller ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
  • Sync the database
[root@controller ~]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
  • Restart the compute API service
[root@controller ~]# systemctl restart openstack-nova-api.service
  • Start the networking services and enable them at boot
[root@controller ~]# systemctl enable neutron-server.service   neutron-linuxbridge-agent.service neutron-dhcp-agent.service   neutron-metadata-agent.service
[root@controller ~]# systemctl start neutron-server.service   neutron-linuxbridge-agent.service neutron-dhcp-agent.service   neutron-metadata-agent.service
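
At this point neutron-server should answer on port 9696; listing the loaded API extensions is a quick smoke test:
[root@controller ~]# neutron ext-list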

2.7 Deploy neutron (networking service) on the compute node

2.7.1 Install the components

[root@compute ~]# yum install -y openstack-neutron-linuxbridge ebtables ipset

2.7.2 Configure the common components

  • Edit /etc/neutron/neutron.conf and complete the following actions
[root@compute ~]# vim /etc/neutron/neutron.conf 
##########
[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone

[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
##########

2.7.3 Provider network configuration

  • Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini and complete the following actions
[root@compute ~]# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini 
##########
[linux_bridge]
physical_interface_mappings = provider:ens33

[vxlan]
enable_vxlan = False

[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
##########

2.7.4 Configure the compute service to use the networking service

  • Edit /etc/nova/nova.conf and complete the following actions
[root@compute ~]# vim /etc/nova/nova.conf 
##########
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
##########

2.7.5 Finalize the installation

  • Restart the compute service
[root@compute ~]# systemctl restart openstack-nova-compute.service
  • Start the linuxbridge agent and enable it at boot
[root@compute ~]# systemctl enable neutron-linuxbridge-agent.service
[root@compute ~]# systemctl start neutron-linuxbridge-agent.service

2.7.6 Verify (run the following on the controller node)

[root@controller ~]# neutron agent-list

2.8 Deploy horizon (web dashboard) on the controller node

2.8.1 Install the software

[root@controller ~]# yum install -y openstack-dashboard

2.8.2 Edit /etc/openstack-dashboard/local_settings and complete the following actions

[root@controller ~]# vim /etc/openstack-dashboard/local_settings
##########
OPENSTACK_HOST = "controller"
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

ALLOWED_HOSTS = ['*',]

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
         'LOCATION': 'controller:11211',
    }
}

OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "volume": 2,
    "compute": 2,
}

OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = 'default'

OPENSTACK_NEUTRON_NETWORK = {
    'enable_router': False,
    'enable_quotas': False,
    'enable_ipv6': False,
    'enable_distributed_router': False,
    'enable_ha_router': False,
    'enable_lb': False,
    'enable_firewall': False,
    'enable_vpn': False,
    'enable_fip_topology_check': False,
}

TIME_ZONE = "Asia/Shanghai"
##########

2.8.3 Restart the web server and session storage (memcached) services

[root@controller ~]# systemctl restart httpd.service memcached.service

2.8.4 Verify operation

  • In a browser, visit http://192.168.117.130/dashboard
  • If the dashboard is unreachable in the browser, the fix is to add the following directive to the httpd config file and then restart httpd:
[root@controller ~]# vim /etc/httpd/conf.d/openstack-dashboard.conf 
##########
WSGIApplicationGroup %{GLOBAL}
##########
[root@controller ~]# systemctl restart httpd.service
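
After restarting httpd, a quick command-line check (expect an HTTP 200, or a 302 redirect to the login page):
[root@controller ~]# curl -s -o /dev/null -w '%{http_code}\n' http://controller/dashboard/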
