This guide runs into a few problems during deployment; a version corrected for ESXi is available at https://www.jianshu.com/p/b140e32aaedd
Network environment:
M: management network  E: external network  I: instance tunnels network
Prepare four nodes: a controller node, a network node, a compute node, and a storage node.
controller:
M: 192.168.1.5
I : 172.16.0.11
E : 100.100.100.11
Resources: 1 vCPU, 1.5 GB RAM, 1 NIC, 100 GB disk
neutron (actual hostname: network; may share a single host with controller):
M: 192.168.1.6
I : 172.16.0.6
E : 100.100.100.10
Resources: 1 vCPU, 1.5 GB RAM, 3 NICs, 20 GB disk
compute:
M: 192.168.1.10
I :172.16.0.10
Resources: as many vCPUs and as much RAM as available, 2 NICs, 100 GB disk
block (optional):
M: 192.168.1.20
Resources: 1 vCPU, 1.5 GB RAM, 1 NIC, 100 GB disk
Environment screenshots:
Since I need outbound network access, the first NIC of every VM uses NAT mode, while VMnet1 and VMnet2 both use host-only mode.
Virtual network (VMnet) configuration:
Note:
It is recommended to disable the DHCP service on these virtual networks. A newly added NIC may have no matching ifcfg file; copy an existing one over and adjust it, using `ip addr` to see which device each file should correspond to (a sketch follows below).
Once that is configured, this part is done.
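A minimal sketch of the copy-and-edit step above, assuming the existing NIC is ens33 and the new one shows up as ens37 (the device names and address here are placeholders; match them to your own `ip addr` output):
cd /etc/sysconfig/network-scripts
cp ifcfg-ens33 ifcfg-ens37
sed -i -e 's/ens33/ens37/g' -e 's/^BOOTPROTO=.*/BOOTPROTO=static/' ifcfg-ens37
sed -i -e '/^UUID=/d' -e '/^HWADDR=/d' ifcfg-ens37   # drop stale identity lines from the copied file
printf 'IPADDR=172.16.0.11\nPREFIX=24\n' >> ifcfg-ens37
systemctl restart network
ip addr show ens37   # confirm the address came up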
Network configuration on the neutron (network) node:
Prerequisites (all nodes):
# 1、Disable the firewall and NetworkManager
systemctl stop firewalld.service
systemctl disable firewalld.service
systemctl stop NetworkManager
systemctl disable NetworkManager
# 2、Disable SELinux
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
grep -n 'SELINUX=' /etc/selinux/config
# 3、Set the hostname
echo 'xxxxxx' > /etc/hostname
hostnamectl set-hostname xxxxxx
# 4、Configure DNS
vi /etc/resolv.conf
nameserver 114.114.114.114
nameserver 8.8.8.8
# 5、Enable the OpenStack repositories
yum -y install yum-plugin-priorities
yum install -y centos-release-openstack-stein
yum upgrade -y
yum -y install openstack-selinux
yum install -y python-openstackclient
# 6、Add hostname-to-IP mappings to /etc/hosts
vi /etc/hosts
192.168.1.5 controller
192.168.1.6 network
192.168.1.10 compute
192.168.1.20 block
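A quick check that name resolution works from each node (a plain ping over the management network):
for h in controller network compute block; do
    ping -c1 -W1 $h >/dev/null && echo "$h ok" || echo "$h unreachable"
done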
# 7、Set up time synchronization (chrony)
# 1、Install the package
yum install -y chrony
# 2、Allow the other nodes to connect to the chrony daemon on the controller
echo 'allow 192.168.1.0/24' >> /etc/chrony.conf
Replace the original server entries in /etc/chrony.conf with:
server ntp1.aliyun.com iburst
server ntp2.aliyun.com iburst
server ntp3.aliyun.com iburst
server ntp4.aliyun.com iburst
# 3、Start the NTP service and enable it at boot
systemctl enable chronyd.service
systemctl start chronyd.service
# 4、Set the time zone
timedatectl set-timezone Asia/Shanghai
# 5、Check the time status
timedatectl status
Screenshot: the configuration changed to the Aliyun time servers.
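To confirm synchronization, chrony can list its sources on any node (the entry marked '^*' is the server currently synced to):
chronyc sources -v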
KeyStone (controller node)
Authentication workflow
How the services authenticate with each other
Role binding
What the controller node needs preinstalled
1、Install MariaDB
# 1、Install the packages (python2-PyMySQL provides the pymysql driver used in the connection strings below)
yum install -y mariadb mariadb-server python2-PyMySQL
# 2、Configure
vim /etc/my.cnf.d/mariadb-server.cnf # add the following lines under the [mysqld] section
default-storage-engine = innodb
innodb_file_per_table = on
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8
# 3、Start the database service and enable it at boot
systemctl start mariadb.service
systemctl enable mariadb.service
# 4、Secure the installation (set the root password)
mysql_secure_installation
2、Install Memcached
# 1、Install the packages
yum install -y memcached python-memcached
# 2、Change the listen address
sed -i 's/127.0.0.1/0.0.0.0/' /etc/sysconfig/memcached
# 3、Start and enable at boot
systemctl start memcached.service
systemctl enable memcached.service
# 4、Test
printf "set foo 0 0 3\r\nbar\r\n"|nc controller 11211 # store a value
printf "get foo\r\n"|nc controller 11211 # read it back; also run this test from the compute node
3、Install the message queue (RabbitMQ)
# 1、Install
yum install -y rabbitmq-server
# 2、Start
systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service
# 3、Create the openstack user (password: openstack)
rabbitmqctl add_user openstack openstack
# 4、Grant permissions
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
# 5、Enable the web management UI
rabbitmq-plugins list # list the available rabbitmq plugins
rabbitmq-plugins enable rabbitmq_management # enable the web management UI
# 6、Log in from a browser
# Open http://192.168.1.5:15672/
# Username and password are both: guest (the first login must use this account)
# 7、In the browser, update the Tags of the newly created openstack user to: administrator
# Click Admin -> click openstack in the Users list -> under "Update this user" enter the password twice (a password is required, so re-enter the original one: openstack), set Tags to administrator -> click "Update user"
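If you prefer to skip the web UI, the same tag can be set with a standard rabbitmqctl command:
rabbitmqctl set_user_tags openstack administrator
rabbitmqctl list_users   # should show openstack with [administrator]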
4、Install the OpenStack repository and the OpenStack client
yum install -y centos-release-openstack-stein
yum install -y python-openstackclient
Installing Keystone
1、Database setup
# Create the keystone database and grant privileges
-- 1、Log in to the database server
mysql -uroot -p
-- 2、Create the database
create database keystone;
-- 3、Create the user and grant privileges
grant all privileges on keystone.* to keystone_user@controller identified by 'keystone_pass';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone_user'@'localhost' IDENTIFIED BY 'keystone_pass';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone_user'@'%' IDENTIFIED BY 'keystone_pass';
-- 4、Flush privileges
flush privileges;
-- 5、Exit the session
quit;
2、Install the packages
yum install -y openstack-keystone httpd mod_wsgi
3、Edit the configuration file
# 1、Back up the original file
cp /etc/keystone/keystone.conf /etc/keystone/keystone.conf.ori
# 2、Set the following sections, vim /etc/keystone/keystone.conf
[database]
connection = mysql+pymysql://keystone_user:keystone_pass@controller/keystone
[token]
provider = fernet
4、Sync the database
su -s /bin/sh -c "keystone-manage db_sync" keystone
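As a sanity check (using the keystone_user credentials created above), the sync should have populated the keystone schema:
mysql -ukeystone_user -pkeystone_pass keystone -e 'SHOW TABLES;' | head -5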
5、Initialize the Fernet key repositories
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
6、Bootstrap the Keystone admin account (password: admin_pass)
keystone-manage bootstrap --bootstrap-password admin_pass \
--bootstrap-admin-url http://controller:5000/v3/ \
--bootstrap-internal-url http://controller:5000/v3/ \
--bootstrap-public-url http://controller:5000/v3/ \
--bootstrap-region-id RegionOne
7、Configure and start the Apache HTTP server
# 1、Set the ServerName
sed -i '/#ServerName/aServerName controller:80' /etc/httpd/conf/httpd.conf
# 2、Link in the keystone config file
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
# 3、Start and enable at boot
systemctl start httpd.service
systemctl enable httpd.service
# 4、Set the admin account environment variables
export OS_USERNAME=admin
export OS_PASSWORD=admin_pass
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
Create domains, projects, users, and roles
1 Create a new domain.
openstack domain create --description "An Example Domain" example
2 Create the service project.
openstack project create --domain default --description "Service Project" service
3 Regular (non-admin) tasks should use an unprivileged project and user
# 1、Create the project
openstack project create --domain default --description "Demo Project" myproject
# 2、Create the user (password: myuser_pass)
openstack user create --domain default --password myuser_pass myuser
# 3、Create the role
openstack role create myrole
# 4、Add the role to the user on the project
openstack role add --project myproject --user myuser myrole
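The assignment can be confirmed with the standard openstack CLI:
openstack role assignment list --user myuser --project myproject --names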
Verify Keystone
1、Unset the temporary environment variables OS_AUTH_URL and OS_PASSWORD
unset OS_AUTH_URL OS_PASSWORD
2 Verify the admin user; the password is: admin_pass
openstack --os-auth-url http://controller:5000/v3 \
--os-project-domain-name Default --os-user-domain-name Default \
--os-project-name admin --os-username admin token issue
3 Verify myuser; the password is: myuser_pass
openstack --os-auth-url http://controller:5000/v3 \
--os-project-domain-name Default --os-user-domain-name Default \
--os-project-name myproject --os-username myuser token issue
Create client environment variable scripts
1 Create the scripts
# 1、Go to the home directory
cd ~
# 2、Create the OpenStack client environment script for the admin user
cat > admin-openrc <<EOF
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin_pass
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
# 3、Create the matching script for the demo user
cat > demo-openrc <<EOF
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=myproject
export OS_USERNAME=myuser
export OS_PASSWORD=myuser_pass
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
2 Verify the scripts
# 1、Load the environment variables
cd ~
. admin-openrc
# 2、Request a token
openstack token issue
GLANCE
Workflow
Create the Glance database
1 Log in to the database.
mysql -u root -p
2 Create the glance database.
CREATE DATABASE glance;
3 Grant privileges, allowing both local and remote access; 'glance_pass' is the password for the glance_user database account.
GRANT ALL PRIVILEGES ON glance.* TO 'glance_user'@'localhost' IDENTIFIED BY 'glance_pass';
GRANT ALL PRIVILEGES ON glance.* TO 'glance_user'@'%' IDENTIFIED BY 'glance_pass';
grant all privileges on glance.* to glance_user@controller identified by 'glance_pass';
flush privileges;
quit;
Create the user and role
Source the keystone admin credentials
. admin-openrc
Create the Glance service credentials
# 1、Create the glance user (password: glance_pass)
openstack user create --domain default --password glance_pass glance
# 2、Add the glance user to the service project with the admin role
openstack role add --project service --user glance admin
# 3、Create the glance service entity
openstack service create --name glance --description "OpenStack Image" image
Create the Glance API endpoints
# 1、Create the public image API endpoint
openstack endpoint create --region RegionOne image public http://controller:9292
# 2、Create the internal image API endpoint
openstack endpoint create --region RegionOne image internal http://controller:9292
# 3、Create the admin image API endpoint
openstack endpoint create --region RegionOne image admin http://controller:9292
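To double-check, list the image-service endpoints just created (standard openstack CLI):
openstack endpoint list --service image   # expect public, internal, and admin entries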
Install and configure
Install the packages
yum install -y openstack-glance
Edit the glance-api.conf configuration file
# 1、Back up the original file and strip comments and blank lines
sed -i.default -e '/^#/d' -e '/^$/d' /etc/glance/glance-api.conf
# 2、Set the following sections, vim /etc/glance/glance-api.conf
[database]
connection = mysql+pymysql://glance_user:glance_pass@controller/glance
[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = glance_pass
[paste_deploy]
flavor = keystone
Edit the glance-registry.conf configuration file
# 1、Back up the original file and strip comments and blank lines
sed -i.default -e '/^#/d' -e '/^$/d' /etc/glance/glance-registry.conf
# 2、Set the following sections, vim /etc/glance/glance-registry.conf
[database]
connection = mysql+pymysql://glance_user:glance_pass@controller/glance
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = glance_pass
[paste_deploy]
flavor = keystone
Sync the database
su -s /bin/sh -c "glance-manage db_sync" glance
Start the services and enable them at boot
systemctl start openstack-glance-api.service openstack-glance-registry.service
systemctl enable openstack-glance-api.service openstack-glance-registry.service
Verify Glance
cd ~
. admin-openrc
Download an image.
Enter /var/lib/glance/images
wget https://download.cirros-cloud.net/0.4.0/cirros-0.4.0-aarch64-disk.img
(This is the aarch64 image used by the Huawei Kunpeng guide; on x86_64 compute nodes download cirros-0.4.0-x86_64-disk.img instead and adjust the file name in the next command.)
Upload the image to glance.
openstack image create "cirros" --file cirros-0.4.0-aarch64-disk.img --disk-format qcow2 --container-format bare --public
Confirm the uploaded image and its properties.
openstack image list
Placement (controller node)
Tracking compute resources used to be Nova's job; after the Newton release this resource-tracking functionality was split out of Nova into the standalone Placement project.
Create the Placement database
mysql -u root -p
create database placement;
grant all privileges on placement.* to 'placement_user'@'controller' identified by 'placement_pass';
GRANT ALL PRIVILEGES ON placement.* TO 'placement_user'@'localhost' IDENTIFIED BY 'placement_pass';
GRANT ALL PRIVILEGES ON placement.* TO 'placement_user'@'%' IDENTIFIED BY 'placement_pass';
flush privileges;
quit;
Source the Keystone admin credentials
cd ~
. admin-openrc
Create the Placement service credentials
# 1、Create the placement user, with password: placement_pass
openstack user create --domain default --password placement_pass placement
# 2、Add the admin role to the placement user in the service project
openstack role add --project service --user placement admin
# 3、Create the placement service entity
openstack service create --name placement --description "Placement API" placement
Create the Placement API endpoints
openstack endpoint create --region RegionOne placement public http://controller:8778
openstack endpoint create --region RegionOne placement internal http://controller:8778
openstack endpoint create --region RegionOne placement admin http://controller:8778
Install and configure
Install the packages
yum install -y openstack-placement-api
Edit the placement.conf configuration file
# 1、Back up the original file and strip comments and blank lines
sed -i.default -e '/^#/d' -e '/^$/d' /etc/placement/placement.conf
# 2、Set the following sections, vim /etc/placement/placement.conf
[api]
auth_strategy = keystone
[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = placement
password = placement_pass
[placement_database]
connection = mysql+pymysql://placement_user:placement_pass@controller/placement
Sync the database
su -s /bin/sh -c "placement-manage db sync" placement
Allow other components to access the Placement API
# 1、Append the following to the Apache HTTP server config
cat >>/etc/httpd/conf.d/00-placement-api.conf<<EOF
<Directory /usr/bin>
  <IfVersion >= 2.4>
    Require all granted
  </IfVersion>
  <IfVersion < 2.4>
    Order allow,deny
    Allow from all
  </IfVersion>
</Directory>
EOF
# 2、Restart the Apache HTTP server for the change to take effect
systemctl restart httpd
Check the Placement installation
placement-status upgrade check
Install pip
yum install -y epel-release
yum install -y python-pip
rm -rf /etc/yum.repos.d/epel.repo /etc/yum.repos.d/epel-testing.repo
Run the following commands against the Placement API:
1. Install the osc-placement plugin.
pip install osc-placement
2. List the available resource classes and traits.
openstack --os-placement-api-version 1.2 resource class list --sort-column name
openstack --os-placement-api-version 1.6 trait list --sort-column name
Nova
Execution flow
Internal communication mechanisms
Interaction with the other components
Instance boot workflow
Controller node steps
Note: for this part I followed Huawei's deployment guide, the official OpenStack installation guide, and a few other writeups. Huawei adds explicit QEMU and libvirt installation steps that none of the other guides include, so I skip them here; if you need them, see https://support.huaweicloud.com/dpmg-kunpengcpfs/kunpengopenstackstein_04_0010.html
- Create the Nova databases
# 1、Create the databases
create database nova_api;
create database nova;
create database nova_cell0;
# 2、Grant privileges
grant all privileges on nova_api.* to 'nova_user'@'controller' identified by 'nova_pass';
grant all privileges on nova.* to 'nova_user'@'controller' identified by 'nova_pass';
grant all privileges on nova_cell0.* to 'nova_user'@'controller' identified by 'nova_pass';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova_user'@'localhost' IDENTIFIED BY 'nova_pass';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova_user'@'%' IDENTIFIED BY 'nova_pass';
GRANT ALL PRIVILEGES ON nova.* TO 'nova_user'@'localhost' IDENTIFIED BY 'nova_pass';
GRANT ALL PRIVILEGES ON nova.* TO 'nova_user'@'%' IDENTIFIED BY 'nova_pass';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova_user'@'localhost' IDENTIFIED BY 'nova_pass';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova_user'@'%' IDENTIFIED BY 'nova_pass';
# 3、Flush privileges
flush privileges;
quit;
- Create the Nova service credentials
cd ~
. admin-openrc
# 1、Create the nova user (password: nova_pass)
openstack user create --domain default --password nova_pass nova
# 2、Add the admin role to the nova user in the service project
openstack role add --project service --user nova admin
# 3、Create the nova service entity
openstack service create --name nova --description "OpenStack Compute" compute
# 4、Create the Nova API endpoints
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
- Install nova
yum install -y openstack-nova-api openstack-nova-conductor openstack-nova-novncproxy openstack-nova-scheduler
- Edit the nova.conf configuration file
# 1、Back up the original file and strip comments and blank lines
sed -i.default -e '/^#/d' -e '/^$/d' /etc/nova/nova.conf
# 2、Set the following sections, vim /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:openstack@controller
my_ip = 192.168.1.5
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
rpc_backend=rabbit
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[api]
auth_strategy = keystone
[api_database]
connection = mysql+pymysql://nova_user:nova_pass@controller/nova_api
[database]
connection = mysql+pymysql://nova_user:nova_pass@controller/nova
[glance]
api_servers = http://controller:9292
[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova_pass
[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = placement_pass
[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
# The options below are used for reaching the nova compute nodes through the VNC proxy
novncproxy_host=0.0.0.0
novncproxy_port=6080
novncproxy_base_url=http://192.168.1.5:6080/vnc_auto.html
- Sync the nova-api database
su -s /bin/sh -c "nova-manage api_db sync" nova
- Register the cell0 database
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
- Create the cell1 cell
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
- Sync the nova database
su -s /bin/sh -c "nova-manage db sync" nova
- Verify that cell0 and cell1 are registered correctly
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova
- Start the services and enable them at boot
systemctl start openstack-nova-api.service openstack-nova-scheduler.service \
openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl enable openstack-nova-api.service openstack-nova-scheduler.service \
openstack-nova-conductor.service openstack-nova-novncproxy.service
Compute node steps
- Check that the CPU supports hardware virtualization
egrep -c '(vmx|svm)' /proc/cpuinfo # a result of 1 or greater means it is supported
- Install the packages
yum install -y openstack-nova-compute
- Edit the nova.conf configuration file
# 1、Back up the original file and strip comments and blank lines
sed -i.default -e '/^#/d' -e '/^$/d' /etc/nova/nova.conf
# 2、Set the following sections, vim /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:openstack@controller
my_ip = 192.168.1.10
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api]
auth_strategy = keystone
[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = nova_pass
[vnc]
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html
vncserver_proxyclient_address = $my_ip
[glance]
api_servers = http://controller:9292
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
# Huawei's guide includes this [placement] section on the compute node; keep it so the node can report its resources to Placement
[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = placement_pass
[libvirt]
virt_type = qemu
- Start the services and enable them at boot
systemctl start libvirtd.service openstack-nova-compute.service
systemctl enable libvirtd.service openstack-nova-compute.service
Add the compute node on the controller
- Source the keystone admin credentials
cd ~
. admin-openrc
- Confirm the compute service is registered in the database
openstack compute service list --service nova-compute
- Discover the compute node
# Manual discovery
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
# Periodic automatic discovery
# 1、Edit the /etc/nova/nova.conf configuration file
[scheduler]
discover_hosts_in_cells_interval=300
# 2、Restart the nova services
systemctl restart openstack-nova-api.service openstack-nova-scheduler.service \
openstack-nova-conductor.service openstack-nova-novncproxy.service
Neutron
Execution flow
How Neutron and nova cooperate
The Neutron component family
Layer-3 network topology
Controller node deployment steps
- Create the database and grant privileges
mysql -u root -p
create database neutron;
grant all privileges on neutron.* to 'neutron_user'@'controller' identified by 'neutron_pass';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron_user'@'localhost' IDENTIFIED BY 'neutron_pass';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron_user'@'%' IDENTIFIED BY 'neutron_pass';
flush privileges;
quit;
- Source the Keystone admin credentials
cd ~
. admin-openrc
- Create the Neutron service credentials
openstack user create --domain default --password neutron_pass neutron
openstack role add --project service --user neutron admin
openstack service create --name neutron --description "OpenStack Networking" network
- Create the Neutron API endpoints
openstack endpoint create --region RegionOne network public http://controller:9696
openstack endpoint create --region RegionOne network internal http://controller:9696
openstack endpoint create --region RegionOne network admin http://controller:9696
- Install and configure
yum install -y openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables
- Edit the neutron.conf configuration file
# 1、Back up the original file and strip comments and blank lines
sed -i.default -e '/^#/d' -e '/^$/d' /etc/neutron/neutron.conf
# 2、Set the following sections, vim /etc/neutron/neutron.conf
[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:openstack@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[database]
connection = mysql+pymysql://neutron_user:neutron_pass@controller/neutron
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers =controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron_pass
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
[nova]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova_pass
- Configure the Modular Layer 2 (ML2) plug-in
# 1、Back up the original file and strip comments and blank lines
sed -i.default -e '/^#/d' -e '/^$/d' /etc/neutron/plugins/ml2/ml2_conf.ini
# 2、Set the following sections, vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider
[ml2_type_vxlan]
vni_ranges = 1:1000
[securitygroup]
enable_ipset = true
- Configure the Linux bridge agent
# 1、Back up the original file and strip comments and blank lines
sed -i.default -e '/^#/d' -e '/^$/d' /etc/neutron/plugins/ml2/linuxbridge_agent.ini
# 2、Set the following sections, vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:ens37
[vxlan]
enable_vxlan = false
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Note: ens37 is the NIC on the instance (provider) network, not the management network NIC
- Make sure the Linux kernel supports bridge filters
# Add the following to /etc/sysctl.conf, then save and exit:
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
# Run the following to load the bridge filter module:
modprobe br_netfilter
sysctl -p
sed -i '$amodprobe br_netfilter' /etc/rc.local
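Note that on CentOS 7 /etc/rc.d/rc.local is not executable by default, so the appended modprobe line only runs at boot once you mark it executable; the module and sysctl values can then be verified:
chmod +x /etc/rc.d/rc.local
lsmod | grep br_netfilter                  # module loaded
sysctl net.bridge.bridge-nf-call-iptables  # should print "... = 1"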
- Configure the layer-3 agent (per the official OpenStack guide; if you use it, also start and enable neutron-l3-agent.service along with the services below)
# Edit ``/etc/neutron/l3_agent.ini`` and, in the ``[DEFAULT]`` section, configure the Linux bridge interface driver and the external network bridge:
[DEFAULT]
...
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
external_network_bridge =
- Configure the DHCP agent
# 1、Back up the original file and strip comments and blank lines
sed -i.default -e '/^#/d' -e '/^$/d' /etc/neutron/dhcp_agent.ini
# 2、Set the following sections, vim /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
- Configure the metadata agent
# 1、Back up the original file and strip comments and blank lines
sed -i.default -e '/^#/d' -e '/^$/d' /etc/neutron/metadata_agent.ini
# 2、Set the following sections, vim /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = metadata_secret
- Configure the [neutron] section of /etc/nova/nova.conf
[neutron]
url = http://controller:9696
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron_pass
service_metadata_proxy = true
metadata_proxy_shared_secret = metadata_secret
- Create the symlink the network service initialization scripts expect
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
- Sync the database
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
- Restart the Compute API service
systemctl restart openstack-nova-api.service
- Start the network services and enable them at boot
systemctl start neutron-server.service \
neutron-linuxbridge-agent.service \
neutron-dhcp-agent.service \
neutron-metadata-agent.service
systemctl enable neutron-server.service \
neutron-linuxbridge-agent.service \
neutron-dhcp-agent.service \
neutron-metadata-agent.service
Install Neutron on the compute nodes (the same on every compute node)
- Install the packages
yum install -y openstack-neutron-linuxbridge ebtables ipset
- Edit the neutron.conf configuration file
# 1、Back up the original file and strip comments and blank lines
sed -i.default -e '/^#/d' -e '/^$/d' /etc/neutron/neutron.conf
# 2、Set the following sections, vim /etc/neutron/neutron.conf
[DEFAULT]
transport_url = rabbit://openstack:openstack@controller
auth_strategy = keystone
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers =controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron_pass
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
- Configure the Linux bridge agent
# 1、Back up the original file and strip comments and blank lines
sed -i.bak -e '/^#/d' -e '/^$/d' /etc/neutron/plugins/ml2/linuxbridge_agent.ini
# 2、Set the following sections, vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:eno33554984
[vxlan]
enable_vxlan = false
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
- Make sure the Linux kernel supports bridge filters
# 1、Add the configuration
cat >>/etc/sysctl.conf<<EOF
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
# 2、Load the bridge filter module and apply the settings
modprobe br_netfilter
sysctl -p
- Edit the /etc/nova/nova.conf file
# 1、Back up the original file and strip comments and blank lines
sed -i.default -e '/^#/d' -e '/^$/d' /etc/nova/nova.conf
# 2、Set the following sections, vim /etc/nova/nova.conf
[neutron]
url = http://controller:9696
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron_pass
- Restart the services
systemctl restart openstack-nova-compute.service
systemctl enable neutron-linuxbridge-agent.service
systemctl start neutron-linuxbridge-agent.service
openstack extension list --network
openstack network agent list # note: there should be 4 agents in total; two of them being Linux bridge agents means success
Create the network (controller node)
- Source the keystone admin credentials
cd ~
. admin-openrc
- Create the network
openstack network create --share --external \
--provider-physical-network provider \
--provider-network-type flat provider
openstack network list # view it
- Create the subnet
openstack subnet create --network provider \
--allocation-pool start=172.16.0.100,end=172.16.0.200 \
--dns-nameserver 172.16.0.2 --gateway 172.16.0.11 \
--subnet-range 172.16.0.0/24 provider-sub
openstack subnet list
- Create a flavor
openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano
# openstack flavor create   creates a flavor
# --id     flavor ID
# --vcpus  number of vCPUs
# --ram    memory size in MB
# --disk   disk size in GB
Create an instance
- Source the demo user credentials
cd ~
. demo-openrc
- Generate a key pair
ssh-keygen -q -N ""
- Upload the public key to openstack
openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey
- Verify the key pair was created
nova keypair-list
- Add security group rules
# Allow ICMP (ping)
openstack security group rule create --proto icmp default
# Allow secure shell (SSH) access
openstack security group rule create --proto tcp --dst-port 22 default
- Look up the information needed to create an instance
openstack flavor list
openstack image list
openstack network list
openstack security group list
openstack keypair list
- Create and boot the instance
openstack server create --flavor m1.nano --image cirros \
--nic net-id=9e07c3d5-9a9e-496c-90b6-ba294f8b0699 \
--security-group default \
--key-name mykey hello-instance
# --flavor: flavor name
# --image: image name
# --nic: network ID from the openstack network list output above (the network ID, not the subnet)
# --security-group: security group name
- Check the instance status
[root@controller ~]# openstack server list
+--------------------------------------+----------------+--------+---------------------+--------+---------+
| ID | Name | Status | Networks | Image | Flavor |
+--------------------------------------+----------------+--------+---------------------+--------+---------+
| 0d94ce6d-ae08-4ace-a183-3ecd44ccba56 | hello-instance | ACTIVE | provider=10.0.0.138 | cirros | m1.nano |
+--------------------------------------+----------------+--------+---------------------+--------+---------+
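Once the instance is ACTIVE, you can fetch its noVNC console URL (standard openstack CLI) and open it in a browser, or reach the guest over the provider network (the address below comes from the server list output above; cirros is the image's default user):
openstack console url show hello-instance
ping -c3 10.0.0.138
ssh cirros@10.0.0.138   # key injected via mykey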
Dashboard (controller node)
- Install
yum install -y openstack-dashboard
- Edit /etc/openstack-dashboard/local_settings
sed -i.bak -e '/^#/d' -e '/^$/d' /etc/openstack-dashboard/local_settings
# Configure the dashboard on the controller node to use the OpenStack services
OPENSTACK_HOST = "controller"
# Allow all hosts to access the dashboard:
ALLOWED_HOSTS = ['*', ]
# Configure the memcached session storage service
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': 'controller:11211',
}
}
# Enable the Identity API version 3
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
# Enable support for domains
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
# Configure the API versions:
OPENSTACK_API_VERSIONS = {
"identity": 3,
"image": 2,
"volume": 2,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
OPENSTACK_NEUTRON_NETWORK = {
'enable_router': False,
'enable_quotas': False,
'enable_distributed_router': False,
'enable_ha_router': False,
'enable_lb': False,
'enable_firewall': False,
'enable_vpn': False,
'enable_fip_topology_check': False,
}
TIME_ZONE = "Asia/Shanghai"
- Restart the services
systemctl restart httpd.service memcached.service
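Assuming the default WEBROOT, the dashboard should now be reachable at http://controller/dashboard (or http://192.168.1.5/dashboard from the host machine); log in with domain Default and the admin/admin_pass or myuser/myuser_pass accounts created earlier.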