OpenStack Queens All-in-One Manual Installation

    • Basic information
    • Preparation
    • RabbitMQ cluster
    • Memcached cluster
    • Keystone
    • Glance cluster
    • Nova control node cluster
      • Nova compute service
    • Neutron service
    • Horizon cluster

We deployed OpenStack because the number of hand-managed KVM virtual machines kept growing, and the management cost with it, and with an eye to future needs. This post records the OpenStack deployment and installation process.

Basic information

MySQL database
IP: 192.168.1.1
User: user
Pwd: Passwd

IP range available to the OpenStack cluster (start/netmask/end)
192.168.1.1/255.255.255.0/192.168.1.154
IP range available to virtual machines
openstack_dhcp_pool: 192.168.8.1/255.255.252.0/192.168.11.254
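As a rough sanity check on the netmasks above (purely illustrative arithmetic; the actual pools listed are carved out of these subnets and are narrower), the maximum usable host count per prefix length is:

```shell
# Illustrative only: maximum usable hosts for the netmasks above.
# /24 = 255.255.255.0, /22 = 255.255.252.0
prefix_to_hosts() {
    # total addresses minus the network and broadcast addresses
    echo $(( (1 << (32 - $1)) - 2 ))
}
prefix_to_hosts 24   # 254
prefix_to_hosts 22   # 1022
```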

Preparation

  1. Configure the yum repository used to install OpenStack
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup

curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

yum clean all
  2. Install the OpenStack client
    Install the OpenStack client on the OpenStack machines via Salt
yum install python-openstackclient -y

Database installation

yum install mariadb mariadb-server python2-PyMySQL -y

Create and edit the file /etc/my.cnf.d/openstack.cnf

[mysqld]
bind-address = 192.168.1.1
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8

Enable and start the service

systemctl enable mariadb.service
systemctl start mariadb.service

Set the database password

mysql_secure_installation

RabbitMQ Cluster

RabbitMQ is deployed on openstack001.test.com

  1. Install RabbitMQ
 [root@openstack001 ~]# yum install erlang rabbitmq-server -y
  2. Start the service and enable it at boot
[root@openstack001 ~]# systemctl enable rabbitmq-server.service 
[root@openstack001 ~]# systemctl start rabbitmq-server.service
  3. Create a RabbitMQ account
[root@openstack001 ~]# rabbitmqctl add_user openstack Passwd
Tag the new account as an administrator
[root@openstack001 ~]# rabbitmqctl set_user_tags openstack administrator
Grant the new account permissions (configure/write/read)
[root@openstack001 ~]# rabbitmqctl set_permissions -p "/" openstack ".*" ".*" ".*"
List accounts
[root@openstack001 ~]# rabbitmqctl list_users
  4. Enable the web management plugin
[root@openstack001 ~]# rabbitmq-plugins enable rabbitmq_management

Access it in a browser, e.g. http://192.168.1.1:15672
Username: openstack
Password: Passwd

Memcached Cluster

  1. Install memcached
[root@openstack001 ~]# yum install memcached python-memcached -y
  2. Configure memcached
    On every node running the memcached service, set the listen address
[root@openstack001 ~]# sed -i 's|127.0.0.1,::1|0.0.0.0|g' /etc/sysconfig/memcached
  3. Enable at boot and start
systemctl enable memcached.service
systemctl start memcached.service
systemctl status memcached.service
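The sed one-liner in step 2 can be previewed on a scratch copy before touching the real /etc/sysconfig/memcached. A minimal sketch (the file content below is a typical CentOS 7 default, reproduced here as an assumption):

```shell
# Demonstrate the substitution on a scratch file, not the real config.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
PORT="11211"
USER="memcached"
OPTIONS="-l 127.0.0.1,::1"
EOF
sed -i 's|127.0.0.1,::1|0.0.0.0|g' "$tmp"
result=$(grep '^OPTIONS' "$tmp")
echo "$result"   # OPTIONS="-l 0.0.0.0"
rm -f "$tmp"
```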

Keystone

  1. Create the keystone database
    Create the database; its tables are populated later by db_sync
CREATE DATABASE keystone;
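The CREATE DATABASE statement alone grants no access. If the shared account from the basic-information section ('user'/'Passwd') has not already been granted privileges on this schema elsewhere, something like the following is also needed (a hypothetical grant; adjust the host pattern to your setup). The same applies to the glance and nova databases created later:

```sql
-- assumption: 'user'@'%' is the shared account from the basic information section
GRANT ALL PRIVILEGES ON keystone.* TO 'user'@'%' IDENTIFIED BY 'Passwd';
FLUSH PRIVILEGES;
```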
  2. Install Keystone
[root@openstack001 ~]# yum install openstack-keystone httpd mod_wsgi mod_ssl -y
  3. Configure /etc/keystone/keystone.conf
[root@openstack001.test.com ~]# cat /etc/keystone/keystone.conf
[DEFAULT]
admin_token = 3220926717d6a2d33771
[application_credential]
[assignment]
[auth]
[cache]
backend = oslo_cache.memcache_pool
enabled = true
memcache_servers = openstack001.test.com:11211
[catalog]
[cors]
[credential]
[database]
connection = mysql+pymysql://user:Passwd@192.168.1.1/keystone
[domain_config]
[endpoint_filter]
[endpoint_policy]
[eventlet_server]
[federation]
[fernet_tokens]
[healthcheck]
[identity]
[identity_mapping]
[ldap]
[matchmaker_redis]
[memcache]
[oauth1]
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[paste_deploy]
[policy]
[profiler]
[resource]
[revoke]
[role]
[saml]
[security_compliance]
[shadow_users]
[signing]
[token]
provider = fernet
[tokenless_auth]
[trust]
[unified_limit]
  4. Sync the keystone database
[root@openstack001 ~]# su -s /bin/sh -c "keystone-manage db_sync" keystone
Verify
[root@openstack001 ~]# mysql -h 192.168.1.1 -uuser -pPasswd  -e "use keystone;show tables;"
  5. Initialize the Fernet keys
    Pick any control node (openstack001) for the Fernet key initialization; the keys and directories are generated under /etc/keystone/
[root@openstack001 ~]# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
[root@openstack001 ~]# keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
  6. Configure httpd.conf
[root@openstack001 ~]# cp /etc/httpd/conf/httpd.conf /etc/httpd/conf/httpd.conf.bak
[root@openstack001 ~]# sed -i "s/#ServerName www.example.com:80/ServerName ${HOSTNAME}/" /etc/httpd/conf/httpd.conf
[root@openstack001 ~]# sed -i "s/Listen\ 80/Listen\ 192.168.1.1:80/g" /etc/httpd/conf/httpd.conf
  7. Configure wsgi-keystone.conf
    Copy the wsgi-keystone.conf file, or create a symlink to it
[root@openstack001 ~]# cp /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
Edit wsgi-keystone.conf; mind each node's IP address or hostname
sed -i "s/Listen\ 5000/Listen\ 192.168.1.1:5000/g" /etc/httpd/conf.d/wsgi-keystone.conf
sed -i "s/Listen\ 35357/Listen\ 192.168.1.1:35357/g" /etc/httpd/conf.d/wsgi-keystone.conf
sed -i "s/*:5000/192.168.1.1:5000/g" /etc/httpd/conf.d/wsgi-keystone.conf
sed -i "s/*:35357/192.168.1.1:35357/g" /etc/httpd/conf.d/wsgi-keystone.conf
  8. Bootstrap the identity service
    Initialize the admin (management) user and password, the three API endpoints, the service entity, the region, etc.
[root@openstack001 ~]# keystone-manage bootstrap --bootstrap-password Passwd \
  --bootstrap-admin-url http://openstack001.test.com:35357/v3/ \
  --bootstrap-internal-url http://openstack001.test.com:5000/v3/ \
  --bootstrap-public-url http://openstack001.test.com:5000/v3/ \
  --bootstrap-region-id Test 
  9. Start the service
 systemctl enable httpd.service
 systemctl restart httpd.service
 systemctl status httpd.service
  10. Set environment variables
export OS_USERNAME=admin
export OS_PASSWORD=Passwd
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://openstack001.test.com:35357/v3
export OS_IDENTITY_API_VERSION=3
  11. Create domains, projects, users, and roles
[root@openstack001 ~]# openstack domain list
[root@openstack001 ~]# openstack project create --domain default --description "Demo Project" demo
[root@openstack001 ~]# openstack user create --domain default --password=Passwd demo

[root@openstack001 ~]# openstack role create user
[root@openstack001 ~]# openstack role add --project demo --user demo user
[root@openstack001 ~]# openstack user list
[root@openstack001 ~]# openstack role list
[root@openstack001 ~]# openstack role assignment list
  12. OpenStack client environment scripts
    admin-openrc
    An OpenStack client environment script defines the environment variables the client needs to call the OpenStack APIs, so they don't have to be passed on the command line. Different user roles need different scripts; here we take the admin user defined in the "bootstrap" step as an example, create its environment script, then distribute it as needed to the nodes that run the OpenStack client tools.
    The script is usually created in the user's home directory
[root@openstack001 ~]# touch admin-openrc
[root@openstack001 ~]# chmod u+x admin-openrc
[root@openstack001 ~]# vim admin-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=Passwd
export OS_AUTH_URL=http://openstack001.test.com:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
Verify
[root@openstack001 ~]# openstack token issue 

demo-openrc is the same as admin-openrc; note the different project/user/password
[root@openstack001 ~]# touch demo-openrc
[root@openstack001 ~]# chmod u+x demo-openrc 
[root@openstack001 ~]# vim demo-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=Passwd
export OS_AUTH_URL=http://openstack001.test.com:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
Verify
[root@openstack001 ~]# openstack token issue

Glance Cluster

  1. Create the glance database
    Create the database; its tables are populated later by db_sync
CREATE DATABASE glance;
  2. Create the glance API entities
    Calling the Keystone service requires credentials, so source the environment script
[root@openstack001 ~]# . admin-openrc

Create the service project
Create one project that the glance/nova/neutron services join; the service project lives in the "default" domain

[root@openstack001 ~]# openstack project create --domain default --description "Service Project" service
Create the glance user, in the "default" domain
[root@openstack001 ~]# openstack user create --domain default --password=Passwd glance
Grant the glance user the admin role
[root@openstack001 ~]# openstack role add --project service --user glance admin
Create the glance service entity, of type "image"
[root@openstack001 ~]# openstack service create --name glance --description "OpenStack Image" image

Create the glance API endpoints. Note that --region must match the region generated when the admin user was bootstrapped; the API address uses the VIP throughout (if public/internal/admin use different VIPs, keep them distinct); the service type is image.
 [root@openstack001 ~]# openstack endpoint create --region Test image public http://openstack001.test.com:9292
 [root@openstack001 ~]# openstack endpoint create --region Test image internal http://openstack001.test.com:9292
 [root@openstack001 ~]# openstack endpoint create --region Test image admin http://openstack001.test.com:9292
  3. Install Glance
[root@openstack001 ~]# yum install openstack-glance python-glance python-glanceclient -y
  4. Configure glance-api.conf
    Adjust the "bind_host" parameter per node; mind the ownership of glance-api.conf: root:glance
cp /etc/glance/glance-api.conf /etc/glance/glance-api.conf.bak
[root@openstack001.test.com ~]# cat /etc/glance/glance-api.conf
[DEFAULT]
enable_v1_api = false
bind_host = 192.168.1.1
[cors]
[database]
connection = mysql+pymysql://user:Passwd@192.168.1.1/glance
[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /data/glance/images/
[image_format]
[keystone_authtoken]
auth_uri = http://openstack001.test.com:5000
auth_url = http://openstack001.test.com:35357
memcache_servers = openstack001.test.com:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = Passwd
[matchmaker_redis]
[oslo_concurrency]
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[paste_deploy]
flavor = keystone
[profiler]
[store_type_location_strategy]
[task]
[taskflow_executor]

Create the image storage directory and set its ownership:

[root@openstack001 ~]# mkdir -p /data/glance/images/
[root@openstack001 ~]# chown glance:glance /data/glance/images/
  5. Configure glance-registry.conf (optional)
    Mind the ownership of glance-registry.conf: root:glance
[root@openstack001.test.com ~]# cat /etc/glance/glance-registry.conf
[DEFAULT]
bind_host = 192.168.1.1
[database]
connection = mysql+pymysql://user:Passwd@192.168.1.1/glance
[keystone_authtoken]
auth_uri = http://openstack001.test.com:5000
auth_url = http://openstack001.test.com:35357
memcache_servers = openstack001.test.com:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = Passwd
[matchmaker_redis]
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_policy]
[paste_deploy]
flavor = keystone
[profiler]
  6. Sync the glance database
Ignore the "deprecated" messages in the output
[root@openstack001 ~]# su -s /bin/sh -c "glance-manage db_sync" glance
Verify
[root@openstack001 ~]# mysql -h 192.168.1.1 -uuser -pPasswd  -e "use glance;show tables;"
  7. Start the services
 [root@openstack001 ~]# systemctl enable openstack-glance-api.service openstack-glance-registry.service
[root@openstack001 ~]# systemctl restart openstack-glance-api.service openstack-glance-registry.service

Check the service status
[root@openstack001 ~]# systemctl status openstack-glance-api.service openstack-glance-registry.service
Check the listening ports
[root@openstack001 ~]# netstat -tunlp | grep python2
  8. Verification
    Download a test image
[root@openstack001 ~]# wget http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img

Upload the image
"Uploading" converts the downloaded raw image into the given format and pushes it to the image service. The format is qcow2 with a bare container, and visibility is set to public. Once created, a file named after the image ID appears in the configured storage directory.

[root@openstack001 ~]# . admin-openrc 
[root@openstack001 ~]# openstack image create "cirros-qcow2" \
  --file ~/cirros-0.3.5-x86_64-disk.img \
  --disk-format qcow2 --container-format bare \
  --public
List images
[root@openstack001 ~]# openstack image list

Nova Control Node Cluster

  1. Create the nova databases
    Create the databases; their tables are populated later by db_sync. Nova uses four databases, all granted to the same user.
    Placement handles resource accounting; its most commonly used API calls fetch candidate resources and claim them.
CREATE DATABASE nova_api;
CREATE DATABASE nova;
CREATE DATABASE nova_cell0;
CREATE DATABASE nova_placement;
  2. Create the nova/placement API entities
Calling the nova services requires credentials, so source the environment script
[root@openstack001 ~]# . admin-openrc
Create the nova/placement users
The nova/placement users live in the "default" domain
[root@openstack001 ~]# openstack user create --domain default --password=Passwd nova
[root@openstack001 ~]# openstack user create --domain default --password=Passwd placement
Grant the nova/placement users the admin role
[root@openstack001 ~]# openstack role add --project service --user nova admin 
[root@openstack001 ~]# openstack role add --project service --user placement admin
Create the nova/placement service entities: nova of type "compute", placement of type "placement"
[root@openstack001 ~]# openstack service create --name nova --description "OpenStack Compute" compute
[root@openstack001 ~]# openstack service create --name placement --description "Placement API" placement
Create the nova/placement API endpoints
 [root@openstack001 ~]# openstack endpoint create --region Test compute public http://openstack001.test.com:8774/v2.1
 [root@openstack001 ~]# openstack endpoint create --region Test compute internal http://openstack001.test.com:8774/v2.1
 [root@openstack001 ~]# openstack endpoint create --region Test compute admin http://openstack001.test.com:8774/v2.1
 [root@openstack001 ~]# openstack endpoint create --region Test placement public http://openstack001.test.com:8778
 [root@openstack001 ~]# openstack endpoint create --region Test placement internal http://openstack001.test.com:8778
 [root@openstack001 ~]# openstack endpoint create --region Test placement admin http://openstack001.test.com:8778
  3. Install the nova services
[root@openstack001 ~]# yum install openstack-nova-api openstack-nova-conductor \
   openstack-nova-console openstack-nova-novncproxy \
   openstack-nova-scheduler openstack-nova-placement-api -y
  4. Configure nova.conf
    Note that in this installation the control node services and the compute node are colocated; mind the ownership of nova.conf: root:nova
[root@openstack001.test.com ~]# cat  /etc/nova/nova.conf
[DEFAULT]
my_ip=192.168.1.1
osapi_compute_listen=$my_ip
osapi_compute_listen_port=8774
metadata_listen=$my_ip
metadata_listen_port=8775
enabled_apis = osapi_compute,metadata
transport_url=rabbit://openstack:Passwd@openstack001.test.com:5672
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
instances_path=/data/nova/instances
allow_resize_to_same_host=true
dhcp_domain=test.com
reserved_host_disk_mb=10240
reserved_host_memory_mb=4096
cpu_allocation_ratio=3.0
ram_allocation_ratio=1.0
service_down_time=120
rpc_response_timeout = 300
[api]
auth_strategy=keystone
[api_database]
connection = mysql+pymysql://user:Passwd@192.168.1.1/nova_api
[barbican]
[cache]
backend=oslo_cache.memcache_pool
enabled=True
memcache_servers = openstack001.test.com:11211
[cells]
[cinder]
[compute]
[conductor]
[console]
[consoleauth]
[cors]
[crypto]
[database]
connection = mysql+pymysql://user:Passwd@192.168.1.1/nova
[devices]
[ephemeral_storage_encryption]
[filter_scheduler]
enabled_filters=RetryFilter,AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter
[glance]
api_servers = http://openstack001.test.com:9292
[guestfs]
[healthcheck]
[hyperv]
[ironic]
[key_manager]
[keystone]
[keystone_authtoken]
auth_uri = http://openstack001.test.com:5000
auth_url = http://openstack001.test.com:35357
memcached_servers = openstack001.test.com:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = Passwd
[libvirt]
virt_type=kvm
[matchmaker_redis]
[metrics]
[mks]
[neutron]
url = http://openstack001.test.com:9696
auth_url = http://openstack001.test.com:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = Test
project_name = service
username = neutron
password = Passwd
service_metadata_proxy = true
metadata_proxy_shared_secret = Passwd
[notifications]
[osapi_v21]
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[pci]
[placement]
region_name = Test
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://openstack001.test.com:35357/v3
username = placement
password = Passwd
[quota]
[rdp]
[remote_debug]
[scheduler]
[serial_console]
[service_user]
[spice]
[upgrade_levels]
[vault]
[vendordata_dynamic_auth]
[vmware]
[vnc]
enabled=true
server_listen=$my_ip
server_proxyclient_address=$my_ip
#novncproxy_base_url=http://openstack001.test.com:6080/vnc_auto.html
#novncproxy_base_url=http://$my_ip:6080/vnc_auto.html
novncproxy_host=$my_ip
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=192.168.1.1
novncproxy_base_url=http://192.168.1.1:6080/vnc_auto.html
[workarounds]
[wsgi]
[xenserver]
[xvp]
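
The cpu_allocation_ratio/ram_allocation_ratio/reserved_host_* values above directly determine how much capacity the scheduler will hand out. A sketch of the arithmetic, using a hypothetical host size (16 physical cores, 64 GiB RAM — these numbers are assumptions for illustration, not from this deployment):

```shell
# Hypothetical host: 16 cores, 64 GiB RAM; ratios from the nova.conf above.
phys_cores=16
phys_ram_mb=65536
cpu_allocation_ratio=3          # cpu_allocation_ratio=3.0
ram_allocation_ratio=1          # ram_allocation_ratio=1.0
reserved_host_memory_mb=4096

schedulable_vcpus=$(( phys_cores * cpu_allocation_ratio ))
schedulable_ram_mb=$(( (phys_ram_mb - reserved_host_memory_mb) * ram_allocation_ratio ))
echo "vCPUs the scheduler can place: $schedulable_vcpus"    # 48
echo "RAM MB the scheduler can place: $schedulable_ram_mb"  # 61440
```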


  5. Configure 00-nova-placement-api.conf
[root@openstack001 ~]# cp /etc/httpd/conf.d/00-nova-placement-api.conf /etc/httpd/conf.d/00-nova-placement-api.conf.bak
[root@openstack001 ~]# sed -i "s/Listen\ 8778/Listen\ 192.168.1.1:8778/g" /etc/httpd/conf.d/00-nova-placement-api.conf
[root@openstack001 ~]# sed -i "s/*:8778/192.168.1.1:8778/g" /etc/httpd/conf.d/00-nova-placement-api.conf
[root@openstack001 ~]# echo "
#Placement API
<Directory /usr/bin>
   <IfVersion >= 2.4>
      Require all granted
   </IfVersion>
   <IfVersion < 2.4>
      Order allow,deny
      Allow from all
   </IfVersion>
</Directory>
" >> /etc/httpd/conf.d/00-nova-placement-api.conf
Restart httpd to bring up the placement API listener
[root@openstack001 ~]# systemctl restart httpd
  6. Sync the nova databases
[root@openstack001 ~]# su -s /bin/sh -c "nova-manage api_db sync" nova
Register the cell0 database
[root@openstack001 ~]# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
Create the cell1 cell
[root@openstack001 ~]# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
Sync the nova database; ignore the "deprecated" messages
[root@openstack001 ~]# su -s /bin/sh -c "nova-manage db sync" nova

Note:
On this version, syncing the database reports an error while importing the tables: /usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported

exception.NotSupportedWarning
References for the fix:
bug: https://bugs.launchpad.net/nova/+bug/1746530
patch: https://github.com/openstack/oslo.db/commit/c432d9e93884d6962592f6d19aaec3f8f66ac3a2

  7. Verify

cell0 and cell1 are registered correctly
[root@openstack001 ~]# nova-manage cell_v2 list_cells
Check the tables
[root@openstack001 ~]# mysql -h 192.168.1.1 -uuser -pPasswd  -e "use nova_api;show tables;"
[root@openstack001 ~]# mysql -h 192.168.1.1 -uuser -pPasswd  -e "use nova;show tables;" 
[root@openstack001 ~]# mysql -h 192.168.1.1 -uuser -pPasswd  -e "use nova_cell0;show tables;"
  8. Start the services
systemctl enable openstack-nova-api.service \
openstack-nova-consoleauth.service \
openstack-nova-scheduler.service \
openstack-nova-conductor.service \
openstack-nova-novncproxy.service

Start
 systemctl restart openstack-nova-api.service
 systemctl restart openstack-nova-consoleauth.service
 systemctl restart openstack-nova-scheduler.service
 systemctl restart openstack-nova-conductor.service
 systemctl restart openstack-nova-novncproxy.service

Check status
systemctl status openstack-nova-api.service \
openstack-nova-consoleauth.service \
openstack-nova-scheduler.service \
openstack-nova-conductor.service \
openstack-nova-novncproxy.service

Check the listening ports
[root@openstack001 ~]# netstat -tunlp | egrep '8774|8775|8778|6080'
  9. Verify
[root@openstack001 ~]# . admin-openrc
List the service components and their status; the command "nova service-list" also works
[root@openstack001 ~]# openstack compute service list
Show the API endpoints
[root@openstack001 ~]# openstack catalog list
Check that the cells and the placement API are working
[root@openstack001 ~]# nova-status upgrade check

Nova Compute Service

  1. Install nova-compute
[root@openstack001 ~]# yum install python-openstackclient openstack-utils openstack-selinux -y
[root@openstack001 ~]# yum install openstack-nova-compute -y

Since the control node and the compute node share one machine, the compute side was already configured together with the control node services.

Create the instance disk directory /data/nova/instances and set its ownership:

[root@openstack001 ~]# mkdir -p /data/nova/instances
[root@openstack001 ~]# chown nova:nova /data/nova/instances
  2. Start the services
[root@openstack001 ~]# systemctl enable libvirtd.service openstack-nova-compute.service
Start
[root@openstack001 ~]# systemctl restart libvirtd.service
[root@openstack001 ~]# systemctl restart openstack-nova-compute.service
Check status
systemctl status libvirtd.service
systemctl status openstack-nova-compute.service
  3. Add the compute node to the cell database
[root@openstack001 ~]# . admin-openrc
[root@openstack001 ~]# openstack compute service list --service nova-compute
Manually discover compute hosts, i.e., add them to the cell database
[root@openstack001 ~]# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova

Neutron Service

Note: in this installation all network services run on one node.

  1. Create the neutron database
    Neutron is deployed only on the openstack001 node, behind haproxy; the API uses the 192.168.1.1 VIP.
CREATE DATABASE neutron;
  2. Create the neutron API entities
Calling the neutron service requires credentials, so source the environment script
[root@openstack001 ~]# . admin-openrc 
Create the neutron user, in the "default" domain
[root@openstack001 ~]# openstack user create --domain default --password=Passwd neutron
Grant the neutron user the admin role
[root@openstack001 ~]# openstack role add --project service --user neutron admin
Create the neutron service entity, of type "network"
[root@openstack001 ~]# openstack service create --name neutron --description "Test OpenStack Networking" network

[root@openstack001 ~]# openstack endpoint create --region Test network public http://openstack001.test.com:9696
 [root@openstack001 ~]# openstack endpoint create --region Test network internal http://openstack001.test.com:9696
 [root@openstack001 ~]# openstack endpoint create --region Test network admin http://openstack001.test.com:9696
  3. Install Neutron
[root@openstack001 ~]# yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge python-neutronclient ebtables ipset -y
  4. Configure neutron.conf
    Mind the ownership of neutron.conf: root:neutron
[root@openstack001.test.com ~]# cat /etc/neutron/neutron.conf
[DEFAULT]
bind_host = 192.168.1.1
auth_strategy = keystone
core_plugin = ml2
service_plugins = router
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
dhcp_agents_per_network = 1
transport_url = rabbit://openstack:Passwd@openstack001.test.com:5672
[agent]
[cors]
[database]
connection = mysql+pymysql://user:Passwd@192.168.1.1/neutron
[keystone_authtoken]
auth_uri = http://openstack001.test.com:5000
auth_url = http://openstack001.test.com:35357
memcache_servers = openstack001.test.com:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = Passwd
[matchmaker_redis]
[nova]
auth_url = http://openstack001.test.com:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = Test
project_name = service
username = nova
password = Passwd
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[quotas]
[ssl]
  5. Configure ml2_conf.ini
[root@openstack001.test.com ~]#  cat /etc/neutron/plugins/ml2/ml2_conf.ini
[DEFAULT]
[l2pop]
[ml2]
type_drivers = local,flat,vlan
tenant_network_types = local,flat,vlan
mechanism_drivers = linuxbridge
extension_drivers = port_security
[ml2_type_flat]
flat_networks = external
[ml2_type_geneve]
[ml2_type_gre]
[ml2_type_vlan]
network_vlan_ranges = external:100:3500
[ml2_type_vxlan]
[securitygroup]
enable_ipset = true

Service initialization reads the settings from ml2_conf.ini, but through the path /etc/neutron/plugin.ini
[root@openstack001 ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
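The network_vlan_ranges = external:100:3500 line in ml2_conf.ini above bounds tenant VLAN allocation. A trivial check of how many VLAN IDs that range provides (arithmetic only, nothing queried from Neutron):

```shell
# external:100:3500 -> inclusive range of VLAN IDs
low=100
high=3500
vlan_count=$(( high - low + 1 ))
echo "$vlan_count VLAN IDs available"   # 3401
```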
  6. Configure linuxbridge_agent.ini
[root@openstack001.test.com ~]# cat /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[DEFAULT]
[agent]
[linux_bridge]
physical_interface_mappings = external:bond0
[network_log]
[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
[vxlan]
enable_vxlan = false

Configure the kernel bridge parameters (whether bridged traffic is filtered). If "sysctl -p" fails with a "No such file or directory" error, the kernel module "br_netfilter" needs to be loaded first.
"modinfo br_netfilter" shows the module's information; "modprobe br_netfilter" loads it

 echo "# bridge" >> /etc/sysctl.conf
 echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf
 echo "net.bridge.bridge-nf-call-ip6tables = 1" >> /etc/sysctl.conf
 sysctl -p
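If br_netfilter had to be loaded with modprobe, that load does not survive a reboot. One way to persist it on CentOS 7 (assuming systemd's modules-load.d mechanism, which is standard there) is a one-line config fragment:

```
# /etc/modules-load.d/br_netfilter.conf
br_netfilter
```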
  7. Configure dhcp_agent.ini
[root@openstack001.test.com ~]# cat  /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True
dhcp_lease_duration = -1
[agent]
[ovs]
  8. Configure metadata_agent.ini
    metadata_proxy_shared_secret must match the value in /etc/nova/nova.conf

cat /etc/neutron/metadata_agent.ini

[DEFAULT]
nova_metadata_host = openstack001.test.com
metadata_proxy_shared_secret = Passwd
[agent]
[cache]
memcache_servers = openstack001.test.com:11211
  9. Configure nova.conf
    Only the "[neutron]" section of nova.conf is involved; metadata_proxy_shared_secret must match the value in /etc/neutron/metadata_agent.ini

cat /etc/nova/nova.conf

[neutron]
url = http://openstack001.test.com:9696
auth_url = http://openstack001.test.com:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = Test
project_name = service
username = neutron
password = Passwd
service_metadata_proxy = true
metadata_proxy_shared_secret = Passwd
  10. Sync the neutron database
 [root@openstack001 ~]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
Verify
[root@openstack001 ~]# mysql -h 192.168.1.1 -uuser -pPasswd  -e "use neutron;show tables;"
  11. Start the services
Since the nova configuration file changed, restart the nova service first
[root@openstack001 ~]# systemctl restart openstack-nova-api.service

Enable at boot
[root@openstack001 ~]# systemctl enable neutron-server.service \
neutron-linuxbridge-agent.service \
neutron-dhcp-agent.service \
neutron-metadata-agent.service
Start
systemctl restart neutron-server.service
systemctl restart neutron-linuxbridge-agent.service
systemctl restart neutron-dhcp-agent.service
systemctl restart neutron-metadata-agent.service
Check
systemctl status neutron-server.service \
neutron-linuxbridge-agent.service \
neutron-dhcp-agent.service \
neutron-metadata-agent.service
  12. Verify
[root@openstack001 ~]# . admin-openrc
[root@openstack001 ~]# openstack extension list --network
List the agents
[root@openstack001 ~]# openstack network agent list

Horizon Cluster

  1. Install the dashboard
[root@openstack001 ~]# yum install openstack-dashboard -y
  2. Configure local_settings
[root@openstack001 ~]# cp /etc/openstack-dashboard/local_settings /etc/openstack-dashboard/local_settings.bak

Edit /etc/openstack-dashboard/local_settings
[root@openstack001 ~]# vim /etc/openstack-dashboard/local_settings
# allow access from all hosts
38  ALLOWED_HOSTS = ['*', 'localhost']

# pin the API versions
64  OPENSTACK_API_VERSIONS = {
65  #    "data-processing": 1.1,
66      "identity": 3,
67      "image": 2,
68      "volume": 2,
69  #    "compute": 2,
70  }

# enable when running in multi-domain mode; logging in then requires the domain in addition to the account/password
75  OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

# uncomment
97  OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = 'Default'

# uncomment lines 158~163 and use the memcached cluster
158  CACHES = {
159      'default': {
160          'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
161          'LOCATION': 'openstack001.test.com:11211'
162      },
163  }

# comment out lines 165~169
165  #CACHES = {
166  #    'default': {
167  #        'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
168  #    },
169  #}


# the listen address uses the VIP;
# keystone authentication uses v3;
# users created via the dashboard get the "user" role, which was created in the Keystone section
188  OPENSTACK_HOST = "openstack001.test.com"
189  OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
190  OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"


OPENSTACK_NEUTRON_NETWORK = {
 
    'enable_router': False,
    'enable_quotas': False,
    'enable_distributed_router': False,
    'enable_ha_router': False,
    'enable_lb': False,
    'enable_firewall': False,
    'enable_vpn': False,
    'enable_fip_topology_check': False,
}

# set the time zone
463  TIME_ZONE = "Asia/Shanghai"
  3. Configure openstack-dashboard.conf
    On every node edit /etc/httpd/conf.d/openstack-dashboard.conf, adding "WSGIApplicationGroup %{GLOBAL}" after line 3
[root@openstack001 ~]# cp /etc/httpd/conf.d/openstack-dashboard.conf /etc/httpd/conf.d/openstack-dashboard.conf.bak
[root@openstack001 ~]# sed -i '3a WSGIApplicationGroup\ %{GLOBAL}' /etc/httpd/conf.d/openstack-dashboard.conf
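The '3a' address in the sed command above means "append after line 3". A scratch-file sketch of the same edit (the file content is invented for the demo):

```shell
# Show that '3a' appends the directive as the new 4th line.
tmp=$(mktemp)
printf 'line1\nline2\nline3\nline4\n' > "$tmp"
sed -i '3a WSGIApplicationGroup %{GLOBAL}' "$tmp"
new_line4=$(sed -n '4p' "$tmp")
echo "$new_line4"   # WSGIApplicationGroup %{GLOBAL}
rm -f "$tmp"
```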
  4. Start the services
[root@openstack001 ~]# systemctl restart httpd.service memcached.service
  5. Verify
    Log in at http://192.168.1.1/dashboard
    Domain: default
    User: admin
    Password: Passwd
