OpenStack Ocata (Latest Release): Detailed Installation Guide

Introduction

OpenStack is an open-source cloud computing management platform built from several major components that work together. It supports almost every type of cloud environment, and the project's goal is to provide a cloud management platform that is simple to deploy, massively scalable, feature-rich, and standardized. OpenStack delivers an Infrastructure-as-a-Service (IaaS) solution through a set of complementary services, each of which exposes an API for integration.

OpenStack is an open-source project that provides software for building and managing public and private clouds. Its community includes more than 130 companies and 1,350 developers, all of whom use OpenStack as a common front end for Infrastructure-as-a-Service (IaaS) resources.
The project's primary goal is to simplify cloud deployment and make it highly scalable. This guide aims to provide the information you need to set up and manage your own public or private cloud with OpenStack.

Core Services

  • Compute: Nova. A set of controllers that manage the entire lifecycle of virtual machine instances for individual users or groups, providing virtual servers on demand. It handles creating, booting, shutting down, suspending, pausing, resizing, migrating, rebooting, and destroying instances, and configures specifications such as CPU and memory. Integrated since the Austin release.
  • Object Storage: Swift. A system for storing objects in massively scalable environments with built-in redundancy and fault tolerance, allowing files to be stored and retrieved. It can provide image storage for Glance and volume backups for Cinder. Integrated since the Austin release.
  • Image Service: Glance. A lookup and retrieval system for virtual machine images that supports multiple image formats (AKI, AMI, ARI, ISO, QCOW2, Raw, VDI, VHD, VMDK) and can create, upload, and delete images and edit basic image metadata. Integrated since the Bexar release.
  • Identity Service: Keystone. Provides authentication, service policies, and service tokens for the other OpenStack services, and manages Domains, Projects, Users, Groups, and Roles. Integrated since the Essex release.
  • Networking: Neutron. Provides network virtualization for the cloud and network connectivity for the other OpenStack services. It gives users interfaces to define Networks, Subnets, and Routers and to configure DHCP, DNS, load balancing, and L3 services; GRE and VLAN networks are supported. Its plugin architecture supports many mainstream network vendors and technologies, such as Open vSwitch. Integrated since the Folsom release.
  • Block Storage: Cinder. Provides persistent block storage for running instances. Its plugin driver architecture makes it easy to create and manage block devices, for example creating and deleting volumes and attaching and detaching volumes on instances. Integrated since the Folsom release.
  • Dashboard: Horizon. The web management portal for the various OpenStack services, simplifying operations such as launching instances, assigning IP addresses, and configuring access control. Integrated since the Essex release.
  • Metering: Ceilometer. Acts like a funnel, collecting almost every event that happens inside OpenStack and providing data for billing, monitoring, and other services. Integrated since the Havana release.
  • Orchestration: Heat. Provides template-driven orchestration to automatically deploy the software runtime environment of the cloud infrastructure (compute, storage, and network resources). Integrated since the Havana release.
  • Database Service: Trove. Provides scalable and reliable relational and non-relational database engines for users in an OpenStack environment. Integrated since the Icehouse release.

Let's first take a look at OpenStack Horizon. This is the last component we will install; it is optional, since every operation it offers can also be performed on the command line.

Horizon screenshots (images omitted here): login page, hypervisor (compute node) management, instance management, flavor management, network management.

Environment Overview

  • This document installs the latest release, Ocata.
  • An OpenStack installation requires at least two nodes: one controller node and one or more compute nodes.
  • This walkthrough uses two virtual machines as a test environment; a production environment should use physical machines.
  • Both machines run CentOS 7 (1611 build).
  • Each machine needs at least two NICs: one for management traffic and one for other (data) traffic.
  • Internet access or a private yum mirror is required.

Controller node

 controller: 4 cores, 4 GB RAM, 20 GB disk, two NICs

Compute node

 compute: 2 cores, 2 GB RAM, 20 GB disk, two NICs

Pre-installation Preparation (run on all nodes, both controller and compute)

1- Switch the yum repositories to the Aliyun (Alibaba Cloud) mirrors

The Aliyun mirror site is
  http://mirrors.aliyun.com/ 
Point the CentOS base repo at the Aliyun mirror
 curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
Point the EPEL repo at the Aliyun mirror
 curl -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
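After switching mirrors it is worth refreshing the yum metadata so that later installs pull from the new repositories; a small optional step, not in the original text:
 yum clean all
 yum makecache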

2- Update system packages and disable SELinux

  yum upgrade -y
  vi /etc/selinux/config
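The vi step above should set SELINUX=disabled in /etc/selinux/config. As one possible non-interactive alternative (an assumption about the intended edit, not the author's exact keystrokes), the same change plus an immediate switch to permissive mode looks like this:
  # persist across reboots
  sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
  # take effect now without a reboot (permissive mode)
  setenforce 0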

3- Configure the IP addresses and the hosts mapping on both nodes

Controller node: controller, NIC1 = 192.168.1.240, NIC2 = 192.168.1.239
Compute node:   compute,    NIC1 = 192.168.1.241, NIC2 = 192.168.1.242
Add the following entries to /etc/hosts on both nodes:
192.168.1.240 controller 
192.168.1.241 compute   
Set the hostname on each host; the two hostnames must not be the same:
hostnamectl set-hostname "controller" --static   # on the controller node
hostnamectl set-hostname "compute" --static      # on the compute node

Install the Required Services

1- Install time synchronization (all nodes; verify that time synchronization is correct before continuing)

  yum install chrony -y

  systemctl enable chronyd.service
  systemctl restart chronyd.service
  systemctl status chronyd.service

  Check the time synchronization sources:
  chronyc sources -v
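The text does not say which NTP source to use. A common layout (an assumption here, not mandated by the original) is to let the controller sync from its default public servers and point the compute node at the controller by editing /etc/chrony.conf:
  # compute node, in /etc/chrony.conf: comment out the default server lines and add
  server controller iburst
  # controller node, in /etc/chrony.conf: allow the management subnet to query it
  allow 192.168.1.0/24
  # then restart and re-check on both nodes
  systemctl restart chronyd.service
  chronyc sources -v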

2- Install MySQL (controller node)

  wget http://dev.mysql.com/get/mysql-community-release-el7-5.noarch.rpm
  rpm -ivh mysql-community-release-el7-5.noarch.rpm
  yum install mysql-community-server
Enable at boot
  systemctl enable mysqld.service
Set the root password
  /usr/bin/mysqladmin -u root password 'admin@hhwy'
Add a remote-access user (log in to MySQL first: mysql -u root -p)
  GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'admin@hhwy' WITH GRANT OPTION;
Flush privileges
  FLUSH PRIVILEGES; 
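The official install guides also recommend a few server settings for OpenStack (bind to the management address, InnoDB, UTF-8, a higher connection limit). This is optional tuning, sketched below as an append to /etc/my.cnf; adjust the bind address to your controller:
 cat >> /etc/my.cnf <<'EOF'
 [mysqld]
 bind-address = 192.168.1.240
 default-storage-engine = innodb
 innodb_file_per_table = on
 max_connections = 4096
 collation-server = utf8_general_ci
 character-set-server = utf8
 EOF
 systemctl restart mysqld.service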

3- Install RabbitMQ (controller node)

 yum -y install erlang socat

 yum install rabbitmq-server
Enable at boot and start the service (port 5672)
  systemctl enable rabbitmq-server.service
  systemctl start rabbitmq-server.service
Enable the web management plugin (port 15672)
 rabbitmq-plugins enable rabbitmq_management
Set up the users and passwords
  # create the admin management account and password
  rabbitmqctl add_user admin admin@hhwy
  rabbitmqctl set_user_tags admin administrator

  # create the openstack account and password
  rabbitmqctl add_user openstack openstack
  rabbitmqctl set_permissions openstack ".*" ".*" ".*"
  rabbitmqctl set_user_tags openstack administrator
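A quick check that the accounts exist and that the openstack user has permissions on the default vhost (not part of the original steps, but standard rabbitmqctl commands):
  rabbitmqctl list_users
  rabbitmqctl list_permissions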

4- Install memcached (controller node)

 yum install memcached
Enable at boot and start the service (port 11211)
systemctl enable memcached.service
systemctl start memcached.service
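The upstream guide additionally suggests restricting memcached to the loopback and management addresses (optional hardening, assuming the stock /etc/sysconfig/memcached layout):
 sed -i 's/^OPTIONS=.*/OPTIONS="-l 127.0.0.1,::1,controller"/' /etc/sysconfig/memcached
 systemctl restart memcached.service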

5- Install the CentOS OpenStack Ocata yum repository (all nodes) and the OpenStack utilities

  yum install centos-release-openstack-ocata -y

  yum install openstack-utils -y

6- Install the services required on the controller node

  yum install -y python-openstackclient  \
  python2-PyMySQL  python-memcached \
  openstack-keystone httpd mod_wsgi openstack-glance \
  openstack-nova-api openstack-nova-conductor \
  openstack-nova-console openstack-nova-novncproxy \
  openstack-nova-scheduler openstack-nova-placement-api \
  openstack-neutron openstack-neutron-ml2 \
  openstack-neutron-linuxbridge ebtables \
  openstack-dashboard

7- Install the services required on the compute node

  yum install openstack-nova-compute openstack-neutron-linuxbridge ebtables ipset -y

Configuration (by default all steps are performed on the controller node; steps that run on the compute node are explicitly noted)

1- Configure the databases

Create the following databases
   keystone
   glance
   nova
   nova_api
   nova_cell0
   neutron
The database creation statements are
 CREATE DATABASE /*!32312 IF NOT EXISTS*/`keystone` /*!40100 DEFAULT CHARACTER SET utf8 */;
 CREATE DATABASE /*!32312 IF NOT EXISTS*/`glance` /*!40100 DEFAULT CHARACTER SET utf8 */;
 CREATE DATABASE /*!32312 IF NOT EXISTS*/`nova` /*!40100 DEFAULT CHARACTER SET utf8 */;
 CREATE DATABASE /*!32312 IF NOT EXISTS*/`nova_api` /*!40100 DEFAULT CHARACTER SET utf8 */;
 CREATE DATABASE /*!32312 IF NOT EXISTS*/`nova_cell0` /*!40100 DEFAULT CHARACTER SET utf8 */;
 CREATE DATABASE /*!32312 IF NOT EXISTS*/`neutron` /*!40100 DEFAULT CHARACTER SET utf8 */;
Create the following database users and passwords
   keystone   keystone
   glance   glance
   nova   nova
   neutron   neutron
The grant statements are
GRANT ALL PRIVILEGES ON *.* TO 'keystone'@'controller' IDENTIFIED BY 'keystone';
GRANT ALL PRIVILEGES ON *.* TO 'keystone'@'localhost' IDENTIFIED BY 'keystone';
GRANT ALL PRIVILEGES ON *.* TO 'keystone'@'%' IDENTIFIED BY 'keystone';

GRANT ALL PRIVILEGES ON *.* TO 'glance'@'controller' IDENTIFIED BY 'glance';
GRANT ALL PRIVILEGES ON *.* TO 'glance'@'localhost' IDENTIFIED BY 'glance';
GRANT ALL PRIVILEGES ON *.* TO 'glance'@'%' IDENTIFIED BY 'glance';

GRANT ALL PRIVILEGES ON *.* TO 'nova'@'controller' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON *.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON *.* TO 'nova'@'%' IDENTIFIED BY 'nova';

GRANT ALL PRIVILEGES ON *.* TO 'neutron'@'controller' IDENTIFIED BY 'neutron';
GRANT ALL PRIVILEGES ON *.* TO 'neutron'@'localhost' IDENTIFIED BY 'neutron';
GRANT ALL PRIVILEGES ON *.* TO 'neutron'@'%' IDENTIFIED BY 'neutron';
Before running the scripts, note:
* Many values in these scripts must be substituted; the replaceable values are enclosed in square brackets. You can use controller and the passwords exactly as given in this document, or, if you have special requirements, change them to your own values while keeping the same format.
* Unless stated otherwise, square brackets mark a replaceable value; remove the brackets when you substitute your own value.

2- Configure the Identity (Keystone) service

Edit the configuration file by running the following commands
openstack-config --set /etc/keystone/keystone.conf database connection  mysql+pymysql://[keystone]:[keystone]@[controller]/[keystone]
openstack-config --set /etc/keystone/keystone.conf token provider fernet

* The database connection format is username:password@host/database; all later database connections use the same format, so this will not be repeated.
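For example, with the values used in this document the first command above produces the following line in /etc/keystone/keystone.conf (brackets removed after substitution):
 [database]
 connection = mysql+pymysql://keystone:keystone@controller/keystone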
Verify the configuration
cat /etc/keystone/keystone.conf |grep -v ^# |grep -v ^$
Edit /etc/keystone/keystone-paste.ini and remove the admin_token_auth parameter from the [pipeline:public_api], [pipeline:admin_api], and [pipeline:api_v3] sections. This disables the temporary token authentication mechanism.
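One possible non-interactive way to make that edit (an assumption, not the author's exact method; it strips the token filter from every pipeline line, which is the intended result here):
 cp /etc/keystone/keystone-paste.ini /etc/keystone/keystone-paste.ini.bak
 sed -i 's/ admin_token_auth//g' /etc/keystone/keystone-paste.ini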
Sync (populate) the database
  su -s /bin/sh -c "keystone-manage db_sync" keystone
Initialize the Fernet key repositories
  keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
  keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
Bootstrap the Identity service and set the admin user and password
 keystone-manage bootstrap --bootstrap-password admin \
 --bootstrap-admin-url http://controller:35357/v3/ \
 --bootstrap-internal-url http://controller:5000/v3/ \
 --bootstrap-public-url http://controller:5000/v3/ \
 --bootstrap-region-id RegionOne
Configure the web server (httpd)
Edit /etc/httpd/conf/httpd.conf
 sed -i 's/#ServerName www.example.com:80/ServerName controller/g' /etc/httpd/conf/httpd.conf
Verify
 cat /etc/httpd/conf/httpd.conf |grep ServerName
Link Keystone's virtual host file into the httpd configuration directory
 ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
Enable at boot and restart the service (httpd)
 systemctl enable httpd.service
 systemctl restart httpd.service
Open the firewall ports
 firewall-cmd --zone=public --add-port=11211/tcp --permanent
 firewall-cmd --zone=public --add-port=5672/tcp --permanent
 firewall-cmd --zone=public --add-port=15672/tcp --permanent
 firewall-cmd --zone=public --add-port=3306/tcp --permanent
 firewall-cmd --zone=public --add-port=5000/tcp --permanent
 firewall-cmd --zone=public --add-port=35357/tcp --permanent
 firewall-cmd --zone=public --add-port=80/tcp --permanent 
Reload
 firewall-cmd --reload
List the open ports
 firewall-cmd --zone=public --list-port --permanent
Create the admin environment script
mkdir -p /usr/local/openstack
vi /usr/local/openstack/admin.sh

export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
Create the demo environment script
vi /usr/local/openstack/demo.sh
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=demo
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
Create the projects, users, and roles (run with the admin credentials)

source /usr/local/openstack/admin.sh

openstack project create --domain default  --description "Service Project" service 
openstack project create --domain default  --description "Demo Project" demo
openstack user create --domain default  --password demo demo
Add the user role to the demo project and user
 openstack role create user
 openstack role add --project demo --user demo user
Verification: when prompted, enter the admin user's password (admin); if everything is configured correctly the command prints a token.
 unset OS_AUTH_URL OS_PASSWORD

 openstack --os-auth-url http://controller:35357/v3 \
 --os-project-domain-name default --os-user-domain-name default \
 --os-project-name admin --os-username admin token issue
Verify again, this time using the admin environment variables
 source /usr/local/openstack/admin.sh
 openstack token issue

3- Configure the Image (Glance) service

Edit the configuration files by running the commands below
The file to modify is /etc/glance/glance-api.conf

 #Run the following script to make the changes
openstack-config --set /etc/glance/glance-api.conf database connection mysql+pymysql://[glance]:[glance]@[controller]/[glance]
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_uri http://[controller]:5000
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_url http://[controller]:35357
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken memcached_servers [controller]:11211
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_type password
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_name service
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken username [glance]
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken password [glance]
openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone
openstack-config --set /etc/glance/glance-api.conf glance_store stores file,http
openstack-config --set /etc/glance/glance-api.conf glance_store default_store file
openstack-config --set /etc/glance/glance-api.conf glance_store filesystem_store_datadir /var/lib/glance/images/
Verify the configuration
cat /etc/glance/glance-api.conf |grep -v ^# |grep -v ^$
 Modify /etc/glance/glance-registry.conf

 #Run the following script to make the changes
openstack-config --set /etc/glance/glance-registry.conf database connection mysql+pymysql://[glance]:[glance]@[controller]/[glance]
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_uri http://[controller]:5000
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_url http://[controller]:35357
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken memcached_servers [controller]:11211
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_type password
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_name service
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken username glance
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken password glance
openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor keystone 
Verify the configuration
cat /etc/glance/glance-registry.conf |grep -v ^# |grep -v ^$
Create the glance user, service, and API endpoints
 source /usr/local/openstack/admin.sh
 openstack user create --domain default --password glance glance
 openstack role add --project service --user glance admin
 openstack service create --name glance  --description "OpenStack Image" image

 openstack endpoint create --region RegionOne  image public http://[controller]:9292
 openstack endpoint create --region RegionOne  image internal http://[controller]:9292
 openstack endpoint create --region RegionOne  image admin http://[controller]:9292
Open the firewall port
 firewall-cmd --zone=public --add-port=9292/tcp --permanent
 firewall-cmd --reload
Sync the database
 su -s /bin/sh -c "glance-manage db_sync" glance
Enable at boot and restart the services
systemctl enable openstack-glance-api.service  openstack-glance-registry.service
systemctl restart openstack-glance-api.service  openstack-glance-registry.service
Check the status
systemctl status openstack-glance-api.service   openstack-glance-registry.service
Download an image and import it into Glance
Download
 wget http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img
Import
 openstack image create "cirros" \
 --file cirros-0.3.5-x86_64-disk.img \
 --disk-format qcow2 --container-format bare \
 --public
List the uploaded images
 openstack image list

4- Configure the Compute (Nova) service

Edit the Nova configuration file
 Modify /etc/nova/nova.conf
 #Run the following script to make the changes
openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis osapi_compute,metadata
openstack-config --set /etc/nova/nova.conf api_database connection mysql+pymysql://[nova]:[nova]@[controller]/[nova_api]
openstack-config --set /etc/nova/nova.conf database connection mysql+pymysql://[nova]:[nova]@[controller]/[nova]
openstack-config --set /etc/nova/nova.conf DEFAULT transport_url rabbit://[openstack]:[openstack]@[controller]
openstack-config --set /etc/nova/nova.conf api auth_strategy keystone
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://[controller]:5000
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://[controller]:35357
openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers [controller]:11211
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken password [nova]
openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron True 
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip [192.168.1.240] 
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf vnc enabled true
openstack-config --set /etc/nova/nova.conf vnc vncserver_listen 0.0.0.0
openstack-config --set /etc/nova/nova.conf vnc vncserver_proxyclient_address [192.168.1.240]
openstack-config --set /etc/nova/nova.conf glance api_servers http://[controller]:9292
openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp
openstack-config --set /etc/nova/nova.conf placement os_region_name RegionOne
openstack-config --set /etc/nova/nova.conf placement project_domain_name Default
openstack-config --set /etc/nova/nova.conf placement project_name service
openstack-config --set /etc/nova/nova.conf placement auth_type password
openstack-config --set /etc/nova/nova.conf placement user_domain_name Default
openstack-config --set /etc/nova/nova.conf placement auth_url http://[controller]:35357/v3
openstack-config --set /etc/nova/nova.conf placement username placement
openstack-config --set /etc/nova/nova.conf placement password [placement]
Verify the configuration
cat /etc/nova/nova.conf |grep -v ^# |grep -v ^$
Modify Nova's httpd configuration
 Due to a packaging bug, you must enable access to the Placement API.
 Edit /etc/httpd/conf.d/00-nova-placement-api.conf and append the following at the end of the file
  <Directory /usr/bin>
    <IfVersion >= 2.4>
      Require all granted
    </IfVersion>
    <IfVersion < 2.4>
     Order allow,deny
      Allow from all
    </IfVersion>
  </Directory>

  Or run the following command to make the change; the block must be appended to the end of the file, not overwrite what is already there

cat <<EOF >> /etc/httpd/conf.d/00-nova-placement-api.conf
<Directory /usr/bin>
   <IfVersion >= 2.4>
      Require all granted
   </IfVersion>
   <IfVersion < 2.4>
      Order allow,deny
      Allow from all
   </IfVersion>
</Directory>
EOF
Create the nova user, service, and API endpoints
openstack user create --domain default --password [nova] nova
openstack role add --project service --user nova admin
openstack service create --name nova  --description "OpenStack Compute" compute

openstack endpoint create --region RegionOne compute public http://[controller]:8774/v2.1
openstack endpoint create --region RegionOne compute internal http://[controller]:8774/v2.1
openstack endpoint create --region RegionOne compute admin http://[controller]:8774/v2.1
Create the placement user, service, and API endpoints
 openstack user create --domain default --password [placement] placement
 openstack role add --project service --user placement admin
 openstack service create --name placement --description "Placement API" placement

 openstack endpoint create --region RegionOne placement public http://[controller]:8778
 openstack endpoint create --region RegionOne placement admin http://[controller]:8778
 openstack endpoint create --region RegionOne placement internal http://[controller]:8778
Open the firewall ports
firewall-cmd --zone=public --add-port=8774/tcp --permanent
firewall-cmd --zone=public --add-port=8778/tcp --permanent
Reload
firewall-cmd --reload
Restart httpd
systemctl restart httpd
Sync the databases
su -s /bin/sh -c "nova-manage api_db sync" nova
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
su -s /bin/sh -c "nova-manage db sync" nova
List the cells and UUIDs generated by cell_v2
 nova-manage cell_v2 list_cells
Enable at boot and restart the services
systemctl enable openstack-nova-api.service \
openstack-nova-consoleauth.service openstack-nova-scheduler.service \
openstack-nova-conductor.service openstack-nova-novncproxy.service


systemctl restart openstack-nova-api.service \
openstack-nova-consoleauth.service openstack-nova-scheduler.service \
openstack-nova-conductor.service openstack-nova-novncproxy.service


Check the status
systemctl status openstack-nova-api.service \
openstack-nova-consoleauth.service openstack-nova-scheduler.service \
openstack-nova-conductor.service openstack-nova-novncproxy.service

5- Configure the compute service on the compute node (run on the compute node, 192.168.1.241)

Edit the Nova configuration file

Modify /etc/nova/nova.conf

 #Run the following script to make the changes
openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis osapi_compute,metadata
openstack-config --set /etc/nova/nova.conf DEFAULT transport_url rabbit://[openstack]:[openstack]@[controller]
openstack-config --set /etc/nova/nova.conf api auth_strategy keystone
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://[controller]:5000
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://[controller]:35357
openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers [controller]:11211
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken password [nova]
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip [192.168.1.241]
openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron True
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf vnc enabled True
openstack-config --set /etc/nova/nova.conf vnc vncserver_listen 0.0.0.0
openstack-config --set /etc/nova/nova.conf vnc vncserver_proxyclient_address [192.168.1.241]
openstack-config --set /etc/nova/nova.conf vnc novncproxy_base_url http://[192.168.1.240]:6080/vnc_auto.html
openstack-config --set /etc/nova/nova.conf glance api_servers http://[controller]:9292
openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp
openstack-config --set /etc/nova/nova.conf placement os_region_name RegionOne
openstack-config --set /etc/nova/nova.conf placement project_domain_name Default
openstack-config --set /etc/nova/nova.conf placement project_name service
openstack-config --set /etc/nova/nova.conf placement auth_type password
openstack-config --set /etc/nova/nova.conf placement user_domain_name Default
openstack-config --set /etc/nova/nova.conf placement auth_url http://[controller]:35357/v3
openstack-config --set /etc/nova/nova.conf placement username placement
openstack-config --set /etc/nova/nova.conf placement password [placement]
openstack-config --set /etc/nova/nova.conf libvirt virt_type [qemu]


[libvirt] notes:
virt_type
Determine whether your compute node supports hardware acceleration for virtual machines:
egrep -c '(vmx|svm)' /proc/cpuinfo
If this command returns one or more, your compute node supports hardware acceleration, which usually requires no additional configuration.
If it returns zero, your compute node does not support hardware acceleration and you must configure libvirt to use QEMU instead of KVM.
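If the check returns one or more, KVM can be used instead of QEMU; in the openstack-config style used throughout this guide that would be (kvm substituted for the [qemu] placeholder above):
 openstack-config --set /etc/nova/nova.conf libvirt virt_type kvm
 systemctl restart libvirtd.service openstack-nova-compute.service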


Verify the configuration
cat /etc/nova/nova.conf |grep -v ^# |grep -v ^$
Enable at boot and restart the services
 systemctl enable libvirtd.service openstack-nova-compute.service
 systemctl restart libvirtd.service openstack-nova-compute.service
Open the firewall port
 firewall-cmd --zone=public --add-port=6080/tcp --permanent
 firewall-cmd --reload

6- Verification

List users
 openstack user list 
List hypervisors
 openstack hypervisor list 
List endpoints
 openstack endpoint list 
Show the service catalog
openstack catalog list
List images
openstack image list
List the compute services
openstack compute service list
7- Discover compute nodes from the controller node
Command method: this has to be run every time a new compute node is added.
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
Alternatively, edit the configuration file so that compute nodes are discovered automatically every 300 seconds (change this on the controller node).
vi /etc/nova/nova.conf

[scheduler]
discover_hosts_in_cells_interval = 300

8- Configure the Networking (Neutron) service (controller node)

Configure the files of the individual Neutron components (back up each configuration file, clear its contents, and use the configuration provided here)
 Modify /etc/neutron/neutron.conf

openstack-config --set /etc/neutron/neutron.conf database connection mysql+pymysql://[neutron]:[neutron]@[controller]/[neutron]
openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins router
openstack-config --set /etc/neutron/neutron.conf DEFAULT allow_overlapping_ips true
openstack-config --set /etc/neutron/neutron.conf DEFAULT transport_url rabbit://[openstack]:[openstack]@[controller]
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://[controller]:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://[controller]:35357
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers [controller]:11211
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_type password
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password [neutron]
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_status_changes true
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_data_changes true
openstack-config --set /etc/neutron/neutron.conf nova auth_url http://[controller]:35357
openstack-config --set /etc/neutron/neutron.conf nova auth_type password
openstack-config --set /etc/neutron/neutron.conf nova project_domain_name default
openstack-config --set /etc/neutron/neutron.conf nova user_domain_name default
openstack-config --set /etc/neutron/neutron.conf nova region_name RegionOne
openstack-config --set /etc/neutron/neutron.conf nova project_name service
openstack-config --set /etc/neutron/neutron.conf nova username nova
openstack-config --set /etc/neutron/neutron.conf nova password [nova]
openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp 
 Modify /etc/neutron/plugins/ml2/linuxbridge_agent.ini

openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings provider:[eth1]
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan false
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup enable_security_group true
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan local_ip [192.168.1.240]

 #eth1 is the name of the NIC to bridge (the second/provider NIC)
 Modify /etc/neutron/plugins/ml2/ml2_conf.ini

openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers flat,vlan,vxlan
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types vxlan
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers linuxbridge,l2population
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 extension_drivers port_security
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_flat flat_networks [provider]
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_ipset true

 Modify /etc/neutron/dhcp_agent.ini

openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT interface_driver linuxbridge
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dhcp_driver neutron.agent.linux.dhcp.Dnsmasq
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT enable_isolated_metadata true
 Modify /etc/neutron/metadata_agent.ini
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_ip controller
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_proxy_shared_secret meta
 Modify /etc/neutron/l3_agent.ini

 openstack-config --set /etc/neutron/l3_agent.ini DEFAULT interface_driver linuxbridge
Add the Neutron settings to the Nova configuration
 vi /etc/nova/nova.conf

openstack-config --set /etc/nova/nova.conf neutron url http://[controller]:9696
openstack-config --set /etc/nova/nova.conf neutron auth_url http://[controller]:35357
openstack-config --set /etc/nova/nova.conf neutron auth_type password
openstack-config --set /etc/nova/nova.conf neutron project_domain_name default
openstack-config --set /etc/nova/nova.conf neutron user_domain_name default
openstack-config --set /etc/nova/nova.conf neutron region_name RegionOne
openstack-config --set /etc/nova/nova.conf neutron project_name service
openstack-config --set /etc/nova/nova.conf neutron username neutron
openstack-config --set /etc/nova/nova.conf neutron password [neutron]
openstack-config --set /etc/nova/nova.conf neutron service_metadata_proxy true
openstack-config --set /etc/nova/nova.conf neutron metadata_proxy_shared_secret meta
The Networking service initialization scripts expect a symbolic link /etc/neutron/plugin.ini pointing to the ML2 plugin configuration file
 ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
Create the neutron user, service, and API endpoints
openstack user create --domain default --password [neutron] neutron
openstack role add --project service --user neutron admin
openstack service create --name neutron  --description "OpenStack Networking" network

openstack endpoint create --region RegionOne  network public http://[controller]:9696
openstack endpoint create --region RegionOne  network internal http://[controller]:9696
openstack endpoint create --region RegionOne  network admin http://[controller]:9696
Sync the database
 su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
 --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
Open the firewall ports
 firewall-cmd --zone=public --add-port=6080/tcp --permanent
 firewall-cmd --zone=public --add-port=9696/tcp --permanent
 firewall-cmd --reload
Enable at boot and restart the services
 systemctl enable neutron-server.service \
 neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
 neutron-metadata-agent.service

 systemctl restart neutron-server.service \
 neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
 neutron-metadata-agent.service openstack-nova-api.service
 #Check the status
 systemctl status neutron-server.service \
 neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
 neutron-metadata-agent.service openstack-nova-api.service

9- Configure the Networking service (compute node)

Configure the files of the individual Neutron components
 Modify /etc/neutron/neutron.conf

openstack-config --set /etc/neutron/neutron.conf DEFAULT transport_url rabbit://[openstack]:[openstack]@[controller]
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://[controller]:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://[controller]:35357
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers [controller]:11211
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_type password
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password [neutron]
openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp
 Modify /etc/neutron/plugins/ml2/linuxbridge_agent.ini
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings provider:[eth1]
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan false
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan local_ip [192.168.1.241]
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan l2_population true
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup enable_security_group true
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Add the Neutron settings to the Nova configuration
openstack-config --set /etc/nova/nova.conf neutron url http://[controller]:9696
openstack-config --set /etc/nova/nova.conf neutron auth_url http://[controller]:35357
openstack-config --set /etc/nova/nova.conf neutron auth_type password
openstack-config --set /etc/nova/nova.conf neutron project_domain_name default
openstack-config --set /etc/nova/nova.conf neutron user_domain_name default
openstack-config --set /etc/nova/nova.conf neutron region_name RegionOne
openstack-config --set /etc/nova/nova.conf neutron project_name service
openstack-config --set /etc/nova/nova.conf neutron username neutron
openstack-config --set /etc/nova/nova.conf neutron password [neutron]
Enable at boot and restart the services
 systemctl enable neutron-linuxbridge-agent.service
 systemctl restart neutron-linuxbridge-agent.service openstack-nova-compute.service
List the network agents
 openstack network agent list

 Output similar to the following indicates everything is working
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host       | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| 0fd21307-0c66-43cf-a158-5145e98fd2ad | Metadata agent     | controller | None              | True  | UP    | neutron-metadata-agent    |
| 31a542e6-0fc5-4956-92c7-178c35740bdf | DHCP agent         | controller | nova              | True  | UP    | neutron-dhcp-agent        |
| 458b82a5-4d4b-4a3e-9be0-5ca886c7a5bf | Linux bridge agent | compute    | None              | True  | UP    | neutron-linuxbridge-agent |
| e29ba688-b2fd-407d-aaa4-3d4fb4c3da7a | Linux bridge agent | controller | None              | True  | UP    | neutron-linuxbridge-agent |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+

10- Install the Dashboard (Horizon) (controller node)

Edit the configuration file (back it up first; there is no need to delete the file's contents here, just change the settings shown below; using the configuration below is recommended, and commenting out the original lines you change helps avoid mistakes)
 vi /etc/openstack-dashboard/local_settings

OPENSTACK_HOST = "[192.168.1.240]"
 #The square brackets on the following line are part of the Python syntax, not a placeholder to replace
ALLOWED_HOSTS = ['*', ]

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': '[controller]:11211',
}
}

OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST

OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

OPENSTACK_API_VERSIONS = {
"identity": 3,
"image": 2,
"volume": 2,
}

OPENSTACK_NEUTRON_NETWORK = {
'enable_router': False,
'enable_quotas': False,
'enable_distributed_router': False,
'enable_ha_router': False,
'enable_lb': False,
'enable_firewall': False,
'enable_': False,
'enable_fip_topology_check': False,
}

TIME_ZONE = "UTC"
Restart httpd and memcached
 systemctl restart httpd.service memcached.service
To access the dashboard, just open http://10.211.55.20/dashboard in a browser (substitute your own controller's IP address)
Create a virtual network and subnet
 openstack network create --share --external \
 --provider-physical-network [provider] \
 --provider-network-type flat [vmnet]
Create the subnet (test-net)
openstack subnet create --network [vmnet] \
--allocation-pool start=[10.211.55.200],end=[10.211.55.220] \
--dns-nameserver [114.114.114.114] --gateway [10.211.55.1] --subnet-range [10.211.55.0/24] [vmnet]
Create a key pair
source /usr/local/openstack/demo.sh #use the demo credentials 
ssh-keygen -q -N "" 
nova keypair-add --pub-key ~/.ssh/id_rsa.pub mykey 
nova keypair-list #list the key pairs
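The boot command below references a flavor named tiny, which is never created earlier in this guide. A minimal sketch of creating one (the name and sizes here are assumptions; adjust them to your needs):
source /usr/local/openstack/admin.sh   #flavor creation needs admin rights
openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 tiny
source /usr/local/openstack/demo.sh    #switch back to demo before booting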
Launch a virtual machine
nova boot --flavor [tiny] --image [cirros] --nic net-id=[c810cd8b-8aa6-424b-8873-a28a3ca4e518]  --security-group default --key-name [mykey] [test-instance]
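The net-id above is the UUID of the network created earlier and will be different in your environment; it can be looked up first (not shown in the original):
 openstack network list   #copy the ID of the vmnet network into --nic net-id=...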

Notes

During later testing, RabbitMQ users disappeared unexpectedly. RabbitMQ stores its data keyed by hostname, so if you change the hostname you must re-create the users.
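If that happens, the users can be re-created with the same commands used during installation; a minimal sketch assuming the original names and passwords:
 rabbitmqctl add_user openstack openstack
 rabbitmqctl set_permissions openstack ".*" ".*" ".*"
 rabbitmqctl add_user admin admin@hhwy
 rabbitmqctl set_user_tags admin administrator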

If the compute node does not come up, the compute node's firewall may be the cause; for now the firewalls are simply turned off.
To remove a node, you need to delete the corresponding rows from the services and compute_nodes tables.

After changing a hostname, be sure to check the data in the agents, services, and compute_nodes tables.

Common Commands

Restart all OpenStack services

openstack-service restart    
Check the compute services
nova-manage cell_v2 list_cells
nova-status upgrade check
Show port details
openstack port list

你可能感兴趣的:(openstack)