OpenStack
OpenStack Overview
OpenStack is a free, open-source platform that helps service providers deliver infrastructure services similar to Amazon EC2 and S3. OpenStack currently has three core projects: Compute (Nova), Object Storage (Swift), and Image Management (Glance). Each project can be installed and run independently. This document will help you learn OpenStack quickly.
Background and Current Status
OpenStack was started in July 2010 as a joint effort by Rackspace Cloud and NASA, combining Rackspace's Cloud Files platform with NASA's Nebula platform, with the goal of enabling any organization to build and offer cloud computing services.
Today, more than 150 companies participate in the project, including Citrix Systems, Dell, AMD, Intel, Cisco, and HP. OpenStack recently released Austin, the first open-source cloud computing platform, built from Rackspace's cloud servers and cloud services together with NASA's Nebula technology. Seemingly in response, Amazon began offering new users one year of free AWS usage. After the Austin release, Microsoft announced that Windows Server 2008 R2 Hyper-V could be integrated with OpenStack: Microsoft will provide architectural and technical guidance and write the code needed for OpenStack to run on its virtualization platform, and that code will be made publicly available.
What Is OpenStack, and What Are Its Core Projects?
OpenStack is a free, open-source platform that helps service providers deliver infrastructure services similar to Amazon EC2 and S3. It currently has three core projects: Compute (Nova), Object Storage (Swift), and Image Management (Glance), each of which can be installed and run independently. Two newer projects have also been added: Identity (Keystone) and Dashboard (Horizon).
OpenStack Compute is a cloud controller used to launch virtual instances for a single user or a group; it is also used to configure the networking of each instance, or of projects containing multiple instances.
OpenStack Object Storage is a system for storing objects in a large-capacity system with built-in redundancy and fault tolerance. Object storage has many uses, such as backing up or archiving data, storing graphics or video (streamed to the user's browser), storing secondary or tertiary static data, developing new applications that integrate with data storage, storing data when storage capacity is hard to predict, and building elastic, flexible cloud-storage web applications.
The OpenStack Image Service is a system for discovering and retrieving virtual machine images. It can be configured in three ways: using OpenStack Object Storage to store images, using Amazon S3 directly, or using Object Storage as an intermediary for S3 access.
So far there have been five releases:
1. Austin
2. Bexar
3. Cactus
4. Diablo
5. Essex
OpenStack Features
OpenStack lets us build our own IaaS and offer services similar to Amazon Web Services to users:
1. Ordinary users can sign up for cloud services and view their usage and billing.
2. Developers and operators can create and store custom images of their applications and use them to launch, monitor, and manage instances.
3. Platform administrators can configure and operate the underlying infrastructure such as networking and storage.
OpenStack's strength is its modular design: the platform is made up of independent components, each Nova component can be installed on its own server, the components share no state, and they communicate with one another asynchronously through a message queue (MQ). You can also pick just the components you need to build a customized service, which makes the platform easy to adapt and improve, and the Apache license allows enterprise use.
OpenStack Architecture
In the software architecture of Compute (Nova), each nova-xxx component is a daemon written in Python; the daemons exchange information and carry out requests through a queue and the nova database, while users interact with the other components through the web service exposed by nova-api. Glance is a relatively independent piece of infrastructure, and Nova talks to it through glance-api.
Roles of the Nova components:
nova-api is the center of Nova. It serves all external calls; besides the native OpenStack API, it also exposes part of the EC2-compatible API, so EC2 management tools can be used for day-to-day management of Nova.
nova-compute creates, terminates, migrates, and resizes virtual machine instances. Its working principle is simple: receive requests from the queue, execute them through the relevant system commands, and update the state in the database.
nova-volume manages the creation, attachment, and detachment of volumes mapped to virtual machine instances.
nova-network takes networking tasks from the queue and carries them out to control the instances' networking, for example creating bridged networks or changing iptables rules.
nova-scheduler provides scheduling, deciding which machine with free resources will run a new virtual machine instance.
The queue passes messages between the daemons. Any message queue server that supports the AMQP protocol will do; RabbitMQ is currently the officially recommended choice.
The SQL database stores the various data of the cloud infrastructure, including virtual machine instance data, network data, and so on.
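As a quick illustration of this message-driven design, once RabbitMQ and the Nova daemons are running (they are set up later in this document), the per-component AMQP queues can be listed on the broker; a minimal sketch, assuming the default RabbitMQ installation used below:
rabbitmqctl list_queues name messages
Each nova-* daemon consumes from its own queue (for example scheduler or conductor), which is what keeps the components decoupled.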
The user dashboard is an optional project. It provides a web interface for ordinary users and administrators to manage and configure their compute resources.
All compute nodes exchange image and network traffic with the control node, so the control node is the bottleneck of the whole architecture; this layout is mainly meant for proof-of-concept or lab environments. Multi-node: add nodes that run nova-volume separately, run nova-network on the compute nodes, and choose DHCP or VLAN mode according to the network hardware so that management traffic and public traffic are separated.
OpenStack in the Enterprise
More and more enterprises are not just talking about OpenStack but deploying it in production, including Rackspace's Puppet-managed public cloud. Since its inception OpenStack has been seen as the Linux of cloud computing, and its push for open-source services has won the backing of many companies. More than 100 organizations currently contribute to the code base or take part in the project in other ways. Sina's cloud computing team is working with OpenStack to build an IaaS platform that can manage and provision a variety of virtualization technologies, and is expected to make a sizable contribution to the open-source code base.
Lab Environment
1. Install Oracle VM VirtualBox
2. Prepare a CentOS image
3. Create the virtual machine openstack-node1
Note: network adapter configuration
Lab Procedure
Configure network adapter 1
vim /etc/sysconfig/network-scripts/ifcfg-eth0
Configure network adapter 2
vim /etc/sysconfig/network-scripts/ifcfg-eth1
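A minimal sketch of what these two files might contain, assuming eth0 is the host-only management interface carrying the static address 192.168.56.111 used throughout this guide and eth1 is a NAT interface that obtains an address over DHCP for outbound access:
# /etc/sysconfig/network-scripts/ifcfg-eth0 (assumed)
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.56.111
NETMASK=255.255.255.0
# /etc/sysconfig/network-scripts/ifcfg-eth1 (assumed)
DEVICE=eth1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=dhcp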
1.3.3 Name resolution on the internal network
(1) Set the host's hostname
vim /etc/sysconfig/network
Change the following setting:
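A minimal sketch of the expected content of /etc/sysconfig/network, assuming the hostname open-node1.example.com that is added to /etc/hosts in the next step:
NETWORKING=yes
HOSTNAME=open-node1.example.com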
(2) Configure name resolution for the host node
vim /etc/hosts
Add the following content:
192.168.56.111 open-node1.example.com
Output:
1.3.4 Kernel parameter tuning
vim /etc/sysctl.conf
Change it to the following:
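A sketch of the kernel parameters a single-node OpenStack lab like this one typically sets here (an assumption; follow your own manual): IP forwarding is enabled and reverse-path filtering is relaxed so that bridged and routed instance traffic is not dropped:
net.ipv4.ip_forward = 1
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 0
The settings take effect after sysctl -p or the reboot below.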
reboot (restart the machine)
2 Installing the Lab Software
2.1 Base software packages
rpm -ivh http://mirrors.ustc.edu.cn/fedora/epel//6/x86_64/epel-release-6-8.noarch.rpm
Python-pip
2.2 Installation with yum
yum install -y python-pip gcc gcc-c++ make libtool patch automake python-devel libxslt-devel MySQL-python openssl-devel libudev-devel git wget libvirt-python libvirt qemu-kvm gedit python-numdisplay python-eventlet device-mapper bridge-utils libffi-devel libffi python-crypto lrzsz swig
2.2.1 Install the Red Hat RDO repository
vim /etc/yum.repos.d/rdo-release.repo
Copy the following text into the file:
[openstack-icehouse]
name=OpenStack Icehouse Repository
baseurl=http://repos.fedorapeople.org/repos/openstack/EOL/openstack-icehouse/epel-6/
enabled=1
gpgcheck=0
gpgkey=
2.2.2 Install Keystone
yum install openstack-keystone python-keystoneclient
2.2.3 Install Glance
yum install openstack-glance python-glanceclient python-crypto
2.2.4 Install the Nova control node packages
yum install openstack-nova-api openstack-nova-cert \
openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy \
openstack-nova-scheduler python-novaclient
2.2.5 Install the Neutron control node packages
yum install openstack-neutron openstack-neutron-ml2 python-neutronclient openstack-neutron-linuxbridge
2.2.6 Install Horizon
yum install -y httpd mod_wsgi memcached python-memcached openstack-dashboard
2.2.7 Install Cinder
yum install openstack-cinder python-cinderclient
3 Deploying the Base Services
3.1 Database Service (MySQL)
3.1.1 Install MySQL
[root@open-node1 ~] # yum install mysql-server
[root@open-node1 ~] # cp /usr/share/mysql/my-medium.cnf /etc/my.cnf
[root@open-node1 ~] #vim /etc/my.cnf
Add the following settings:
[mysqld]
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8
[root@open-node1 ~] # chkconfig mysqld on
[root@open-node1 ~] # /etc/init.d/mysqld start
3.1.2 Log in to the database
[root@open-node1 ~] # mysql -u root
mysql>show databases;
3.1.3 Create the keystone database and grant privileges
mysql> create database keystone;
mysql> grant all on keystone.* to keystone@'192.168.56.0/255.255.255.0' identified by 'keystone';
3.1.4 Create the glance database and grant privileges
mysql> create database glance;
mysql> grant all on glance.* to glance@'192.168.56.0/255.255.255.0' identified by 'glance';
3.1.5 Create the nova database and grant privileges
mysql> create database nova;
mysql> grant all on nova.* to nova@'192.168.56.0/255.255.255.0' identified by 'nova';
3.1.6 Create the neutron database and grant privileges
mysql> create database neutron;
mysql> grant all on neutron.* to neutron@'192.168.56.0/255.255.255.0' identified by 'neutron';
3.1.7 Create the cinder database and grant privileges
mysql> create database cinder;
mysql> grant all on cinder.* to cinder@'192.168.56.0/255.255.255.0' identified by 'cinder';
mysql> show databases;
Output:
+--------------------+
| Database |
+--------------------+
| information_schema |
| cinder |
| glance |
| keystone |
| mysql |
| neutron |
| nova |
| test |
+--------------------+
8 rows in set (0.00 sec)
3.2 Message Broker Service: RabbitMQ
3.2.1 RabbitMQ
3.2.2 Install RabbitMQ
mysql> \q
[root@open-node1 ~]# yum -y install ncurses-devel
[root@open-node1 ~] # yum install -y erlang rabbitmq-server
[root@open-node1 ~] # chkconfig rabbitmq-server on
Disable the firewall:
(1)[root@open-node1 ~]# /etc/init.d/iptables stop
Output:
iptables: Setting chains to policy ACCEPT: nat mangle filter [ OK ]
iptables: Flushing firewall rules: [ OK ]
iptables: Unloading modules: [ OK ]
(2)[root@open-node1 ~]# chkconfig iptables off
(3)[root@open-node1 ~]# chkconfig --list | grep iptables
Output:
iptables 0:off 1:off 2:off 3:off 4:off 5:off 6:off
3.2.3 Enable the web management plugin
Once the plugin is enabled, the web management interface is available at http://IP:15672/. The rabbitmq-server package installed by yum does not put the rabbitmq-plugins command on the search path, so it has to be run by its absolute path.
[root@open-node1 ~] # /usr/lib/rabbitmq/bin/rabbitmq-plugins list
[root@open-node1 ~] # /usr/lib/rabbitmq/bin/rabbitmq-plugins enable rabbitmq_management
[root@open-node1 ~] # /etc/init.d/rabbitmq-server restart
Open a local browser and enter http://IP:15672/, where the IP here is 192.168.56.111; that is, go to
http://192.168.56.111:15672 to open the RabbitMQ management interface.
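As a quick check that the plugin is answering requests, the management API can also be queried from the node itself; a small sketch, assuming RabbitMQ's default guest account (the same account the OpenStack services use later in this guide):
curl -s -u guest:guest http://localhost:15672/api/overview
A JSON document describing the broker indicates the management interface is working.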
4 Identity Service (Keystone)
Keystone provides authentication, service policy, and service token functions for the other OpenStack services and manages Domains, Projects, Users, Groups, and Roles. It has been part of the project since the Essex release.
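To make "tokens" concrete: once Keystone is running (section 4.3), any client can request a token directly from the v2.0 API. A minimal sketch, assuming the admin/admin credentials and the 192.168.56.111 endpoints configured later in this document:
curl -s -X POST http://192.168.56.111:5000/v2.0/tokens \
  -H "Content-Type: application/json" \
  -d '{"auth": {"tenantName": "admin", "passwordCredentials": {"username": "admin", "password": "admin"}}}'
The JSON response contains the token ID plus the service catalog, which is the same information that keystone token-get prints in section 4.4.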
4.1.1 Download the source packages
[root@open-node1 ~]# cd /usr/local/src
[root@open-node1 src]#
wget https://launchpad.net/keystone/icehouse/2014.1.3/+download/keystone-2014.1.3.tar.gz
wget https://launchpad.net/nova/icehouse/2014.1.3/+download/nova-2014.1.3.tar.gz
wget https://launchpad.net/glance/icehouse/2014.1.3/+download/glance-2014.1.3.tar.gz
wget https://launchpad.net/horizon/icehouse/2014.1.3/+download/horizon-2014.1.3.tar.gz
wget https://launchpad.net/neutron/icehouse/2014.1.3/+download/neutron-2014.1.3.tar.gz
wget https://launchpad.net/cinder/icehouse/2014.1.3/+download/cinder-2014.1.3.tar.gz
tar zxf keystone-2014.1.3.tar.gz
tar zxf nova-2014.1.3.tar.gz
tar zxf glance-2014.1.3.tar.gz
tar zxf neutron-2014.1.3.tar.gz
tar zxf horizon-2014.1.3.tar.gz
tar zxf cinder-2014.1.3.tar.gz
4.1.2 Install Keystone
[root@open-node1 src]# cd keystone-2014.1.3
4.2.2 Create the configuration files
[root@open-node1 keystone-2014.1.3]#
cp etc/keystone-paste.ini /etc/keystone
[root@open-node1 keystone-2014.1.3]#
cp etc/policy.v3cloudsample.json /etc/keystone
[root@open-node1 keystone-2014.1.3]# cd /etc/keystone
[root@open-node1 keystone]# ll
[root@open-node1 keystone]# mv policy.v3cloudsample.json policy.v3cloud.json
4.2.3 Configure Keystone
(1) Configure admin_token
[root@open-node1 ~]# ADMIN_TOKEN=$(openssl rand -hex 10)
[root@open-node1 ~]# echo $ADMIN_TOKEN
7de805df0ce7a2e127ab
The value above is the randomly generated token.
[root@open-node1 ~]# vim /etc/keystone/keystone.conf
set number (show line numbers in vim)
[DEFAULT]
(remove the leading # from each option before changing it)
13 admin_token=7de805df0ce7a2e127ab
374 debug=true
439 log_file=keystone.log
444 log_dir=/var/log/keystone
(3) Configure the database
Set the database connection with the connection option under [database]:
619 connection=mysql://keystone:keystone@192.168.56.111/keystone
(4) Verify the configuration
[root@open-node1 ~]#
grep "^[a-z]" /etc/keystone/keystone.conf
Output:
admin_token=9991692567660be100ff
debug=true
log_file=keystone.log
log_dir=/var/log/keystone
connection=mysql://keystone:keystone@192.168.56.111/keystone
[root@open-node1 keystone]# cd
4.2.4 Set up PKI tokens
[root@open-node1 ~]#
keystone-manage pki_setup --keystone-user root --keystone-group root
Output:
Generating RSA private key, 2048 bit long modulus
..........................+++
..............................+++
e is 65537 (0x10001)
Generating RSA private key, 2048 bit long modulus
.................................+++
.....................................+++
e is 65537 (0x10001)
Using configuration from /etc/keystone/ssl/certs/openssl.conf
Check that the request matches the signature
Signature ok
The Subject's Distinguished Name is as follows
countryName :PRINTABLE:'US'
stateOrProvinceName :ASN.1 12:'Unset'
localityName :ASN.1 12:'Unset'
organizationName :ASN.1 12:'Unset'
commonName :ASN.1 12:'www.example.com'
Certificate is to be certified until Sep 18 09:24:56 2026 GMT (3650 days)
Write out database with 1 new entries
Data Base Updated
[root@open-node1 ~]# chown -R root:root /etc/keystone/ssl (set ownership)
[root@open-node1 ~]# chmod -R o-rwx /etc/keystone/ssl (restrict permissions)
4.2.5 Sync the database
[root@open-node1 ~]# keystone-manage db_sync
[root@open-node1 ~]# mysql -h 192.168.56.111 -ukeystone -pkeystone -e " use keystone;show tables;"
Output:
+-----------------------+
| Tables_in_keystone |
+-----------------------+
| assignment |
| credential |
| domain |
| endpoint |
| group |
| migrate_version |
| policy |
| project |
| region |
| role |
| service |
| token |
| trust |
| trust_role |
| user |
| user_group_membership |
+-----------------------+
4.3 Managing Keystone
4.3.1 Start Keystone
[root@open-node1 ~]# keystone-all --config-file=/etc/keystone/keystone.conf
Output:
2016-09-21 18:15:20.071 11955 INFO keystone.common.environment.eventlet_server [-] Starting /usr/bin/keystone-all on 0.0.0.0:35357
2016-09-21 18:15:20.093 11955 INFO keystone.common.environment.eventlet_server [-] Starting /usr/bin/keystone-all on 0.0.0.0:5000
2016-09-21 18:15:20.094 11955 INFO eventlet.wsgi.server [-] (11955) wsgi starting up on http://0.0.0.0:35357/
2016-09-21 18:15:20.095 11955 INFO eventlet.wsgi.server [-] (11955) wsgi starting up on http://0.0.0.0:5000/
Press Ctrl+C to stop the foreground process, then run it in the background:
[root@open-node1 ~]# nohup keystone-all --config-file=/etc/keystone/keystone.conf &
[1] 10992
Check the log file to see how it is running:
[root@open-node1 ~]# tail -f /var/log/keystone/keystone.log
Output:
[root@open-node1 ~]# nohup: ignoring input and appending output to `nohup.out'
[root@open-node1 ~]# tail -f /var/log/keystone/keystone.log
Output:
2016-09-23 21:12:11.490 2199 DEBUG keystone-all [-] token.expiration = 3600 log_opt_values /usr/lib/python2.6/site-packages/oslo/config/cfg.py:1953
2016-09-23 21:12:11.490 2199 DEBUG keystone-all [-] token.provider = None log_opt_values /usr/lib/python2.6/site-packages/oslo/config/cfg.py:1953
2016-09-23 21:12:11.490 2199 DEBUG keystone-all [-] token.revocation_cache_time = 3600 log_opt_values /usr/lib/python2.6/site-packages/oslo/config/cfg.py:1953
2016-09-23 21:12:11.490 2199 DEBUG keystone-all [-] token.revoke_by_id = True log_opt_values /usr/lib/python2.6/site-packages/oslo/config/cfg.py:1953
2016-09-23 21:12:11.491 2199 DEBUG keystone-all [-] ******************************************************************************** log_opt_values /usr/lib/python2.6/site-packages/oslo/config/cfg.py:1955
2016-09-23 21:12:12.632 2199 WARNING keystone.openstack.common.versionutils [-] Deprecated: keystone.middleware.core.XmlBodyMiddleware is deprecated as of Icehouse in favor of support for "application/json" only and may be removed in K.
2016-09-23 21:12:12.690 2199 INFO keystone.common.environment.eventlet_server [-] Starting /usr/bin/keystone-all on 0.0.0.0:35357
2016-09-23 21:12:12.723 2199 INFO keystone.common.environment.eventlet_server [-] Starting /usr/bin/keystone-all on 0.0.0.0:5000
2016-09-23 21:12:12.724 2199 INFO eventlet.wsgi.server [-] (2199) wsgi starting up on http://0.0.0.0:35357/
2016-09-23 21:12:12.725 2199 INFO eventlet.wsgi.server [-] (2199) wsgi starting up on http://0.0.0.0:5000/
4.3.3 Create the admin user
[root@open-node1 ~]# export OS_SERVICE_TOKEN=$ADMIN_TOKEN
[root@open-node1~]#
export OS_SERVICE_TOKEN=7de805df0ce7a2e127ab
[root@open-node1~]#
export OS_SERVICE_ENDPOINT=http://192.168.56.111:35357/v2.0
[root@open-node1 ~]# keystone role-list
Output:
+----------------------------------+----------+
| id | name |
+----------------------------------+----------+
| 1894a90878d3e92bab9fe2ff9ee4384b | _member_ |
+----------------------------------+----------+
(1) Create the admin user
[root@open-node1 ~]#
keystone user-create --name=admin --pass=admin
Output:
+----------+----------------------------------+
| Property | Value |
+----------+----------------------------------+
| email | |
| enabled | True |
| id | 4ab52cb511186e4d56841b7fcf6894ed |
| name | admin |
| username | admin |
+----------+----------------------------------+
(2) Create the admin role
[root@open-node1 ~]# keystone role-create --name=admin
Output:
+----------+----------------------------------+
| Property | Value |
+----------+----------------------------------+
| id | 49f2c254f067137641678456382d94b8 |
| name | admin |
+----------+----------------------------------+
(3) Create the admin tenant
[root@open-node1 ~]#
keystone tenant-create --name=admin --description="Admin Tenant"
Output:
+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| description | Admin Tenant |
| enabled | True |
| id | cfacb264b2b3a4bbc5994460e80e5042 |
| name | admin |
+-------------+----------------------------------+
(4) Link the admin user, role, and tenant.
[root@open-node1 ~]# keystone user-role-add --user=admin --tenant=admin --role=admin
(5) Link the admin user, the _member_ role, and the admin tenant
[root@open-node1 ~]# keystone user-role-add --user=admin --role=_member_ --tenant=admin
Check the user, role, and tenant that were just created:
[root@open-node1 ~]# keystone user-list
+----------------------------------+-------+---------+-------+
| id | name | enabled | email |
+----------------------------------+-------+---------+-------+
| 4ab52cb511186e4d56841b7fcf6894ed | admin | True | |
+----------------------------------+-------+---------+-------+
[root@open-node1 ~]# keystone role-list
+----------------------------------+----------+
| id | name |
+----------------------------------+----------+
| 1894a90878d3e92bab9fe2ff9ee4384b | _member_ |
| 49f2c254f067137641678456382d94b8 | admin |
+----------------------------------+----------+
[root@open-node1 ~]# keystone tenant-list
+----------------------------------+-------+---------+
| id | name | enabled |
+----------------------------------+-------+---------+
| cfacb264b2b3a4bbc5994460e80e5042 | admin | True |
+----------------------------------+-------+---------+
4.3.4 Create an ordinary user
[root@open-node1 ~]# keystone user-create --name=demo --pass=demo
Output:
+----------+----------------------------------+
| Property | Value |
+----------+----------------------------------+
| email | |
| enabled | True |
| id | c5886edf6229406080c0ac7cfdbb5e94 |
| name | demo |
| username | demo |
+----------+----------------------------------+
[root@open-node1 ~]# keystone tenant-create --name=demo --description="Demo Tenant"
Output:
+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| description | Demo Tenant |
| enabled | True |
| id | 8701ad62c68b889ab6b046480f97444b |
| name | demo |
+-------------+----------------------------------+
[root@open-node1 ~]# keystone user-role-add --user=demo --role=_member_ --tenant=demo
4.3.5 Create the Keystone service and endpoint
[root@open-node1 ~]# keystone service-create --name=keystone --type=identity --description="OpenStack Identity"
Output:
+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| description | OpenStack Identity |
| enabled | True |
| id | 28365312bcd00630c36f820630c29bce |
| name | keystone |
| type | identity |
+-------------+----------------------------------+
List the registered services:
[root@open-node1 ~]# keystone service-list
+----------------------------------+----------+----------+--------------------+
| id | name | type | description |
+----------------------------------+----------+----------+--------------------+
| 28365312bcd00630c36f820630c29bce | keystone | identity | OpenStack Identity |
+----------------------------------+----------+----------+--------------------+
Creating the endpoint below requires the service ID generated when the service was created. Note that this ID is random, so use the service ID from your own service-list output above.
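To avoid copy-and-paste mistakes, the ID can also be captured into a shell variable first; a small sketch (the variable name is arbitrary, and the awk pattern assumes the table layout shown above):
KEYSTONE_SERVICE_ID=$(keystone service-list | awk '/ keystone / {print $2}')
echo $KEYSTONE_SERVICE_ID
The endpoint-create command below could then use --service-id=$KEYSTONE_SERVICE_ID.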
[root@open-node1 ~]# keystone endpoint-create \
> --service-id=28365312bcd00630c36f820630c29bce \ (use the ID from your own output)
> --publicurl=http://192.168.56.111:5000/v2.0 \
> --internalurl=http://192.168.56.111:5000/v2.0 \
> --adminurl=http://192.168.56.111:35357/v2.0
+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| adminurl | http://192.168.56.111:35357/v2.0 |
| id | db02d960816d42c58cd4fce08f2ca4c0 |
| internalurl | http://192.168.56.111:5000/v2.0 |
| publicurl | http://192.168.56.111:5000/v2.0 |
| region | regionOne |
| service_id | d29483b5f2ed49528c6fc6d72d5bdc99 |
+-------------+----------------------------------+
[root@open-node1 ~]# keystone endpoint-list
Output:
+----------------------------------+-----------+---------------------------------+---------------------------------+----------------------------------+----------------------------------+
| id | region | publicurl | internalurl | adminurl | service_id |
+----------------------------------+-----------+---------------------------------+---------------------------------+----------------------------------+----------------------------------+
| db02d960816d42c58cd4fce08f2ca4c0 | regionOne | http://192.168.56.111:5000/v2.0 | http://192.168.56.111:5000/v2.0 | http://192.168.56.111:35357/v2.0 | d29483b5f2ed49528c6fc6d72d5bdc99 |
+----------------------------------+-----------+---------------------------------+---------------------------------+----------------------------------+----------------------------------+
[root@open-node1 ~]# keystone --help | grep list
Output:
ec2-credentials-list
endpoint-list List configured service endpoints.
role-list List all roles.
service-list List all services in Service Catalog.
tenant-list List all tenants.
user-list List users.
user-role-list List roles granted to a user.
4.4 Verify the Keystone Installation
[root@open-node1 ~]#
unset OS_SERVICE_TOKEN OS_SERVICE_ENDPOINT
4.4.1 Verification tests
[root@open-node1 ~]#
keystone --os_username=admin --os_password=admin --os-auth-url=http://192.168.56.111:35357/v2.0 token-get
[root@open-node1 ~]#
keystone --os-username=admin --os-password=admin --os-tenant-name=admin --os-auth-url=http://192.168.56.111:35357/v2.0 token-get
4.4.2 Configure environment variable files
[root@open-node1 ~]# vim keystone-admin
Copy in the following content:
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_AUTH_URL=http://192.168.56.111:35357/v2.0
[root@open-node1 ~]# keystone token-get
[root@open-node1 ~]# source keystone-admin
[root@open-node1 ~]# keystone token-get
[root@open-node1 ~]# vim keystone-demo
Copy in the following content:
export OS_TENANT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=demo
export OS_AUTH_URL=http://192.168.56.111:35357/v2.0
[root@open-node1 ~]# source keystone-admin
[root@open-node1 ~]#
keystone user-role-list --user admin --tenant admin
[root@open-node1 ~]#
keystone user-role-list --user demo --tenant demo
[root@open-node1 ~]# source keystone-demo
[root@open-node1 ~]#
keystone user-role-list --user demo --tenant demo
Output:
You are not authorized to perform the requested action, admin_required. (HTTP 403)
5 Image Service (Glance)
5.1 Install Glance
[root@open-node1 ~]# cd /usr/local/src/glance-2014.1.3
[root@open-node1 ~]# python setup.py install
5.2 Prepare the Glance Configuration
5.2.1 Create the configuration directories
[root@open-node1 ~]#mkdir /etc/glance
[root@open-node1 ~]#mkdir /var/log/glance
[root@open-node1 ~]#mkdir /var/lib/glance
[root@open-node1 ~]#mkdir /var/run/glance
5.2.2 Copy the configuration files
[root@open-node1 ~]#cd /usr/local/src/glance-2014.1.3/etc
[root@open-node1 etc]# cp * /etc/glance
[root@open-node1 etc]# cd /etc/glance/
5.2.3 Rename some of the configuration files
[root@open-node1 ~]#mv logging.cnf.sample logging.cnf
[root@open-node1 ~]#
mv property-protections-policies.conf.sample property-protections-policies.conf
[root@open-node1 ~]#
mv property-protections-roles.conf.sample property-protections-roles.conf
5.3 Configure the MySQL Database
5.3.1 Configuration files
[root@open-node1 glance]# vim /etc/glance/glance-api.conf
connection=mysql://glance:glance@192.168.56.111/glance
[root@open-node1 glance]# vim glance-registry.conf
connection=mysql://glance:glance@192.168.56.111/glance
5.3.2 Sync the database
[root@open-node1 glance]# glance-manage db_sync
[root@open-node1 glance]#
mysql -h 192.168.56.111 -u glance -pglance -e "use glance;show tables;"
Output:
+------------------+
| Tables_in_glance |
+------------------+
| image_locations |
| image_members |
| image_properties |
| image_tags |
| images |
| migrate_version |
| task_info |
| tasks |
+------------------+
5.4 Configure RabbitMQ
[root@open-node1 ~]# vim /etc/glance/glance-api.conf
Change the following settings:
notifier_strategy = rabbit
rabbit_host = 192.168.56.111
rabbit_port = 5672
rabbit_use_ssl = false
rabbit_userid = guest
rabbit_password = guest
rabbit_virtual_host = /
rabbit_notification_exchange = glance
5.5 Configure the Keystone settings (modified per the lab manual; note that the manual itself contains errors in this part)
[root@open-node1 ~]# vim /etc/glance/glance-api.conf
648 admin_tenant_name=admin
[root@open-node1 ~]# vim /etc/glance/glance-registry.conf
178 admin_tenant_name=admin
[root@open-node1 ~]# diff /usr/local/src/glance-2014.1.3/etc/glance-api.conf /etc/glance/glance-api.conf
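The manual only calls out admin_tenant_name, but for Glance to validate tokens the rest of the [keystone_authtoken] section, and flavor=keystone under [paste_deploy], must also point at this Keystone in both glance-api.conf and glance-registry.conf. A sketch of the values this environment implies (an assumption based on the credentials used elsewhere in this guide):
[keystone_authtoken]
auth_host = 192.168.56.111
auth_port = 35357
auth_protocol = http
admin_tenant_name = admin
admin_user = admin
admin_password = admin
[paste_deploy]
flavor = keystone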
5.6 Start Glance
5.6.1 Start Glance from the command line
[root@open-node1 ~]#
glance-api --config-file=/etc/glance/glance-api.conf
2016-10-02 15:20:24.269 10641 INFO glance.wsgi.server [-] Starting 1 workers
2016-10-02 15:20:24.274 10641 INFO glance.wsgi.server [-] Started child 10648
2016-10-02 15:20:24.280 10648 INFO glance.wsgi.server [-] (10648) wsgi starting up on http://0.0.0.0:9292/
[root@open-node1 ~]#
glance-registry --config-file=/etc/glance/glance-registry.conf
2016-10-02 15:25:38.859 10763 INFO glance.wsgi.server [-] Starting 1 workers
2016-10-02 15:25:38.863 10763 INFO glance.wsgi.server [-] Started child 10768
2016-10-02 15:25:38.869 10768 INFO glance.wsgi.server [-] (10768) wsgi starting up on http://0.0.0.0:9191/
5.6.2 Start Glance with init scripts
[root@open-node1 ~]#git clone https://github.com/unixhot/openstack-inc.git
[root@open-node1 ~]#cd openstack-inc/control/init.d
[root@open-node1 init.d]#
cp openstack-keystone openstack-glance-* /etc/init.d/
cp: overwrite `/etc/init.d/openstack-keystone'? y
cp: overwrite `/etc/init.d/openstack-glance-api'? y
cp: overwrite `/etc/init.d/openstack-glance-registry'? Y
[root@open-node1 ~]# chmod +x /etc/init.d/openstack-glance-*
[root@open-node1 ~]# chkconfig --add openstack-glance-api
[root@open-node1 ~]#
chkconfig --add openstack-glance-registry
[root@open-node1 ~]# chkconfig openstack-glance-api on
[root@open-node1 ~]# chkconfig openstack-glance-registry on
[root@open-node1 ~]# /etc/init.d/openstack-glance-api start
Starting openstack-glance-api: [ OK ]
[root@open-node1 ~]#
/etc/init.d/openstack-glance-registry start
Starting openstack-glance-registry: [ OK ]
[root@open-node1 ~]# chkconfig --add openstack-keystone
[root@open-node1 ~]# chkconfig openstack-keystone on
[root@open-node1 ~]# ps aux | grep keystone
root 2419 0.0 2.8 398216 55736 pts/0 S 09:27 0:01 /usr/bin/python /usr/bin/keystone-all --config-file=/etc/keystone/keystone.conf
root 11044 0.0 0.0 103252 828 pts/0 S+ 15:36 0:00 grep keystone
[root@open-node1 ~]# /etc/init.d/openstack-keystone start
Starting keystone: [ OK ]
[root@open-node1 ~]# ps aux | grep keystone
root 2419 0.0 2.8 398216 55736 pts/0 S 09:27 0:01 /usr/bin/python /usr/bin/keystone-all --config-file=/etc/keystone/keystone.conf
root 11074 0.0 0.0 103252 828 pts/0 S+ 15:36 0:00 grep keystone
5.7 Test Glance
Glance is a lookup and retrieval system for virtual machine images. It supports multiple image formats (AKI, AMI, ARI, ISO, QCOW2, Raw, VDI, VHD, VMDK) and provides the basic functions of creating and uploading images, deleting images, and editing basic image metadata. It has been part of the project since the Bexar release.
Typical uses of this kind of storage service:
• as the storage service of an IaaS;
• integrated with OpenStack Compute, storing its images;
• document storage;
• storing data that must be kept long term, such as logs;
• storing a website's pictures, thumbnails, and so on.
OpenStack Project Architecture, Part 3: Glance Architecture
The OpenStack Image Service provides discovery, registration, and retrieval of virtual machine images for OpenStack Nova. Through Glance, virtual machine images can be stored on several kinds of back ends, such as a simple file store or an object store (for example the Swift project in OpenStack).
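In this lab Glance keeps images on the local file store; the glance-api.conf options this relies on (shown as a sketch of the assumed defaults, matching the /var/lib/glance/images path seen in section 5.7.2) are:
default_store = file
filesystem_store_datadir = /var/lib/glance/images/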
Glance component architecture
• In the current reference implementation of Glance, the Registry Server simply uses a SQL database to store metadata.
• At the front end, the API Server serves multiple clients.
• Multiple storage back ends can be used. Glance currently supports S3, Swift, a simple file store, and read-only HTTPS storage.
• Other back ends, such as distributed storage systems (Sheepdog or Ceph), may be supported later.
Characteristics of the Glance component architecture:
1. Component-based architecture: makes it easy to add new features quickly.
2. High availability: supports heavy loads.
3. Fault tolerance: independent processes avoid cascading failures.
4. Open standards: a reference implementation of a community-driven API.
OpenStack functionality
1. The Dashboard provides resource-pool management, reorganizing physical resources into resource pools.
2. It provides command-line live migration of virtual machines and full lifecycle management of instances, such as creating, starting, suspending, resuming, shutting down, migrating, and destroying them.
3. A commonly used runtime environment can be saved as a virtual machine template, which makes it easy to create a series of identical or similar environments; user templates have to be created manually, similar to Eucalyptus.
4. Where compute resources allow, it provides high availability, dynamic load balancing, and backup and recovery.
5. All physical and virtual machines can be monitored, with reports generated and alerts raised when necessary; the monitoring and reporting functions are said to be implementable with external components.
5.7.1 Register Glance in Keystone
[root@open-node1 ~]# glance image-list
[root@open-node1 ~]# source keystone-admin
[root@open-node1 ~]# glance image-list
[root@open-node1 ~]# keystone service-create --name=glance --type=image
+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| description | |
| enabled | True |
| id | 4c167f51163d462aae11f9112144836b |
| name | glance |
| type | image |
+-------------+----------------------------------+
[root@open-node1 ~]# keystone endpoint-create \
> --service-id=4c167f51163d462aae11f9112144836b \
> --publicurl=http://192.168.56.111:9292 \
> --internalurl=http://192.168.56.111:9292 \
> --adminurl=http://192.168.56.111:9292
+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| adminurl | http://192.168.56.111:9292 |
| id | 85f05ad0fa0e4510846bd0c1a6bee7c9 |
| internalurl | http://192.168.56.111:9292 |
| publicurl | http://192.168.56.111:9292 |
| region | regionOne |
| service_id | 73e4849e6c6f49fdbee4b0bce8247fe4 |
+-------------+----------------------------------+
[root@open-node1 ~]# keystone service-list
+----------------------------------+----------+----------+--------------------+
| id | name | type | description |
+----------------------------------+----------+----------+--------------------+
| 73e4849e6c6f49fdbee4b0bce8247fe4 | glance | image | |
| 136b5312bcd044b2836f820630c29bce | keystone | identity | OpenStack Identity |
+----------------------------------+----------+----------+--------------------+
[root@open-node1 ~]# keystone endpoint-list
[root@open-node1 ~]# glance image-list
+----+------+-------------+------------------+------+--------+
| ID | Name | Disk Format | Container Format | Size | Status |
+----+------+-------------+------------------+------+--------+
5.7.2 Test Glance with an image
Download an image:
[root@openstack-node1 ~]# wget https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img
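Before uploading, it can be worth confirming that the downloaded file really is in qcow2 format, since that is the disk format passed to glance below; a quick check, assuming the qemu-img tool is available on the node:
qemu-img info cirros-0.3.0-x86_64-disk.img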
Then upload it:
[root@openstack-node1 ~]# glance image-create --name "cirros-0.3.0-x86_64" --disk-format qcow2 --container-format bare --is-public True --file cirros-0.3.0-x86_64-disk.img
The command output lists the image's metadata.
[root@openstack-node1 ~]# glance image-list
[root@openstack-node1 ~]# cd /var/lib/glance/images
[root@openstack-node1 images]# ll
total 9536
-rw-r----- 1 root root 9761280 Sep 27 17:05 9446e7de-5e5b-40cf-8be5-b2eb089f2447
Images are saved on disk under their image ID.
6 Compute Services (Nova)
Nova is a set of controllers that manages the entire life cycle of virtual machine instances for individual users or groups, providing virtual servers on demand. It is responsible for creating, booting, shutting down, suspending, pausing, resizing, migrating, rebooting, and destroying instances, and for configuring specifications such as CPU and memory. It has been part of the project since the Austin release.
How Nova works: a user requests resources through Horizon (the dashboard), which calls nova-api. OpenStack first authenticates the user, which is handled by the Keystone module, and the scheduler (nova-scheduler) then decides on which compute node the new virtual machine will be created. All of these tasks communicate asynchronously through the MQ.
Cloud administrators can also manage and create virtual machines with Euca2ools, because OpenStack supports the EC2 and S3 interfaces.
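Once the rest of the stack, including a network (section 8), is in place, that whole chain is exercised by a single boot request. A sketch of what such a request might look like with the flavor and image used in this guide; the network name flat_net is a placeholder for a network you have created yourself:
NET_ID=$(neutron net-list | awk '/ flat_net / {print $2}')
nova boot --flavor m1.tiny --image cirros-0.3.0-x86_64 --nic net-id=$NET_ID demo-instance
nova list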
OpenStack Project Architecture, Part 2: Swift Architecture
OpenStack Object Storage (Swift) is one of the sub-projects of the open-source OpenStack cloud computing project; it grew out of the Rackspace Cloud Files project. OpenStack Object Storage stores objects in a large-capacity system with built-in redundancy and fault tolerance. Object storage has many uses, such as backing up or archiving data, storing graphics or video, storing secondary or tertiary static data, developing new applications that integrate with data storage, storing data when capacity is hard to predict, and building elastic, flexible cloud-storage web applications.
Swift functionality
Swift builds redundant, scalable, distributed object storage clusters out of ordinary servers, with capacity up to the petabyte level. Swift offers the same service as AWS S3 and can be used for the kinds of purposes listed in section 5.7 above.
Flow of creating an instance in OpenStack: 1. the user sends a request to nova-api.
The user sends the request to nova-api; this can happen in two ways:
a. through the OpenStack API,
starting from server.py's controller.create().
The request then waits until the instance's state becomes running.
a. The networker allocates an IP address.
When installing the control node, every Nova service except nova-compute needs to be installed.
6.1 Install Nova
[Already installed through yum in section 2.2.4; the source install below can be skipped]
[root@openstack-node1 ~]# cd /usr/local/src/nova-2014.1.3
[root@openstack-node1 nova-2014.1.3]#python setup.py install
6.2 Create the Configuration Files
6.2.1 Create the required directories
[root@openstack-node1 nova-2014.1.3]# mkdir /etc/nova
[root@openstack-node1 nova-2014.1.3]# mkdir /var/log/nova
[root@openstack-node1 nova-2014.1.3]# mkdir /var/lib/nova/instances -p
[root@openstack-node1 nova-2014.1.3]# mkdir /var/run/nova
6.2.2 Copy the configuration files
[root@openstack-node1 nova-2014.1.3]# cd etc/nova/
[root@openstack-node1 nova]# cp -a * /etc/nova/
cp: overwrite `/etc/nova/api-paste.ini'? y
cp: overwrite `/etc/nova/policy.json'? y
cp: overwrite `/etc/nova/rootwrap.conf'? y
[root@openstack-node1 nova]# mv logging_sample.conf logging.conf
6.3 Configure Nova
6.3.1 Configure the database
[root@openstack-node1 nova]# vim /etc/nova/nova.conf
2475 connection=mysql://nova:nova@192.168.56.111/nova
6.3.2 Sync the database
[root@openstack-node1 ~]# nova-manage db sync
Check that the database synced correctly:
[root@openstack-node1 ~]# mysql -h 192.168.56.111 -unova -pnova -e "use nova;show tables;"
+--------------------------------------------+
| Tables_in_nova |
+--------------------------------------------+
| agent_builds |
| aggregate_hosts |
| aggregate_metadata |
| aggregates |
| block_device_mapping |
| bw_usage_cache |
| cells |
| certificates |
| compute_nodes |
| console_pools |
| consoles |
| dns_domains |
| fixed_ips |
| floating_ips |
| instance_actions |
| instance_actions_events |
| instance_faults |
| instance_group_member |
| instance_group_metadata |
| instance_group_policy |
| instance_groups |
| instance_id_mappings |
| instance_info_caches |
| instance_metadata |
| instance_system_metadata |
| instance_type_extra_specs |
| instance_type_projects |
| instance_types |
| instances |
| iscsi_targets |
| key_pairs |
| migrate_version |
| migrations |
| networks |
| pci_devices |
| project_user_quotas |
| provider_fw_rules |
| quota_classes |
| quota_usages |
| quotas |
| reservations |
| s3_images |
| security_group_default_rules |
| security_group_instance_association |
| security_group_rules |
| security_groups |
| services |
| shadow_agent_builds |
| shadow_aggregate_hosts |
| shadow_aggregate_metadata |
| shadow_aggregates |
| shadow_block_device_mapping |
| shadow_bw_usage_cache |
| shadow_cells |
| shadow_certificates |
| shadow_compute_nodes |
| shadow_console_pools |
| shadow_consoles |
| shadow_dns_domains |
| shadow_fixed_ips |
| shadow_floating_ips |
| shadow_instance_actions |
| shadow_instance_actions_events |
| shadow_instance_faults |
| shadow_instance_group_member |
| shadow_instance_group_metadata |
| shadow_instance_group_policy |
| shadow_instance_groups |
| shadow_instance_id_mappings |
| shadow_instance_info_caches |
| shadow_instance_metadata |
| shadow_instance_system_metadata |
| shadow_instance_type_extra_specs |
| shadow_instance_type_projects |
| shadow_instance_types |
| shadow_instances |
| shadow_iscsi_targets |
| shadow_key_pairs |
| shadow_migrate_version |
| shadow_migrations |
| shadow_networks |
| shadow_pci_devices |
| shadow_project_user_quotas |
| shadow_provider_fw_rules |
| shadow_quota_classes |
| shadow_quota_usages |
| shadow_quotas |
| shadow_reservations |
| shadow_s3_images |
| shadow_security_group_default_rules |
| shadow_security_group_instance_association |
| shadow_security_group_rules |
| shadow_security_groups |
| shadow_services |
| shadow_snapshot_id_mappings |
| shadow_snapshots |
| shadow_task_log |
| shadow_virtual_interfaces |
| shadow_volume_id_mappings |
| shadow_volume_usage_cache |
| shadow_volumes |
| snapshot_id_mappings |
| snapshots |
| task_log |
| virtual_interfaces |
| volume_id_mappings |
| volume_usage_cache |
| volumes |
+--------------------------------------------+
6.3.3 RabbitMQ settings
[root@openstack-node1 ~]# vim /etc/nova/nova.conf
72 rabbit_host=192.168.56.111
83 rabbit_port=5672
92 rabbit_userid=guest
95 rabbit_password=guest
189 rpc_backend=rabbit
6.3.4 VNC settings
2036 novncproxy_base_url=http://192.168.56.111:6080/vnc_auto.html
2044 vncserver_listen=0.0.0.0
2048 vncserver_proxyclient_address=192.168.56.111
2051 vnc_enabled=true
2054 vnc_keymap=en-us
6.3.5 Keystone settings
544 auth_strategy=keystone
2687 auth_host=192.168.56.111
2690 auth_port=35357
2694 auth_protocol=http
2697 auth_uri=http://192.168.56.111:5000
2701 auth_version=v2.0
2728 admin_user=admin
2731 admin_password=admin
2735 admin_tenant_name=admin
6.3.6 Other settings
302 state_path=/var/lib/nova
885 instances_path=$state_path/instances
1576 lock_path=/var/lib/nova/tmp
6.3.7 Review the configuration
[root@openstack-node1 ~]# grep "^[a-z]" /etc/nova/nova.conf
6.4 Create the Nova Service and Endpoint
[root@openstack-node1 ~]# source keystone-admin
[root@openstack-node1 ~]# keystone service-list
+----------------------------------+----------+----------+--------------------+
| id | name | type | description |
+----------------------------------+----------+----------+--------------------+
| 2fc2f88956d445eeb1e1d61d0b79c6e8 | glance | image | |
| 07484bbbeccd447ab8513e707af84944 | keystone | identity | OpenStack Identity |
+----------------------------------+----------+----------+--------------------+
6.4.1 Create the Nova service
[root@openstack-node1 ~]# keystone service-create --name=nova --type=compute
+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| description | |
| enabled | True |
| id | 9602f000ea7046dca8c98049bd05add6 |
| name | nova |
| type | compute |
+-------------+----------------------------------+
6.4.2 Create the Nova endpoint
[root@openstack-node1 ~]# keystone endpoint-create --service-id=9602f000ea7046dca8c98049bd05add6 --publicurl=http://192.168.56.111:8774/v2/%\(tenant_id\)s --internalurl=http://192.168.56.111:8774/v2/%\(tenant_id\)s --adminurl=http://192.168.56.111:8774/v2/%\(tenant_id\)s
+-------------+---------------------------------------------+
| Property | Value |
+-------------+---------------------------------------------+
| adminurl | http://192.168.56.111:8774/v2/%(tenant_id)s |
| id | 975e5ecc6f6f439880dff67aeda45cc6 |
| internalurl | http://192.168.56.111:8774/v2/%(tenant_id)s |
| publicurl | http://192.168.56.111:8774/v2/%(tenant_id)s |
| region | regionOne |
| service_id | 9602f000ea7046dca8c98049bd05add6 |
+-------------+---------------------------------------------+
6.4.3 Check the services registered in Keystone
[root@openstack-node1 ~]# keystone service-list
+----------------------------------+----------+----------+--------------------+
| id | name | type | description |
+----------------------------------+----------+----------+--------------------+
| 2fc2f88956d445eeb1e1d61d0b79c6e8 | glance | image | |
| 07484bbbeccd447ab8513e707af84944 | keystone | identity | OpenStack Identity |
| 9602f000ea7046dca8c98049bd05add6 | nova | compute | |
+----------------------------------+----------+----------+--------------------+
The Nova service has been created successfully.
Note: a superfluous service can be removed with keystone service-delete <id>,
for example: keystone service-delete 9602f000ea7046dca8c98049bd05add6
6.5 Start the Nova Services
[root@openstack-node1 ~]# mkdir /var/lib/nova/tmp
[root@openstack-node1 ~]# cd openstack-inc/control/init.d
[root@openstack-node1 init.d]# cp openstack-nova-* /etc/init.d/
[root@openstack-node1 init.d]# chmod +x /etc/init.d/openstack-nova-*
[root@openstack-node1 ~]# for i in {api,cert,conductor,console,consoleauth,novncproxy,scheduler};do chkconfig --add openstack-nova-$i;done
[root@openstack-node1 ~]# for i in {api,cert,conductor,console,consoleauth,novncproxy,scheduler};do chkconfig openstack-nova-$i on;done
[root@openstack-node1 ~]# for i in {api,cert,conductor,console,consoleauth,novncproxy,scheduler};do service openstack-nova-$i start;done
[root@openstack-node1 ~]# ps aux |grep nova
openstack-nova-novncproxy turns out not to be running.
The following steps update websockify and start novncproxy:
[root@openstack-node1 ~]# /etc/init.d/openstack-nova-novncproxy start
[root@openstack-node1 ~]# /etc/init.d/openstack-nova-novncproxy status
[root@openstack-node1 ~]# pip install websockify==0.5.1
[root@openstack-node1 ~]# /etc/init.d/openstack-nova-novncproxy start
[root@openstack-node1 ~]# /etc/init.d/openstack-nova-novncproxy status
6.6 Install noVNC and Start the Proxy
[root@openstack-node1 ~]# cd /usr/local/src
[root@openstack-node1 src]# wget https://github.com/kanaka/noVNC/archive/v0.5.tar.gz
[root@openstack-node1 src]# tar zxf v0.5.tar.gz
[root@openstack-node1 src]# mv noVNC-0.5/ /usr/share/novnc
[root@openstack-node1 ~]# /etc/init.d/openstack-nova-novncproxy start
6.7 Verify the Nova Installation
[root@openstack-node1 ~]# nova host-list
+-----------------------------+-------------+----------+
| host_name | service | zone |
+-----------------------------+-------------+----------+
| openstack-node1.example.com | consoleauth | internal |
| openstack-node1.example.com | conductor | internal |
| openstack-node1.example.com | console | internal |
| openstack-node1.example.com | cert | internal |
| openstack-node1.example.com | scheduler | internal |
+-----------------------------+-------------+----------+
[root@openstack-node1 ~]# nova flavor-list
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1 | m1.tiny | 512 | 1 | 0 | | 1 | 1.0 | True |
| 2 | m1.small | 2048 | 20 | 0 | | 1 | 1.0 | True |
| 3 | m1.medium | 4096 | 40 | 0 | | 2 | 1.0 | True |
| 4 | m1.large | 8192 | 80 | 0 | | 4 | 1.0 | True |
| 5 | m1.xlarge | 16384 | 160 | 0 | | 8 | 1.0 | True |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
Nova configuration is complete.
7 Dashboard (Horizon)
Horizon is the web management portal for the various OpenStack services. It simplifies operating them, for example launching instances, assigning IP addresses, and configuring access control. It has been part of the project since the Essex release.
Cinder provides stable block storage for running instances. Its pluggable driver architecture simplifies creating and managing block devices, such as creating and deleting volumes and attaching and detaching volumes on instances. It has been part of the project since the Folsom release.
7.1 Install Horizon
7.2 Configure Horizon
[root@openstack-node1 ~]# cd /usr/local/src/
[root@openstack-node1 src]# mv horizon-2014.1.3 /var/www/
[root@openstack-node1 src]# cd /var/www/horizon-2014.1.3/openstack_dashboard/local
[root@openstack-node1 local]# mv local_settings.py.example local_settings.py
Change the following setting:
128 OPENSTACK_HOST = "192.168.56.111"
7.3 Configure Apache
Related topic: session handling in a clustered deployment (see the sketch below).
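If the dashboard were ever scaled out across several Apache processes or hosts, sessions would have to live in shared storage. A minimal sketch of one common approach, keeping sessions in the memcached instance installed in section 2.2.6, by adding the following to openstack_dashboard/local/local_settings.py (the exact values are an assumption, not part of the lab manual):
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '127.0.0.1:11211',
    }
}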
[root@openstack-node1 ~]# chown -R apache:apache /var/www/horizon-2014.1.3/
[root@openstack-node1 ~]# vim /etc/httpd/conf.d/horizon.conf
<VirtualHost *:80>
    ServerAdmin [email protected]
    ServerName 192.168.56.111
    DocumentRoot /var/www/horizon-2014.1.3/
    ErrorLog /var/log/httpd/horizon_error.log
    LogLevel info
    CustomLog /var/log/httpd/horizon_access.log combined
    WSGIScriptAlias / /var/www/horizon-2014.1.3/openstack_dashboard/wsgi/django.wsgi
    WSGIDaemonProcess horizon user=apache group=apache processes=3 threads=10 home=/var/www/horizon-2014.1.3
    WSGIApplicationGroup horizon
    SetEnv APACHE_RUN_USER apache
    SetEnv APACHE_RUN_GROUP apache
    WSGIProcessGroup horizon
    Alias /media /var/www/horizon-2014.1.3/openstack_dashboard/static
    <Directory /var/www/horizon-2014.1.3/>
        Options FollowSymLinks MultiViews
        AllowOverride None
        Order allow,deny
        Allow from all
    </Directory>
</VirtualHost>
WSGISocketPrefix /var/run/horizon
7.4 Start Apache
[root@openstack-node1 ~]# chown -R apache:apache /var/www/horizon-2014.1.3/
[root@openstack-node1 ~]# /etc/init.d/httpd restart
Stopping httpd: [ OK ]
Starting httpd: httpd: Could not reliably determine the server's fully qualified domain name, using openstack-node1.example.com for ServerName
[ OK ]
If the message "httpd: Could not reliably determine the server's fully qualified domain name, using openstack-node1.example.com for ServerName" appears, edit /etc/httpd/conf/httpd.conf and uncomment the
ServerName www.example.com:80 line.
8 Networking Services (Neutron)
Neutron provides network virtualization for the cloud and supplies network connectivity to the other OpenStack services. It gives users an interface for defining Networks, Subnets, and Routers and for configuring DHCP, DNS, load balancing, and L3 services; networks can be built on GRE or VLAN. Its plugin architecture supports many mainstream network vendors and technologies, such as Open vSwitch. It has been part of the project since the Folsom release.
8.1 Install Neutron
[root@openstack-node1 ~]# cd /usr/local/src/neutron-2014.1.3
8.2 Configure Neutron
8.2.1 Initialize the configuration files
Copy the template configuration files into the configuration directory.
[root@openstack-node1 neutron-2014.1.3]# cp -a etc/* /etc/neutron/
cp: overwrite `/etc/neutron/dhcp_agent.ini'? y
cp: overwrite `/etc/neutron/fwaas_driver.ini'? y
cp: overwrite `/etc/neutron/l3_agent.ini'? y
cp: overwrite `/etc/neutron/lbaas_agent.ini'? y
cp: overwrite `/etc/neutron/metadata_agent.ini'? y
cp: overwrite `/etc/neutron/neutron.conf'? y
cp: overwrite `/etc/neutron/policy.json'? y
cp: overwrite `/etc/neutron/rootwrap.conf'? y
8.2.2 Neutron database settings
[root@openstack-node1 neutron-2014.1.3]# cd
[root@openstack-node1 ~]# vim /etc/neutron/neutron.conf
411 connection=mysql://neutron:neutron@192.168.56.111:3306/neutron
8.2.3 Keystone settings
[root@openstack-node1 ~]# vim /etc/neutron/neutron.conf
70 auth_strategy = keystone
8.2.4 RabbitMQ settings
[root@openstack-node1 ~]# vim /etc/neutron/neutron.conf
8.2.5 Nova-related settings in neutron.conf
[root@openstack-node1 ~]# vim /etc/neutron/neutron.conf
327 nova_admin_auth_url = http://192.168.56.111:35357/v2.0
8.2.6 Network and logging settings
Network settings:
53 core_plugin = ml2
62 service_plugins = router,lbaas
Logging settings:
3 verbose = true
6 debug = true
29 log_file = neutron.log
30 log_dir = /var/log/neutron
Review the configuration:
[root@openstack-node1 neutron]# grep "^[a-z]" /etc/neutron/neutron.conf
verbose = true
debug = true
lock_path = $state_path/lock
log_file = neutron.log
log_dir = /var/log/neutron
core_plugin = ml2
service_plugins = router,lbaas
auth_strategy = keystone
rabbit_host = 192.168.56.111
rabbit_password = guest
rabbit_port = 5672
rabbit_userid = guest
rabbit_virtual_host = /
notification_driver = neutron.openstack.common.notifier.rpc_notifier
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://192.168.56.111:8774/v2
nova_admin_username = admin
nova_admin_password = admin
nova_admin_auth_url = http://192.168.56.111:35357/v2.0
auth_host = 192.168.56.111
auth_port = 35357
auth_protocol = http
admin_tenant_name = admin
admin_user = admin
admin_password = admin
signing_dir = $state_path/keystone-signing
connection=mysql://neutron:neutron@192.168.56.111:3306/neutron
service_provider=LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default
service_provider=VPN:openswan:neutron.services.vpn.service_drivers.ipsec.IPsecVPNDriver:default
8.2.7 Neutron-related settings in nova.conf
[root@openstack-node1 ~]# vim /etc/nova/nova.conf
253 my_ip=192.168.56.111
1200 network_api_class=nova.network.neutronv2.api.API
1321 linuxnet_interface_driver=nova.network.linux_net.LinuxBridgeInterfaceDriver
1464 notify_nova_on_port_status_changes=true
1466 neutron_url=http://192.168.56.111:9696
1474 neutron_admin_username=admin
1478 neutron_admin_password=admin
1488 neutron_admin_tenant_name=admin
1496 neutron_admin_auth_url=http://192.168.56.111:5000/v2.0
1503 neutron_auth_strategy=keystone
1536 security_group_api=neutron
1966 vif_plugging_is_fatal=false
1973 vif_plugging_timeout=10
1982 firewall_driver=nova.virt.firewall.NoopFirewallDriver
2872 vif_driver=nova.virt.libvirt.vif.NeutronLinuxBridgeVIFDriver
After making these changes, the Nova services must be restarted.
[root@openstack-node1 neutron]# cd
[root@openstack-node1 ~]# vim /etc/nova/nova.conf
[root@openstack-node1 ~]# for i in {api,cert,conductor,consoleauth,novncproxy,scheduler};do /etc/init.d/openstack-nova-$i restart;done
Stopping openstack-nova-api: [ OK ]
Starting openstack-nova-api: [ OK ]
Stopping openstack-nova-cert: [ OK ]
Starting openstack-nova-cert: [ OK ]
Stopping openstack-nova-conductor: [ OK ]
Starting openstack-nova-conductor: [ OK ]
Stopping openstack-nova-consoleauth: [ OK ]
Starting openstack-nova-consoleauth: [ OK ]
Stopping openstack-nova-novncproxy: [ OK ]
Starting openstack-nova-novncproxy: [ OK ]
Stopping openstack-nova-scheduler: [ OK ]
Starting openstack-nova-scheduler: [ OK ]
8.2.8 Create the Neutron service and endpoint
[root@openstack-node1 ~]# keystone service-create --name neutron --type network
/usr/lib64/python2.6/site-packages/Crypto/Util/number.py:57: PowmInsecureWarning: Not using mpz_powm_sec. You should rebuild using libgmp >= 5 to avoid timing attack vulnerability.
_warn("Not using mpz_powm_sec. You should rebuild using libgmp >= 5 to avoid timing attack vulnerability.", PowmInsecureWarning)
+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| description | |
| enabled | True |
| id | d91b0b73659a58c03e5fdd0874b9e4c4 |
| name | neutron |
| type | network |
+-------------+----------------------------------+
[root@openstack-node1 ~]# keystone endpoint-create --service_id=d91b0b73659a58c03e5fdd0874b9e4c4 --publicurl=http://192.168.56.111:9696 --adminurl=http://192.168.56.111:9696 --internalurl=http://192.168.56.111:9696
/usr/lib64/python2.6/site-packages/Crypto/Util/number.py:57: PowmInsecureWarning: Not using mpz_powm_sec. You should rebuild using libgmp >= 5 to avoid timing attack vulnerability.
_warn("Not using mpz_powm_sec. You should rebuild using libgmp >= 5 to avoid timing attack vulnerability.", PowmInsecureWarning)
+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| adminurl | http://192.168.56.111:9696 |
| id | cda66403995b8574936b074fd277a946 |
| internalurl | http://192.168.56.111:9696 |
| publicurl | http://192.168.56.111:9696 |
| region | regionOne |
| service_id | d91b0b73659a58c03e5fdd0874b9e4c4 |
+-------------+----------------------------------+
[root@openstack-node1 ~]# keystone service-list
/usr/lib64/python2.6/site-packages/Crypto/Util/number.py:57: PowmInsecureWarning: Not using mpz_powm_sec. You should rebuild using libgmp >= 5 to avoid timing attack vulnerability.
_warn("Not using mpz_powm_sec. You should rebuild using libgmp >= 5 to avoid timing attack vulnerability.", PowmInsecureWarning)
+----------------------------------+----------+----------+--------------------+
| id | name | type | description |
+----------------------------------+----------+----------+--------------------+
| 35d71cc7cece4b07ae64f26f32cf4ce4 | glance | image | |
| 7b829b9cdd4b440294067009cf7ae206 | keystone | identity | OpenStack Identity |
| d91b0b73659a58c03e5fdd0874b9e4c4 | neutron | network | |
| c91735236d1c4a2ca2c0094806a4fbf9 | nova | compute | |
+----------------------------------+----------+----------+--------------------+
[root@openstack-node1 ~]# keystone endpoint-list
/usr/lib64/python2.6/site-packages/Crypto/Util/number.py:57: PowmInsecureWarning: Not using mpz_powm_sec. You should rebuild using libgmp >= 5 to avoid timing attack vulnerability.
_warn("Not using mpz_powm_sec. You should rebuild using libgmp >= 5 to avoid timing attack vulnerability.", PowmInsecureWarning)
+----------------------------------+-----------+---------------------------------------------+---------------------------------------------+---------------------------------------------+----------------------------------+
| id | region | publicurl | internalurl | adminurl | service_id |
+----------------------------------+-----------+---------------------------------------------+---------------------------------------------+---------------------------------------------+----------------------------------+
| 06d8b0042ba04935aa8654f8063ad1dc | regionOne | http://192.168.56.111:5000/v2.0 | http://192.168.56.111:5000/v2.0 | http://192.168.56.111:35357/v2.0 | 7b829b9cdd4b440294067009cf7ae206 |
| 6b074fd277a946cda66403995b857493 | regionOne | http://192.168.56.111:9696 | http://192.168.56.111:9696 | http://192.168.56.111:9696 | e5fdd0874b9e4c4d91b0b73659a58c03 |
| 9356cedf4f1947d1ab2076ec9a89f5ec | regionOne | http://192.168.56.111:9292 | http://192.168.56.111:9292 | http://192.168.56.111:9292 | 35d71cc7cece4b07ae64f26f32cf4ce4 |
| e858b4c5c8e04d6fb0458aac83316acd | regionOne | http://192.168.56.111:8774/v2/%(tenant_id)s | http://192.168.56.111:8774/v2/%(tenant_id)s | http://192.168.56.111:8774/v2/%(tenant_id)s | c91735236d1c4a2ca2c0094806a4fbf9 |
+----------------------------------+-----------+---------------------------------------------+---------------------------------------------+---------------------------------------------+----------------------------------+
8.3 Neutron Plugins
8.3.1 Neutron ML2 settings
[root@openstack-node1 ~]# vim /etc/neutron/plugins/ml2/ml2_conf.ini
5 type_drivers = flat,vlan,gre,vxlan
12 tenant_network_types = flat
17 mechanism_drivers = linuxbridge
29 flat_networks = physnet1
62 enable_security_group = True
8.3.2 Linuxbridge settings
[root@openstack-node1 ~]# vim /etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini
20 network_vlan_ranges = physnet1
31 physical_interface_mappings = physnet1:eth0
78 enable_security_group = True
8.4 Start Neutron
[root@openstack-node1 ~]# neutron-server --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/plugins/ml2/ml2_conf.ini --config-file=/etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini
2016-10-08 19:20:12.893 12522 INFO neutron.plugins.ml2.managers [-] Loaded mechanism driver names: ['linuxbridge']
2016-10-08 19:20:12.894 12522 INFO neutron.plugins.ml2.managers [-] Registered mechanism drivers: ['linuxbridge']
2016-10-08 19:20:12.915 12522 WARNING neutron.openstack.common.db.sqlalchemy.session [-] This application has not enabled MySQL traditional mode, which means silent data corruption may occur. Please encourage the application developers to enable this mode.
^C2016-10-08 19:20:18.490 12522 DEBUG neutron.openstack.common.lockutils [-] Semaphore / lock released "_create_instance" inner /usr/lib/python2.6/site-packages/neutron/openstack/common/lockutils.py:252
[root@openstack-node1 ~]# neutron-linuxbridge-agent \
--config-file=/etc/neutron/neutron.conf \
--config-file=/etc/neutron/plugins/ml2/ml2_conf.ini \
--config-file=/etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini
[root@openstack-node1 ~]# cd openstack-inc/control/init.d
[root@openstack-node1 init.d]# cp openstack-neutron-* /etc/init.d/
[root@openstack-node1 init.d]# cd /etc/init.d/
[root@openstack-node1 init.d]# chmod +x /etc/init.d/openstack-neutron-*
[root@openstack-node1 init.d]# chkconfig --add openstack-neutron-server
[root@openstack-node1 init.d]# chkconfig --add openstack-neutron-linuxbridge-agent
[root@openstack-node1 init.d]# /etc/init.d/openstack-neutron-server start
Starting openstack-neutron-server: [ OK ]
[root@openstack-node1 init.d]# /etc/init.d/openstack-neutron-linuxbridge-agent start
Starting openstack-neutron-linuxbridge-agent: [ OK ]
8.5 Test the Neutron Installation
[root@openstack-node1 init.d]# cd
[root@openstack-node1 ~]# neutron agent-list
+--------------------------------------+--------------------+-----------------------------+-------+----------------+
| id | agent_type | host | alive | admin_state_up |
+--------------------------------------+--------------------+-----------------------------+-------+----------------+
| 8f55528a-6a46-4a25-91c2-a617f2f2950a | Linux bridge agent | openstack-node1.example.com | :-) | True |
+--------------------------------------+--------------------+-----------------------------+-------+----------------+
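With the Linux bridge agent alive, the flat provider network that the ML2/linuxbridge settings above (flat_networks = physnet1, physnet1:eth0) were written for can now be created. A sketch of that next step; the network and subnet names are placeholders, and the CIDR simply reuses the lab's 192.168.56.0/24 management network:
source keystone-admin
neutron net-create flat_net --shared --provider:network_type flat --provider:physical_network physnet1
neutron subnet-create flat_net 192.168.56.0/24 --name flat_subnet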
Summary