1. OpenStack Overview
1.1 Cloud Computing Architecture
IaaS, Infrastructure as a Service: provides enterprises with basic infrastructure such as compute, storage, and networking. Open-source solutions include OpenStack, CloudStack, Eucalyptus, and OpenNebula; commercial offerings include VMware vSphere.
PaaS, Platform as a Service: provides a development and runtime environment for enterprise applications, freeing teams from maintaining complex development environments. Common open-source solutions include Docker, Cloud Foundry, and OpenShift; commercial offerings include Google App Engine and Microsoft Azure.
SaaS, Software as a Service: delivers the software itself as a service; clouds of this type are still comparatively rare. Typical examples include Citrix XenApp, Google Docs, and Microsoft Office Online.
1.2 Environment
10.1.2.130 controller  Hardware: Intel(R) Xeon(R) CPU + 8 GB RAM + 300 GB disk + 2 x 1000M NICs
10.1.2.131 neutron     Hardware: Intel(R) Xeon(R) CPU + 8 GB RAM + 300 GB disk + 2 x 1000M NICs
10.1.2.137 compute     Hardware: Intel(R) Xeon(R) CPU + 8 GB RAM + 300 GB disk + 2 x 1000M NICs
10.1.2.156 compute     Hardware: Intel(R) Xeon(R) CPU + 8 GB RAM + 300 GB disk + 2 x 1000M NICs
1.3 OpenStack Overview
OpenStack is an open-source cloud platform built on the IaaS model. Its main goal is to provide enterprises with public cloud, private cloud, and hybrid cloud services. It consists of multiple service components working together: nova provides virtualization, neutron provides network connectivity, cinder provides attachable block storage, swift provides object storage, and keystone provides centralized authentication.
1.4 OpenStack Projects
Service            Project     Description
Compute            nova        Interacts with the underlying hypervisor and manages the instance life cycle, including creation and destruction
Networking         neutron     Provides network connectivity for instances; supports external plugins from multiple vendors and multiple network modes
Block Storage      cinder      Provides persistent attachable storage for instances; supports various storage backends such as Ceph
Object Storage     swift       Stores unstructured data; swift is highly fault tolerant, with data replication and a scale-out architecture
Identity Service   keystone    Provides authentication and authorization for OpenStack services; endpoints are used to locate the various services on the network
Image Service      glance      Provides image registry and retrieval; an instance downloads its image at boot time
Dashboard          horizon     Provides a web UI for users and administrators, covering most of the command-line functionality
Telemetry          ceilometer  Monitoring and metering service; collects and displays metrics on CPU, memory, network, and other resources
Database Service   trove       Provides virtual machines with structured and unstructured database engines such as MySQL and Redis
Orchestration      heat        Automatically scales instances up and down based on predefined HOT templates
1.5 How the Components Interact
Notes:
1. horizon manages all the projects graphically, including nova instance creation, neutron networks, cinder volumes, and glance images;
2. keystone provides authentication and authorization for all services; through keystone each service's endpoint can be located, such as the addresses of nova, neutron, glance, and cinder;
3. to create a virtual machine, a request is submitted to nova-api; nova selects a suitable compute node through nova-scheduler, then interacts with the underlying hypervisor to perform the initial work of building the VM;
4. creating the VM requires downloading a suitable image, so a request goes to glance-api; glance locates the image through glance-registry and downloads it to the compute node for boot;
5. glance images can be stored in different places, such as a local filesystem, unified storage like Ceph, or swift;
6. a running instance needs networking, so a request goes to neutron-server, which allocates an address for the VM, sets up bridges, and builds iptables security-group rules according to the network request; at this point an ordinary instance is essentially complete;
7. if an instance needs attached storage, it can send a request to cinder-api; after cinder-scheduler selects a suitable cinder-volume node, cinder-volume requests space from the backend storage and hands it to the instance;
8. similar to glance images, cinder backups and snapshot files can be stored on the distributed object store swift.
Note:
The above is only a broad sketch of how the OpenStack components interact. The actual flows are far more complex, involving the message queue (MQ), the persistent database (DB), and the APIs of each component. A detailed walkthrough of instance creation will follow later; the source code is also a good reference for the full flow.
1.6 Software Deployed on Each Node
Notes:
The controller needs the supporting services rabbitmq-server and MariaDB; the essential services are openstack-keystone, openstack-glance, openstack-nova, neutron-server, and openstack-dashboard; optional services are openstack-cinder, openstack-swift, openstack-trove, openstack-heat, and openstack-ceilometer. Its management-network address is 10.1.2.130.
The neutron node needs the various agents: neutron-dhcp-agent, neutron-l3-agent, neutron-openvswitch-agent, neutron-metadata-agent, and openvswitch. Its management-network address is 10.1.2.131.
The compute nodes need the compute and network services: nova-compute and libvirt, plus openstack-neutron-agent and openvswitch for networking; the Ceilometer agent is optional. Their management-network addresses are 10.1.2.137 and 10.1.2.156.
2. OpenStack Environment Preparation
2.1 Network Configuration
Following the environment above, configure the management IP addresses and hostnames. Here the management address is configured on each machine's second NIC; also make sure the machines can reach the Internet. The configuration details are as follows:
1. IP address configuration
[root@controller ~]# ifconfig enp2s0
enp2s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.1.2.130  netmask 255.255.240.0  broadcast 10.1.15.255
        inet6 fe80::2e0:81ff:fedf:b3c2  prefixlen 64  scopeid 0x20<link>
        ether 00:e0:81:df:b3:c2  txqueuelen 1000  (Ethernet)
        RX packets 9193271  bytes 1715390159 (1.5 GiB)
        RX errors 40  dropped 15  overruns 0  frame 40
        TX packets 1447411  bytes 113804698 (108.5 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
        device interrupt 17  memory 0xfbae0000-fbb00000
[root@controller ~]# vim /etc/sysconfig/network-scripts/ifcfg-enp2s0
TYPE=Ethernet
BOOTPROTO=static
NAME=enp2s0
IPADDR=10.1.2.130
NETMASK=255.255.255.0
GATEWAY=10.1.2.1
2. Hostname configuration
[root@controller ~]# hostnamectl set-hostname controller
[root@controller ~]# cat /etc/hostname
controller
3. hosts file entries
[root@controller ~]# vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.1.2.130  controller
10.1.2.131  neutron
10.1.2.137  compute1
10.1.2.156  compute2
4. Hostname resolution test
[root@controller ~]# ping -c 1 controller
PING controller (10.1.2.130) 56(84) bytes of data.
64 bytes from controller (10.1.2.130): icmp_seq=1 ttl=64 time=0.019 ms
--- controller ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms
[root@controller ~]# ping -c 1 neutron
PING neutron (10.1.2.131) 56(84) bytes of data.
64 bytes from neutron (10.1.2.131): icmp_seq=1 ttl=64 time=0.254 ms
--- neutron ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms
[root@controller ~]# ping -c 1 compute1
PING compute1 (10.1.2.137) 56(84) bytes of data.
64 bytes from compute1 (10.1.2.137): icmp_seq=1 ttl=64 time=0.184 ms
[root@controller ~]# ping -c 1 compute2
PING compute2 (10.1.2.156) 56(84) bytes of data.
64 bytes from compute2 (10.1.2.156): icmp_seq=1 ttl=64 time=0.249 ms
--- compute2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms
2.2 NTP Time Synchronization
In an OpenStack environment, some services are sensitive to time: when nova, neutron, or cinder exchange messages with RabbitMQ, incorrect clocks can make a service unusable (I have seen this in production). To keep clocks accurate, use NTP; in general, point it at an NTP server on the local network. It is recommended that the compute and neutron nodes point their NTP at the controller, so the clocks within the cluster stay in sync. On CentOS 7 the NTP functionality is provided by the chronyd service; the steps are as follows:
[root@controller ~]# yum install chrony -y
[root@controller ~]# systemctl enable chronyd
ln -s '/usr/lib/systemd/system/chronyd.service' '/etc/systemd/system/multi-user.target.wants/chronyd.service'
[root@controller ~]# systemctl restart chronyd
The neutron and compute nodes need to point their NTP server at the controller:
[root@neutron ~]# yum install chrony -y
[root@neutron ~]# vim /etc/chrony.conf
server controller
[root@neutron ~]# systemctl enable chronyd
ln -s '/usr/lib/systemd/system/chronyd.service' '/etc/systemd/system/multi-user.target.wants/chronyd.service'
[root@neutron ~]# systemctl restart chronyd
[root@neutron ~]# chronyc sources -v
210 Number of sources = 1
  .-- Source mode  '^' = server, '=' = peer, '#' = local clock.
 / .- Source state '*' = current synced, '+' = combined, '-' = not combined,
| /   '?' = unreachable, 'x' = time may be in error, '~' = time too variable.
||                                                 .- xxxx [ yyyy ] +/- zzzz
||      Log2(Polling interval) -.                  |  xxxx = adjusted offset,
||                               \                 |  yyyy = measured offset,
||                                \                |  zzzz = estimated error.
||                                 |               |
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^? controller                    0   6     0    10y     +0ns[   +0ns] +/-    0ns
Note: '^?' means the source is still unreachable; once synchronization succeeds, the state column shows '^*'.
2.3 Setting Up the Software Repositories
Installing OpenStack requires many additional packages (especially Python packages), which are available from the EPEL repository; the OpenStack RPMs themselves come from Red Hat's RDO repository, where different releases such as Icehouse, Juno, and Kilo live under different paths. Only two release packages need to be installed, as follows (on all nodes):
[root@controller ~]# rpm -ivh http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-2.noarch.rpm
[root@controller ~]# rpm -ivh http://rdo.fedorapeople.org/openstack-juno/rdo-release-juno.rpm
Update the system and install the openstack-selinux and openstack-utils packages:
[root@controller ~]# yum update -y
[root@controller ~]# yum -y install openstack-selinux
[root@controller ~]# yum -y install openstack-utils
2.4 Installing the Database
OpenStack persists its data in a database: nova's host information and instance states; neutron's networks, subnets, ports, address ranges, and agents; cinder's volumes and their allocation, and so on. Generally, when building a highly available controller cluster, MySQL is a key consideration. The database normally runs on the controller node; the other nodes only need the MySQL-python package. Installing and configuring MariaDB:
1. Install MariaDB
[root@controller ~]# yum -y install mariadb-server mariadb MySQL-python
2. Configure MariaDB
[root@controller ~]# vim /etc/my.cnf
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
bind-address = 10.1.2.130
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8

[mysqld_safe]
log-error=/var/log/mariadb/mariadb.log
pid-file=/var/run/mariadb/mariadb.pid

# include all files from the config directory
!includedir /etc/my.cnf.d
3. Start the MariaDB database
[root@controller ~]# systemctl enable mariadb
ln -s '/usr/lib/systemd/system/mariadb.service' '/etc/systemd/system/multi-user.target.wants/mariadb.service'
[root@controller ~]# systemctl restart mariadb
[root@controller ~]# mysql
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 2
Server version: 5.5.35-MariaDB MariaDB Server

Copyright (c) 2000, 2013, Oracle, Monty Program Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| test               |
+--------------------+
4 rows in set (0.00 sec)

Initialize MariaDB (set the root password, remove the anonymous accounts, drop the test database, and reload the privilege tables):
[root@controller ~]# mysql_secure_installation
2.5 Installing the Message Queue
OpenStack uses a message queue as the communication bridge between components; nova, neutron, cinder, and the rest all interact through it. Take creating a virtual machine: nova-api first accepts the user's request, then nova-scheduler selects a suitable compute node and (as producer) puts the instance-creation command onto the queue; the selected compute node (as consumer) takes the message off the queue and creates the instance as requested; when done, it puts a message back onto the queue to report to the controller, completing the produce/consume cycle. OpenStack aims at large-scale cloud environments, and the message queue is the essential bridge for this many-component interaction.
OpenStack supports several message queues, including Qpid, RabbitMQ, and ZeroMQ. Qpid was used in early Icehouse deployments; in my experience it has two drawbacks: 1. by default it supports only 4000 connections, which must be raised before use, and 2. its clustering support is poor, requiring external HA software such as Pacemaker, and its data persistence is also weak, so it is not recommended. RabbitMQ, by contrast, has built-in clustering, can keep queues in memory or on disk with good data persistence, and ships with a web management UI for configuration and monitoring. RabbitMQ is the recommended message queue for OpenStack; its installation follows:
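The produce/consume hand-off described above can be mimicked with a local named pipe. This is only a toy sketch of the pattern (a real deployment goes through RabbitMQ via oslo.messaging); the queue path and message text here are made up for illustration:

```shell
# Toy model of the producer/consumer pattern: a named pipe stands in
# for the broker queue; nothing here talks to a real RabbitMQ.
QUEUE=$(mktemp -u)   # path for the named pipe ("queue")
OUT=$(mktemp)        # file where the consumer records its work
mkfifo "$QUEUE"

# Consumer: blocks on the queue, like a compute node waiting for a task.
( read -r task < "$QUEUE"; echo "consumer handled: $task" > "$OUT" ) &

# Producer: nova-scheduler would enqueue the create request for the
# chosen compute node; here we just write a message into the pipe.
echo "create-instance on compute1" > "$QUEUE"

wait                 # let the background consumer finish
cat "$OUT"
rm -f "$QUEUE"
```

The pipe decouples the two sides the same way the broker does: the producer does not need to know which consumer picks up the work, only that the message reaches the queue.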
1. Install rabbitmq-server
[root@controller ~]# yum -y install rabbitmq-server
2. Start the rabbitmq-server service
[root@controller ~]# systemctl enable rabbitmq-server
ln -s '/usr/lib/systemd/system/rabbitmq-server.service' '/etc/systemd/system/multi-user.target.wants/rabbitmq-server.service'
[root@controller ~]# systemctl restart rabbitmq-server
[root@controller ~]# netstat -antupl | grep 5672
tcp   0  0 0.0.0.0:25672  0.0.0.0:*  LISTEN  26572/beam.smp
tcp   0  0 0.0.0.0:15672  0.0.0.0:*  LISTEN  26572/beam.smp
tcp6  0  0 :::5672        :::*       LISTEN  26572/beam.smp
3. Check the rabbitmq status
[root@controller ~]# rabbitmqctl status
Status of node rabbit@controller ... [{pid,26572}, {running_applications, [{rabbitmq_management,"RabbitMQ Management Console","3.3.5"}, {rabbitmq_web_dispatch,"RabbitMQ Web Dispatcher","3.3.5"}, {webmachine,"webmachine","1.10.3-rmq3.3.5-gite9359c7"}, {mochiweb,"MochiMedia Web Server","2.7.0-rmq3.3.5-git680dba8"}, {rabbitmq_management_agent,"RabbitMQ Management Agent","3.3.5"}, {rabbit,"RabbitMQ","3.3.5"}, {mnesia,"MNESIA CXC 138 12","4.11"}, {os_mon,"CPO CXC 138 46","2.2.14"}, {inets,"INETS CXC 138 49","5.9.8"}, {amqp_client,"RabbitMQ AMQP Client","3.3.5"}, {xmerl,"XML parser","1.3.6"}, {sasl,"SASL CXC 138 11","2.3.4"}, {stdlib,"ERTS CXC 138 10","1.19.4"}, {kernel,"ERTS CXC 138 10","2.16.4"}]}, {os,{unix,linux}}, {erlang_version, "Erlang R16B03-1 (erts-5.10.4) [source] [64-bit] [smp:8:8] [async-threads:30] [hipe] [kernel-poll:true]\n"}, {memory, [{total,41366944}, {connection_procs,5600}, {queue_procs,5600}, {plugins,356280}, {other_proc,14224800}, {mnesia,61072}, {mgmt_db,88304}, {msg_index,22520}, {other_ets,1051808}, {binary,18664}, {code,19737895}, {atom,703377}, {other_system,5091024}]},
Commonly used rabbitmqctl commands:
list_users                                    list users; by default a guest/guest administrator account exists
add_user <username> <password>                create a user and set its password
change_password <username> <newpassword>      change a user's password
delete_user <username>                        delete a user
set_user_tags <username> <tag> ...            grant user tags; a newly created user has no permissions
add_vhost <vhost>                             create a virtual host; each user can have its own, default is /
delete_vhost <vhost>                          delete a virtual host
list_vhosts [<vhostinfoitem> ...]             list virtual hosts
set_permissions [-p <vhost>] <user> <conf> <write> <read>   grant configure, write, and read permissions on a vhost
clear_permissions [-p <vhost>] <username>     revoke permissions
list_permissions [-p <vhost>]                 list permissions on a vhost
list_user_permissions <username>              list a user's permissions
list_queues [-p <vhost>] [<queueinfoitem> ...]        list queues
list_exchanges [-p <vhost>] [<exchangeinfoitem> ...]  list exchange information
list_bindings [-p <vhost>] [<bindinginfoitem> ...]    list bindings
list_connections [<connectioninfoitem> ...]   list client connections
list_channels [<channelinfoitem> ...]         list channels
list_consumers [-p <vhost>]                   list consumer information
4. Create users for nova, neutron, cinder, and heat and grant them permissions
[root@controller ~]# rabbitmqctl add_user nova NOVA_MQPASS
Creating user "nova" ...
...done.
[root@controller ~]# rabbitmqctl add_user neutron NEUTRON_MQPASS
Creating user "neutron" ...
...done.
[root@controller ~]# rabbitmqctl add_user cinder CINDER_MQPASS
Creating user "cinder" ...
...done.
[root@controller ~]# rabbitmqctl add_user heat HEAT_MQPASS
Creating user "heat" ...
...done.
[root@controller ~]# rabbitmqctl list_users
Listing users ...
cinder  []
guest   [administrator]
heat    []
neutron []
nova    []
...done.
The new users have no permissions yet, so grant them:
[root@controller ~]# rabbitmqctl set_permissions -p / nova '.*' '.*' '.*'
Setting permissions for user "nova" in vhost "/" ...
...done.
[root@controller ~]# rabbitmqctl set_permissions -p / neutron '.*' '.*' '.*'
Setting permissions for user "neutron" in vhost "/" ...
...done.
[root@controller ~]# rabbitmqctl set_permissions -p / cinder '.*' '.*' '.*'
Setting permissions for user "cinder" in vhost "/" ...
...done.
[root@controller ~]# rabbitmqctl set_permissions -p / heat '.*' '.*' '.*'
Setting permissions for user "heat" in vhost "/" ...
...done.
[root@controller ~]# rabbitmqctl list_permissions
Listing permissions in vhost "/" ...
cinder  .*      .*      .*
guest   .*      .*      .*
heat    .*      .*      .*
neutron .*      .*      .*
nova    .*      .*      .*
...done.
5. Configure the rabbitmq web management plugin. The management plugin is not enabled by default in older versions (newer versions enable it out of the box); enable it if needed, as it makes configuration, management, and monitoring easier:
[root@controller ~]# /usr/lib/rabbitmq/bin/rabbitmq-plugins list
[e] amqp_client                       3.3.5
[ ] cowboy                            0.5.0-rmq3.3.5-git4b93c2d
[ ] eldap                             3.3.5-gite309de4
[e] mochiweb                          2.7.0-rmq3.3.5-git680dba8
[ ] rabbitmq_amqp1_0                  3.3.5
[ ] rabbitmq_auth_backend_ldap        3.3.5
[ ] rabbitmq_auth_mechanism_ssl       3.3.5
[ ] rabbitmq_consistent_hash_exchange 3.3.5
[ ] rabbitmq_federation               3.3.5
[ ] rabbitmq_federation_management    3.3.5
[E] rabbitmq_management               3.3.5      <-- already enabled
[e] rabbitmq_management_agent         3.3.5
[ ] rabbitmq_management_visualiser    3.3.5
[ ] rabbitmq_mqtt                     3.3.5
[ ] rabbitmq_shovel                   3.3.5
[ ] rabbitmq_shovel_management        3.3.5
[ ] rabbitmq_stomp                    3.3.5
[ ] rabbitmq_test                     3.3.5
[ ] rabbitmq_tracing                  3.3.5
[e] rabbitmq_web_dispatch             3.3.5
[ ] rabbitmq_web_stomp                3.3.5
[ ] rabbitmq_web_stomp_examples       3.3.5
[ ] sockjs                            0.3.4-rmq3.3.5-git3132eb9
[e] webmachine                        1.10.3-rmq3.3.5-gite9359c7
If it is not enabled, enable it as follows and then restart the rabbitmq-server service:
[root@controller ~]# /usr/lib/rabbitmq/bin/rabbitmq-plugins enable rabbitmq_management
[root@controller ~]# systemctl restart rabbitmq-server
Open http://controller:15672 in a browser (the management port seen in the netstat output above) and log in as guest/guest. In a real environment, change the guest password: its administrator privileges are too broad.
Through this page, rabbitmq can easily be configured, managed, and monitored; the command line works just as well. The OpenStack base environment is now ready, and the real OpenStack installation journey begins below.
3. Installing keystone
3.1 keystone Overview
Within OpenStack, keystone plays the role of "housekeeper". It has two main jobs: 1. authentication and authorization for all users, where authentication supports two methods, username/password and token-based; 2. providing the access endpoint for every service: each service (keystone itself, nova, neutron, glance, cinder) must register its own endpoint with keystone as a service so that components can find each other; inter-component interaction goes through keystone as the bridge to locate the target service.
The concepts involved in keystone are:
1. user: the consumer of the cloud. Users usually belong to a tenant, i.e. a project. After a user sends a request, it obtains a token and uses that token to access resources in OpenStack; tokens are time-limited and expire after a while;
2. tenant: a tenant, i.e. a project. OpenStack targets enterprises and public clouds, so it is organized around tenants; a tenant is a collection of users, and authorization in OpenStack (such as quota) is applied per tenant;
3. service: each OpenStack project, such as nova or neutron, registers itself with keystone as a service, making calls between components possible;
4. endpoint: the access interface for a service, usually a URL. There are generally three endpoint types: adminurl, internalurl, and publicurl. Endpoints can also be divided into regions to isolate resources per region, e.g. Asia-Pacific and North America; apart from keystone and horizon, which are shared, each region has its own compute (nova), network (neutron), storage (cinder and swift), images (glance), and so on (used in public clouds and cross-datacenter deployments);
5. role: a role grants a tenant permission to operate. Operation permissions are normally defined in /etc/<project>/policy.json; for example, keystone's are defined in /etc/keystone/policy.json, which defines the scope of what a user may do;
6. Credentials: the proof of identity, either a username/password pair or a token;
7. Token: OpenStack supports token-based authentication. After logging in, a user obtains a token from keystone via the API and then uses that token to access OpenStack resources; tokens are time-limited and expire after a period. In addition, during initial setup, a bootstrap token is used for keystone's initial configuration.
The keystone authentication flow is as follows:
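As a concrete illustration of the token concepts above, this sketch builds the JSON body a client would POST to keystone's v2.0 tokens API to trade password credentials for a token. Nothing is sent anywhere: the user and tenant names reuse the demo account created later in this chapter, and the actual curl call is shown only as a comment.

```shell
# Build the v2.0 password-credentials request body (no network involved).
AUTH_BODY=$(printf '{"auth": {"tenantName": "%s", "passwordCredentials": {"username": "%s", "password": "%s"}}}' \
    demo demo DEMO_PASS)
echo "$AUTH_BODY"

# Against a live keystone the exchange would look like:
#   curl -s -X POST http://controller:5000/v2.0/tokens \
#        -H "Content-Type: application/json" -d "$AUTH_BODY"
# The response's access.token.id is then passed as the X-Auth-Token
# header on subsequent API calls until the token expires.
```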
3.2 Installing and Configuring keystone
keystone only needs to be installed on the controller; all that must be prepared is the database and a bootstrap token. The steps are as follows:
1. Create the database and grant privileges
MariaDB [(none)]> create database keystone;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| keystone           |
| mysql              |
| performance_schema |
+--------------------+
4 rows in set (0.00 sec)

MariaDB [(none)]> grant all privileges on keystone.* to 'keystone'@'localhost' identified by 'KEYSTONE_DBPASS';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> grant all privileges on keystone.* to 'keystone'@'%' identified by 'KEYSTONE_DBPASS';
Query OK, 0 rows affected (0.00 sec)

[root@controller ~]# mysql -ukeystone -pKEYSTONE_DBPASS
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 13
Server version: 5.5.35-MariaDB MariaDB Server

Copyright (c) 2000, 2013, Oracle, Monty Program Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
2. Install and configure keystone
a. Install the packages
[root@controller ~]# yum -y install openstack-keystone python-keystoneclient
b. Generate a token:
[root@controller ~]# export ADMIN_TOKEN=$(openssl rand -hex 10)
[root@controller ~]# echo $ADMIN_TOKEN
f456479a1163f8edb68c
c. Configure the token
[root@controller ~]# openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token f456479a1163f8edb68c
[root@controller ~]# vim /etc/keystone/keystone.conf
[DEFAULT]
admin_token = f456479a1163f8edb68c
d. Configure the database connection
[root@controller ~]# openstack-config --set /etc/keystone/keystone.conf database connection mysql://keystone:KEYSTONE_DBPASS@controller/keystone
[root@controller ~]# vim /etc/keystone/keystone.conf
[database]
connection = mysql://keystone:KEYSTONE_DBPASS@controller/keystone
e. Configure the UUID token provider and SQL driver (the provider matters in multi-region scenarios; UUID and PKI are the two kinds)
[root@controller ~]# openstack-config --set /etc/keystone/keystone.conf token provider keystone.token.providers.uuid.Provider
[root@controller ~]# openstack-config --set /etc/keystone/keystone.conf token driver keystone.token.persistence.backends.sql.Token
[root@controller ~]# vim /etc/keystone/keystone.conf
[token]
provider = keystone.token.providers.uuid.Provider
driver = keystone.token.persistence.backends.sql.Token
f. Show verbose log output to ease troubleshooting (optional)
[root@controller ~]# openstack-config --set /etc/keystone/keystone.conf DEFAULT verbose True
3. Generate the certificates needed for PKI authentication
[root@controller ~]# keystone-manage pki_setup --keystone-user keystone --keystone-group keystone
[root@controller ~]# ll -d /etc/keystone/ssl/
drwxr-xr-x 4 root root 32 Oct 23 14:50 /etc/keystone/ssl/
[root@controller ~]# chown -R keystone:keystone /etc/keystone/ssl/
[root@controller ~]# chmod -R o-rwx /etc/keystone/ssl/
[root@controller ~]# chown -R keystone:keystone /var/log/keystone/
4. Sync the keystone database to create keystone's tables
[root@controller ~]# su -s /bin/sh -c "keystone-manage db_sync" keystone
[root@controller ~]# mysql -ukeystone -pKEYSTONE_DBPASS -e "show tables from keystone;"
+-----------------------+
| Tables_in_keystone    |
+-----------------------+
| assignment            |
| credential            |
| domain                |
| endpoint              |
| group                 |
| id_mapping            |
| migrate_version       |
| policy                |
| project               |
| region                |
| revocation_event      |
| role                  |
| service               |
| token                 |
| trust                 |
| trust_role            |
| user                  |
| user_group_membership |
+-----------------------+
5. Start the keystone service and verify its status
[root@controller ~]# systemctl enable openstack-keystone
ln -s '/usr/lib/systemd/system/openstack-keystone.service' '/etc/systemd/system/multi-user.target.wants/openstack-keystone.service'
[root@controller ~]# systemctl restart openstack-keystone
[root@controller ~]# systemctl status openstack-keystone
openstack-keystone.service - OpenStack Identity Service (code-named Keystone)
   Loaded: loaded (/usr/lib/systemd/system/openstack-keystone.service; enabled)
   Active: active (running) since Fri 2015-10-23 15:54:01 CST; 8s ago
 Main PID: 25920 (keystone-all)
   CGroup: /system.slice/openstack-keystone.service
           ├─25920 /usr/bin/python /usr/bin/keystone-all
           ├─25938 /usr/bin/python /usr/bin/keystone-all
           ├─25939 /usr/bin/python /usr/bin/keystone-all
           ├─25940 /usr/bin/python /usr/bin/keystone-all
           ├─25941 /usr/bin/python /usr/bin/keystone-all
           ├─25942 /usr/bin/python /usr/bin/keystone-all
           ├─25943 /usr/bin/python /usr/bin/keystone-all
           ├─25944 /usr/bin/python /usr/bin/keystone-all
           ├─25945 /usr/bin/python /usr/bin/keystone-all
           ├─25946 /usr/bin/python /usr/bin/keystone-all
           ├─25947 /usr/bin/python /usr/bin/keystone-all
           ├─25948 /usr/bin/python /usr/bin/keystone-all
           ├─25949 /usr/bin/python /usr/bin/keystone-all
           ├─25950 /usr/bin/python /usr/bin/keystone-all
           ├─25951 /usr/bin/python /usr/bin/keystone-all
           ├─25952 /usr/bin/python /usr/bin/keystone-all
           └─25953 /usr/bin/python /usr/bin/keystone-all

Oct 23 15:54:00 controller systemd[1]: Starting OpenStack Identity Service (code-named Keystone)...
Oct 23 15:54:01 controller systemd[1]: Started OpenStack Identity Service (code-named Keystone).
Check the keystone log:
[root@controller ~]# tail -f /var/log/keystone/keystone.log
2015-10-23 15:54:01.566 25920 INFO keystone.openstack.common.service [-] Started child 25949
2015-10-23 15:54:01.568 25949 INFO eventlet.wsgi.server [-] (25949) wsgi starting up on http://0.0.0.0:5000/
2015-10-23 15:54:01.570 25920 INFO keystone.openstack.common.service [-] Started child 25950
2015-10-23 15:54:01.572 25950 INFO eventlet.wsgi.server [-] (25950) wsgi starting up on http://0.0.0.0:5000/
2015-10-23 15:54:01.573 25920 INFO keystone.openstack.common.service [-] Started child 25951
2015-10-23 15:54:01.575 25951 INFO eventlet.wsgi.server [-] (25951) wsgi starting up on http://0.0.0.0:5000/
2015-10-23 15:54:01.577 25920 INFO keystone.openstack.common.service [-] Started child 25952
2015-10-23 15:54:01.579 25952 INFO eventlet.wsgi.server [-] (25952) wsgi starting up on http://0.0.0.0:5000/
2015-10-23 15:54:01.581 25920 INFO keystone.openstack.common.service [-] Started child 25953
2015-10-23 15:54:01.583 25953 INFO eventlet.wsgi.server [-] (25953) wsgi starting up on http://0.0.0.0:5000/
6. Periodically purge expired tokens (they are kept in the DB by default; purging prevents performance degradation)
[root@controller ~]# (crontab -l -u keystone 2>&1 | grep -q token_flush) || echo '@hourly /usr/bin/keystone-manage token_flush >/var/log/keystone/keystone-tokenflush.log 2>&1' >> /var/spool/cron/keystone
3.3 Creating Users, Tenants, Roles, and Endpoints in keystone
After keystone is installed and configured, it needs initial configuration: define users, tenants, roles, services, and endpoints. Every service must register itself with keystone as a service, keystone included. Since at first keystone itself is not yet registered, the initialization is done with the bootstrap token (once users are configured, authentication also proceeds via tokens). The process is as follows:
1. Define the token environment variables (or pass them as arguments to the keystone command)
[root@controller ~]# echo $ADMIN_TOKEN
f456479a1163f8edb68c
[root@controller ~]# export OS_SERVICE_TOKEN=${ADMIN_TOKEN}
[root@controller ~]# export OS_SERVICE_ENDPOINT=http://controller:35357/v2.0
[root@controller ~]# echo $OS_SERVICE_TOKEN
f456479a1163f8edb68c
[root@controller ~]# echo $OS_SERVICE_ENDPOINT
http://controller:35357/v2.0
2. Create the users, tenants, and roles
a. Create the admin tenant
[root@controller ~]# keystone tenant-create --name admin --description "Admin Tenant" +-------------+----------------------------------+ | Property | Value | +-------------+----------------------------------+ | description | Admin Tenant | | enabled | True | | id | f536d97187844f3c8d7aa9d90823dfff | | name | admin | +-------------+----------------------------------+ [root@controller ~]# keystone tenant-list +----------------------------------+-------+---------+ | id | name | enabled | +----------------------------------+-------+---------+ | f536d97187844f3c8d7aa9d90823dfff | admin | True | +----------------------------------+-------+---------+ [root@controller ~]# [root@controller ~]# keystone tenant-get f536d97187844f3c8d7aa9d90823dfff +-------------+----------------------------------+ | Property | Value | +-------------+----------------------------------+ | description | Admin Tenant | | enabled | True | | id | f536d97187844f3c8d7aa9d90823dfff | | name | admin | +-------------+----------------------------------+
b. Create the admin user
[root@controller ~]# keystone user-create --name admin --pass ADMIN_PASS --email [email protected] +----------+----------------------------------+ | Property | Value | +----------+----------------------------------+ | email | [email protected] | | enabled | True | | id | f0cad6978b234de795e5636fda5f73d9 | | name | admin | | username | admin | +----------+----------------------------------+ [root@controller ~]# [root@controller ~]# keystone user-list +----------------------------------+-------+---------+-------------------+ | id | name | enabled | email | +----------------------------------+-------+---------+-------------------+ | f0cad6978b234de795e5636fda5f73d9 | admin | True | [email protected] | +----------------------------------+-------+---------+-------------------+ [root@controller ~]# keystone user-get f0cad6978b234de795e5636fda5f73d9 +----------+----------------------------------+ | Property | Value | +----------+----------------------------------+ | email | [email protected] | | enabled | True | | id | f0cad6978b234de795e5636fda5f73d9 | | name | admin | | username | admin | +----------+----------------------------------+
c. Create the admin role
[root@controller ~]# keystone role-create --name admin +----------+----------------------------------+ | Property | Value | +----------+----------------------------------+ | id | d6615b672113463c917cadab0ab1e2b8 | | name | admin | +----------+----------------------------------+ [root@controller ~]# [root@controller ~]# keystone role-list +----------------------------------+-------+ | id | name | +----------------------------------+-------+ | d6615b672113463c917cadab0ab1e2b8 | admin | +----------------------------------+-------+ [root@controller ~]# [root@controller ~]# keystone role-get d6615b672113463c917cadab0ab1e2b8 +----------+----------------------------------+ | Property | Value | +----------+----------------------------------+ | id | d6615b672113463c917cadab0ab1e2b8 | | name | admin | +----------+----------------------------------+
d. Grant the admin user the admin role within the admin tenant
[root@controller ~]# keystone user-role-add --user admin --role admin --tenant admin [root@controller ~]# keystone user-role-list --user admin --tenant admin +----------------------------------+-------+----------------------------------+----------------------------------+ | id | name | user_id | tenant_id | +----------------------------------+-------+----------------------------------+----------------------------------+ | d6615b672113463c917cadab0ab1e2b8 | admin | f0cad6978b234de795e5636fda5f73d9 | f536d97187844f3c8d7aa9d90823dfff | +----------------------------------+-------+----------------------------------+----------------------------------+
e. Create the _member_ role and add the admin user to it (the dashboard assigns the _member_ role by default)
[root@controller ~]# keystone role-create --name _member_ +----------+----------------------------------+ | Property | Value | +----------+----------------------------------+ | id | 895d7b1804fa4eed9724f3eba25923ee | | name | _member_ | +----------+----------------------------------+ [root@controller ~]# keystone role-list +----------------------------------+----------+ | id | name | +----------------------------------+----------+ | 895d7b1804fa4eed9724f3eba25923ee | _member_ | | d6615b672113463c917cadab0ab1e2b8 | admin | +----------------------------------+----------+ [root@controller ~]# keystone user-role-add --user admin --role _member_ --tenant admin [root@controller ~]# keystone user-role-list --user admin --tenant admin +----------------------------------+----------+----------------------------------+----------------------------------+ | id | name | user_id | tenant_id | +----------------------------------+----------+----------------------------------+----------------------------------+ | 895d7b1804fa4eed9724f3eba25923ee | _member_ | f0cad6978b234de795e5636fda5f73d9 | f536d97187844f3c8d7aa9d90823dfff | | d6615b672113463c917cadab0ab1e2b8 | admin | f0cad6978b234de795e5636fda5f73d9 | f536d97187844f3c8d7aa9d90823dfff | +----------------------------------+----------+----------------------------------+----------------------------------+
3. Create the demo account and its roles
a. Create the demo user
[root@controller ~]# keystone user-create --name demo --pass DEMO_PASS --email [email protected] --enable True +----------+----------------------------------+ | Property | Value | +----------+----------------------------------+ | email | [email protected] | | enabled | True | | id | a2a102334d0d489690ca25caef692268 | | name | demo | | username | demo | +----------+----------------------------------+ [root@controller ~]# keystone user-list +----------------------------------+-------+---------+-------------------+ | id | name | enabled | email | +----------------------------------+-------+---------+-------------------+ | f0cad6978b234de795e5636fda5f73d9 | admin | True | [email protected] | | a2a102334d0d489690ca25caef692268 | demo | True | [email protected] | +----------------------------------+-------+---------+-------------------+ [root@controller ~]# keystone user-get a2a102334d0d489690ca25caef692268 +----------+----------------------------------+ | Property | Value | +----------+----------------------------------+ | email | [email protected] | | enabled | True | | id | a2a102334d0d489690ca25caef692268 | | name | demo | | username | demo | +----------+----------------------------------+
b. Create the demo tenant
[root@controller ~]# keystone tenant-create --name demo --description "Demo Tenant" --enable True +-------------+----------------------------------+ | Property | Value | +-------------+----------------------------------+ | description | Demo Tenant | | enabled | True | | id | 917eeff57d604b8aab71b3e226359536 | | name | demo | +-------------+----------------------------------+ [root@controller ~]# keystone tenant-list +----------------------------------+-------+---------+ | id | name | enabled | +----------------------------------+-------+---------+ | f536d97187844f3c8d7aa9d90823dfff | admin | True | | 917eeff57d604b8aab71b3e226359536 | demo | True | +----------------------------------+-------+---------+ [root@controller ~]# [root@controller ~]# keystone tenant-get 917eeff57d604b8aab71b3e226359536 +-------------+----------------------------------+ | Property | Value | +-------------+----------------------------------+ | description | Demo Tenant | | enabled | True | | id | 917eeff57d604b8aab71b3e226359536 | | name | demo | +-------------+----------------------------------+
c. Associate the user, tenant, and role
[root@controller ~]# keystone user-role-add --user demo --tenant demo --role _member_ [root@controller ~]# keystone user-role-list --user demo --tenant demo +----------------------------------+----------+----------------------------------+----------------------------------+ | id | name | user_id | tenant_id | +----------------------------------+----------+----------------------------------+----------------------------------+ | 895d7b1804fa4eed9724f3eba25923ee | _member_ | a2a102334d0d489690ca25caef692268 | 917eeff57d604b8aab71b3e226359536 | +----------------------------------+----------+----------------------------------+----------------------------------+
4. Create the service tenant; other projects that interact with each other join this tenant, and it will be needed later
[root@controller ~]# keystone tenant-create --name service --description "Service Tenant" +-------------+----------------------------------+ | Property | Value | +-------------+----------------------------------+ | description | Service Tenant | | enabled | True | | id | bcbf51353c534829a993707d3d7940ca | | name | service | +-------------+----------------------------------+ [root@controller ~]# keystone tenant-list +----------------------------------+---------+---------+ | id | name | enabled | +----------------------------------+---------+---------+ | f536d97187844f3c8d7aa9d90823dfff | admin | True | | 917eeff57d604b8aab71b3e226359536 | demo | True | | bcbf51353c534829a993707d3d7940ca | service | True | +----------------------------------+---------+---------+ [root@controller ~]# keystone tenant-get bcbf51353c534829a993707d3d7940ca +-------------+----------------------------------+ | Property | Value | +-------------+----------------------------------+ | description | Service Tenant | | enabled | True | | id | bcbf51353c534829a993707d3d7940ca | | name | service | +-------------+----------------------------------+
4. 创建keystone自身的服务和端点endpoint
前面提到过,openstack中任何服务(包括keystone自己)都需要以service的形式将其访问的url注册到keystone中,这样便于组件之间通讯,如nova需要和neutron通讯时,向keystone查询neutron的url即可。endpoint通常分为三种:adminurl,internalurl和publicurl。此外,keystone还可以对不同服务划分region,通过region将不同区域的服务进行隔离,具体细节这里不展开,有兴趣可以深入研究。如下是服务service和端点endpoint的配置过程。
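这一"先创建service,再为其挂接三类url"的注册模式,后续glance,nova等组件都会重复使用,可以抽象成如下草稿(这里只打印将要执行的命令,SERVICE_NAME等变量均为示例假设):

```shell
# 通用的 service + endpoint 注册模式草稿(仅打印命令,变量为示例假设)
SERVICE_NAME=keystone
SERVICE_TYPE=identity
PUBLICURL=http://controller:5000/v2.0
INTERNALURL=http://controller:5000/v2.0
ADMINURL=http://controller:35357/v2.0

# 实际执行时去掉 echo,并用 keystone service-list 取回真实的 service id
echo "keystone service-create --name $SERVICE_NAME --type $SERVICE_TYPE"
echo "keystone endpoint-create --service-id <SERVICE_ID> --publicurl $PUBLICURL --internalurl $INTERNALURL --adminurl $ADMINURL --region regionOne"
```

后面glance注册9292端口,nova注册8774端口时,套用的正是同一个模式,只是service类型和url不同。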
a、创建keystone的service类型,其他服务通过该catalog即可访问到keystone服务
[root@controller ~]# keystone service-create --name keystone --type identity --description "OpenStack Identity" +-------------+----------------------------------+ | Property | Value | +-------------+----------------------------------+ | description | OpenStack Identity | | enabled | True | | id | 2a464e498c1c417b89bc1351e6f62fe9 | | name | keystone | | type | identity | +-------------+----------------------------------+ [root@controller ~]# keystone service-list +----------------------------------+----------+----------+--------------------+ | id | name | type | description | +----------------------------------+----------+----------+--------------------+ | 2a464e498c1c417b89bc1351e6f62fe9 | keystone | identity | OpenStack Identity | +----------------------------------+----------+----------+--------------------+ [root@controller ~]# keystone service-get 2a464e498c1c417b89bc1351e6f62fe9 +-------------+----------------------------------+ | Property | Value | +-------------+----------------------------------+ | description | OpenStack Identity | | enabled | True | | id | 2a464e498c1c417b89bc1351e6f62fe9 | | name | keystone | | type | identity | +-------------+----------------------------------+
b、将keystone服务的访问端点endpoint注册到service中,其他组件即可通过catalog访问keystone
[root@controller ~]# keystone endpoint-create \ > --service-id $(keystone service-list | awk '/ identity / {print $2}') \ > --publicurl http://controller:5000/v2.0 \ > --internalurl http://controller:5000/v2.0 \ > --adminurl http://controller:35357/v2.0 \ > --region regionOne +-------------+----------------------------------+ | Property | Value | +-------------+----------------------------------+ | adminurl | http://controller:35357/v2.0 | | id | 4d6531b81f904358af8bc7cb5a64ebd9 | | internalurl | http://controller:5000/v2.0 | | publicurl | http://controller:5000/v2.0 | | region | regionOne | | service_id | 2a464e498c1c417b89bc1351e6f62fe9 | +-------------+----------------------------------+ [root@controller ~]# keystone endpoint-list +----------------------------------+-----------+-----------------------------+-----------------------------+------------------------------+----------------------------------+ | id | region | publicurl | internalurl | adminurl | service_id | +----------------------------------+-----------+-----------------------------+-----------------------------+------------------------------+----------------------------------+ | 4d6531b81f904358af8bc7cb5a64ebd9 | regionOne | http://controller:5000/v2.0 | http://controller:5000/v2.0 | http://controller:35357/v2.0 | 2a464e498c1c417b89bc1351e6f62fe9 | +----------------------------------+-----------+-----------------------------+-----------------------------+------------------------------+----------------------------------+
5. 校验keystone的配置
a、取消keystone的环境变量
[root@controller ~]# unset OS_SERVICE_TOKEN [root@controller ~]# unset OS_SERVICE_ENDPOINT
b、校验admin用户获取token是否正常
[root@controller ~]# keystone --os-username admin --os-tenant-name admin --os-password ADMIN_PASS --os-auth-url http://controller:35357/v2.0 token-get +-----------+----------------------------------+ | Property | Value | +-----------+----------------------------------+ | expires | 2015-10-23T12:25:07Z | | id | 6ffeedc065b84c49a3ab04b78c6b766a | | tenant_id | f536d97187844f3c8d7aa9d90823dfff | | user_id | f0cad6978b234de795e5636fda5f73d9 | +-----------+----------------------------------+
c、校验admin是否有管理权限,如用户,租户,角色,服务,端点等
[root@controller ~]# keystone --os-username admin --os-tenant-name admin --os-password ADMIN_PASS --os-auth-url http://controller:35357/v2.0 user-list +----------------------------------+-------+---------+-------------------+ | id | name | enabled | email | +----------------------------------+-------+---------+-------------------+ | f0cad6978b234de795e5636fda5f73d9 | admin | True | [email protected] | | a2a102334d0d489690ca25caef692268 | demo | True | [email protected] | +----------------------------------+-------+---------+-------------------+ [root@controller ~]# keystone --os-username admin --os-tenant-name admin --os-password ADMIN_PASS --os-auth-url http://controller:35357/v2.0 service-list +----------------------------------+----------+----------+--------------------+ | id | name | type | description | +----------------------------------+----------+----------+--------------------+ | 2a464e498c1c417b89bc1351e6f62fe9 | keystone | identity | OpenStack Identity | +----------------------------------+----------+----------+--------------------+ #有输出内容表示执行成功
d、校验demo账号获取token情况
[root@controller ~]# keystone --os-username demo --os-tenant-name demo --os-password DEMO_PASS --os-auth-url http://controller:35357/v2.0 token-get +-----------+----------------------------------+ | Property | Value | +-----------+----------------------------------+ | expires | 2015-10-23T12:29:51Z | | id | 35e168fc91db45c8b6829610fccb10f4 | | tenant_id | 917eeff57d604b8aab71b3e226359536 | | user_id | a2a102334d0d489690ca25caef692268 | +-----------+----------------------------------+ #有输出,表示正常
e、校验demo账号是否有权限
[root@controller ~]# keystone --os-username demo --os-tenant-name demo --os-password DEMO_PASS --os-auth-url http://controller:35357/v2.0 user-list [root@controller ~]# keystone --os-username demo --os-tenant-name demo --os-password DEMO_PASS --os-auth-url http://controller:35357/v2.0 endpoint-list #没有输出,因为没有权限 [root@controller ~]# tail -f /var/log/keystone/keystone.log 2015-10-23 19:17:07.651 25945 INFO eventlet.wsgi.server [-] 10.1.2.130 - - [23/Oct/2015 19:17:07] "GET /v2.0/endpoints HTTP/1.1" 200 436 0.005924 2015-10-23 19:25:08.042 25938 INFO eventlet.wsgi.server [-] 10.1.2.130 - - [23/Oct/2015 19:25:08] "POST /v2.0/tokens HTTP/1.1" 200 1030 0.119968 2015-10-23 19:26:53.313 25944 INFO eventlet.wsgi.server [-] 10.1.2.130 - - [23/Oct/2015 19:26:53] "POST /v2.0/tokens HTTP/1.1" 200 1030 0.126793 2015-10-23 19:26:53.351 25943 INFO eventlet.wsgi.server [-] 10.1.2.130 - - [23/Oct/2015 19:26:53] "GET /v2.0/users HTTP/1.1" 200 419 0.032510 2015-10-23 19:27:01.055 25941 INFO eventlet.wsgi.server [-] 10.1.2.130 - - [23/Oct/2015 19:27:01] "POST /v2.0/tokens HTTP/1.1" 200 1030 0.134344 2015-10-23 19:27:01.092 25940 INFO eventlet.wsgi.server [-] 10.1.2.130 - - [23/Oct/2015 19:27:01] "GET /v2.0/OS-KSADM/services HTTP/1.1" 200 314 0.031583 2015-10-23 19:29:51.065 25939 INFO eventlet.wsgi.server [-] 10.1.2.130 - - [23/Oct/2015 19:29:51] "POST /v2.0/tokens HTTP/1.1" 200 971 0.173262 2015-10-23 19:30:44.797 25942 INFO eventlet.wsgi.server [-] 10.1.2.130 - - [23/Oct/2015 19:30:44] "POST /v2.0/tokens HTTP/1.1" 200 971 0.129379 2015-10-23 19:30:44.831 25945 WARNING keystone.common.wsgi [-] You are not authorized to perform the requested action: admin_required #告知,需要admin权限,完毕! 2015-10-23 19:30:44.832 25945 INFO eventlet.wsgi.server [-] 10.1.2.130 - - [23/Oct/2015 19:30:44] "GET /v2.0/users HTTP/1.1" 403 291 0.029468
6. 设置用户环境变量文件
使用keystone客户端和keystone服务交互的时候,可以在keystone命令后面加上类似--os-username的参数,但每次执行命令都带上这些参数非常不便捷。为此,keystone支持将所需的认证信息以环境变量的方式加载,之后直接输入子命令即可,不需要额外的参数。
a、配置admin用户的环境变量
[root@controller ~]# vim /root/admin-openrc.sh export OS_TENANT_NAME=admin export OS_USERNAME=admin export OS_PASSWORD=ADMIN_PASS export OS_AUTH_URL=http://controller:35357/v2.0 [root@controller ~]# source /root/admin-openrc.sh [root@controller ~]# set |grep OS_ OS_AUTH_URL=http://controller:35357/v2.0 OS_PASSWORD=ADMIN_PASS OS_TENANT_NAME=admin OS_USERNAME=admin [root@controller ~]# keystone token-get +-----------+----------------------------------+ | Property | Value | +-----------+----------------------------------+ | expires | 2015-10-23T12:36:51Z | | id | d272718dde95490687890b8d267a312b | | tenant_id | f536d97187844f3c8d7aa9d90823dfff | | user_id | f0cad6978b234de795e5636fda5f73d9 | +-----------+----------------------------------+
b、配置demo账号的环境变量
[root@controller ~]# vim demo-openrc.sh export OS_TENANT_NAME=demo export OS_USERNAME=demo export OS_PASSWORD=DEMO_PASS export OS_AUTH_URL=http://controller:5000/v2.0 [root@controller ~]# source demo-openrc.sh [root@controller ~]# keystone token-get +-----------+----------------------------------+ | Property | Value | +-----------+----------------------------------+ | expires | 2015-10-23T12:38:29Z | | id | 15ae3014a14c4541ab8014222e70bcb3 | | tenant_id | 917eeff57d604b8aab71b3e226359536 | | user_id | a2a102334d0d489690ca25caef692268 | +-----------+----------------------------------+ [root@controller ~]# keystone user-list #没有任何输出,因为没有权限
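切换openrc文件之后,可以先确认所需的OS_*变量都已加载再执行命令,下面是一个最小的校验函数草稿(函数名check_osrc为示例假设):

```shell
# 校验 keystone 客户端所需的 OS_* 环境变量是否齐全,缺失时列出缺少项
check_osrc() {
  missing=""
  for v in OS_TENANT_NAME OS_USERNAME OS_PASSWORD OS_AUTH_URL; do
    eval val=\$$v
    [ -z "$val" ] && missing="$missing $v"
  done
  [ -z "$missing" ] && echo "ok" || echo "missing:$missing"
}

# 模拟 source admin-openrc.sh 之后的检查(变量值为本文示例)
export OS_TENANT_NAME=admin OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS OS_AUTH_URL=http://controller:35357/v2.0
check_osrc   # → ok
```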
说明:至此,keystone的配置已完成。配置过程中,对每个步骤都做了校验,以确保万无一失;如果遇到错误,请检查配置文件,并结合日志信息排错。后续openstack中的所有服务,都需要在keystone中注册一个账号,并以service的形式注册访问端点endpoint。
配置文件:/etc/keystone/keystone.conf
日志文件:/var/log/keystone
常见的表:user,group,tenant,role,service,endpoint,region,policy等
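排错时,日志是第一入口,可以用类似下面的小函数先把最近的告警过滤出来(check_log为示例函数名,日志路径为本文的默认安装位置):

```shell
# 从 keystone 日志中筛出最近的 WARNING/ERROR(路径为默认安装位置)
check_log() {
  grep -E 'WARNING|ERROR' "$1" 2>/dev/null | tail -n 20
}

check_log /var/log/keystone/keystone.log
```

前面demo用户执行user-list时的"admin_required"告警,就可以用这种方式快速定位。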
4. glance安装与配置
4.1 glance概述
Image Service镜像服务主要用于创建instance时提供镜像的发现,注册,检索服务。nova启动instance时,会向glance请求对应的image,然后下载该image到本地。glance的镜像支持多种存储方式,如常见的本地文件系统,分布式存储ceph,glusterfs,或者存储在swift上,默认存放在controller的本地文件系统上,存储的路径是:/var/lib/glance/images。为了确保有足够的空间,建议修改路径,或者划分一个单独的空间给glance使用。
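在更换存储路径之前,可以先查看镜像目录所在分区的可用空间,下面的草稿沿用本文默认的目录路径:

```shell
# 查看 glance 镜像目录所在分区的剩余空间;目录尚未创建时退回检查根分区
DATADIR=/var/lib/glance/images
df -h "$DATADIR" 2>/dev/null || df -h /
```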
glance由两个服务组成:glance-api和glance-registry。其中,glance-api负责接收外部发送的镜像请求;glance-registry负责镜像元数据的存储与检索,并配合后端存储完成镜像的存取。此外,镜像的所有信息,包括镜像的元数据,都会以持久化的方式保存在数据库中。
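glance在接收上传的镜像时会计算md5 checksum,并连同大小等元数据写入数据库(上传镜像后的输出中可以看到checksum和size字段),这一过程可以用本地文件模拟如下(文件路径为示例假设):

```shell
# 模拟 glance 上传镜像时的 checksum/size 计算过程(文件为本地生成的示例)
IMG=/tmp/demo.img
dd if=/dev/zero of="$IMG" bs=1M count=1 2>/dev/null
CHECKSUM=$(md5sum "$IMG" | awk '{print $1}')
SIZE=$(wc -c < "$IMG")
echo "checksum=$CHECKSUM size=$SIZE"
```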
4.2 glance的安装
1. 创建数据库
MariaDB [(none)]> create database glance; Query OK, 1 row affected (0.00 sec) MariaDB [(none)]> grant all privileges on glance.* to 'glance'@'localhost' identified by 'GLANCE_DBPASS'; Query OK, 0 rows affected (0.00 sec) MariaDB [(none)]> grant all privileges on glance.* to 'glance'@'%' identified by 'GLANCE_DBPASS'; Query OK, 0 rows affected (0.00 sec) [root@controller ~]# mysql -uglance -pGLANCE_DBPASS -e 'SHOW DATABASES;' #测试glance用户能否连接 +--------------------+ | Database | +--------------------+ | information_schema | | glance | +--------------------+
2. 创建keystone认证的用户
a、创建用户 [root@controller ~]# keystone user-create --name glance --pass GLANCE_PASS --email [email protected] --enabled true +----------+----------------------------------+ | Property | Value | +----------+----------------------------------+ | email | [email protected] | | enabled | True | | id | 20d2eac83516496f8b273cbce3c7bc5b | | name | glance | | username | glance | +----------+----------------------------------+ [root@controller ~]# keystone user-list +----------------------------------+--------+---------+--------------------+ | id | name | enabled | email | +----------------------------------+--------+---------+--------------------+ | f0cad6978b234de795e5636fda5f73d9 | admin | True | [email protected] | | a2a102334d0d489690ca25caef692268 | demo | True | [email protected] | | 20d2eac83516496f8b273cbce3c7bc5b | glance | True | [email protected] | +----------------------------------+--------+---------+--------------------+ b、授予glance权限 [root@controller ~]# keystone user-role-add --user glance --tenant service --role admin [root@controller ~]# keystone user-role-list --user glance --tenant service +----------------------------------+-------+----------------------------------+----------------------------------+ | id | name | user_id | tenant_id | +----------------------------------+-------+----------------------------------+----------------------------------+ | d6615b672113463c917cadab0ab1e2b8 | admin | 20d2eac83516496f8b273cbce3c7bc5b | bcbf51353c534829a993707d3d7940ca | +----------------------------------+-------+----------------------------------+----------------------------------+ c、创建glance服务 [root@controller ~]# keystone service-create --type image --name glance --description "Openstack Glance Image Service" +-------------+----------------------------------+ | Property | Value | +-------------+----------------------------------+ | description | Openstack Glance Image Service | | enabled | True | | id | 71bef5cfcb964b7c8091f53cda5ded11 | | name | glance | 
| type | image | +-------------+----------------------------------+ [root@controller ~]# keystone service-list +----------------------------------+----------+----------+--------------------------------+ | id | name | type | description | +----------------------------------+----------+----------+--------------------------------+ | 71bef5cfcb964b7c8091f53cda5ded11 | glance | image | Openstack Glance Image Service | | 2a464e498c1c417b89bc1351e6f62fe9 | keystone | identity | OpenStack Identity | +----------------------------------+----------+----------+--------------------------------+ d、将glance服务路径注册到keystone [root@controller ~]# keystone endpoint-create \ > --service-id $(keystone service-list | awk '/ image / {print $2}') \ > --publicurl http://controller:9292 \ > --internalurl http://controller:9292 \ > --adminurl http://controller:9292 \ > --region regionOne +-------------+----------------------------------+ | Property | Value | +-------------+----------------------------------+ | adminurl | http://controller:9292 | | id | dc91f2c0d2c14ca19b61d130b0402550 | | internalurl | http://controller:9292 | | publicurl | http://controller:9292 | | region | regionOne | | service_id | 71bef5cfcb964b7c8091f53cda5ded11 | +-------------+----------------------------------+ [root@controller ~]# keystone endpoint-list +----------------------------------+-----------+-----------------------------+-----------------------------+------------------------------+----------------------------------+ | id | region | publicurl | internalurl | adminurl | service_id | +----------------------------------+-----------+-----------------------------+-----------------------------+------------------------------+----------------------------------+ | 4d6531b81f904358af8bc7cb5a64ebd9 | regionOne | http://controller:5000/v2.0 | http://controller:5000/v2.0 | http://controller:35357/v2.0 | 2a464e498c1c417b89bc1351e6f62fe9 | | dc91f2c0d2c14ca19b61d130b0402550 | regionOne | http://controller:9292 | http://controller:9292 | http://controller:9292 | 71bef5cfcb964b7c8091f53cda5ded11 | +----------------------------------+-----------+-----------------------------+-----------------------------+------------------------------+----------------------------------+
4.3. 安装glance服务
[root@controller ~]# yum -y install openstack-glance python-glanceclient
4. 配置glance-api服务
a、配置数据库连接
[root@controller ~]# openstack-config --set /etc/glance/glance-api.conf database connection mysql://glance:GLANCE_DBPASS@controller/glance [root@controller ~]# vim /etc/glance/glance-api.conf [database] connection = mysql://glance:GLANCE_DBPASS@controller/glance
b、配置keystone
[root@controller ~]# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_uri http://controller:5000/v2.0 [root@controller ~]# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken identity_uri http://controller:35357 [root@controller ~]# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_tenant_name service [root@controller ~]# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_user glance [root@controller ~]# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_password GLANCE_PASS [root@controller ~]# openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone [root@controller ~]# vim /etc/glance/glance-api.conf [keystone_authtoken] auth_uri = http://controller:5000/v2.0 identity_uri = http://controller:35357 admin_tenant_name = service admin_user = glance admin_password = GLANCE_PASS
c、配置image存储位置,使用本地的文件系统存储
[root@controller ~]# openstack-config --set /etc/glance/glance-api.conf glance_store default_store file [root@controller ~]# openstack-config --set /etc/glance/glance-api.conf glance_store filesystem_store_datadir /var/lib/glance/images/ [root@controller ~]# openstack-config --set /etc/glance/glance-api.conf DEFAULT verbose True
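可以用下面的草稿确认配置项确实写入了正确的section(这里用临时文件模拟glance-api.conf的相关片段,ini_get为示例函数;线上也可以直接用openstack-config --get查看):

```shell
# 用一个简单的 awk 函数读取 INI 配置,验证 section/key 是否写入正确
# (这里以临时文件模拟 /etc/glance/glance-api.conf 的相关片段)
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
[glance_store]
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
EOF

# ini_get <file> <section> <key>
ini_get() {
  awk -F' *= *' -v s="[$2]" -v k="$3" '
    $0 == s { insec = 1; next }
    /^\[/   { insec = 0 }
    insec && $1 == k { print $2 }
  ' "$1"
}

ini_get "$CONF" glance_store default_store   # → file
```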
4.4. 配置glance-registry服务
a、配置数据库连接
[root@controller ~]# openstack-config --set /etc/glance/glance-registry.conf database connection mysql://glance:GLANCE_DBPASS@controller/glance
b、配置keystone认证
[root@controller ~]# openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_uri http://controller:5000/v2.0 [root@controller ~]# openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken identity_uri http://controller:35357 [root@controller ~]# openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_tenant_name service [root@controller ~]# openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_user glance [root@controller ~]# openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_password GLANCE_PASS [root@controller ~]# openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor keystone [root@controller ~]# openstack-config --set /etc/glance/glance-registry.conf DEFAULT verbose True
c、建立glance数据库中所需的表
[root@controller ~]# su -s /bin/sh -c "glance-manage db_sync" glance [root@controller ~]# mysql -uglance -pGLANCE_DBPASS -e "show tables from glance;" +----------------------------------+ | Tables_in_glance | +----------------------------------+ | image_locations | | image_members | | image_properties | | image_tags | | images | | metadef_namespace_resource_types | | metadef_namespaces | | metadef_objects | | metadef_properties | | metadef_resource_types | | migrate_version | | task_info | | tasks | +----------------------------------+
4.5. 启动并校验glance服务
1. 启动glance服务
[root@controller ~]# systemctl enable openstack-glance-api ln -s '/usr/lib/systemd/system/openstack-glance-api.service' '/etc/systemd/system/multi-user.target.wants/openstack-glance-api.service' [root@controller ~]# systemctl enable openstack-glance-registry ln -s '/usr/lib/systemd/system/openstack-glance-registry.service' '/etc/systemd/system/multi-user.target.wants/openstack-glance-registry.service' [root@controller ~]# systemctl restart openstack-glance-api [root@controller ~]# systemctl restart openstack-glance-registry #查看日志校验服务是否有异常,日志的路径:glance-api为/var/log/glance/glance-api.log,glance-registry为/var/log/glance/glance-registry.log
2. 校验glance服务
[root@controller ~]# wget http://cdn.download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-disk.img [root@controller ~]# file cirros-0.3.3-x86_64-disk.img cirros-0.3.3-x86_64-disk.img: QEMU QCOW Image (v2), 41126400 bytes [root@controller ~]# glance image-create --name cirros --disk-format qcow2 --container-format bare --file /root/cirros-0.3.3-x86_64-disk.img --is-public True --is-protected False --human-readable --progress [=============================>] 100% +------------------+--------------------------------------+ | Property | Value | +------------------+--------------------------------------+ | checksum | 133eae9fb1c98f45894a4e60d8736619 | | container_format | bare | | created_at | 2015-10-28T10:39:48 | | deleted | False | | deleted_at | None | | disk_format | qcow2 | | id | 9b2eb6ec-f9c1-4020-a784-92795427fa81 | | is_public | True | | min_disk | 0 | | min_ram | 0 | | name | cirros | | owner | f536d97187844f3c8d7aa9d90823dfff | | protected | False | | size | 12.6MB | | status | active | | updated_at | 2015-10-28T10:39:49 | | virtual_size | None | +------------------+--------------------------------------+ [root@controller ~]# glance image-list #校验image是否上传成功 +--------------------------------------+--------+-------------+------------------+----------+--------+ | ID | Name | Disk Format | Container Format | Size | Status | +--------------------------------------+--------+-------------+------------------+----------+--------+ | 9b2eb6ec-f9c1-4020-a784-92795427fa81 | cirros | qcow2 | bare | 13200896 | active | +--------------------------------------+--------+-------------+------------------+----------+--------+ [root@controller ~]# glance image-show 9b2eb6ec-f9c1-4020-a784-92795427fa81 #查看image的详细信息 +------------------+--------------------------------------+ | Property | Value | +------------------+--------------------------------------+ | checksum | 133eae9fb1c98f45894a4e60d8736619 | | container_format | bare | | created_at | 2015-10-28T10:39:48 | | deleted | False | | disk_format | qcow2 | | id | 9b2eb6ec-f9c1-4020-a784-92795427fa81 | | is_public | True | | min_disk | 0 | | min_ram | 0 | | name | cirros | | owner | f536d97187844f3c8d7aa9d90823dfff | | protected | False | | size | 13200896 | | status | active | | updated_at | 2015-10-28T10:39:49 | +------------------+--------------------------------------+
5. nova服务的安装与配置
5.1 nova服务概述
openstack是提供IaaS服务的云计算平台,主要功能是提供虚拟机instance给用户,instance的创建和管理主要由nova完成,nova通过api接口和底层的hypervisor进行交互,比如通过libvirt和底层的KVM交互,通过XenAPI和XenServer交互,通过VMwareAPI和VMware ESXi交互,以api的方式,大大提高了nova的可扩展性。
nova作为openstack的核心组件,需要和其他组件进行交互,如和keystone完成认证,获得token和资源的访问端点endpoint;和glance进行交互,获取镜像资源,完成镜像的下载与启动;和neutron进行交互,完成虚拟机网络的构建,如地址分配,端口创建,网桥创建和安全组规则的建立;与dashboard交互,完成页面对hypervisor和instance的管理。nova主要由以下几个服务共同完成相关的功能:
1. nova-api 负责接收和响应外部的API请求,支持OpenStack Compute API和Amazon EC2 API接口
2. nova-scheduler 负责虚拟机的调度,根据多种调度算法选择合适的hypervisor,常见的调度方式有:基于CPU,基于内存,随机调度等
3. nova-compute 通过API接口和底层的hypervisor交互,完成instance的创建,管理和销毁,并将状态同步至DB
4. nova-conductor 和nova-compute相互配合,代替compute节点完成数据库状态的更新,避免compute节点直接访问数据库
5. nova-consoleauth 完成控制台(如VNC)访问instance时的认证,compute节点通过proxy的方式,交由nova-consoleauth完成认证
6. nova-novncproxy 提供一个基于VNC协议的Web访问接口,不需要安装客户端即可访问instance
7. 消息队列 作为nova各服务之间以及与其他项目交互的中心通信枢纽,支持的MQ有:RabbitMQ,Qpid,ZeroMQ等
8. database nova将instance创建和运行时的状态信息保存在数据库中,便于数据的持久化,通常使用MySQL或MariaDB
5.2 nova的安装与配置
1、创建数据库并授权
MariaDB [(none)]> create database nova; Query OK, 1 row affected (0.00 sec) MariaDB [(none)]> grant all privileges on nova.* to 'nova'@'localhost' identified by 'NOVA_DBPASS'; Query OK, 0 rows affected (0.00 sec) MariaDB [(none)]> grant all privileges on nova.* to 'nova'@'%' identified by 'NOVA_DBPASS'; Query OK, 0 rows affected (0.00 sec) MariaDB [(none)]> quit Bye [root@controller ~]# mysql -u nova -pNOVA_DBPASS -e 'show databases;' +--------------------+ | Database | +--------------------+ | information_schema | | nova | +--------------------+
2. 创建keystone认证用户
[root@controller ~]# source admin-openrc.sh [root@controller ~]# keystone user-create --name nova --pass NOVA_PASS --email [email protected] --enabled true +----------+----------------------------------+ | Property | Value | +----------+----------------------------------+ | email | [email protected] | | enabled | True | | id | 2b6c6343e7b94198b5877953bad09123 | | name | nova | | username | nova | +----------+----------------------------------+ [root@controller ~]# keystone user-list +----------------------------------+--------+---------+--------------------+ | id | name | enabled | email | +----------------------------------+--------+---------+--------------------+ | f0cad6978b234de795e5636fda5f73d9 | admin | True | [email protected] | | a2a102334d0d489690ca25caef692268 | demo | True | [email protected] | | 20d2eac83516496f8b273cbce3c7bc5b | glance | True | [email protected] | | 2b6c6343e7b94198b5877953bad09123 | nova | True | [email protected] | +----------------------------------+--------+---------+--------------------+ #赋予nova用户admin权限 [root@controller ~]# keystone user-role-add --user nova --tenant service --role admin [root@controller ~]# keystone user-role-list --user nova --tenant service +----------------------------------+-------+----------------------------------+----------------------------------+ | id | name | user_id | tenant_id | +----------------------------------+-------+----------------------------------+----------------------------------+ | d6615b672113463c917cadab0ab1e2b8 | admin | 2b6c6343e7b94198b5877953bad09123 | bcbf51353c534829a993707d3d7940ca | +----------------------------------+-------+----------------------------------+----------------------------------+
3. 创建nova服务并注册到keystone中
[root@controller ~]# keystone service-create --name nova --type compute --description "Openstack Nova Compute Service" +-------------+----------------------------------+ | Property | Value | +-------------+----------------------------------+ | description | Openstack Nova Compute Service | | enabled | True | | id | 4fdfa2a177ff4b2b8f5e11053615d610 | | name | nova | | type | compute | +-------------+----------------------------------+ [root@controller ~]# keystone service-list +----------------------------------+----------+----------+--------------------------------+ | id | name | type | description | +----------------------------------+----------+----------+--------------------------------+ | 71bef5cfcb964b7c8091f53cda5ded11 | glance | image | Openstack Glance Image Service | | 2a464e498c1c417b89bc1351e6f62fe9 | keystone | identity | OpenStack Identity | | 4fdfa2a177ff4b2b8f5e11053615d610 | nova | compute | Openstack Nova Compute Service | +----------------------------------+----------+----------+--------------------------------+ #将nova service的端点注册到keystone中 [root@controller ~]# keystone endpoint-create \ > --service-id $(keystone service-list | awk '/ compute / {print $2}') \ > --publicurl http://controller:8774/v2/%\(tenant_id\)s \ > --internalurl http://controller:8774/v2/%\(tenant_id\)s \ > --adminurl http://controller:8774/v2/%\(tenant_id\)s \ > --region regionOne +-------------+-----------------------------------------+ | Property | Value | +-------------+-----------------------------------------+ | adminurl | http://controller:8774/v2/%(tenant_id)s | | id | fe1c4cdd14a74edea2cc457b8b0909cc | | internalurl | http://controller:8774/v2/%(tenant_id)s | | publicurl | http://controller:8774/v2/%(tenant_id)s | | region | regionOne | | service_id | 4fdfa2a177ff4b2b8f5e11053615d610 | +-------------+-----------------------------------------+ [root@controller ~]# keystone endpoint-list 
+----------------------------------+-----------+-----------------------------------------+-----------------------------------------+-----------------------------------------+----------------------------------+ | id | region | publicurl | internalurl | adminurl | service_id | +----------------------------------+-----------+-----------------------------------------+-----------------------------------------+-----------------------------------------+----------------------------------+ | 4d6531b81f904358af8bc7cb5a64ebd9 | regionOne | http://controller:5000/v2.0 | http://controller:5000/v2.0 | http://controller:35357/v2.0 | 2a464e498c1c417b89bc1351e6f62fe9 | | dc91f2c0d2c14ca19b61d130b0402550 | regionOne | http://controller:9292 | http://controller:9292 | http://controller:9292 | 71bef5cfcb964b7c8091f53cda5ded11 | | fe1c4cdd14a74edea2cc457b8b0909cc | regionOne | http://controller:8774/v2/%(tenant_id)s | http://controller:8774/v2/%(tenant_id)s | http://controller:8774/v2/%(tenant_id)s | 4fdfa2a177ff4b2b8f5e11053615d610 | +----------------------------------+-----------+-----------------------------------------+-----------------------------------------+-----------------------------------------+----------------------------------+
5.3 安装和配置nova
1. 安装nova
[root@controller ~]# yum -y install openstack-nova-api openstack-nova-cert openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler python-novaclient
2. 配置数据库连接
[root@controller ~]# openstack-config --set /etc/nova/nova.conf database connection mysql://nova:NOVA_DBPASS@controller/nova
3. 配置rabbit
[root@controller ~]# openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend rabbit [root@controller ~]# openstack-config --set /etc/nova/nova.conf DEFAULT rabbit_host controller [root@controller ~]# openstack-config --set /etc/nova/nova.conf DEFAULT rabbit_port 5672 [root@controller ~]# openstack-config --set /etc/nova/nova.conf DEFAULT rabbit_userid nova [root@controller ~]# openstack-config --set /etc/nova/nova.conf DEFAULT rabbit_password NOVA_MQPASS [root@controller ~]# openstack-config --set /etc/nova/nova.conf DEFAULT rabbit_virtual_host /
4. 配置keystone认证信息
[root@controller ~]# openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone [root@controller ~]# openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller:5000/v2.0 [root@controller ~]# openstack-config --set /etc/nova/nova.conf keystone_authtoken identity_uri http://controller:35357 [root@controller ~]# openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_tenant_name service [root@controller ~]# openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_user nova [root@controller ~]# openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_password NOVA_PASS
5. 配置VNC信息
[root@controller ~]# openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 10.1.2.130 [root@controller ~]# openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen 10.1.2.130 [root@controller ~]# openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address 10.1.2.130
6. 配置glance连接
[root@controller ~]# openstack-config --set /etc/nova/nova.conf glance host controller
7. 创建nova所需的表
[root@controller ~]# su -s /bin/sh -c "nova-manage db sync" nova [root@controller ~]# mysql -unova -pNOVA_DBPASS -e 'show tables from nova;' +--------------------------------------------+ | Tables_in_nova | +--------------------------------------------+ | agent_builds | | aggregate_hosts | | aggregate_metadata | | aggregates | | block_device_mapping | ......(表较多,仅列出部分)
8. 启动nova相关的服务
[root@controller system]# systemctl enable openstack-nova-api.service [root@controller system]# systemctl enable openstack-nova-scheduler.service [root@controller system]# systemctl enable openstack-nova-cert.service [root@controller system]# systemctl enable openstack-nova-consoleauth.service [root@controller system]# systemctl enable openstack-nova-conductor.service [root@controller system]# systemctl enable openstack-nova-novncproxy.service [root@controller system]# systemctl restart openstack-nova-api.service [root@controller system]# systemctl restart openstack-nova-scheduler.service [root@controller system]# systemctl restart openstack-nova-cert.service [root@controller system]# systemctl restart openstack-nova-consoleauth.service [root@controller system]# systemctl restart openstack-nova-conductor.service [root@controller system]# systemctl restart openstack-nova-novncproxy.service 校验controller上的nova服务是否正常: [root@controller ~]# nova service-list +----+------------------+------------+----------+---------+-------+----------------------------+-----------------+ | Id | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason | +----+------------------+------------+----------+---------+-------+----------------------------+-----------------+ | 1 | nova-scheduler | controller | internal | enabled | up | 2015-11-13T06:22:57.000000 | - | | 2 | nova-cert | controller | internal | enabled | up | 2015-11-13T06:22:50.000000 | - | | 3 | nova-consoleauth | controller | internal | enabled | up | 2015-11-13T06:22:51.000000 | - | | 4 | nova-conductor | controller | internal | enabled | up | 2015-11-13T06:22:52.000000 | - | +----+------------------+------------+----------+---------+-------+----------------------------+-----------------+ #看到up,表示nova已经启动完毕!
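上面逐条enable/restart的写法也可以用循环简化,服务名列表按本文controller节点的部署假设,这里仅打印命令,实际执行时去掉echo即可:

```shell
# 批量 enable + restart controller 上的 nova 相关服务(仅打印命令)
NOVA_SERVICES="api scheduler cert consoleauth conductor novncproxy"
for s in $NOVA_SERVICES; do
  echo "systemctl enable openstack-nova-$s.service"
  echo "systemctl restart openstack-nova-$s.service"
done
```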
5.4 nova-compute的配置
注:以上都是在controller上执行的配置。本节的操作在compute节点(10.1.2.137和10.1.2.156)上执行,compute节点负责和底层的hypervisor交互,完成instance的整个生命周期管理,配置过程如下:
1. 安装软件包
[root@compute1 ~]# yum -y install openstack-nova-compute sysfsutils openstack-utils
2. 配置rabbitmq
[root@compute1 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend rabbit [root@compute1 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT rabbit_host controller [root@compute1 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT rabbit_port 5672 [root@compute1 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT rabbit_userid nova [root@compute1 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT rabbit_password NOVA_MQPASS [root@compute1 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT rabbit_virtual_host /
3. Configure the keystone connection
[root@compute1 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
[root@compute1 ~]# openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller:5000/v2.0
[root@compute1 ~]# openstack-config --set /etc/nova/nova.conf keystone_authtoken identity_uri http://controller:35357
[root@compute1 ~]# openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_tenant_name service
[root@compute1 ~]# openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_user nova
[root@compute1 ~]# openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_password NOVA_PASS
4. Configure the VNC proxy
[root@compute1 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 10.1.2.137
[root@compute1 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT vnc_enabled True
[root@compute1 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen 0.0.0.0
[root@compute1 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address 10.1.2.137
[root@compute1 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT novncproxy_base_url http://controller:6080/vnc_auto.html
5. Configure the host that serves glance images
[root@compute1 ~]# openstack-config --set /etc/nova/nova.conf glance host controller
6. Check whether the compute node supports hardware-assisted KVM
[root@compute1 ~]# cat /proc/cpuinfo |grep flags |grep -E "(vmx|svm)" | uniq
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx lahf_lm ida arat epb xsaveopt pln pts dtherm tpr_shadow vnmi flexpriority ept vpid
The vmx flag is present above, so the node supports hardware virtualization. If this output is empty, set the libvirt virtualization type to qemu instead:
[root@compute1 ~]# openstack-config --set /etc/nova/nova.conf libvirt virt_type qemu
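The flag check can be wrapped in a small function so the right `virt_type` is picked automatically. A sketch, assuming only the /proc/cpuinfo format (`virt_type_for` is our own name):

```shell
#!/bin/sh
# virt_type_for: print "kvm" when the cpuinfo-style file passed as $1
# carries the vmx (Intel) or svm (AMD) flag, otherwise print "qemu"
# so nova falls back to pure emulation.
virt_type_for() {
  if grep -qE '(vmx|svm)' "$1"; then
    echo kvm
  else
    echo qemu
  fi
}

# To apply the result on a real compute node (uncomment to run):
# openstack-config --set /etc/nova/nova.conf libvirt virt_type "$(virt_type_for /proc/cpuinfo)"
```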
7. Start the nova-compute service
[root@compute1 system]# systemctl enable libvirtd.service
[root@compute1 system]# systemctl enable openstack-nova-compute.service
[root@compute1 system]# systemctl restart libvirtd.service
[root@compute1 system]# systemctl restart openstack-nova-compute.service
Check the status of the compute services:
[root@compute1 ~]# openstack-status
== Nova services ==
openstack-nova-api:           inactive (disabled on boot)
openstack-nova-compute:       active         #done
openstack-nova-network:       inactive (disabled on boot)
openstack-nova-scheduler:     inactive (disabled on boot)
== Support services ==
openvswitch:                  active
dbus:                         active
Install and start nova-compute on the other compute node in the same way. Once both are running, confirm on the controller that the nova-compute services have registered:
[root@controller ~]# nova service-list
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host       | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-scheduler   | controller | internal | enabled | up    | 2015-11-13T09:21:37.000000 | -               |
| 2  | nova-cert        | controller | internal | enabled | up    | 2015-11-13T09:21:30.000000 | -               |
| 3  | nova-consoleauth | controller | internal | enabled | up    | 2015-11-13T09:21:31.000000 | -               |
| 4  | nova-conductor   | controller | internal | enabled | up    | 2015-11-13T09:21:33.000000 | -               |
| 5  | nova-compute     | compute1   | nova     | enabled | up    | 2015-11-13T09:21:31.000000 | -               | #nova-compute on 10.1.2.137 is up
| 6  | nova-compute     | compute2   | nova     | enabled | up    | 2015-11-13T09:21:36.000000 | -               | #nova-compute on 10.1.2.156 is up
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+
Verify on the controller that nova can talk to glance:
[root@controller ~]# nova image-list
+--------------------------------------+--------+--------+--------+
| ID                                   | Name   | Status | Server |
+--------------------------------------+--------+--------+--------+
| 9b2eb6ec-f9c1-4020-a784-92795427fa81 | cirros | ACTIVE |        |
+--------------------------------------+--------+--------+--------+
#nova and glance interact correctly, done!
Note: nova is now fully installed. Keep in mind which services run where: the controller must run openstack-nova-api, openstack-nova-scheduler, openstack-nova-cert, openstack-nova-consoleauth, openstack-nova-conductor and openstack-nova-novncproxy, while the compute nodes run openstack-nova-compute and libvirtd. If a service fails to start, check its log file and troubleshoot from the error messages there; for example, openstack-nova-api logs to /var/log/nova/nova-api.log.
6. Installing and configuring neutron
6.1 Overview of the neutron service
neutron provides networking for openstack, that is, Network as a Service (NaaS). Networking was originally handled by nova-network; as the feature set grew, it was split into a separate project called Quantum, later renamed neutron for trademark reasons. neutron supports different plugins, such as the Cisco plugin, the Brocade plugin and Open vSwitch. The plugins together form one large virtual network (similar to a vSphere distributed switch), on top of which tenants can define their own networks: virtual switches, virtual routers, virtual networks, virtual firewalls (FWaaS), VPN services (VPNaaS) and so on. This is what is meant by software-defined networking (SDN).
Each tenant gets its own isolated internal network, whose addresses are called fixed IPs. Several technologies can isolate tenant networks from one another, commonly VLAN, GRE and VXLAN. VLAN separates tenant traffic by VLAN id and generally supports only 4096 VLANs; GRE builds tunnels and carries each tenant's traffic inside its own tunnel, invisible to the others; VXLAN builds its network header around a VNI, with tunnels between compute nodes and one tunnel per tenant, keeping traffic isolated.
neutron consists of the following components:
1. neutron-server accepts user requests and dispatches them to the back end for processing
2. neutron-l3-agent handles traffic between instances and external networks, i.e. DNAT and SNAT
3. neutron-dhcp-agent handles dynamic address assignment on the cloud networks
4. neutron-metadata-agent provides the network metadata service
5. neutron-openvswitch-agent builds the layer-2 network
6. openvswitch, the layer-2 OVS plugin itself
@@@ Note: the neutron configuration is fairly involved: it must be done on the controller, on the network node and on every compute node @@@
6.2 Installing and configuring neutron (controller)
1. Create the database and grant privileges
MariaDB [(none)]> CREATE DATABASE neutron;
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'NEUTRON_DBPASS';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'NEUTRON_DBPASS';
Query OK, 0 rows affected (0.00 sec)
[root@controller ~]# mysql -uneutron -pNEUTRON_DBPASS -e "show databases;"
+--------------------+
| Database           |
+--------------------+
| information_schema |
| neutron            |
+--------------------+
#privileges granted
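This CREATE/GRANT pattern repeats for every OpenStack service database (nova, glance, neutron, ...). A hedged sketch of a generator, using our own hypothetical helper name `gen_db_grants`: it emits the SQL for a service whose user name matches the database name, so the output can be reviewed and then piped to `mysql -uroot -p`.

```shell
#!/bin/sh
# gen_db_grants: print the CREATE DATABASE and GRANT statements for a
# service database. $1 = database/user name, $2 = password.
gen_db_grants() {
  db=$1; pass=$2
  cat <<EOF
CREATE DATABASE IF NOT EXISTS ${db};
GRANT ALL PRIVILEGES ON ${db}.* TO '${db}'@'localhost' IDENTIFIED BY '${pass}';
GRANT ALL PRIVILEGES ON ${db}.* TO '${db}'@'%' IDENTIFIED BY '${pass}';
EOF
}

# Usage (on the controller): gen_db_grants neutron NEUTRON_DBPASS | mysql -uroot -p
```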
2. Configure the keystone user
@Create the keystone user and grant the role@
[root@controller ~]# source /root/admin-openrc.sh
[root@controller ~]# keystone user-list
+----------------------------------+------------+---------+--------------------+
|                id                |    name    | enabled |       email        |
+----------------------------------+------------+---------+--------------------+
| f0cad6978b234de795e5636fda5f73d9 |   admin    |   True  | [email protected] |
| dd75270f109b45a1a4f29313f00e98c7 | ceilometer |   True  |                    |
| a2a102334d0d489690ca25caef692268 |    demo    |   True  | [email protected]  |
| 20d2eac83516496f8b273cbce3c7bc5b |   glance   |   True  | [email protected] |
| 9b7d6fa1ae6e4197b0a4b0a2f8ebb991 |    heat    |   True  |                    |
| 2b6c6343e7b94198b5877953bad09123 |    nova    |   True  |  [email protected]  |
+----------------------------------+------------+---------+--------------------+
[root@controller ~]# keystone user-create --name neutron --pass NEUTRON_PASS
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|  email   |                                  |
| enabled  |               True               |
|    id    | f050e8a48a6e49b485c9d70e2a884029 |
|   name   |             neutron              |
| username |             neutron              |
+----------+----------------------------------+
[root@controller ~]# keystone user-role-add --user neutron --tenant service --role admin
[root@controller ~]# keystone user-role-list --user neutron --tenant service
+----------------------------------+-------+----------------------------------+----------------------------------+
|                id                |  name |             user_id              |            tenant_id             |
+----------------------------------+-------+----------------------------------+----------------------------------+
| d6615b672113463c917cadab0ab1e2b8 | admin | f050e8a48a6e49b485c9d70e2a884029 | bcbf51353c534829a993707d3d7940ca |
+----------------------------------+-------+----------------------------------+----------------------------------+
@Create the service and register the endpoint@
[root@controller ~]# keystone service-create --name neutron --type network --description "OpenStack Networking"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |       OpenStack Networking       |
|   enabled   |               True               |
|      id     | 6e25297b7d0a47ceb459313d90d69447 |
|     name    |             neutron              |
|     type    |             network              |
+-------------+----------------------------------+
[root@controller ~]# keystone endpoint-create --service 6e25297b7d0a47ceb459313d90d69447 \
> --publicurl http://controller:9696 \
> --adminurl http://controller:9696 \
> --internalurl http://controller:9696 \
> --region regionOne
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
|   adminurl  |      http://controller:9696      |
|      id     | 23529cf09a524071abed0aa56d25f458 |
| internalurl |      http://controller:9696      |
|  publicurl  |      http://controller:9696      |
|    region   |            regionOne             |
|  service_id | 6e25297b7d0a47ceb459313d90d69447 |
+-------------+----------------------------------+
3. Install the packages neutron needs
[root@controller ~]# yum -y install openstack-neutron openstack-neutron-ml2 python-neutronclient
4. Configure neutron. The configuration covers the database connection, keystone authentication, rabbitmq, nova state notifications and the plugin setup.
a. Configure the database connection
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf database connection mysql://neutron:NEUTRON_DBPASS@controller/neutron
b. Configure keystone authentication
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller:5000/v2.0
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken identity_uri http://controller:35357
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_tenant_name service
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_user neutron
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_password NEUTRON_PASS
c. Configure the rabbitmq connection
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend rabbit
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT rabbit_host controller
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT rabbit_port 5672
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT rabbit_userid neutron
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT rabbit_password NEUTRON_MQPASS
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT rabbit_virtual_host /
d. Configure neutron to use the ML2 layer-2 plugin
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins router
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT allow_overlapping_ips True
e. Configure neutron to notify nova of network state changes
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_status_changes True
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_data_changes True
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT nova_url http://controller:8774/v2
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT nova_admin_auth_url http://controller:35357/v2.0
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT nova_region_name regionOne
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT nova_admin_username nova
[root@controller ~]# keystone tenant-list |grep service
| bcbf51353c534829a993707d3d7940ca | service | True |        #the id of the service tenant
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT nova_admin_tenant_id bcbf51353c534829a993707d3d7940ca
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT nova_admin_password NOVA_PASS
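The `nova_admin_tenant_id` value above was copied by hand out of `keystone tenant-list`. A small extraction helper avoids the manual copy; the name `tenant_id` is our own, and it assumes only the tenant-list table layout (fields when splitting on "|": $2 = id, $3 = name).

```shell
#!/bin/sh
# tenant_id: read `keystone tenant-list` output on stdin and print the id
# of the tenant whose name is $1, skipping border and header rows.
tenant_id() {
  awk -F'|' -v want="$1" '
    NF > 3 {
      name = $3; gsub(/ /, "", name)
      if (name == want) { id = $2; gsub(/ /, "", id); print id }
    }'
}

# Usage (on the controller):
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
#   nova_admin_tenant_id "$(keystone tenant-list | tenant_id service)"
```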
f. Configure the OVS layer-2 plugin (ML2)
[root@controller ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers flat,gre
[root@controller ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types gre
[root@controller ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers openvswitch
[root@controller ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_gre tunnel_id_ranges 1:1000
[root@controller ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_security_group True
[root@controller ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_ipset True
[root@controller ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
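The `tunnel_id_ranges 1:1000` setting caps how many GRE segments, and therefore how many isolated tenant networks, this deployment can allocate. A quick sanity-check sketch (`tunnel_range_size` is our own helper name) computes the size of such a `low:high` range:

```shell
#!/bin/sh
# tunnel_range_size: number of tunnel ids in a "low:high" range string,
# the format used by the ml2_type_gre tunnel_id_ranges option.
tunnel_range_size() {
  low=${1%%:*}     # text before the first ":"
  high=${1##*:}    # text after the last ":"
  echo $(( high - low + 1 ))
}
```

For example, `tunnel_range_size 1:1000` reports that at most 1000 GRE tenant networks can exist with the configuration above.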
g. Configure nova to use neutron @@note: this step modifies nova's configuration file@@
[root@controller ~]# openstack-config --set /etc/nova/nova.conf DEFAULT network_api_class nova.network.neutronv2.api.API
[root@controller ~]# openstack-config --set /etc/nova/nova.conf DEFAULT security_group_api neutron
[root@controller ~]# openstack-config --set /etc/nova/nova.conf DEFAULT linuxnet_interface_driver nova.network.linux_net.LinuxOVSInterfaceDriver
[root@controller ~]# openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
[root@controller ~]# openstack-config --set /etc/nova/nova.conf neutron url http://controller:9696
[root@controller ~]# openstack-config --set /etc/nova/nova.conf neutron auth_strategy keystone
[root@controller ~]# openstack-config --set /etc/nova/nova.conf neutron admin_auth_url http://controller:35357/v2.0
[root@controller ~]# openstack-config --set /etc/nova/nova.conf neutron admin_tenant_name service
[root@controller ~]# openstack-config --set /etc/nova/nova.conf neutron admin_username neutron
[root@controller ~]# openstack-config --set /etc/nova/nova.conf neutron admin_password NEUTRON_PASS
h. Start the neutron-server service
[root@controller ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
[root@controller ~]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade juno" neutron
INFO  [alembic.migration] Context impl MySQLImpl.
INFO  [alembic.migration] Will assume non-transactional DDL.
INFO  [alembic.migration] Running upgrade  -> havana, havana_initial
#restart the nova services so they pick up the neutron settings
[root@controller system]# systemctl restart openstack-nova-api.service
[root@controller system]# systemctl restart openstack-nova-scheduler.service
[root@controller system]# systemctl restart openstack-nova-conductor.service
#enable and restart neutron-server
[root@controller system]# systemctl enable neutron-server.service
ln -s '/usr/lib/systemd/system/neutron-server.service' '/etc/systemd/system/multi-user.target.wants/neutron-server.service'
[root@controller system]# systemctl restart neutron-server.service
#verify the neutron configuration on the controller
[root@controller ~]# neutron ext-list
+-----------------------+-----------------------------------------------+
| alias                 | name                                          |      #the extensions neutron supports
+-----------------------+-----------------------------------------------+
| security-group        | security-group                                |
| l3_agent_scheduler    | L3 Agent Scheduler                            |
| ext-gw-mode           | Neutron L3 Configurable external gateway mode |
| binding               | Port Binding                                  |
| provider              | Provider Network                              |
| agent                 | agent                                         |
| quotas                | Quota management support                      |
| dhcp_agent_scheduler  | DHCP Agent Scheduler                          |
| l3-ha                 | HA Router extension                           |
| multi-provider        | Multi Provider Network                        |
| external-net          | Neutron external network                      |
| router                | Neutron L3 Router                             |
| allowed-address-pairs | Allowed Address Pairs                         |
| extraroute            | Neutron Extra Route                           |
| extra_dhcp_opt        | Neutron Extra DHCP opts                       |
| dvr                   | Distributed Virtual Router                    |
+-----------------------+-----------------------------------------------+
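The `neutron ext-list` output can likewise be checked mechanically. This sketch, assuming only the table layout shown above (`list_ext_aliases` is our own helper name), prints the extension aliases so required ones such as `router` or `security-group` can be grepped for:

```shell
#!/bin/sh
# list_ext_aliases: read a `neutron ext-list` table on stdin and print each
# alias (field 2 when splitting on "|"), skipping border and header rows.
list_ext_aliases() {
  awk -F'|' '
    NF > 2 {
      a = $2; gsub(/ /, "", a)
      if (a != "" && a != "alias") print a
    }'
}

# Usage:
# neutron ext-list | list_ext_aliases | grep -qx router && echo "L3 routing available"
```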