openstack (liberty): deploying the experimental platform (II: simple-version software installation, part 2)

Continuing from part 1, this post records the remaining installation steps for the compute and network components.

 

First, the installation of the Compute service (nova).

 

n1. Preparation: create the database and configure privileges. (The password is still openstack, and this is again done on the controller node, node0.)

mysql -u root -p
CREATE DATABASE nova;
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'openstack';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'openstack';
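As a quick sanity check (a sketch; it assumes the MySQL client on node0 and the password above), you can log in as the new nova user and confirm the grants took effect:

mysql -u nova -popenstack -h node0 -e "SHOW DATABASES;"   # should list the nova database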

 

n2. Set up the CLI environment.

source admin-openrc.sh

 

n3. Create the nova user and add the admin role.

openstack user create --domain default --password-prompt nova
openstack role add --project service --user nova admin

 

n4. Create the service and its endpoints.

openstack service create --name nova --description "OpenStack Compute" compute
openstack endpoint create --region RegionOne compute public http://node0:8774/v2/%\(tenant_id\)s
openstack endpoint create --region RegionOne compute internal http://node0:8774/v2/%\(tenant_id\)s
openstack endpoint create --region RegionOne compute admin http://node0:8774/v2/%\(tenant_id\)s
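Before moving on, it is worth confirming that the service and all three endpoints were registered (the exact column layout varies with the client version):

openstack service list
openstack endpoint list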

 

n5. Install the packages.

yum install openstack-nova-api openstack-nova-cert openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler python-novaclient

 

n6. Configure /etc/nova/nova.conf.

[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 192.168.1.100
network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.NeutronLinuxBridgeInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
enabled_apis = osapi_compute,metadata
verbose = True

[oslo_messaging_rabbit]
rabbit_host = node0
rabbit_userid = openstack
rabbit_password = openstack

[database]
connection = mysql://nova:openstack@node0/nova

[keystone_authtoken]
auth_uri = http://node0:5000
auth_url = http://node0:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = nova
password = openstack

[vnc]
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip

[glance]
host = node0

[oslo_concurrency]
lock_path = /var/lib/nova/tmp
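Rather than editing the file by hand, the same settings can be applied with the openstack-config helper from the openstack-utils package, if you have it installed; a sketch for two of the keys:

yum install openstack-utils   # provides the openstack-config helper
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.168.1.100
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_host node0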

 

n7. Sync the database.

su -s /bin/sh -c "nova-manage db sync" nova
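If the sync worked, the nova database should now contain tables; a quick look (assuming the grants from step n1):

mysql -u nova -popenstack -h node0 nova -e "SHOW TABLES;" | head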

 

n8. Enable and start the services.

systemctl enable openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

systemctl start openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

At this step I hit an error: systemctl start openstack-nova-api.service failed with a permission problem. The error messages were:

Feb  4 10:41:46 localhost nova-api: 2016-02-04 10:41:46.765 11041 CRITICAL nova [-] OSError: [Errno 13] Permission denied: '/usr/lib/python2.7/site-packages/keys'
Feb  4 10:41:46 localhost nova-api: 2016-02-04 10:41:46.765 11041 ERROR nova Traceback (most recent call last):
Feb  4 10:41:46 localhost nova-api: 2016-02-04 10:41:46.765 11041 ERROR nova   File "/usr/bin/nova-api", line 10, in <module>
Feb  4 10:41:46 localhost nova-api: 2016-02-04 10:41:46.765 11041 ERROR nova     sys.exit(main())
Feb  4 10:41:46 localhost nova-api: 2016-02-04 10:41:46.765 11041 ERROR nova   File "/usr/lib/python2.7/site-packages/nova/cmd/api.py", line 55, in main
Feb  4 10:41:46 localhost nova-api: 2016-02-04 10:41:46.765 11041 ERROR nova     server = service.WSGIService(api, use_ssl=should_use_ssl)
Feb  4 10:41:46 localhost nova-api: 2016-02-04 10:41:46.765 11041 ERROR nova   File "/usr/lib/python2.7/site-packages/nova/service.py", line 328, in __init__
Feb  4 10:41:46 localhost nova-api: 2016-02-04 10:41:46.765 11041 ERROR nova     self.app = self.loader.load_app(name)
Feb  4 10:41:46 localhost nova-api: 2016-02-04 10:41:46.765 11041 ERROR nova   File "/usr/lib/python2.7/site-packages/nova/wsgi.py", line 543, in load_app
Feb  4 10:41:46 localhost nova-api: 2016-02-04 10:41:46.765 11041 ERROR nova     return deploy.loadapp("config:%s" % self.config_path, name=name)
Feb  4 10:41:46 localhost nova-api: 2016-02-04 10:41:46.765 11041 ERROR nova   File "/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 247, in loadapp
Feb  4 10:41:46 localhost nova-api: 2016-02-04 10:41:46.765 11041 ERROR nova     return loadobj(APP, uri, name=name, **kw)
Feb  4 10:41:46 localhost nova-api: 2016-02-04 10:41:46.765 11041 ERROR nova   File "/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 272, in loadobj
Feb  4 10:41:46 localhost nova-api: 2016-02-04 10:41:46.765 11041 ERROR nova     return context.create()
Feb  4 10:41:46 localhost nova-api: 2016-02-04 10:41:46.765 11041 ERROR nova   File "/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 710, in create
Feb  4 10:41:46 localhost nova-api: 2016-02-04 10:41:46.765 11041 ERROR nova     return self.object_type.invoke(self)
Feb  4 10:41:46 localhost nova-api: 2016-02-04 10:41:46.765 11041 ERROR nova   File "/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 144, in invoke
Feb  4 10:41:46 localhost nova-api: 2016-02-04 10:41:46.765 11041 ERROR nova     **context.local_conf)
Feb  4 10:41:46 localhost nova-api: 2016-02-04 10:41:46.765 11041 ERROR nova   File "/usr/lib/python2.7/site-packages/paste/deploy/util.py", line 56, in fix_call
Feb  4 10:41:46 localhost nova-api: 2016-02-04 10:41:46.765 11041 ERROR nova     val = callable(*args, **kw)
Feb  4 10:41:46 localhost nova-api: 2016-02-04 10:41:46.765 11041 ERROR nova   File "/usr/lib/python2.7/site-packages/nova/api/openstack/urlmap.py", line 160, in urlmap_factory
Feb  4 10:41:46 localhost nova-api: 2016-02-04 10:41:46.765 11041 ERROR nova     app = loader.get_app(app_name, global_conf=global_conf)
Feb  4 10:41:46 localhost nova-api: 2016-02-04 10:41:46.765 11041 ERROR nova   File "/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 350, in get_app
Feb  4 10:41:46 localhost nova-api: 2016-02-04 10:41:46.765 11041 ERROR nova     name=name, global_conf=global_conf).create()
Feb  4 10:41:46 localhost nova-api: 2016-02-04 10:41:46.765 11041 ERROR nova   File "/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 710, in create
Feb  4 10:41:46 localhost nova-api: 2016-02-04 10:41:46.765 11041 ERROR nova     return self.object_type.invoke(self)
Feb  4 10:41:46 localhost nova-api: 2016-02-04 10:41:46.765 11041 ERROR nova   File "/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 144, in invoke
Feb  4 10:41:46 localhost nova-api: 2016-02-04 10:41:46.765 11041 ERROR nova     **context.local_conf)
Feb  4 10:41:46 localhost nova-api: 2016-02-04 10:41:46.765 11041 ERROR nova   File "/usr/lib/python2.7/site-packages/paste/deploy/util.py", line 56, in fix_call
Feb  4 10:41:46 localhost nova-api: 2016-02-04 10:41:46.765 11041 ERROR nova     val = callable(*args, **kw)
Feb  4 10:41:46 localhost nova-api: 2016-02-04 10:41:46.765 11041 ERROR nova   File "/usr/lib/python2.7/site-packages/nova/api/auth.py", line 78, in pipeline_factory_v21
Feb  4 10:41:46 localhost nova-api: 2016-02-04 10:41:46.765 11041 ERROR nova     return _load_pipeline(loader, local_conf[CONF.auth_strategy].split())
Feb  4 10:41:46 localhost nova-api: 2016-02-04 10:41:46.765 11041 ERROR nova   File "/usr/lib/python2.7/site-packages/nova/api/auth.py", line 58, in _load_pipeline
Feb  4 10:41:46 localhost nova-api: 2016-02-04 10:41:46.765 11041 ERROR nova     app = loader.get_app(pipeline[-1])
Feb  4 10:41:46 localhost nova-api: 2016-02-04 10:41:46.765 11041 ERROR nova   File "/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 350, in get_app
Feb  4 10:41:46 localhost nova-api: 2016-02-04 10:41:46.765 11041 ERROR nova     name=name, global_conf=global_conf).create()
Feb  4 10:41:46 localhost nova-api: 2016-02-04 10:41:46.765 11041 ERROR nova   File "/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 710, in create
Feb  4 10:41:46 localhost nova-api: 2016-02-04 10:41:46.765 11041 ERROR nova     return self.object_type.invoke(self)
Feb  4 10:41:46 localhost nova-api: 2016-02-04 10:41:46.765 11041 ERROR nova   File "/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 146, in invoke
Feb  4 10:41:46 localhost nova-api: 2016-02-04 10:41:46.765 11041 ERROR nova     return fix_call(context.object, context.global_conf, **context.local_conf)
Feb  4 10:41:46 localhost nova-api: 2016-02-04 10:41:46.765 11041 ERROR nova   File "/usr/lib/python2.7/site-packages/paste/deploy/util.py", line 56, in fix_call
Feb  4 10:41:46 localhost nova-api: 2016-02-04 10:41:46.765 11041 ERROR nova     val = callable(*args, **kw)
Feb  4 10:41:46 localhost nova-api: 2016-02-04 10:41:46.765 11041 ERROR nova   File "/usr/lib/python2.7/site-packages/nova/api/openstack/__init__.py", line 311, in factory
Feb  4 10:41:46 localhost nova-api: 2016-02-04 10:41:46.765 11041 ERROR nova     return cls()
Feb  4 10:41:46 localhost nova-api: 2016-02-04 10:41:46.765 11041 ERROR nova   File "/usr/lib/python2.7/site-packages/nova/api/openstack/compute/__init__.py", line 156, in __init__
Feb  4 10:41:46 localhost nova-api: 2016-02-04 10:41:46.765 11041 ERROR nova     super(APIRouterV21, self).__init__(init_only)
Feb  4 10:41:46 localhost nova-api: 2016-02-04 10:41:46.765 11041 ERROR nova   File "/usr/lib/python2.7/site-packages/nova/api/openstack/__init__.py", line 382, in __init__
Feb  4 10:41:46 localhost nova-api: 2016-02-04 10:41:46.765 11041 ERROR nova     self._register_resources_check_inherits(mapper)
Feb  4 10:41:46 localhost nova-api: 2016-02-04 10:41:46.765 11041 ERROR nova   File "/usr/lib/python2.7/site-packages/nova/api/openstack/__init__.py", line 406, in _register_resources_check_inherits
Feb  4 10:41:46 localhost nova-api: 2016-02-04 10:41:46.765 11041 ERROR nova     for resource in ext.obj.get_resources():
Feb  4 10:41:46 localhost nova-api: 2016-02-04 10:41:46.765 11041 ERROR nova   File "/usr/lib/python2.7/site-packages/nova/api/openstack/compute/cloudpipe.py", line 190, in get_resources
Feb  4 10:41:46 localhost nova-api: 2016-02-04 10:41:46.765 11041 ERROR nova     CloudpipeController())]
Feb  4 10:41:46 localhost nova-api: 2016-02-04 10:41:46.765 11041 ERROR nova   File "/usr/lib/python2.7/site-packages/nova/api/openstack/compute/cloudpipe.py", line 50, in __init__
Feb  4 10:41:46 localhost nova-api: 2016-02-04 10:41:46.765 11041 ERROR nova     self.setup()
Feb  4 10:41:46 localhost nova-api: 2016-02-04 10:41:46.765 11041 ERROR nova   File "/usr/lib/python2.7/site-packages/nova/api/openstack/compute/cloudpipe.py", line 57, in setup
Feb  4 10:41:46 localhost nova-api: 2016-02-04 10:41:46.765 11041 ERROR nova     fileutils.ensure_tree(CONF.keys_path)
Feb  4 10:41:46 localhost nova-api: 2016-02-04 10:41:46.765 11041 ERROR nova   File "/usr/lib/python2.7/site-packages/oslo_utils/fileutils.py", line 42, in ensure_tree
Feb  4 10:41:46 localhost nova-api: 2016-02-04 10:41:46.765 11041 ERROR nova     os.makedirs(path, mode)
Feb  4 10:41:46 localhost nova-api: 2016-02-04 10:41:46.765 11041 ERROR nova   File "/usr/lib64/python2.7/os.py", line 157, in makedirs
Feb  4 10:41:46 localhost nova-api: 2016-02-04 10:41:46.765 11041 ERROR nova     mkdir(name, mode)
Feb  4 10:41:46 localhost nova-api: 2016-02-04 10:41:46.765 11041 ERROR nova OSError: [Errno 13] Permission denied: '/usr/lib/python2.7/site-packages/keys'
Feb  4 10:41:46 localhost nova-api: 2016-02-04 10:41:46.765 11041 ERROR nova
Feb  4 10:41:46 localhost systemd: openstack-nova-api.service: main process exited, code=exited, status=1/FAILURE
Feb  4 10:41:46 localhost systemd: Failed to start OpenStack Nova API Server.
Feb  4 10:41:46 localhost systemd: Unit openstack-nova-api.service entered failed state.
Feb  4 10:41:46 localhost systemd: openstack-nova-api.service failed.
Feb  4 10:41:47 localhost systemd: openstack-nova-api.service holdoff time over, scheduling restart.
Feb  4 10:41:47 localhost systemd: Starting OpenStack Nova API Server...

Since this is a permission problem, and going by the error log, I changed the owner/group and read/write permissions of the /var/log/nova and /var/lib/nova directories:

[root@node0 log]# chown -R nova:nova /var/lib/nova
[root@node0 log]# chown -R nova:nova /var/log/nova
[root@node0 log]# chmod -R 775 /var/lib/nova
[root@node0 log]# chmod -R 775 /var/log/nova

Then add the following lines to the [DEFAULT] section of /etc/nova/nova.conf:

state_path=/var/lib/nova
keys_path=$state_path/keys
log_dir=/var/log/nova

After that, starting the nova services again worked fine.

 

The following steps are performed on the compute node, node1 (192.168.1.110).

c1. Install the nova compute packages.

yum install openstack-nova-compute sysfsutils

 

c2. Configure /etc/nova/nova.conf.

[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 192.168.1.110
network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.NeutronLinuxBridgeInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
verbose = True
state_path=/var/lib/nova     # note: these three lines are not in the official install guide
keys_path=$state_path/keys
log_dir=/var/log/nova

[oslo_messaging_rabbit]
rabbit_host = node0
rabbit_userid = openstack
rabbit_password = openstack

[keystone_authtoken]
auth_uri = http://node0:5000
auth_url = http://node0:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = nova
password = openstack

[vnc]
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://node0:6080/vnc_auto.html

[glance]
host = node0

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

 

c3. Configure the virtualization type and start the services.

[root@node1 ~]# egrep -c '(vmx|svm)' /proc/cpuinfo   # check whether the compute host supports hardware virtualization; 0 means no support, requiring full software emulation with virt_type set to qemu
16
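For reference, if the count had come back 0, the [libvirt] section of /etc/nova/nova.conf on the compute node would need the qemu setting; a sketch using the openstack-config helper mentioned earlier (or simply edit the file by hand):

openstack-config --set /etc/nova/nova.conf libvirt virt_type qemu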

Since my server supports hardware virtualization, I keep the default virt_type of kvm. Now enable and start the virtualization service and the nova compute service:

systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service

 

Uh oh: systemctl start openstack-nova-compute.service on node1 hung and never completed. Checking the log /var/log/nova/nova-compute.log turned up the following errors:

2016-02-04 11:30:29.620 21675 INFO oslo_service.periodic_task [-] Skipping periodic task _periodic_update_dns because its interval is negative
2016-02-04 11:30:29.669 21675 INFO nova.virt.driver [-] Loading compute driver 'libvirt.LibvirtDriver'
2016-02-04 11:30:29.742 21675 INFO oslo.messaging._drivers.impl_rabbit [req-ba82b667-e62b-40d1-a549-07f211ee6d4a - - - - -] Connecting to AMQP server on 192.168.1.100:5672
2016-02-04 11:30:29.752 21675 ERROR oslo.messaging._drivers.impl_rabbit [req-ba82b667-e62b-40d1-a549-07f211ee6d4a - - - - -] AMQP server on 192.168.1.100:5672 is unreachable: [Errno 113] EHOSTUNREACH. Trying again in 1 seconds.
2016-02-04 11:30:30.769 21675 ERROR oslo.messaging._drivers.impl_rabbit [req-ba82b667-e62b-40d1-a549-07f211ee6d4a - - - - -] AMQP server on 192.168.1.100:5672 is unreachable: [Errno 113] EHOSTUNREACH. Trying again in 2 seconds.
2016-02-04 11:30:32.789 21675 ERROR oslo.messaging._drivers.impl_rabbit [req-ba82b667-e62b-40d1-a549-07f211ee6d4a - - - - -] AMQP server on 192.168.1.100:5672 is unreachable: [Errno 113] EHOSTUNREACH. Trying again in 2 seconds.
2016-02-04 11:30:34.808 21675 ERROR oslo.messaging._drivers.impl_rabbit [req-ba82b667-e62b-40d1-a549-07f211ee6d4a - - - - -] AMQP server on 192.168.1.100:5672 is unreachable: [Errno 113] EHOSTUNREACH. Trying again in 2 seconds.
2016-02-04 11:30:36.826 21675 ERROR oslo.messaging._drivers.impl_rabbit [req-ba82b667-e62b-40d1-a549-07f211ee6d4a - - - - -] AMQP server on 192.168.1.100:5672 is unreachable: [Errno 113] EHOSTUNREACH. Trying again in 2 seconds.
2016-02-04 11:30:38.845 21675 ERROR oslo.messaging._drivers.impl_rabbit [req-ba82b667-e62b-40d1-a549-07f211ee6d4a - - - - -] AMQP server on 192.168.1.100:5672 is unreachable: [Errno 113] EHOSTUNREACH. Trying again in 2 seconds.
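Two quick checks from node1 make the diagnosis concrete (a sketch; nc comes from the nmap-ncat package and may need installing first):

ping -c 3 192.168.1.100     # succeeds, so routing to the controller is fine
nc -zv 192.168.1.100 5672   # times out, so the RabbitMQ port is blocked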

Analysis: the compute node can ping the controller, so routing is fine. But telnet from the compute node to the controller fails, so the port is unreachable, and an unreachable port immediately suggests a firewall. A port scan of controller node0 from compute node node1 (below) shows only port 22 open, which confirms the firewall suspicion:

[root@node1 ~]# nmap 192.168.1.100

Starting Nmap 6.40 ( http://nmap.org ) at 2016-02-04 11:38 CST
Nmap scan report for node0 (192.168.1.100)
Host is up (0.00027s latency).
Not shown: 999 filtered ports
PORT   STATE SERVICE
22/tcp open  ssh
MAC Address: 18:03:73:F0:C3:98 (Dell)

But iptables was off, so why were ports still being blocked? Ah: this is CentOS 7, which ships with an additional firewall program, firewalld. Shutting it down solved the problem.
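Concretely, this is all it took on node0 (opening just the required ports with firewall-cmd would be the more surgical fix, but for a lab environment disabling firewalld is simpler):

systemctl stop firewalld.service
systemctl disable firewalld.service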

 

Then, on controller node node0, check whether the services are OK:

[root@node0 log]# nova service-list
+----+------------------+-------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host  | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+-------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-consoleauth | node0 | internal | enabled | up    | 2016-02-04T03:48:50.000000 | -               |
| 2  | nova-conductor   | node0 | internal | enabled | up    | 2016-02-04T03:48:50.000000 | -               |
| 4  | nova-scheduler   | node0 | internal | enabled | up    | 2016-02-04T03:48:51.000000 | -               |
| 5  | nova-compute     | node1 | nova     | enabled | up    | 2016-02-04T03:48:43.000000 | -               |
+----+------------------+-------+----------+---------+-------+----------------------------+-----------------+

 

The nova-cert service is missing; something went wrong again. Could it be related to the earlier failure of the nova services on the controller, the /var/log/nova and /var/lib/nova permission problem? Let's confirm:

[root@node0 log]# systemctl status openstack-nova-cert.service
● openstack-nova-cert.service - OpenStack Nova Cert Server
   Loaded: loaded (/usr/lib/systemd/system/openstack-nova-cert.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2016-02-04 10:37:13 CST; 1h 13min ago
 Main PID: 9373 (nova-cert)
   CGroup: /system.slice/openstack-nova-cert.service
           └─9373 /usr/bin/python2 /usr/bin/nova-cert

Next, check /var/log/messages for problems:

Feb 04 10:37:13 node0 nova-cert[9373]: crypto.ensure_ca_filesystem()
Feb 04 10:37:13 node0 nova-cert[9373]: File "/usr/lib/python2.7/site-packages/nova/crypto.py", line 124, in ensure_ca_filesystem
Feb 04 10:37:13 node0 nova-cert[9373]: fileutils.ensure_tree(ca_dir)
Feb 04 10:37:13 node0 nova-cert[9373]: File "/usr/lib/python2.7/site-packages/oslo_utils/fileutils.py", line 42, in ensure_tree
Feb 04 10:37:13 node0 nova-cert[9373]: os.makedirs(path, mode)
Feb 04 10:37:13 node0 nova-cert[9373]: File "/usr/lib64/python2.7/os.py", line 157, in makedirs
Feb 04 10:37:13 node0 nova-cert[9373]: mkdir(name, mode)
Feb 04 10:37:13 node0 nova-cert[9373]: OSError: [Errno 13] Permission denied: '/usr/lib/python2.7/site-packages/CA'
Feb 04 10:38:13 node0 systemd[1]: Started OpenStack Nova Cert Server.
Feb 04 10:51:36 node0 systemd[1]: Started OpenStack Nova Cert Server.

Sure enough, it is the same permission error. Restart the cert service and try again; as shown below, everything is OK now:

[root@node0 log]# systemctl restart openstack-nova-cert.service
[root@node0 log]# systemctl status openstack-nova-cert.service
● openstack-nova-cert.service - OpenStack Nova Cert Server
   Loaded: loaded (/usr/lib/systemd/system/openstack-nova-cert.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2016-02-04 11:51:04 CST; 2s ago
 Main PID: 15008 (nova-cert)
   CGroup: /system.slice/openstack-nova-cert.service
           └─15008 /usr/bin/python2 /usr/bin/nova-cert

[root@node0 log]# nova service-list
+----+------------------+-------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host  | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+-------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-consoleauth | node0 | internal | enabled | up    | 2016-02-04T03:51:10.000000 | -               |
| 2  | nova-conductor   | node0 | internal | enabled | up    | 2016-02-04T03:51:10.000000 | -               |
| 4  | nova-scheduler   | node0 | internal | enabled | up    | 2016-02-04T03:51:11.000000 | -               |
| 5  | nova-compute     | node1 | nova     | enabled | up    | 2016-02-04T03:51:13.000000 | -               |
| 6  | nova-cert        | node0 | internal | enabled | up    | 2016-02-04T03:51:10.000000 | -               |
+----+------------------+-------+----------+---------+-------+----------------------------+-----------------+

At this point, the nova compute service is fully installed and configured.

 

Next, install and configure the Networking service (neutron). This is again performed on the controller node, node0.

n1. Create the database and configure privileges.

mysql -u root -p
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'openstack';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'openstack';

 

n2. Set up the CLI environment.

source admin-openrc.sh

 

n3. Create the neutron user and assign the role.

openstack user create --domain default --password-prompt neutron
openstack role add --project service --user neutron admin

 

n4. Create the service and its endpoints.

openstack service create --name neutron --description "OpenStack Networking" network
openstack endpoint create --region RegionOne network public http://node0:9696
openstack endpoint create --region RegionOne network internal http://node0:9696
openstack endpoint create --region RegionOne network admin http://node0:9696

 

n5. Install the networking components. (I chose networking option 2, self-service networks. Note that my network has only one jump host with Internet access and a single public IP, so assigning public IPs to VMs would ultimately fail.)

yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge python-neutronclient ebtables ipset

 

n6. Configure /etc/neutron/neutron.conf.

[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
rpc_backend = rabbit
auth_strategy = keystone
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://node0:8774/v2
verbose = True

[database]
connection = mysql://neutron:openstack@node0/neutron

[oslo_messaging_rabbit]
rabbit_host = node0
rabbit_userid = openstack
rabbit_password = openstack

[keystone_authtoken]
auth_uri = http://node0:5000
auth_url = http://node0:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = openstack

[nova]
auth_url = http://node0:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
region_name = RegionOne
project_name = service
username = nova
password = openstack

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

 

n7. Configure /etc/neutron/plugins/ml2/ml2_conf.ini.

[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security

[ml2_type_flat]
flat_networks = public

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
enable_ipset = True

 

n8. Configure the Linux bridge agent: /etc/neutron/plugins/ml2/linuxbridge_agent.ini.

[linux_bridge]
physical_interface_mappings = public:em1

[vxlan]
enable_vxlan = True
local_ip = 192.168.1.100
l2_population = True

[agent]
prevent_arp_spoofing = True

[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

 

n9. Configure the L3 agent: /etc/neutron/l3_agent.ini.

[DEFAULT]
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
external_network_bridge =
verbose = True

 

n10. Configure the DHCP agent: /etc/neutron/dhcp_agent.ini.

[DEFAULT]
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True

verbose = True
dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf

The dnsmasq-neutron.conf file referenced above may not exist yet; in that case, create it yourself with the content below. (DHCP option 26 sets the instance interface MTU; 1450 leaves room for the 50-byte VXLAN encapsulation overhead on a standard 1500-byte physical network.)

dhcp-option-force=26,1450

 

n11. Configure the metadata agent: /etc/neutron/metadata_agent.ini.

[DEFAULT]
auth_uri = http://node0:5000
auth_url = http://node0:35357
auth_region = RegionOne
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = openstack

nova_metadata_ip = node0

metadata_proxy_shared_secret = openstack

verbose = True

 

n12. Configure Compute to use Networking: /etc/nova/nova.conf (on the controller).

[neutron]
url = http://node0:9696
auth_url = http://node0:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
region_name = RegionOne
project_name = service
username = neutron
password = openstack

service_metadata_proxy = True
metadata_proxy_shared_secret = openstack

 

n13. Create the plugin symlink.

The Networking service initialization expects a symbolic link /etc/neutron/plugin.ini pointing to the ML2 plugin configuration file /etc/neutron/plugins/ml2/ml2_conf.ini.

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

 

n14. Sync the database. (Note: this step takes quite some time; here it took at least a minute.)

su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
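To confirm the migration completed, neutron-db-manage can report the current schema revision (a sketch, reusing the same config files):

su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini current" neutron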

 

n15. Enable and start the networking services.

systemctl restart openstack-nova-api.service   # restart the nova API service

# the next two commands bring up the networking services
systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service

# since my configuration uses option 2 (self-service networks), the L3 agent is also required
systemctl enable neutron-l3-agent.service
systemctl start neutron-l3-agent.service

 

Next comes the networking configuration on the compute node, node1.

nc1. Install the components.

yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset

 

nc2. Configure the common components: /etc/neutron/neutron.conf.

[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
verbose = True

[oslo_messaging_rabbit]
rabbit_host = node0
rabbit_userid = openstack
rabbit_password = openstack

[keystone_authtoken]
auth_uri = http://node0:5000
auth_url = http://node0:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = openstack

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

 

nc3. Configure the Linux bridge agent: /etc/neutron/plugins/ml2/linuxbridge_agent.ini.

[linux_bridge]
physical_interface_mappings = public:em1

[vxlan]
enable_vxlan = True
local_ip = 192.168.1.110
l2_population = True

[agent]
prevent_arp_spoofing = True

[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

 

nc4. Configure the compute node to use Networking: /etc/nova/nova.conf.

[neutron]
url = http://node0:9696
auth_url = http://node0:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
region_name = RegionOne
project_name = service
username = neutron
password = openstack

 

nc5. Start the services.

systemctl restart openstack-nova-compute.service

systemctl enable neutron-linuxbridge-agent.service
systemctl start neutron-linuxbridge-agent.service

 

nc6. Verify which extension services are installed. (Running this on the controller or on the compute node gives the same result; it is easy to see why: the client simply queries the neutron-server API on the controller.)

[root@node1 opt]# neutron ext-list
+-----------------------+-----------------------------------------------+
| alias                 | name                                          |
+-----------------------+-----------------------------------------------+
| dns-integration       | DNS Integration                               |
| ext-gw-mode           | Neutron L3 Configurable external gateway mode |
| binding               | Port Binding                                  |
| agent                 | agent                                         |
| subnet_allocation     | Subnet Allocation                             |
| l3_agent_scheduler    | L3 Agent Scheduler                            |
| external-net          | Neutron external network                      |
| flavors               | Neutron Service Flavors                       |
| net-mtu               | Network MTU                                   |
| quotas                | Quota management support                      |
| l3-ha                 | HA Router extension                           |
| provider              | Provider Network                              |
| multi-provider        | Multi Provider Network                        |
| extraroute            | Neutron Extra Route                           |
| router                | Neutron L3 Router                             |
| extra_dhcp_opt        | Neutron Extra DHCP opts                       |
| security-group        | security-group                                |
| dhcp_agent_scheduler  | DHCP Agent Scheduler                          |
| rbac-policies         | RBAC Policies                                 |
| port-security         | Port Security                                 |
| allowed-address-pairs | Allowed Address Pairs                         |
| dvr                   | Distributed Virtual Router                    |
+-----------------------+-----------------------------------------------+

Verify which types of network agents are running:

[root@node1 opt]# neutron agent-list
+--------------------------------------+--------------------+-------+-------+----------------+---------------------------+
| id                                   | agent_type         | host  | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+-------+-------+----------------+---------------------------+
| 6040557f-6075-483b-9703-2e8578614935 | L3 agent           | node0 | :-)   | True           | neutron-l3-agent          |
| 7eac038e-9daf-4ffa-8261-537b148151bf | Linux bridge agent | node0 | :-)   | True           | neutron-linuxbridge-agent |
| 82be88ad-e273-405d-ac59-57eba50861c8 | DHCP agent         | node0 | :-)   | True           | neutron-dhcp-agent        |
| b0b1d65c-0943-48e9-a4a1-6308289dbd25 | Metadata agent     | node0 | :-)   | True           | neutron-metadata-agent    |
| d615a741-bdb8-40f4-82c2-ea0b9da07bb8 | Linux bridge agent | node1 | :-)   | True           | neutron-linuxbridge-agent |
+--------------------------------------+--------------------+-------+-------+----------------+---------------------------+

 

With that, all of the simple-version software installation is complete, and instances can now be created from the command line.

d1. Configure a keypair.

My node0 already has SSH key files, so I reuse the existing key when creating VMs. All that is needed is to register the public key with nova:
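(If no key pair existed yet, one could be generated first, as in the official guide:)

ssh-keygen -q -N ""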

nova keypair-add --pub-key ~/.ssh/id_rsa.pub hellokey

Check the key that was added:

[root@node0 opt]# nova keypair-list
+----------+-------------------------------------------------+
| Name     | Fingerprint                                     |
+----------+-------------------------------------------------+
| hellokey | 79:a5:5e:17:f8:b2:a0:9d:ec:5d:db:b6:7a:b0:e5:cc |
+----------+-------------------------------------------------+

 

d2. Add security group rules.

Permit ICMP (ping) and SSH:

nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0

Review the rules that were added:

[root@node0 opt]# nova secgroup-list-rules default
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
|             |           |         |           | default      |
|             |           |         |           | default      |
| icmp        | -1        | -1      | 0.0.0.0/0 |              |
| tcp         | 22        | 22      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+

 

Finally, create an instance from the command line.

d3. Create the virtual network.

Per the earlier configuration, the intent was a setup supporting both public and private (tenant) networks. However, the servers have only one interface connected to the switch; supporting a private management network would require an additional NIC connected to the external switch. So while the networking configuration covers both public and private, only the public virtual network is created here. (Even if the private network were created, the subsequent traffic configuration would fail; testing showed the private network had problems. Adjusting the configuration should fix them, but since that demonstration is not the point here, I did not spend the time.)

The public network topology is shown below:

[Figure 1: public network topology]

neutron net-create public --shared --provider:physical_network public --provider:network_type flat

The provider options in that command follow from the plugin configuration files:

[ml2_type_flat]
flat_networks = public

[linux_bridge]
physical_interface_mappings = public:em1

Now create the public subnet, specifying an allocation pool (start and end IPs), a DNS server, and the gateway:

[root@node0 opt]# neutron subnet-create public 192.168.1.0/24 --name public --allocation-pool start=192.168.1.140,end=192.168.1.254 --dns-nameserver 219.141.136.10 --gateway 192.168.1.1
 

d4. Resource information.

[root@node0 opt]# nova flavor-list
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+

[root@node0 opt]# nova image-list
+--------------------------------------+--------+--------+--------+
| ID                                   | Name   | Status | Server |
+--------------------------------------+--------+--------+--------+
| 686bec85-fe90-4aea-8026-b6f7cc8ed686 | cirros | ACTIVE |        |
+--------------------------------------+--------+--------+--------+

[root@node0 opt]# neutron net-list
+--------------------------------------+---------+-----------------------------------------------------+
| id                                   | name    | subnets                                             |
+--------------------------------------+---------+-----------------------------------------------------+
| 422a7cbd-46b0-474f-a09b-206387147997 | private | 95b1f1a1-0e09-4fc2-8d6c-3ea8bf6e6c4b 172.16.1.0/24  |
| ceb43e9a-69c1-4933-bca2-082801bfe34f | public  | 855aa082-0c19-465c-a622-418a8f7b8a4d 192.168.1.0/24 |
+--------------------------------------+---------+-----------------------------------------------------+

A note on the net-list output above: it includes a private network because my configuration follows the self-service networking option, which supports both public and tenant networks, so a vxlan tenant network was created as well. However, because of the physical connectivity and gateway configuration, testing against the private network showed problems. Those are solvable, just inconvenient in this lab environment, so I did not pursue the private network further.

 

Security group information: there is only the default group, which is the group the rules in d2 above were added to.

[root@node0 opt]# nova secgroup-list
+--------------------------------------+---------+------------------------+
| Id                                   | Name    | Description            |
+--------------------------------------+---------+------------------------+
| 844c51a3-09e9-41a0-bcd6-a6f7c3cffa56 | default | Default security group |
+--------------------------------------+---------+------------------------+

 

d5. Create an instance.

[root@node0 nova]# nova boot --flavor m1.tiny --image cirros --nic net-id=ceb43e9a-69c1-4933-bca2-082801bfe34f --security-group default --key-name hellokey public-instance
+--------------------------------------+-----------------------------------------------+
| Property                             | Value                                         |
+--------------------------------------+-----------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                        |
| OS-EXT-AZ:availability_zone          |                                               |
| OS-EXT-SRV-ATTR:host                 | -                                             |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                             |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000001                             |
| OS-EXT-STS:power_state               | 0                                             |
| OS-EXT-STS:task_state                | scheduling                                    |
| OS-EXT-STS:vm_state                  | building                                      |
| OS-SRV-USG:launched_at               | -                                             |
| OS-SRV-USG:terminated_at             | -                                             |
| accessIPv4                           |                                               |
| accessIPv6                           |                                               |
| adminPass                            | JCtWSicR3Z9B                                  |
| config_drive                         |                                               |
| created                              | 2016-02-04T06:42:13Z                          |
| flavor                               | m1.tiny (1)                                   |
| hostId                               |                                               |
| id                                   | 2830bb7e-9591-46d2-8a0a-1c329bcb39f8          |
| image                                | cirros (686bec85-fe90-4aea-8026-b6f7cc8ed686) |
| key_name                             | hellokey                                      |
| metadata                             | {}                                            |
| name                                 | public-instance                               |
| os-extended-volumes:volumes_attached | []                                            |
| progress                             | 0                                             |
| security_groups                      | default                                       |
| status                               | BUILD                                         |
| tenant_id                            | c6669377868c438f8a81cc234f85338f              |
| updated                              | 2016-02-04T06:42:14Z                          |
| user_id                              | 34b11c08da3b4c2ebfd6ac3203768bc4              |
+--------------------------------------+-----------------------------------------------+

 

Now look at the created instance; the creation process is very fast, on the order of seconds:

[root@node0 nova]# nova list
+--------------------------------------+-----------------+--------+------------+-------------+----------------------+
| ID                                   | Name            | Status | Task State | Power State | Networks             |
+--------------------------------------+-----------------+--------+------------+-------------+----------------------+
| 2830bb7e-9591-46d2-8a0a-1c329bcb39f8 | public-instance | ACTIVE | -          | Running     | public=192.168.1.142 |
+--------------------------------------+-----------------+--------+------------+-------------+----------------------+

 

d6. Access the created instance.

[root@node0 opt]# nova get-vnc-console public-instance novnc
+-------+----------------------------------------------------------------------------+
| Type  | Url                                                                        |
+-------+----------------------------------------------------------------------------+
| novnc | http://node0:6080/vnc_auto.html?token=9ab57a02-272c-46cc-b8b1-3a350467e679 |
+-------+----------------------------------------------------------------------------+

Enter http://tzj_IP:6080/vnc_auto.html?token=9ab57a02-272c-46cc-b8b1-3a350467e679 in a browser; note that tzj_IP here is the IP of the intermediate jump host described in part 1. The browser then shows the screen in the figure below:

[Figure 2: noVNC console showing the VM login screen]

To explain the figure: marker 1 is the button the user clicks to start the VNC display and bring up the VM's login screen; marker 2 shows the login hint: the default username is cirros and the password is cubswin:)

For reference, here are the DNAT rules I added to the nat table of the jump host's firewall:

[root@fedora1 ~]# iptables -t nat -A PREROUTING -p tcp --dport 6080 -d tzj_IP -j DNAT --to-destination 192.168.1.100:6080
[root@fedora1 ~]# iptables -t nat -A PREROUTING -p udp --dport 6080 -d tzj_IP -j DNAT --to-destination 192.168.1.100:6080
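Note that DNAT alone is not enough: the jump host must also forward packets between its interfaces (and its FORWARD chain must permit them). At minimum, on the jump host:

sysctl -w net.ipv4.ip_forward=1   # make it persistent via /etc/sysctl.conf if needed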

 

And here is what SSH access to the instance looks like:

[root@node0 opt]# ssh cirros@192.168.1.142
$ ls
$ pwd
/home/cirros
$

 

At this point, the simple-version software installation of the experimental platform is complete: instances can be created, and the resulting virtual machines can be accessed just like physical machines.

 

To make creating VMs more convenient, along the lines of AWS EC2, the next step is to install the dashboard. This part is straightforward and is done on the controller node, node0.

d1. Install the dashboard.

yum install openstack-dashboard

 

d2. Edit and configure /etc/openstack-dashboard/local_settings.

OPENSTACK_HOST = "node0"    # point the dashboard at the OpenStack services
ALLOWED_HOSTS = ['*', ]     # allow all hosts to access the dashboard

# configure the memcached session storage service
CACHES = {
    'default': {
         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
         'LOCATION': '127.0.0.1:11211',
    }
}

# users created through the dashboard default to the "user" role
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

# configure the API versions so users can log in to the dashboard via the keystone v3 API
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "volume": 2,
}

# configure the time zone
TIME_ZONE = "Asia/Chongqing"

 

d3. Start the dashboard (in effect, just restarting Apache and memcached).

systemctl enable httpd.service memcached.service
systemctl restart httpd.service memcached.service
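A quick way to confirm Apache is serving the dashboard before involving the jump host (run on node0; expect a 200 or a redirect to the login page):

curl -sI http://node0/dashboard | head -n 1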

 

d4. Verify dashboard web access at http://node0/dashboard.

Since my lab environment sits behind a jump host and the servers have only internal IPs, DNAT has to be configured once more:

[root@fedora1 ~]# iptables -t nat -A PREROUTING -p tcp --dport 80 -d tzj_IP -j DNAT --to-destination 192.168.1.100:80

 

A few screenshots to close out this record of building the experimental platform:

[Figure 3: dashboard login page]

Above: the login page.

 

[Figure 4: instance list for the admin user]

Above: the list of instances under the current admin user.

 

[Figure 5: keypair list]

Above: the current keypair list, containing only hellokey.

 

[Figure 6: dashboard launch-instance dialog]

Above: the dashboard's interface for creating a VM; anyone with AWS EC2 experience will find it easy to understand and pick up.

 
