Compute node

Adding and configuring Networking [compute node]


Install the components:  yum install -y openstack-neutron openstack-neutron-linuxbridge ebtables ipset

Configure the common component:  vim /etc/neutron/neutron.conf  // change or add the following

[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
verbose = True
[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = openstackpasswd
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = neutronpasswd
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
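As a quick sanity check that the options above landed in the right sections, the file can be parsed with Python's configparser. This is an illustrative sketch, not part of the official procedure; the expected values simply mirror the listing above.

```python
import configparser

# Key options from the neutron.conf listing above.
EXPECTED = {
    ("DEFAULT", "rpc_backend"): "rabbit",
    ("DEFAULT", "auth_strategy"): "keystone",
    ("oslo_messaging_rabbit", "rabbit_host"): "controller",
    ("oslo_messaging_rabbit", "rabbit_userid"): "openstack",
    ("keystone_authtoken", "username"): "neutron",
    ("oslo_concurrency", "lock_path"): "/var/lib/neutron/tmp",
}

def check_conf(path, expected=EXPECTED):
    """Return {(section, key): actual_value} for every option that does
    not match its expected value; missing options show up as None."""
    # strict=False / interpolation=None tolerate the duplicate keys and
    # '%' characters that real oslo.config files may contain.
    cfg = configparser.ConfigParser(strict=False, interpolation=None)
    cfg.read(path)
    mismatches = {}
    for (section, key), value in expected.items():
        actual = None
        if section == "DEFAULT" or cfg.has_section(section):
            actual = cfg.get(section, key, fallback=None)
        if actual != value:
            mismatches[(section, key)] = actual
    return mismatches
```

On the compute node you would call `check_conf("/etc/neutron/neutron.conf")` and expect an empty dict back; any entry in the result points at a section/option that is missing or mistyped.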



Configure the Linux bridge agent

[root@compute ~]# mv /etc/neutron/plugins/ml2/linuxbridge_agent.ini /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak
[root@compute ~]# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini   # add the following
[linux_bridge]
physical_interface_mappings = public:eno16777736
[vxlan]
enable_vxlan = False
[agent]
prevent_arp_spoofing = True
[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
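Note that eno16777736 is the NIC name on this particular host; check `ls /sys/class/net` on your machine and substitute your own public interface. The option takes a comma-separated list of physical_network:interface pairs; below is a small illustrative sketch of that format (a hypothetical helper, not the agent's actual parser):

```python
def parse_mappings(value):
    """Parse a physical_interface_mappings value such as
    'public:eno16777736' into a {physical_network: interface} dict."""
    mappings = {}
    for pair in value.split(","):
        pair = pair.strip()
        if not pair:
            continue
        physnet, _, interface = pair.partition(":")
        mappings[physnet.strip()] = interface.strip()
    return mappings

print(parse_mappings("public:eno16777736"))
# → {'public': 'eno16777736'}
```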

Configure Compute to use Networking

vi /etc/nova/nova.conf  // change or add the following

[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
region_name = RegionOne
project_name = service
username = neutron
password = neutronpasswd

Start the services

systemctl restart openstack-nova-compute.service   

systemctl enable neutron-linuxbridge-agent.service

systemctl start neutron-linuxbridge-agent.service

[root@compute ~]# systemctl restart openstack-nova-compute.service 
[root@compute ~]# systemctl enable neutron-linuxbridge-agent.service
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-linuxbridge-agent.service to /usr/lib/systemd/system/neutron-linuxbridge-agent.service.
[root@compute ~]# systemctl start neutron-linuxbridge-agent.service
[root@compute ~]#

Verify the configuration (controller)


Run the environment variable script

source admin-openrc.sh


List all extensions

neutron ext-list

[root@controller network-scripts]# neutron ext-list
+-----------------------+-----------------------------------------------+
| alias                 | name                                          |
+-----------------------+-----------------------------------------------+
| dns-integration       | DNS Integration                               |
| ext-gw-mode           | Neutron L3 Configurable external gateway mode |
| binding               | Port Binding                                  |
| agent                 | agent                                         |
| subnet_allocation     | Subnet Allocation                             |
| l3_agent_scheduler    | L3 Agent Scheduler                            |
| external-net          | Neutron external network                      |
| flavors               | Neutron Service Flavors                       |
| net-mtu               | Network MTU                                   |
| quotas                | Quota management support                      |
| l3-ha                 | HA Router extension                           |
| provider              | Provider Network                              |
| multi-provider        | Multi Provider Network                        |
| extraroute            | Neutron Extra Route                           |
| router                | Neutron L3 Router                             |
| extra_dhcp_opt        | Neutron Extra DHCP opts                       |
| security-group        | security-group                                |
| dhcp_agent_scheduler  | DHCP Agent Scheduler                          |
| rbac-policies         | RBAC Policies                                 |
| port-security         | Port Security                                 |
| allowed-address-pairs | Allowed Address Pairs                         |
| dvr                   | Distributed Virtual Router                    |
+-----------------------+-----------------------------------------------+
[root@controller network-scripts]#

List all agents

neutron agent-list

[root@controller network-scripts]# neutron agent-list
+--------------------------------------+--------------------+------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host       | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+------------+-------+----------------+---------------------------+
| 4280e1bf-9167-4513-9128-8d71bb1235cc | DHCP agent         | controller | :-)   | True           | neutron-dhcp-agent        |
| 75faf736-924d-43a5-bb2c-620dcd474602 | Metadata agent     | controller | :-)   | True           | neutron-metadata-agent    |
| af9496f7-9c3a-4b29-9112-4fbd19a91b70 | Linux bridge agent | compute    | :-)   | True           | neutron-linuxbridge-agent |
| fdc74917-b760-48e4-b5d6-5290083521bf | Linux bridge agent | controller | :-)   | True           | neutron-linuxbridge-agent |
+--------------------------------------+--------------------+------------+-------+----------------+---------------------------+
[root@controller network-scripts]#

The agent types are as follows:

Linux bridge agent

Linux bridge agent

DHCP agent

Metadata agent

There must be four of them; if not, one of the steps above is misconfigured.


Adding the dashboard - Horizon [controller node]

The OpenStack dashboard, also known as Horizon, is a web interface that lets administrators and users manage the various OpenStack resources and services.
The dashboard drives the OpenStack cloud controller through the OpenStack APIs.
Horizon allows custom branding.
Horizon provides core classes, reusable templates, and tools.
This deployment uses the Apache web server.


Install the package:  yum install -y openstack-dashboard

Edit the configuration file

vi /etc/openstack-dashboard/local_settings  // change or add the following

OPENSTACK_HOST = "controller"   # use the OpenStack services on the controller node
ALLOWED_HOSTS = ['*', ]         # allow any host to access the dashboard

# Configure the memcached session storage service. Note: comment out any other session storage configuration.

CACHES = {
    'default': {
         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
         'LOCATION': '127.0.0.1:11211',
     }
}
# (With the LocMemCache backend, LOCATION is ignored and memcached is never
# used; MemcachedCache matches the memcached service restarted below.)

OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "volume": 2,
}

TIME_ZONE = "Asia/Chongqing"  # time zone

Restart the services:  systemctl restart httpd.service memcached.service

The dashboard is now reachable at http://controller/dashboard; log in with the admin or demo user (domain: default).

Adding block storage - cinder [controller node]

Block Storage, also called cinder, provides storage services to OpenStack. For example, when you buy a cloud host from Alibaba Cloud and also want a large-capacity disk (usually called a cloud disk), that cloud disk is block storage.

Create the database and grant privileges to the cinder user

mysql -uroot -proot

> CREATE DATABASE cinder;

> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost'    IDENTIFIED BY 'cinder';

> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%'    IDENTIFIED BY 'cinder';

MariaDB [(none)]> CREATE DATABASE cinder;
Query OK, 1 row affected (0.03 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost'    IDENTIFIED BY 'cinder';
Query OK, 0 rows affected (0.14 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%'    IDENTIFIED BY 'cinder';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]>


Run the environment variable script

source admin-openrc.sh

Create the cinder user (password: cinderpasswd)

openstack user create --domain default --password-prompt cinder

[root@controller ~]# source admin-openrc.sh
[root@controller ~]# openstack user create --domain default --password-prompt cinder
User Password:
Repeat User Password:
+-----------+----------------------------------+
| Field     | Value                            |
+-----------+----------------------------------+
| domain_id | default                          |
| enabled   | True                             |
| id        | 97c646aaca35428b853cc7b2dba399c3 |
| name      | cinder                           |
+-----------+----------------------------------+
[root@controller ~]#

Add the admin role

openstack role add --project service --user cinder admin

[root@controller ~]# openstack role add --project service --user cinder admin
[root@controller ~]#

Adding block storage - preparation [controller node]

Create the cinder and cinderv2 service entities

openstack service create --name cinder \
  --description "OpenStack Block Storage" volume

openstack service create --name cinderv2 \
  --description "OpenStack Block Storage" volumev2

[root@controller ~]# openstack role add --project service --user cinder admin
[root@controller ~]# openstack service create --name cinder \
>  --description "OpenStack Block Storage" volume
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Block Storage          |
| enabled     | True                             |
| id          | 41d1d66196044f4c99f1f5f9a6891d87 |
| name        | cinder                           |
| type        | volume                           |
+-------------+----------------------------------+
[root@controller ~]# openstack service create --name cinderv2 \
>  --description "OpenStack Block Storage" volumev2
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Block Storage          |
| enabled     | True                             |
| id          | 1041e0133be44321803302fd928b8d45 |
| name        | cinderv2                         |
| type        | volumev2                         |
+-------------+----------------------------------+
[root@controller ~]#

Create the Block Storage service API endpoints

openstack endpoint create --region RegionOne volume public http://controller:8776/v1/%\(tenant_id\)s

[root@controller ~]# openstack endpoint create --region RegionOne volume public http://controller:8776/v1/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field        | Value                                   |
+--------------+-----------------------------------------+
| enabled      | True                                    |
| id           | 096bdd462bd94a29899bdefeb0ed3734        |
| interface    | public                                  |
| region       | RegionOne                               |
| region_id    | RegionOne                               |
| service_id   | 41d1d66196044f4c99f1f5f9a6891d87        |
| service_name | cinder                                  |
| service_type | volume                                  |
| url          | http://controller:8776/v1/%(tenant_id)s |
+--------------+-----------------------------------------+
[root@controller ~]#

openstack endpoint create --region RegionOne volume internal http://controller:8776/v1/%\(tenant_id\)s

[root@controller ~]# openstack endpoint create --region RegionOne volume internal http://controller:8776/v1/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field        | Value                                   |
+--------------+-----------------------------------------+
| enabled      | True                                    |
| id           | 727676c160284e4ca653affaeb324c39        |
| interface    | internal                                |
| region       | RegionOne                               |
| region_id    | RegionOne                               |
| service_id   | 41d1d66196044f4c99f1f5f9a6891d87        |
| service_name | cinder                                  |
| service_type | volume                                  |
| url          | http://controller:8776/v1/%(tenant_id)s |
+--------------+-----------------------------------------+
[root@controller ~]#


openstack endpoint create --region RegionOne volume admin http://controller:8776/v1/%\(tenant_id\)s

[root@controller ~]# openstack endpoint create --region RegionOne volume admin http://controller:8776/v1/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field        | Value                                   |
+--------------+-----------------------------------------+
| enabled      | True                                    |
| id           | ea0c8d6c84bd43888ae842c4e2e57731        |
| interface    | admin                                   |
| region       | RegionOne                               |
| region_id    | RegionOne                               |
| service_id   | 41d1d66196044f4c99f1f5f9a6891d87        |
| service_name | cinder                                  |
| service_type | volume                                  |
| url          | http://controller:8776/v1/%(tenant_id)s |
+--------------+-----------------------------------------+
[root@controller ~]#


openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(tenant_id\)s

[root@controller ~]# openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field        | Value                                   |
+--------------+-----------------------------------------+
| enabled      | True                                    |
| id           | 68201bbf68b542e4aeedab3164907ee4        |
| interface    | public                                  |
| region       | RegionOne                               |
| region_id    | RegionOne                               |
| service_id   | 1041e0133be44321803302fd928b8d45        |
| service_name | cinderv2                                |
| service_type | volumev2                                |
| url          | http://controller:8776/v2/%(tenant_id)s |
+--------------+-----------------------------------------+
[root@controller ~]#


openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(tenant_id\)s

[root@controller ~]# openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field        | Value                                   |
+--------------+-----------------------------------------+
| enabled      | True                                    |
| id           | 52cb76c604d544f09ae99a5f0bb25ec9        |
| interface    | internal                                |
| region       | RegionOne                               |
| region_id    | RegionOne                               |
| service_id   | 1041e0133be44321803302fd928b8d45        |
| service_name | cinderv2                                |
| service_type | volumev2                                |
| url          | http://controller:8776/v2/%(tenant_id)s |
+--------------+-----------------------------------------+
[root@controller ~]#


openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(tenant_id\)s

[root@controller ~]# openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field        | Value                                   |
+--------------+-----------------------------------------+
| enabled      | True                                    |
| id           | 6590ebabec184778ba818ffccd2efa45        |
| interface    | admin                                   |
| region       | RegionOne                               |
| region_id    | RegionOne                               |
| service_id   | 1041e0133be44321803302fd928b8d45        |
| service_name | cinderv2                                |
| service_type | volumev2                                |
| url          | http://controller:8776/v2/%(tenant_id)s |
+--------------+-----------------------------------------+
[root@controller ~]#
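In all of these commands, %\(tenant_id\)s (the backslashes only keep the shell from interpreting the parentheses) is stored literally as %(tenant_id)s: a template that the client substitutes with the caller's project ID at request time. An illustrative expansion using Python %-formatting (the project ID below is a made-up example value):

```python
# The endpoint URL is stored as a Python-style %()s template; clients
# fill in the requesting project's (tenant's) ID before calling the API.
template = "http://controller:8776/v2/%(tenant_id)s"
url = template % {"tenant_id": "0123456789abcdef0123456789abcdef"}  # hypothetical project ID
print(url)
# → http://controller:8776/v2/0123456789abcdef0123456789abcdef
```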


Adding block storage - install and configure [controller node]


Install the packages:  yum install -y openstack-cinder python-cinderclient

Edit the configuration file:  vim /etc/cinder/cinder.conf  // change or add the following

[database]
connection = mysql://cinder:cinder@controller/cinder
[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 192.168.100.20
verbose = True
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = cinderpasswd
[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = openstackpasswd
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
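The connection option is an SQLAlchemy-style database URL of the form mysql://user:password@host/dbname; the user and password here are the ones granted in the MariaDB step earlier. A small sketch of how the pieces decompose, using only the standard library:

```python
from urllib.parse import urlsplit

# Decompose the database URL used in cinder.conf above.
parts = urlsplit("mysql://cinder:cinder@controller/cinder")
print(parts.scheme)             # driver: mysql
print(parts.username)           # database user: cinder
print(parts.password)           # password from the GRANT step: cinder
print(parts.hostname)           # database host: controller
print(parts.path.lstrip("/"))   # database name: cinder
```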


Populate the database:  su -s /bin/sh -c "cinder-manage db sync" cinder


Configure Compute to use Block Storage

vi /etc/nova/nova.conf  # add the following

[cinder]
os_region_name=RegionOne


Start the services

systemctl restart openstack-nova-api.service

systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service

systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service

[root@controller ~]# systemctl restart openstack-nova-api.service
[root@controller ~]# systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-cinder-api.service to /usr/lib/systemd/system/openstack-cinder-api.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-cinder-scheduler.service to /usr/lib/systemd/system/openstack-cinder-scheduler.service.
[root@controller ~]# systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
[root@controller ~]#


Adding block storage - configure the storage node [compute node]


Properly we would prepare a separate machine for the storage service, but to save resources the compute node doubles as the storage node here. Add a second disk (/dev/sdb) to the compute (storage) node to serve as the storage disk.

Install LVM

yum install -y lvm2

Start the service

systemctl enable lvm2-lvmetad.service

systemctl start lvm2-lvmetad.service


Create the physical volume:  pvcreate /dev/sdb

Create the volume group:  vgcreate cinder-volumes /dev/sdb

[root@compute ~]# pvcreate /dev/sdb
  Physical volume "/dev/sdb" successfully created
[root@compute ~]# vgcreate cinder-volumes /dev/sdb
  Volume group "cinder-volumes" successfully created
[root@compute ~]#

Edit the configuration file:  vi /etc/lvm/lvm.conf

devices {
     filter = [ "a/sdb/", "r/.*/" ]
}

Note: if additional disks must remain visible to LVM (for example a system disk sda that itself uses LVM), accept them as well:
filter = [ "a/sda/", "a/sdb/", "r/.*/" ]
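LVM evaluates filter entries in order, and the first regex that matches a device decides its fate: a/…/ accepts, r/…/ rejects, so the trailing r/.*/ rejects everything not accepted earlier. A simplified sketch of that first-match rule (real LVM matches against full device paths and supports more delimiter syntax than shown here):

```python
import re

def lvm_filter_accepts(device, filter_rules):
    """Return True if `device` is accepted by an lvm.conf-style filter.
    Rules look like 'a/sdb/' (accept) or 'r/.*/' (reject); the first
    matching rule decides. Simplified sketch of LVM's behaviour."""
    for rule in filter_rules:
        action, pattern = rule[0], rule[2:-1]  # strip the a/.../ wrapper
        if re.search(pattern, device):
            return action == "a"
    return True  # LVM accepts devices that no rule matches

rules = ["a/sdb/", "r/.*/"]
print(lvm_filter_accepts("/dev/sdb", rules))  # True
print(lvm_filter_accepts("/dev/sdc", rules))  # False
```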



Adding block storage - configure the storage node (compute)

Install the packages:  yum install -y openstack-cinder targetcli python-oslo-policy


Edit the configuration file:  vi /etc/cinder/cinder.conf

[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 192.168.100.21
enabled_backends = lvm
glance_host = controller
verbose = True

[database]
connection = mysql://cinder:cinder@controller/cinder

[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = openstackpasswd

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = cinderpasswd

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp


Adding block storage - start and verify

Start the services (compute)

systemctl enable openstack-cinder-volume.service target.service

systemctl start openstack-cinder-volume.service target.service


Verify operation (controller)

1. Run the environment variable script

source admin-openrc.sh

2. List the services

cinder service-list

[root@controller ~]# source admin-openrc.sh
[root@controller ~]# cinder service-list
+------------------+-------------+------+---------+-------+----------------------------+-----------------+
|      Binary      |     Host    | Zone |  Status | State |         Updated_at         | Disabled Reason |
+------------------+-------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler |  controller | nova | enabled |   up  | 2016-09-24T15:06:52.000000 |        -        |
|  cinder-volume   | compute@lvm | nova | enabled |   up  | 2016-09-24T15:06:51.000000 |        -        |
+------------------+-------------+------+---------+-------+----------------------------+-----------------+
[root@controller ~]#


At this point, all nodes are basically installed and instances can be created.