1. Overview

    As OpenStack's block storage service, cinder provides persistent volumes for instances. cinder is designed as a pluggable service and can support many storage types, including professional FC arrays from vendors such as EMC, NetApp, HP, IBM and Huawei; a storage vendor only needs to develop a driver that plugs into cinder. In addition, cinder supports open-source distributed storage such as glusterfs, ceph, sheepdog and nfs, which makes it possible to build inexpensive IP-SAN storage. This article builds a distributed storage pool with glusterfs and exposes it to cinder.

2. Building the glusterfs storage

    glusterfs is an open-source distributed storage solution that supports several volume layouts: 1. replicated (similar to RAID1); 2. striped (similar to RAID0); 3. distributed-replicated; 4. distributed-replicated-striped (similar to RAID10), which is the layout used in this article.
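
    The original does not show the create commands for each layout; the following is a minimal sketch of what they might look like on GlusterFS 3.x, assuming two hosts (host1/host2) and placeholder brick paths that are not the ones used later in this article:

# 1. replicated (RAID1-like): every file is mirrored on both bricks
gluster volume create vol_rep replica 2 transport tcp host1:/brick1 host2:/brick1
# 2. striped (RAID0-like): every file is split across the bricks
gluster volume create vol_str stripe 2 transport tcp host1:/brick1 host2:/brick1
# 3. distributed-replicated: files are distributed over replica pairs
gluster volume create vol_dist_rep replica 2 transport tcp host1:/brick1 host2:/brick1 host1:/brick2 host2:/brick2
# 4. striped + replicated (RAID10-like), the layout used in this article
gluster volume create vol_str_rep stripe 2 replica 2 transport tcp host1:/brick1 host2:/brick1 host1:/brick2 host2:/brick2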

  1. Environment

    Two machines form the glusterfs cluster: 10.1.112.55 and 10.1.112.56. Each machine has 11 data disks of 3T each, named /dev/sdb through /dev/sdl and mounted at /data2 through /data12 (a sketch of one possible way to prepare these disks follows the listing below):

[root@YiZhuang_10_1_112_55 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda2             9.9G  2.6G  6.9G  27% /
tmpfs                 7.8G     0  7.8G   0% /dev/shm
/dev/sda1            1008M   82M  876M   9% /boot
/dev/sda4             257G  188M  244G   1% /data1
/dev/sdb1             2.8T  118M  2.8T   1% /data2
/dev/sdc1             2.8T  118M  2.8T   1% /data3
/dev/sdd1             2.8T  118M  2.8T   1% /data4
/dev/sde1             2.8T  118M  2.8T   1% /data5
/dev/sdf1             2.8T  118M  2.8T   1% /data6
/dev/sdg1             2.8T  118M  2.8T   1% /data7
/dev/sdh1             2.8T  118M  2.8T   1% /data8
/dev/sdi1             2.8T  117M  2.8T   1% /data9
/dev/sdj1             2.8T  118M  2.8T   1% /data10
/dev/sdk1             2.8T  118M  2.8T   1% /data11
/dev/sdl1             2.8T  118M  2.8T   1% /data12
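
The original does not show how the 11 data disks were partitioned and mounted; the following is a minimal sketch of one way to prepare them, assuming GPT partitions and an XFS filesystem (both assumptions, not taken from the original environment):

# partition and format /dev/sdb .. /dev/sdl (destroys any existing data)
for dev in b c d e f g h i j k l; do
    parted -s /dev/sd${dev} mklabel gpt mkpart primary 0% 100%
    mkfs.xfs -f /dev/sd${dev}1
done
# mount /dev/sdb1 on /data2, /dev/sdc1 on /data3, ... /dev/sdl1 on /data12 and persist the mounts
num=2
for dev in b c d e f g h i j k l; do
    mkdir -p /data${num}
    echo "/dev/sd${dev}1  /data${num}  xfs  defaults  0 0" >> /etc/fstab
    mount /data${num}
    num=$((num + 1))
done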

The architecture is as follows:

[Figure 1: architecture diagram]

2. Probe the glusterfs peer

[root@YiZhuang_10_1_112_55 ~]# gluster peer probe 10.1.112.56
peer probe: success. 
# check the peer status
[root@YiZhuang_10_1_112_55 ~]# gluster peer status
Number of Peers: 1
Hostname: 10.1.112.56
Uuid: a720fd05-4fa7-4ff7-924e-2d8a40e48c18
State: Peer in Cluster (Connected)


3. Create a volume from the bricks, striped into 11 pieces and replicated across the two machines (similar to RAID10)

[root@YiZhuang_10_1_112_55 ~]# gluster volume create openstack_cinder stripe 11 replica 2 transport tcp \
10.1.112.55:/data2/cinder 10.1.112.56:/data2/cinder \
10.1.112.55:/data3/cinder 10.1.112.56:/data3/cinder \
10.1.112.55:/data4/cinder 10.1.112.56:/data4/cinder \
10.1.112.55:/data5/cinder 10.1.112.56:/data5/cinder \
10.1.112.55:/data6/cinder 10.1.112.56:/data6/cinder \
10.1.112.55:/data7/cinder 10.1.112.56:/data7/cinder \
10.1.112.55:/data8/cinder 10.1.112.56:/data8/cinder \
10.1.112.55:/data9/cinder 10.1.112.56:/data9/cinder \
10.1.112.55:/data10/cinder 10.1.112.56:/data10/cinder \
10.1.112.55:/data11/cinder 10.1.112.56:/data11/cinder \
10.1.112.55:/data12/cinder 10.1.112.56:/data12/cinder

4. Inspect the glusterfs volume layout

[root@YiZhuang_10_1_112_55 ~]# gluster volume info
 
Volume Name: openstack_cinder
Type: Striped-Replicate
Volume ID: c55ff01b-3be0-4514-b622-83677f95924a
Status: Started
Number of Bricks: 1 x 11 x 2 = 22
Transport-type: tcp
Bricks:
Brick1: 10.1.112.55:/data2/cinder
Brick2: 10.1.112.56:/data2/cinder
Brick3: 10.1.112.55:/data3/cinder
Brick4: 10.1.112.56:/data3/cinder
Brick5: 10.1.112.55:/data4/cinder
Brick6: 10.1.112.56:/data4/cinder
Brick7: 10.1.112.55:/data5/cinder
Brick8: 10.1.112.56:/data5/cinder
Brick9: 10.1.112.55:/data6/cinder
Brick10: 10.1.112.56:/data6/cinder
Brick11: 10.1.112.55:/data7/cinder
Brick12: 10.1.112.56:/data7/cinder
Brick13: 10.1.112.55:/data8/cinder
Brick14: 10.1.112.56:/data8/cinder
Brick15: 10.1.112.55:/data9/cinder
Brick16: 10.1.112.56:/data9/cinder
Brick17: 10.1.112.55:/data10/cinder
Brick18: 10.1.112.56:/data10/cinder
Brick19: 10.1.112.55:/data11/cinder
Brick20: 10.1.112.56:/data11/cinder
Brick21: 10.1.112.55:/data12/cinder
Brick22: 10.1.112.56:/data12/cinder

5. Start the glusterfs volume

[root@YiZhuang_10_1_112_55 ~]# gluster volume start openstack_cinder        # start the glusterfs volume

# check the status of the glusterfs volume
[root@YiZhuang_10_1_112_55 ~]# gluster volume status
Status of volume: openstack_cinder
Gluster process                                         Port    Online  Pid
------------------------------------------------------------------------------
Brick 10.1.112.55:/data2/cinder                         59152   Y       4121
Brick 10.1.112.56:/data2/cinder                         59152   Y       43596
Brick 10.1.112.55:/data3/cinder                         59153   Y       4132
Brick 10.1.112.56:/data3/cinder                         59153   Y       43607
Brick 10.1.112.55:/data4/cinder                         59154   Y       4143
Brick 10.1.112.56:/data4/cinder                         59154   Y       43618
Brick 10.1.112.55:/data5/cinder                         59155   Y       4154
Brick 10.1.112.56:/data5/cinder                         59155   Y       43629
Brick 10.1.112.55:/data6/cinder                         59156   Y       4165
Brick 10.1.112.56:/data6/cinder                         59156   Y       43640
Brick 10.1.112.55:/data7/cinder                         59157   Y       4176
Brick 10.1.112.56:/data7/cinder                         59157   Y       43651
Brick 10.1.112.55:/data8/cinder                         59158   Y       4187
Brick 10.1.112.56:/data8/cinder                         59158   Y       43662
Brick 10.1.112.55:/data9/cinder                         59159   Y       4198
Brick 10.1.112.56:/data9/cinder                         59159   Y       43673
Brick 10.1.112.55:/data10/cinder                        59160   Y       4209
Brick 10.1.112.56:/data10/cinder                        59160   Y       43684
Brick 10.1.112.55:/data11/cinder                        59161   Y       4220
Brick 10.1.112.56:/data11/cinder                        59161   Y       43695
Brick 10.1.112.55:/data12/cinder                        59162   Y       4231
Brick 10.1.112.56:/data12/cinder                        59162   Y       43706
NFS Server on localhost                                 2049    Y       4244
Self-heal Daemon on localhost                           N/A     Y       4251
NFS Server on 10.1.112.56                               2049    Y       43718
Self-heal Daemon on 10.1.112.56                         N/A     Y       43727
 
Task Status of Volume openstack_cinder
------------------------------------------------------------------------------

6. Mount test

[root@YiZhuang_10_1_112_56 ~]# mount.glusterfs 10.1.112.56:openstack_cinder /media/ 
[root@YiZhuang_10_1_112_56 ~]# df
Filesystem             1K-blocks    Used   Available Use% Mounted on
/dev/sda2               10321208 2488348     7308572  26% /
tmpfs                    8140364       0     8140364   0% /dev/shm
/dev/sda1                1032088   83596      896064   9% /boot
/dev/sda4              268751588  191660   254908060   1% /data1
/dev/sdb1             2928834296   32972  2928801324   1% /data2
/dev/sdc1             2928834296   32972  2928801324   1% /data3
/dev/sdd1             2928834296   32972  2928801324   1% /data4
/dev/sde1             2928834296   32972  2928801324   1% /data5
/dev/sdf1             2928834296   32972  2928801324   1% /data6
/dev/sdg1             2928834296   32972  2928801324   1% /data7
/dev/sdh1             2928834296   32972  2928801324   1% /data8
/dev/sdi1             2928834296   32972  2928801324   1% /data9
/dev/sdj1             2928834296   32972  2928801324   1% /data10
/dev/sdk1             2928834296   32972  2928801324   1% /data11
/dev/sdl1             2928834296   32972  2928801324   1% /data12
10.1.112.56:openstack_cinder
                     32217177216  362752 32216814464   1% /media        # the volume is mounted
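
If the test mount should survive a reboot, an fstab entry can be added; the sketch below is only an illustration and assumes the glusterfs-fuse client is installed on the node doing the mount:

echo "10.1.112.56:/openstack_cinder  /media  glusterfs  defaults,_netdev  0 0" >> /etc/fstab
mount -a
# unmount again once the test is finished (and remove the fstab line if the mount was only for testing)
umount /media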

3. Integrating cinder with glusterfs

  1. cinder-volume configuration on the storage nodes

[DEFAULT]
enabled_backends = glusterfs
[glusterfs]                                                          # appended at the end of cinder.conf
volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver      # glusterfs driver
glusterfs_shares_config = /etc/cinder/shares.conf                    # file listing the glusterfs shares
glusterfs_mount_point_base = /var/lib/cinder/volumes                 # mount point base
volume_backend_name = glusterfs                                      # backend name, bound to a cinder volume type on the controller

2. Configure the glusterfs shares file

[root@YiZhuang_10_1_112_55 ~]# vim /etc/cinder/shares.conf 
10.1.112.55:/openstack_cinder

3. Restart the cinder-volume service

[root@YiZhuang_10_1_112_55 init.d]# chkconfig openstack-cinder-volume on
[root@YiZhuang_10_1_112_55 init.d]# service  openstack-cinder-volume restart
Stopping openstack-cinder-volume:                          [  OK  ]
Starting openstack-cinder-volume:                          [  OK  ]

Perform the same steps on both storage machines, then check /var/log/cinder/volume.log for errors.
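
A quick way to scan for problems after the restart might look like the sketch below; the exact log wording varies by release, so the grep pattern is only a heuristic:

# look for obvious errors in the cinder-volume log
tail -n 200 /var/log/cinder/volume.log | grep -iE 'error|trace'
# the glusterfs share should appear mounted under the configured mount point base
mount | grep /var/lib/cinder/volumes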

4. Check the service status on the controller node

[root@controller ~]# cinder service-list
+------------------+--------------------------------+------+---------+-------+----------------------------+-----------------+
|      Binary      |              Host              | Zone |  Status | State |         Updated_at         | Disabled Reason |
+------------------+--------------------------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler |        controller              | nova | enabled |   up  | 2016-01-22T08:52:14.000000 |       None      |
|  cinder-volume   | YiZhuang_10_1_112_55@glusterfs | nova | enabled |   up  | 2016-01-22T08:52:17.000000 |       None      |    # both glusterfs backends are up
|  cinder-volume   | YiZhuang_10_1_112_56@glusterfs | nova | enabled |   up  | 2016-01-22T08:52:04.000000 |       None      |
+------------------+--------------------------------+------+---------+-------+----------------------------+-----------------+

5. Create a volume type on the controller

[root@controller ~]# cinder type-create glusterfs
+--------------------------------------+------------+
|                  ID                  |    Name    |
+--------------------------------------+------------+
| 6688e8f9-e744-4c21-b570-fd81b099d4c0 | glusterfs  |
+--------------------------------------+------------+

6. On the controller, bind the cinder type to volume_backend_name

[root@controller ~]# cinder type-key 6688e8f9-e744-4c21-b570-fd81b099d4c0 set volume_backend_name=glusterfs
# check the extra specs of the type

[root@controller~]# cinder extra-specs-list
+--------------------------------------+-----------+----------------------------------------+
|                  ID                  |    Name   |              extra_specs               |
+--------------------------------------+-----------+----------------------------------------+
| 6688e8f9-e744-4c21-b570-fd81b099d4c0 | glusterfs | {u'volume_backend_name': u'glusterfs'} |    # the association is in place
+--------------------------------------+-----------+----------------------------------------+

7. Restart the cinder services on the controller

[root@LuGu_10_1_81_205 ~]# /etc/init.d/openstack-cinder-api  restart
Stopping openstack-cinder-api:                             [  OK  ]
Starting openstack-cinder-api:                             [  OK  ]
[root@LuGu_10_1_81_205 ~]# /etc/init.d/openstack-cinder-scheduler restart
Stopping openstack-cinder-scheduler:                       [  OK  ]
Starting openstack-cinder-scheduler:                       [  OK  ]

4. Functional testing

  1. Create a cinder volume

[root@controller ~]# cinder create --display-name "test1" --volume-type glusterfs 10        # specify the volume type created above
+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |      2016-01-22T09:01:48.978864      |
| display_description |                 None                 |
|     display_name    |                test1                 |
|      encrypted      |                False                 |
|          id         | 3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001 |
|       metadata      |                  {}                  |
|         size        |                  10                  |
|     snapshot_id     |                 None                 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |              glusterfs               |
+---------------------+--------------------------------------+
[root@controller ~]# cinder show  3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001 
+--------------------------------+--------------------------------------+
|            Property            |                Value                 |
+--------------------------------+--------------------------------------+
|          attachments           |                  []                  |
|       availability_zone        |                 nova                 |
|            bootable            |                false                 |
|           created_at           |      2016-01-22T09:01:48.000000      |
|      display_description       |                 None                 |
|          display_name          |                test1                 |
|           encrypted            |                False                 |
|               id               | 3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001 |
|            metadata            |                  {}                  |
|     os-vol-host-attr:host      |    YiZhuang_10_1_112_55@glusterfs    |        # the volume was scheduled to 10.1.112.55
| os-vol-mig-status-attr:migstat |                 None                 |
| os-vol-mig-status-attr:name_id |                 None                 |
|  os-vol-tenant-attr:tenant_id  |   a49b16d5324a4d20bde2217b17200485   |
|              size              |                  10                  |
|          snapshot_id           |                 None                 |
|          source_volid          |                 None                 |
|             status             |              available               |        # created successfully; status is available
|          volume_type           |              glusterfs               |
+--------------------------------+--------------------------------------+
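
A further functional check, not part of the original run, would be to attach the new volume to a running instance from the controller; <instance-id> below is a placeholder for an existing VM, and /dev/vdb is only a suggested device name:

nova volume-attach <instance-id> 3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001 /dev/vdb
# afterwards "cinder show 3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001" should report the status as in-use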

2. Verify how glusterfs striped the volume

[root@YiZhuang_10_1_112_56 ~]# for num in {2..12}
> do
> ls -lh /data${num}/cinder/volume-3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001
> done
-rw-rw-rw- 2 root root 931M Jan 22 17:01 /data2/cinder/volume-3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001
-rw-rw-rw- 2 root root 931M Jan 22 17:01 /data3/cinder/volume-3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001
-rw-rw-rw- 2 root root 931M Jan 22 17:01 /data4/cinder/volume-3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001
-rw-rw-rw- 2 root root 931M Jan 22 17:01 /data5/cinder/volume-3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001
-rw-rw-rw- 2 root root 931M Jan 22 17:01 /data6/cinder/volume-3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001
-rw-rw-rw- 2 root root 931M Jan 22 17:01 /data7/cinder/volume-3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001
-rw-rw-rw- 2 root root 931M Jan 22 17:01 /data8/cinder/volume-3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001
-rw-rw-rw- 2 root root 931M Jan 22 17:01 /data9/cinder/volume-3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001
-rw-rw-rw- 2 root root 931M Jan 22 17:01 /data10/cinder/volume-3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001
-rw-rw-rw- 2 root root 931M Jan 22 17:01 /data11/cinder/volume-3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001
-rw-rw-rw- 2 root root 931M Jan 22 17:01 /data12/cinder/volume-3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001    # the 10G volume is split into 11 stripes, one per disk, spreading the load; the other machine holds an identical copy of each stripe

3. Verify the md5 checksums of the data on both machines

[root@YiZhuang_10_1_112_56 ~]# for num in {2..12}; do md5sum /data${num}/cinder/volume-3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001; done      
e0f8c6646f8ce81fe6be0b12f1511aa1  /data2/cinder/volume-3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001
e0f8c6646f8ce81fe6be0b12f1511aa1  /data3/cinder/volume-3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001
e0f8c6646f8ce81fe6be0b12f1511aa1  /data4/cinder/volume-3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001
e18d850c6c53cdeb2e346fcd28c7a189  /data5/cinder/volume-3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001
e18d850c6c53cdeb2e346fcd28c7a189  /data6/cinder/volume-3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001
e18d850c6c53cdeb2e346fcd28c7a189  /data7/cinder/volume-3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001
e18d850c6c53cdeb2e346fcd28c7a189  /data8/cinder/volume-3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001
e18d850c6c53cdeb2e346fcd28c7a189  /data9/cinder/volume-3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001
e18d850c6c53cdeb2e346fcd28c7a189  /data10/cinder/volume-3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001
e18d850c6c53cdeb2e346fcd28c7a189  /data11/cinder/volume-3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001
e18d850c6c53cdeb2e346fcd28c7a189  /data12/cinder/volume-3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001

On the other machine:
[root@YiZhuang_10_1_112_55 ~]# for num in {2..12} ; do md5sum /data${num}/cinder/volume-3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001; done      
e0f8c6646f8ce81fe6be0b12f1511aa1  /data2/cinder/volume-3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001
e0f8c6646f8ce81fe6be0b12f1511aa1  /data3/cinder/volume-3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001
e0f8c6646f8ce81fe6be0b12f1511aa1  /data4/cinder/volume-3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001
e18d850c6c53cdeb2e346fcd28c7a189  /data5/cinder/volume-3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001
e18d850c6c53cdeb2e346fcd28c7a189  /data6/cinder/volume-3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001
e18d850c6c53cdeb2e346fcd28c7a189  /data7/cinder/volume-3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001
e18d850c6c53cdeb2e346fcd28c7a189  /data8/cinder/volume-3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001
e18d850c6c53cdeb2e346fcd28c7a189  /data9/cinder/volume-3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001
e18d850c6c53cdeb2e346fcd28c7a189  /data10/cinder/volume-3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001
e18d850c6c53cdeb2e346fcd28c7a189  /data11/cinder/volume-3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001
e18d850c6c53cdeb2e346fcd28c7a189  /data12/cinder/volume-3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001
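
The two listings above can also be compared in one step; the sketch below assumes passwordless ssh from 10.1.112.56 to 10.1.112.55:

for num in {2..12}; do md5sum /data${num}/cinder/volume-3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001; done > /tmp/local.md5
ssh 10.1.112.55 'for num in {2..12}; do md5sum /data${num}/cinder/volume-3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001; done' > /tmp/remote.md5
diff /tmp/local.md5 /tmp/remote.md5 && echo "replicas match"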

Comparing the two listings, the md5 checksums are identical, so the two machines hold the same files and mirror each other. The integration of glusterfs and cinder is now complete.

5. Summary

    In OpenStack, cinder is the service that manages volumes; it mainly plays a management role, while the actual storage is provided by a dedicated storage solution, such as the open-source glusterfs distributed storage used in this article. In addition, cinder can define a separate type for each backend: one backend might be a commercial storage array, another a glusterfs pool built on SSDs, another a ceph cluster built on SATA disks. Assigning different types to different virtual machines satisfies different performance requirements; see the openstack cinder configuration documentation for the details of these features.
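
As an illustration of the multi-type idea, the sketch below shows two glusterfs tiers served by one cinder-volume node, each bound to its own type; the backend names, type names and shares files are all invented for the example:

# /etc/cinder/cinder.conf on the storage node
[DEFAULT]
enabled_backends = glusterfs_ssd,glusterfs_sata
[glusterfs_ssd]
volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
glusterfs_shares_config = /etc/cinder/shares_ssd.conf
volume_backend_name = glusterfs_ssd
[glusterfs_sata]
volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
glusterfs_shares_config = /etc/cinder/shares_sata.conf
volume_backend_name = glusterfs_sata

# on the controller, bind one type to each backend
cinder type-create ssd
cinder type-key ssd set volume_backend_name=glusterfs_ssd
cinder type-create sata
cinder type-key sata set volume_backend_name=glusterfs_sata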