In the previous post we built the GlusterFS cluster; the next step is adding GlusterFS as a Cinder storage backend. If you followed the earlier post on adding an NFS backend, GlusterFS works much the same way: we only need to modify Cinder's configuration file.
1. Install the GlusterFS client on the compute and storage nodes
Note: since I created a separate cinder storage node, and its cinder-volume service mounts the volumes from the two-node GlusterFS cluster we built, the GlusterFS client must be installed there.
root@cinder:~# apt-get install glusterfs-client
2. On the cinder node, add the following to cinder.conf under the [DEFAULT] section:
[DEFAULT]
volume_driver=cinder.volume.drivers.glusterfs.GlusterfsDriver
glusterfs_mount_point_base = $state_path/mnt
glusterfs_qcow2_volumes = False
glusterfs_shares_config = /etc/cinder/glusterfs_shares
glusterfs_sparsed_volumes = True
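These options select the GlusterFS driver, mount shares under $state_path/mnt, read the share list from /etc/cinder/glusterfs_shares, and create volumes as sparse raw files (sparsed = True, qcow2 = False). A minimal sketch for staging the change in a scratch file before merging it into the real cinder.conf (the /tmp path is just an assumption for review purposes):

```shell
# Stage the [DEFAULT] additions in a scratch file for review before
# merging them into /etc/cinder/cinder.conf (scratch path is arbitrary).
conf=/tmp/cinder.conf.new
cat > "$conf" <<'EOF'
[DEFAULT]
volume_driver=cinder.volume.drivers.glusterfs.GlusterfsDriver
glusterfs_mount_point_base = $state_path/mnt
glusterfs_qcow2_volumes = False
glusterfs_shares_config = /etc/cinder/glusterfs_shares
glusterfs_sparsed_volumes = True
EOF
# Sanity check: all four glusterfs_* options are present.
grep -c '^glusterfs_' "$conf"
```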
3. Create the glusterfs_shares file
Pay attention to the file's permissions and group ownership:
root@cinder:~# ll /etc/cinder/glusterfs_shares
-rw-r----- 1 root cinder 19 Jul 17 16:45 /etc/cinder/glusterfs_shares
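The listing above corresponds to mode 0640 with group cinder, so cinder-volume can read the share list but other users cannot. A small sketch, demonstrated on a scratch path; on the real node apply the same chmod (plus `chown root:cinder`) to /etc/cinder/glusterfs_shares:

```shell
# Demo on a scratch file: 0640 = owner read/write, group read, no
# world access. On the cinder node you would additionally run:
#   chown root:cinder /etc/cinder/glusterfs_shares
touch /tmp/glusterfs_shares.demo
chmod 0640 /tmp/glusterfs_shares.demo
stat -c '%a' /tmp/glusterfs_shares.demo
```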
4. Add the GlusterFS cluster entry, including the newly created demo volume:
root@cinder:~# cat /etc/cinder/glusterfs_shares
192.168.3.10:/demo
5. Restart the cinder-volume service on the cinder node and inspect the mounts; note the last line of the output:
root@cinder:~# mount
/dev/sda1 on / type ext4 (rw,errors=remount-ro)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
none on /sys/fs/cgroup type tmpfs (rw)
none on /sys/fs/fuse/connections type fusectl (rw)
none on /sys/kernel/debug type debugfs (rw)
none on /sys/kernel/security type securityfs (rw)
udev on /dev type devtmpfs (rw,mode=0755)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)
none on /run/shm type tmpfs (rw,nosuid,nodev)
none on /run/user type tmpfs (rw,noexec,nosuid,nodev,size=104857600,mode=0755)
none on /sys/fs/pstore type pstore (rw)
rpc_pipefs on /run/rpc_pipefs type rpc_pipefs (rw)
systemd on /sys/fs/cgroup/systemd type cgroup (rw,noexec,nosuid,nodev,none,name=systemd)
nfsd on /proc/fs/nfsd type nfsd (rw)
192.168.3.185:/data/nfs on /var/lib/cinder/mnt/74877087a01856b116a3558d2981626e type nfs (rw,vers=4,minorversion=1,addr=192.168.3.185,clientaddr=192.168.3.185)
192.168.3.10:/demo on /var/lib/cinder/mnt/91368e49d0bd20666a74c5d5ca9b41cc type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
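The last line confirms that cinder-volume mounted the share via FUSE. A sketch of checking this from a script is below; `check_share` is a hypothetical helper name, and on a live node you would pipe it the output of `mount`:

```shell
# Hypothetical helper: report whether a given share (host:/vol) shows
# up in a mount listing as type fuse.glusterfs.
check_share() {
  grep -q "^$1 on .* type fuse.glusterfs" && echo mounted || echo missing
}

# On the cinder node:  mount | check_share 192.168.3.10:/demo
# Demonstrated here against a captured mount line from above:
echo "192.168.3.10:/demo on /var/lib/cinder/mnt/91368e49d0bd20666a74c5d5ca9b41cc type fuse.glusterfs (rw)" \
  | check_share 192.168.3.10:/demo
```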
Controller node
Update the volume driver (previously the NFS driver) in /etc/cinder/cinder.conf:
volume_driver=cinder.volume.drivers.glusterfs.GlusterfsDriver
Compute node
1. Update the Cinder-related settings in nova.conf:
volume_api_class=nova.volume.cinder.API
root@controller:~# cinder list
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
|                  ID                  | Status | Display Name | Size | Volume Type | Bootable |             Attached to              |
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
| 59e68148-c433-440d-bceb-7002a069ac67 | in-use |      a       |  1   |     None    |  false   | 614a2641-1e8d-4442-9704-6ab62e3f39d5 |
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
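For reference, the volume in the listing was created and attached with the CLIs of this OpenStack release; roughly like this (the IDs come from the listing above, and the exact flags assume the legacy python-cinderclient/python-novaclient):

```shell
# Create a 1 GB volume named "a", then attach it to the instance
# whose ID appears in the "Attached to" column above.
cinder create --display-name a 1
nova volume-attach 614a2641-1e8d-4442-9704-6ab62e3f39d5 59e68148-c433-440d-bceb-7002a069ac67
```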
2. Inspect the storage on both cluster nodes; each holds an identical copy of the volume:
root@fs1:~# ll /data/fs1/
total 20
drwxrwxr-x 3 root cinder 4096 Jul 8 16:47 ./
drwxr-xr-x 5 root root 4096 Jul 9 17:52 ../
drw------- 186 root root 4096 Jul 8 16:44 .glusterfs/
-rw-rw-rw- 2 108 ntp 1073741824 Jul 8 16:15 volume-59e68148-c433-440d-bceb-7002a069ac67
root@fs2:~# ll /data/fs2/
total 20
drwxrwxr-x 3 root 112 4096 Jul 17 18:34 ./
drwxr-xr-x 3 root root 4096 Jul 9 17:53 ../
drw------- 181 root root 4096 Jul 17 18:33 .glusterfs/
-rw-rw-rw- 2 108 113 1073741824 Jul 17 17:28 volume-59e68148-c433-440d-bceb-7002a069ac67
3. Stop the glusterfs service on fs1 and check the cluster status from fs2:
root@fs2:~# gluster peer status
Number of Peers: 1
Hostname: 192.168.3.10
Uuid: 9bc2b23c-4e7f-4da9-9a24-c570f753066c
State: Peer in Cluster (Disconnected)
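With fs1 down, fs2 keeps serving its replica of the volume. A few standard gluster CLI commands worth running on the surviving node, and after fs1 returns (`demo` is the volume from this setup):

```shell
# On fs2 while fs1 is down:
gluster peer status          # the peer shows as Disconnected, as above
gluster volume status demo   # the fs2 brick should still be Online

# After fs1 comes back, self-heal replays the writes it missed:
gluster volume heal demo info
```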
4. Open the VM2 instance: even with one node down, the cloud disk attached to VM2 is still visible inside the guest.