1. Preparation: four hosts, each with two disks: sda (system disk) and sdb (data disk)
Host 1: glusterfs01-17 10.1.1.17
Host 2: glusterfs02-19 10.1.1.19
Host 3: glusterfs03-13 10.1.1.13
Host 4: glusterfs04-14 10.1.1.14
Install GlusterFS on all four hosts; for the detailed steps, see -----
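Before any volumes can be created, each host's data disk has to be carved into bricks and mounted; the df output later in these notes shows three ~5G bricks (/dev/sdb1-3 on /storage/brick1-3). A rough sketch of that prep, assuming /dev/sdb as the data disk and ext4 (the lost+found directories seen later suggest ext4):
# run on every host
fdisk /dev/sdb                 ## interactively create sdb1, sdb2, sdb3 (~5G each)
mkfs.ext4 /dev/sdb1; mkfs.ext4 /dev/sdb2; mkfs.ext4 /dev/sdb3
mkdir -p /storage/brick{1,2,3}
mount /dev/sdb1 /storage/brick1
mount /dev/sdb2 /storage/brick2
mount /dev/sdb3 /storage/brick3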
2. Edit the /etc/hosts configuration file
[root@glusterfs01-17 ~]# vim /etc/hosts
[root@glusterfs01-17 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.1.1.17 glusterfs01-17
10.1.1.19 glusterfs02-19
10.1.1.13 glusterfs03-13
10.1.1.14 glusterfs04-14
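All four nodes need this same hosts file; one quick way to push it out from glusterfs01-17, assuming root SSH access to the peers:
for h in 10.1.1.19 10.1.1.13 10.1.1.14; do
    scp /etc/hosts root@$h:/etc/hosts   ## copy the edited file to each peer
done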
3. Start glusterd on all four hosts
[root@glusterfs01-17 ~]# /etc/init.d/glusterd restart
Stopping glusterd: [ OK ]
Starting glusterd: [ OK ]
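The /etc/init.d script is the SysV interface (CentOS 6 era); on a systemd-based distribution the equivalent would presumably be:
systemctl enable --now glusterd   ## start glusterd and enable it at boot
systemctl status glusterd         ## confirm it is running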
4. Add the four hosts to the trusted storage pool (run from any one host; note that you do not probe the local host itself)
[root@glusterfs01-17 ~]# gluster peer probe glusterfs02-19   ## "probe" adds a host to the pool
[root@glusterfs01-17 ~]# gluster peer probe glusterfs03-13
[root@glusterfs01-17 ~]# gluster peer probe glusterfs04-14
[root@glusterfs01-17 ~]# gluster pool list
UUID Hostname State
6568de12-2e82-4a58-9923-636d06692ceb glusterfs03-13 Connected
c46c8dfd-ec38-47f2-b0a9-ff43724ff492 glusterfs02-19 Connected
383e4f28-71ca-4652-99b0-5093f8f33041 glusterfs04-14 Connected
b5e47ff6-4a58-4fbe-91e3-9b1761379084 localhost Connected
[root@glusterfs01-17 ~]# gluster peer status
Number of Peers: 3
Hostname: glusterfs03-13
Uuid: 6568de12-2e82-4a58-9923-636d06692ceb
State: Peer in Cluster (Connected)
Hostname: glusterfs02-19
Uuid: c46c8dfd-ec38-47f2-b0a9-ff43724ff492
State: Peer in Cluster (Connected)
Hostname: glusterfs04-14
Uuid: 383e4f28-71ca-4652-99b0-5093f8f33041
State: Peer in Cluster (Connected)
### Remove a host from the trusted storage pool
[root@glusterfs01-17 ~]# gluster peer detach glusterfs03-13^C   ## typed but aborted here with Ctrl-C
5. Create volumes
参考官方文档:https://docs.gluster.org/en/latest/Administrator%20Guide/Setting%20Up%20Volumes/
To create a new volume:
### Syntax: mode [stripe <COUNT> | replica <COUNT> | disperse], transport [transport tcp | rdma | tcp,rdma] (RDMA: Remote Direct Memory Access)
[root@glusterfs01-17 ~]# gluster volume create <NEW-VOLNAME> [stripe <COUNT> | replica <COUNT> | disperse] [transport tcp | rdma | tcp,rdma] <NEW-BRICK>...
For example, to create a volume called test-volume consisting of server3:/exp3 and server4:/exp4:
### Example
[root@glusterfs01-17 ~]# gluster volume create test-volume server3:/exp3 server4:/exp4
Creation of test-volume has been successful
Please start the volume to access data.
5-1) Create a distributed volume
# Create the distributed volume (syntax):
# gluster volume create <NEW-VOLNAME> [transport tcp | rdma | tcp,rdma] <NEW-BRICK>...
[root@glusterfs01-17 ~]# gluster volume create gv1 glusterfs01-17:/storage/brick1 glusterfs02-19:/storage/brick1 force
volume create: gv1: success: please start the volume to access data
[root@glusterfs01-17 ~]# gluster volume create test-volume transport rdma glusterfs01-17:/storage/brick1 glusterfs02-19:/storage/brick1   ## RDMA-transport variant of the same command
Start the volume
[root@glusterfs01-17 ~]# gluster volume start gv1
volume start: gv1: success
Mount the volume
[root@glusterfs01-17 ~]# mount -t glusterfs 127.0.0.1:/gv1 /gv1
[root@glusterfs01-17 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda3 18G 4.9G 12G 30% /
tmpfs 498M 0 498M 0% /dev/shm
/dev/sda1 190M 33M 147M 19% /boot
/dev/sdb1 4.9G 11M 4.6G 1% /storage/brick1
/dev/sdb2 4.9G 11M 4.6G 1% /storage/brick2
/dev/sdb3 4.9G 11M 4.6G 1% /storage/brick3
127.0.0.1:/gv1 9.7G 119M 9.1G 2% /gv1
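A quick way to watch the distribution at work is to create some files through the mount and then list each brick; a sketch (file names arbitrary):
touch /gv1/file{1..10}                 ## create 10 files via the mount
ls /storage/brick1                     ## the local brick holds a subset
ssh glusterfs02-19 ls /storage/brick1  ## the rest landed on the other brick
Each file lives on exactly one brick, which is why the mounted gv1 shows the combined ~10G of the two 5G bricks.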
5-2) Create a replicated volume
Arbiter configuration for replica volumes
Arbiter volumes are replica 3 volumes where the 3rd brick acts as the arbiter brick. This configuration has mechanisms that prevent occurrence of split-brains.
It can be created with the following command:
# gluster volume create <VOLNAME> replica 3 arbiter 1 <host1:brick1> <host2:brick2> <host3:brick3>
More information about this configuration can be found at Features: afr-arbiter-volumes
Note that the arbiter configuration for replica 3 can be used to create distributed-replicate volumes as well.
# Create the replicated volume (syntax):
# gluster volume create <NEW-VOLNAME> [replica <COUNT>] [transport tcp | rdma | tcp,rdma] <NEW-BRICK>...
[root@glusterfs01-17 ~]# gluster volume create gv2 replica 2 transport tcp glusterfs03-13:/storage/brick1 glusterfs04-14:/storage/brick1 force
volume create: gv2: success: please start the volume to access data
## Note: replica 2 means two copies of the data are stored, so the brick count must be a multiple of 2
### Start the volume
[root@glusterfs01-17 ~]# gluster volume start gv2
volume start: gv2: success
### Mount the volume
[root@glusterfs01-17 ~]# mount -t glusterfs 127.0.0.1:/gv2 /gv2
[root@glusterfs01-17 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda3 18G 4.9G 12G 30% /
tmpfs 498M 0 498M 0% /dev/shm
/dev/sda1 190M 33M 147M 19% /boot
/dev/sdb1 4.9G 11M 4.6G 1% /storage/brick1
/dev/sdb2 4.9G 11M 4.6G 1% /storage/brick2
/dev/sdb3 4.9G 11M 4.6G 1% /storage/brick3
127.0.0.1:/gv1 9.7G 119M 9.1G 2% /gv1
127.0.0.1:/gv2 4.9G 60M 4.6G 2% /gv2
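Repeating the same file test on gv2 shows every file on both bricks; a sketch:
touch /gv2/file{1..10}                 ## create 10 files via the mount
ssh glusterfs03-13 ls /storage/brick1  ## all 10 files here
ssh glusterfs04-14 ls /storage/brick1  ## and all 10 here as well
Both bricks hold a full copy, which is why gv2 reports only ~5G of usable space.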
5-3) Create a striped volume
# Create the striped volume (syntax):
# gluster volume create <NEW-VOLNAME> [stripe <COUNT>] [transport tcp | rdma | tcp,rdma] <NEW-BRICK>...
[root@glusterfs01-17 ~]# gluster volume create gv3 stripe 2 transport tcp glusterfs01-17:/storage/brick2 glusterfs02-19:/storage/brick2 force
Start the volume
Mount the volume (see the sketch below)
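Presumably the same two steps as for gv1, with /gv3 as an assumed mount point:
gluster volume start gv3
mkdir -p /gv3
mount -t glusterfs 127.0.0.1:/gv3 /gv3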
5-4) Create a distributed replicated volume ##### distributed replicated is the key type
# Create the distributed replicated volume (syntax):
# gluster volume create <NEW-VOLNAME> [replica <COUNT>] [transport tcp | rdma | tcp,rdma] <NEW-BRICK>...
[root@glusterfs01-17 ~]# gluster volume create gv4 replica 2 transport tcp glusterfs03-13:/storage/brick2 glusterfs04-14:/storage/brick2 force
## Note: with only two bricks and replica 2 this is effectively a plain replicated volume; distribution requires at least twice the replica count in bricks (see gv6 in section 8)
Start the volume
Mount the volume (see the sketch below)
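Same pattern again, with /gv4 as an assumed mount point:
gluster volume start gv4
mkdir -p /gv4
mount -t glusterfs 127.0.0.1:/gv4 /gv4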
5-5) Create a distributed striped volume
# Create the distributed striped volume (syntax):
# gluster volume create <NEW-VOLNAME> [stripe <COUNT>] [transport tcp | rdma | tcp,rdma] <NEW-BRICK>...
5-6) Create a striped replicated volume
# Create the striped replicated volume (syntax):
# gluster volume create <NEW-VOLNAME> [stripe <COUNT>] [replica <COUNT>] [transport tcp | rdma | tcp,rdma] <NEW-BRICK>...
[root@glusterfs01-17 ~]# gluster volume create gv5 stripe 2 replica 2 transport tcp glusterfs01-17:/storage/brick3 glusterfs02-19:/storage/brick3 glusterfs03-13:/storage/brick3 glusterfs04-14:/storage/brick3 force
volume create: gv5: success: please start the volume to access data
Start the volume
Mount the volume (see the sketch below)
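And once more for gv5 (mount point /gv5 assumed):
gluster volume start gv5
mkdir -p /gv5
mount -t glusterfs 127.0.0.1:/gv5 /gv5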
Quick summary: creating a GlusterFS volume always takes three steps:
1. Create the GlusterFS volume.
2. Start the GlusterFS volume.
3. Mount the GlusterFS volume.
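To make such mounts survive a reboot, an /etc/fstab entry can be added; a sketch (_netdev delays the mount until networking is up):
127.0.0.1:/gv1   /gv1   glusterfs   defaults,_netdev   0 0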
6-1) Start a volume
[root@glusterfs01-17 ~]# gluster volume start gv2
volume start: gv2: success
6-2) Stop a volume
[root@glusterfs01-17 ~]# gluster volume stop gv2
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: gv2: success
6-3) Delete a volume (the volume must be stopped first)
[root@glusterfs01-17 ~]# gluster volume delete gv1
Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y
volume delete: gv1: success
7. Change the transport type. For example, to enable both tcp and rdma:
Syntax:
gluster volume set test-volume config.transport tcp,rdma OR tcp OR rdma
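Per the admin guide, a volume that is already started has to be stopped before its transport type can be changed; the full sequence is roughly:
gluster volume stop gv6                            ## only needed if the volume is running
gluster volume set gv6 config.transport tcp,rdma
gluster volume start gv6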
[root@glusterfs01-17 ~]# gluster volume set gv6 config.transport tcp
volume set: success
[root@glusterfs01-17 ~]# gluster volume info gv6
Volume Name: gv6
Type: Distributed-Replicate
Volume ID: 5c9de764-7427-43e1-a0d5-3cea04631359
Status: Created
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: **tcp**
Bricks:
Brick1: glusterfs01-17:/storage/brick1
Brick2: glusterfs02-19:/storage/brick1
Brick3: glusterfs03-13:/storage/brick1
Brick4: glusterfs04-14:/storage/brick1
Options Reconfigured:
config.transport: tcp
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off
8. The distributed replicated volume is what most enterprises deploy, so the rest of this section focuses on it.
[root@glusterfs01-17 ~]# gluster volume create gv6 replica 2 transport tcp,rdma glusterfs01-17:/storage/brick1 glusterfs02-19:/storage/brick1 glusterfs03-13:/storage/brick1 glusterfs04-14:/storage/brick1 force
volume create: gv6: success: please start the volume to access data
[root@glusterfs01-17 ~]# gluster volume info gv6
Volume Name: gv6
Type: Distributed-Replicate
Volume ID: 5c9de764-7427-43e1-a0d5-3cea04631359
Status: Created
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp,rdma
Bricks:
Brick1: glusterfs01-17:/storage/brick1
Brick2: glusterfs02-19:/storage/brick1
Brick3: glusterfs03-13:/storage/brick1
Brick4: glusterfs04-14:/storage/brick1
Options Reconfigured:
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off
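gv6 still shows Status: Created; before the expansion and failure drills below it has to be started, and the df output in the quota section later suggests it was mounted at /gv3:
gluster volume start gv6
mount -t glusterfs 127.0.0.1:/gv6 /gv3   ## mount point inferred from the quota section below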
8-1) Expand a volume (add bricks to an existing volume). Because this is a replica 2 volume, bricks must be added in pairs.
Scenario: storage ran short, so the company bought two new disks; here two new hosts, glusterfs05-15 and glusterfs06-16, are simulated and added to GlusterFS.
Official docs: https://docs.gluster.org/en/latest/Administrator%20Guide/Managing%20Volumes/
Step 1: add the new hosts to the trusted storage pool
[root@glusterfs01-17 ~]# gluster peer probe glusterfs05-15   ## simulated in this lab by glusterfs03-13:/storage/brick2
[root@glusterfs01-17 ~]# gluster peer probe glusterfs06-16   ## simulated in this lab by glusterfs04-14:/storage/brick2
Step 2: add the bricks to the volume
[root@glusterfs01-17 ~]# gluster volume add-brick gv6 glusterfs05-15:/storage/brick1 glusterfs06-16:/storage/brick1 force   ## real-world form with the new hosts
[root@glusterfs01-17 ~]# gluster volume add-brick gv6 glusterfs03-13:/storage/brick2 glusterfs04-14:/storage/brick2 force   ## what is actually run in this simulation
volume add-brick: success
[root@glusterfs01-17 ~]# gluster volume info gv6
Volume Name: gv6
Type: Distributed-Replicate
Volume ID: 5c9de764-7427-43e1-a0d5-3cea04631359
Status: Created
Snapshot Count: 0
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Bricks:
Brick1: glusterfs01-17:/storage/brick1
Brick2: glusterfs02-19:/storage/brick1
Brick3: glusterfs03-13:/storage/brick1
Brick4: glusterfs04-14:/storage/brick1
Brick5: glusterfs03-13:/storage/brick2
Brick6: glusterfs04-14:/storage/brick2
Options Reconfigured:
config.transport: tcp
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off
8-2) Shrink a volume (remove bricks from an existing volume)
[root@glusterfs01-17 ~]# gluster volume remove-brick gv6 glusterfs03-13:/storage/brick2 glusterfs04-14:/storage/brick2 force
Remove-brick force will not migrate files from the removed bricks, so they will no longer be available on the volume.
Do you want to continue? (y/n) y
volume remove-brick commit force: success
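remove-brick force drops the bricks without migrating their files. The safer flow from the admin guide migrates data first:
gluster volume remove-brick gv6 glusterfs03-13:/storage/brick2 glusterfs04-14:/storage/brick2 start    ## begin migrating files off the bricks
gluster volume remove-brick gv6 glusterfs03-13:/storage/brick2 glusterfs04-14:/storage/brick2 status   ## wait until it reports completed
gluster volume remove-brick gv6 glusterfs03-13:/storage/brick2 glusterfs04-14:/storage/brick2 commit   ## then detach them for good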
[root@glusterfs01-17 ~]# gluster volume info gv6
Volume Name: gv6
Type: Distributed-Replicate
Volume ID: 5c9de764-7427-43e1-a0d5-3cea04631359
Status: Created
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: glusterfs01-17:/storage/brick1
Brick2: glusterfs02-19:/storage/brick1
Brick3: glusterfs03-13:/storage/brick1
Brick4: glusterfs04-14:/storage/brick1
Options Reconfigured:
config.transport: tcp
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off
8-3) Replace a failed brick (per the official docs)
[root@glusterfs01-17 ~]# gluster volume status
Volume gv3 is not started
Status of volume: gv6
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
**Brick glusterfs01-17:/storage/brick1 49152 0 Y 7819**
Brick glusterfs02-19:/storage/brick1 49152 0 Y 6783
Brick glusterfs03-13:/storage/brick2 49152 0 Y 6163
Brick glusterfs04-14:/storage/brick2 49152 0 Y 4516
Self-heal Daemon on localhost N/A N/A Y 7840
### Self-heal: the self-healing daemon
Self-heal Daemon on glusterfs04-14 N/A N/A Y 4537
Self-heal Daemon on glusterfs03-13 N/A N/A Y 6184
Self-heal Daemon on glusterfs02-19 N/A N/A Y 6804
Task Status of Volume gv6
------------------------------------------------------------------------------
There are no active volume tasks
[root@glusterfs01-17 ~]# kill -15 7819   ## simulate a brick failure by killing the brick process
[root@glusterfs01-17 ~]# gluster volume status
Volume gv3 is not started
Status of volume: gv6
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
**Brick glusterfs01-17:/storage/brick1 N/A N/A N N/A**
Brick glusterfs02-19:/storage/brick1 49152 0 Y 6783
Brick glusterfs03-13:/storage/brick2 49152 0 Y 6163
Brick glusterfs04-14:/storage/brick2 49152 0 Y 4516
Self-heal Daemon on localhost N/A N/A Y 7840
Self-heal Daemon on glusterfs03-13 N/A N/A Y 6184
Self-heal Daemon on glusterfs04-14 N/A N/A Y 4537
Self-heal Daemon on glusterfs02-19 N/A N/A Y 6804
Task Status of Volume gv6
------------------------------------------------------------------------------
There are no active volume tasks
[root@glusterfs01-17 ~]# gluster volume replace-brick gv6 glusterfs01-17:/storage/brick1 glusterfs01-17:/storage/brick3 commit force
volume replace-brick: success: replace-brick commit force operation successful
[root@glusterfs01-17 ~]# gluster volume status
Volume gv3 is not started
Status of volume: gv6
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
**Brick glusterfs01-17:/storage/brick3 49152 0 Y 7923**
Brick glusterfs02-19:/storage/brick1 49152 0 Y 6783
Brick glusterfs03-13:/storage/brick2 49152 0 Y 6163
Brick glusterfs04-14:/storage/brick2 49152 0 Y 4516
Self-heal Daemon on localhost N/A N/A Y 7934
Self-heal Daemon on glusterfs02-19 N/A N/A Y 6863
Self-heal Daemon on glusterfs03-13 N/A N/A Y 6241
Self-heal Daemon on glusterfs04-14 N/A N/A Y 4587
Task Status of Volume gv6
------------------------------------------------------------------------------
There are no active volume tasks
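After the replace-brick, the self-heal daemon copies the replica onto the new brick; its progress can be checked with:
gluster volume heal gv6 info   ## lists entries still pending heal on each brick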
8-4) Rebalance the volume
[root@glusterfs01-17 ~]# gluster volume rebalance gv6 fix-layout start
volume rebalance: gv6: success: Rebalance on gv6 has been started successfully. Use rebalance status command to check status of the rebalance process.
ID: 75a003a6-6eb3-4e0d-b874-06ee2bec7235
[root@glusterfs01-17 ~]# gluster volume rebalance gv6 status ## check rebalance status
Node status run time in h:m:s
--------- ----------- ------------
glusterfs03-13 fix-layout completed 0:0:1
glusterfs02-19 fix-layout completed 0:0:1
glusterfs04-14 fix-layout completed 0:0:0
localhost fix-layout completed 0:0:1
volume rebalance: gv6: success
[root@glusterfs01-17 ~]# gluster volume rebalance gv6 stop
Node status run time in h:m:s
--------- ----------- ------------
glusterfs03-13 fix-layout completed 0:0:1
glusterfs02-19 fix-layout completed 0:0:1
glusterfs04-14 fix-layout completed 0:0:0
localhost fix-layout completed 0:0:1
volume rebalance: gv6: success: rebalance process may be in the middle of a file migration.
The process will be fully stopped once the migration of the file is complete.
Please check rebalance process for completion before doing any further brick related tasks on the volume.
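Note that fix-layout only rewrites the directory layout so that new files can land on the new bricks; existing files stay put. To also migrate existing data, use the plain form:
gluster volume rebalance gv6 start    ## fix layout and migrate existing files
gluster volume rebalance gv6 status   ## watch until every node reports completed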
8-5) BitRot detection
gluster volume bitrot <VOLNAME> enable
gluster volume bitrot <VOLNAME> disable
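Enabling bitrot starts checksumming; the scrubber that actually hunts for silent corruption can then be tuned and inspected, e.g.:
gluster volume bitrot gv6 scrub-frequency daily   ## how often bricks are scrubbed
gluster volume bitrot gv6 scrub-throttle lazy     ## limit the scrub's I/O impact
gluster volume bitrot gv6 scrub status            ## per-node scrub statistics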
9-1) Managing Directory Quota
Official docs: https://docs.gluster.org/en/latest/Administrator%20Guide/Directory%20Quota/
The official docs also cover:
Updating Memory Cache Size
Setting Alert Time
Removing Disk Limit
[root@glusterfs01-17 gv3]# mkdir data
[root@glusterfs01-17 gv3]# ls
1 2 3 4 5 6 7 8 data lost+found
## Enable the quota feature on gv6
[root@glusterfs01-17 gv3]# gluster volume quota gv6 enable
volume quota : success
## Disable the quota feature on gv6
[root@glusterfs01-17 gv3]# gluster volume quota gv6 disable
## Set a quota on /gv3/data
[root@glusterfs01-17 gv3]# gluster volume quota gv6 limit-usage /data 1GB
volume quota : success   ## Note: /data is a path relative to the volume root, i.e. the mount point directory
[root@glusterfs01-17 gv3]# gluster volume quota gv6 list
Path Hard-limit Soft-limit Used Available Soft-limit exceeded? Hard-limit exceeded?
-------------------------------------------------------------------------------------------------------------------------------
/data 1.0GB 80%(819.2MB) 0Bytes 1.0GB No No
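A quick way to confirm the hard limit is enforced is to write past it through the mount; a sketch (file name arbitrary):
dd if=/dev/zero of=/gv3/data/filler bs=1M count=1200   ## expected to abort with "Disk quota exceeded" near 1GB
gluster volume quota gv6 list                          ## Used/Available should now reflect the writes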
9-2) Displaying quota limit information with the df utility
[root@glusterfs01-17 gv3]# gluster volume set gv6 features.quota-deem-statfs on
volume set: success
[root@glusterfs01-17 gv3]# gluster volume quota gv6 list
Path Hard-limit Soft-limit Used Available Soft-limit exceeded? Hard-limit exceeded?
---------------------------------------------------------------------------------------------------------------------
/data 1.0GB 80%(819.2MB) 0Bytes 1.0GB No No
[root@glusterfs01-17 gv3]# df -hT /gv3/data/
Filesystem Type Size Used Avail Use% Mounted on
127.0.0.1:gv6 fuse.glusterfs 1.0G 0 1.0G 0% /gv3
[root@glusterfs01-17 gv3]# gluster volume set gv6 features.quota-deem-statfs off ## when off, df no longer reflects the quota limit
volume set: success
[root@glusterfs01-17 gv3]# df -hT /gv3/data/
Filesystem Type Size Used Avail Use% Mounted on
127.0.0.1:gv6 fuse.glusterfs 9.7G 119M 9.1G 2% /gv3
10-1) Managing GlusterFS Volume Snapshots
gluster snapshot create <snapname> <volname> [no-timestamp] [description <description>] [force]
### Snapshot clone
gluster snapshot clone <clonename> <snapname>
### Restoring snaps
gluster snapshot restore <snapname>
### Deleting snaps
gluster snapshot delete (all | <snapname> | volume <volname>)
### Listing available snaps
gluster snapshot list [volname]
### Information on available snaps
gluster snapshot info [(snapname | volume <volname>)]
### Status of snapshots
gluster snapshot status [(snapname | volume <volname>)]
### Configuring snapshot behavior
snapshot config [volname] ([snap-max-hard-limit <count>] [snap-max-soft-limit <percent>])
| ([auto-delete <enable|disable>])
| ([activate-on-create <enable|disable>])
### Activating a snapshot
gluster snapshot activate <snapname>
### Deactivating a snapshot
gluster snapshot deactivate <snapname>
### Accessing the snapshot
# Step 1: Mount the snapshot
mount -t glusterfs <hostname>:/snaps/<snap-name>/<volume-name> <mount-path>
# Step 2: User serviceability (USS)
gluster volume set <volname> features.uss enable
gluster volume set <volname> snapshot-directory <new-name>
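A minimal end-to-end walk-through of these commands might look as follows (snapshot name arbitrary; note that GlusterFS snapshots require bricks on thinly provisioned LVM, which this lab's plain partitions do not provide):
gluster snapshot create snap1 gv6 no-timestamp
gluster snapshot list gv6                ## shows snap1
gluster snapshot activate snap1
mkdir -p /mnt/snap1
mount -t glusterfs glusterfs01-17:/snaps/snap1/gv6 /mnt/snap1   ## read-only view of the snapshot
umount /mnt/snap1
gluster snapshot deactivate snap1
gluster snapshot delete snap1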