Table of Contents

1. Distributed File System Fundamentals

1.1 The Emergence of Distributed File Systems
1.2 A Typical Example: NFS
1.3 Remaining Problems
1.4 GlusterFS Overview

2. Deployment and Installation

2.1 Preparation Before Installing GlusterFS

1. Creating the distributed volume gs1 (on centos120)

  2. Creating Replicated Volumes (gs2)
  3. Creating Striped Volumes
  4. Creating Distributed Striped Volumes (not supported)
  5. Creating Distributed Replicated Volumes
  6. Creating Distributed Striped Replicated Volumes
  7. Creating Striped Replicated Volumes
  8. Creating Dispersed Volumes
  9. Creating Distributed Dispersed Volumes
  10. Expanding the brick devices of a storage volume

3. Shrinking and Deleting Storage Volumes
(1) Shrinking the bricks of a storage volume
(2) Deleting a storage volume

4. Building Enterprise-Grade Distributed Storage
4.1 Hardware Requirements
4.2 System Requirements and Partition Layout
4.3 Network Environment
4.4 Server Placement
4.5 Building High-Performance, Highly Available Storage

5. Handling Common Failures in Production
5.1 Disk Failure
5.2 Failure of a Single Host

1. Distributed File System Fundamentals

1.1 The Emergence of Distributed File Systems
Computers store and manage data through file systems. In today's era of exploding data volumes, the amount of information people can access grows exponentially, and simply adding more disks to extend the capacity of a single machine's file system no longer meets demand.
A distributed file system solves this storage and management problem: a file system fixed at one location is extended to any number of locations and file systems, with many nodes joined into a single file-system network. The nodes can sit in different places and communicate and transfer data over the network. When using a distributed file system, users do not need to care which node their data is stored on or read from; they manage and store files just as they would with a local file system.

1.2 A Typical Example: NFS

NFS (Network File System) allows computers to share resources with one another over a TCP/IP network. With NFS, a local client application can transparently read and write files that live on a remote NFS server, just as if they were local files. The advantages of NFS are listed below (a minimal usage sketch follows the list):
(1) It saves disk space
Data that clients use frequently can be stored centrally on one machine and published via NFS, so every computer on the network can access it without keeping its own copy.
(2) It saves hardware resources
NFS can also share storage devices such as floppy drives, CD-ROM and ZIP drives, reducing the number of removable devices needed across the network.
(3) Centralized user home directories
Special users such as administrators often need to log in to every computer on the network. Keeping a copy of such a user's home directory on every client is tedious and makes consistency hard to guarantee. With NFS, the home directory can be exported from one server and automatically mounted on each client, so the user's files are available on any machine.
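As a minimal sketch of that workflow (the export path, network range and hostnames below are illustrative and not taken from this lab):

# On the NFS server: export a directory and start the services
[root@nfs-server ~]# echo "/srv/share 192.168.0.0/24(rw,sync)" >> /etc/exports
[root@nfs-server ~]# systemctl enable --now rpcbind nfs-server
[root@nfs-server ~]# exportfs -rav
# On each client: mount the export like a local file system
[root@nfs-client ~]# mount -t nfs nfs-server:/srv/share /mnt/share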

1.3 Remaining Problems

Storage space runs out and larger capacity is needed.
Mounting storage directly over NFS carries some risk: the server is a single point of failure.
Some workloads cannot be satisfied; under heavy access, disk I/O becomes the bottleneck.

1.4 GlusterFS Overview

GlusterFS is the core of the Gluster scale-out storage solution. It is an open-source distributed file system with strong horizontal scalability; by scaling out it can support several petabytes of storage and thousands of clients. GlusterFS aggregates physically distributed storage resources over TCP/IP or InfiniBand RDMA networks and manages the data under a single global namespace.
GlusterFS supports standard clients running standard applications on any standard IP network; users can access data in the globally unified namespace with standard protocols such as NFS/CIFS. GlusterFS frees users from isolated, expensive, closed storage systems: ordinary, inexpensive storage hardware can be used to build a centrally managed, horizontally scalable, virtualized storage pool whose capacity scales to the TB/PB level.

GlusterFS has been acquired by Red Hat; its official website is http://www.gluster.org/
Very high performance (with 64 nodes the aggregate throughput, i.e. bandwidth, can reach 32 GB/s).

In both theory and practice, GlusterFS is currently best suited to large-file workloads. For small files, and especially for huge numbers of small files (under 1 MB), its storage efficiency and access performance are poor. The LOSF (lots of small files) problem is a recognized challenge in both industry and academia, and GlusterFS, as a general-purpose distributed file system, applies no special optimizations for small files, so the weak performance there is understandable.
[x] Media
documents, pictures, audio, video
[x] Shared storage
cloud storage, virtualization storage, HPC (high-performance computing)
[x] Big data
log files, RFID (radio-frequency identification) data

2. Deployment and Installation

2.1 Preparation Before Installing GlusterFS

1. One computer with >= 4 GB of RAM and more than 50 GB of free disk space
2. VMware Workstation installed
3. Five CentOS-7.6-x86_64 virtual machines installed (the experiments initially use four of them)
4. Base VM: 1 CPU core + 1024 MB of RAM + 10 GB disk
5. Networking: NAT (network address translation)
6. iptables and SELinux disabled
7. The glusterfs packages staged in advance

Description   IP             Hostname
Linux_node1   192.168.0.120  centos120
Linux_node2   192.168.0.121  centos121
Linux_node3   192.168.0.122  centos122
Linux_node4   192.168.0.123  centos123
Linux_node5   192.168.0.124  centos124

# For reproducible results, please use the same Linux release as this document
# and use the rpm packages provided with the lab as the yum source

[root@centos120 ~]# cat /etc/redhat-release 
CentOS Linux release 7.6.1810 (Core) 
[root@centos120 ~]# uname -r
3.10.0-957.el7.x86_64
[root@centos120 ~]# 

All five machines are given the following /etc/hosts entries so that they can reach one another by name:

[root@centos120 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.0.120 centos120
192.168.0.121 centos121
192.168.0.122 centos122
192.168.0.123 centos123
192.168.0.124 centos124

Disable the firewall and SELinux on all five machines:


[root@centos120 ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@centos120 ~]# systemctl stop firewalld
[root@centos120 ~]# getenforce 
Enforcing
[root@centos120 ~]# setenforce 0
[root@centos120 ~]# getenforce 
Permissive
[root@centos121 ~]# sed -i s#SELINUX=enforcing#SELINUX=disabled#g /etc/selinux/config
[root@centos120 ~]# grep =disable      /etc/selinux/config
SELINUX=disabled
[root@centos120 ~]# 

Synchronize the time on all the servers. Strictly speaking this should be done with an NTP service such as chrony rather than by setting the clock manually (a minimal chrony sketch follows the manual example below).


[root@centos120 glusterfs]# date -s "2020-02-25 19:00:00"
Tue Feb 25 19:00:00 CST 2020
[root@centos120 glusterfs]# 
[root@centos120 glusterfs]# 
[root@centos120 glusterfs]# 
[root@centos120 glusterfs]# hwclock -w
[root@centos120 glusterfs]# date
Tue Feb 25 19:00:08 CST 2020
[root@centos120 glusterfs]# 
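Setting the date by hand is good enough for a short lab, but the chrony-based approach mentioned above is more reliable; a minimal sketch (CentOS 7 package and service names, default pool servers assumed):

[root@centos120 ~]# yum install -y chrony
[root@centos120 ~]# systemctl enable --now chronyd
[root@centos120 ~]# chronyc sources -v     # verify that time sources are reachable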

Install GlusterFS on all five machines.
First download the installation packages:

[root@centos123]# mkdir /opt/glusterfs
[root@centos123]# wget https://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/el-8.2/x86_64/
[root@centos120 glusterfs]# ll
total 18548
-rw-r--r--. 1 root root  703976 Feb 25 16:19 glusterfs-7.3-1.el8.x86_64.rpm
-rw-r--r--. 1 root root  126204 Feb 25 16:20 glusterfs-api-7.3-1.el8.x86_64.rpm
-rw-r--r--. 1 root root  282500 Feb 25 16:20 glusterfs-api-debuginfo-7.3-1.el8.x86_64.rpm
-rw-r--r--. 1 root root   57364 Feb 25 16:20 glusterfs-api-devel-7.3-1.el8.x86_64.rpm
-rw-r--r--. 1 root root  219964 Feb 25 16:21 glusterfs-cli-7.3-1.el8.x86_64.rpm
-rw-r--r--. 1 root root  420044 Feb 25 16:21 glusterfs-cli-debuginfo-7.3-1.el8.x86_64.rpm
-rw-r--r--. 1 root root  909632 Feb 25 16:23 glusterfs-client-xlators-7.3-1.el8.x86_64.rpm
-rw-r--r--. 1 root root 3264740 Feb 25 16:27 glusterfs-client-xlators-debuginfo-7.3-1.el8.x86_64.rpm
-rw-r--r--. 1 root root   59840 Feb 25 16:27 glusterfs-cloudsync-plugins-7.3-1.el8.x86_64.rpm
-rw-r--r--. 1 root root  120460 Feb 25 16:27 glusterfs-cloudsync-plugins-debuginfo-7.3-1.el8.x86_64.rpm
-rw-r--r--. 1 root root 2012468 Feb 25 16:30 glusterfs-debuginfo-7.3-1.el8.x86_64.rpm
-rw-r--r--. 1 root root 2347652 Feb 25 16:33 glusterfs-debugsource-7.3-1.el8.x86_64.rpm
-rw-r--r--. 1 root root  186648 Feb 25 16:33 glusterfs-devel-7.3-1.el8.x86_64.rpm
-rw-r--r--. 1 root root   67408 Feb 25 16:33 glusterfs-events-7.3-1.el8.x86_64.rpm
-rw-r--r--. 1 root root   73076 Feb 25 16:33 glusterfs-extra-xlators-7.3-1.el8.x86_64.rpm
-rw-r--r--. 1 root root  145392 Feb 25 16:34 glusterfs-extra-xlators-debuginfo-7.3-1.el8.x86_64.rpm
-rw-r--r--. 1 root root  174584 Feb 25 16:34 glusterfs-fuse-7.3-1.el8.x86_64.rpm
-rw-r--r--. 1 root root  356196 Feb 25 16:35 glusterfs-fuse-debuginfo-7.3-1.el8.x86_64.rpm
-rw-r--r--. 1 root root  217340 Feb 25 16:35 glusterfs-geo-replication-7.3-1.el8.x86_64.rpm
-rw-r--r--. 1 root root   93176 Feb 25 16:35 glusterfs-geo-replication-debuginfo-7.3-1.el8.x86_64.rpm
-rw-r--r--. 1 root root  456584 Feb 25 16:36 glusterfs-libs-7.3-1.el8.x86_64.rpm
-rw-r--r--. 1 root root 1332188 Feb 25 16:37 glusterfs-libs-debuginfo-7.3-1.el8.x86_64.rpm
-rw-r--r--. 1 root root   77576 Feb 25 16:38 glusterfs-rdma-7.3-1.el8.x86_64.rpm
-rw-r--r--. 1 root root  150872 Feb 25 16:38 glusterfs-rdma-debuginfo-7.3-1.el8.x86_64.rpm
-rw-r--r--. 1 root root 1431840 Feb 25 16:39 glusterfs-server-7.3-1.el8.x86_64.rpm
-rw-r--r--. 1 root root 3441312 Feb 25 16:44 glusterfs-server-debuginfo-7.3-1.el8.x86_64.rpm
-rw-r--r--. 1 root root   58484 Feb 25 16:44 glusterfs-thin-arbiter-7.3-1.el8.x86_64.rpm
-rw-r--r--. 1 root root   94448 Feb 25 16:44 glusterfs-thin-arbiter-debuginfo-7.3-1.el8.x86_64.rpm
-rw-r--r--. 1 root root   49436 Feb 25 16:44 python3-gluster-7.3-1.el8.x86_64.rpm
[root@centos120 glusterfs]# 

Then copy the packages to all the other nodes (a sketch of building a local yum repository from them and copying it over follows below).

[root@centos120 glusterfs]# yum info createrepo
Summary     : Creates a common metadata repository
URL         : http://createrepo.baseurl.org/
License     : GPLv2
Description : This utility will generate a common metadata repository from a directory of rpm
            : packages.

[root@centos124 glusterfs]# yum -y install createrepo
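With createrepo installed, the downloaded rpms can be turned into a local yum repository and distributed to the other nodes. The repository name, paths and target host below only illustrate the idea:

[root@centos120 glusterfs]# createrepo /opt/glusterfs            # generate repodata for the downloaded rpms
[root@centos120 glusterfs]# cat > /etc/yum.repos.d/glusterfs-local.repo <<'EOF'
[glusterfs-local]
name=Local GlusterFS packages
baseurl=file:///opt/glusterfs
enabled=1
gpgcheck=0
EOF
[root@centos120 glusterfs]# scp -r /opt/glusterfs centos121:/opt/   # repeat for centos122-124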

Installation steps, following the official documentation:

https://docs.gluster.org/en/latest/Install-Guide/Install/
https://wiki.centos.org/SpecialInterestGroup/Storage/gluster-Quickstart

1.[root@centos120 glusterfs]# yum install centos-release-gluster -y
Installed:
  centos-release-gluster7.noarch 0:1.0-1.el7.centos                                                                
Dependency Installed:
  centos-release-storage-common.noarch 0:2-2.el7.centos                                                            
Complete!

2.[root@centos120 glusterfs]# yum install glusterfs-server -y 
Installed:
  glusterfs-server.x86_64 0:7.3-1.el7                                                                              
Dependency Installed:
  attr.x86_64 0:2.4.46-13.el7       glusterfs.x86_64 0:7.3-1.el7                glusterfs-api.x86_64 0:7.3-1.el7 
  glusterfs-cli.x86_64 0:7.3-1.el7  glusterfs-client-xlators.x86_64 0:7.3-1.el7 glusterfs-fuse.x86_64 0:7.3-1.el7
  glusterfs-libs.x86_64 0:7.3-1.el7 libtirpc.x86_64 0:0.2.4-0.16.el7            psmisc.x86_64 0:22.20-16.el7     
  rpcbind.x86_64 0:0.2.0-48.el7     userspace-rcu.x86_64 0:0.10.0-3.el7        
Complete!

3.[root@centos120 glusterfs]# glusterfs -V
glusterfs 7.3
Repository revision: git://git.gluster.org/glusterfs.git
Copyright (c) 2006-2016 Red Hat, Inc. 
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.
[root@centos120 glusterfs]# 
4.[root@centos120 glusterfs]# systemctl enable glusterd
5.[root@centos120 glusterfs]# systemctl status glusterd
● glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue 2020-02-25 21:56:59 CST; 10s ago
     Docs: man:glusterd(8)
  Process: 7818 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 7819 (glusterd)
   CGroup: /system.slice/glusterd.service
           └─7819 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO

Feb 25 21:56:59 centos120 systemd[1]: Starting GlusterFS, a clustered file-system server...
Feb 25 21:56:59 centos120 systemd[1]: Started GlusterFS, a clustered file-system server.
[root@centos120 glusterfs]# 
Configure the trusted pool

We start the experiments with four machines. The peering only needs to be done from one machine; after that the machines trust one another.

[root@centos120 glusterfs]# gluster peer probe centos121
peer probe: success. 
[root@centos120 glusterfs]# gluster peer probe centos122
peer probe: success. 
[root@centos120 glusterfs]# gluster peer probe centos123
peer probe: success. 
[root@centos120 glusterfs]# gluster peer status 
Number of Peers: 3
Hostname: centos121
Uuid: 223c82a7-de86-4b33-8180-8352efd2ed5e
State: Peer in Cluster (Connected)
Hostname: centos122
Uuid: b98ca06f-8a15-48cd-a0ad-19f60fc5bfda
State: Peer in Cluster (Connected)
Hostname: centos123
Uuid: f56f0689-48e6-419d-81b0-0ac4ca6cc2fc
State: Peer in Cluster (Connected)
[root@centos120 glusterfs]# 

Verify on the other three machines:

[root@centos121 glusterfs]# gluster peer status 
Number of Peers: 3
Hostname: centos120
Uuid: ffffb612-e1f2-49bd-9991-909bb6f687fa
State: Peer in Cluster (Connected)
Hostname: centos122
Uuid: b98ca06f-8a15-48cd-a0ad-19f60fc5bfda
State: Peer in Cluster (Connected)
Hostname: centos123
Uuid: f56f0689-48e6-419d-81b0-0ac4ca6cc2fc
State: Peer in Cluster (Connected)
[root@centos121 glusterfs]# 
[root@centos122 glusterfs]# gluster peer status 
Number of Peers: 3
Hostname: centos120
Uuid: ffffb612-e1f2-49bd-9991-909bb6f687fa
State: Peer in Cluster (Connected)
Hostname: centos121
Uuid: 223c82a7-de86-4b33-8180-8352efd2ed5e
State: Peer in Cluster (Connected)
Hostname: centos123
Uuid: f56f0689-48e6-419d-81b0-0ac4ca6cc2fc
State: Peer in Cluster (Connected)
[root@centos122 glusterfs]# 
[root@centos123 glusterfs]# gluster peer status
Number of Peers: 3
Hostname: centos120
Uuid: ffffb612-e1f2-49bd-9991-909bb6f687fa
State: Peer in Cluster (Connected)
Hostname: centos121
Uuid: 223c82a7-de86-4b33-8180-8352efd2ed5e
State: Peer in Cluster (Connected)
Hostname: centos122
Uuid: b98ca06f-8a15-48cd-a0ad-19f60fc5bfda
State: Peer in Cluster (Connected)
[root@centos123 glusterfs]# 

Add a new disk, /dev/sdb, to each of the virtual machines above for the experiments.
Each new disk is 200 MB. After adding it, run the following commands and the new disk will be recognized without rebooting the VM:


[root@centos120 glusterfs]# ls /sys/class/scsi_host/
host0  host1  host2
[root@centos120 glusterfs]# echo "- - -" > /sys/class/scsi_host/host0/scan
[root@centos120 glusterfs]# echo "- - -" > /sys/class/scsi_host/host1/scan
[root@centos120 glusterfs]# echo "- - -" > /sys/class/scsi_host/host2/scan
[root@centos120 glusterfs]# ls /sys/class/scsi_device/
0:0:0:0  0:0:1:0  2:0:0:0
[root@centos120 glusterfs]# echo 1 > /sys/class/scsi_device/0\:0\:0\:0/device/rescan
[root@centos120 glusterfs]# echo 1 > /sys/class/scsi_device/0\:0\:1\:0/device/rescan
[root@centos120 glusterfs]# echo 1 > /sys/class/scsi_device/2\:0\:0\:0/device/rescan
[root@centos120 glusterfs]# fdisk -l /dev/sdb
Disk /dev/sdb: 213 MB, 213909504 bytes, 417792 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
[root@centos121 glusterfs]# fdisk -l /dev/sdb
Disk /dev/sdb: 213 MB, 213909504 bytes, 417792 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
[root@centos121 glusterfs]# 
[root@centos122 glusterfs]# fdisk -l /dev/sdb
Disk /dev/sdb: 213 MB, 213909504 bytes, 417792 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
[root@centos122 glusterfs]# 
[root@centos123 glusterfs]# fdisk -l /dev/sdb
Disk /dev/sdb: 213 MB, 213909504 bytes, 417792 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
[root@centos123 glusterfs]# 

Create a 100 MB partition on /dev/sdb:


[root@centos120 glusterfs]# fdisk /dev/sdb
Welcome to fdisk (util-linux 2.23.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0xe23bcea8.
Command (m for help): m
Command action
   a   toggle a bootable flag
   b   edit bsd disklabel
   c   toggle the dos compatibility flag
   d   delete a partition
   g   create a new empty GPT partition table
   G   create an IRIX (SGI) partition table
   l   list known partition types
   m   print this menu
   n   add a new partition
   o   create a new empty DOS partition table
   p   print the partition table
   q   quit without saving changes
   s   create a new empty Sun disklabel
   t   change a partition's system id
   u   change display/entry units
   v   verify the partition table
   w   write table to disk and exit
   x   extra functionality (experts only)
Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-417791, default 2048): 
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-417791, default 417791): +100MB
Partition 1 of type Linux and of size 95 MiB is set
Command (m for help): p
Disk /dev/sdb: 213 MB, 213909504 bytes, 417792 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xe23bcea8
   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048      196607       97280   83  Linux
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
[root@centos120 glusterfs]# 
[root@centos120 glusterfs]# fdisk -l /dev/sdb1
Disk /dev/sdb1: 99 MB, 99614720 bytes, 194560 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
[root@centos120 glusterfs]# 
[root@centos121 glusterfs]# fdisk -l /dev/sdb1
Disk /dev/sdb1: 99 MB, 99614720 bytes, 194560 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
[root@centos122 glusterfs]# fdisk -l /dev/sdb1
Disk /dev/sdb1: 99 MB, 99614720 bytes, 194560 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
[root@centos122 glusterfs]# 
[root@centos123 glusterfs]# fdisk -l /dev/sdb1
Disk /dev/sdb1: 99 MB, 99614720 bytes, 194560 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

In the same way, partition and mount /dev/sdb2 on all five machines (a scripted equivalent of the fdisk dialogue is sketched below).
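The interactive fdisk steps are not repeated for /dev/sdb2 in this transcript; a scripted equivalent that creates a second ~100 MB primary partition might look like this (answers piped into fdisk; adapt to your own disk layout):

[root@centos120 ~]# echo -e "n\np\n2\n\n+100M\nw" | fdisk /dev/sdb
[root@centos120 ~]# partprobe /dev/sdb       # ask the kernel to re-read the partition table
[root@centos120 ~]# fdisk -l /dev/sdb2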

[root@centos120 ~]# mkfs.xfs /dev/sdb2
meta-data=/dev/sdb2              isize=512    agcount=4, agsize=6912 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=27648, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=855, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@centos121 ~]# 

Format and mount:

[root@centos120 glusterfs]# mkdir -p /gluster/brick1 
[root@centos121 ~]# mkdir -p /gluster/brick2
[root@centos121 ~]#
[root@centos120 glusterfs]# mount /dev/sdb1 /gluster/brick1
[root@centos120 glusterfs]# mkfs.xfs /dev/sdb1 
meta-data=/dev/sdb1              isize=512    agcount=4, agsize=6080 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=24320, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=855, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@centos120 glusterfs]# mount /dev/sdb1 /gluster/brick1
[root@centos120 glusterfs]# df -h | grep gluster
/dev/sdb1                 92M  5.0M   87M   6% /gluster/brick1
[root@centos120 glusterfs]# 
[root@centos120 ~]# echo "/dev/sdb1   /gluster/brick1  xfs   defaults    0    0  "  >> /etc/fstab
[root@centos120 ~]# echo "/dev/sdb2   /gluster/brick2  xfs   defaults    0    0  "  >> /etc/fstab
[root@centos120 ~]# 
[root@centos120 ~]# mount -a
[root@centos120 ~]# 
[root@centos120 ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   17G  1.5G   16G   9% /
devtmpfs                 475M     0  475M   0% /dev
tmpfs                    487M     0  487M   0% /dev/shm
tmpfs                    487M  7.7M  479M   2% /run
tmpfs                    487M     0  487M   0% /sys/fs/cgroup
/dev/sda1               1014M  133M  882M  14% /boot
tmpfs                     98M     0   98M   0% /run/user/0
/dev/sdb1                 92M  5.0M   87M   6% /gluster/brick1
/dev/sdb2                105M  5.7M  100M   6% /gluster/brick2
[root@centos120 ~]# 
[root@centos120 ~]# umount /gluster/brick2
[root@centos120 ~]# umount /gluster/brick1

The preparation work is now complete; next we start creating volumes.

[x] Basic volumes:
Distributed
Replicated
Striped
[x] Composite volumes:
Distributed Replicated
Distributed Striped
Replicated Striped
Distributed Replicated Striped

1. Creating the distributed volume gs1 (on centos120)

Creating Distributed Volumes
In a distributed volume files are spread randomly across the bricks in the volume. Use distributed volumes where you need to scale storage and redundancy is either not important or is provided by other hardware/software layers.

[root@centos120 ~]# gluster volume create gs1 centos120:/gluster/brick1
volume create: gs1: failed: The brick centos120:/gluster/brick1 is a mount point. Please create a sub-directory under the mount point and use that as the brick directory. Or use 'force' at the end of the command if you want to override this behavior.
[root@centos120 ~]# mkdir /gluster/brick1/bk1
[root@centos120 ~]# gluster volume create gs1 centos120:/gluster/brick1/bk1 centos121:/gluster/brick1/bk1 force
volume create: gs1: success: please start the volume to access data

# Start the newly created volume (on centos120)
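The start command itself is not captured in this transcript; presumably it took the same form used later for gs2:

[root@centos120 ~]# gluster volume start gs1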


[root@centos120 ~]# gluster volume info 
Volume Name: gs1   # volume name
Type: Distribute   # distributed volume
Volume ID: bf50562e-5899-4dc2-bd91-ba7b04b4be81   # volume ID
Status: Started   # the volume has been started
Snapshot Count: 0
Number of Bricks: 2   # two bricks in total
Transport-type: tcp   # TCP transport
Bricks:   # brick information
Brick1: centos120:/gluster/brick1/bk1
Brick2: centos121:/gluster/brick1/bk1
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on  # the NFS protocol is disabled by default
[root@centos120 ~]# 

# All four VMs see the same information (run on any of them)


[root@centos121 ~]# gluster volume info 
Volume Name: gs1
Type: Distribute
Volume ID: bf50562e-5899-4dc2-bd91-ba7b04b4be81
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: centos120:/gluster/brick1/bk1
Brick2: centos121:/gluster/brick1/bk1
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on

1.2 Two ways to mount a volume

(1) Mounting with the glusterfs (FUSE) client

[root@centos120 ~]# mount -t glusterfs  127.0.0.1:/gs1 /mnt
[root@centos120 ~]# df -h 
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   17G  1.5G   16G   9% /
devtmpfs                 475M     0  475M   0% /dev
tmpfs                    487M     0  487M   0% /dev/shm
tmpfs                    487M  7.7M  479M   2% /run
tmpfs                    487M     0  487M   0% /sys/fs/cgroup
/dev/sda1               1014M  133M  882M  14% /boot
tmpfs                     98M     0   98M   0% /run/user/0
/dev/sdb1                 92M  5.1M   87M   6% /gluster/brick1
/dev/sdb2                105M  5.7M  100M   6% /gluster/brick2
127.0.0.1:/gs1           184M   12M  172M   7% /mnt
[root@centos120 ~]# 

# Create some test files in the mounted /mnt directory

[root@centos120 ~]# touch /mnt/{1..6}
[root@centos120 ~]# ll /mnt
total 0
-rw-r--r-- 1 root root 0 Feb 26 18:15 1
-rw-r--r-- 1 root root 0 Feb 26 18:15 2
-rw-r--r-- 1 root root 0 Feb 26 18:15 3
-rw-r--r-- 1 root root 0 Feb 26 18:15 4
-rw-r--r-- 1 root root 0 Feb 26 18:15 5
-rw-r--r-- 1 root root 0 Feb 26 18:15 6
[root@centos120 ~]# 

# Mount the distributed volume gs1 on the other VMs and check the result; the same data is visible from every mount

[root@centos121 ~]# mount -t glusterfs  127.0.0.1:/gs1 /mnt
[root@centos121 ~]# ll /mnt
total 0
-rw-r--r-- 1 root root 0 Feb 26 18:15 1
-rw-r--r-- 1 root root 0 Feb 26 18:15 2
-rw-r--r-- 1 root root 0 Feb 26 18:15 3
-rw-r--r-- 1 root root 0 Feb 26 18:15 4
-rw-r--r-- 1 root root 0 Feb 26 18:15 5
-rw-r--r-- 1 root root 0 Feb 26 18:15 6
[root@centos122 ~]# mount -t glusterfs  127.0.0.1:/gs1 /mnt
[root@centos122 ~]# ll /mnt
total 0
-rw-r--r-- 1 root root 0 Feb 26 18:15 1
-rw-r--r-- 1 root root 0 Feb 26 18:15 2
-rw-r--r-- 1 root root 0 Feb 26 18:15 3
-rw-r--r-- 1 root root 0 Feb 26 18:15 4
-rw-r--r-- 1 root root 0 Feb 26 18:15 5
-rw-r--r-- 1 root root 0 Feb 26 18:15 6
[root@centos122 ~]# gluster volume status
Status of volume: gs1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick centos120:/gluster/brick1/bk1         49152     0          Y       7511 
Brick centos121:/gluster/brick1/bk1         49152     0          Y       7198 
Task Status of Volume gs1
------------------------------------------------------------------------------
There are no active volume tasks

(2) Mounting over NFS
In recent GlusterFS releases the built-in NFS server is disabled by default:

[root@centos120 ~]# gluster volume info
Volume Name: gs1
Type: Distribute
Volume ID: bf50562e-5899-4dc2-bd91-ba7b04b4be81
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: centos120:/gluster/brick1/bk1
Brick2: centos121:/gluster/brick1/bk1
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
[root@centos120 ~]# 

Enable the GlusterFS NFS protocol:

[root@centos120 ~]# gluster volume set gs1 nfs.disable off
Gluster NFS is being deprecated in favor of NFS-Ganesha Enter "yes" to continue using Gluster NFS (y/n) y
volume set: success
[root@centos120 ~]# gluster volume info

Volume Name: gs1
Type: Distribute
Volume ID: bf50562e-5899-4dc2-bd91-ba7b04b4be81
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: centos120:/gluster/brick1/bk1
Brick2: centos121:/gluster/brick1/bk1
Options Reconfigured:
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: off   # the NFS protocol is now enabled
[root@centos120 ~]# 

[root@centos120 ~]# gluster volume  status 
Status of volume: gs1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick centos120:/gluster/brick1/bk1         N/A       N/A        N       N/A  
Brick centos121:/gluster/brick1/bk1         49152     0          Y       7198 
NFS Server on localhost                     N/A       N/A        N       N/A  
NFS Server on centos121                     N/A       N/A        N       N/A  
NFS Server on centos122                     N/A       N/A        N       N/A  
NFS Server on centos123                     N/A       N/A        N       N/A  
Task Status of Volume gs1
------------------------------------------------------------------------------
There are no active volume tasks
[root@centos120 ~]# 

The same can be seen from the other nodes:

[root@centos123 ~]# gluster volume status
Status of volume: gs1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick centos120:/gluster/brick1/bk1         N/A       N/A        N       N/A  
Brick centos121:/gluster/brick1/bk1         49152     0          Y       7198 
NFS Server on localhost                     N/A       N/A        N       N/A  
NFS Server on centos120                     N/A       N/A        N       N/A  
NFS Server on centos122                     N/A       N/A        N       N/A  
NFS Server on centos121                     N/A       N/A        N       N/A  

Task Status of Volume gs1
------------------------------------------------------------------------------
There are no active volume tasks

[root@centos123 ~]# 

In the output above the NFS servers show no port, which means the NFS server has not come up.

To turn the NFS service off again later, the command is:
# gluster volume reset <VOLNAME> nfs.disable
What causes the result above?
If the NFS Server mount port shows N/A, the NFS mount capability is not enabled. Mounting over NFS requires two packages, rpcbind and nfs-utils. Even with both installed, the rpcbind service must be running and glusterd must then be restarted before NFS mounts will work.
Let's now enable the NFS mount capability on centos120, as follows.
Install the NFS packages:

[root@centos120 ~]# yum install nfs-utils rpcbind -y
[root@centos120 ~]# rpm -qa nfs-utils 
nfs-utils-1.3.0-0.65.el7.x86_64
[root@centos120 ~]# rpm -qa rpcbind   
rpcbind-0.2.0-48.el7.x86_64
[root@centos120 ~]# systemctl start rpcbind
[root@centos120 ~]# systemctl status  rpcbind
● rpcbind.service - RPC bind service
   Loaded: loaded (/usr/lib/systemd/system/rpcbind.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2020-02-26 17:20:59 CST; 1h 34min ago
 Main PID: 6398 (rpcbind)
   CGroup: /system.slice/rpcbind.service
           └─6398 /sbin/rpcbind -w

Feb 26 17:20:58 centos120 systemd[1]: Starting RPC bind service...
Feb 26 17:20:59 centos120 systemd[1]: Started RPC bind service.
[root@centos120 ~]# 
[root@centos120 ~]# gluster pool list 
UUID                    Hostname    State
223c82a7-de86-4b33-8180-8352efd2ed5e    centos121   Connected 
b98ca06f-8a15-48cd-a0ad-19f60fc5bfda    centos122   Connected 
f56f0689-48e6-419d-81b0-0ac4ca6cc2fc    centos123   Connected 
ffffb612-e1f2-49bd-9991-909bb6f687fa    localhost   Connected 
[root@centos120 ~]# 

Restart rpcbind and glusterd:

[root@centos120 ~]#systemctl restart rpcbind
[root@centos120 ~]#systemctl restart glusterd
[root@centos120 ~]# systemctl status rpcbind
● rpcbind.service - RPC bind service
   Loaded: loaded (/usr/lib/systemd/system/rpcbind.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2020-02-26 20:39:27 CST; 51min ago
 Main PID: 6405 (rpcbind)
   CGroup: /system.slice/rpcbind.service
           └─6405 /sbin/rpcbind -w

Feb 26 20:39:25 centos120 systemd[1]: Starting RPC bind service...
Feb 26 20:39:27 centos120 systemd[1]: Started RPC bind service.
[root@centos120 ~]# systemctl status glusterd
● glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2020-02-26 21:13:55 CST; 16min ago
     Docs: man:glusterd(8)
 Main PID: 7147 (glusterd)
   CGroup: /system.slice/glusterd.service
           ├─7147 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
           └─7183 /usr/sbin/glusterfsd -s centos120 --volfile-id gs1.centos120.gluster-brick1-bk1 -p /var/run/gluster/...
Feb 26 21:13:53 centos120 systemd[1]: Starting GlusterFS, a clustered file-system server...
Feb 26 21:13:55 centos120 systemd[1]: Started GlusterFS, a clustered file-system server.

# Check whether the NFS server now comes up alongside glusterd

[root@centos120 ~]# gluster volume status
Status of volume: gs1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick centos120:/gluster/brick1/bk1         49152     0          Y       7183 
Brick centos121:/gluster/brick1/bk1         49152     0          Y       7133 
NFS Server on localhost                     N/A       N/A        N       N/A  
NFS Server on centos121                     N/A       N/A        N       N/A  
NFS Server on centos122                     N/A       N/A        N       N/A  
NFS Server on centos123                     N/A       N/A        N       N/A  

Task Status of Volume gs1
------------------------------------------------------------------------------
There are no active volume tasks

[root@centos120 ~]# 

The Gluster NFS server still does not come up alongside glusterd. The likely reason is the deprecation noted in the earlier warning: recent GlusterFS releases deprecate the built-in Gluster NFS server in favor of NFS-Ganesha.
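For reference, if the Gluster NFS server (or NFS-Ganesha) were running, the volume would typically be mounted over NFSv3 like this (mount point and options are illustrative):

[root@centos122 ~]# mkdir -p /mnt_nfs
[root@centos122 ~]# mount -t nfs -o vers=3,nolock centos120:/gs1 /mnt_nfs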

View the volume profile, i.e. the performance statistics of each brick (profiling can be switched off again afterwards, as shown after the output):

[root@centos120 ~]# gluster volume profile gs1 start 
Starting volume profile on gs1 has been successful 
[root@centos120 ~]# gluster volume profile gs1 info 
Brick: centos120:/gluster/brick1/bk1
------------------------------------
Cumulative Stats:
 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls         Fop
 ---------   -----------   -----------   -----------   ------------        ----
      0.00       0.00 us       0.00 us       0.00 us              2     RELEASE
      0.00       0.00 us       0.00 us       0.00 us              5  RELEASEDIR
Duration: 1299 seconds
Data Read: 0 bytes
Data Written: 0 bytes
Interval 1 Stats:
Duration: 145 seconds
Data Read: 0 bytes
Data Written: 0 bytes
Brick: centos121:/gluster/brick1/bk1
------------------------------------
Cumulative Stats:
 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls         Fop
 ---------   -----------   -----------   -----------   ------------        ----
      0.00       0.00 us       0.00 us       0.00 us              4     RELEASE
      0.00       0.00 us       0.00 us       0.00 us              5  RELEASEDIR
Duration: 1298 seconds
Data Read: 0 bytes
Data Written: 0 bytes
Interval 1 Stats:
 Duration: 144 seconds
 Data Read: 0 bytes
Data Written: 0 bytes
[root@centos120 ~]# 
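Profiling adds a small amount of overhead, so it can be switched off again once the statistics have been collected:

[root@centos120 ~]# gluster volume profile gs1 stop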

1.3 How a distributed volume spreads its data across the bricks

[root@centos120 ~]# ll /mnt/
total 0
-rw-r--r-- 1 root root 0 Feb 26 18:15 1
-rw-r--r-- 1 root root 0 Feb 26 18:15 2
-rw-r--r-- 1 root root 0 Feb 26 18:15 3
-rw-r--r-- 1 root root 0 Feb 26 18:15 4
-rw-r--r-- 1 root root 0 Feb 26 18:15 5
-rw-r--r-- 1 root root 0 Feb 26 18:15 6
[root@centos120 ~]# ll /gluster/brick1/bk1
total 0
-rw-r--r-- 2 root root 0 Feb 26 18:15 1
-rw-r--r-- 2 root root 0 Feb 26 18:15 5
[root@centos120 ~]# 
[root@centos121 ~]# ll /gluster/brick1/bk1
total 0
-rw-r--r-- 2 root root 0 Feb 26 18:15 2
-rw-r--r-- 2 root root 0 Feb 26 18:15 3
-rw-r--r-- 2 root root 0 Feb 26 18:15 4
-rw-r--r-- 2 root root 0 Feb 26 18:15 6
[root@centos121 ~]# 

From the above we can see that each file is stored on one brick or the other, spread across the bricks (see the sketch below for a way to inspect the hash layout that drives this placement).
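If you want to see the hash ranges behind this placement, the DHT layout is stored as an extended attribute on each brick directory; a quick check (attribute name as used by GlusterFS DHT, requires the attr package) looks like this:

[root@centos120 ~]# getfattr -n trusted.glusterfs.dht -e hex /gluster/brick1/bk1
[root@centos121 ~]# getfattr -n trusted.glusterfs.dht -e hex /gluster/brick1/bk1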

[root@centos120 ~]# yum install  glusterfs-geo-replication 
Installed:
  glusterfs-geo-replication.x86_64 0:7.3-1.el7                                                                           

Dependency Installed:
  python-prettytable.noarch 0:0.7.2-3.el7     python2-gluster.x86_64 0:7.3-1.el7     rsync.x86_64 0:3.1.2-6.el7_6.1    

Complete!
[root@centos120 ~]# systemctl restart glusterfsd
[root@centos120 ~]# ps -ef | grep  gluster 
root       7638      1  0 18:14 ?        00:00:00 /usr/sbin/glusterfs --process-name fuse --volfile-server=127.0.0.1 --volfile-id=/gs1 /mnt
root       8330      1  1 18:36 ?        00:00:00 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
root       8377   6920  0 18:36 pts/0    00:00:00 grep --color=auto gluster
[root@centos120 ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   17G  1.5G   16G   9% /
devtmpfs                 475M     0  475M   0% /dev
tmpfs                    487M     0  487M   0% /dev/shm
tmpfs                    487M  7.7M  479M   2% /run
tmpfs                    487M     0  487M   0% /sys/fs/cgroup
/dev/sda1               1014M  133M  882M  14% /boot
tmpfs                     98M     0   98M   0% /run/user/0
/dev/sdb1                 92M  5.1M   87M   6% /gluster/brick1
/dev/sdb2                105M  5.7M  100M   6% /gluster/brick2
127.0.0.1:/gs1            92M  6.0M   86M   7% /mnt
[root@centos120 ~]# 
[root@centos120 ~]# gluster volume info 

Volume Name: gs1
Type: Distribute
Volume ID: bf50562e-5899-4dc2-bd91-ba7b04b4be81
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: centos120:/gluster/brick1/bk1
Brick2: centos121:/gluster/brick1/bk1
Options Reconfigured:
nfs.disable: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
diagnostics.latency-measurement: on
diagnostics.count-fop-hits: on
2. Creating Replicated Volumes (gs2)

Replicated volumes create copies of files across multiple bricks in the volume. You can use replicated volumes in environments where high-availability and high-reliability are critical.

To create a replicated volume
Create a trusted storage pool.
Create the replicated volume:
# gluster volume create <NEW-VOLNAME> [replica <COUNT>] [transport tcp | rdma | tcp,rdma] <NEW-BRICK>...
For example, to create a replicated volume with two storage servers:
# gluster volume create test-volume replica 2 transport tcp server1:/exp1 server2:/exp2

Creation of test-volume has been successful
Please start the volume to access data.
If the transport type is not specified, tcp is used as the default. You can also set additional options if required, such as auth.allow or auth.reject.
Note:
Make sure you start your volumes before you try to mount them or else client operations after the mount will hang.
GlusterFS will fail to create a replicate volume if more than one brick of a replica set is present on the same peer. For eg. a four node replicated volume where more than one brick of a replica set is present on the same peer.

# gluster volume create <volname> replica 4 server1:/brick1 server1:/brick2 server2:/brick3 server4:/brick4
volume create: <volname>: failed: Multiple bricks of a replicate volume are present on the same peer.
Use the force option at the end of command if you still want to create the volume with this configuration.

2.1 Create the replicated volume gs2

[root@centos120 ~]# gluster volume create gs2 replica 2 centos122:/gluster/brick1/bk1 centos123:/gluster/brick1/bk1
Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 to avoid this. See: http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/.
Do you still want to continue?
 (y/n) y
volume create: gs2: success: please start the volume to access data

2.2 Start the replicated volume gs2

[root@centos120 ~]# gluster volume start gs2
volume start: gs2: success

2.3 Check the status of the replicated volume gs2

[root@centos120 ~]# gluster volume status gs2
Status of volume: gs2
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick centos122:/gluster/brick1/bk1         49152     0          Y       7207 
Brick centos123:/gluster/brick1/bk1         49152     0          Y       7603 
Self-heal Daemon on localhost               N/A       N/A        Y       8330 
Self-heal Daemon on centos122               N/A       N/A        Y       7228 
Self-heal Daemon on centos123               N/A       N/A        Y       7624 
Self-heal Daemon on centos124               N/A       N/A        Y       7021 
Self-heal Daemon on centos121               N/A       N/A        Y       8274 

Task Status of Volume gs2
------------------------------------------------------------------------------
There are no active volume tasks
[root@centos120 ~]# gluster volume info gs2
Volume Name: gs2
Type: Replicate    # replicated volume
Volume ID: 122ddf33-eddc-473f-82cb-e01ca9b56a70
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: centos122:/gluster/brick1/bk1
Brick2: centos123:/gluster/brick1/bk1
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off
[root@centos120 ~]#

2.4 Mount the replicated volume gs2

[root@centos122 ~]# mount -t glusterfs 127.0.0.1:/gs2 /mnt2
[root@centos122 ~]# ll /gluster/brick1
total 0
drwxr-xr-x 3 root root 78 Feb 27 18:08 bk1

2.5 Inspect how gs2 stores its data: the brick on each node holds a full copy, i.e. every piece of data is stored twice

[root@centos122 ~]# ll /gluster/brick1/bk1/
total 0
-rw-r--r-- 2 root root 0 Feb 27 18:08 1
-rw-r--r-- 2 root root 0 Feb 27 18:08 2
-rw-r--r-- 2 root root 0 Feb 27 18:08 3
-rw-r--r-- 2 root root 0 Feb 27 18:08 4
-rw-r--r-- 2 root root 0 Feb 27 18:08 5
-rw-r--r-- 2 root root 0 Feb 27 18:08 6
[root@centos122 ~]# ll /mnt2/
total 0
-rw-r--r-- 1 root root 0 Feb 27 18:08 1
-rw-r--r-- 1 root root 0 Feb 27 18:08 2
-rw-r--r-- 1 root root 0 Feb 27 18:08 3
-rw-r--r-- 1 root root 0 Feb 27 18:08 4
-rw-r--r-- 1 root root 0 Feb 27 18:08 5
-rw-r--r-- 1 root root 0 Feb 27 18:08 6
[root@centos122 ~]# 
[root@centos123 ~]# mount -t glusterfs 127.0.0.1:/gs2 /mnt2
[root@centos123 ~]# ll /gluster/brick1/bk1/
total 0
-rw-r--r-- 2 root root 0 Feb 27 18:08 1
-rw-r--r-- 2 root root 0 Feb 27 18:08 2
-rw-r--r-- 2 root root 0 Feb 27 18:08 3
-rw-r--r-- 2 root root 0 Feb 27 18:08 4
-rw-r--r-- 2 root root 0 Feb 27 18:08 5
-rw-r--r-- 2 root root 0 Feb 27 18:08 6
[root@centos123 ~]# ll /mnt2/
total 0
-rw-r--r-- 1 root root 0 Feb 27 18:08 1
-rw-r--r-- 1 root root 0 Feb 27 18:08 2
-rw-r--r-- 1 root root 0 Feb 27 18:08 3
-rw-r--r-- 1 root root 0 Feb 27 18:08 4
-rw-r--r-- 1 root root 0 Feb 27 18:08 5
-rw-r--r-- 1 root root 0 Feb 27 18:08 6

2.6 Arbiter configuration for replica volumes

Arbiter volumes are replica 3 volumes where the 3rd brick acts as the arbiter brick. This configuration has mechanisms that prevent occurrence of split-brains.
It can be created with the following command:

# gluster volume create <VOLNAME> replica 3 arbiter 1 host1:brick1 host2:brick2 host3:brick3

More information about this configuration can be found at Features : afr-arbiter-volumes
Note that the arbiter configuration for replica 3 can be used to create distributed-replicate volumes as well.

To delete the gs2 volume, first remove the data on it and umount /mnt2:

[root@centos120 ~]# gluster volume stop gs2
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: gs2: success
[root@centos120 ~]# gluster volume delete gs2
Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y
volume delete: gs2: success
[root@centos120 ~]# gluster volume status
Status of volume: gs1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick centos120:/gluster/brick1/bk1         49152     0          Y       6998 
Brick centos121:/gluster/brick1/bk1         49152     0          Y       7025 
NFS Server on localhost                     N/A       N/A        N       N/A  
NFS Server on centos123                     N/A       N/A        N       N/A  
NFS Server on centos122                     N/A       N/A        N       N/A  
NFS Server on centos124                     N/A       N/A        N       N/A  
NFS Server on centos121                     N/A       N/A        N       N/A  

Task Status of Volume gs1
------------------------------------------------------------------------------
There are no active volume tasks

[root@centos120 ~]# gluster volume info 

Volume Name: gs1
Type: Distribute
Volume ID: bf50562e-5899-4dc2-bd91-ba7b04b4be81
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: centos120:/gluster/brick1/bk1
Brick2: centos121:/gluster/brick1/bk1
Options Reconfigured:
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: off
[root@centos120 ~]#

[root@centos120 ~]# gluster volume create gs2_1 replica 3 arbiter 1  centos122:/gluster/brick1/bk1 centos123:/gluster/brick1/bk1 centos124:/gluster/brick1/bk1 force 
volume create: gs2_1: success: please start the volume to access data
[root@centos120 ~]# gluster volume start gs2_1
volume start: gs2_1: success
[root@centos120 ~]# gluster volume status gs2_1
Status of volume: gs2_1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick centos122:/gluster/brick1/bk1         49152     0          Y       7628 
Brick centos123:/gluster/brick1/bk1         49152     0          Y       7973 
Brick centos124:/gluster/brick1/bk1         49152     0          Y       7145 
Self-heal Daemon on localhost               N/A       N/A        Y       8895 
Self-heal Daemon on centos122               N/A       N/A        Y       7649 
Self-heal Daemon on centos124               N/A       N/A        Y       7166 
Self-heal Daemon on centos121               N/A       N/A        Y       8542 
Self-heal Daemon on centos123               N/A       N/A        Y       7994 

Task Status of Volume gs2_1
------------------------------------------------------------------------------
There are no active volume tasks

[root@centos120 ~]# gluster volume info  gs2_1

Volume Name: gs2_1
Type: Replicate
Volume ID: 8369c3b4-6dcb-49f3-9233-0f4781a29a30
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: centos122:/gluster/brick1/bk1
Brick2: centos123:/gluster/brick1/bk1
Brick3: centos124:/gluster/brick1/bk1 (arbiter)
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off
[root@centos120 ~]# 

[root@centos122 ~]# mount -t glusterfs 127.0.0.1:/gs2_1 /mnt2
[root@centos122 ~]# touch /mnt2/{1..7}
[root@centos122 ~]# ll /mnt2/
total 0
-rw-r--r-- 1 root root 0 Feb 27 18:49 1
-rw-r--r-- 1 root root 0 Feb 27 18:49 2
-rw-r--r-- 1 root root 0 Feb 27 18:49 3
-rw-r--r-- 1 root root 0 Feb 27 18:49 4
-rw-r--r-- 1 root root 0 Feb 27 18:49 5
-rw-r--r-- 1 root root 0 Feb 27 18:49 6
-rw-r--r-- 1 root root 0 Feb 27 18:49 7
[root@centos122 ~]# ll /gluster/brick1/bk1/
total 0
-rw-r--r-- 2 root root 0 Feb 27 18:49 1
-rw-r--r-- 2 root root 0 Feb 27 18:49 2
-rw-r--r-- 2 root root 0 Feb 27 18:49 3
-rw-r--r-- 2 root root 0 Feb 27 18:49 4
-rw-r--r-- 2 root root 0 Feb 27 18:49 5
-rw-r--r-- 2 root root 0 Feb 27 18:49 6
-rw-r--r-- 2 root root 0 Feb 27 18:49 7
[root@centos122 ~]# 

[root@centos123 ~]# mount -t glusterfs 127.0.0.1:/gs2_1 /mnt2
[root@centos123 ~]# ll /gluster/brick1/bk1/
total 0
-rw-r--r-- 2 root root 0 Feb 27 18:49 1
-rw-r--r-- 2 root root 0 Feb 27 18:49 2
-rw-r--r-- 2 root root 0 Feb 27 18:49 3
-rw-r--r-- 2 root root 0 Feb 27 18:49 4
-rw-r--r-- 2 root root 0 Feb 27 18:49 5
-rw-r--r-- 2 root root 0 Feb 27 18:49 6
-rw-r--r-- 2 root root 0 Feb 27 18:49 7
[root@centos123 ~]# 

[root@centos124 ~]# mount -t glusterfs 127.0.0.1:/gs2_1 /mnt2
[root@centos124 ~]# ll /gluster/brick1/bk1/
total 0
-rw-r--r-- 2 root root 0 Feb 27 18:49 1
-rw-r--r-- 2 root root 0 Feb 27 18:49 2
-rw-r--r-- 2 root root 0 Feb 27 18:49 3
-rw-r--r-- 2 root root 0 Feb 27 18:49 4
-rw-r--r-- 2 root root 0 Feb 27 18:49 5
-rw-r--r-- 2 root root 0 Feb 27 18:49 6
-rw-r--r-- 2 root root 0 Feb 27 18:49 7
[root@centos124 ~]# 
3. Creating Striped Volumes

Striped volumes stripe data across the bricks in the volume. For best results, you should use striped volumes only in high concurrency environments accessing very large files.
Note: The number of bricks should be equal to the stripe count for a striped volume.

# gluster volume create test-volume stripe 2 transport tcp server1:/exp1 server2:/exp2
Creation of test-volume has been successful
Please start the volume to access data.
[root@centos120 ~]# gluster volume create gs3 stripe 2 centos120:/gluster/brick2/bk2 centos121:/gluster/brick2/bk2
stripe option not supported   # the stripe option is rejected; striped volume types were deprecated and removed in recent GlusterFS releases
Usage:
volume create <NEW-VOLNAME> [stripe <COUNT>] [[replica <COUNT> [arbiter <COUNT>]]|[replica 2 thin-arbiter 1]] [disperse [<COUNT>]] [disperse-data <COUNT>] [redundancy <COUNT>] [transport <tcp|rdma|tcp,rdma>] <NEW-BRICK> ... [force]
[root@centos120 ~]# 
4. Creating Distributed Striped Volumes (not supported in this release)
Distributed striped volumes stripe files across two or more nodes in the cluster. For best results, you should use distributed striped volumes where the requirement is to scale storage and, in high concurrency environments, accessing very large files is critical.
Note: The number of bricks should be a multiple of the stripe count for a distributed striped volume.

To create a distributed striped volume
Create a trusted storage pool.
Create the distributed striped volume:

# gluster volume create <NEW-VOLNAME> [stripe <COUNT>] [transport tcp | rdma | tcp,rdma] <NEW-BRICK>...
For example, to create a distributed striped volume across eight storage servers:
# gluster volume create test-volume stripe 4 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4 server5:/exp5 server6:/exp6 server7:/exp7 server8:/exp8

Creation of test-volume has been successful
Please start the volume to access data.

In practice the stripe option is still rejected; as noted above, the striped volume types have been removed from recent GlusterFS releases:

[root@centos120 ~]# gluster volume create gs3 stripe 2 centos120:/gluster/brick2 centos121:/gluster/brick2 centos122:/gluster/brick2 centos123:/gluster/brick2
stripe option not supported
Usage:
volume create <NEW-VOLNAME> [stripe <COUNT>] [[replica <COUNT> [arbiter <COUNT>]]|[replica 2 thin-arbiter 1]] [disperse [<COUNT>]] [disperse-data <COUNT>] [redundancy <COUNT>] [transport <tcp|rdma|tcp,rdma>] <NEW-BRICK> ... [force]
[root@centos120 ~]# 

5. Creating Distributed Replicated Volumes

Distributes files across replicated bricks in the volume. You can use distributed replicated volumes in environments where the requirement is to scale storage and high-reliability is critical. Distributed replicated volumes also offer improved read performance in most environments.

Note: The number of bricks should be a multiple of the replica count for a distributed replicated volume. Also, the order in which bricks are specified has a great effect on data protection. Each replica_count consecutive bricks in the list you give will form a replica set, with all replica sets combined into a volume-wide distribute set. To make sure that replica-set members are not placed on the same node, list the first brick on every server, then the second brick on every server in the same order, and so on.

Create a trusted storage pool.
Create the distributed replicated volume:

# gluster volume create <NEW-VOLNAME> [replica <COUNT>] [transport tcp | rdma | tcp,rdma] <NEW-BRICK>...
For example, a four node distributed (replicated) volume with a two-way mirror:
# gluster volume create test-volume replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4

Creation of test-volume has been successful
Please start the volume to access data.
For example, to create a six node distributed (replicated) volume with a two-way mirror:

# gluster volume create test-volume replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4 server5:/exp5 server6:/exp6

Creation of test-volume has been successful
Please start the volume to access data.
Note: Make sure you start your volumes before you try to mount them or else client operations after the mount will hang.
GlusterFS will fail to create a distribute replicate volume if more than one brick of a replica set is present on the same peer. For eg. for a four node distribute (replicated) volume where more than one brick of a replica set is present on the same peer:

# gluster volume create <volname> replica 2 server1:/brick1 server1:/brick2 server2:/brick3 server4:/brick4
volume create: <volname>: failed: Multiple bricks of a replicate volume are present on the same server. This setup is not optimal. Use 'force' at the end of the command if you want to override this behavior.
[root@centos120 ~]# gluster volume create gs2_2 replica 2 transport tcp centos120:/gluster/brick2/bk2 centos121:/gluster/brick2/bk2 centos122:/gluster/brick2/bk2 centos123:/gluster/brick2/bk2
Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 to avoid this. See: http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/.
Do you still want to continue?
 (y/n) y
volume create: gs2_2: success: please start the volume to access data
[root@centos120 ~]# gluster volume info gs2_2

Volume Name: gs2_2
Type: Distributed-Replicate
Volume ID: c2d0be7b-8df9-421a-88ea-512821cb9261
Status: Created
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: centos120:/gluster/brick2/bk2
Brick2: centos121:/gluster/brick2/bk2
Brick3: centos122:/gluster/brick2/bk2
Brick4: centos123:/gluster/brick2/bk2
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off
[root@centos120 ~]#
[root@centos120 ~]# gluster volume start gs2_2
volume start: gs2_2: success
[root@centos120 ~]# gluster volume status  gs2_2
Status of volume: gs2_2
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick centos120:/gluster/brick2/bk2         49153     0          Y       9820 
Brick centos121:/gluster/brick2/bk2         49153     0          Y       9127 
Brick centos122:/gluster/brick2/bk2         49152     0          Y       8359 
Brick centos123:/gluster/brick2/bk2         49152     0          Y       8668 
Self-heal Daemon on localhost               N/A       N/A        Y       9841 
Self-heal Daemon on centos122               N/A       N/A        Y       8380 
Self-heal Daemon on centos123               N/A       N/A        Y       8689 
Self-heal Daemon on centos121               N/A       N/A        Y       9148 
Self-heal Daemon on centos124               N/A       N/A        Y       7866 
Task Status of Volume gs2_2
------------------------------------------------------------------------------
There are no active volume tasks
[root@centos120 ~]# 

Create some test data:

[root@centos120 ~]# mount -t glusterfs 127.0.0.1:/gs2_2  /mnt2
[root@centos120 ~]# touch /mnt2/{a..f}
[root@centos120 ~]# ll /mnt2/
total 0
-rw-r--r-- 1 root root 0 Feb 27 20:43 a
-rw-r--r-- 1 root root 0 Feb 27 20:43 b
-rw-r--r-- 1 root root 0 Feb 27 20:43 c
-rw-r--r-- 1 root root 0 Feb 27 20:43 d
-rw-r--r-- 1 root root 0 Feb 27 20:43 e
-rw-r--r-- 1 root root 0 Feb 27 20:43 f
[root@centos120 ~]# 

Check how the data is distributed; the nodes mirror each other's data in pairs:

[root@centos120 ~]# ll /gluster/brick2/bk2/
total 0
-rw-r--r-- 2 root root 0 Feb 27 20:43 a
-rw-r--r-- 2 root root 0 Feb 27 20:43 b
-rw-r--r-- 2 root root 0 Feb 27 20:43 c
-rw-r--r-- 2 root root 0 Feb 27 20:43 e
[root@centos120 ~]# 
[root@centos121 ~]# ll /gluster/brick2/bk2/
total 0
-rw-r--r-- 2 root root 0 Feb 27 20:43 a
-rw-r--r-- 2 root root 0 Feb 27 20:43 b
-rw-r--r-- 2 root root 0 Feb 27 20:43 c
-rw-r--r-- 2 root root 0 Feb 27 20:43 e
[root@centos121 ~]# 
[root@centos122 ~]# ll /gluster/brick2/bk2/
total 0
-rw-r--r-- 2 root root 0 Feb 27 20:43 d
-rw-r--r-- 2 root root 0 Feb 27 20:43 f
[root@centos122 ~]# 
[root@centos123 ~]# ll /gluster/brick2/bk2/
total 0
-rw-r--r-- 2 root root 0 Feb 27 20:43 d
-rw-r--r-- 2 root root 0 Feb 27 20:43 f
[root@centos123 ~]# 

Note that the usable size of the volume is half of the total brick capacity:

[root@centos123 ~]# df -h | grep mnt2
127.0.0.1:/gs2_2         210M   14M  196M   7% /mnt2
[root@centos123 ~]# 

6. Creating Distributed Striped Replicated Volumes

Distributed striped replicated volumes distribute striped data across replicated bricks in the cluster. For best results, you should use distributed striped replicated volumes in highly concurrent environments where parallel access of very large files and performance is critical. In this release, configuration of this volume type is supported only for Map Reduce workloads.

Note: The number of bricks should be a multiple of the stripe count times the replica count for a distributed striped replicated volume.

To create a distributed striped replicated volume
Create a trusted storage pool.
Create a distributed striped replicated volume using the following command:

# gluster volume create <NEW-VOLNAME> [stripe <COUNT>] [replica <COUNT>] [transport tcp | rdma | tcp,rdma] <NEW-BRICK>...

For example, to create a distributed replicated striped volume across eight storage servers:

# gluster volume create test-volume stripe 2 replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4 server5:/exp5 server6:/exp6 server7:/exp7 server8:/exp8

Creation of test-volume has been successful
Please start the volume to access data.
GlusterFS will fail to create a distribute replicate volume if more than one brick of a replica set is present on the same peer. For eg. a four node distribute (replicated) volume where more than one brick of a replica set is present on the same peer.

# gluster volume create <volname> stripe 2 replica 2 server1:/brick1 server1:/brick2 server2:/brick3 server4:/brick4

volume create: <volname>: failed: Multiple bricks of a replicate volume are present on the same server. This setup is not optimal.
Use 'force' at the end of the command if you want to override this behavior.

7. Creating Striped Replicated Volumes

Striped replicated volumes stripe data across replicated bricks in the cluster. For best results, you should use striped replicated volumes in highly concurrent environments where there is parallel access of very large files and performance is critical. In this release, configuration of this volume type is supported only for Map Reduce workloads.

The number of bricks should be a multiple of the replica count times the stripe count for a striped replicated volume.

The data layout is similar to that of HuaYun cStor.

To create a striped replicated volume
Create a trusted storage pool consisting of the storage servers that will comprise the volume.

Create a striped replicated volume:
# gluster volume create <NEW-VOLNAME> [stripe <COUNT>] [replica <COUNT>] [transport tcp | rdma | tcp,rdma] <NEW-BRICK>...

For example, to create a striped replicated volume across four storage servers:
# gluster volume create test-volume stripe 2 replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4

Creation of test-volume has been successful
Please start the volume to access data.
To create a striped replicated volume across six storage servers

# gluster volume create test-volume stripe 3 replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4 server5:/exp5 server6:/exp6

Creation of test-volume has been successful
Please start the volume to access data.
GlusterFS will fail to create a distribute replicate volume if more than one brick of a replica set is present on the same peer. For eg. a four node distribute (replicated) volume where more than one brick of replica set is present on the same peer.

# gluster volume create <volname> stripe 2 replica 2 server1:/brick1 server1:/brick2 server2:/brick3 server4:/brick4

volume create: <volname>: failed: Multiple bricks of a replicate volume are present on the same server. This setup is not optimal. Use 'force' at the end of the command if you want to override this behavior.

8. Creating Dispersed Volumes

Dispersed volumes are based on erasure codes. It stripes the encoded data of files, with some redundancy added, across multiple bricks in the volume. You can use dispersed volumes to have a configurable level of reliability with minimum space waste.


The usable size of a dispersed volume is computed as:

<Usable size> = <Brick size> * (#Bricks - Redundancy)

All bricks of a disperse set should have the same capacity; otherwise, when the smallest brick becomes full, no additional data will be allowed in the disperse set.

It's important to note that a configuration with 3 bricks and redundancy 1 will have less usable space (66.7% of the total physical space) than a configuration with 10 bricks and redundancy 1 (90%). However, the first one will be safer than the second one (the probability of failure of the second configuration is roughly 4.5 times higher than that of the first one).

For example, a dispersed volume composed of 6 bricks of 4TB and a redundancy of 2 will be completely operational even with two bricks inaccessible. However a third inaccessible brick will bring the volume down because it won't be possible to read or write to it. The usable space of the volume will be equal to 16TB.
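As a quick sanity check, the usable-size and efficiency figures above can be reproduced with plain shell arithmetic (a rough illustration only; the numbers mirror the examples in the text):

echo "scale=3; (3 - 1) / 3" | bc      # 3 bricks, redundancy 1  -> .666 (about 66.7% usable)
echo "scale=3; (10 - 1) / 10" | bc    # 10 bricks, redundancy 1 -> .900 (90% usable)
echo $((4 * (6 - 2)))                 # 6 bricks of 4TB, redundancy 2 -> 16 (TB usable)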

The implementation of erasure codes in GlusterFS limits the redundancy to a value smaller than #Bricks / 2 (or equivalently, redundancy * 2 < #Bricks). Having a redundancy equal to half of the number of bricks would be almost equivalent to a replica-2 volume, and probably a replicated volume will perform better in this case.

Optimal volumes

One of the worst things erasure codes have in terms of performance is the RMW (Read-Modify-Write) cycle. Erasure codes operate in blocks of a certain size and cannot work with smaller ones. This means that if a user issues a write of a portion of a file that doesn't fill a full block, it needs to read the remaining portion from the current contents of the file, merge them, compute the updated encoded block and, finally, write the resulting data.

This adds latency, reducing performance when this happens. Some GlusterFS performance xlators can help to reduce or even eliminate this problem for some workloads, but it should be taken into account when using dispersed volumes for a specific use case.

The current implementation of dispersed volumes uses blocks of a size that depends on the number of bricks and redundancy: 512 * (#Bricks - redundancy) bytes. This value is also known as the stripe size.

Using combinations of #Bricks/redundancy that give a power of two for the stripe size will make the disperse volume perform better in most workloads, because it's more typical to write information in blocks that are multiples of two (for example databases, virtual machines and many applications). These combinations are considered optimal.

For example, a configuration with 6 bricks and redundancy 2 will have a stripe size of 512 * (6 - 2) = 2048 bytes, so it's considered optimal. A configuration with 7 bricks and redundancy 2 would have a stripe size of 2560 bytes, needing a RMW cycle for many writes (of course this always depends on the use case).
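For reference, the stripe sizes of a few layouts can be checked the same way (a sketch; the 4-brick case corresponds to the gs4 volume built in the hands-on section below, which is also why the CLI warns that no optimal redundancy value exists for 4 bricks):

echo $((512 * (6 - 2)))   # 6 bricks, redundancy 2 -> 2048 bytes, a power of two: optimal
echo $((512 * (7 - 2)))   # 7 bricks, redundancy 2 -> 2560 bytes, not a power of two: frequent RMW cycles
echo $((512 * (4 - 1)))   # 4 bricks, redundancy 1 -> 1536 bytes, not a power of two: not optimal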

To create a dispersed volume
Create a trusted storage pool.
Create the dispersed volume:

# gluster volume create <NEW-VOLNAME> [disperse [<count>]] [redundancy <count>] [transport tcp | rdma | tcp,rdma] <NEW-BRICK>...

A dispersed volume can be created by specifying the number of bricks in a disperse set, by specifying the number of redundancy bricks, or both.
If disperse is not specified, or the <count> is missing, the entire volume will be treated as a single disperse set composed of all bricks enumerated on the command line.

If redundancy is not specified, it is computed automatically to be the optimal value. If this value does not exist, it's assumed to be '1' and a warning message is shown:

# gluster volume create test-volume disperse 4 server{1..4}:/bricks/test-volume

There isn't an optimal redundancy value for this configuration. Do you want to create the volume with redundancy 1 ? (y/n)
In all cases where redundancy is automatically computed and it's not equal to '1', a warning message is displayed:

# gluster volume create test-volume disperse 6 server{1..6}:/bricks/test-volume

The optimal redundancy for this configuration is 2. Do you want to create the volume with this value ? (y/n)

redundancy must be greater than 0, and the total number of bricks must be greater than 2 * redundancy. This means that a dispersed volume must have a minimum of 3 bricks.

If the transport type is not specified, tcp is used as the default. You can also set additional options if required, like in the other volume types.

Note:
Make sure you start your volumes before you try to mount them or else client operations after the mount will hang.
GlusterFS will fail to create a dispersed volume if more than one brick of a disperse set is present on the same peer.

# gluster volume create <volname> disperse 3 server1:/brick{1..3}
volume create: : failed: Multiple bricks of a replicate volume are present on the same server. This setup is not optimal.
Do you still want to continue creating the volume? (y/n)
Use the force option at the end of command if you want to create the volume in this case.

Hands-on practice

[root@centos124 ~]# gluster volume create gs4 disperse 4 centos120:/gluster/brick1/bk1 centos121:/gluster/brick1/bk1 centos122:/gluster/brick1/bk1 centos123:/gluster/brick1/bk1
There isn't an optimal redundancy value for this configuration. Do you want to create the volume with redundancy 1 ? (y/n) y
volume create: gs4: failed: Staging failed on centos123. Error: /gluster/brick1/bk1 is already part of a volume
Staging failed on centos121. Error: /gluster/brick1/bk1 is already part of a volume
Staging failed on centos120. Error: /gluster/brick1/bk1 is already part of a volume
Staging failed on centos122. Error: /gluster/brick1/bk1 is already part of a volume
[root@centos124 ~]# gluster volume create gs4 disperse 4 centos120:/gluster/brick1/bk1 centos121:/gluster/brick1/bk1 centos122:/gluster/brick1/bk1 centos123:/gluster/brick1/bk1 force 
There isn't an optimal redundancy value for this configuration. Do you want to create the volume with redundancy 1 ? (y/n) y
volume create: gs4: success: please start the volume to access data
[root@centos124 ~]# 
[root@centos124 ~]# gluster volume start gs4
volume start: gs4: success
[root@centos124 ~]# gluster volume status
Status of volume: gs4
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick centos120:/gluster/brick1/bk1         49152     0          Y       11293
Brick centos121:/gluster/brick1/bk1         49153     0          Y       10503
Brick centos122:/gluster/brick1/bk1         49152     0          Y       9214 
Brick centos123:/gluster/brick1/bk1         49152     0          Y       9527 
Self-heal Daemon on localhost               N/A       N/A        Y       8205 
Self-heal Daemon on centos123               N/A       N/A        Y       9548 
Self-heal Daemon on centos122               N/A       N/A        Y       9235 
Self-heal Daemon on centos120               N/A       N/A        Y       11314
Self-heal Daemon on centos121               N/A       N/A        Y       10524
Task Status of Volume gs4
------------------------------------------------------------------------------
There are no active volume tasks
[root@centos124 ~]# gluster volume info 
Volume Name: gs4
Type: Disperse
Volume ID: 67bc6e10-6e28-402a-9a76-964e5b437a3c
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (3 + 1) = 4
Transport-type: tcp
Bricks:
Brick1: centos120:/gluster/brick1/bk1
Brick2: centos121:/gluster/brick1/bk1
Brick3: centos122:/gluster/brick1/bk1
Brick4: centos123:/gluster/brick1/bk1
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
[root@centos124 ~]# 

Mount the volume

[root@centos120 ~]# mount -t glusterfs 127.0.0.1:/gs4   /mnt
[root@centos120 ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   17G  1.6G   16G   9% /
devtmpfs                 475M     0  475M   0% /dev
tmpfs                    487M     0  487M   0% /dev/shm
tmpfs                    487M  7.7M  479M   2% /run
tmpfs                    487M     0  487M   0% /sys/fs/cgroup
/dev/sdb2                105M  5.8M   99M   6% /gluster/brick2
/dev/sda1               1014M  133M  882M  14% /boot
/dev/sdb1                 92M  5.1M   87M   6% /gluster/brick1
tmpfs                     98M     0   98M   0% /run/user/0
127.0.0.1:/gs4           275M   18M  258M   7% /mnt   # (3+1): usable space is about 3/4 of the raw brick capacity
[root@centos120 ~]# touch /mnt/{1..7}
[root@centos120 ~]# ll /mnt/
total 0
-rw-r--r-- 1 root root 0 Feb 27 23:22 1
-rw-r--r-- 1 root root 0 Feb 27 23:22 2
-rw-r--r-- 1 root root 0 Feb 27 23:22 3
-rw-r--r-- 1 root root 0 Feb 27 23:22 4
-rw-r--r-- 1 root root 0 Feb 27 23:22 5
-rw-r--r-- 1 root root 0 Feb 27 23:22 6
-rw-r--r-- 1 root root 0 Feb 27 23:22 7
[root@centos120 ~]# ll /gluster/brick1/
total 0
drwxr-xr-x 3 root root 87 Feb 27 23:22 bk1

The data is spread as encoded fragments across all the bricks

[root@centos120 ~]# ll /gluster/brick1/bk1/
total 0
-rw-r--r-- 2 root root 0 Feb 27 23:22 1
-rw-r--r-- 2 root root 0 Feb 27 23:22 2
-rw-r--r-- 2 root root 0 Feb 27 23:22 3
-rw-r--r-- 2 root root 0 Feb 27 23:22 4
-rw-r--r-- 2 root root 0 Feb 27 23:22 5
-rw-r--r-- 2 root root 0 Feb 27 23:22 6
-rw-r--r-- 2 root root 0 Feb 27 23:22 7
[root@centos120 ~]# 
[root@centos120 ~]# cp -p /opt/glusterfs/glusterfs-server-7.3-1.el8.x86_64.rpm  /mnt/
[root@centos120 ~]
[root@centos120 ~]# du -sh /opt/glusterfs/glusterfs-server-7.3-1.el8.x86_64.rpm
1.4M    /opt/glusterfs/glusterfs-server-7.3-1.el8.x86_64.rpm
[root@centos120 ~]# du -sh /mnt/glusterfs-server-7.3-1.el8.x86_64.rpm
1.4M    /mnt/glusterfs-server-7.3-1.el8.x86_64.rpm
[root@centos120 ~]# du -sh  /gluster/brick1/bk1/glusterfs-server-7.3-1.el8.x86_64.rpm
964K    /gluster/brick1/bk1/glusterfs-server-7.3-1.el8.x86_64.rpm
[root@centos120 ~]# 
[root@centos121 ~]# du -sh  /gluster/brick1/bk1/glusterfs-server-7.3-1.el8.x86_64.rpm
472K    /gluster/brick1/bk1/glusterfs-server-7.3-1.el8.x86_64.rpm
[root@centos121 ~]# 
[root@centos122 ~]# du -sh  /gluster/brick1/bk1/glusterfs-server-7.3-1.el8.x86_64.rpm
964K    /gluster/brick1/bk1/glusterfs-server-7.3-1.el8.x86_64.rpm
[root@centos122 ~]# 
[root@centos123 ~]# du -sh  /gluster/brick1/bk1/glusterfs-server-7.3-1.el8.x86_64.rpm
964K    /gluster/brick1/bk1/glusterfs-server-7.3-1.el8.x86_64.rpm
[root@centos123 ~]# 

A few minutes later the data is evenly distributed: each brick holds an encoded fragment of roughly file size / (#bricks - redundancy), i.e. about 1.4M / 3 ≈ 472K.

[root@centos120 ~]# du -sh  /gluster/brick1/bk1/glusterfs-server-7.3-1.el8.x86_64.rpm
472K    /gluster/brick1/bk1/glusterfs-server-7.3-1.el8.x86_64.rpm
[root@centos120 ~]# 
[root@centos121 ~]# du -sh  /gluster/brick1/bk1/glusterfs-server-7.3-1.el8.x86_64.rpm
472K    /gluster/brick1/bk1/glusterfs-server-7.3-1.el8.x86_64.rpm
[root@centos121 ~]#
[root@centos122 ~]# du -sh  /gluster/brick1/bk1/glusterfs-server-7.3-1.el8.x86_64.rpm
472K    /gluster/brick1/bk1/glusterfs-server-7.3-1.el8.x86_64.rpm
[root@centos122 ~]# 
[root@centos123 ~]# du -sh  /gluster/brick1/bk1/glusterfs-server-7.3-1.el8.x86_64.rpm
472K    /gluster/brick1/bk1/glusterfs-server-7.3-1.el8.x86_64.rpm
[root@centos123 ~]# 
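If you want to confirm that the disperse set has finished converging, the heal state can be checked at any time (gs4 is the volume created above; output omitted here):

# gluster volume heal gs4 info        list entries that still need healing (should be 0 once converged)
# gluster volume status gs4 detail    per-brick disk usage, useful to see whether fragments are evenly sized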

9. Creating Distributed Dispersed Volumes

Distributed dispersed volumes are the equivalent to distributed replicated volumes, but using dispersed subvolumes instead of replicated ones.
To create a distributed dispersed volume
Create a trusted storage pool.
Create the distributed dispersed volume:

# gluster volume create <NEW-VOLNAME> disperse <count> [redundancy <count>] [transport tcp | rdma | tcp,rdma] <NEW-BRICK>...

To create a distributed dispersed volume, the disperse keyword and <count> are mandatory, and the number of bricks specified on the command line must be a multiple of the disperse count.

The redundancy parameter works exactly the same as for a plain dispersed volume.

If the transport type is not specified, tcp is used as the default. You can also set additional options if required, like in the other volume types.

GlusterFS will fail to create a distributed dispersed volume if more than one brick of a disperse set is present on the same peer.

# gluster volume create <volname> disperse 3 server1:/brick{1..6}
volume create: : failed: Multiple bricks of a replicate volume are present on the same server. This setup is not optimal.

Do you still want to continue creating the volume? (y/n)
Use the force option at the end of command if you want to create the volume in this case.

10. Expanding a storage volume by adding bricks
    10.1 Expanding a distributed replicated volume
[root@centos120 ~]# gluster volume create gs1 replica 2 centos120:/gluster/brick1/bk1 centos121:/gluster/brick1/bk1 force 
volume create: gs1: success: please start the volume to access data
[root@centos120 ~]# gluster volume start gs1
volume start: gs1: success
[root@centos120 ~]# gluster volume status
Status of volume: gs1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick centos120:/gluster/brick1/bk1         49153     0          Y       7342 
Brick centos121:/gluster/brick1/bk1         49153     0          Y       7130 
Self-heal Daemon on localhost               N/A       N/A        Y       7363 
Self-heal Daemon on centos121               N/A       N/A        Y       7151 
Self-heal Daemon on centos123               N/A       N/A        Y       7184 
Self-heal Daemon on centos124               N/A       N/A        Y       7077 
Self-heal Daemon on centos122               N/A       N/A        Y       7147 
Task Status of Volume gs1
------------------------------------------------------------------------------
There are no active volume tasks
[root@centos120 ~]# gluster volume info 
Volume Name: gs1
Type: Replicate
Volume ID: dd0692ed-ece8-432d-96ac-282b72764a24
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: centos120:/gluster/brick1/bk1
Brick2: centos121:/gluster/brick1/bk1
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off
[root@centos120 ~]# mount -t glusterfs 127.0.0.1:/gs1 /mnt
[root@centos120 ~]# touch /mnt/{1..6}
[root@centos120 ~]# ls /mnt/
1  2  3  4  5  6
[root@centos120 ~]# ll /gluster/brick1/bk1/
total 0
-rw-r--r-- 2 root root 0 Feb 28 15:58 1
-rw-r--r-- 2 root root 0 Feb 28 15:58 2
-rw-r--r-- 2 root root 0 Feb 28 15:58 3
-rw-r--r-- 2 root root 0 Feb 28 15:58 4
-rw-r--r-- 2 root root 0 Feb 28 15:58 5
-rw-r--r-- 2 root root 0 Feb 28 15:58 6

Start the expansion

[root@centos120 ~]# gluster volume add-brick gs1 replica 2 centos120:/gluster/brick2/bk2 centos121:/gluster/brick2/bk2
Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 to avoid this. See: http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/.
Do you still want to continue?
 (y/n) y
volume add-brick: failed: /gluster/brick2/bk2 is already part of a volume
[root@centos120 ~]#
[root@centos120 ~]# gluster volume add-brick gs1 replica 2 centos120:/gluster/brick2/bk2 centos121:/gluster/brick2/bk2 force 
volume add-brick: success

Check that the available space has grown

[root@centos120 ~]# df -h | grep  mnt
127.0.0.1:/gs1           197M   13M  184M   7% /mnt

[root@centos120 ~]# gluster volume status
Status of volume: gs1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick centos120:/gluster/brick1/bk1         49153     0          Y       7342 
Brick centos121:/gluster/brick1/bk1         49153     0          Y       7130 
Brick centos120:/gluster/brick2/bk2         49154     0          Y       7522 
Brick centos121:/gluster/brick2/bk2         49154     0          Y       7293 
Self-heal Daemon on localhost               N/A       N/A        Y       7363 
Self-heal Daemon on centos123               N/A       N/A        Y       7184 
Self-heal Daemon on centos122               N/A       N/A        Y       7147 
Self-heal Daemon on centos121               N/A       N/A        Y       7151 
Self-heal Daemon on centos124               N/A       N/A        Y       7077
Task Status of Volume gs1
------------------------------------------------------------------------------
There are no active volume tasks
[root@centos120 ~]# gluster volume info 
Volume Name: gs1
Type: Distributed-Replicate
Volume ID: dd0692ed-ece8-432d-96ac-282b72764a24
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: centos120:/gluster/brick1/bk1
Brick2: centos121:/gluster/brick1/bk1
Brick3: centos120:/gluster/brick2/bk2
Brick4: centos121:/gluster/brick2/bk2
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off
[root@centos120 ~]# ll /gluster/brick2/bk2/
total 0
-rw-r--r-- 2 root root 0 Feb 27 20:43 a
-rw-r--r-- 2 root root 0 Feb 27 20:43 b
-rw-r--r-- 2 root root 0 Feb 27 20:43 c
-rw-r--r-- 2 root root 0 Feb 27 20:43 e
[root@centos120 ~]# 
[root@centos121 ~]# mount -t glusterfs 127.0.0.1:/gs1 /mnt
[root@centos121 ~]# ls /mnt/
1  2  3  4  5  6
[root@centos121 ~]# ls /gluster/brick1/bk1/
1  2  3  4  5  6
[root@centos121 ~]# ll /gluster/brick2/bk2/
total 0
-rw-r--r-- 2 root root 0 Feb 27 20:43 a
-rw-r--r-- 2 root root 0 Feb 27 20:43 b
-rw-r--r-- 2 root root 0 Feb 27 20:43 c
-rw-r--r-- 2 root root 0 Feb 27 20:43 e
[root@centos121 ~]# 

When expanding a distributed replicated volume or a distributed striped volume, pay special attention: if the volume was created with replica 2 or stripe 2, bricks must be added in multiples of two.

For example, for a distributed replicated volume with replica 2, bricks must be added 2, 4, 6, 8 ... at a time.
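For example, to grow the gs1 volume above by one more replica pair (a sketch only; the bricks on centos122/centos123 are hypothetical paths that must already exist, and force is only needed if they were used by a previous volume):

# gluster volume add-brick gs1 replica 2 centos122:/gluster/brick1/bk1 centos123:/gluster/brick1/bk1
# gluster volume rebalance gs1 start      spread existing files onto the new bricks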

[root@centos120 ~]# touch /mnt/{12..20}
[root@centos120 ~]# ls /mnt/
1  12  13  14  15  16  17  18  19  2  20  3  4  5  6  a  b  c  e
[root@centos120 ~]# ls /gluster/brick1/bk1/ /gluster/brick2/bk2/
/gluster/brick1/bk1/:
1  13  2  20  3  4  5  6
/gluster/brick2/bk2/:
12  14  15  16  17  18  19  a  b  c  e
[root@centos120 ~]# 
[root@centos121 ~]# ls /mnt/
1  12  13  14  15  16  17  18  19  2  20  3  4  5  6  a  b  c  e
[root@centos121 ~]# ls /gluster/brick1/bk1 /gluster/brick2/bk2/
/gluster/brick1/bk1:
1  13  2  20  3  4  5  6
/gluster/brick2/bk2/:
12  14  15  16  17  18  19  a  b  c  e
[root@centos121 ~]# 

Create some more data

[root@centos120 ~]# touch /mnt/{1..20}
[root@centos120 ~]# ls /gluster/brick1/bk1 /gluster/brick2/bk2/
/gluster/brick1/bk1:
1  11  13  20  5  7  8  9
/gluster/brick2/bk2/:
10  12  14  15  16  17  18  19  2  3  4  6
[root@centos120 ~]# gluster volume rebalance gs1 start
volume rebalance: gs1: success: Rebalance on gs1 has been started successfully. Use rebalance status command to check status of the rebalance process.
ID: abb7d66f-4c16-49eb-9a11-c7e23fe1789d
[root@centos120 ~]# ls /gluster/brick1/bk1 /gluster/brick2/bk2/
/gluster/brick1/bk1:
1  11  13  20  5  7  8  9
/gluster/brick2/bk2/:
10  12  14  15  16  17  18  19  2  3  4  6

Check the rebalance status

[root@centos120 ~]# gluster volume rebalance gs1 status
Node Rebalanced-files          size       scanned      failures       skipped               status  run time in h:m:s
                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
                               centos121                0        0Bytes            20             0             0            completed        0:00:01
                               localhost                0        0Bytes            20             0             0            completed        0:00:00
volume rebalance: gs1: success
[root@centos120 ~]# 
[root@centos121 ~]# ls /gluster/brick1/bk1 /gluster/brick2/bk2/
/gluster/brick1/bk1:
1  11  13  20  5  7  8  9
/gluster/brick2/bk2/:
10  12  14  15  16  17  18  19  2  3  4  6
[root@centos121 ~]# 

三, Shrinking and deleting storage volumes

(1) Shrinking a storage volume by removing bricks
Note: you may want to shrink a volume while it stays online, for example when hardware fails or the network has problems and you need to take the affected bricks out of the volume. While bricks are being removed you can no longer reach their data through the gluster mount point; only after the brick information has been removed from the volume configuration can you access the data on those bricks directly again. When removing bricks from a distributed replicated or distributed striped volume, the number of bricks removed must be a multiple of the replica or stripe count; for a distributed striped volume with stripe 2, for example, bricks must be removed 2, 4, 6, 8 ... at a time. A graceful online workflow is sketched right below; the rest of this section demonstrates the simpler stop-and-force approach.
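Besides the stop-and-force method used below, a replica set can also be removed gracefully while the volume stays online, letting gluster migrate the data off the bricks first (a sketch based on the gs1 volume from the previous section):

# gluster volume remove-brick gs1 replica 2 centos120:/gluster/brick2/bk2 centos121:/gluster/brick2/bk2 start
# gluster volume remove-brick gs1 replica 2 centos120:/gluster/brick2/bk2 centos121:/gluster/brick2/bk2 status
# gluster volume remove-brick gs1 replica 2 centos120:/gluster/brick2/bk2 centos121:/gluster/brick2/bk2 commit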
First, unmount the volume

[root@centos120 ~]# umount /mnt

Then stop the volume

[root@centos120 ~]# gluster volume stop gs1
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: gs1: success

Then remove the bricks

[root@centos120 ~]# gluster volume remove-brick gs1 replica 2 centos120:/gluster/brick1/bk1  centos121:/gluster/brick1/bk1 force 
Remove-brick force will not migrate files from the removed bricks, so they will no longer be available on the volume.
Do you want to continue? (y/n) y
volume remove-brick commit force: success
[root@centos120 ~]# 

Check whether the bricks have been removed from the volume

[root@centos120 ~]# gluster volume info gs1 
Volume Name: gs1
Type: Replicate
Volume ID: dd0692ed-ece8-432d-96ac-282b72764a24
Status: Stopped
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: centos120:/gluster/brick2/bk2
Brick2: centos121:/gluster/brick2/bk2
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off
[root@centos120 ~]# 

The data is still present on the bricks, as expected

[root@centos120 ~]# ls /gluster/brick2/bk2 /gluster/brick2/bk2
/gluster/brick2/bk2:
10  12  14  15  16  17  18  19  2  3  4  6
/gluster/brick2/bk2:
10  12  14  15  16  17  18  19  2  3  4  6
[root@centos120 ~]# 
[root@centos121 ~]# ls /gluster/brick2/bk2 /gluster/brick2/bk2
/gluster/brick2/bk2:
10  12  14  15  16  17  18  19  2  3  4  6
/gluster/brick2/bk2:
10  12  14  15  16  17  18  19  2  3  4  6
[root@centos121 ~]# 

Start the volume again

[root@centos121 ~]# gluster volume start gs1
volume start: gs1: success
[root@centos121 ~]# mount -t glusterfs 127.0.0.1:/gs1 /mnt
/sbin/mount.glusterfs: according to mtab, GlusterFS is already mounted on /mnt
[root@centos121 ~]# df -h | grep mnt
127.0.0.1:/gs1           105M  9.1M   96M   9% /mnt
[root@centos121 ~]# 

Check the data

[root@centos121 ~]# ls /mnt/
10  12  14  15  16  17  18  19  2  3  4  6
[root@centos121 ~]# 

(2) Deleting a storage volume

[root@centos120 ~]# umount /mnt
[root@centos120 ~]# gluster volume stop gs1
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: gs1: success
[root@centos120 ~]# gluster  volume  delete  gs1
Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y
volume delete: gs1: success
[root@centos120 ~]# gluster volume info 
No volumes present
[root@centos120 ~]# 

Neither shrinking a volume nor deleting it wipes the data: the files remain on the corresponding brick disks. If such a brick directory is to be reused by a new volume, it has to be cleaned first, as sketched below.
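When an old brick directory is reused, gluster refuses with "... is already part of a volume" until its metadata is cleared. A minimal cleanup sketch (the path is an example, and this permanently destroys the data on that brick):

rm -rf /gluster/brick1/bk1/.glusterfs                        # drop gluster's internal metadata directory
setfattr -x trusted.glusterfs.volume-id /gluster/brick1/bk1  # remove the volume-id extended attribute
setfattr -x trusted.gfid /gluster/brick1/bk1                 # remove the gfid extended attribute
# or simply recreate the directory: rm -rf /gluster/brick1/bk1 && mkdir /gluster/brick1/bk1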

四, Building enterprise-grade distributed storage
4.1 Hardware requirements
Typically 2U servers are chosen, with 4TB SATA disks; if the I/O requirements are high, SSDs can be used instead. To keep the system stable and performant, all glusterfs servers should have hardware configurations that are as identical as possible, especially the number and size of the disks. The RAID controller should have a battery-backed cache; the larger the cache, the better the performance. RAID 10 is generally recommended; if capacity matters more, use RAID 5, ideally with one or two hot-spare disks.
4.2 System requirements and partitioning
Use CentOS 6.x or newer and apply the latest updates after installation. Do not use LVM during installation. A reasonable layout is a 200MB /boot partition, a 100GB root partition, swap equal to the amount of RAM, and the remaining space, ideally on separate disks, reserved for gluster bricks. No special software is required; apart from development tools and basic management utilities, avoid installing anything else. A sketch of preparing a brick filesystem on such a data disk follows.
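A sketch of preparing one of these data disks as a brick filesystem (device name and mount point are examples; upstream GlusterFS documentation recommends XFS with a 512-byte inode size for bricks):

mkfs.xfs -i size=512 /dev/sdb1                                  # format the brick partition
mkdir -p /gluster/brick1
echo '/dev/sdb1 /gluster/brick1 xfs defaults 0 0' >> /etc/fstab # mount it at boot
mount -a
mkdir -p /gluster/brick1/bk1                                    # directory actually used as the brick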
4.3 Network environment
The whole environment should be at least gigabit Ethernet. Each gluster server needs at least two NICs: one (or a bonded pair) dedicated to gluster traffic and one carrying the management IP. If the budget allows, 10-gigabit switches and NICs will give noticeably better storage performance. Where higher availability is required, bond multiple NICs.
4.4 Server placement
Primary and backup servers should be placed in different racks and connected to different switches, so that even if an entire rack fails, one copy of the data remains accessible.

4.5 Building high-performance, highly available storage
In most enterprises distributed replicated volumes are used: because the data is replicated it is relatively safe. Distributed striped volumes are not yet fully mature in glusterfs and carry a certain risk of data loss.
4.5.1 Opening firewall ports
In enterprise environments the Linux firewall is usually enabled, so the ports the gluster servers use to talk to each other have to be opened:

iptables -I INPUT -p tcp --dport 24007:24011 -j ACCEPT
iptables -I INPUT -p tcp --dport 49152:49162 -j ACCEPT
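On CentOS 7 hosts that keep firewalld running instead of raw iptables, the equivalent rules would look roughly like this (a sketch, assuming the same port ranges as above):

firewall-cmd --permanent --add-port=24007-24011/tcp
firewall-cmd --permanent --add-port=49152-49162/tcp
firewall-cmd --reload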

[root@glusterfs01 ~]# cat /etc/glusterfs/glusterd.vol 
volume management
    type mgmt/glusterd
    option working-directory /var/lib/glusterd
    option transport-type socket,rdma
    option transport.socket.keepalive-time 10
    option transport.socket.keepalive-interval 2
    option transport.socket.read-fail-log off
    option ping-timeout 0
    option event-threads 1
#   option base-port 49152      # the default base port can be changed here, e.g. if it conflicts with ports used by KVM in your environment

4.5.2 GlusterFS volume tuning
performance.quick-read: optimizes reads of small files.
performance.read-ahead: improves read performance by prefetching, which helps applications that access files frequently and sequentially; by the time the application has finished reading the current block, the next block is already available.
performance.write-behind: data is written to a buffer first and flushed to disk later, improving write performance.
performance.io-cache: caches data that has already been read.

How to change an option:
gluster volume set <VOLNAME> <OPTION> <VALUE>

[root@glusterfs01 ~]# gluster volume info gs2
Volume Name: gs2
Type: Replicate
Volume ID: c76fe8fd-71a7-4395-9dd2-ef1dc85163b8
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: glusterfs03:/gluster/brick1
Brick2: glusterfs04:/gluster/brick1
Options Reconfigured:
performance.readdir-ahead: on
[root@glusterfs01 ~]# gluster volume set gs2 performance.read-ahead on   # enable read-ahead
volume set: success 
[root@glusterfs01 ~]# gluster volume info gs2

Volume Name: gs2
Type: Replicate
Volume ID: c76fe8fd-71a7-4395-9dd2-ef1dc85163b8
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: glusterfs03:/gluster/brick1
Brick2: glusterfs04:/gluster/brick1
Options Reconfigured:
performance.read-ahead: on      # the option has been added
performance.readdir-ahead: on
[root@glusterfs01 ~]# gluster volume set gs2 performance.cache-size 256MB    # set the read cache size
volume set: success
[root@glusterfs01 ~]# gluster volume info gs2

Volume Name: gs2
Type: Replicate
Volume ID: c76fe8fd-71a7-4395-9dd2-ef1dc85163b8
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: glusterfs03:/gluster/brick1
Brick2: glusterfs04:/gluster/brick1
Options Reconfigured:
performance.cache-size: 256MB       # the option has been added
performance.read-ahead: on
performance.readdir-ahead: on

4.5.3 Monitoring and routine maintenance
The built-in Zabbix templates are sufficient: CPU, memory, host availability, disk space, uptime and system load. Check the monitored values regularly and handle alerts promptly.
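On top of the standard OS items, a small custom check can raise an alert as soon as a peer or a brick drops out. A minimal sketch of such a check script (the volume name gs2 and the exit codes are examples):

#!/bin/bash
# Alert (exit 2) if any peer is disconnected or any brick of gs2 is offline
if gluster peer status | grep -q "Disconnected"; then
    echo "CRITICAL: a gluster peer is disconnected"; exit 2
fi
# In 'gluster volume status' brick lines, the next-to-last column is the Online flag (Y/N)
if gluster volume status gs2 | grep "^Brick" | awk '{print $(NF-1)}' | grep -qx "N"; then
    echo "CRITICAL: a brick of gs2 is offline"; exit 2
fi
echo "OK"; exit 0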
# The heal commands below only apply to volumes with replication

#gluster volume status gs2                          check whether each node's brick/NFS service is online (i.e. whether the ports are open)
#gluster volume heal gs2 full                       start a full self-heal

#gluster volume heal gs2 info                       list the files that still need to be healed

#gluster volume heal gs2 info healed                list the files that were healed successfully

#gluster volume heal gs2 info heal-failed           list the files that failed to heal

#gluster volume heal gs2 info split-brain           list the files that are in split-brain

#gluster volume quota gs2 enable                    enable the quota feature

#gluster volume quota gs2 disable                   disable the quota feature

#gluster volume quota gs2 limit-usage /data 10GB    limit the /data directory of gs2 to 10GB

#gluster volume quota gs2 list                      list quota information

#gluster volume quota gs2 list /data                show quota information for the limited directory

#gluster volume set gs2 features.quota-timeout 5    set the timeout for cached quota information

#gluster volume quota gs2 remove /data              remove the quota setting for a directory

Note:

1) The quota feature limits the space used by a particular directory under the mount point, e.g. /mnt/glusterfs/data; it does not limit the space of the bricks that make up the volume.

五 Handling common failures in production

5.1 Disk failure
Because the disks sit behind a hardware RAID (RAID 5 here), a failed disk can simply be replaced and the controller rebuilds the data automatically.
5.2 Failure of an entire host
A node failure can take the following forms:
1. Physical hardware failure
2. Several disks failing at the same time, causing data loss
3. The operating system damaged beyond repair

Solution:
Prepare an identical machine, with at least the same number and size of disks. Install the operating system, configure the same IP address as the failed node, and install the gluster software with the same configuration. Then, on one of the healthy nodes, run gluster peer status to find the UUID of the failed server.

# For example:

[root@glusterfs03 ~]# gluster peer status
Number of Peers: 3

Hostname: glusterfs02
Uuid: 0b52290d-96b0-4b9c-988d-44062735a8a8
State: Peer in Cluster (Connected)

Hostname: glusterfs04
Uuid: a43ac51b-641c-4fc4-be56-f6873423b462
State: Peer in Cluster (Connected)

Hostname: glusterfs01
Uuid: 198f2c7c-1104-4671-8989-b430b77540e9
State: Peer in Cluster (Connected)
[root@glusterfs03 ~]# 

Edit /var/lib/glusterd/glusterd.info on the new machine so that its UUID matches the failed machine:

[root@glusterfs04 ~]# cat /var/lib/glusterd/glusterd.info 
UUID=a43ac51b-641c-4fc4-be56-f6873423b462
operating-version=30712
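After the UUID has been replaced, glusterd needs to be restarted on the new machine so that it rejoins the trusted pool under the old identity (a sketch; run on the replacement node):

systemctl restart glusterd
gluster peer status      # the other peers should now show as Connected again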

Then trigger a full self-heal against the volume (this can be run on any node):


[root@glusterfs04 ~]# gluster volume heal gs2 full
Launching heal operation to perform full self heal on volume gs2 has been successful 
Use heal info commands to check status

Synchronization then starts automatically; note that while it runs it will affect the performance of the whole system.
You can check the progress at any time:

[root@glusterfs04 ~]# gluster volume heal gs2 info
Brick glusterfs03:/gluster/brick1
Status: Connected
Number of entries: 0
Brick glusterfs04:/gluster/brick1
Status: Connected
Number of entries: 0