100. GlusterFS

1. Installation

Machine preparation:
10.20.16.214
10.20.16.227
10.20.16.228

//1. Install the Gluster yum repository (run on all three nodes)
[root@host214 ~]# yum install centos-release-gluster -y
Loaded plugins: fastestmirror, langpacks
Determining fastest mirrors
 * base: centos.ustc.edu.cn
 * extras: mirrors.aliyun.com
 * updates: centos.ustc.edu.cn
base 
···
//2. Install the GlusterFS packages (run on all three nodes)
[root@host228 ~]# yum install -y glusterfs glusterfs-server glusterfs-fuse glusterfs-rdma
//3. Create the working directories (run on all three nodes)
[root@host214  ~]# mkdir -p /data/gluster
//storage (brick) directory
[root@host214  ~]# mkdir -p /data/gluster/data 
//log directory
[root@host214  ~]# mkdir -p /data/gluster/log
//4. Change the log directory (run on all three nodes)
[root@host214 ~]# vim /etc/sysconfig/glusterd
# Change the glusterd service defaults here.
# See "glusterd --help" outpout for defaults and possible values.
GLUSTERD_LOGFILE="/data/gluster/log/gluster.log"
#GLUSTERD_LOGFILE="/var/log/gluster/gluster.log"
#GLUSTERD_LOGLEVEL="NORMAL"
//5. Enable and start the glusterd service (run on all three nodes)
[root@host214 ~]# systemctl enable glusterd.service
[root@host214 ~]# systemctl start glusterd.service
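If firewalld is active on these nodes, the peers will not be able to reach each other until the GlusterFS ports are opened. A minimal sketch, run on all three nodes; the port ranges below are the usual defaults (24007-24008 for glusterd management, 49152 and up for bricks) and may need adjusting for your environment:

//open the GlusterFS management and brick ports
firewall-cmd --permanent --add-port=24007-24008/tcp
firewall-cmd --permanent --add-port=49152-49251/tcp
firewall-cmd --reload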
//6. Check the cluster status
[root@host214 ~]# gluster peer status
Number of Peers: 0
//7. Add nodes to the cluster (to remove a node: gluster peer detach hostname)
[root@host214 ~]# gluster peer probe 10.20.16.214
peer probe: success. Probe on localhost not needed
[root@host214 ~]# gluster peer probe 10.20.16.227
peer probe: success. 
[root@host214 ~]# gluster peer probe 10.20.16.228
//8. Check the cluster status again
[root@host214 ~]# gluster peer status
Number of Peers: 2

Hostname: 10.20.16.227
Uuid: cdede1bc-e2c9-4cac-a414-084e6dd5a57a
State: Peer in Cluster (Connected)

Hostname: 10.20.16.228
Uuid: 0fbe5e1d-d1eb-4bf6-bcd2-44646707aaf9
State: Peer in Cluster (Connected)

2. Volume modes

  • Distributed mode (DHT): the default mode; each file is placed on a single server node chosen by a hash of its name.

    There is no redundancy: if a node fails, the data stored on it is lost.

    gluster volume create NEW-VOLNAME [transport [tcp | rdma | tcp,rdma]] NEW-BRICK...
    gluster volume create volume-tomcat 10.20.16.227:/data/gluster/data 10.20.16.228:/data/gluster/data
    
    (figure: default.png)
  • Replicated mode (AFR): specify a replica count, and each file is copied to replica x nodes.

    As the name implies, replication needs at least two nodes and achieves reliability through data redundancy.

    gluster volume create NEW-VOLNAME [replica COUNT] [transport [tcp | rdma | tcp,rdma]] NEW-BRICK...
    gluster volume create test-volume replica 2 transport tcp 10.20.16.227:/data/gluster/data 10.20.16.228:/data/gluster/data
    
    (figure: Replicated.png)
  • Distributed replicated mode: needs at least 4 servers (bricks) to create; see the four-brick sketch after this list.

    Use it when you want both high availability through redundancy and the ability to scale out.

    gluster volume create NEW-VOLNAME [replica COUNT] [transport [tcp | rdma | tcp,rdma]] NEW-BRICK...
    gluster volume create test-volume replica 2 transport tcp 10.20.16.227:/data/gluster/data/tomcat 10.20.16.228:/data/gluster/data/tomcat
    
    (figure: Distributed_Replicated.png)
  • Striped mode:

    This mode is mainly for large files, which are split and stored across multiple bricks. Without striping, a large file keeps a single client tied to one server for a long time, so the load cannot be spread.

     gluster volume create NEW-VOLNAME [stripe COUNT] [transport [tcp | rdma | tcp,rdma]] NEW-BRICK...
     gluster volume create test-volume stripe 2 transport tcp 10.20.16.227:/data/gluster/data 10.20.16.228:/data/gluster/data
    
    (figure: striped.png)
  • Distributed striped mode

    Compared with plain striped mode, the stripes are additionally distributed across multiple brick sets.

    gluster volume create NEW-VOLNAME [stripe COUNT] [transport [tcp | rdma | tcp,rdma]] NEW-BRICK...
    gluster volume create volume-tomcat stripe 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4
    
    (figure: Distributed_Striped.png)
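As noted for the distributed replicated mode above, replica 2 with only two bricks yields a plain replicated volume; a distributed replicated layout needs at least replica x 2 bricks, and the bricks are grouped into replica sets in the order they are listed. A sketch with four hypothetical bricks (server1..server4 and /exp1../exp4 are placeholders):

    gluster volume create test-volume replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4

Here server1:/exp1 and server2:/exp2 form one replica pair, server3:/exp3 and server4:/exp4 the other, and files are distributed across the two pairs.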

3. Using volumes

[root@host214  ~]# gluster volume info
No volumes present
//1. Create a volume (default DHT mode); volume-tomcat is the volume name
//a dedicated disk mounted for the brick directories is recommended
[root@host214 ~]# gluster volume create volume-tomcat 10.20.16.227:/data/gluster/data 10.20.16.228:/data/gluster/data
//2. Check the volume status
[root@host214 ~]# gluster volume status
Volume volume-tomcat is not started
//3. Start the volume
[root@host214 ~]# gluster volume start volume-tomcat
volume start: volume-tomcat: success
//4. Check the volume status again
[root@host214 ~]# gluster volume status
Status of volume: volume-tomcat
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.20.16.227:/data/gluster/data       49152     0          Y       28547
Brick 10.20.16.228:/data/gluster/data       49152     0          Y       27653
 
Task Status of Volume volume-tomcat
------------------------------------------------------------------------------
There are no active volume tasks

//5. View the volume information
[root@host214 ~]#  gluster volume info volume-tomcat
Volume Name: volume-tomcat
Type: Distribute
Volume ID: 0fe1d91c-0938-481f-8fff-c693451d07d8
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 10.20.16.227:/data/gluster/data
Brick2: 10.20.16.228:/data/gluster/data
Options Reconfigured:
transport.address-family: inet
nfs.disable: on //NFS is disabled by default; enable it with: gluster volume set volume-tomcat nfs.disable off

//6. Enable NFS support
[root@host214 yum.repos.d]# gluster volume set volume-tomcat nfs.disable off
Gluster NFS is being deprecated in favor of NFS-Ganesha Enter "yes" to continue using Gluster NFS (y/n) y
volume set: success
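With Gluster NFS enabled, the volume can also be mounted from a plain NFS client. Gluster's built-in NFS server only speaks NFSv3, so vers=3 should be forced; a sketch (the mount point /mnt/tomcat-nfs is hypothetical):

//mount the volume over NFSv3 from any NFS-capable client
mount -t nfs -o vers=3 10.20.16.227:/volume-tomcat /mnt/tomcat-nfs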

//7. Stop and delete the volume
[root@host214 ~]# gluster volume stop  volume-tomcat 
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: volume-tomcat: success
[root@host214 ~]# gluster volume delete volume-tomcat 
Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y
volume delete: volume-tomcat: success

4. Mounting a volume on a client

//1. Install the client packages
[root@host229 ~]# yum install -y glusterfs glusterfs-fuse
//2. Create the mount point and mount the volume
[root@host229 gluster]# mkdir -p /data/gluster/mnt/tomcat
[root@host229 mnt]# mount -t glusterfs 10.20.16.227:volume-tomcat  /data/gluster/mnt/tomcat
//3. Verify the mount
[root@host229 mnt]# df -h 
Filesystem                         Size  Used Avail Use% Mounted on
/dev/mapper/centos_host22900-root  459G  5.1G  430G   2% /
devtmpfs                            63G     0   63G   0% /dev
tmpfs                               63G     0   63G   0% /dev/shm
tmpfs                               63G  4.0G   59G   7% /run
tmpfs                               63G     0   63G   0% /sys/fs/cgroup
/dev/sda2                          1.9G  143M  1.6G   9% /boot
/dev/sdb                           733G  6.2G  690G   1% /data
tmpfs                               13G   12K   13G   1% /run/user/42
none                               500M     0  500M   0% /data/new-root
tmpfs                               63G   12K   63G   1% /data/cloud/work/kubernetes/kubelet/pods/71d91318-e25f-11e8-ac35-001b21992e84/volumes/kubernetes.io~secret/kube-router-token-92lqd
/dev/dm-3                           10G   34M   10G   1% /data/cloud/work/docker/devicemapper/mnt/823dfab8bc081bdc58bd99b18fbad1e020b0e1015fd1eca09264a29f06c40462
/dev/dm-4                           10G  138M  9.9G   2% /data/cloud/work/docker/devicemapper/mnt/7bbf2c271cb9a7fcadf076f4b51d8be77078176668df802ef2b73fd980b65039
tmpfs                               63G   12K   63G   1% /data/cloud/work/kubernetes/kubelet/pods/d864b038-e261-11e8-ac35-001b21992e84/volumes/kubernetes.io~secret/default-token-tpwrt
/dev/dm-5                           10G   34M   10G   1% /data/cloud/work/docker/devicemapper/mnt/88f1371654520f9ecd338feb5b07454eb9fabec3d0fc96e9cb9a548d83d827dc
shm                                 64M     0   64M   0% /data/cloud/work/docker/containers/5e092c97514a74eb4aea962ec4bee811a0fd8d974a438c01cd5c9a720aaaa5fd/mounts/shm
/dev/dm-6                           10G  513M  9.5G   6% /data/cloud/work/docker/devicemapper/mnt/084293e6be234edf62f63477ed3f630735b04a30309368c21d3c906467004749
tmpfs                               13G     0   13G   0% /run/user/0
10.20.16.227:volume-tomcat          733G   11G  693G   2% /data/gluster/mnt/tomcat
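To make the mount survive reboots, an /etc/fstab entry can be added on the client. A minimal sketch; the backup-volfile-servers option is optional and assumes 10.20.16.228 can also serve the volfile:

//append to /etc/fstab on the client
10.20.16.227:/volume-tomcat  /data/gluster/mnt/tomcat  glusterfs  defaults,_netdev,backup-volfile-servers=10.20.16.228  0 0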

5. Installing heketi

[root@host214 ~]# yum install heketi heketi-client -y

Heketi's executor plugin can be configured in three ways:

  • mock: for development and testing; no commands are sent to any node
  • ssh: Heketi logs in to the other nodes over SSH to run commands, so passwordless SSH must be configured
  • kubernetes: used when GlusterFS is deployed in containers and integrated with Kubernetes
//Passwordless SSH setup, run on all four machines; the runs on the other machines are omitted here...
[root@host214 ~]#  mkdir /etc/heketi/
[root@host214 ~]# ssh-keygen -f /etc/heketi/heketi_key -t rsa -N ''
Generating public/private rsa key pair.
Your identification has been saved in /etc/heketi/heketi_key.
Your public key has been saved in /etc/heketi/heketi_key.pub.
The key fingerprint is:
SHA256:lm9fd1nmahvqkZAqCl5nDGTduCL4yiq6NdAe8UttHSs root@host214
The key's randomart image is:
+---[RSA 2048]----+
|                 |
|     . o         |
|  . o o o        |
| o = . o + .     |
|o + = E S o     o|
| + + * o o . . oo|
|  * o = . o o o.+|
|o+ + + . . . +.+.|
|Oo. .      .+.o. |
+----[SHA256]-----+
[root@host214 heketi]# ssh-copy-id -i /etc/heketi/heketi_key.pub [email protected]
[root@host214 heketi]# ssh-copy-id -i /etc/heketi/heketi_key.pub [email protected]
[root@host214 heketi]# ssh-copy-id -i /etc/heketi/heketi_key.pub [email protected]
[root@host214 heketi]# ssh-copy-id -i /etc/heketi/heketi_key.pub [email protected]
//Edit the heketi.json configuration file
[root@host214 ~]#  vim /etc/heketi/heketi.json

The settings to adjust are annotated below:

{
  "_port_comment": "Heketi Server Port Number",
  "port": "8080", //指定服务端口

  "_use_auth": "Enable JWT authorization. Please enable for deployment",
  "use_auth": true,

  "_jwt": "Private keys for access",
  "jwt": {
    "_admin": "Admin has access to all APIs",
    "admin": {
      "key": "password"   //指定管理员密码
    },
    "_user": "User only has access to /volumes endpoint",
    "user": {
      "key": "password"   //指定普通用户密码
    }
  },

  "_glusterfs_comment": "GlusterFS Configuration",
  "glusterfs": {
    "_executor_comment": [
      "Execute plugin. Possible choices: mock, ssh",
      "mock: This setting is used for testing and development.",
      "      It will not send commands to any node.",
      "ssh:  This setting will notify Heketi to ssh to the nodes.",
      "      It will need the values in sshexec to be configured.",
      "kubernetes: Communicate with GlusterFS containers over",
      "            Kubernetes exec api."
    ],
    "executor": "ssh",   //采用ssh方式

    "_sshexec_comment": "SSH username and private key file information",
    "sshexec": {
      "keyfile": "/etc/heketi/heketi_key",  //指定上面产生的公钥地址
      "user": "root",
      "port": "22",
      "fstab": "/etc/fstab"
    },

    "_kubeexec_comment": "Kubernetes configuration",
    "kubeexec": {
      "host" :"https://kubernetes.host:8443",
      "cert" : "/path/to/crt.file",
      "insecure": false,
      "user": "kubernetes username",
      "password": "password for kubernetes user",
      "namespace": "OpenShift project or Kubernetes namespace",
      "fstab": "Optional: Specify fstab file on node.  Default is /etc/fstab"
    },

    "_db_comment": "Database file name",
    "db": "/var/lib/heketi/heketi.db",

    "_loglevel_comment": [
      "Set log level. Choices are:",
      "  none, critical, error, warning, info, debug",
      "Default is warning"
    ],
    "loglevel" : "debug"
  }
}
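Note that the // annotations above are explanatory only; JSON has no comment syntax, so they must not appear in the actual /etc/heketi/heketi.json. A quick sanity check after editing (a sketch, assuming Python is installed):

python -m json.tool /etc/heketi/heketi.json > /dev/null && echo "heketi.json is valid JSON"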

Gluster cluster topology file: /etc/heketi/heketi-topology.json

{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": [
                "10.20.16.214"
              ],
              "storage": [
                "10.20.16.214"
              ]
            },
            "zone": 1
          },
          "devices": [
            {
              "name": "/dev/nvme0n1p3",
              "destroydata": false
            }
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "10.20.16.227"
              ],
              "storage": [
                "10.20.16.227"
              ]
            },
            "zone": 1
          },
          "devices": [
            {
              "name": "/dev/nvme0n1p3",
              "destroydata": false
            }
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "10.20.16.228"
              ],
              "storage": [
                "10.20.16.228"
              ]
            },
            "zone": 1
          },
          "devices": [
            {
              "name": "/dev/sda4",
              "destroydata": false
            }
         ]
        }
      ]
    }
  ]
}

Start heketi

[root@host214 heketi]# nohup heketi --config=/etc/heketi/heketi.json &
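Instead of nohup, heketi can also be run as a service (an assumption here: the heketi package installs a systemd unit named heketi.service). Either way, a quick sanity check is heketi's /hello endpoint, which does not require authentication:

//optional: manage heketi with systemd instead of nohup
systemctl enable heketi && systemctl start heketi
//sanity check: heketi should answer with "Hello from Heketi"
curl http://10.20.16.214:8080/hello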
//Partition the disk and create a PV; since there is no dedicated device here, a new partition is used as the device
[root@host214 heketi]# fdisk /dev/nvme0n1
Command (m for help): n
Partition type:
   p   primary (1 primary, 0 extended, 3 free)
   e   extended
Select (default p): 
Using default response p
Partition number (2-4, default 2): 
First sector (2007029168-3907029167, default 2007029760): 
Using default value 2007029760
Last sector, +sectors or +size{K,M,G} (2007029760-3907029167, default 3907029167): 3007029167
Partition 2 of type Linux and of size 476.9 GiB is set

Command (m for help): n
Partition type:
   p   primary (2 primary, 0 extended, 2 free)
   e   extended
Select (default p): 
Using default response p
Partition number (3,4, default 3): 
First sector (2007029168-3907029167, default 3007029248): 
Using default value 3007029248
Last sector, +sectors or +size{K,M,G} (3007029248-3907029167, default 3907029167): 
Using default value 3907029167
Partition 3 of type Linux and of size 429.2 GiB is set

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.
//Refresh the partition information; sometimes the new device is not visible right after partitioning
[root@host214 heketi]# partprobe /dev/nvme0n1p3
//Create the PV; the steps on the other machines are the same
[root@host214 heketi]# pvcreate /dev/nvme0n1p3
WARNING: ext4 signature detected on /dev/nvme0n1p3 at offset 1080. Wipe it? [y/n]: y
  Wiping ext4 signature on /dev/nvme0n1p3.
  Physical volume "/dev/nvme0n1p3" successfully created.
//Load the cluster topology into heketi
[root@host214 heketi]# heketi-cli --server http://10.20.16.214:8080 --user admin --secret "password" topology load --json=heketi-topology.json
    Found node 10.20.16.214 on cluster a50fb14fbea763203f1503b186fe7ca4
        Found device /dev/nvme0n1p3 ... OK
    Found node 10.20.16.227 on cluster a50fb14fbea763203f1503b186fe7ca4
        Adding device /dev/nvme0n1p3 ... OK
    Found node 10.20.16.228 on cluster a50fb14fbea763203f1503b186fe7ca4
        Adding device /dev/sda4 ... OK
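Once the topology is loaded, heketi can provision GlusterFS volumes end to end (LV, brick, and volume creation) through heketi-cli. A sketch of the most common calls; the size and replica values are only examples:

//inspect the cluster heketi now manages
heketi-cli --server http://10.20.16.214:8080 --user admin --secret "password" topology info
//create a 10 GB replica-2 volume through heketi, then list volumes
heketi-cli --server http://10.20.16.214:8080 --user admin --secret "password" volume create --size=10 --replica=2
heketi-cli --server http://10.20.16.214:8080 --user admin --secret "password" volume list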

6. Containerized deployment

Heketi can also run as a container. A docker-compose service definition looks like the following; the surrounding services/service-name structure, host networking, and restart policy are assumptions added to make the original fragment complete:

 version: "3"
 services:
   heketi:
     image: dk-reg.op.douyuyuba.com/library/heketi:5
     network_mode: host   # assumption: host networking so heketi is reachable on port 8080
     restart: always
     volumes:
       - "/etc/heketi:/etc/heketi"
       - "/var/lib/heketi:/var/lib/heketi"
       - "/etc/localtime:/etc/localtime"

QA

volume create: share: failed: parent directory /data/gluster/data/tomcat is already part of a volume

This happens when a brick directory (or its parent) was used by a previous volume and still carries Gluster's extended attributes; clear them on the brick path and remove the .glusterfs directory:

setfattr -x trusted.glusterfs.volume-id /data/share  #replace /data/share with your actual brick path
setfattr -x trusted.gfid /data/share   #same as above; a "no such attribute" message is harmless
rm /data/share/.glusterfs -rf

heketi fails to start
Error: unknown shorthand flag: 'c' in -config=/etc/heketi/heketi.json
unknown shorthand flag: 'c' in -config=/etc/heketi/heketi.json

Use --config, not -config.
