Installing and Using GlusterFS on CentOS 7 (gluster/heketi)

1. GlusterFS installation

Install the server and enable it at boot:

yum -y install centos-release-gluster
yum -y install glusterfs-server
systemctl enable glusterd
systemctl start glusterd

Configure /etc/hosts on every machine (here every k8s node also acts as a GlusterFS server providing volumes):

vim /etc/hosts
    10.142.21.21    k8smaster01 
    10.142.21.22    k8smaster02
    10.142.21.23    k8smaster03
    10.142.21.24    k8sslave01
    10.142.21.25    k8sslave02
    10.142.21.26    k8sslave03

2. Using GlusterFS

Method 1: via the gluster command (gluster-client)

Install the GlusterFS client

Install via yum:

yum -y install centos-release-gluster
yum -y install glusterfs-client

Add the glusterfs-server entries to /etc/hosts on the client node:

vim /etc/hosts  
    10.142.21.21    k8smaster01 
    10.142.21.22    k8smaster02
    10.142.21.23    k8smaster03
    10.142.21.24    k8sslave01
    10.142.21.25    k8sslave02
    10.142.21.26    k8sslave03

Using the gluster command

Add the nodes to the trusted storage pool:

gluster peer probe k8smaster01
gluster peer probe k8smaster02
gluster peer probe k8smaster03
gluster peer probe k8sslave01
gluster peer probe k8sslave02
gluster peer probe k8sslave03
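
After all the probes succeed, pool membership can be verified from any node with the standard gluster commands:

gluster peer status
gluster pool list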

Create and use a GlusterFS volume (a replicated volume as the example):

  • Create a brick directory /gluster/gv0 on each machine:
mkdir -p /gluster/gv0
  • Create a 3-replica volume gv0 from the bricks on 3 of the machines:
gluster volume create gv0 replica 3 k8smaster01:/gluster/gv0 k8smaster02:/gluster/gv0 k8smaster03:/gluster/gv0 force
  • Start the volume:
gluster volume start gv0
  • On the client, mount the gv0 volume at /mnt/glusterfs and use it (a quick check follows below):
mkdir /mnt/glusterfs
mount -t glusterfs k8smaster01:/gv0 /mnt/glusterfs
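
After mounting, a quick sanity check (volume and mount-point names follow this example):

df -h /mnt/glusterfs
gluster volume info gv0

If the mount should survive reboots, an /etc/fstab entry along these lines can be added (a sketch; adjust the server name and options to your environment):

k8smaster01:/gv0  /mnt/glusterfs  glusterfs  defaults,_netdev  0 0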

Appendix: other related operations:

Remove a brick from the GlusterFS volume gv0:
    gluster volume remove-brick gv0 replica 2 k8smaster01:/gluster/gv0 force

Delete the GlusterFS volume gv0:
    The volume must be stopped first:
        gluster volume stop gv0
    Then delete it:
        gluster volume delete gv0
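
The remaining volumes can be listed afterwards to confirm the deletion:

gluster volume list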

Method 2: via the REST API provided by Heketi

First, generate the SSH key pair that Heketi uses to access the Gluster nodes

Heketi uses SSH to configure all of the GlusterFS nodes, so create an SSH key pair for it:

mkdir /etc/heketi
ssh-keygen -f /etc/heketi/heketi_key -t rsa -N ''
#chown heketi:heketi /etc/heketi/heketi_key*

This generates heketi_key and heketi_key.pub under /etc/heketi. Distribute the public key heketi_key.pub to every glusterfs node (including the node you are logged into), and point the keyfile entry in /etc/heketi/heketi.json at the generated private key heketi_key (full path):

ssh-copy-id -i /etc/heketi/heketi_key.pub root@10.142.21.21
...

Note: the ssh-copy-id line above is equivalent to the following three commands:

scp /etc/heketi/heketi_key.pub root@10.142.21.21:/tmp
ssh root@10.142.21.21
cat /tmp/heketi_key.pub >> /root/.ssh/authorized_keys
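
Before configuring heketi it is worth verifying that key-based login works from the heketi host to each gluster node, for example:

ssh -i /etc/heketi/heketi_key root@10.142.21.21 'gluster --version'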

Heketi installation (via yum)

Install it on one of the nodes:

yum -y install heketi heketi-client

Create the directory that will hold the heketi database:

mkdir /dcos/heketi
chown -R heketi:heketi /dcos/heketi

Configure heketi.json:

vim /etc/heketi/heketi.json

{
  "_port_comment": "Heketi Server Port Number",
  "port": "8088",

  "_use_auth": "Enable JWT authorization. Please enable for deployment",
  "use_auth": false,

  "_jwt": "Private keys for access",
  "jwt": {
    "_admin": "Admin has access to all APIs",
    "admin": {
      "key": "123456"
    },
    "_user": "User only has access to /volumes endpoint",
    "user": {
      "key": "123456"
    }
  },

  "_glusterfs_comment": "GlusterFS Configuration",
  "glusterfs": {
    "_executor_comment": [
      "Execute plugin. Possible choices: mock, ssh",
      "mock: This setting is used for testing and development.",
      "      It will not send commands to any node.",
      "ssh:  This setting will notify Heketi to ssh to the nodes.",
      "      It will need the values in sshexec to be configured."
    ],
    "executor": "ssh",

    "_sshexec_comment": "SSH username and private key file information",
    "sshexec": {
      "keyfile": "/etc/heketi/heketi_key",
      "user": "root"
    },

    "_db_comment": "Database file name",
    "db": "/dcos/heketi/heketi.db"
  }
}

Note: use the mock executor only for testing; use ssh for standalone mode, and kubernetes when Heketi runs under k8s.

Enable and restart the heketi service:

systemctl enable heketi
systemctl restart heketi

Check that heketi responds:

curl http://localhost:8088/hello
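
If the service is up, the endpoint returns a short greeting. It can also be exercised with heketi-cli against the same port, using the admin key configured above:

heketi-cli --server http://localhost:8088 --user admin --secret 123456 cluster list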

Heketi installation (containerized deployment)

You will need to create a directory that contains a subdirectory with the configuration (and the private key, if one is used), plus an empty directory for storing the database. The directories and files must be readable and writable by the user with id 1000, and if an SSH private key is used it must also have mode 0600.
Create the directories for the persistent data:

mkdir -p /dcos/heketi-docker/config
mkdir -p /dcos/heketi-docker/db
vim /dcos/heketi-docker/config/heketi.json     # heketi.json content is the same as above
cp /etc/heketi/heketi_key /dcos/heketi-docker/config/                   
chmod 600 /dcos/heketi-docker/config/heketi_key
chown 1000:1000 -R /dcos/heketi-docker

Start it with docker run:

# docker run --name=heketi -d -p 8089:8088 \
             -v /dcos/heketi-docker/config:/etc/heketi \
             -v /dcos/heketi-docker/db:/dcos/heketi \
             -v /etc/hosts:/etc/hosts \
             -v /etc/localtime:/etc/localtime \
             heketi/heketi:4    # this image contains both heketi and heketi-cli
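
Once the container is running, the API should answer on the mapped host port (8089 in this example):

docker ps --filter name=heketi
curl http://localhost:8089/hello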

Going a step further, heketi can be deployed on k8s; the heketi.yaml is as follows:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: heketi
  namespace: kube-system
  labels:
    app: heketi
spec:
  replicas: 1
  selector:
    matchLabels:
      app: heketi
  template:
    metadata:
      labels:
        app: heketi
    spec:
      containers:
        - name: heketi
          image: heketi/heketi:4
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: config
              mountPath: /etc/heketi
            - name: db
              mountPath: /dcos/heketi
            - name: time
              mountPath: /etc/localtime
            - name: hosts
              mountPath: /etc/hosts
          ports:
            - containerPort: 8088
              name: heketi-api
      volumes:
        - name: config
          hostPath:
            path: /dcos/heketi-docker/config
        - name: db
          hostPath:
            path: /dcos/heketi-docker/db
        - name: time
          hostPath:
            path: /etc/localtime
        - name: hosts
          hostPath:
            path: /etc/hosts
      nodeSelector:
        kubernetes.io/hostname: k8smaster03
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: heketi
  name: heketi
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - name: heketi
    port: 8088
    protocol: TCP
    targetPort: 8088
    nodePort: 30088
  selector:
    app: heketi
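
After applying the manifest, the pod and the NodePort service can be checked and the hello endpoint reached through any node IP (the node IP below is from this example environment):

kubectl apply -f heketi.yaml
kubectl get pods -n kube-system -l app=heketi
curl http://10.142.21.23:30088/hello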

Using heketi-cli

Initialize the Heketi cluster:

  • Create a cluster:
heketi-cli --server http://10.142.21.23:30088 --user admin  --json=true cluster create

{"id":"6c7c910c74b80fe2a043e527d03d6fc1","nodes":[],"volumes":[]}
  • Add each of the 6 nodes to the cluster in turn:
heketi-cli --server http://10.142.21.23:30088 --user admin --secret 123456 --json=true node add --cluster="6c7c910c74b80fe2a043e527d03d6fc1" --management-host-name=10.142.21.21 --storage-host-name=10.142.21.21 --zone=1
...

Note:
When integrating with k8s, management-host-name in the command above must be an IP address, not a hostname; otherwise controller-manager reports errors such as:

glusterfs: failed to create endpoint Endpoints "glusterfs-dynamic-gluster-pvc1" is invalid: [subsets[0].addresses[0].ip: Invalid value: "lk-glusterfs-47-80": must be a valid IP address, (e.g. 10.9.8.7), subsets[0].addresses[1].ip: Invalid value: "lk-glusterfs-47-79": must be a valid IP address, (e.g. 10.9.8.7), subsets[0].addresses[2].ip: Invalid value: "lk-glusterfs-47-78": must be a valid IP address, (e.g. 10.9.8.7)]
  • Attach a raw disk /dev/sdc (with no existing partitions) to each node and add it as a device (the resulting IDs can be reviewed as shown below):
heketi-cli --server http://10.142.21.23:30088 --user admin --secret 123456 --json=true device add --name="/dev/sdc" --node="a117cd328d609acc15e88dc0b6ab4889"
...
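
The cluster and node details created so far can be reviewed at any time, for example (IDs are the ones returned above):

heketi-cli --server http://10.142.21.23:30088 --user admin --secret 123456 cluster info 6c7c910c74b80fe2a043e527d03d6fc1
heketi-cli --server http://10.142.21.23:30088 --user admin --secret 123456 node info a117cd328d609acc15e88dc0b6ab4889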

In fact, the three steps above can be replaced by loading a single topology file:

  • Create the topology.json file:
vim /etc/heketi/topology.json
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": [
                "10.142.21.24"
              ],
              "storage": [
                "10.142.21.24"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/sdc"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "10.142.21.25"
              ],
              "storage": [
                "10.142.21.25"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/sdc"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "10.142.21.26"
              ],
              "storage": [
                "10.142.21.26"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/sdc"
          ]
        }
      ]
    }
  ]
}

The file format is fairly simple: it tells heketi to create a 3-node cluster, where each node's configuration contains its FQDN or management IP address, its storage IP address, and at least one spare block device to be used for GlusterFS bricks.

  • Load the file into heketi:
When heketi runs under systemd:
    heketi-cli --server http://10.142.21.21:8088 --user admin --secret 123456 topology load --json=/etc/heketi/topology.json

When heketi & heketi-cli run in a container:
    docker cp topology.json <container-id>:/etc/heketi/
    docker exec <container-id> heketi-cli --server http://10.142.21.21:8088 --user admin --secret 123456 topology load --json=/etc/heketi/topology.json

When heketi & heketi-cli run on k8s:
    kubectl cp topology.json heketi-67d99d8bb6-bzsvx:/etc/heketi/ -n kube-system
    kubectl exec -it heketi-67d99d8bb6-bzsvx -n kube-system heketi-cli topology load -- --json=/etc/heketi/topology.json --server http://10.142.21.23:30088 --user admin --secret 123456

Result:
Once the load succeeds, heketi creates an LVM volume group (e.g. vg_2d8771e16bfe0b267fe2b7133584af43) on each gluster node, which can be seen with vgscan or vgdisplay.
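
For example, on one of the gluster nodes (the VG name differs per node):

vgs
vgdisplay vg_2d8771e16bfe0b267fe2b7133584af43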

Create a volume:

When heketi runs under systemd:
    heketi-cli --server http://10.132.47.79:8088 --user admin --secret 123456 volume create --size=100 --replica=3 --clusters=d691b29a06374c4da7e94bba71d027bf

When heketi runs in a container:
    docker exec 容器ID heketi-cli --server http://10.142.21.23:30088 --user admin --secret 123456 volume create --size=100 --replica=3 --clusters=d691b29a06374c4da7e94bba71d027bf

When heketi & heketi-cli run on k8s:
    kubectl exec -it heketi-5ff9bb8c89-dzsc9 -n kube-system -- heketi-cli --server http://10.142.21.21:30088 --user admin --secret 123456 volume create --size=100 --replica=3 --clusters=d691b29a06374c4da7e94bba71d027bf

Result:
On the corresponding nodes, a logical volume is created inside the heketi volume group and mounted at a path such as /var/lib/heketi/mounts/vg_7157a2d1d7899269823997ad62e6debd/brick_1185ef33c9719d9063ccc2ccd0df96e7/brick, which serves as the GlusterFS brick.

On a gluster node, gluster volume info shows the volume that heketi created:

gluster volume info

Volume Name: vol_50b5237866378d655af326a74fc7d68c
Type: Distributed-Replicate
Volume ID: b82dc1b3-b59c-48cd-ace1-155a0c05c42f
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: 10.142.21.22:/var/lib/heketi/mounts/vg_7157a2d1d7899269823997ad62e6debd/brick_1185ef33c9719d9063ccc2ccd0df96e7/brick
Brick2: 10.142.21.27:/var/lib/heketi/mounts/vg_a850891bc0f47849bfdbdb115ee19656/brick_75894b35ae36dd5253bd79df1539b970/brick
Brick3: 10.142.21.21:/var/lib/heketi/mounts/vg_78a29014ac0df3d31b1a357e096e8917/brick_3423914cb4d9873c670f5fab0ffe44e5/brick
Brick4: 10.142.21.25:/var/lib/heketi/mounts/vg_25cc714589cec30e3550297cced4ce44/brick_6ca46dd4a13f7ac0b598d83fbf43c2fc/brick
Brick5: 10.142.21.26:/var/lib/heketi/mounts/vg_12e5dbb7b9372e0d72b8ec2166b68048/brick_87e969be58d93ff7deba65587d2e5637/brick
Brick6: 10.142.21.24:/var/lib/heketi/mounts/vg_54b5dc50bfe00644a39e289d9abf7c99/brick_5a21762ac6c5ab35472648455310af56/brick
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
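
The heketi-created volume can be mounted from a client just like the gv0 example earlier, using the volume name reported above:

mount -t glusterfs 10.142.21.21:/vol_50b5237866378d655af326a74fc7d68c /mnt/glusterfs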

The overall topology (clusters, nodes, devices, volumes, and bricks) can be inspected with:

heketi-cli --server http://10.142.21.21:30088 --user admin --secret 123456 topology info

References:
1. https://wiki.centos.org/SpecialInterestGroup/Storage/gluster-Quickstart
2. https://jimmysong.io/kubernetes-handbook/practice/storage-for-containers-using-glusterfs-with-openshift.html
