GlusterFS introduction
GlusterFS is the core of Red Hat's scale-out storage solution. It is an open source, clustered file system capable of scaling to several petabytes and handling thousands of clients. GlusterFS aggregates various storage servers over Ethernet or InfiniBand RDMA interconnects into one large parallel network file system. It aggregates disk and memory resources and manages data in a single global namespace. GlusterFS is based on a stackable user-space design, delivering exceptional performance for diverse workloads.
Installation of GlusterFS in RHEL 6.5 on System z
There are many documents about installing GlusterFS on Red Hat or Fedora for the x86 architecture, but guides for zLinux are seldom found, so I wrote this simple guide to show how to install it on zLinux.
Environment preparation
Operating environment: Red Hat 6.5 on System z, two nodes, one client
Node 1:
Name: ora1    IP Addr: 172.16.27.142
Node 2:
Name: ora2    IP Addr: 172.16.27.143
Client:
IP Addr: 172.16.31.81
Make sure /etc/hosts is the same on node1, node2, and the client, so that all of the hostnames resolve consistently.
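For reference, the relevant entries would look like the following (the client's hostname, rhel65, is inferred from the shell prompts later in this guide):
172.16.27.142   ora1
172.16.27.143   ora2
172.16.31.81    rhel65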
1>. Install GlusterFS from source code on each node.
Download the latest Gluster source from http://www.gluster.org/download
2>. Extract the source code using the following command:
tar -xvf glusterfs-3.5.2.tar.gz
GlusterFS needs flex, python, bison, and openssl. Before building the software, install these prerequisite packages; you can find them on the OS installation DVD (flex-2.5.4a-13.s390x.rpm, glibc-kernheaders-2.4-9.1.87.src.rpm, openssl-devel-1.0.1e-15.el6.s390x.rpm, bison-2.6.5).
Use rpm -ivh <package>.rpm to install each of them.
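For example, assuming the installation DVD is mounted at /media/cdrom (adjust the path to match your system):
rpm -ivh /media/cdrom/Packages/flex-2.5.4a-13.s390x.rpm
rpm -ivh /media/cdrom/Packages/bison-*.s390x.rpm
rpm -ivh /media/cdrom/Packages/openssl-devel-1.0.1e-15.el6.s390x.rpm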
3>. Run the configuration utility using the following commands:
cd glusterfs-3.5.2
./configure
At the end of its output, configure prints a summary of what will be built:
GlusterFS configure summary
===========================
FUSE client : yes
Infiniband verbs : no
epoll IO multiplex : yes
argp-standalone : no
fusermount : yes
readline : no
georeplication : yes
Linux-AIO : yes
Enable Debug : no
systemtap : no
Block Device xlator : no
glupy : no
Use syslog : yes
XML output : no
QEMU Block formats : no
Encryption xlator : no
The configuration summary shows the components that will be built with GlusterFS.
4>. Build and install the GlusterFS software using the following commands:
make
make install
5>. Verify that the correct version of GlusterFS is installed, using the following command:
/usr/local/sbin/glusterfs -V
glusterfs 3.5.2 built on Oct 22 2014 03:17:25
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2013 Red Hat, Inc. http://www.redhat.com/
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.
Install GlusterFS on the client as well.
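The client build follows the same steps used on the servers:
tar -xvf glusterfs-3.5.2.tar.gz
cd glusterfs-3.5.2
./configure
make
make install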
Testing
A. Starting the GlusterFS service
[root@ora1 ~]# /etc/init.d/glusterd start
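glusterd must also be running on the second node before the peer probe below can succeed, so start it there the same way (and, if the init script supports chkconfig on your system, chkconfig glusterd on will enable it at boot):
[root@ora2 ~]# /etc/init.d/glusterd start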
B. Adding servers to the trusted storage pool
[root@ora1 glusterfs]# /usr/local/sbin/gluster
gluster> peer probe ora2
peer probe: success.
Verify the peer status from the first server using the following command:
gluster> peer status
Number of Peers: 1
Hostname: ora2
Uuid: 87484e4c-8647-4bdc-b716-5878ff7f90d4
State: Peer in Cluster (Connected)
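You can run the same check from the second server; ora1 should be listed there as a connected peer:
[root@ora2 ~]# /usr/local/sbin/gluster peer status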
C. Setting up GlusterFS server volumes
On both node1 and node2, create the brick directory:
mkdir /dir1
Then create a replicated volume (the force option is needed here because the bricks sit on the root partition):
gluster> volume create dir1 replica 2 ora1:/dir1 ora2:/dir1 force
volume create: dir1: success: please start the volume to access data
gluster> volume start dir1
volume start: dir1: success
(Optional) You can display the volume information:
gluster> volume info
Volume Name: dir1
Type: Replicate
Volume ID: e0a9cb48-e5ed-4ded-8450-6708f36eeb94
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: ora1:/dir1
Brick2: ora2:/dir1
D. Setting up GlusterFS client
Now manually mount the Gluster volume. We will mount dir1 from the server to the local directory /mnt/glusterfs:
[root@rhel65 glusterfs-3.5.2]# mkdir /mnt/glusterfs
[root@rhel65 glusterfs-3.5.2]# mount -t glusterfs ora1:/dir1 /mnt/glusterfs
Verify that the mount succeeded:
[root@rhel65 glusterfs-3.5.2]# mount -t fuse.glusterfs
If it shows the message below, the mount was successful:
ora1:/dir1 on /mnt/glusterfs type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
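To make the mount persist across reboots, an /etc/fstab entry like the following can be used (the _netdev option delays mounting until the network is up):
ora1:/dir1  /mnt/glusterfs  glusterfs  defaults,_netdev  0 0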
[root@rhel65 glusterfs-3.5.2]# cd /mnt/glusterfs/
[root@rhel65 glusterfs]# touch file1 file2 file3
[root@rhel65 glusterfs]# ls -l
total 0
-rw-r--r-- 1 root root 0 Oct 22 03:34 file1
-rw-r--r-- 1 root root 0 Oct 22 03:34 file2
-rw-r--r-- 1 root root 0 Oct 22 03:34 file3
Because we created dir1 as a replicated volume, copies of the files are kept across multiple bricks in the volume.
Verify node1 and node2:
Node1:
[root@ora1 dir1]# ls -l
total 0
-rw-r--r--. 2 root root 0 Oct 22 2014 file1
-rw-r--r--. 2 root root 0 Oct 22 2014 file2
-rw-r--r--. 2 root root 0 Oct 22 2014 file3
Node2:
[root@ora2 ~]# cd /dir1
[root@ora2 dir1]# ls -l
total 0
-rw-r--r--. 2 root root 0 Oct 22 2014 file1
-rw-r--r--. 2 root root 0 Oct 22 2014 file2
-rw-r--r--. 2 root root 0 Oct 22 2014 file3
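As a further check that replication is working, write some data through the client mount and read it back from each brick; both bricks should show the same content:
[root@rhel65 glusterfs]# echo hello > file1
[root@ora1 dir1]# cat file1
hello
[root@ora2 dir1]# cat file1
hello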