Repost: Red Hat GFS Cluster File System Configuration Guide

This section is a brief walkthrough of configuring Red Hat's cluster file system, GFS. Unlike ordinary file systems such as ext3, UFS, or NTFS, a cluster file system uses distributed lock management, which lets multiple nodes read and write the same file system at the same time. The major cluster file systems are IBM's GPFS, Oracle's OCFS, and Red Hat's GFS. I must admit that three technical goals had long eluded me: Oracle Data Guard, Grid Control, and GFS. As of today all three are finally more or less done, and the next step is to store the archived logs of a RAC environment on GFS.

1. Environment overview

Node 1 IP: 192.168.1.51/24

OS: RHEL 5.4 x86_64 (KVM guest)

Hostname: dg51.yang.com

Node 2 IP: 192.168.1.52/24

OS: RHEL 5.4 x86_64 (KVM guest)

Hostname: dg52.yang.com

Shared storage IP: 192.168.1.100/24

OS: RHEL 6.0 x86_64

Hostname: rhel6.yang.com

2. Configure and partition the shared storage

[root@dg51 ~]# fdisk -l /dev/sda

Disk /dev/sda: 10.7 GB, 10737418240 bytes
64 heads, 32 sectors/track, 10240 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1       10240    10485744   83  Linux

For the iSCSI shared-storage setup itself, see: http://www.linuxidc.com/Linux/2011-09/43537.htm

3. Install the clustering and cluster-storage package groups (on both nodes)

[root@dg51 ~]# cat /etc/yum.repos.d/base.repo
[base]
name=base
baseurl=ftp://192.168.1.100/pub/iso5/Server
gpgcheck=0
enabled=1

[Cluster]
name=Cluster
baseurl=ftp://192.168.1.100/pub/iso5/Cluster
gpgcheck=0
enabled=1

[ClusterStorage]
name=ClusterStorage
baseurl=ftp://192.168.1.100/pub/iso5/ClusterStorage
gpgcheck=0
enabled=1

(Note: the yum option is spelled "enabled", not "enable".)
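The three repository stanzas above can be written in one step with a heredoc. A minimal sketch, assuming the FTP exports at 192.168.1.100 shown above; it writes to a scratch file so it can be tried without touching /etc/yum.repos.d:

```shell
# Write all three repo stanzas at once. REPO_FILE is a scratch path for
# illustration; on a real node it would be /etc/yum.repos.d/base.repo.
REPO_FILE=$(mktemp)
cat > "$REPO_FILE" <<'EOF'
[base]
name=base
baseurl=ftp://192.168.1.100/pub/iso5/Server
gpgcheck=0
enabled=1

[Cluster]
name=Cluster
baseurl=ftp://192.168.1.100/pub/iso5/Cluster
gpgcheck=0
enabled=1

[ClusterStorage]
name=ClusterStorage
baseurl=ftp://192.168.1.100/pub/iso5/ClusterStorage
gpgcheck=0
enabled=1
EOF
# Sanity check: three [section] headers present
STANZAS=$(grep -c '^\[' "$REPO_FILE")
echo "stanzas: $STANZAS"
```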

[root@dg51 ~]# yum -y groupinstall "Cluster Storage"  "Clustering"

 

4. Create the configuration file and start the related daemons (same steps on both nodes)

 

[root@dg51 ~]# system-config-cluster

 

[root@dg51 ~]# cat /etc/cluster/cluster.conf
<?xml version="1.0" ?>
<cluster config_version="2" name="dg_gfs">
        <fence_daemon post_fail_delay="0" post_join_delay="3"/>
        <clusternodes>
                <clusternode name="dg51.yang.com" nodeid="1" votes="1">
                        <fence/>
                </clusternode>
                <clusternode name="dg52.yang.com" nodeid="2" votes="1">
                        <fence/>
                </clusternode>
        </clusternodes>
        <cman expected_votes="1" two_node="1"/>
        <fencedevices/>
        <rm>
                <failoverdomains/>
                <resources/>
        </rm>
</cluster>
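One detail in this file that is worth checking by script: when `two_node="1"` is set, cman also requires `expected_votes="1"`, otherwise it refuses to start. A small sketch of that sanity check, run against a scratch copy of the relevant line (the mktemp path is just for illustration):

```shell
# Scratch copy of the <cman .../> line from cluster.conf above, for checking.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
<cman expected_votes="1" two_node="1"/>
EOF
# A two_node="1" cluster is only valid together with expected_votes="1".
if grep -q 'two_node="1"' "$CONF" && grep -q 'expected_votes="1"' "$CONF"; then
    VOTES_OK=yes
else
    VOTES_OK=no
fi
echo "cman votes settings consistent: $VOTES_OK"
```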

 

Copy the configuration file straight to node 2, then enable cluster locking in LVM:
[root@dg51 ~]# scp /etc/cluster/cluster.conf dg52:/etc/cluster/
[root@dg51 ~]# lvmconf --enable-cluster

 

[root@dg51 ~]# service rgmanager start
Starting Cluster Service Manager:                          [  OK  ]
[root@dg51 ~]# chkconfig rgmanager on

 

[root@dg51 ~]# service ricci start
Starting oddjobd:                                          [  OK  ]
generating SSL certificates...  done
Starting ricci:                                            [  OK  ]
[root@dg51 ~]# chkconfig ricci on

 

[root@dg51 ~]# service cman start
Starting cluster:
   Loading modules... done
   Mounting configfs... done
   Starting ccsd... done
   Starting cman... done
   Starting daemons... done
   Starting fencing... done
                                                           [  OK  ]
[root@dg51 ~]# chkconfig cman on

 

[root@dg51 ~]# service clvmd start
Starting clvmd:                                            [  OK  ]
Activating VGs:                                            [  OK  ]
[root@dg51 ~]# chkconfig clvmd on

 

[root@dg51 ~]# clustat
Cluster Status for dg_gfs @ Sat Dec 10 17:25:09 2011
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 dg51.yang.com                               1 Online, Local
 dg52.yang.com                               2 Online

5. Create LVM volumes on the shared storage (one node only)

 

[root@dg51 ~]# fdisk -l /dev/sda (before starting, change the partition type to 8e, Linux LVM)

 

Disk /dev/sda: 10.7 GB, 10737418240 bytes
64 heads, 32 sectors/track, 10240 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1       10240    10485744   8e  Linux LVM

 

[root@dg51 ~]# pvcreate /dev/sda1
  Physical volume "/dev/sda1" successfully created
[root@dg51 ~]# vgcreate dg_gfs /dev/sda1
  Clustered volume group "dg_gfs" successfully created
[root@dg51 ~]# vgdisplay dg_gfs
  --- Volume group ---
  VG Name               dg_gfs
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  Clustered             yes
  Shared                no
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               10.00 GB
  PE Size               4.00 MB
  Total PE              2559
  Alloc PE / Size       0 / 0
  Free  PE / Size       2559 / 10.00 GB
  VG UUID               hMivT2-FIuF-QX2N-EXU9-CaZF-wR5a-8QS7t4

 

[root@dg51 ~]# lvcreate -n gfs1 -l 2559 dg_gfs
  Logical volume "gfs1" created
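The `-l 2559` argument is the Free PE count reported by vgdisplay above; with 4 MB extents it allocates the whole volume group. A quick check of the arithmetic:

```shell
# vgdisplay reported Free PE = 2559 and PE Size = 4.00 MB,
# so -l 2559 allocates the entire volume group.
PE_COUNT=2559
PE_SIZE_MB=4
LV_SIZE_MB=$((PE_COUNT * PE_SIZE_MB))
echo "LV size: ${LV_SIZE_MB} MB"   # 10236 MB, just under 10 GB
```

lvcreate also accepts `-l 100%FREE`, which avoids transcribing the extent count by hand.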

 

If the following error appears, restarting the clvmd daemon on both nodes clears it:
[root@dg51 ~]# lvcreate -n gfs1 -l 2559 dg_gfs
  Error locking on node dg52.yang.com: Volume group for uuid not found:
hMivT2FIuFQX2NEXU9CaZFwR5a8QS7t4Ft4RMjI9V6a3jUudYQe0i1IygtIlaHxc
  Aborting. Failed to activate new LV to wipe the start of it.

 

[root@dg51 ~]# service clvmd restart
Deactivating VG dg_gfs:   0 logical volume(s) in volume group "dg_gfs" now active  [  OK  ]
Stopping clvm:                                             [  OK  ]
Starting clvmd:                                            [  OK  ]
Activating VGs:   0 logical volume(s) in volume group "dg_gfs" now active

 

[root@dg51 ~]# service clvmd status
clvmd (pid 5494) is running...
active volumes: gfs1                                       [  OK  ]

 

[root@dg51 ~]# lvscan
  ACTIVE            '/dev/dg_gfs/gfs1' [10.00 GB] inherit

[root@dg52 ~]# lvscan
  ACTIVE            '/dev/dg_gfs/gfs1' [10.00 GB] inherit

 

6. Format the LVM volume with GFS

 

[root@dg51 ~]# gfs_mkfs -h
Usage:

gfs_mkfs [options] <device>

Options:

  -b <bytes>       Filesystem block size
  -D               Enable debugging code
  -h               Print this help, then exit
  -J <MB>          Size of journals
  -j <num>         Number of journals
  -O               Don't ask for confirmation
  -p <name>        Name of the locking protocol
  -q               Don't print anything
  -r <MB>          Resource Group Size
  -s <blocks>      Journal segment size
  -t <name>        Name of the lock table
  -V               Print program version information, then exit

 

[root@dg51 ~]# gfs_mkfs -p lock_dlm -t dg_gfs:gfs -j 2 /dev/dg_gfs/gfs1
This will destroy any data on /dev/dg_gfs/gfs1.
  It appears to contain a gfs filesystem.

Are you sure you want to proceed? [y/n] y

Device:                    /dev/dg_gfs/gfs1
Blocksize:                 4096
Filesystem Size:           2554644
Journals:                  2
Resource Groups:           40
Locking Protocol:          lock_dlm
Lock Table:                dg_gfs:gfs

Syncing...
All Done
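The `-t` value has the form `<clustername>:<fsname>`, and the cluster-name half must match `name="dg_gfs"` in cluster.conf, or the nodes will refuse to mount the filesystem (`-j 2` likewise matches the two-node count, one journal per node). A small sketch of that check, with both values hard-coded from this walkthrough:

```shell
# Lock table passed to gfs_mkfs, and the cluster name from cluster.conf.
LOCK_TABLE="dg_gfs:gfs"
CLUSTER_NAME="dg_gfs"    # value of name= in /etc/cluster/cluster.conf
# Everything before the colon must equal the cluster name.
TABLE_CLUSTER=${LOCK_TABLE%%:*}
if [ "$TABLE_CLUSTER" = "$CLUSTER_NAME" ]; then MATCH=yes; else MATCH=no; fi
echo "lock table cluster part: $TABLE_CLUSTER (match: $MATCH)"
```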

 

7. Mount on each node and test writing data

 

[root@dg51 ~]# mount -t gfs /dev/mapper/dg_gfs-gfs1 /dg_archivelog/
[root@dg51 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/vda3              28G  8.6G   18G  34% /
/dev/vda1              99M   12M   83M  13% /boot
tmpfs                 391M     0  391M   0% /dev/shm
/dev/mapper/dg_gfs-gfs1
                      9.8G   20K  9.8G   1% /dg_archivelog

 

[root@dg52 ~]# mkdir /dg_archivelog2
[root@dg52 ~]# mount -t gfs /dev/dg_gfs/gfs1 /dg_archivelog2/
[root@dg52 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/vda3              28G  9.1G   17G  36% /
/dev/vda1              99M   12M   83M  13% /boot
tmpfs                 391M     0  391M   0% /dev/shm
/dev/mapper/dg_gfs-gfs1
                      9.8G   20K  9.8G   1% /dg_archivelog2

 

[root@dg52 ~]# cp /etc/hosts /dg_archivelog2/
[root@dg51 ~]# cat /dg_archivelog/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1               localhost.localdomain localhost
::1             localhost6.localdomain6 localhost6
192.168.1.51            dg51.yang.com   dg51
192.168.1.52            dg52.yang.com   dg52
192.168.1.55            grid5.yang.com  grid5

 

To mount the filesystem automatically at boot, add an entry to /etc/fstab:
[root@dg51 ~]# tail -1 /etc/fstab
/dev/mapper/dg_gfs-gfs1  /dg_archivelog         gfs     defaults        0 0
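As a sketch, the entry can be appended and sanity-checked from a script; this version works on a scratch copy rather than the real /etc/fstab, so it is safe to experiment with:

```shell
# Scratch file standing in for /etc/fstab, so this is safe to run anywhere.
FSTAB=$(mktemp)
echo '/dev/mapper/dg_gfs-gfs1  /dg_archivelog  gfs  defaults  0 0' >> "$FSTAB"
# A valid fstab entry parses into exactly six whitespace-separated fields.
FIELDS=$(awk '$3 == "gfs" { print NF }' "$FSTAB")
echo "gfs entry fields: $FIELDS"
```

Note that on RHEL 5 it is the gfs init script (`chkconfig gfs on`) that actually mounts fstab GFS entries at boot, since cman and clvmd have to come up first.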

 

 

8. Performance test

 

[root@dg51 ~]# dd if=/dev/zero of=/dg_archivelog/gfs_test bs=10M count=100
100+0 records in
100+0 records out
1048576000 bytes (1.0 GB) copied, 9.11253 seconds, 115 MB/s

The shared storage here is simulated over iSCSI, so I/O performance is modest; this setup is only suitable for learning.
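The 115 MB/s figure is simply bytes copied divided by elapsed time, with dd counting decimal megabytes. Reproducing the arithmetic:

```shell
# dd reported 1048576000 bytes in 9.11253 s; it derives MB/s as
# bytes / seconds / 1e6 (decimal megabytes, not MiB).
RATE=$(awk 'BEGIN { printf "%.0f", 1048576000 / 9.11253 / 1000000 }')
echo "${RATE} MB/s"
```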

 

 

 

9. Manage the cluster from a web browser (luci)

 

[root@dg51 ~]# luci_admin init
Initializing the luci server

Creating the 'admin' user

Enter password:
Confirm password:

Please wait...
The admin password has been successfully set.
Generating SSL certificates...
The luci server has been successfully initialized
You must restart the luci server for changes to take effect.
Run "service luci restart" to do so

[root@dg51 ~]# service luci restart
Shutting down luci:                                        [  OK  ]
Starting luci: Generating https SSL certificates...  done
                                                           [  OK  ]

Point your web browser to https://dg51.yang.com:8084 to access luci
