Red Hat iSCSI Shared Storage + GFS Installation and Configuration

Thanks to Wei Ge for his documentation and help. His blog: http://ylw6006.blog.51cto.com (the man is a guru)...

Environment:

OS: Red Hat Enterprise Linux 5.4
Target server: 10.0.0.52
Node 1: 10.0.0.53
Node 2: 10.0.0.54

Red Hat iSCSI shared storage installation:

Server side:

First install the scsi-target-utils package and set the tgtd service to start at boot, then carve out an LVM logical volume to use as the shared storage; here the LV is named Lvmydata.

[root@localhost ~]# yum install -y scsi-target-utils.x86_64
[root@localhost ~]# service tgtd start
Starting SCSI target daemon:                               [  OK  ]
[root@localhost ~]# chkconfig tgtd on
[root@localhost ~]# lvs
  LV       VG   Attr   LSize   Origin Snap%  Move Log Copy%  Convert
  Lvmydata vg01 -wi-ao 310.41G                                      
  Lvroot   vg01 -wi-ao  50.00G                                      
  Lvusr    vg01 -wi-ao  50.00G
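
The lvcreate step itself is not shown; a minimal sketch of how Lvmydata might have been carved out of vg01 (the size is an assumption read off the lvs output above):

[root@localhost ~]# lvcreate -L 310G -n Lvmydata vg01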

Every time the tgtd service is restarted, the targets and logical units previously bound with tgtadm are lost, so the following script simplifies re-creating them.

IQN naming convention: iqn.<date>.<reversed domain name>:<optional name>, for example iqn.2011-12-15.com.hsf.data:shareddisk.
Here all IPs are allowed to log in; when tearing the target down, delete the logical unit first and then the target.

[root@localhost ~]# vi /etc/init.d/tgtdrules   
#!/bin/sh  
# chkconfig: - 59 85  
# Source function library.  
 
. /etc/rc.d/init.d/functions  
 
start() {  
        echo -e "Starting Tgtdrules Server:\n"  
 
        # Target  
        tgtadm  --lld iscsi --op new --mode target --tid 1 -T iqn.2011-12-15.com.hsf.data:shareddisk  
 
        # Lun  
        tgtadm  --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /dev/vg01/Lvmydata  
 
        # Init  
        #tgtadm  --lld iscsi --op bind --mode target --tid 1 -I 10.0.0.53  
        #tgtadm  --lld iscsi --op bind --mode target --tid 1 -I 10.0.0.54  
        tgtadm  --lld iscsi --op bind --mode target --tid 1 -I ALL  
}  
 
stop() {  
        echo -e "Stopping Tgtdrules Server:\n"  

        # Init (unbind the initiator ACL before tearing the target down)  
        #tgtadm  --lld iscsi --op unbind --mode target --tid 1 -I 10.0.0.53  
        #tgtadm  --lld iscsi --op unbind --mode target --tid 1 -I 10.0.0.54  
        tgtadm  --lld iscsi --op unbind --mode target --tid 1 -I ALL  

        # Lun  
        tgtadm  --lld iscsi --op delete --mode logicalunit --tid 1 --lun 1  

        # Target  
        tgtadm  --lld iscsi --op delete --mode target --tid 1  
}  
 
status() {  
        tgtadm --lld iscsi -o show -m target
}  
case "$1" in  
  start)
  start
  ;;  
 
  stop)  
  stop  
  ;;  
 
  status)  
  status  
  ;;  
 
  *)  
  echo $"Usage: tgtdrules {start|stop|status}"  
  ;;  
 
esac  
exit 0
[root@localhost ~]# chmod  +x /etc/init.d/tgtdrules
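
The script already carries a chkconfig priority header; if a "# description:" line is added to the header as well, it can be registered to start at boot. A hedged sketch:

[root@localhost ~]# chkconfig --add tgtdrules
[root@localhost ~]# chkconfig tgtdrules on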

Server-side test:

[root@localhost ~]# service tgtd status 
tgtd (pid 29246 29245) is running...
[root@localhost ~]# netstat -ntpl| grep :3260
tcp        0      0 0.0.0.0:3260                0.0.0.0:*                   LISTEN      29245/tgtd          
tcp        0      0 :::3260                     :::*                        LISTEN      29245/tgtd
[root@localhost ~]# service tgtdrules start 
[root@localhost ~]# service tgtdrules status 
Target 1: iqn.2011-12-15.com.hsf.data:shareddisk
    System information:
        Driver: iscsi
        State: ready
    I_T nexus information:
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: deadbeaf1:0
            SCSI SN: beaf10
            Size: 0 MB
            Online: Yes
            Removable media: No
            Backing store: No backing store
        LUN: 1
            Type: disk
            SCSI ID: deadbeaf1:1
            SCSI SN: beaf11
            Size: 333296 MB
            Online: Yes
            Removable media: No
            Backing store: /dev/vg01/Lvmydata
    Account information:
    ACL information:
        ALL

Node installation:

Install the iscsi-initiator-utils package and set the iscsi service to start at boot.

[root@localhost ~]# yum install -y iscsi-initiator-utils.x86_64
[root@localhost ~]# service iscsi start
iscsid is stopped
Turning off network shutdown. Starting iSCSI daemon:       [  OK  ]
                                                           [  OK  ]
Setting up iSCSI targets: iscsiadm: No records found!
                                                           [  OK  ]
[root@localhost ~]# chkconfig iscsi on
[root@localhost ~]# service iscsi status
iscsid (pid  29527) is running...
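
Each node's initiator name (the iqn.1994-05.com.redhat:... strings that show up later in the target's I_T nexus output) is stored in /etc/iscsi/initiatorname.iscsi and can be checked with:

[root@localhost ~]# cat /etc/iscsi/initiatorname.iscsi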

The script below logs the iSCSI target in and out automatically; before logging in or out it first runs discovery once to check that the server-side share is reachable.

[root@localhost ~]# vi /etc/init.d/iscsiadmrules    
#!/bin/bash  
# chkconfig: - 20 85  
# Source function library.  
 
. /etc/rc.d/init.d/functions  
 
start() {  
echo -e "Starting Iscsiadmrules Server:\n"  
# Discover the targets exported by the server, then log in to the shared disk  
iscsiadm --mode discovery --type sendtargets --portal 10.0.0.52  
iscsiadm --mode node --targetname iqn.2011-12-15.com.hsf.data:shareddisk --portal 10.0.0.52:3260 --login  
}  

stop() {  
echo -e "Stopping Iscsiadmrules Server:\n"  
# Probe the server once more, then log out of the shared disk  
iscsiadm --mode discovery --type sendtargets --portal 10.0.0.52  
iscsiadm --mode node --targetname iqn.2011-12-15.com.hsf.data:shareddisk --portal 10.0.0.52:3260 --logout  
}  
 
case "$1" in  
 start)  
 start  
 ;;  
 
 stop)  
 stop  
 ;;  
esac  
exit 0  
[root@localhost ~]# chmod  +x /etc/rc.d/init.d/iscsiadmrules

Log in to the target:

[root@localhost ~]# service iscsiadmrules start 
Starting Iscsiadmrules Server:
10.0.0.52:3260,1 iqn.2011-12-15.com.hsf.data:shareddisk
Logging in to [iface: default, target: iqn.2011-12-15.com.hsf.data:shareddisk, portal: 10.0.0.52,3260]
Login to [iface: default, target: iqn.2011-12-15.com.hsf.data:shareddisk, portal: 10.0.0.52,3260]: successful
[root@localhost ~]# service iscsi status
iscsid (pid  29527) is running...
[root@localhost ~]# fdisk -l
Disk /dev/sda: 449.4 GB, 449495171072 bytes
255 heads, 63 sectors/track, 54648 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          25      200781   83  Linux
/dev/sda2              26        1069     8385930   82  Linux swap / Solaris
/dev/sda3            1070       54648   430373317+  8e  Linux LVM
Disk /dev/sdb: 333.2 GB, 333296173056 bytes
255 heads, 63 sectors/track, 40520 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdb doesn't contain a valid partition table

The shared storage now shows up as /dev/sdb.
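
The session can also be confirmed from the node itself; iscsiadm can list active sessions, and it should report one TCP session to 10.0.0.52:3260 for iqn.2011-12-15.com.hsf.data:shareddisk:

[root@localhost ~]# iscsiadm -m session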

Take a look on the server side.

[root@localhost ~]# service tgtdrules status 
Target 1: iqn.2011-12-15.com.hsf.data:shareddisk
    System information:
        Driver: iscsi
        State: ready
    I_T nexus information:
        I_T nexus: 1
            Initiator: iqn.1994-05.com.redhat:424056be2e26
            Connection: 0
                IP Address: 10.0.0.53
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: deadbeaf1:0
            SCSI SN: beaf10
            Size: 0 MB
            Online: Yes
            Removable media: No
            Backing store: No backing store
        LUN: 1
            Type: disk
            SCSI ID: deadbeaf1:1
            SCSI SN: beaf11
            Size: 333296 MB
            Online: Yes
            Removable media: No
            Backing store: /dev/vg01/Lvmydata
    Account information:
    ACL information:
        ALL

You can see that 10.0.0.53 has logged in.

Do the same on the other client node, then check again on the server:

[root@localhost ~]# service tgtdrules status 
Target 1: iqn.2011-12-15.com.hsf.data:shareddisk
    System information:
        Driver: iscsi
        State: ready
    I_T nexus information:
        I_T nexus: 1
            Initiator: iqn.1994-05.com.redhat:424056be2e26
            Connection: 0
                IP Address: 10.0.0.53
        I_T nexus: 2
            Initiator: iqn.1994-05.com.redhat:ba96a2c6310
            Connection: 0
                IP Address: 10.0.0.54
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: deadbeaf1:0
            SCSI SN: beaf10
            Size: 0 MB
            Online: Yes
            Removable media: No
            Backing store: No backing store
        LUN: 1
            Type: disk
            SCSI ID: deadbeaf1:1
            SCSI SN: beaf11
            Size: 333296 MB
            Online: Yes
            Removable media: No
            Backing store: /dev/vg01/Lvmydata
    Account information:
    ACL information:
        ALL

GFS Installation and Configuration

First configure yum:

[root@localhost ~]# cat /etc/yum.repos.d/base.repo 
[base]
name=RHEL 5.4 Server
baseurl=ftp://10.0.0.23/pub/Server/
gpgcheck=0
[VT]
name=RHEL 5.4 VT
baseurl=ftp://10.0.0.23/pub/VT/
gpgcheck=0
[Cluster]
name=RHEL 5.4 Cluster
baseurl=ftp://10.0.0.23/pub/Cluster
gpgcheck=0
[ClusterStorage]
name=RHEL 5.4 ClusterStorage
baseurl=ftp://10.0.0.23/pub/ClusterStorage
gpgcheck=0
[root@localhost ~]#  yum -y groupinstall "Cluster Storage"  "Clustering"
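
A quick way to confirm the pieces used later are actually installed (the exact package list is an assumption for RHEL 5.4):

[root@localhost ~]# rpm -q cman rgmanager lvm2-cluster gfs-utils kmod-gfs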

Create the cluster configuration file and start the related services; do the same configuration on both nodes.

[root@localhost ~]# vi /etc/cluster/cluster.conf
<?xml version="1.0" ?>
<cluster config_version="2" name="file_gfs">
        <fence_daemon post_fail_delay="0" post_join_delay="3"/>
        <clusternodes>
                <clusternode name="10.0.0.53" nodeid="1" votes="1">
                        <fence/>
                </clusternode>
                <clusternode name="10.0.0.54" nodeid="2" votes="1">
                        <fence/>
                </clusternode>
        </clusternodes>
        <cman expected_votes="1" two_node="1"/>
        <fencedevices/>
        <rm>
                <failoverdomains/>
                <resources/>
        </rm>
</cluster>
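
Both nodes need an identical copy of this file; since the cluster is not up yet, the simplest way is to copy it over by hand (paths as above):

[root@localhost ~]# scp /etc/cluster/cluster.conf 10.0.0.54:/etc/cluster/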
[root@localhost ~]# lvmconf --enable-cluster
[root@localhost ~]# service rgmanager start
Starting Cluster Service Manager:                          [  OK  ]
[root@localhost ~]# chkconfig rgmanager on
[root@localhost ~]# service ricci start
Starting system message bus:                               [  OK  ]
Starting oddjobd:                                          [  OK  ]
generating SSL certificates...  done
Starting ricci:                                            [  OK  ]
[root@localhost ~]#  chkconfig ricci on
[root@localhost ~]# service cman start
Starting cluster: 
   Loading modules... done
   Mounting configfs... done
   Starting ccsd... done
   Starting cman... done
   Starting daemons... done
   Starting fencing... done
                                                           [  OK  ]
[root@localhost ~]# chkconfig cman on
[root@localhost ~]# service clvmd start
Starting clvmd:                                            [  OK  ]
Activating VGs:   3 logical volume(s) in volume group "vg01" now active
                                                           [  OK  ]
[root@localhost ~]# chkconfig clvmd on
[root@localhost ~]#  clustat 
Cluster Status for file_gfs @ Thu Dec 15 13:53:40 2011
Member Status: Quorate
 Member Name                                                     ID   Status
 ------ ----                                                     ---- ------
 10.0.0.53                                                           1 Online, Local
 10.0.0.54                                                           2 Offline
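
A quick sanity check that lvmconf --enable-cluster took effect: it should have switched LVM to clustered locking (locking_type = 3) in /etc/lvm/lvm.conf, which is what clvmd relies on:

[root@localhost ~]# grep "^ *locking_type" /etc/lvm/lvm.conf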

Create the LVM volumes on the shared storage; this only needs to be done on one node.
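
Note that the commands below operate on /dev/sdb1, so the shared disk first needs a partition; that step is not shown here, but a sketch of doing it interactively with fdisk (new primary partition 1 spanning the disk, partition type 8e for Linux LVM) would be:

[root@localhost ~]# fdisk /dev/sdb      # then: n, p, 1, <Enter>, <Enter>, t, 8e, w

After writing the partition table, running partprobe /dev/sdb (or logging the iSCSI session out and back in) on the other node lets it see /dev/sdb1 as well.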

[root@localhost ~]# pvcreate /dev/sdb1 
  Physical volume "/dev/sdb1" successfully created
[root@localhost ~]# vgcreate file_gfs /dev/sdb1 
  Clustered volume group "file_gfs" successfully created
[root@localhost ~]# vgdisplay file_gfs
  --- Volume group ---
  VG Name               file_gfs
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  Clustered             yes
  Shared                no
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               310.40 GB
  PE Size               4.00 MB
  Total PE              79462
  Alloc PE / Size       0 / 0   
  Free  PE / Size       79462 / 310.40 GB
  VG UUID               XebmJz-qHFl-Pe5U-gvEL-y1Ml-HbRV-j3GvSd
[root@localhost mnt]# lvcreate -n gfs -l 79462 file_gfs
  Logical volume "gfs" created

If you get the following error:

[root@localhost ~]# lvcreate -n gfs1 -l 79462 file_gfs
  Error locking on node 10.0.0.54: Volume group for uuid not found: XebmJzqHFlPe5UgvELy1MlHbRVj3GvSdig4REYsoFSRsUE7byErDdOYcJwu3DTct
  Aborting. Failed to activate new LV to wipe the start of it.


You can first restart clvmd on both nodes.

[root@localhost ~]# service clvmd restart
Deactivating VG file_gfs:   0 logical volume(s) in volume group "file_gfs" now active
                                                           [  OK  ]
Stopping clvm:                                             [  OK  ]
Starting clvmd:                                            [  OK  ]
Activating VGs:   0 logical volume(s) in volume group "file_gfs" now active
  3 logical volume(s) in volume group "vg01" now active
                                                           [  OK  ]

If that still does not work, run vgs on both nodes to check whether they can both see the volume group.

[root@localhost mnt]# vgs
  VG       #PV #LV #SN Attr   VSize   VFree
  file_gfs   1   1   0 wz--nc 310.40G    0 
  vg01       1   3   0 wz--n- 410.41G    0 
[root@localhost ~]# service clvmd restart
Deactivating VG file_gfs:   0 logical volume(s) in volume group "file_gfs" now active
                                                           [  OK  ]
Stopping clvm:                                             [  OK  ]
Starting clvmd:                                            [  OK  ]
Activating VGs:   1 logical volume(s) in volume group "file_gfs" now active
  3 logical volume(s) in volume group "vg01" now active
                                                           [  OK  ]

Format the LVM volume:

[root@localhost ~]# gfs_mkfs -h
Usage:
gfs_mkfs [options] <device>
Options:
  -b <bytes>       Filesystem block size
  -D               Enable debugging code
  -h               Print this help, then exit
  -J <MB>          Size of journals
  -j <num>         Number of journals
  -O               Don't ask for confirmation
  -p <name>        Name of the locking protocol
  -q               Don't print anything
  -r <MB>          Resource Group Size
  -s <blocks>      Journal segment size
  -t <name>        Name of the lock table
  -V               Print program version information, then exit
[root@localhost ~]# gfs_mkfs -p lock_dlm  -t file_gfs:gfs -j 2 /dev/file_gfs/gfs   
This will destroy any data on /dev/file_gfs/gfs.
Are you sure you want to proceed? [y/n] y
Device:                    /dev/file_gfs/gfs
Blocksize:                 4096
Filesystem Size:           81297320
Journals:                  2
Resource Groups:           1242
Locking Protocol:          lock_dlm
Lock Table:                file_gfs:gfs
Syncing...
All Done
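
Note the -t lock table argument: it has the form <cluster name>:<fs name>, and the cluster name part must match the name="file_gfs" attribute in cluster.conf, otherwise the nodes will refuse to mount. A quick way to double-check the running cluster name:

[root@localhost ~]# cman_tool status | grep -i name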

Mount it on both nodes and test writing some data:

[root@localhost ~]# mount -t gfs /dev/file_gfs/gfs /data/
[root@localhost data]# dd if=/dev/zero of=aa bs=10M count=100
100+0 records in
100+0 records out
1048576000 bytes (1.0 GB) copied, 1.25589 seconds, 835 MB/s
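
To make the mount persistent across reboots, a common approach (a sketch; it assumes the /data mountpoint exists on both nodes and that the gfs init script is enabled so fstab entries of type gfs are mounted after cman and clvmd come up) is:

[root@localhost ~]# echo "/dev/file_gfs/gfs /data gfs defaults 0 0" >> /etc/fstab
[root@localhost ~]# chkconfig gfs on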
