Outline
I. Preparation
II. iSCSI Installation and Configuration
III. cman Installation and Configuration
IV. cLVM Installation and Configuration
V. gfs2 Installation and Configuration
I. Preparation
System environment
CentOS5.8 x86_64
Initiator
node1.network.com node1 172.16.1.101
node2.network.com node2 172.16.1.105
node3.network.com node3 172.16.1.106
Target
node4.network.com /dev/hda 172.16.1.111
Packages
iscsi-initiator-utils-6.2.0.872-16.el5.x86_64.rpm
scsi-target-utils-1.0.14-2.el5
cman-2.0.115-124.el5.x86_64.rpm
rgmanager-2.0.52-54.el5.centos.x86_64.rpm
lvm2-cluster-2.02.88-10.el5.x86_64.rpm
gfs2-utils-0.1.62-44.el5.x86_64.rpm
Topology diagram
1. Time synchronization
[root@node1 ~]# ntpdate s2c.time.edu.cn
[root@node2 ~]# ntpdate s2c.time.edu.cn
[root@node3 ~]# ntpdate s2c.time.edu.cn
[root@node4 ~]# ntpdate s2c.time.edu.cn

A crontab job can be defined on each node as needed:
[root@node1 ~]# which ntpdate
/sbin/ntpdate
[root@node1 ~]# echo "*/5 * * * * /sbin/ntpdate s2c.time.edu.cn &> /dev/null" >> /var/spool/cron/root
[root@node1 ~]# crontab -l
*/5 * * * * /sbin/ntpdate s2c.time.edu.cn &> /dev/null
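The crontab entry above only exists on node1 at this point. As a small sketch (assuming root SSH access between the nodes, as used for scp later in this section), the same entry can be pushed to the other nodes in one loop; alternatively, simply repeat the echo command on each node:
[root@node1 ~]# for node in node2 node3 node4; do
>     scp /var/spool/cron/root $node:/var/spool/cron/root
> done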
2. The hostname must match the output of uname -n and resolve via /etc/hosts
node1
[root@node1 ~]# hostname node1.network.com
[root@node1 ~]# uname -n
node1.network.com
[root@node1 ~]# sed -i 's@\(HOSTNAME=\).*@\1node1.network.com@g' /etc/sysconfig/network

node2
[root@node2 ~]# hostname node2.network.com
[root@node2 ~]# uname -n
node2.network.com
[root@node2 ~]# sed -i 's@\(HOSTNAME=\).*@\1node2.network.com@g' /etc/sysconfig/network

node3
[root@node3 ~]# hostname node3.network.com
[root@node3 ~]# uname -n
node3.network.com
[root@node3 ~]# sed -i 's@\(HOSTNAME=\).*@\1node3.network.com@g' /etc/sysconfig/network

node4
[root@node4 ~]# hostname node4.network.com
[root@node4 ~]# uname -n
node4.network.com
[root@node4 ~]# sed -i 's@\(HOSTNAME=\).*@\1node4.network.com@g' /etc/sysconfig/network

Add hosts entries on node1:
[root@node1 ~]# vim /etc/hosts
[root@node1 ~]# cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1       CentOS5.8 CentOS5 localhost.localdomain localhost
::1             localhost6.localdomain6 localhost6
172.16.1.101    node1.network.com node1
172.16.1.105    node2.network.com node2
172.16.1.106    node3.network.com node3
172.16.1.111    node4.network.com node4

Copy this hosts file to node2:
[root@node1 ~]# scp /etc/hosts node2:/etc/
The authenticity of host 'node2 (172.16.1.105)' can't be established.
RSA key fingerprint is 13:42:92:7b:ff:61:d8:f3:7c:97:5f:22:f6:71:b3:24.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node2,172.16.1.105' (RSA) to the list of known hosts.
root@node2's password:
hosts                     100%  233     0.2KB/s   00:00

Copy this hosts file to node3:
[root@node1 ~]# scp /etc/hosts node3:/etc/
The authenticity of host 'node3 (172.16.1.110)' can't be established.
RSA key fingerprint is 13:42:92:7b:ff:61:d8:f3:7c:97:5f:22:f6:71:b3:24.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node3,172.16.1.110' (RSA) to the list of known hosts.
hosts                     100%  320     0.3KB/s   00:00

Copy this hosts file to node4:
[root@node1 ~]# scp /etc/hosts node4:/etc/
The authenticity of host 'node4 (172.16.1.111)' can't be established.
RSA key fingerprint is 13:42:92:7b:ff:61:d8:f3:7c:97:5f:22:f6:71:b3:24.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node4,172.16.1.111' (RSA) to the list of known hosts.
hosts                     100%  358     0.4KB/s   00:00
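As an extra sanity check (optional, not part of the original steps), name resolution and reachability for all four nodes can be verified from node1 in one loop:
[root@node1 ~]# for node in node1 node2 node3 node4; do
>     getent hosts $node
>     ping -c 1 $node > /dev/null && echo "$node is reachable"
> done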
3. Disable iptables and SELinux
node1
[root@node1 ~]# service iptables stop
[root@node1 ~]# vim /etc/sysconfig/selinux
[root@node1 ~]# cat /etc/sysconfig/selinux
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#       enforcing - SELinux security policy is enforced.
#       permissive - SELinux prints warnings instead of enforcing.
#       disabled - SELinux is fully disabled.
#SELINUX=permissive
SELINUX=disabled
# SELINUXTYPE= type of policy in use. Possible values are:
#       targeted - Only targeted network daemons are protected.
#       strict - Full SELinux protection.
SELINUXTYPE=targeted

node2, node3, and node4 get exactly the same change: stop iptables and set SELINUX=disabled in /etc/sysconfig/selinux.
[root@node2 ~]# service iptables stop
[root@node2 ~]# vim /etc/sysconfig/selinux
[root@node3 ~]# service iptables stop
[root@node3 ~]# vim /etc/sysconfig/selinux
[root@node4 ~]# service iptables stop
[root@node4 ~]# vim /etc/sysconfig/selinux
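Note that "service iptables stop" and the edit to /etc/sysconfig/selinux only take full effect from the next boot onwards. On each node, two extra commands keep the firewall off across reboots and, if SELinux is currently enabled, switch it to permissive mode immediately without a reboot (shown for node1):
[root@node1 ~]# chkconfig iptables off
[root@node1 ~]# setenforce 0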
II. iSCSI Installation and Configuration
1. Install scsi-target-utils
This only needs to be installed on the shared-storage host (the Target):
[root@node4 ~]# hostname
node4.network.com
[root@node4 ~]# yum install -y scsi-target-utils
2. Edit the configuration file
[root@node4 ~]# vim /etc/tgt/targets.conf
[root@node4 ~]# grep -A 7 "^<target" /etc/tgt/targets.conf
<target iqn.2016-01.com.network:teststore.disk1>
    <backing-store /dev/hda>
        vendor_id soysauce
        lun 2
    </backing-store>
    incominguser iscsiuser iscsiuser
    initiator-address 172.16.0.0/16
</target>
3. Start the service and enable it at boot
[root@node4 ~]# service tgtd start
Starting SCSI target daemon: Starting target framework daemon

Check the status information:
[root@node4 ~]# tgtadm --lld iscsi --mode target --op show
Target 1: iqn.2016-01.com.network:teststore.disk1
    System information:
        Driver: iscsi
        State: ready
    I_T nexus information:
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: IET 00010000
            SCSI SN: beaf10
            Size: 0 MB, Block size: 1
            Online: Yes
            Removable media: No
            Readonly: No
            Backing store type: null
            Backing store path: None
            Backing store flags:
        LUN: 2
            Type: disk
            SCSI ID: IET 00010002
            SCSI SN: beaf12
            Size: 21475 MB, Block size: 512
            Online: Yes
            Removable media: No
            Readonly: No
            Backing store type: rdwr
            Backing store path: /dev/hda
            Backing store flags:
    Account information:
        iscsiuser
    ACL information:
        172.16.0.0/16

[root@node4 ~]# chkconfig tgtd on
[root@node4 ~]# chkconfig --list tgtd
tgtd            0:off   1:off   2:on    3:on    4:on    5:on    6:off
The Target configuration is complete.
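As an extra check (not part of the original transcript), tgtd should now be listening on the default iSCSI port, 3260/tcp:
[root@node4 ~]# netstat -tnlp | grep 3260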
4. Install and configure the three Initiators
node1
[root@node1 ~]# yum install -y iscsi-initiator-utils
[root@node1 ~]# echo "InitiatorName=iqn.2016-01.com.network:node1" > /etc/iscsi/initiatorname.iscsi
[root@node1 ~]# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.2016-01.com.network:node1
Configure the account and password used for CHAP authentication by enabling and editing the following three entries:
[root@node1 ~]# vim /etc/iscsi/iscsid.conf
node.session.auth.authmethod = CHAP
node.session.auth.username = iscsiuser
node.session.auth.password = iscsiuser

node2
[root@node2 ~]# yum install -y iscsi-initiator-utils
[root@node2 ~]# echo "InitiatorName=iqn.2016-01.com.network:node2" > /etc/iscsi/initiatorname.iscsi
[root@node2 ~]# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.2016-01.com.network:node2
Configure the account and password used for CHAP authentication by enabling and editing the following three entries:
[root@node2 ~]# vim /etc/iscsi/iscsid.conf
node.session.auth.authmethod = CHAP
node.session.auth.username = iscsiuser
node.session.auth.password = iscsiuser

node3
[root@node3 ~]# yum install -y iscsi-initiator-utils
[root@node3 ~]# echo "InitiatorName=iqn.2016-01.com.network:node3" > /etc/iscsi/initiatorname.iscsi
[root@node3 ~]# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.2016-01.com.network:node3
Configure the account and password used for CHAP authentication by enabling and editing the following three entries:
[root@node3 ~]# vim /etc/iscsi/iscsid.conf
node.session.auth.authmethod = CHAP
node.session.auth.username = iscsiuser
node.session.auth.password = iscsiuser
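To confirm that the three CHAP entries were actually uncommented and set, a quick grep on each node should print the three lines shown above (shown for node1):
[root@node1 ~]# grep "^node.session.auth" /etc/iscsi/iscsid.conf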
5. Start the iscsi service on all three nodes and enable it at boot
node1
[root@node1 ~]# service iscsi start
iscsid (pid 2318) is running...
Setting up iSCSI targets: iscsiadm: No records found
                                                           [  OK  ]
[root@node1 ~]# chkconfig iscsi on
[root@node1 ~]# chkconfig --list iscsi
iscsi           0:off   1:off   2:on    3:on    4:on    5:on    6:off

node2
[root@node2 ~]# service iscsi start
iscsid (pid 3427) is running...
Setting up iSCSI targets: iscsiadm: No records found
                                                           [  OK  ]
[root@node2 ~]# chkconfig iscsi on
[root@node2 ~]# chkconfig --list iscsi
iscsi           0:off   1:off   2:on    3:on    4:on    5:on    6:off

node3
[root@node3 ~]# service iscsi start
iscsid (pid 5762) is running...
Setting up iSCSI targets: iscsiadm: No records found
                                                           [  OK  ]
[root@node3 ~]# chkconfig iscsi on
[root@node3 ~]# chkconfig --list iscsi
iscsi           0:off   1:off   2:on    3:on    4:on    5:on    6:off
6. Discover the Target and log in
node1
Discover the Target; after discovery a record directory is saved under /var/lib/iscsi/send_targets:
[root@node1 ~]# iscsiadm -m discovery -t st -p 172.16.1.111
172.16.1.111:3260,1 iqn.2016-01.com.network:teststore.disk1
[root@node1 ~]# ls /var/lib/iscsi/send_targets/172.16.1.111,3260/
iqn.2016-01.com.network:teststore.disk1,172.16.1.111,3260,1,default  st_config

At this point the only local disks are /dev/hda and /dev/sda:
[root@node1 ~]# fdisk -l

Disk /dev/hda: 21.4 GB, 21474836480 bytes
15 heads, 63 sectors/track, 44384 cylinders
Units = cylinders of 945 * 512 = 483840 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hda1               1        2068      977098+  83  Linux

Disk /dev/sda: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux
/dev/sda2              14        2610    20860402+  8e  Linux LVM

Log in to the Target:
[root@node1 ~]# iscsiadm -m node -T iqn.2016-01.com.network:teststore.disk1 -p 172.16.1.111 -l
Logging in to [iface: default, target: iqn.2016-01.com.network:teststore.disk1, portal: 172.16.1.111,3260] (multiple)
Login to [iface: default, target: iqn.2016-01.com.network:teststore.disk1, portal: 172.16.1.111,3260] successful.

Check the disks again; a new disk /dev/sdb has appeared:
[root@node1 ~]# fdisk -l

Disk /dev/hda: 21.4 GB, 21474836480 bytes
15 heads, 63 sectors/track, 44384 cylinders
Units = cylinders of 945 * 512 = 483840 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hda1               1        2068      977098+  83  Linux

Disk /dev/sda: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux
/dev/sda2              14        2610    20860402+  8e  Linux LVM

Disk /dev/sdb: 21.4 GB, 21474836480 bytes
64 heads, 32 sectors/track, 20480 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

   Device Boot      Start         End      Blocks   Id  System

node2
Discover the Target; again a record directory is saved under /var/lib/iscsi/send_targets:
[root@node2 ~]# iscsiadm -m discovery -t st -p 172.16.1.111
172.16.1.111:3260,1 iqn.2016-01.com.network:teststore.disk1

Before logging in, fdisk -l shows the same two local disks, /dev/hda and /dev/sda, as on node1.

Log in to the Target:
[root@node2 ~]# iscsiadm -m node -T iqn.2016-01.com.network:teststore.disk1 -p 172.16.1.111 -l
Logging in to [iface: default, target: iqn.2016-01.com.network:teststore.disk1, portal: 172.16.1.111,3260] (multiple)
Login to [iface: default, target: iqn.2016-01.com.network:teststore.disk1, portal: 172.16.1.111,3260] successful.
Check the disks again; as on node1, fdisk -l on node2 now also lists the new disk /dev/sdb.

node3
Discover the Target:
[root@node3 ~]# iscsiadm -m discovery -t st -p 172.16.1.111
172.16.1.111:3260,1 iqn.2016-01.com.network:teststore.disk1

Before logging in, fdisk -l again shows only the two local disks, /dev/hda and /dev/sda.

Log in to the Target:
[root@node3 ~]# iscsiadm -m node -T iqn.2016-01.com.network:teststore.disk1 -p 172.16.1.111 -l
Logging in to [iface: default, target: iqn.2016-01.com.network:teststore.disk1, portal: 172.16.1.111,3260] (multiple)
Login to [iface: default, target: iqn.2016-01.com.network:teststore.disk1, portal: 172.16.1.111,3260] successful.

Check the disks again; fdisk -l on node3 now also lists /dev/sdb.

This completes the iSCSI installation and configuration.
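For reference only (not needed for this setup): should a node ever have to detach from this Target, the reverse operations are a logout followed by removal of the discovery record, for example:
[root@node1 ~]# iscsiadm -m node -T iqn.2016-01.com.network:teststore.disk1 -p 172.16.1.111 -u
[root@node1 ~]# iscsiadm -m node -T iqn.2016-01.com.network:teststore.disk1 -p 172.16.1.111 -o delete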
III. cman Installation and Configuration
1. Install cman and rgmanager on each node
[root@node1 ~]# yum install -y cman rgmanager
[root@node2 ~]# yum install -y cman rgmanager
[root@node3 ~]# yum install -y cman rgmanager
2. Generate the configuration file
This only needs to be created on one node:
[root@node1 ~]# ccs_tool create tcluster

View the generated configuration file:
[root@node1 ~]# cat /etc/cluster/cluster.conf
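The cat output is not reproduced here; a freshly created cluster.conf is essentially an empty skeleton, roughly along these lines (the exact layout can differ between ccs_tool versions):
<?xml version="1.0"?>
<cluster name="tcluster" config_version="1">
  <clusternodes>
  </clusternodes>
  <fencedevices>
  </fencedevices>
  <rm>
    <failoverdomains/>
    <resources/>
  </rm>
</cluster>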
3. Configure the fence device
[root@node1 ~]# ccs_tool addfence meatware fence_manual
running ccs_tool update...

View the fence devices:
[root@node1 ~]# ccs_tool lsfence
Name             Agent
meatware         fence_manual
4. Add the nodes
[root@node1 ~]# ccs_tool addnode -v 1 -n 1 -f meatware node1.network.com
running ccs_tool update...
[root@node1 ~]# ccs_tool addnode -v 1 -n 2 -f meatware node2.network.com
running ccs_tool update...
[root@node1 ~]# ccs_tool addnode -v 1 -n 3 -f meatware node3.network.com
running ccs_tool update...

View the added nodes:
[root@node1 ~]# ccs_tool lsnode

Cluster name: tcluster, config_version: 30

Nodename                        Votes Nodeid Fencetype
node1.network.com                  1    1    meatware
node2.network.com                  1    2    meatware
node3.network.com                  1    3    meatware
5. Start the cman service on each node
node1
[root@node1 ~]# service cman start
Starting cluster:
   Loading modules... done
   Mounting configfs... done
   Starting ccsd... done
   Starting cman... done
   Starting daemons... done
   Starting fencing... done
   Tuning DLM... done
                                                           [  OK  ]

node2
[root@node2 ~]# service cman start
Starting cluster:
   Loading modules... done
   Mounting configfs... done
   Starting ccsd... done
   Starting cman... done
   Starting daemons... done
   Starting fencing... done
   Tuning DLM... done
                                                           [  OK  ]

node3
[root@node3 ~]# service cman start
Starting cluster:
   Loading modules... done
   Mounting configfs... done
   Starting ccsd... done
   Starting cman... done
   Starting daemons... done
   Starting fencing... done
   Tuning DLM... done
                                                           [  OK  ]

It is recommended to configure a multicast address and to manually copy the /etc/cluster/cluster.conf configuration file to the other nodes (a sketch of the copy follows below). This completes the cman installation and configuration.
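A minimal sketch of that manual copy, plus a membership check once cman is up on all three nodes (cman_tool ships with the cman package):
[root@node1 ~]# scp /etc/cluster/cluster.conf node2:/etc/cluster/
[root@node1 ~]# scp /etc/cluster/cluster.conf node3:/etc/cluster/
[root@node1 ~]# cman_tool status
[root@node1 ~]# cman_tool nodes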
IV. cLVM Installation and Configuration
1. Install lvm2-cluster on all three nodes
[root@node1 ~]# yum install -y lvm2-cluster
[root@node2 ~]# yum install -y lvm2-cluster
[root@node3 ~]# yum install -y lvm2-cluster
2. Enable clustered LVM on all three nodes and start the service
node1
[root@node1 ~]# lvmconf --enable-cluster
[root@node1 ~]# egrep "[[:space:]]{2,}locking_type" /etc/lvm/lvm.conf
    locking_type = 3
[root@node1 ~]# service clvmd start
Starting clvmd:
Activating VG(s):   2 logical volume(s) in volume group "VolGroup00" now active
                                                           [  OK  ]

node2
[root@node2 ~]# lvmconf --enable-cluster
[root@node2 ~]# egrep "[[:space:]]{2,}locking_type" /etc/lvm/lvm.conf
    locking_type = 3
[root@node2 ~]# service clvmd start
Starting clvmd:
Activating VG(s):   2 logical volume(s) in volume group "VolGroup00" now active
                                                           [  OK  ]

node3
[root@node3 ~]# lvmconf --enable-cluster
[root@node3 ~]# egrep "[[:space:]]{2,}locking_type" /etc/lvm/lvm.conf
    locking_type = 3
[root@node3 ~]# service clvmd start
Starting clvmd:
Activating VG(s):   2 logical volume(s) in volume group "VolGroup00" now active
clvmd not running on node node2.network.com
                                                           [  OK  ]
3. Enable cman, rgmanager, and clvmd at boot on all three nodes
node1
[root@node1 ~]# chkconfig cman on
[root@node1 ~]# chkconfig rgmanager on
[root@node1 ~]# chkconfig clvmd on

node2
[root@node2 ~]# chkconfig cman on
[root@node2 ~]# chkconfig rgmanager on
[root@node2 ~]# chkconfig clvmd on

node3
[root@node3 ~]# chkconfig cman on
[root@node3 ~]# chkconfig rgmanager on
[root@node3 ~]# chkconfig clvmd on
4. Create the clustered logical volume on one node
[root@node1 ~]# fdisk -l

Disk /dev/hda: 21.4 GB, 21474836480 bytes
15 heads, 63 sectors/track, 44384 cylinders
Units = cylinders of 945 * 512 = 483840 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hda1               1        2068      977098+  83  Linux

Disk /dev/sda: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux
/dev/sda2              14        2610    20860402+  8e  Linux LVM

Disk /dev/sdb: 21.4 GB, 21474836480 bytes
64 heads, 32 sectors/track, 20480 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

   Device Boot      Start         End      Blocks   Id  System

Here the shared storage is /dev/sdb.

First create the physical volume (PV):
[root@node1 ~]# pvcreate /dev/sdb
  Writing physical volume data to disk "/dev/sdb"
  Physical volume "/dev/sdb" successfully created

View the physical volumes:
[root@node1 ~]# pvs
  PV         VG         Fmt  Attr PSize  PFree
  /dev/sda2  VolGroup00 lvm2 a--  19.88G     0
  /dev/sdb              lvm2 a--  20.00G 20.00G

Create the volume group (VG):
[root@node1 ~]# vgcreate clustervg /dev/sdb
  Clustered volume group "clustervg" successfully created

View the volume groups:
[root@node1 ~]# vgs
  VG         #PV #LV #SN Attr   VSize  VFree
  VolGroup00   1   2   0 wz--n- 19.88G     0
  clustervg    1   0   0 wz--nc 20.00G 20.00G

Create the logical volume (LV):
[root@node1 ~]# lvcreate -L 10G -n clusterlv clustervg
  Logical volume "clusterlv" created

View the logical volumes:
[root@node1 ~]# lvs
  LV        VG         Attr   LSize  Origin Snap%  Move Log Copy%  Convert
  LogVol00  VolGroup00 -wi-ao 17.91G
  LogVol01  VolGroup00 -wi-ao  1.97G
  clusterlv clustervg  -wi-a- 10.00G

Now verify on the other nodes that the logical volume just created on node1 is visible:
[root@node2 ~]# lvs
  LV        VG         Attr   LSize  Origin Snap%  Move Log Copy%  Convert
  LogVol00  VolGroup00 -wi-ao 17.91G
  LogVol01  VolGroup00 -wi-ao  1.97G
  clusterlv clustervg  -wi-a- 10.00G
[root@node3 ~]# lvs
  LV        VG         Attr   LSize  Origin Snap%  Move Log Copy%  Convert
  LogVol00  VolGroup00 -wi-ao 17.91G
  LogVol01  VolGroup00 -wi-ao  1.97G
  clusterlv clustervg  -wi-a- 10.00G

The other nodes can indeed see it, so the cLVM configuration is complete; all that remains is to format the volume with the cluster filesystem.
V. gfs2 Installation and Configuration
1. Install gfs2-utils on all three nodes
[root@node1 ~]# yum install -y gfs2-utils
[root@node2 ~]# yum install -y gfs2-utils
[root@node3 ~]# yum install -y gfs2-utils
2. Format the logical volume created earlier
[root@node1 ~]# mkfs.gfs2 -j 2 -p lock_dlm -t tcluster:locktb1 /dev/mapper/clustervg-clusterlv
This will destroy any data on /dev/mapper/clustervg-clusterlv.

Are you sure you want to proceed? [y/n] y

Device:                    /dev/mapper/clustervg-clusterlv
Blocksize:                 4096
Device Size                10.00 GB (2621440 blocks)
Filesystem Size:           10.00 GB (2621438 blocks)
Journals:                  2
Resource Groups:           40
Locking Protocol:          "lock_dlm"
Lock Table:                "tcluster:locktb1"
UUID:                      A95DC652-8C10-3F3F-C552-2B95684E792F

Commonly used mkfs.gfs2 options:
    -j #: number of journals to create; the filesystem can be mounted by at most as many nodes as it has journals;
    -J #: size of each journal, 128MB by default;
    -p {lock_dlm|lock_nolock}: locking protocol to use;
    -t: lock table name, in the form clustername:locktablename, where clustername is the name of the cluster this node belongs to and locktablename must be unique within that cluster;
    -D: enable debug output
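Since this cluster has three nodes, the journals could also have been created up front rather than added later (compare step 5 below), for example by formatting with three journals:
[root@node1 ~]# mkfs.gfs2 -j 3 -p lock_dlm -t tcluster:locktb1 /dev/mapper/clustervg-clusterlv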
3. Create the mount point and mount
Create the mount directory:
[root@node1 ~]# mkdir /mydata/

Mount:
[root@node1 ~]# mount -t gfs2 /dev/mapper/clustervg-clusterlv /mydata/
[root@node1 ~]# ls /mydata/

Check space usage and other attributes:
[root@node1 ~]# gfs2_tool df /mydata
/mydata:
  SB lock proto = "lock_dlm"
  SB lock table = "tcluster:locktb1"
  SB ondisk format = 1801
  SB multihost format = 1900
  Block size = 4096
  Journals = 2
  Resource Groups = 40
  Mounted lock proto = "lock_dlm"
  Mounted lock table = "tcluster:locktb1"
  Mounted host data = "jid=0:id=131073:first=1"
  Journal number = 0
  Lock module flags = 0
  Local flocks = FALSE
  Local caching = FALSE

  Type           Total Blocks   Used Blocks    Free Blocks    use%
  ------------------------------------------------------------------------
  data           2621144        66195          2554949        3%
  inodes         2554965        16             2554949        0%

Now mount it on another node as well:
[root@node2 ~]# mkdir /mydata/
[root@node2 ~]# mount -t gfs2 /dev/mapper/clustervg-clusterlv /mydata/
[root@node2 ~]# cd /mydata/
[root@node2 mydata]# touch node2.txt

Check on node1 whether the file shows up under /mydata:
[root@node1 ~]# ls /mydata/
node2.txt
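These mounts are manual and will not survive a reboot. A minimal sketch for making them persistent (assuming /mydata remains the mount point): add an fstab entry on each node and enable the gfs2 init script that comes with gfs2-utils:
[root@node1 ~]# echo "/dev/mapper/clustervg-clusterlv /mydata gfs2 defaults 0 0" >> /etc/fstab
[root@node1 ~]# chkconfig gfs2 on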
4. View all tunable parameters of the cluster filesystem
[root@node1 ~]# gfs2_tool gettune /mydata/
gfs2_tool: gfs2 Filesystem /mydata/ is not mounted.

Note: do not add a trailing / after /mydata:
[root@node1 ~]# gfs2_tool gettune /mydata
new_files_directio = 0          # after a node creates a file, it is written to disk immediately and the other nodes are notified; 1 enables this
new_files_jdata = 0
quota_scale = 1.0000   (1, 1)
logd_secs = 1
recoverd_secs = 60
statfs_quantum = 30
stall_secs = 600
quota_cache_secs = 300
quota_simul_sync = 64
statfs_slow = 0
complain_secs = 10
max_readahead = 262144
quota_quantum = 60
quota_warn_period = 10
jindex_refresh_secs = 60
log_flush_secs = 60             # interval at which the filesystem journal is flushed, 60s by default
incore_log_blocks = 1024

new_files_directio can be set to 1:
[root@node1 ~]# gfs2_tool settune /mydata new_files_directio 1

Check again; the value has indeed changed to 1:
[root@node1 ~]# gfs2_tool gettune /mydata
new_files_directio = 1
new_files_jdata = 0
quota_scale = 1.0000   (1, 1)
logd_secs = 1
recoverd_secs = 60
statfs_quantum = 30
stall_secs = 600
quota_cache_secs = 300
quota_simul_sync = 64
statfs_slow = 0
complain_secs = 10
max_readahead = 262144
quota_quantum = 60
quota_warn_period = 10
jindex_refresh_secs = 60
log_flush_secs = 60
incore_log_blocks = 1024
5. Add a journal
When the gfs2 filesystem was formatted earlier, only 2 journals were specified. With more than 2 nodes, once two of them have mounted the shared filesystem the remaining nodes cannot mount it, so the fix is to add journals.

View the existing journals:
[root@node1 ~]# gfs2_tool journals /dev/mapper/clustervg-clusterlv
journal1 - 128MB
journal0 - 128MB
2 journal(s) found.

Add one more journal:
[root@node1 ~]# gfs2_jadd -j 1 /dev/mapper/clustervg-clusterlv
Filesystem:            /mydata
Old Journals           2
New Journals           3

Now the third node can mount it as well:
[root@node3 ~]# mount -t gfs2 /dev/mapper/clustervg-clusterlv /mydata/
[root@node3 ~]# ls /mydata/
node2.txt
[root@node3 ~]# rm -f /mydata/node2.txt
6. Extend the clustered logical volume
First check that the logical volume is currently 10G:
[root@node1 ~]# lvs
  LV        VG         Attr   LSize  Origin Snap%  Move Log Copy%  Convert
  LogVol00  VolGroup00 -wi-ao 17.91G
  LogVol01  VolGroup00 -wi-ao  1.97G
  clusterlv clustervg  -wi-ao 10.00G

Check the volume group usage; there is still 10G free:
[root@node1 ~]# vgs
  VG         #PV #LV #SN Attr   VSize  VFree
  VolGroup00   1   2   0 wz--n- 19.88G     0
  clustervg    1   1   0 wz--nc 20.00G 10.00G

Extend the logical volume by 5G:
[root@node1 ~]# lvextend -L +5G /dev/mapper/clustervg-clusterlv
  Extending logical volume clusterlv to 15.00 GB
  Logical volume clusterlv successfully resized

After extending, lvs shows the logical volume is now 15G:
[root@node1 ~]# lvs
  LV        VG         Attr   LSize  Origin Snap%  Move Log Copy%  Convert
  LogVol00  VolGroup00 -wi-ao 17.91G
  LogVol01  VolGroup00 -wi-ao  1.97G
  clusterlv clustervg  -wi-ao 15.00G

Note that only the physical boundary (the LV) has been extended; the logical boundary (the filesystem) has not:
[root@node1 ~]# df -hT
Filesystem                      Type   Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00 ext3    18G  2.5G   14G  16% /
/dev/sda1                       ext3    99M   13M   81M  14% /boot
tmpfs                           tmpfs  122M     0  122M   0% /dev/shm
/dev/mapper/clustervg-clusterlv gfs2    10G  388M  9.7G   4% /mydata
[root@node1 ~]# df -hP
Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00   18G  2.5G   14G  16% /
/dev/sda1                         99M   13M   81M  14% /boot
tmpfs                            122M     0  122M   0% /dev/shm
/dev/mapper/clustervg-clusterlv   10G  388M  9.7G   4% /mydata

Grow the filesystem (the logical boundary):
[root@node1 ~]# gfs2_grow /dev/mapper/clustervg-clusterlv
FS: Mount Point: /mydata
FS: Device:      /dev/mapper/clustervg-clusterlv
FS: Size:        2621438 (0x27fffe)
FS: RG size:     65533 (0xfffd)
DEV: Size:       3932160 (0x3c0000)
The file system grew by 5120MB.
gfs2_grow complete.

Check again:
[root@node1 ~]# df -hP
Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00   18G  2.5G   14G  16% /
/dev/sda1                         99M   13M   81M  14% /boot
tmpfs                            122M     0  122M   0% /dev/shm
/dev/mapper/clustervg-clusterlv   15G  388M   15G   3% /mydata
With that, a cluster shared-storage solution built on cman + rgmanager + iSCSI + gfs2 + cLVM is complete.