Topology

    node1         node2
      |             |
      +------+------+
             |
          switch
             |
      +------+------+
      |             |
    node3         node4
IP configuration:
Physical host node0: 192.168.122.1/24
node1 192.168.122.10/24
node2 192.168.122.20/24
node3 192.168.122.30/24
node4 192.168.122.40/24
#############################Lab environment############################
Put the guest NICs on the external bridge xenbr0   // check with: brctl show
Turn iptables and SELinux off (see the sketch below)
/etc/hosts name resolution for every node (see the sketch below)
yum repos: Server, VT, Cluster, ClusterStorage
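A minimal prep sketch for the firewall, SELinux, and hosts-file items, run on every node; the FQDNs are assumed to follow the node1.uplooking.com pattern used by the cluster config later:
service iptables stop && chkconfig iptables off
setenforce 0        # plus SELINUX=disabled in /etc/selinux/config to survive reboots
cat >> /etc/hosts <<EOF
192.168.122.1   node0.uplooking.com node0
192.168.122.10  node1.uplooking.com node1
192.168.122.20  node2.uplooking.com node2
192.168.122.30  node3.uplooking.com node3
192.168.122.40  node4.uplooking.com node4
EOF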
Create two disk images with dd and attach them to the storage nodes node3 and node4:
#dd if=/dev/zero of=/var/lib/xen/images/disk1 bs=1M count=2048
#dd if=/dev/zero of=/var/lib/xen/images/disk2 bs=1M count=2048
# vim /etc/xen/vm3    // add the image path to the disk line as a second device (hdb here), so it does not clash with the boot disk
disk = [ "file:/var/lib/xen/images/vm3.img,hda,w", ",hdc:cdrom,r", "file:/var/lib/xen/images/disk1,hdb,w" ]
# vim /etc/xen/vm4
disk = [ "file:/var/lib/xen/images/vm4.img,hda,w", ",hdc:cdrom,r", "file:/var/lib/xen/images/disk2,hdb,w" ]
Reboot the two storage nodes, then run fdisk -l inside each to verify the new disk is attached.
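From the host the reboot can be done with xm (a sketch; xm is RHEL5's Xen management CLI, and the domain names are assumed to match the config file names):
[root@node0 ~]# xm shutdown vm3 && sleep 10 && xm create /etc/xen/vm3
[root@node3 ~]# fdisk -l | grep hdb      # the new 2 GB disk should show up as /dev/hdb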
################################################################
=======================Configure the storage nodes node3 and node4======================
[root@node3 ~]# fdisk -l    // check each disk's device name and size
Disk /dev/hdb: 2147 MB, 2147483648 bytes
[root@node3 ~]# yum install scsi-target-utils -y
[root@node3 ~]# vim /etc/tgt/targets.conf
default-driver iscsi

<target iqn.2013-03.com.uplooking:node3.storage1>
    backing-store /dev/hdb
    vendor_id node3
    product_id storage1
    initiator-address 192.168.122.10
    initiator-address 192.168.122.20
</target>

[root@node3 ~]# service tgtd start
[root@node3 ~]# netstat -tunpl | grep tgtd
tcp 0 0 0.0.0.0:3260 0.0.0.0:* LISTEN 2855/tgtd
[root@node3 ~]# tgt-admin --show
Target 1: iqn.2013-03.com.uplooking:node3.storage1
System information:
Driver: iscsi
State: ready
I_T nexus information:
LUN information:
LUN: 0
Type: controller
SCSI ID: IET 00010000
SCSI SN: beaf10
Size: 0 MB, Block size: 1
Online: Yes
Removable media: No
Readonly: No
Backing store type: null
Backing store path: None
Backing store flags:
LUN: 1
Type: disk
SCSI ID: IET 00010001
SCSI SN: beaf11
Size: 2147 MB, Block size: 512
Online: Yes
Removable media: No
Readonly: No
Backing store type: rdwr
Backing store path: /dev/hdb
Backing store flags:
Account information:
ACL information:
192.168.122.10
192.168.122.20
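node4 is configured the same way; its targets.conf entry is assumed to mirror node3's, with only the IQN and vendor_id changed:
[root@node4 ~]# vim /etc/tgt/targets.conf
<target iqn.2013-03.com.uplooking:node4.storage1>
    backing-store /dev/hdb
    vendor_id node4
    product_id storage1
    initiator-address 192.168.122.10
    initiator-address 192.168.122.20
</target>
[root@node4 ~]# service tgtd start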
======================Configure the front-end application nodes node1 and node2=========================
[root@node1 ~]# yum install iscsi-initiator-utils -y
[root@node1 ~]# service iscsid start
Discover the targets:
[root@node1 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.122.30:3260
192.168.122.30:3260,1 iqn.2013-03.com.uplooking:node3.storage1
[root@node1 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.122.40:3260
192.168.122.40:3260,1 iqn.2013-03.com.uplooking:node4.storage1
Log in to the targets:
[root@node1 ~]# iscsiadm -m node -T iqn.2013-03.com.uplooking:node3.storage1 -l
Logging in to [iface: default, target: iqn.2013-03.com.uplooking:node3.storage1, portal: 192.168.122.30,3260] (multiple)
Login to [iface: default, target: iqn.2013-03.com.uplooking:node3.storage1, portal: 192.168.122.30,3260] successful.
[root@node1 ~]# iscsiadm -m node -T iqn.2013-03.com.uplooking:node4.storage1 -l
Logging in to [iface: default, target: iqn.2013-03.com.uplooking:node4.storage1, portal: 192.168.122.40,3260] (multiple)
Login to [iface: default, target: iqn.2013-03.com.uplooking:node4.storage1, portal: 192.168.122.40,3260] successful.
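For reference, active sessions can be listed and a target logged out of with standard iscsiadm modes:
[root@node1 ~]# iscsiadm -m session      # list active sessions
[root@node1 ~]# iscsiadm -m node -T iqn.2013-03.com.uplooking:node3.storage1 -p 192.168.122.30 -u    # log out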
[root@node1 ~]# fdisk -l
Disk /dev/hda: 8388 MB, 8388608000 bytes
255 heads, 63 sectors/track, 1019 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/hda1 * 1 13 104391 83 Linux
/dev/hda2 14 842 6658942+ 83 Linux
/dev/hda3 843 907 522112+ 82 Linux swap / Solaris
Disk /dev/sda: 2147 MB, 2147483648 bytes
67 heads, 62 sectors/track, 1009 cylinders
Units = cylinders of 4154 * 512 = 2126848 bytes
Disk /dev/sda doesn't contain a valid partition table
Disk /dev/sdb: 2147 MB, 2147483648 bytes
67 heads, 62 sectors/track, 1009 cylinders
Units = cylinders of 4154 * 512 = 2126848 bytes
Disk /dev/sdb doesn't contain a valid partition table
[root@node1 ~]# ll /dev/sda
brw-r----- 1 root disk 8, 0 Mar 23 12:41 /dev/sda
[root@node1 ~]# ll /dev/sdb
brw-r----- 1 root disk 8, 16 Mar 23 12:41 /dev/sdb
Whichever target is logged into first becomes sda; the later one becomes sdb. If the login order flips on the next boot, data that should have gone to the oracle directory can silently end up under mysql. udev fixes this: by adding rules for the kernel's udev subsystem, each backing device always gets the same stable name, so the mapping no longer depends on login order.
Inspect the device attributes:
[root@node1 ~]# udevinfo -a -p /sys/block/sda
[root@node1 ~]# udevinfo -a -p /sys/block/sdb
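To pull out just the attributes that the rules below match on, a filter like this helps (a sketch; the SYSFS attribute names are as printed by RHEL5's udevinfo):
[root@node1 ~]# udevinfo -a -p /sys/block/sda | egrep 'vendor|model|size'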
Configure udev rules:
[root@node1 ~]# vim /etc/udev/rules.d/80-iscsi.rules
SUBSYSTEM=="block",SYSFS{size}=="4194304",SYSFS{vendor}=="node4",SYSFS{model}=="storage1",SYMLINK+="iscsi/node4-disk"
SUBSYSTEM=="block",SYSFS{size}=="4194304",SYSFS{vendor}=="node3",SYSFS{model}=="storage1",SYMLINK+="iscsi/node3-disk"
[root@node1 ~]# start_udev
Starting udev: [ OK ]
[root@node1 ~]# ls -l /dev/iscsi/
total 0
lrwxrwxrwx 1 root root 6 Mar 23 13:42 node3-disk -> ../sdb
lrwxrwxrwx 1 root root 6 Mar 23 13:42 node4-disk -> ../sda
=====================Test: what goes wrong when the shared store is ext3===========================
[root@node1 ~]# mkdir /iscsi/
[root@node1 ~]# pvcreate /dev/sda
[root@node1 ~]# pvcreate /dev/sdb
[root@node1 ~]# vgcreate vgiscsi /dev/sda /dev/sdb
[root@node1 ~]# lvcreate -l 250 -n lviscsi vgiscsi
[root@node1 ~]# ll /dev/vgiscsi/lviscsi
lrwxrwxrwx 1 root root 27 Mar 23 12:59 /dev/vgiscsi/lviscsi -> /dev/mapper/vgiscsi-lviscsi
[root@node1 ~]# mkfs.ext3 /dev/vgiscsi/lviscsi
[root@node1 ~]# mount /dev/vgiscsi/lviscsi /iscsi/
[root@node1 ~]# cd /iscsi/
[root@node1 iscsi]# ls
lost+found
[root@node1 iscsi]# touch file
[root@node1 iscsi]# echo 111 > file
[root@node1 iscsi]# ls
file lost+found
[root@node1 iscsi]# cat file
111
[root@node2 ~]# mkdir /iscsi/
[root@node2 iscsi]# pvscan
PV /dev/sda VG vgiscsi lvm2 [2.00 GB / 1.02 GB free]
PV /dev/sdb VG vgiscsi lvm2 [2.00 GB / 2.00 GB free]
Total: 2 [3.99 GB] / in use: 2 [3.99 GB] / in no VG: 0 [0 ]
[root@node2 iscsi]# vgchange -ay vgiscsi
1 logical volume(s) in volume group "vgiscsi" now active
[root@node2 iscsi]# mount /dev/vgiscsi/lviscsi /iscsi/
[root@node2 iscsi]# ls
file lost+found
[root@node2 iscsi]# cat file
111
[root@node2 iscsi]# echo 222 >> file
[root@node2 iscsi]# cat file
111
222
[root@node1 iscsi]# cat file
111
The experiment proves that with an ext3 filesystem on shared storage mounted on both application nodes, data written on one node is invisible on the other, which breaks data consistency, so this setup is not viable. Each front-end node only sees its own host's cached view, which has not yet been synchronized with the storage.
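A quick way to confirm the bytes did reach the disk is to drop the stale cache by remounting (a demonstration only: concurrently mounted ext3 can corrupt, so never do this outside a lab):
[root@node1 ~]# umount /iscsi                          # throw away node1's stale cached view
[root@node1 ~]# mount /dev/vgiscsi/lviscsi /iscsi/     # re-read the on-disk state
[root@node1 ~]# cat /iscsi/file                        # node2's "222" should now appear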
============================GFS configuration=================================
1. Use cman for GFS lock management
2. Load the GFS kernel module
3. mkfs tools for GFS2
dlm (Distributed Lock Manager)
[root@node1 ~]# yum install cman rgmanager system-config-cluster -y
[root@node2 ~]# yum install cman rgmanager -y
[root@station100 ~]# ssh 192.168.122.10 -X
[root@node1 ~]# system-config-cluster
Create a new cluster ---> add the cluster nodes node1.uplooking.com and node2.uplooking.com ---> save to /etc/cluster/cluster.conf
[root@node1 cluster]# scp /etc/cluster/cluster.conf 192.168.122.20:/etc/cluster/
Start cman on both nodes at the same time:
[root@node1 cluster]# service cman start
[root@node2 cluster]# service cman start
[root@node1 cluster]# uname -r
2.6.18-308.el5
[root@node1 cluster]# yum install gfs2-utils kmod-gfs -y
[root@node1 cluster]# modprobe gfs2
[root@node1 cluster]# lsmod | grep gfs2
gfs2 354825 1 lock_dlm
[root@node1 cluster]# mkfs.gfs2 -t iscsi_cluster:table1 -p lock_dlm -j 2 /dev/vgiscsi/lviscsi
This will destroy any data on /dev/vgiscsi/lviscsi.
It appears to contain a ext3 filesystem.
Are you sure you want to proceed? [y/n] y
Device: /dev/vgiscsi/lviscsi
Blocksize: 4096
Device Size 0.98 GB (256000 blocks)
Filesystem Size: 0.98 GB (255997 blocks)
Journals: 2
Resource Groups: 4
Locking Protocol: "lock_dlm"
Lock Table: "iscsi_cluster:table1"
UUID: 712E0AF9-2AAC-C6E4-B912-389550264A30
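Each node that mounts the filesystem needs its own journal, which is what -j 2 provisions above. If a third node joins later, another journal can be added to the mounted filesystem with gfs2_jadd (shipped in gfs2-utils):
[root@node1 ~]# gfs2_jadd -j 1 /iscsi      # add one more journal for an extra node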
[root@node1 iscsi]# mount -t gfs2 -o lockproto=lock_dlm /dev/vgiscsi/lviscsi /iscsi/
[root@node1 iscsi]# touch file1
[root@node1 iscsi]# ls
file1
[root@node2 iscsi]# mount -t gfs2 -o lockproto=lock_dlm /dev/vgiscsi/lviscsi /iscsi/
[root@node2 iscsi]# ls
file1
[root@node1 ~]# vim a.sh
#!/bin/bash
while true
do
    echo 1 >> /iscsi/file1
    sleep 1
done
[root@node1 ~]# chmod +x a.sh
[root@node2 ~]# vim a.sh
#!/bin/bash
while true
do
    echo 2 >> /iscsi/file1
    sleep 1
done
[root@node2 ~]# chmod +x a.sh
Start the script on node1 and node2 at the same time, and watch the file live on node1:
[root@node1 ~]# tail -f /iscsi/file1
1
1
2
1
2
1
2
1
2
1
2
Before either side may write, it must check the lock table, its own and the peer's, for an existing write lock; if one is held it keeps waiting, otherwise it takes the write lock and performs the write. This is how GFS2 guarantees that the data stays synchronized and consistent.
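The same check-then-write discipline can be sketched on a single host with flock(1); this is only an analogy for lock_dlm, not the dlm itself:
#!/bin/bash
# take an exclusive lock on the shared file before each append,
# mirroring the write-lock check GFS2 performs cluster-wide
while true
do
    flock /iscsi/file1 -c 'echo 1 >> /iscsi/file1'
    sleep 1
done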
#########################Test: automatic mount at boot##############################
On both node1 and node2:
[root@node1 ~]# vim /etc/fstab
/dev/vgiscsi/lviscsi /iscsi gfs2 defaults,lockproto=lock_dlm 0 0
After a power-off and reboot, the mount fails. Fix it with clustered LVM:
[root@node1 ~]# yum install lvm2-cluster
[root@node1 ~]# lvmconf --enable-cluster
[root@node1 ~]# service clvmd start
[root@node1 ~]# mount -t gfs2 -o lockproto=lock_dlm /dev/vgiscsi/lviscsi /iscsi/
or simply: [root@node1 ~]# mount -a
[root@node1 ~]# chkconfig iscsid on
[root@node1 ~]# chkconfig cman on
[root@node1 ~]# chkconfig clvmd on
[root@node1 ~]# chkconfig gfs2 on
Test again after a reboot to confirm the filesystem mounts.
#######################################################################
Boot-time mount ordering:
1. /etc/fstab is processed first:
/dev/vgiscsi/lviscsi /iscsi gfs2 defaults,lockproto=lock_dlm 0 0
but /dev/vgiscsi/lviscsi does not exist yet at that point (clvmd has not activated it), so this mount fails.
2. The four services then start: iscsi, cman, clvmd, gfs2.
3. When the gfs2 service starts last, it retries the gfs2 entries in /etc/fstab.
4. The mount succeeds.
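The start order can be sanity-checked against the rc start priorities (lower S numbers start earlier):
[root@node1 ~]# chkconfig --list | egrep 'iscsi|cman|clvmd|gfs2'
[root@node1 ~]# ls /etc/rc3.d/ | egrep 'iscsi|cman|clvmd|gfs2'   # S-prefixed links show the order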