iSCSI Cluster Setup

I. Basic iSCSI shared-disk setup

Shared disk server (target host): 192.168.122.14

Two cluster servers (initiator hosts): 192.168.122.12 and 192.168.122.13


Target host configuration:

1. yum install -y scsi-target-utils          # install the tgt iSCSI target software

2. vim /etc/tgt/targets.conf                 # define the target and its access list

<target iqn.2014-11.com.example:server.target1>
    backing-store /dev/vda1                # the disk partition to share
    initiator-address 192.168.122.12       # IP of an initiator host allowed to access it
    initiator-address 192.168.122.13       # one line per cluster node
</target>
3. /etc/init.d/tgtd start                    # start the tgt service
Starting SCSI target daemon:                               [  OK  ]
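
To make the export persistent and confirm it is active, you can enable the service at boot and dump the running target configuration (a quick check with the standard scsi-target-utils tools):

chkconfig tgtd on          # start tgtd automatically at boot
tgt-admin --show           # list targets, LUNs, and allowed initiator addresses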

The tgt server is now configured; next, configure the initiator hosts.


Initiator host configuration:

1. yum install -y iscsi-initiator-utils
2. [root@server12 ~]# iscsiadm -m discovery -t st -p 192.168.122.14   # check that the server's shared disk can be discovered
192.168.122.14:3260,1 iqn.2014-11.com.example:server.target1

3. [root@server12 ~]# iscsiadm -m node -p 192.168.122.14 -l           # log in to the target
Logging in to [iface: default, target: iqn.2014-11.com.example:server.target1, portal: 192.168.122.14,3260] (multiple)
Login to [iface: default, target: iqn.2014-11.com.example:server.target1, portal: 192.168.122.14,3260] successful.
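
To confirm the session is established, list the active iSCSI sessions (a quick check with the stock iscsiadm tool):

[root@server12 ~]# iscsiadm -m session      # shows one session per logged-in portal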
4. fdisk -l               # list disks; the shared LUN shows up as a second disk (/dev/sdb)
Disk /dev/sdb: 1073 MB, 1073741824 bytes
34 heads, 61 sectors/track, 1011 cylinders
Units = cylinders of 2074 * 512 = 1061888 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


5. fdisk -cu /dev/sdb                    # partition the shared disk /dev/sdb (creates /dev/sdb1)
6. partx -d /dev/sdb
   partx -a /dev/sdb                     # force the kernel to re-read the partition table
7. mkfs.ext4 /dev/sdb1
8. mkdir /mnt/data                       # create the mount point
9. vim /etc/fstab                        # mount automatically at boot
   /dev/sdb1               /mnt/data               ext4    defaults,_netdev 0 0
   (_netdev defers the mount until the network, and hence the iSCSI session, is up)
10. mount -a
11. /etc/init.d/iscsi restart
12. /etc/init.d/iscsid restart
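
Two quick checks after these steps (a sketch; the device name sdb matches the fdisk -l output above) -- verify the new partition is visible to the kernel, and make sure the node logs in automatically at boot so the fstab entry can mount:

cat /proc/partitions                  # sdb1 should now be listed
iscsiadm -m node -p 192.168.122.14 --op update -n node.startup -v automatic
                                      # automatic login at boot (usually the RHEL default)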

II. Building an LVM volume on the shared disk (here the whole disk /dev/sdb is used as a PV, rather than the /dev/sdb1 partition from section I)

1. [root@server12 ~]# pvcreate /dev/sdb    # create the PV
  Writing physical volume data to disk "/dev/sdb"
  Physical volume "/dev/sdb" successfully created
2. [root@server12 ~]# pvs                  # list PVs
  PV         VG       Fmt  Attr PSize PFree
  /dev/sda2  VolGroup lvm2 a--  5.51g    0
  /dev/sdb            lvm2 a--  1.00g 1.00g
3. [root@server12 ~]# vgcreate net_vg /dev/sdb  # create the VG
  Volume group "net_vg" successfully created
4. [root@server12 ~]# vgs                  # list VGs
  VG       #PV #LV #SN Attr   VSize    VFree   
  VolGroup   1   2   0 wz--n-    5.51g       0
  net_vg     1   0   0 wz--n- 1020.00m 1020.00m
5. [root@server12 ~]# lvcreate -L 500M -n net_lv1 net_vg  # create the LV
  Logical volume "net_lv1" created
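
A quick check before formatting (the VG name net_vg comes from the previous step):

[root@server12 ~]# lvs net_vg              # net_lv1 should be listed with LSize 500.00m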
6. Format the filesystem:
[root@server12 ~]# mkfs.ext4 /dev/net_vg/net_lv1
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
Stride=0 blocks, Stripe width=0 blocks
128016 inodes, 512000 blocks
25600 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=67633152
63 block groups
8192 blocks per group, 8192 fragments per group
2032 inodes per group
Superblock backups stored on blocks:
    8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409

Writing inode tables: done                            
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 29 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.


The logical volume can then be mounted just as in section I, step 8 onward. For example:
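
A minimal fstab entry following the same pattern as section I, step 9 (the mount point /mnt/data is reused here for illustration):

/dev/net_vg/net_lv1     /mnt/data               ext4    defaults,_netdev 0 0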

III. Dual-path disk sharing (multipath)

Shared disk server (target host): 192.168.122.14 and 192.168.122.114 (dual NICs)

Two cluster servers (initiator hosts): 192.168.122.12 and 192.168.122.13

After bringing up both NICs on the target host, adjust the configuration file as in section I and start tgtd.

Cluster server (initiator) configuration:

1. Make sure the initiator sees the same shared disk content through each IP:

[root@server12 ~]# iscsiadm -m discovery -t st -p 192.168.122.114
192.168.122.114:3260,1 iqn.2014-10.com.example:server.target1



2. [root@server12 ~]# iscsiadm -m node -p 192.168.122.14 -l           # log in through both portals
Logging in to [iface: default, target: iqn.2014-10.com.example:server.target1, portal: 192.168.122.14,3260] (multiple)
Login to [iface: default, target: iqn.2014-10.com.example:server.target1, portal: 192.168.122.14,3260] successful.
[root@server12 ~]# iscsiadm -m node -p 192.168.122.114 -l
Logging in to [iface: default, target: iqn.2014-10.com.example:server.target1, portal: 192.168.122.114,3260] (multiple)
Login to [iface: default, target: iqn.2014-10.com.example:server.target1, portal: 192.168.122.114,3260] successful.
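
The same LUN is now visible once per path, typically as /dev/sdb and /dev/sdc (these names match the multipath -l output further down). Quick checks:

[root@server12 ~]# iscsiadm -m session      # one session per portal
[root@server12 ~]# cat /proc/partitions     # the LUN shows up as two block devices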

3. yum list device-mapper*   # list the device-mapper packages, then install the multipath software:

yum install device-mapper.x86_64 device-mapper-event.x86_64 device-mapper-event-libs.x86_64 device-mapper-libs.x86_64 device-mapper-event-libs.i686 device-mapper-multipath.x86_64 device-mapper-multipath-libs.x86_64 device-mapper-persistent-data.x86_64

4. mpathconf --enable   # generate the /etc/multipath.conf configuration file
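
mpathconf --enable writes a default /etc/multipath.conf but does not necessarily start the daemon; a usual follow-up (a sketch for the RHEL 6 init scripts):

/etc/init.d/multipathd start
chkconfig multipathd on                    # start multipathd at boot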

5. vim /etc/multipath.conf   # edit the configuration file:


blacklist {
        devnode "^sda$"                    # exclude the local disk; devnode takes a regex, so anchor it
}
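
Devices can also be blacklisted by wwid rather than device node. On RHEL 6 the wwid of the local disk can be read with scsi_id (the /lib/udev path is the RHEL 6 location; adjust if yours differs):

/lib/udev/scsi_id --whitelisted --device=/dev/sda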

6. Define an alias for the multipath device in the multipaths section:

multipaths {
#       multipath {
#               wwid                    3600508b4000156d700012000000b0000
#               alias                   yellow
#               path_grouping_policy    multibus
#               path_checker            readsector0
#               path_selector           "round-robin 0"
#               failback                manual
#               rr_weight               priorities
#               no_path_retry           5
#       }

        multipath {
                wwid                    "1IET     00010001"
                alias                   helloc
        }
}

Here wwid is the ID reported by multipath -l, alias assigns a friendly device name, and blacklist filters out devices that should not participate in multipathing, such as local disks.


e.g. [root@server12 multipath]# multipath -l                 # look up the wwid
mpathb (1IET     00010001) dm-2 IET,VIRTUAL-DISK
size=500M features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=0 status=active
| `- 7:0:0:1 sdb 8:16 active undef running
`-+- policy='round-robin 0' prio=0 status=enabled
  `- 8:0:0:1 sdc 8:32 active undef running

7. Once the configuration is complete, restart the multipathd service and flush the existing multipath records:
   # multipath -F
   Then rescan the devices with multipath -v2; device files matching the aliases are created under /dev/mapper/:

[root@server12 ~]# ls /dev/mapper/
control           helloc            VolGroup-lv_root  VolGroup-lv_swap

8. Install sysstat to monitor system performance and efficiency (it provides iostat, sar, and related tools):

yum install -y sysstat
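
To confirm that I/O is actually flowing over both paths, watch the per-path device statistics (sdb and sdc are the path devices from the multipath -l output above):

[root@server12 ~]# iostat -d sdb sdc 2      # per-device throughput, refreshed every 2 seconds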


