Building an IP-SAN network storage server


A brief introduction (adapted from Baidu Baike): a SAN (Storage Area Network) is a technology that integrates storage devices, connection devices, and interfaces into a single high-speed network; the SAN itself is a storage network and carries the data-storage workload. An IP SAN is a SAN built on top of an ordinary IP network (typically using iSCSI), which lets storage capacity be used more fully and makes installation and management more efficient.

1. Server configuration

1.1 Install scsi-target-utils via yum

```bash
[root@node01 ~]# yum -y install scsi-target-utils  # install via yum
```

1.2 Basic configuration and starting the tgtd service

```bash
[root@node01 ~]# ll /etc/tgt/targets.conf           # main configuration file
-rw------- 1 root root 6945 Sep  4  2013 /etc/tgt/targets.conf
[root@node01 ~]# service tgtd restart            # start the service
Stopping SCSI target daemon: not running                   [FAILED]
Starting SCSI target daemon:                               [  OK  ]
[root@node01 ~]# chkconfig tgtd on
[root@node01 ~]# netstat -antup |grep 3260     # check the service port
tcp        0      0 0.0.0.0:3260                0.0.0.0:*                   LISTEN      1125/tgtd           
tcp        0      0 :::3260                     :::*                        LISTEN      1125/tgtd           
[root@node01 ~]# 
```
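The targets.conf entry that actually exports a LUN under the name seen by the clients below is not shown in the original session. A minimal sketch of what such an entry could look like follows; the target IQN matches the one discovered later, while the backing-store device and the initiator-address ACL are assumptions for illustration only.

```bash
[root@node01 ~]# cat >> /etc/tgt/targets.conf <<'EOF'
# hypothetical example: export a local block device as a LUN of this target
<target iqn.2016-08.cn.node01.www:target4_scan>
    # the backing device is an assumption; a spare partition or an image file also works
    backing-store /dev/sdb
    # optional ACL: only accept initiators from this subnet
    initiator-address 192.168.137.0/24
</target>
EOF
[root@node01 ~]# service tgtd restart                          # reload the configuration
[root@node01 ~]# tgtadm --lld iscsi --mode target --op show    # verify the target and its LUNs
```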

2. Client configuration
2.1 Install iscsi-initiator-utils and configure the client services


```bash
[root@node02 ~]# yum -y install iscsi-initiator-utils
[root@node02 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.137.101    # discover the target storage
Starting iscsid: [ OK ]
192.168.137.101:3260,1 iqn.2016-08.cn.node01.www:target4_scan
[root@node02 ~]#
[root@node02 ~]# /etc/init.d/iscsid status                                # check the client-side service
iscsid (pid 1153) is running...
[root@node02 ~]#
[root@node02 ~]# tree /var/lib/iscsi/                                     # discovered target information is recorded under /var/lib/iscsi
/var/lib/iscsi/
├── ifaces
├── isns
├── nodes
│   └── iqn.2016-08.cn.node01.www:target4_scan
│       └── 192.168.137.101,3260,1
│           └── default
├── send_targets
│   └── 192.168.137.101,3260
│       ├── iqn.2016-08.cn.node01.www:target4_scan,192.168.137.101,3260,1,default -> /var/lib/iscsi/nodes/iqn.2016-08.cn.node01.www:target4_scan/192.168.137.101,3260,1
│       └── st_config
├── slp
└── static
10 directories, 2 files
[root@node02 ~]#
[root@node02 ~]# /etc/init.d/iscsid  start                   # start the iscsid service first
[root@node02 ~]# /etc/init.d/iscsi  start                    # then start the iscsi service; it attaches devices based on the records iscsid keeps under /var/lib/iscsi/
Starting iscsi:                                            [  OK  ]
[root@node02 ~]# 
```
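Whether this session is re-established automatically when the iscsi service starts at boot is controlled by the node.startup value recorded under /var/lib/iscsi. A small sketch, using the target and portal discovered above, of how this can be checked and set with iscsiadm:

```bash
# print the recorded node settings and look at node.startup
[root@node02 ~]# iscsiadm -m node -T iqn.2016-08.cn.node01.www:target4_scan -p 192.168.137.101 | grep node.startup
# set it to automatic so the session comes back after a reboot
[root@node02 ~]# iscsiadm -m node -T iqn.2016-08.cn.node01.www:target4_scan -p 192.168.137.101 \
    --op update -n node.startup -v automatic
```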

2.2 Check that the disk has been discovered, and log out of / log in to the SCSI device


```bash
[root@node02 ~]# ll /dev/sdb
brw-rw---- 1 root disk 8, 16 Aug 12 00:48 /dev/sdb
[root@node02 ~]# lsblk                        # lsblk lists block devices
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0   20G  0 disk 
├─sda1   8:1    0  200M  0 part /boot
├─sda2   8:2    0   10G  0 part /
└─sda3   8:3    0    1G  0 part [SWAP]
sr0     11:0    1  3.6G  0 rom  /media
sdb      8:16   0    5G  0 disk 
[root@node02 ~]#
[root@node02 ~]# tree /var/lib/iscsi/
/var/lib/iscsi/
├── ifaces
├── isns
├── nodes
│   └── iqn.2016-08.cn.node01.www:target4_scan
│       └── 192.168.137.101,3260,1
│           └── default
├── send_targets
│   └── 192.168.137.101,3260
│       ├── iqn.2016-08.cn.node01.www:target4_scan,192.168.137.101,3260,1,default -> /var/lib/iscsi/nodes/iqn.2016-08.cn.node01.www:target4_scan/192.168.137.101,3260,1
│       └── st_config
├── slp
└── static

10 directories, 2 files
[root@node02 ~]# iscsiadm -m node -T iqn.2016-08.cn.node01.www:target4_scan -u              # log out of (detach) the SCSI device
Logging out of session [sid: 1, target: iqn.2016-08.cn.node01.www:target4_scan, portal: 192.168.137.101,3260]
Logout of [sid: 1, target: iqn.2016-08.cn.node01.www:target4_scan, portal: 192.168.137.101,3260] successful.
[root@node02 ~]# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0   20G  0 disk 
├─sda1   8:1    0  200M  0 part /boot
├─sda2   8:2    0   10G  0 part /
└─sda3   8:3    0    1G  0 part [SWAP]
sr0     11:0    1  3.6G  0 rom  /media
[root@node02 ~]# iscsiadm -m node -T iqn.2016-08.cn.node01.www:target4_scan -l               # log in to (attach) the SCSI device
Logging in to [iface: default, target: iqn.2016-08.cn.node01.www:target4_scan, portal: 192.168.137.101,3260] (multiple)
Login to [iface: default, target: iqn.2016-08.cn.node01.www:target4_scan, portal: 192.168.137.101,3260] successful.
[root@node02 ~]# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0   20G  0 disk 
├─sda1   8:1    0  200M  0 part /boot
├─sda2   8:2    0   10G  0 part /
└─sda3   8:3    0    1G  0 part [SWAP]
sr0     11:0    1  3.6G  0 rom  /media
sdb      8:16   0    5G  0 disk 
[root@node02 ~]#
```
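After logging in or out, the initiator side can also be asked which sessions are currently open, without resorting to lsblk. The session mode of iscsiadm does this; a short sketch:

```bash
# one line per active session: portal, target portal group tag, and target IQN
[root@node02 ~]# iscsiadm -m session
# verbose view (print level 3) that also shows which /dev/sdX each LUN is attached as
[root@node02 ~]# iscsiadm -m session -P 3
```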

2.3 Use the SCSI device and write data to it

```bash
[root@node02 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 9.9G 869M 8.5G 10% /
tmpfs 238M 0 238M 0% /dev/shm
/dev/sda1 194M 27M 158M 15% /boot
/dev/sr0 3.6G 3.6G 0 100% /media
[root@node02 ~]#
[root@node02 ~]# fdisk /dev/sdb                 # partition the disk (interactive fdisk session omitted)
 
[root@node02 ~]# lsblk                          # view the new partition
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0   20G  0 disk 
├─sda1   8:1    0  200M  0 part /boot
├─sda2   8:2    0   10G  0 part /
└─sda3   8:3    0    1G  0 part [SWAP]
sr0     11:0    1  3.6G  0 rom  /media
sdb      8:16   0    5G  0 disk 
└─sdb1   8:17   0    5G  0 part 
[root@node02 ~]# mkfs.ext4 /dev/sdb1             # format the partition
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
328656 inodes, 1312222 blocks
65611 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1346371584
41 block groups
32768 blocks per group, 32768 fragments per group
8016 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 21 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
[root@node02 ~]# mkdir /scsi               # mount the partition on a new directory, /scsi
[root@node02 ~]# mount /dev/sdb1 /scsi/
[root@node02 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 9.9G 869M 8.5G 10% /
tmpfs 238M 0 238M 0% /dev/shm
/dev/sda1 194M 27M 158M 15% /boot
/dev/sr0 3.6G 3.6G 0 100% /media
/dev/sdb1 5.0G 139M 4.6G 3% /scsi
[root@node02 ~]# cp -r /root/* /scsi/    # write some data as a test
[root@node02 ~]# ll /scsi/
total 36
-rw------- 1 root root 980 Aug 12 00:59 anaconda-ks.cfg
-rw-r--r-- 1 root root 10197 Aug 12 00:59 install.log
-rw-r--r-- 1 root root 3161 Aug 12 00:59 install.log.syslog
drwx------ 2 root root 16384 Aug 12 00:58 lost+found
[root@node02 ~]# df -h /scsi/
Filesystem Size Used Avail Use% Mounted on
/dev/sdb1 5.0G 139M 4.6G 3% /scsi
[root@node02 ~]#
```
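If the file system should come back automatically after a reboot, the mount has to wait until the network and the iscsi service have attached the device, which is what the _netdev mount option is for. A minimal sketch of an /etc/fstab entry for the mount point created above; in practice a UUID= reference is safer than /dev/sdb1, since iSCSI device names are not guaranteed to be stable across reboots.

```bash
# append an fstab entry; _netdev delays the mount until network storage is available
[root@node02 ~]# echo '/dev/sdb1  /scsi  ext4  defaults,_netdev  0 0' >> /etc/fstab
[root@node02 ~]# mount -a    # verify that the entry mounts cleanly
```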

3. Add another client, 192.168.137.103. The procedure is essentially the same as for 192.168.137.102, except that this time the partition does not need to be formatted again.

```bash
[root@node03 ~]# yum -y install iscsi-initiator-utils
[root@node03 ~]#
[root@node03 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.137.101
Starting iscsid: [ OK ]
192.168.137.101:3260,1 iqn.2016-08.cn.node01.www:target4_scan
[root@node03 ~]# yum -y install tree
[root@node03 ~]# tree /var/lib/iscsi/
/var/lib/iscsi/
├── ifaces
├── isns
├── nodes
│   └── iqn.2016-08.cn.node01.www:target4_scan
│       └── 192.168.137.101,3260,1
│           └── default
├── send_targets
│   └── 192.168.137.101,3260
│       ├── iqn.2016-08.cn.node01.www:target4_scan,192.168.137.101,3260,1,default -> /var/lib/iscsi/nodes/iqn.2016-08.cn.node01.www:target4_scan/192.168.137.101,3260,1
│       └── st_config
├── slp
└── static
10 directories, 2 files
[root@node03 ~]# /etc/init.d/iscsid restart
Stopping iscsid:
Starting iscsid: [ OK ]
[root@node03 ~]# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0   20G  0 disk 
├─sda1   8:1    0  200M  0 part /boot
├─sda2   8:2    0   10G  0 part /
└─sda3   8:3    0    1G  0 part [SWAP]
sr0     11:0    1  3.6G  0 rom  /media
[root@node03 ~]# /etc/init.d/iscsi restart
Stopping iscsi: [ OK ]
Starting iscsi: [ OK ]
[root@node03 ~]# lsblk                      # the sdb device and its sdb1 partition are now visible
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0   20G  0 disk 
├─sda1   8:1    0  200M  0 part /boot
├─sda2   8:2    0   10G  0 part /
└─sda3   8:3    0    1G  0 part [SWAP]
sr0     11:0    1  3.6G  0 rom  /media
sdb      8:16   0    5G  0 disk 
└─sdb1   8:17   0    5G  0 part 
[root@node03 ~]#
[root@node03 ~]#
[root@node03 ~]# mkdir /scsi           
[root@node03 ~]# mount /dev/sdb1 /scsi/              # mount
[root@node03 ~]# cd !$
cd /scsi/
[root@node03 scsi]# ll                               # the content written from node02 is visible
total 36
-rw------- 1 root root 980 Aug 12 00:59 anaconda-ks.cfg
-rw-r--r-- 1 root root 10197 Aug 12 00:59 install.log
-rw-r--r-- 1 root root 3161 Aug 12 00:59 install.log.syslog
drwx------ 2 root root 16384 Aug 12 00:58 lost+found
[root@node03 scsi]# cp /etc/passwd /scsi/            # write a new file (passwd) from node03, then check it on node02
[root@node03 scsi]# ll
total 40
-rw------- 1 root root 980 Aug 12 00:59 anaconda-ks.cfg
-rw-r--r-- 1 root root 10197 Aug 12 00:59 install.log
-rw-r--r-- 1 root root 3161 Aug 12 00:59 install.log.syslog
drwx------ 2 root root 16384 Aug 12 00:58 lost+found
-rw-r--r-- 1 root root 901 Aug 12 01:11 passwd
[root@node03 scsi]#
# node02 does not see the passwd file written by node03: ext4 is a single-node file system, so the two
# mounts are not kept in sync (and mounting the same ext4 volume on two nodes at once risks corruption).
# A cluster file system such as GFS keeps the nodes synchronized.
[root@node02 scsi]# ll
total 36
-rw------- 1 root root   980 Aug 12 00:59 anaconda-ks.cfg
-rw-r--r-- 1 root root 10197 Aug 12 00:59 install.log
-rw-r--r-- 1 root root  3161 Aug 12 00:59 install.log.syslog
drwx------ 2 root root 16384 Aug 12 00:58 lost+found
[root@node02 scsi]# 
```
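As a sketch of the cluster-file-system alternative mentioned in the comment above: with a working cluster stack providing DLM locking (cman/corosync, which this article does not set up), the shared LUN could be formatted with GFS2 instead of ext4 and mounted on both nodes at the same time. The cluster name and journal count below are assumptions.

```bash
# requires gfs2-utils and a running cluster with DLM; 'mycluster' is a hypothetical cluster name
[root@node02 ~]# yum -y install gfs2-utils
[root@node02 ~]# mkfs.gfs2 -p lock_dlm -t mycluster:scsi -j 2 /dev/sdb1   # 2 journals for 2 nodes
[root@node02 ~]# mount -t gfs2 /dev/sdb1 /scsi     # then mount it the same way on node03
```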
