lv & fs

A recent project used HDS's multipath software, HDLM, and creating LVM volumes on top of it took some real effort.
The following describes how to create LVM on HDLM under Red Hat AS 5.5.


1. Modify /etc/lvm/lvm.conf
Edit /etc/lvm/lvm.conf and add the following two lines:
filter = [ "a|sddlm[a-p][a-p].*|", "r|/dev/sd|" ]
types = [ "sddlmfdrv", 16 ]
Comment out the existing types line:
#types = [ "fd", 16 ]

Reboot the machine after making these changes.
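The edits above can also be scripted with sed. The sketch below operates on a small stand-in copy of lvm.conf (the two-line devices section is fabricated for illustration) rather than the real /etc/lvm/lvm.conf, so it can be tried safely:

```shell
# Stand-in lvm.conf fragment so the sed edits can be tested without root.
conf=$(mktemp)
cat > "$conf" <<'EOF'
devices {
    filter = [ "a/.*/" ]
    types = [ "fd", 16 ]
}
EOF

# Replace the default filter with the HDLM one, then comment out the stock
# types line and add the sddlmfdrv entry ("@" as delimiter because the
# filter value itself contains "|").
sed -i \
  -e 's@filter = .*@filter = [ "a|sddlm[a-p][a-p].*|", "r|/dev/sd|" ]@' \
  -e 's@types = \[ "fd", 16 \]@# types = [ "fd", 16 ]\n    types = [ "sddlmfdrv", 16 ]@' \
  "$conf"
cat "$conf"
```

On the real system, point `conf` at /etc/lvm/lvm.conf (after backing it up) and reboot as described above.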

2. Start lvm2-monitor
chkconfig shows this service as enabled, but on a freshly installed system it is not actually running and must be started by hand.
[root@dwdb01 init.d]# chkconfig --list|grep lvm
lvm2-monitor 0:off 1:on 2:on 3:on 4:on 5:on 6:off

/etc/init.d/lvm2-monitor start

Both steps above must be completed; otherwise errors like the following appear:
......
/dev/sddlmbl: open failed: No such device or address
/dev/sddlmal: open failed: No such device or address
Found duplicate PV xbK95vugi2xImNWNjm3V5DKyBMBVLs2k: using /dev/sdnp1 not /dev/sdno1
......

3. Create the PVs
A PV can be created on a whole LUN or on a single partition. If you create it on a partition, it is recommended to set the partition type to 8e (Linux LVM).
pvcreate /dev/sddlmmn 
pvcreate /dev/sddlmmo 
pvcreate /dev/sddlmmp 
pvcreate /dev/sddlmna 
pvcreate /dev/sddlmnb 
pvcreate /dev/sddlmnc 
pvcreate /dev/sddlmnd 
pvcreate /dev/sddlmne 
pvcreate /dev/sddlmkah
pvcreate /dev/sddlmkai
pvcreate /dev/sddlmkaj
pvcreate /dev/sddlmkak
pvcreate /dev/sddlmkal
pvcreate /dev/sddlmkam
pvcreate /dev/sddlmkan
pvcreate /dev/sddlmkao
pvcreate /dev/sddlmibp
pvcreate /dev/sddlmjba
pvcreate /dev/sddlmjbb
pvcreate /dev/sddlmjbc
pvcreate /dev/sddlmjbd
pvcreate /dev/sddlmjbe
pvcreate /dev/sddlmjbf
pvcreate /dev/sddlmjbg

This creates 24 PVs. Use pvdisplay to inspect them.
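The 24 pvcreate calls above can be collapsed into a loop. pvcreate needs root and the real HDLM devices, so this sketch only echoes the commands (drop the echo on the real system):

```shell
# Device names taken from the pvcreate list above.
devices="sddlmmn sddlmmo sddlmmp sddlmna sddlmnb sddlmnc sddlmnd sddlmne
sddlmkah sddlmkai sddlmkaj sddlmkak sddlmkal sddlmkam sddlmkan sddlmkao
sddlmibp sddlmjba sddlmjbb sddlmjbc sddlmjbd sddlmjbe sddlmjbf sddlmjbg"

count=0
for dev in $devices; do
  echo pvcreate "/dev/$dev"   # remove "echo" to actually create the PV
  count=$((count + 1))
done
echo "$count PVs planned"
```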

4. Create the VG
[root@dwdb01 ~]# vgcreate -s 256M vg_ams1 /dev/sddlmmn /dev/sddlmmo /dev/sddlmmp /dev/sddlmna /dev/sddlmnb /dev/sddlmnc /dev/sddlmnd /dev/sddlmne /dev/sddlmkah /dev/sddlmkai /dev/sddlmkaj /dev/sddlmkak /dev/sddlmkal /dev/sddlmkam /dev/sddlmkan /dev/sddlmkao /dev/sddlmibp /dev/sddlmjba /dev/sddlmjbb /dev/sddlmjbc /dev/sddlmjbd /dev/sddlmjbe /dev/sddlmjbf /dev/sddlmjbg
Volume group "vg_ams1" successfully created

The maximum size of a single LV is (PE size × 65535 extents), and the PVs here total close to 10 TB, so the PE size is set to 256 MB, which allows an LV of up to 16 TB.
After creation, use vgdisplay to inspect the VG.

[root@dwdb01 ~]# vgdisplay 
--- Volume group ---
VG Name vg_ams1
System ID 
Format lvm2
Metadata Areas 24
Metadata Sequence No 1
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 24
Act PV 24
VG Size 9.37 TB
PE Size 256.00 MB
Total PE 38376
Alloc PE / Size 0 / 0 
Free PE / Size 38376 / 9.37 TB
VG UUID uOCjon-I9E6-yAxU-v4y8-XlFQ-ghnV-L633fv
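The PE-size arithmetic can be checked directly: with the 65535-extent ceiling cited above, a 256 MB PE gives roughly 16 TB per LV.

```shell
# max LV size = PE size * max extents (65535), here expressed in GB.
pe_mb=256
max_extents=65535
max_gb=$(( pe_mb * max_extents / 1024 ))
echo "max LV size with 256MB PE: ${max_gb} GB (~16 TB)"
```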

5. Activate the VG
[root@dwdb01 ~]# vgchange -a y vg_ams1
0 logical volume(s) in volume group "vg_ams1" now active


6. Create the LVs

[root@dwdb01 ~]# lvcreate -i 24 -I 1024 -l 19200 -n lv_fs1 vg_ams1
Logical volume "lv_fs1" created

[root@dwdb01 ~]# lvcreate -i 24 -I 1024 -l 19176 -n lv_fs2 vg_ams1
Logical volume "lv_fs2" created

-i: number of stripes (how many PVs to spread across)
-I: stripe size, in KB
-l: number of LEs; must be a multiple of the -i value
-n: LV name

Since there are 24 PVs in total, the stripe count is set to the number of PVs (24) so that I/O spans all the disks; because the workload is large sequential data, the stripe size is set to 1 MB.

Because a single LV formatted as ext3 cannot exceed 8 TB here, the VG is split into two LVs.
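The lvcreate numbers can be sanity-checked: each -l value must be a multiple of the stripe count (-i 24), and the two LVs together should consume all 38376 free PEs reported by vgdisplay.

```shell
stripes=24
le1=19200   # -l for lv_fs1
le2=19176   # -l for lv_fs2
echo "lv_fs1 remainder: $(( le1 % stripes )), lv_fs2 remainder: $(( le2 % stripes ))"
echo "total extents: $(( le1 + le2 ))"
```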

Use lvdisplay to view the LVs:
[root@dwdb01 ~]# lvdisplay
--- Logical volume ---
LV Name /dev/vg_ams1/lv_fs1
VG Name vg_ams1
LV UUID hBrR0h-Fkte-QNHF-v7zD-VXK1-JVTZ-6IOBtI
LV Write Access read/write
LV Status available
# open 0
LV Size 4.69 TB
Current LE 19200
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 98304
Block device 253:0

--- Logical volume ---
LV Name /dev/vg_ams1/lv_fs2
VG Name vg_ams1
LV UUID MXbade-Vjhm-htYE-uJY8-8JS6-7F1v-EaIxC6
LV Write Access read/write
LV Status available
# open 0
LV Size 4.68 TB
Current LE 19176
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 98304
Block device 253:1

7. Create the filesystems

ext3 is used here.

[root@dwdb01 ~]# mke2fs -j -b 4096 /dev/vg_ams1/lv_fs1
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
629145600 inodes, 1258291200 blocks
62914560 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
38400 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks: 
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968, 
102400000, 214990848, 512000000, 550731776, 644972544

Writing inode tables: done 
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 25 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.

[root@dwdb01 ~]# mke2fs -j -b 4096 /dev/vg_ams1/lv_fs2
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
628359168 inodes, 1256718336 blocks
62835916 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
38352 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks: 
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968, 
102400000, 214990848, 512000000, 550731776, 644972544

Writing inode tables: done 
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 36 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
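The block count mke2fs reported for lv_fs1 can be cross-checked against the LV geometry: 19200 LEs of 256 MiB each, divided by the 4 KiB filesystem block size.

```shell
le=19200      # LEs in lv_fs1
pe_mib=256    # PE size in MiB
blocks=$(( le * pe_mib * 1024 / 4 ))   # 4 KiB blocks
echo "$blocks blocks"
```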


8. Adjust filesystem properties
By default, a full filesystem check is forced at boot every 180 days or after a certain number of mounts (25 and 36 above). These filesystems are so large that a boot-time check would take unacceptably long, so disable the automatic checks and run fsck manually when needed.

[root@dwdb01 ~]# tune2fs -c -1 -i -1 /dev/vg_ams1/lv_fs1
tune2fs 1.39 (29-May-2006)
Setting maximal mount count to -1
Setting interval between checks to 18446744073709465216 seconds
[root@dwdb01 ~]# tune2fs -c -1 -i -1 /dev/vg_ams1/lv_fs2
tune2fs 1.39 (29-May-2006)
Setting maximal mount count to -1
Setting interval between checks to 18446744073709465216 seconds

9. Mount the filesystems
mkdir /ams_fs1
mkdir /ams_fs2

mount -t ext3 /dev/vg_ams1/lv_fs1 /ams_fs1
mount -t ext3 /dev/vg_ams1/lv_fs2 /ams_fs2

10. Update the configuration
So that the mounts persist across reboots, add the following two lines to /etc/fstab:

/dev/vg_ams1/lv_fs1 /ams_fs1 ext3 defaults 1 2
/dev/vg_ams1/lv_fs2 /ams_fs2 ext3 defaults 1 2
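A quick syntax check on the two new entries: every fstab line needs the six standard fields (device, mount point, type, options, dump, fsck pass).

```shell
printf '%s\n' \
  '/dev/vg_ams1/lv_fs1 /ams_fs1 ext3 defaults 1 2' \
  '/dev/vg_ams1/lv_fs2 /ams_fs2 ext3 defaults 1 2' > /tmp/fstab.add
bad=$(awk 'NF != 6' /tmp/fstab.add | wc -l)
echo "malformed lines: $bad"
```

On the real system, `mount -a` after editing /etc/fstab is a further check that the entries mount cleanly.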
