Deploying RAID5 + LVM on RHEL 6.4

POC environment:

The lab setup needs three additional disks.

I. Creating the RAID5 array:

[root@localhost ~]# uname -a

Linux localhost.localdomain 2.6.32-358.el6.x86_64 #1 SMP Tue Jan 29 11:47:41 EST 2013 x86_64 x86_64 x86_64 GNU/Linux

 

[root@localhost ~]# cat /etc/redhat-release

Red Hat Enterprise Linux Server release 6.4 (Santiago)

1. Add the disks and partition them. One partition per disk is enough; set the partition type to fd (Linux raid autodetect).

[root@localhost ~]# fdisk /dev/sdb

Command (m for help): t

Selected partition 1

Hex code (type L to list codes): fd

   Device Boot      Start         End      Blocks   Id  System

/dev/sdb1               1        6527    52428096   fd  Linux raid autodetect

[root@localhost ~]# fdisk /dev/sdc

[root@localhost ~]# fdisk /dev/sdd
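For anyone who prefers not to walk through fdisk interactively on each disk, the same layout can be scripted with sfdisk. This is a sketch only, assuming the same three blank lab disks, and it overwrites any existing partition table:

```shell
# One partition spanning each whole disk, type fd (Linux raid autodetect).
# ',,fd' is sfdisk input shorthand: default start, full size, type fd.
# Destructive: replaces the partition table on every listed device.
layout=',,fd'
for disk in /dev/sdb /dev/sdc /dev/sdd; do
    echo "$layout" | sfdisk "$disk"
done
```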

 

2. Create the RAID5 array:


[root@localhost ~]# mdadm  --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1

mdadm: Defaulting to version 1.2 metadata

mdadm: array /dev/md0 started.
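As a sanity check on the size: RAID5 keeps (N − 1) members' worth of data, so three ~50 GB members should give roughly 100 GB usable. A minimal sketch of the arithmetic, using the block count fdisk reported for sdb1 (the real array comes out slightly smaller because of the version 1.2 superblock and data offset):

```shell
# RAID5 usable space = (number of members - 1) * size of one member.
members=3
member_blocks=52428096                        # 1K blocks in sdb1, from fdisk
usable_blocks=$(( (members - 1) * member_blocks ))
echo "$usable_blocks"                         # 104856192 1K blocks, ~100 GB
```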


3. Format the RAID5 array:

[root@localhost ~]# mkfs.ext4  /dev/md0


4. Inspect the array and record it in the config file:

[root@localhost ~]# mdadm  --detail  --scan

ARRAY /dev/md0 metadata=1.2 name=localhost.localdomain:0 UUID=7c870ec1:e16dd689:d786b14a:2f48e7b4

[root@localhost ~]# mdadm  --detail  --scan  >>/etc/mdadm.conf


II. Deploying LVM on top of the RAID5 array

1. Turn the array into a physical volume (PV):

[root@localhost data]# pvcreate  /dev/md0

 

2. Add the PV to a volume group (VG) named vg1:

[root@localhost data]# vgcreate  vg1  /dev/md0

 

3. Carve a 20 GB logical volume named lv1 out of the VG:

[root@localhost data]# lvcreate  -L  20G  -n  lv1  vg1
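lvcreate allocates in physical extents, 4 MB each by default in LVM2, so a 20 GB lv1 corresponds to 5120 extents (the "Current LE" figure lvdisplay would show). A quick sketch of that arithmetic:

```shell
# 20 GB volume divided into LVM2's default 4 MB physical extents.
lv_mib=$(( 20 * 1024 ))      # requested LV size in MiB
pe_mib=4                     # default LVM2 physical extent size
echo $(( lv_mib / pe_mib ))  # 5120 extents
```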

 

4. Format the lv1 volume:

[root@localhost data]# mkfs.ext4  /dev/vg1/lv1


[root@localhost ~]# mkdir  /data

 

[root@localhost ~]# mount  /dev/vg1/lv1  /data

 

[root@localhost ~]# df  -k

Filesystem           1K-blocks      Used Available Use% Mounted on

/dev/mapper/VolGroup-lv_root

                      16102344    966144 14318232   7% /

tmpfs                   247208         0   247208   0% /dev/shm

/dev/sda1               495844     37615   432629   8% /boot

/dev/mapper/vg1-lv1   20642428    176196  19417784   1% /data


5. Make the mount automatic at boot by editing /etc/fstab:

[root@localhost ~]# vi  /etc/fstab
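The line to add is a fragment like the following (defaults are fine for this lab; a non-zero fsck pass order could also be used for a non-root filesystem):

```
/dev/vg1/lv1    /data    ext4    defaults    0 0
```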


6. Verify the mount:

[root@localhost ~]# mount

/dev/mapper/vg1-lv1 on /data type ext4 (rw)

 

[root@localhost ~]# cat  /proc/mdstat

Personalities : [raid6] [raid5] [raid4]

md0 : active raid5 sdd1[3] sdc1[1] sdb1[0]

     104790016 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]

 

unused devices: <none>

Create three files, qq1, qq2, and qq3, under /data.

 

III. Simulating a disk failure:

1. Mark /dev/sdb1 as failed within the array:

[root@localhost ~]# mdadm  /dev/md0  --fail   /dev/sdb1

mdadm: set /dev/sdb1 faulty in /dev/md0

Remove the failed disk from the array:

[root@localhost ~]# mdadm  /dev/md0  --remove  /dev/sdb1

mdadm: hot removed /dev/sdb1 from /dev/md0
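With one member removed, /proc/mdstat would report the array as [3/2] [_UU]: three slots, two active members, the underscore marking the missing one. A small sketch decoding that status field (the status string is hard-coded here for illustration):

```shell
# md prints "[slots/active]" in /proc/mdstat; degraded means active < slots.
status='[3/2]'
slots=$(echo "$status"  | tr -d '[]' | cut -d/ -f1)
active=$(echo "$status" | tr -d '[]' | cut -d/ -f2)
if [ "$active" -lt "$slots" ]; then echo degraded; else echo healthy; fi
```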

2. The files under /data are still intact, and the degraded array remains writable:

[root@localhost data]# ll

total 16

drwx------ 2 root root 16384 Aug 15 03:33 lost+found

-rw-r--r-- 1 root root     0 Aug 15 03:39 qq1

-rw-r--r-- 1 root root     0 Aug 15 03:39 qq2

-rw-r--r-- 1 root root     0 Aug 15 03:39 qq3

[root@localhost data]# touch qq4     (new files can still be created)

[root@localhost data]# ll

total 16

drwx------ 2 root root 16384 Aug 15 19:29 lost+found

-rw-r--r-- 1 root root     0 Aug 15 19:32 qq1

-rw-r--r-- 1 root root     0 Aug 15 19:32 qq2

-rw-r--r-- 1 root root     0 Aug 15 19:32 qq3

-rw-r--r-- 1 root root     0 Aug 15 19:37 qq4

Shut down:

[root@localhost ~]# shutdown -h now

IV. Recovery

1. After the system boots, give the new disk /dev/sdb the same partitioning as /dev/sdc. There is no need to put a filesystem on sdb1; the RAID rebuild will fill it in.


[root@localhost data]# sfdisk  -d  /dev/sdc  |  sfdisk  /dev/sdb

Disk /dev/sdb: 53.7 GB, 53687091200 bytes

255 heads, 63 sectors/track, 6527 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x2482a65f

 

2. Add the new /dev/sdb partition back into the RAID5 array:

[root@localhost data]# mdadm  --manage  /dev/md0  --add  /dev/sdb1

 mdadm: added /dev/sdb1

3. Check the result:

[root@localhost ~]# cat /proc/mdstat

Personalities : [raid6] [raid5] [raid4]

md0 : active raid5 sdb1[4] sdd1[3] sdc1[1]

      41895936 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [_UU]

      [=======>.............]  recovery = 38.8% (8136576/20947968) finish=1.3min speed=157496K/sec


unused devices: <none>
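The progress line above is internally consistent: dividing the blocks still to rebuild by the reported speed reproduces the finish estimate. As a sketch:

```shell
# finish ≈ (total - done) / speed, using the figures from /proc/mdstat above.
total=20947968        # 1K blocks to rebuild per member
done_blocks=8136576   # blocks already rebuilt at 38.8%
speed=157496          # K/sec
echo $(( (total - done_blocks) / speed ))   # 81 seconds, i.e. ~1.3 min
```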

 

The rebuild can also be followed in the kernel log:

 

md: bind<sdb1>

md: recovery of RAID array md0

md: minimum _guaranteed_  speed: 1000 KB/sec/disk.

md: using maximum available idle IO bandwidth  (but not more than 200000 KB/sec) for recovery.

md: using 128k window, over a total of 31438720k.

md: md0: recovery done.

 

 

4. Verify that no files were lost.

There is no need to mount lv1 again; the fstab entry added earlier mounts it automatically at boot.

[root@localhost data]# ll

total 16

drwx------ 2 root root 16384 Aug 15 03:33 lost+found

-rw-r--r-- 1 root root     0 Aug 15 03:39 qq1

-rw-r--r-- 1 root root     0 Aug 15 03:39 qq2

-rw-r--r-- 1 root root     0 Aug 15 03:39 qq3

-rw-r--r-- 1 root root     0 Aug 15 19:37 qq4

 

 

 

 

 

 

