Mirroring the ZFS root pool on Solaris 10 x86

The system was installed onto a single disk, c1t0d0, which forms a storage pool named rpool of about 20 GB, as shown below:
[root@node03 /]# zpool status
config:
        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          c1t0d0s0    ONLINE       0     0     0
[root@node03 /]# zfs list
NAME                     USED  AVAIL  REFER  MOUNTPOINT
rpool                   10.6G  8.93G    36K  /rpool
rpool/ROOT              5.63G  8.93G    21K  legacy
rpool/ROOT/s10u8        5.63G  8.93G  5.40G  /
rpool/ROOT/s10u8@nease  86.5M      -  4.76G  -
rpool/ROOT/s10u8@vxvm    152M      -  5.39G  -
rpool/dump              1.00G  8.93G  1.00G  -
rpool/export              44K  8.93G    23K  /export
rpool/export/home         21K  8.93G    21K  /export/home
rpool/swap                 4G  12.9G  53.5M  -
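Before attaching the second disk it needs a Solaris fdisk partition and a slice 0 at least as large as the one on c1t0d0. A minimal sketch of that preparation, assuming the new mirror disk is c1t8d0 and has not been labeled yet (the same steps are performed later in this post for c1t9d0):
fdisk -B /dev/rdsk/c1t8d0p0
prtvtoc /dev/rdsk/c1t0d0s0 | fmthard -s - /dev/rdsk/c1t8d0s0
fdisk -B creates a single Solaris partition spanning the disk, and the prtvtoc | fmthard pipeline copies the slice table from the boot disk.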
The steps to set up the mirror are as follows:
[root@node03 /]# zpool attach rpool c1t0d0s0 c1t8d0s0
Please be sure to invoke installgrub(1M) to make 'c1t8d0s0' bootable.
[root@node03 /]# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t8d0s0
stage1 written to partition 0 sector 0 (abs 16065)
stage2 written to partition 0, 272 sectors starting at 50 (abs 16115)
After these commands run, zpool status shows the two disks resilvering:
[root@node03 /]# zpool status  
  pool: rpool
state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scrub: resilver in progress for 0h0m, 0.07% done, 1h8m to go
config:
        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c1t0d0s0  ONLINE       0     0     0  4.80M resilvered
            c1t8d0s0  ONLINE       0     0     0
errors: No known data errors
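While the resilver runs, its progress can also be followed without repeatedly pasting the full status; a small sketch using standard zpool subcommands:
zpool status -v rpool
zpool iostat -v rpool 5
zpool iostat with an interval prints per-device bandwidth every five seconds, which makes it easy to watch the mirror being written.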
The resilver has now completed:
[root@node03 /]# zpool status
  pool: rpool
state: ONLINE
scrub: resilver completed after 0h0m with 0 errors on Fri Feb  2 21:48:20 2007
config:
        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c1t0d0s0  ONLINE       0     0     0  61.7M resilvered
            c1t8d0s0  ONLINE       0     0     0
errors: No known data errors
After a reboot, zpool status again shows:
[root@node03 /]# zpool status
  pool: rpool
state: ONLINE
scrub: none requested
config:
        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c1t0d0s0  ONLINE       0     0     0
            c1t8d0s0  ONLINE       0     0     0
errors: No known data errors
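With both halves clean after the reboot, an optional scrub verifies that every block can be read back from both sides of the mirror; a sketch:
zpool scrub rpool
zpool status rpool
The second command shows scrub progress and, eventually, the completion time.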
Now remove one of the disks and check whether the system still boots. Here we remove c1t0d0: in the virtual machine, simply delete the disk, then press F2 at power-on to enter the VMware BIOS and select the disk to boot from. After the disk is removed, zpool status shows:
[root@node03 /]# zpool status
  pool: rpool
state: DEGRADED
status: One or more devices could not be used because the label is missing or
        invalid.  Sufficient replicas exist for the pool to continue
        functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-4J
scrub: none requested
config:
        NAME          STATE     READ WRITE CKSUM
        rpool         DEGRADED     0     0     0
          mirror      DEGRADED     0     0     0
            c1t0d0s0  UNAVAIL      6   140     0  experienced I/O failures
            c1t8d0s0  ONLINE       0     0     0
errors: No known data errors
Next, reboot and change the BIOS boot order:
[Figure 1: VMware BIOS boot device selection screen]
Set VMware Virtual SCSI Hard Drive (0:8) as the boot disk, then save and reboot.
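On x86 there is no OBP boot-device variable, so the boot disk is chosen in the BIOS or VM firmware as above. If in doubt about which root dataset the pool will boot, the bootfs pool property can be checked; a sketch:
zpool get bootfs rpool
After the reboot from the second disk, zpool status reports the pool degraded but still functional: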
[root@node03 /]# zpool status
  pool: rpool
state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
        the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-2Q
scrub: none requested
config:
        NAME          STATE     READ WRITE CKSUM
        rpool         DEGRADED     0     0     0
          mirror      DEGRADED     0     0     0
            c1t0d0s0  UNAVAIL      0     0     0  cannot open
            c1t8d0s0  ONLINE       0     0     0
errors: No known data errors
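The pool keeps running on the surviving half. As a hedged aside (not what this walkthrough does next), if no replacement disk were available the failed half could simply be detached, leaving a clean single-disk pool:
zpool detach rpool c1t0d0s0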
The system boots. Next comes the procedure for replacing the failed disk:
Add a new disk, c1t9d0.
First, partition c1t9d0, assigning the whole disk to a single Solaris partition:
[root@node03 /]# fdisk -b /usr/lib/fs/ufs/mboot /dev/rdsk/c1t9d0p0
No fdisk table exists. The default partition for the disk is:
  a 100% "SOLARIS System" partition
Type "y" to accept the default partition,  otherwise type "n" to edit the
partition table.
y
Copy the slice layout from c1t8d0 to c1t9d0:
[root@node03 /]# prtvtoc /dev/rdsk/c1t8d0s0 |fmthard -s - /dev/rdsk/c1t9d0s0
fmthard:  New volume table of contents now in place.
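To confirm the label was copied correctly, the new disk's VTOC can be printed back; a quick sketch (s2 conventionally covers the whole Solaris partition):
prtvtoc /dev/rdsk/c1t9d0s2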
Then use the zpool replace command to swap the new disk in for the old one:
[root@node03 /]# zpool replace rpool c1t0d0s0 c1t9d0s0
Please be sure to invoke installgrub(1M) to make 'c1t9d0s0' bootable.
Install the boot loader on the new disk:
[root@node03 /]# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t9d0s0
stage1 written to partition 0 sector 0 (abs 16065)
stage2 written to partition 0, 272 sectors starting at 50 (abs 16115)
Then check the resilver status:
[root@node03 /]# zpool status
  pool: rpool
state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scrub: resilver in progress for 0h2m, 25.34% done, 0h6m to go
config:
        NAME            STATE     READ WRITE CKSUM
        rpool           DEGRADED     0     0     0
          mirror        DEGRADED     0     0     0
            replacing   DEGRADED 13.4K     0     0
              c1t0d0s0  UNAVAIL      0     0     0  cannot open
              c1t9d0s0  ONLINE       0     0 13.4K  1.18G resilvered
            c1t8d0s0    ONLINE       0     0     0
errors: No known data errors
[root@node03 /]#
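While waiting for the replace to finish, a terse health check is also available; a sketch:
zpool status -x rpool
It prints a one-line summary and goes back to reporting the pool as healthy once the resilver is done.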
After the resilver completes, the result looks like this:
[root@node03 /]# zpool  status
  pool: rpool
state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-9P
scrub: resilver completed after 0h26m with 0 errors on Fri Feb  2 22:39:04 2007
config:
        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c1t9d0s0  ONLINE       0     0  131K  6.15G resilvered
            c1t8d0s0  ONLINE       0     0     0
errors: No known data errors
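The resilver finished, but the CKSUM column on c1t9d0s0 still records the errors that accumulated while the old device was unreadable. Following the action text in the status above, the counters can be cleared and the mirror re-verified; a sketch:
zpool clear rpool c1t9d0s0
zpool scrub rpool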
 
The same procedure on a SPARC platform:
-bash-3.00# prtvtoc /dev/rdsk/c0t0d0s0|fmthard -s - /dev/rdsk/c0t1d0s0
fmthard:  New volume table of contents now in place.
-bash-3.00# zpool attach rpool c0t0d0s0 c0t1d0s0
Please be sure to invoke installboot(1M) to make 'c0t1d0s0' bootable.
-bash-3.00# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk  /dev/rdsk/c0t1d0s0
-bash-3.00# zpool status
  pool: rpool
state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scrub: resilver in progress for 0h1m, 5.80% done, 0h28m to go
config:
        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c0t0d0s0  ONLINE       0     0     0
            c0t1d0s0  ONLINE       0     0     0  1.29G resilvered
errors: No known data errors
Pull out disk c0t0d0:
AVAILABLE DISK SELECTIONS:
       0. c0t0d0 <drive not available>
          /pci@1d,700000/scsi@4/sd@0,0
       1. c0t1d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
          /pci@1d,700000/scsi@4/sd@1,0
Specify disk (enter its number): ^D
-bash-3.00# zpool status
  pool: rpool
state: DEGRADED
status: One or more devices are faulted in response to persistent errors.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Replace the faulted device, or use 'zpool clear' to mark the device
        repaired.
scrub: none requested
config:
        NAME          STATE     READ WRITE CKSUM
        rpool         DEGRADED     0     0     0
          mirror      DEGRADED     0     0     0
            c0t0d0s0  FAULTED      4    95     1  too many errors
            c0t1d0s0  ONLINE       0     0     0
errors: No known data errors
Change the EEPROM boot device:
-bash-3.00# eeprom "boot-device=disk1 disk net"
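The current value can be checked with eeprom first, and the same change can be made from the OBP ok prompt; a sketch (the disk1 devalias pointing at c0t1d0 is an assumption that varies by machine):
eeprom boot-device
ok setenv boot-device disk1 disk net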
After rebooting:
-bash-3.00# zpool status
  pool: rpool
state: DEGRADED
status: One or more devices are faulted in response to persistent errors.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Replace the faulted device, or use 'zpool clear' to mark the device
        repaired.
scrub: none requested
config:
        NAME          STATE     READ WRITE CKSUM
        rpool         DEGRADED     0     0     0
          mirror      DEGRADED     0     0     0
            c0t0d0s0  FAULTED      0     0     0  too many errors
            c0t1d0s0  ONLINE       0     0     0
errors: No known data errors
OK. If c0t0d0 is plugged back in, running zpool clear rpool c0t0d0s0 is enough to resynchronize it:
-bash-3.00# zpool clear rpool c0t0d0s0
-bash-3.00# zpool status
  pool: rpool
state: ONLINE
scrub: resilver completed after 0h0m with 0 errors on Tue Feb  2 15:58:34 2010
config:
        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c0t0d0s0  ONLINE       0     0     0  11.8M resilvered
            c0t1d0s0  ONLINE       0     0     0
errors: No known data errors
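Once c0t0d0 has resilvered and is bootable again, the earlier boot-device change can be reverted if desired; a sketch, assuming the machine originally used the common default of "disk net":
eeprom "boot-device=disk net"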

This post is from the "candon123" blog; please do not repost.
