Test machine: 10.199.128.69
Machine model
[root@test_raid ~]# dmidecode | grep "Product" Product Name: PowerEdge R720xd Product Name: 068CDY
Manufacturer
[root@test_raid ~]# dmidecode | grep "Manufacturer"
Manufacturer: Dell Inc.
Serial number information
[root@test_raid ~]# dmidecode | grep -B 4 "Serial Number" | more
System Information
Manufacturer: Dell Inc.
Product Name: PowerEdge R720xd
Version: Not Specified
Serial Number: 8V3Q342
--
Base Board Information
Manufacturer: Dell Inc.
Product Name: 068CDY
Version: A01
Serial Number: ..CN779214AR02CC.
CPU information
[root@test_raid ~]# dmidecode | grep "CPU" Socket Designation: CPU1 Version: Intel(R) Xeon(R) CPU E5-2695 v2 @ 2.40GHz Socket Designation: CPU2 Version: Intel(R) Xeon(R) CPU E5-2695 v2 @ 2.40GHz
Number of physical CPUs
[root@test_raid ~]# dmidecode | grep "Socket Designation: CPU" |wc -l 2
Production date (dmidecode reports the BIOS release date here)
[root@test_raid ~]# dmidecode | grep "Date" Release Date: 07/09/2014
BBU charger status
[root@test_raid ~]# megacli -AdpBbuCmd -GetBbuStatus -aALL | grep "Charger Status"
Charger Status: Complete
Charge percentage
[root@test_raid ~]# megacli -AdpBbuCmd -GetBbuStatus -aALL | grep "Relative State of Charge"
Relative State of Charge: 100 %
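A minimal monitoring sketch built on the command above (not from the original notes; the 90 % threshold is an arbitrary example):

#!/bin/bash
# Warn when the BBU relative state of charge falls below a threshold.
threshold=90
charge=$(megacli -AdpBbuCmd -GetBbuStatus -aALL \
         | awk -F: '/Relative State of Charge/ {gsub(/[ %]/,"",$2); print $2}' | head -n1)
if [ "${charge:-0}" -lt "$threshold" ]; then
    echo "WARNING: BBU charge is ${charge}% (below ${threshold}%)"
fi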
Current number of RAID disk groups
[root@test_raid ~]# megacli -cfgdsply -aALL | grep "Number of DISK GROUPS:"
Number of DISK GROUPS: 1
RAID controller information
[root@test_raid ~]# megacli -cfgdsply -aALL | more
==============================================================================
Adapter: 0
Product Name: PERC H710P Mini
Memory: 1024MB
BBU: Present
Serial No: 49F033N
==============================================================================
Number of DISK GROUPS: 1
DISK GROUP: 0
Number of Spans: 1
SPAN: 0
Span Reference: 0x00
Number of PDs: 2
Number of VDs: 1
Number of dedicated Hotspares: 0
Virtual Drive Information:
Virtual Drive: 0 (Target Id: 0)
Name :system_vd
RAID Level : Primary-1, Secondary-0, RAID Level Qualifier-0
Size : 3.637 TB
Mirror Data : 3.637 TB
State : Optimal
Strip Size : 64 KB
Number Of Drives : 2
Span Depth : 1
Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU
Current Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU
Default Access Policy: Read/Write
Current Access Policy: Read/Write
Disk Cache Policy : Disk's Default
Ongoing Progresses:
Background Initialization: Completed 13%, Taken 63 min.
Encryption Type : None
Default Power Savings Policy: Controller Defined
Current Power Savings Policy: None
Can spin up in 1 minute: Yes
LD has drives that support T10 power conditions: Yes
LD's IO profile supports MAX power savings with cached writes: No
Bad Blocks Exist: No
Is VD Cached: Yes
Cache Cade Type : Read Only
Physical Disk Information:
Physical Disk: 0
Enclosure Device ID: 32
Slot Number: 0
Drive's postion: DiskGroup: 0, Span: 0, Arm: 0
Enclosure position: 1
Device Id: 0
WWN: 5000C50062A960D0
Sequence Number: 2
Media Error Count: 0
Other Error Count: 0
Predictive Failure Count: 0
Last Predictive Failure Event Seq Number: 0
PD Type: SAS
Raw Size: 3.638 TB [0x1d1c0beb0 Sectors]
Non Coerced Size: 3.637 TB [0x1d1b0beb0 Sectors]
Coerced Size: 3.637 TB [0x1d1b00000 Sectors]
Firmware state: Online, Spun Up
Device Firmware Level: GS0F
Shield Counter: 0
Successful diagnostics completion on : N/A
SAS Address(0): 0x5000c50062a960d1
SAS Address(1): 0x0
Connected Port Number: 0(path0)
Inquiry Data: SEAGATE ST4000NM0023 GS0FZ1Z6ABTC
FDE Capable: Not Capable
FDE Enable: Disable
Secured: Unsecured
Locked: Unlocked
Needs EKM Attention: No
Foreign State: None
Device Speed: 6.0Gb/s
Link Speed: 6.0Gb/s
Media Type: Hard Disk Device
Drive Temperature :29C (84.20 F)
PI Eligibility: No
Drive is formatted for PI information: No
PI: No PI
Port-0 :
Port status: Active
Port's Linkspeed: 6.0Gb/s
Port-1 :
Port status: Active
Port's Linkspeed: Unknown
Drive has flagged a S.M.A.R.T alert : No
Physical Disk: 1
Enclosure Device ID: 32
Slot Number: 1
Drive's postion: DiskGroup: 0, Span: 0, Arm: 1
Enclosure position: 1
Device Id: 1
WWN: 5000C50062A98C78
Sequence Number: 2
Media Error Count: 0
Other Error Count: 0
Predictive Failure Count: 0
Last Predictive Failure Event Seq Number: 0
PD Type: SAS
Raw Size: 3.638 TB [0x1d1c0beb0 Sectors]
Non Coerced Size: 3.637 TB [0x1d1b0beb0 Sectors]
Coerced Size: 3.637 TB [0x1d1b00000 Sectors]
Firmware state: Online, Spun Up
Device Firmware Level: GS0F
Shield Counter: 0
Successful diagnostics completion on : N/A
SAS Address(0): 0x5000c50062a98c79
SAS Address(1): 0x0
Connected Port Number: 0(path0)
Inquiry Data: SEAGATE ST4000NM0023 GS0FZ1Z6ABD4
FDE Capable: Not Capable
FDE Enable: Disable
Secured: Unsecured
Locked: Unlocked
Needs EKM Attention: No
Foreign State: None
Device Speed: 6.0Gb/s
Link Speed: 6.0Gb/s
Media Type: Hard Disk Device
Drive Temperature :29C (84.20 F)
PI Eligibility: No
Drive is formatted for PI information: No
PI: No PI
Port-0 :
Port status: Active
Port's Linkspeed: 6.0Gb/s
Port-1 :
Port status: Active
Port's Linkspeed: Unknown
Drive has flagged a S.M.A.R.T alert : No
Full per-disk physical information (output omitted here)
[root@test_raid ~]# megacli -PDList -aALL
Current RAID virtual drive information
[root@test_raid ~]# megacli -LDInfo -LALL -aALL
Adapter 0 -- Virtual Drive Information:
Virtual Drive: 0 (Target Id: 0)
Name :system_vd
RAID Level : Primary-1, Secondary-0, RAID Level Qualifier-0
Size : 3.637 TB
Mirror Data : 3.637 TB
State : Optimal
Strip Size : 64 KB
Number Of Drives : 2
Span Depth : 1
Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU
Current Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU
Default Access Policy: Read/Write
Current Access Policy: Read/Write
Disk Cache Policy : Disk's Default
Ongoing Progresses:
Background Initialization: Completed 14%, Taken 64 min.
Encryption Type : None
Default Power Savings Policy: Controller Defined
Current Power Savings Policy: None
Can spin up in 1 minute: Yes
LD has drives that support T10 power conditions: Yes
LD's IO profile supports MAX power savings with cached writes: No
Bad Blocks Exist: No
Is VD Cached: Yes
Cache Cade Type : Read Only
Number of RAID controllers
[root@test_raid ~]# megacli -adpCount
Controller Count: 1.
RAID controller time
[root@test_raid ~]# megacli -AdpGetTime -aALL
Adapter 0:
Date: 12/31/2014
Time: 16:21:15
RAID cache policy
[root@test_raid ~]# megacli -cfgdsply -aALL | grep Polic
Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU
Current Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU
Default Access Policy: Read/Write
Current Access Policy: Read/Write
Disk Cache Policy : Disk's Default
Default Power Savings Policy: Controller Defined
Current Power Savings Policy: None
View the cache policy per virtual drive
[root@test_raid ~]# megacli -LDGetProp -Cache -L0 -a0      <- first virtual drive (VD 0)
[root@test_raid ~]# megacli -LDGetProp -Cache -L1 -a0      <- second virtual drive (VD 1)
[root@test_raid ~]# megacli -LDGetProp -Cache -LALL -a0
Adapter 0-VD 0(target id: 0): Cache Policy:WriteBack, ReadAdaptive, Direct, No Write Cache if bad BBU
Set the cache policy
Cache policy terms:
WT (Write Through)
WB (Write Back)
NORA (No Read Ahead)
RA (Read Ahead)
ADRA (Adaptive Read Ahead)
Cached
Direct

Option syntax for megacli -LDSetProp:
-RW|RO|Blocked|RemoveBlocked | WT|WB|ForcedWB [-Immediate] | RA|NORA | DsblPI | Cached|Direct | -EnDskCache|DisDskCache | CachedBadBBU|NoCachedBadBBU -Lx|-L0,1,2|-Lall -aN|-a0,1,2|-aALL
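For example, switching a virtual drive back to write-back with read-ahead is done one property per -LDSetProp invocation (a sketch using the options listed above; these exact calls were not run on this machine):

megacli -LDSetProp WB -L0 -a0              # write policy: write back
megacli -LDSetProp RA -L0 -a0              # read policy: read ahead
megacli -LDSetProp Cached -L0 -a0          # I/O policy: cached
megacli -LDSetProp NoCachedBadBBU -L0 -a0  # do not cache writes while the BBU is bad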
Set the write policy to write-through
[root@test_raid ~]# megacli -LDSetProp WT -L0 -a0
Set Write Policy to WriteThrough on Adapter 0, VD 0 (target id: 0) success
Set the I/O policy to Direct
[root@test_raid ~]# megacli -LDSetProp -Direct -L0 -a0
Set Cache Policy to Direct on Adapter 0, VD 0 (target id: 0) success
Disable the disk's own cache
[root@test_raid ~]# megacli -LDSetProp -DisDskCache -L0 -a0
Set Disk Cache Policy to Disabled on Adapter 0, VD 0 (target id: 0) success
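To re-enable the disk's own cache later, the counterpart option from the syntax above should work (not run on this machine):

megacli -LDSetProp -EnDskCache -L0 -a0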
Disk inspection methods
Query disk count and serial numbers
[root@test_raid ~]# megacli -PDList -aALL | grep 'Inquiry Data:'
Inquiry Data: SEAGATE ST4000NM0023 GS0FZ1Z6ABTC
Inquiry Data: SEAGATE ST4000NM0023 GS0FZ1Z6ABD4
Inquiry Data: SEAGATE ST4000NM0023 GS0FZ1Z69SFJ
Inquiry Data: SEAGATE ST4000NM0023 GS0FZ1Z6A4Z7
Inquiry Data: SEAGATE ST4000NM0023 GS0FZ1Z6A4X5
Inquiry Data: SEAGATE ST4000NM0023 GS0FZ1Z6A5YG
Inquiry Data: SEAGATE ST4000NM0023 GS0FZ1Z6AB8R
Inquiry Data: SEAGATE ST4000NM0023 GS0FZ1Z6AALM
Inquiry Data: SEAGATE ST4000NM0023 GS0FZ1Z6A4N0
Inquiry Data: SEAGATE ST4000NM0023 GS0FZ1Z6A51S
Inquiry Data: SEAGATE ST4000NM0023 GS0FZ1Z69ST5
Inquiry Data: SEAGATE ST4000NM0023 GS0FZ1Z6A4V1
[root@test_raid ~]# megacli -PDList -aALL | grep WWN
WWN: 5000C50062A960D0
WWN: 5000C50062A98C78
WWN: 5000C50062A9AF54
WWN: 5000C50062A98F30
WWN: 5000C50062A993AC
WWN: 5000C50062A93EA4
WWN: 5000C50062A9998C
WWN: 5000C50062A9CB4C
WWN: 5000C50062A9B52C
WWN: 5000C50062A98CB0
WWN: 5000C50062A99CF0
WWN: 5000C50062A99990
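Either listing can be turned into a plain disk count (a convenience one-liner, not part of the original capture):

megacli -PDList -aALL | grep -c "Inquiry Data:"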
Check the enclosure device ID. Note: this ID is what identifies the disks in commands (the 32 in [32:x]).
[root@test_raid ~]# megacli -PDlist -aALL | grep "ID" | uniq Enclosure Device ID: 32
Check the current RAID groups and the disks belonging to each group
[root@test_raid ~]# megacli -cfgdsply -aALL | grep -E "DISK\ GROUP|Slot\ Number"
Number of DISK GROUPS: 1
DISK GROUP: 0
Slot Number: 0
Slot Number: 1
Query the slot numbers of all disks and check whether any disk is faulty (in this example the sixth disk, Slot Number 5, has a problem)
[root@test_raid ~]# megacli -PDList -aALL | grep -E "Drive\:\ \ Not\ Supported|Slo" Slot Number: 0 Slot Number: 1 Slot Number: 2 Slot Number: 3 Slot Number: 4 Slot Number: 5 Drive: Not Supported Slot Number: 6 Slot Number: 7 Slot Number: 8 Slot Number: 9 Slot Number: 10 Slot Number: 11
Before creating a RAID, check whether any disk carries a foreign configuration; if so, it must be cleared (foreign = a newly added disk that was previously part of another RAID and needs to be initialized)
[root@test_raid ~]# megacli -PDlist -aALL | grep "Foreign State" Foreign State: None Foreign State: None Foreign State: None Foreign State: None Foreign State: None Foreign State: None (Foreign) 加入具有 foreign 配置, 则显示该配置 (对应上一个命令中 Slot Number: 5磁盘) Foreign State: None Foreign State: None Foreign State: None Foreign State: None Foreign State: None Foreign State: None
Mark the disk flagged as Foreign above as Unconfigured(good)
[root@test_raid ~]# megacli -PDMakeGood -PhysDrv[32:5] -a0
Adapter: 0: Failed to change PD state at EnclId-32 SlotId-5.    [the drive is not actually in a Foreign state here, hence the error]
Exit Code: 0x01
Scan for foreign configuration (and clear it if present)
[root@test_raid ~]# megacli -CfgForeign -Scan -a0
There is no foreign configuration on controller 0.
Exit Code: 0x00
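If the scan does report a foreign configuration, it can be cleared before building a new RAID (this clear step was not needed, and therefore not run, on this machine):

megacli -CfgForeign -Clear -a0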
Create a RAID 0 (3 disks)
[root@test_raid ~]# megacli -CfgLdAdd -r0 [32:2,32:3,32:4] WB Direct -a0
Adapter 0: Created VD 1
Adapter 0: Configured the Adapter!!
Exit Code: 0x00
Check the RAID groups and their disks
[root@test_raid ~]# megacli -cfgdsply -aALL | grep -E "DISK\ GROUP|Slot\ Number|RAID\ Level|Target"
Number of DISK GROUPS: 2
DISK GROUP: 0
Virtual Drive: 0 (Target Id: 0)    [virtual drive ID, used when deleting]
RAID Level : Primary-1, Secondary-0, RAID Level Qualifier-0    [Primary-1 = RAID 1]
Slot Number: 0
Slot Number: 1
DISK GROUP: 1
Virtual Drive: 1 (Target Id: 1)
RAID Level : Primary-0, Secondary-0, RAID Level Qualifier-0    [Primary-0 = RAID 0]
Slot Number: 2
Slot Number: 3
Slot Number: 4
Check whether the disks are still rebuilding
[root@test_raid ~]# megacli -PDRbld -ProgDsply -PhysDrv [32:3,32:2,32:4] -aALL
Device(Encl-32 Slot-3) is not in rebuild process
Device(Encl-32 Slot-2) is not in rebuild process
Device(Encl-32 Slot-4) is not in rebuild process
Delete a RAID (virtual drive)
[root@test_raid ~]# megacli -CfgLdDel -L1 -a0
Virtual Disk is associate with Cache Cade. Please Use force option to delete    <- the -force option is required
[root@test_raid ~]# megacli -CfgLdDel -L1 -force -a0
Adapter 0: Deleted Virtual Drive-1(target id-1)
Exit Code: 0x00
RAID 1 management
Create a RAID 1 from two disks
[root@test_raid ~]# megacli -CfgLdAdd -r1 [32:5,32:6] WB Direct -a0
Adapter 0: Created VD 2
Adapter 0: Configured the Adapter!!
Exit Code: 0x00
Verification
[root@test_raid ~]# megacli -cfgdsply -aALL | grep -E "DISK\ GROUP|Slot\ Number|RAID\ Level|Target"
Number of DISK GROUPS: 3
DISK GROUP: 0
Virtual Drive: 0 (Target Id: 0)
RAID Level : Primary-1, Secondary-0, RAID Level Qualifier-0
Slot Number: 0
Slot Number: 1
DISK GROUP: 1
Virtual Drive: 1 (Target Id: 1)
RAID Level : Primary-0, Secondary-0, RAID Level Qualifier-0
Slot Number: 2
Slot Number: 3
Slot Number: 4
DISK GROUP: 2
Virtual Drive: 2 (Target Id: 2)
RAID Level : Primary-1, Secondary-0, RAID Level Qualifier-0
Slot Number: 5
Slot Number: 6
Deletion works the same as above
[root@test_raid ~]# megacli -CfgLdDel -L2 -force -a0
Create a RAID 5 from three disks with one dedicated hot spare
[root@test_raid ~]# megacli -CfgLdAdd -r5 [32:7,32:8,32:9] WB Direct -Hsp[32:10] -a0
Adapter 0: Created VD 3
Adapter: 0: Set Physical Drive at EnclId-32 SlotId-10 as Hot Spare Success.
Adapter 0: Configured the Adapter!!
Exit Code: 0x00
RAID creation completes immediately; right after creation the new device is visible to the operating system
[root@test_raid ~]# megacli -PDRbld -ProgDsply -PhysDrv [32:7,32:8,32:9] -aALL
Device(Encl-32 Slot-7) is not in rebuild process
Device(Encl-32 Slot-8) is not in rebuild process
Device(Encl-32 Slot-9) is not in rebuild process
Query the size of the RAID 5 virtual drive (three 4 TB disks in RAID 5 yield (3 - 1) x 4 TB, roughly 8 TB usable, matching the fdisk output below)
[root@test_raid ~]# fdisk -l /dev/sdd
Disk /dev/sdd: 8000.5 GB, 8000450330624 bytes
255 heads, 63 sectors/track, 972666 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
How to query hot spare disks
[root@test_raid ~]# megacli -PDList -aALL | grep -E "DISK\ GROUP|Slot\ Number|postion:|Firmware\ state:" Slot Number: 0 <- 磁盘序号 Drive's postion: DiskGroup: 0, Span: 0, Arm: 0 <- DiskGroup: 0 标注当前属于那个 RAID 组 Firmware state: Online, Spun Up <- Online 标注磁盘当前在线 Slot Number: 1 Drive's postion: DiskGroup: 0, Span: 0, Arm: 1 Firmware state: Online, Spun Up Slot Number: 2 Drive's postion: DiskGroup: 1, Span: 0, Arm: 0 Firmware state: Online, Spun Up Slot Number: 3 Drive's postion: DiskGroup: 1, Span: 0, Arm: 1 Firmware state: Online, Spun Up Slot Number: 4 Drive's postion: DiskGroup: 1, Span: 0, Arm: 2 Firmware state: Online, Spun Up Slot Number: 5 Drive's postion: DiskGroup: 2, Span: 0, Arm: 0 Firmware state: Online, Spun Up Slot Number: 6 Drive's postion: DiskGroup: 2, Span: 0, Arm: 1 Firmware state: Online, Spun Up Slot Number: 7 Drive's postion: DiskGroup: 3, Span: 0, Arm: 0 Firmware state: Online, Spun Up Slot Number: 8 Drive's postion: DiskGroup: 3, Span: 0, Arm: 1 Firmware state: Online, Spun Up Slot Number: 9 Drive's postion: DiskGroup: 3, Span: 0, Arm: 2 Firmware state: Online, Spun Up Slot Number: 10 Firmware state: Hotspare, Spun Up <- HotSpare 表示当前为热盘 Slot Number: 11 Firmware state: Unconfigured(good), Spun Up
RAID 5 online expansion [failed]
[root@test_raid ~]# megacli -LDRecon -Start -r5 -Add -PhysDrv[32:11] -L3 -a0
Failed to Start Reconstruction of Virtual Drive.
FW error description:
The requested virtual drive operation cannot be performed because consistency check is in progress.
Exit Code: 0x17
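The firmware refused the expansion because a consistency check was running on the virtual drive. A possible follow-up (not run on this machine) is to look at the consistency-check progress and, if interrupting it is acceptable, abort it and retry the reconstruction:

megacli -LDCC -ShowProg -LALL -a0                          # show consistency-check progress
megacli -LDCC -Abort -L3 -a0                               # abort the check on VD 3 (only if acceptable)
megacli -LDRecon -Start -r5 -Add -PhysDrv[32:11] -L3 -a0   # retry the expansion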
Simulating a failed disk: when a disk fails, it needs to be set OFFLINE
[root@test_raid ~]# megacli -PDOffline -PhysDrv [32:9] -a0
Adapter: 0: EnclId-32 SlotId-9 state changed to OffLine.
Exit Code: 0x00
Once the disk in slot 9 is set OFFLINE, the hot spare automatically starts a rebuild
[root@test_raid ~]# megacli -PDList -aALL | grep -E "DISK\ GROUP|Slot\ Number|postion:|Firmware\ state:" Drive's postion: DiskGroup: 3, Span: 0, Arm: 0 Firmware state: Online, Spun Up Slot Number: 8 Drive's postion: DiskGroup: 3, Span: 0, Arm: 1 Firmware state: Online, Spun Up Slot Number: 9 Firmware state: Unconfigured(good), Spun Up <- offline 操作后状态会自动修改 Slot Number: 10 Drive's postion: DiskGroup: 3, Span: 0, Arm: 2 Firmware state: Rebuild <--- 自动进行 rebuild 状态
Query rebuild status
[root@test_raid ~]# megacli -PDRbld -ProgDsply -PhysDrv [32:10] -aALL
Rebuild progress of physical drives...
    Enclosure:Slot    Percent Complete                                       Time Elps
        032 :10       ***********************00 %***********************    00:03:44
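-ProgDsply is interactive; for unattended polling, a loop around -ShowProg can be used (a sketch; it assumes the -ShowProg output contains the word "Completed" while a rebuild is running):

#!/bin/bash
# Poll the rebuild progress of enclosure 32, slot 10 once a minute.
while true; do
    out=$(megacli -PDRbld -ShowProg -PhysDrv "[32:10]" -a0)
    echo "$out"
    echo "$out" | grep -q "Completed" || break   # stop once the drive is no longer rebuilding
    sleep 60
done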
Put the disk in slot 9 back into service as a dedicated hot spare
[root@test_raid ~]# megacli -PDHSP -Set -Dedicated -Array3 -physdrv[32:9] -a0
Adapter: 0: Set Physical Drive at EnclId-32 SlotId-9 as Hot Spare Success.
Exit Code: 0x00
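The counterpart command removes a hot-spare assignment again (not run on this machine):

megacli -PDHSP -Rmv -PhysDrv[32:9] -a0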
Check the status again (same megacli -PDList filter as above)
Firmware state: Online, Spun Up
Slot Number: 7
Drive's postion: DiskGroup: 3, Span: 0, Arm: 0
Firmware state: Online, Spun Up
Slot Number: 8
Drive's postion: DiskGroup: 3, Span: 0, Arm: 1
Firmware state: Online, Spun Up
Slot Number: 9
Firmware state: Hotspare, Spun Up
Slot Number: 10
Drive's postion: DiskGroup: 3, Span: 0, Arm: 2
Firmware state: Rebuild
Blink the locate LED of a faulty disk
[root@test_raid ~]# megacli -PdLocate -start -physdrv[32:11] -a0
Adapter: 0: Device at EnclId-32 SlotId-11 -- PD Locate Start Command was successfully sent to Firmware
Exit Code: 0x00
Stop the locate LED
[root@test_raid ~]# megacli -PdLocate -stop -physdrv[32:11] -a0
Adapter: 0: Device at EnclId-32 SlotId-11 -- PD Locate Stop Command was successfully sent to Firmware
Exit Code: 0x00
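A small helper sketch combining the two commands (added here for convenience; the 300-second duration and the slot argument are arbitrary examples), useful when a technician needs a few minutes to find the drive:

#!/bin/bash
# Usage: ./locate.sh <slot>   (enclosure ID 32 assumed, as on this machine)
slot=$1
megacli -PdLocate -start -physdrv "[32:${slot}]" -a0
sleep 300                                   # keep the LED blinking for 5 minutes
megacli -PdLocate -stop -physdrv "[32:${slot}]" -a0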
Set the boot virtual drive
megacli -AdpBootDrive -set -L0 -a0
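To verify the setting afterwards, the read counterpart should work (assumed from the MegaCLI -AdpBootDrive syntax; not captured in these notes):

megacli -AdpBootDrive -get -a0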