Fixing "[FAILED] Failed to mount /sysroot" on an SSD

While testing an SSD on a PCIe interface, the system failed to boot, stopping with a [FAILED] Failed to mount /sysroot error:

[  OK  ] Reached target Remote File Systems (Pre).
[  OK  ] Reached target Remote File Systems.
[8.901016] SGI XFS with ACLs, security attributes, no debug enabled
[8.912417] XFS (nvme0n1p3): Mounting V5 Filesystem
[8.960550] XFS (nvme0n1p3): Starting recovery (logdev: internal)
[8.973696] XFS (nvme0n1p3): Metadata corruption detected at __write_verify+0xfd/0x110 [xfs], xfs_dir3_leaf1 block 0x1900a00
[8.984887] XFS (nvme0n1p3): Unmount and run xfs_repair
[8.990104] XFS (nvme0n1p3): First 64 bytes of corrupted metadata buffer:
[8.996876] ffff9d79d0a6e000: 00 00 00 00 00 00 00 00 3d f1 00 00 17 bf 25 cb  ........=.....%.
[9.005544] ffff9d79d0a6e010: 00 00 00 00 01 90 0a 00 00 00 00 26 00 00 a2 aa  ...........&....
[9.014214] ffff9d79d0a6e020: 11 51 07 ca d1 a4 47 1a a2 a5 c6 99 86 e3 c6 6d  .Q....G........m
[9.022885] ffff9d79d0a6e030: 00 00 00 00 02 00 00 42 01 f0 00 2d 00 00 00 00  .......B...-....
[9.031560] XFS (nvme0n1p3): xfs_do_force_shutdown(0x8) called from line 1391 of file fs/xfs/xfs_buf.c.  Return address = 0xffffffffc063afb7
[9.044120] XFS (nvme0n1p3): Corruption of in-memory data detected.  Shutting down filesystem
[9.052618] XFS (nvme0n1p3): Please umount the filesystem and rectify the problem(s)
[9.060415] XFS (nvme0n1p3): log mount/recovery failed: error -117
[9.066595] XFS (nvme0n1p3): log mount failed
[FAILED] Failed to mount /sysroot.
See 'systemctl status sysroot.mo
Generating "/run/initramfs/rdsosreport.txt"

A search online turned up the following fix:

xfs_repair -v -L /dev/dm-0

Run as-is, it failed, apparently because the device name was wrong for this system: the root filesystem lives directly on the NVMe partition, not on a device-mapper volume. Changing dm-0 to nvme0n1p3 made the repair run successfully, as shown after the short device-check sketch below.
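
A minimal way to confirm which block device actually holds the filesystem before running a destructive repair (a sketch; lsblk and blkid are usually present in a dracut initramfs, and the output shown by them identifies the XFS partition):

:/# lsblk -o NAME,SIZE,TYPE,FSTYPE    # list block devices and their filesystems
:/# blkid | grep xfs                  # show the XFS partition(s) by UUID and type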

:/# xfs_repair -v -L /dev/nvme0n1p3
Phase 1 - find and verify superblock...
        - block cache size set to 3003408 entries
Phase 2 - using internal log
        - zero log...
zero_log: head block 42497 tail block 42014
ALERT: The filesystem has valuable metadata changes in a log which is being
destroyed because the -L option was used.
        - scan filesystem freespace and inode maps...
out-of-order bno btree record 9 (289310 2) block 0/1
block (0,289310-289310) multiply claimed by bno space tree, state - 1
agi unlinked bucket 2 is 1214018 in ag 2 (inode=68322882)
agi unlinked bucket 10 is 1215178 in ag 2 (inode=68324042)
agi unlinked bucket 13 is 1140429 in ag 2 (inode=68249293)
agi unlinked bucket 14 is 1213902 in ag 2 (inode=68322766)
agi unlinked bucket 15 is 1215183 in ag 2 (inode=68324047)
agi unlinked bucket 17 is 1140433 in ag 2 (inode=68249297)
agi unlinked bucket 18 is 1215186 in ag 2 (inode=68324050)
agi unlinked bucket 20 is 1140436 in ag 2 (inode=68249300)
agi unlinked bucket 26 is 157018 in ag 2 (inode=67265882)
agi unlinked bucket 29 is 1214045 in ag 2 (inode=68322909)
agi unlinked bucket 30 is 1214430 in ag 2 (inode=68323294)
agi unlinked bucket 37 is 1213989 in ag 2 (inode=68322853)
agi unlinked bucket 39 is 1213991 in ag 2 (inode=68322855)
agi unlinked bucket 40 is 1213992 in ag 2 (inode=68322856)
agi unlinked bucket 41 is 1213993 in ag 2 (inode=68322857)
agi unlinked bucket 44 is 1213996 in ag 2 (inode=68322860)
agi unlinked bucket 49 is 1214001 in ag 2 (inode=68322865)
agi unlinked bucket 54 is 1214006 in ag 2 (inode=68322870)
sb_ifree 254, counted 228
sb_fdblocks 10399804, counted 10408087
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
data fork in ino 1797907 claims free block 238369
imap claims a free inode 2180228 is in use, correcting imap and clearing inode
cleared inode 2180228
        - agno = 1
imap claims in-use inode 33968729 is free, correcting imap
        - agno = 2
        - agno = 3
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
entry "default.hostsfile" in shortform directory 565517 references free inode 2180236
junking entry "default.hostsfile" in directory inode 565517
Phase 5 - rebuild AG headers and trees...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
entry "tmp" in directory inode 100786852 not consistent with .. value (34441335) in inode 2180231,
junking entry
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
disconnected dir inode 2180231, moving to lost+found
disconnected inode 67265882, moving to lost+found
disconnected inode 67313620, moving to lost+found
disconnected inode 68249293, moving to lost+found
disconnected inode 68249297, moving to lost+found
disconnected inode 68249300, moving to lost+found
disconnected inode 68322766, moving to lost+found
disconnected inode 68322853, moving to lost+found
disconnected inode 68322855, moving to lost+found
disconnected inode 68322856, moving to lost+found
disconnected inode 68322857, moving to lost+found
disconnected inode 68322860, moving to lost+found
disconnected inode 68322865, moving to lost+found
disconnected inode 68322870, moving to lost+found
disconnected inode 68322882, moving to lost+found
disconnected inode 68322909, moving to lost+found
disconnected inode 68322910, moving to lost+found
disconnected inode 68323294, moving to lost+found
disconnected inode 68324042, moving to lost+found
disconnected inode 68324047, moving to lost+found
disconnected inode 68324050, moving to lost+found
Phase 7 - verify and correct link counts...
resetting inode 33554498 nlinks from 451 to 452
resetting inode 100786852 nlinks from 3 to 2
resetting inode 2180228 nlinks from 2 to 3
resetting inode 2180237 nlinks from 1 to 2
Maximum metadata LSN (38:42014) is ahead of log (1:2).
Format log to cycle 41.

        XFS_REPAIR Summary    Thu Jan 19 06:06:46 2017

Phase		Start		End		Duration
Phase 1:	01/19 06:06:46	01/19 06:06:46	
Phase 2:	01/19 06:06:46	01/19 06:06:46	
Phase 3:	01/19 06:06:46	01/19 06:06:46	
Phase 4:	01/19 06:06:46	01/19 06:06:46	
Phase 5:	01/19 06:06:46	01/19 06:06:46	
Phase 6:	01/19 06:06:46	01/19 06:06:46	
Phase 7:	01/19 06:06:46	01/19 06:06:46	

Total run time: 
done
:/# 
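
Note that -L destroys the metadata log, so any changes that existed only in the log are lost; xfs_repair printed an ALERT about exactly this above, and Phases 6 and 7 then moved a number of disconnected inodes to lost+found. When the situation allows, a more cautious sequence is to dry-run first and fall back to -L only if the log really cannot be replayed (a sketch):

:/# xfs_repair -n /dev/nvme0n1p3    # no-modify mode: report problems without writing anything
:/# xfs_repair /dev/nvme0n1p3       # normal repair; use -L only if this refuses to run because of a dirty log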

Then run xfs_repair again, this time without -L, to confirm the filesystem is now clean (the shell's working directory is /, so the relative path dev/nvme0n1p3 resolves to /dev/nvme0n1p3):

:/# xfs_repair dev/nvme0n1p3

The output:

Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
Phase 5 - rebuild AG headers and trees...
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...
done
:/# 

The mount now succeeds as well:

:/# mount -o rw dev/nvme0n1p3 /data0
[  853.650067] XFS (nvme0n1p3): Mounting V5 Filesystem
[  853.777152] XFS (nvme0n1p3): Ending clean mount
:/# cd data0/
:/data0# ls
bin   compass  etc   lib    lost+found	mnt  proc  run	 srv  tmp   usr
boot  dev      home  lib64  media	opt  root  sbin  sys  upan  var
:/data0# 
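
Before rebooting, it is cleaner to unmount the repaired filesystem so nothing is left open (a sketch; reboot -f forces an immediate reboot from the emergency shell, though the exact reboot command available depends on the initramfs):

:/data0# cd /
:/# umount /data0
:/# reboot -f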

Reboot to verify:

CentOS Linux 7 (Core)
Kernel 3.10.0-862.11.6.el7.x86_64 on an x86_64

localhost login: 

Fixed. The system boots to the login prompt normally again.
