Not sure whether it was a power outage or something else, but HBase suddenly broke today. Checking the Hadoop Secondary NameNode log turned up this:
org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Checkpoint not created. Name node is in safe mode.
The reported blocks 3774 needs additional 409 blocks to reach the threshold 0.9990 of total blocks 4188. Safe mode will be turned off automatically.
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.rollEditLog(FSNamesystem.java:5066)
at org.apache.hadoop.hdfs.server.namenode.NameNode.rollEditLog(NameNode.java:865)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:416)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
at org.apache.hadoop.ipc.Client.call(Client.java:1107)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
at $Proxy4.rollEditLog(Unknown Source)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:460)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doWork(SecondaryNameNode.java:333)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.run(SecondaryNameNode.java:297)
at java.lang.Thread.run(Thread.java:636)
Some file blocks are missing; most likely a DataNode failed while the data was being loaded, so there is no way to recover those blocks. The NameNode also cannot leave safe mode on its own: it needs roughly 0.999 × 4188 ≈ 4183 blocks to be reported, but only 3774 have been. Let's check the filesystem's health:
bin/hadoop fsck /
********************************
CORRUPT FILES: 413
MISSING BLOCKS: 413
MISSING SIZE: 1029442 B
CORRUPT BLOCKS: 413
********************************
Ouch, 413 blocks are corrupt. The filesystem is sick, so at this point we need one of the following:
bin/hadoop fsck / -move (moves the corrupted files to /lost+found)
or bin/hadoop fsck / -delete (deletes the corrupted files outright)
Run bin/hadoop fsck / again afterwards and the filesystem should report healthy. The next time you restart DFS (or all of Hadoop), just wait for the NameNode to leave safe mode. A fuller sequence is sketched below.
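The whole repair pass might look like this. It is only a sketch: -files, -blocks and -locations are standard fsck options for listing which files own the bad blocks, and dfsadmin -safemode leave is an extra, last-resort step (not part of the recipe above) for forcing the NameNode out of safe mode if it still refuses to leave once the corrupt files are gone.
# See exactly which files own the missing/corrupt blocks before deciding what to do
bin/hadoop fsck / -files -blocks -locations
# Either salvage the surviving blocks into /lost+found ...
bin/hadoop fsck / -move
# ... or drop the broken files outright
bin/hadoop fsck / -delete
# Verify the filesystem now reports HEALTHY
bin/hadoop fsck /
# Last resort: force the NameNode out of safe mode by hand
bin/hadoop dfsadmin -safemode leave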
A more detailed write-up is here: http://blog.163.com/jiayouweijiewj@126/blog/static/17123217720101186421222/
Separately, if HBase itself also complains about missing file blocks, the following directories can be deleted:
Name       Type   Modified            Permission   Owner      Group
-ROOT-     dir    2012-04-13 17:07    rwxr-xr-x    goldsoft   supergroup
.META.     dir    2012-04-13 17:07    rwxr-xr-x    goldsoft   supergroup
.logs      dir    2012-04-13 17:12    rwxr-xr-x    goldsoft   supergroup
.oldlogs   dir    2012-04-13 17:11    rwxr-xr-x    goldsoft   supergroup
Of these four directories, delete everything except .oldlogs, using commands like ./hadoop fs -rmr /hbase/-ROOT- (see the sketch below).
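Concretely, assuming the HBase root directory in HDFS is /hbase as in the listing above, and running from Hadoop's bin directory, the deletions would be (/hbase/.oldlogs is left alone):
# Remove the catalog and write-ahead-log directories; HBase should recreate them at startup
./hadoop fs -rmr /hbase/-ROOT-
./hadoop fs -rmr /hbase/.META.
./hadoop fs -rmr /hbase/.logs
# /hbase/.oldlogs stays in place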
Start HBase and watch its log. Once a line like the following appears:
2012-04-13 17:08:43,901 INFO org.apache.hadoop.hbase.master.HMaster: Master has completed initialization
go to ${hbasehome}/bin and run ./hbase org.jruby.Main add_table.rb <hbase root directory>/<table name> for each table that needs to be restored, as in the example below, and its entry will be rebuilt in .META..
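For example, for a hypothetical table named my_table (a placeholder; substitute the real table name, and the actual HBase root directory if it is not /hbase):
cd ${hbasehome}/bin
# add_table.rb reads the .regioninfo files under the table directory and re-inserts the regions into .META.
./hbase org.jruby.Main add_table.rb /hbase/my_table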
To guard against data loss, when loading data it is best to delete the source data only after the load has been confirmed successful; only then can we be sure nothing is lost.
Reference: http://www.myexception.cn/open-source/419418.html