HDFS error

Error description: HDFS error: could only be replicated to 0 nodes, instead of 1, plus the assorted odd problems that follow from it (the full error output is included at the end of this post). Here is a record of the fix:


Step 1: stop the services, i.e. run ./stop-all.sh.

Step 2: delete the dfs/name and dfs/data directories. Where are they? They live under the directory where your HDFS data is stored; check the value of hadoop.tmp.dir in conf/core-site.xml to find it.

Step 3: format the namenode.

Step 4: restart the services, i.e. run ./start-all.sh (see the command sketch below).
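The four steps map onto commands roughly like this; a minimal sketch assuming a Hadoop 1.x install where HADOOP_HOME is set and hadoop.tmp.dir points at /app/hadoop/tmp (both paths are assumptions, substitute your own):

    # Step 1: stop all daemons
    cd $HADOOP_HOME
    bin/stop-all.sh

    # Step 2: remove the old namenode/datanode data
    # (this wipes everything under hadoop.tmp.dir, so copy out anything you still need)
    rm -rf /app/hadoop/tmp/dfs/name /app/hadoop/tmp/dfs/data

    # Step 3: reformat the namenode
    bin/hadoop namenode -format

    # Step 4: start everything again
    bin/start-all.sh

Note that this throws away whatever was stored in HDFS, which is usually acceptable on a test cluster but not on one holding real data.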


If that still does not work, what then? Check your disk space: the error can simply mean the disk is out of room. In that case, when you reach step 2, also change the value of hadoop.tmp.dir in conf/core-site.xml so that it points to a location with plenty of free space.
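A quick way to check is sketched below; the path being inspected and the alternative location are hypothetical:

    # How full is the partition that holds hadoop.tmp.dir?
    df -h /app/hadoop/tmp

    # If it is (nearly) full, point hadoop.tmp.dir at a roomier disk in
    # conf/core-site.xml before repeating step 2, for example:
    #
    #   <property>
    #     <name>hadoop.tmp.dir</name>
    #     <value>/data1/hadoop/tmp</value>
    #   </property>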

It may also be that the datanode itself has died! What to do then? See the earlier post: hadoop配置错误 (Hadoop configuration errors).
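Before going down that path, you can confirm whether the datanode is actually up; a rough check, assuming it is run from HADOOP_HOME on the datanode host:

    # A DataNode process should show up here
    jps

    # "Datanodes available: 0" from the namenode means no datanode has registered
    bin/hadoop dfsadmin -report

    # If the datanode is down, its log usually says why
    tail -n 100 logs/hadoop-*-datanode-*.log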

The full error output referenced above:

14/12/02 15:20:43 WARN hdfs.DFSClient: DataStreamer Exception:
org.apache.hadoop.ipc.RemoteException: java.io.IOException: File could only be replicated to 0 nodes, instead of 1
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1920)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:783)
        at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)

        at org.apache.hadoop.ipc.Client.call(Client.java:1113)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
        at com.sun.proxy.$Proxy3.addBlock(Unknown Source)
        at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
        at com.sun.proxy.$Proxy3.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3720)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3580)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2783)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:3023)

14/12/02 15:20:43 WARN hdfs.DFSClient: Error Recovery for null bad datanode[0] nodes == null
14/12/02 15:20:43 WARN hdfs.DFSClient: Could not get block locations. Source file could only be replicated to 0 nodes, instead of 1
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1920)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:783)
        at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)

        at org.apache.hadoop.ipc.Client.call(Client.java:1113)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
        at com.sun.proxy.$Proxy3.addBlock(Unknown Source)
        at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
        at com.sun.proxy.$Proxy3.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3720)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3580)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2783)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:3023)
