2011-07-28 16:47:24,312 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 9100, call addBlock(/dfs_operator.txt, DFSClient_344588298, null, null) from 192.168.2.15:36470: error: java.io.IOException: File /dfs_operator.txt could only be replicated to 0 nodes, instead of 1
java.io.IOException: File /dfs_operator.txt could only be replicated to 0 nodes, instead of 1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1448)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:690)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:342)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1350)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1346)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:742)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1344)
I spent a long time tracking this problem down; in the end it turned out to be a problem with /etc/hosts.
The key clue is in the SecondaryNameNode startup log:
STARTUP_MSG: Starting SecondaryNameNode
STARTUP_MSG: host = RJ15/127.0.0.1
STARTUP_MSG: args = []
STARTUP_MSG: version = 0.21.0
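The telling line is host = RJ15/127.0.0.1: the node's own hostname resolves to the loopback address instead of 192.168.2.15. A minimal sketch to check this yourself, assuming only a standard JDK on the node (the class name CheckLocalHost is made up for illustration):

    import java.net.InetAddress;

    public class CheckLocalHost {
        public static void main(String[] args) throws Exception {
            // Ask the JDK to resolve this machine's own hostname;
            // the result depends on the entries in /etc/hosts.
            InetAddress local = InetAddress.getLocalHost();
            System.out.println(local.getHostName() + " -> " + local.getHostAddress());
            // Output like "RJ15 -> 127.0.0.1" matches the STARTUP_MSG above:
            // the host identifies itself by the loopback address.
        }
    }

If the printed address is 127.0.0.1, other machines in the cluster have no way to reach a daemon that advertises itself under that name.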
Change the /etc/hosts configuration from:
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 RJ15
::1 localhost6.localdomain6 localhost6
192.168.2.15 master
192.168.2.82 slave1
192.168.2.102 slave2
192.168.2.69 slave3
to:
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 master
::1 localhost6.localdomain6 localhost6
192.168.2.15 master
192.168.2.82 slave1
192.168.2.102 slave2
192.168.2.69 slave3
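After changing /etc/hosts, the daemons need to be restarted to pick it up. One possible way to verify, assuming the stock scripts and tools shipped with Hadoop 0.21 (bin/stop-dfs.sh, bin/start-dfs.sh, hadoop dfsadmin -report): restart HDFS and check that the dfsadmin report lists the DataNodes as live; once it does, the addBlock call above should stop failing with "replicated to 0 nodes".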
That fixed it.
I'm not entirely sure why this change works; my guess is that with 127.0.0.1 mapped to the machine's own hostname RJ15, the NameNode identified itself by the loopback address, so the DataNodes never registered and there were 0 nodes available to hold the replica. If anyone can give a definitive explanation, please do. Thanks.