Hadoop exception when building a distributed cluster on virtual machines: org.apache.hadoop.ipc.Client: Retrying connect to server

Problem 1: check the HDFS storage status

hadoop dfsadmin -report
15/09/01 19:33:20 INFO security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000
15/09/01 19:33:20 WARN conf.Configuration: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
Configured Capacity: 0 (0 KB)
Present Capacity: 0 (0 KB)
DFS Remaining: 0 (0 KB)
DFS Used: 0 (0 KB)
DFS Used%: ?%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Datanodes available: 0 (0 total, 0 dead)

Problem 2 appeared at the same time: the report shows no datanodes available.
Logging in to a datanode and checking its log shows:
2015-09-01 19:34:19,066 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopm/192.168.***.***:9000. Already tried 8 time(s).
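A quick way to follow that log while the datanode retries (the file name below assumes the Hadoop 1.x default log layout; adjust HADOOP_HOME and the user/host parts to your installation):

tail -f $HADOOP_HOME/logs/hadoop-$USER-datanode-$(hostname).log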

Tracking down the cause:

1. A firewall problem (turn off the firewall on the namenode; if it is left running, the retries can be caused simply by the namenode's firewall dropping the datanodes' connections).

service iptables stop
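Note that service iptables stop only lasts until the next reboot. On CentOS 6-era systems the firewall can be kept off permanently, and the current state verified, roughly like this:

chkconfig iptables off     # keep iptables disabled across reboots
service iptables status    # should report that the firewall is not running

Do this on every node, since a firewall on either end can drop the connection.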

2. The same problem can also appear after reformatting HDFS.

Someone on an English-language forum replied as follows: You'll probably find that even though the name node starts, it doesn't have any data nodes and is completely empty. Whenever hadoop creates a new filesystem, it assigns a large random number to it to prevent you from mixing datanodes from different filesystems on accident. When you reformat the name node its FS has one ID, but your data nodes still have chunks of the old FS with a different ID and so will refuse to connect to the namenode. You need to make sure these are cleaned up before reformatting. You can do it just by deleting the datanode directory, although there's probably a more "official" way to do it.
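A minimal sketch of that cleanup, assuming the Hadoop 1.x default dfs.data.dir under /tmp (check dfs.data.dir in your hdfs-site.xml and substitute the real path before deleting anything):

# on every datanode: stop the daemon and wipe the stale data directory
hadoop-daemon.sh stop datanode
rm -rf /tmp/hadoop-$USER/dfs/data     # default dfs.data.dir; this destroys the old blocks
# on the namenode: reformat, then bring HDFS back up
hadoop namenode -format
start-dfs.sh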
Neither of those two fixes solved my problem; mine turned out to be a third cause:
If the Hadoop configuration files core-site.xml and mapred-site.xml refer to the master node as hostname:port, then the 127.0.0.1 line of the master's /etc/hosts must not contain the master's hostname. Otherwise the master listens for the HDFS and MapReduce ports on the loopback address only, and the other nodes cannot connect. In fact, no node should list its own hostname on that 127.0.0.1 line.
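As a concrete sketch (the hostname hadoopm is taken from the log above; the address 192.168.1.100 is made up, substitute your own), the master's /etc/hosts should look like this:

# wrong: the master's hostname resolves to loopback, so hadoopm:9000 binds to 127.0.0.1 only
# 127.0.0.1   localhost hadoopm
# right: the loopback line carries only localhost; the hostname maps to the LAN address
127.0.0.1       localhost
192.168.1.100   hadoopm

with the matching Hadoop 1.x entry in core-site.xml:

<property>
  <name>fs.default.name</name>
  <value>hdfs://hadoopm:9000</value>
</property>

After restarting, you can confirm on the master which address port 9000 is actually bound to; it should show the LAN address (or 0.0.0.0), not 127.0.0.1:

netstat -tlnp | grep 9000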


