Hadoop Learning Notes (8): Problems with hosts File Configuration

Environment:

CentOS 6.3 x64

JDK 1.6.0 u38 x64

hadoop-1.0.4

I ran into a problem while setting up the cluster:

The logs on the namenode and jobtracker nodes kept reporting the following error:

java.io.IOException: could only be replicated to 0 nodes, instead of 1

along with errors about the jobtracker.info file not being found; I forgot to copy down the complete error message at the time.

The logs on the datanode and tasktracker nodes kept reporting the following error:

INFO org.apache.hadoop.ipc.Client: Retrying connect to server: .............

I had never hit this problem before when building a distributed cluster on virtual machines.

Possible causes of this kind of problem:

1. The firewall on the master node is not letting the required ports through (see the iptables sketch after this list).

2. The same problem can also appear after reformatting HDFS. Someone on an English-language forum replied along these lines: You'll probably find that even though the name node starts, it doesn't have any data nodes and is completely empty. Whenever Hadoop creates a new filesystem, it assigns a large random number to it to prevent you from mixing datanodes from different filesystems on accident. When you reformat the name node its FS has one ID, but your data nodes still have chunks of the old FS with a different ID and so will refuse to connect to the namenode. You need to make sure these are cleaned up before reformatting. You can do it just by deleting the datanode directory, although there's probably a more "official" way to do it. (A cleanup sketch follows this list.)
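
For cause 1, CentOS 6 uses iptables. A minimal sketch of opening the ports on the master node follows; the ports 9000 (NameNode RPC) and 9001 (JobTracker) are only assumptions, so substitute whatever ports your own core-site.xml and mapred-site.xml actually use:

# Check whether iptables is running on the master node
service iptables status

# Open the NameNode and JobTracker RPC ports (9000/9001 are assumed here;
# use the ports from fs.default.name and mapred.job.tracker in your config)
iptables -I INPUT -p tcp --dport 9000 -j ACCEPT
iptables -I INPUT -p tcp --dport 9001 -j ACCEPT
service iptables save

# On a throwaway test cluster it is simpler to just turn the firewall off
service iptables stop
chkconfig iptables off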
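
For cause 2, the reply above boils down to wiping each datanode's data directory before reformatting, so that the old filesystem ID is gone. A sketch, assuming dfs.data.dir points at /data/hadoop/dfs/data (use the path from your own hdfs-site.xml):

# Stop the whole cluster from the master node
stop-all.sh

# On every datanode, delete the old data directory
# (/data/hadoop/dfs/data is only an example; it is whatever dfs.data.dir is set to)
rm -rf /data/hadoop/dfs/data

# Reformat HDFS on the namenode, then restart the cluster
hadoop namenode -format
start-all.sh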

Neither of those two fixes solved my problem; mine turned out to be a third cause:

If the Hadoop configuration files core-site.xml and mapred-site.xml refer to the master node as hostname:port, then in the master node's /etc/hosts file the master's hostname must not appear on the 127.0.0.1 line. Otherwise the master only listens on the loopback address for the HDFS and MapReduce ports, so the other nodes cannot connect to it. In fact, no node should put its own hostname on that line. An example is shown below.
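
For reference, here is roughly what I mean; the hostname master, the slave names, and the 192.168.1.x addresses are only placeholders for your own machines:

# /etc/hosts on the master node -- wrong: the master's hostname sits on the
# 127.0.0.1 line, so the NameNode and JobTracker bind only to loopback
127.0.0.1     localhost master
192.168.1.100 master

# /etc/hosts on the master node -- right: only localhost on the loopback line
127.0.0.1     localhost
192.168.1.100 master
192.168.1.101 slave1
192.168.1.102 slave2

The hostname on those lines is the one referenced in the configuration files, for example:

<!-- core-site.xml -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://master:9000</value>
</property>

<!-- mapred-site.xml -->
<property>
  <name>mapred.job.tracker</name>
  <value>master:9001</value>
</property>

With the wrong hosts file, master resolves to 127.0.0.1 on the master itself, so the HDFS and MapReduce ports end up bound to loopback only, and the remote datanodes and tasktrackers get stuck in exactly the "Retrying connect to server" loop shown above.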
