java.net.ConnectException: Call From ubuntu/192.168.72.131 to localhost:9000 failed on connection exception

The error message was:

ERROR tool.ImportTool: Encountered IOException running import job: java.net.ConnectException: Call From ubuntu/192.168.72.131 to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)

This error appeared after I had started the Hadoop cluster and ran ./hadoop/bin/hdfs dfs -ls to list files on HDFS.
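Connection refused almost always means nothing is listening on the target address, so a useful first check is whether the NameNode process is running at all and which interface it is bound to. A minimal sanity check, assuming the daemons run as the current user (the port comes from the error message above):

    # a healthy HDFS master node should show a NameNode entry here
    jps

    # see whether anything is listening on port 9000, and on which address
    netstat -tlnp | grep 9000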

Solution:

First, switch to root and edit /etc/hosts: comment out the 127.0.0.1 localhost and 127.0.1.1 entries, leaving only the real IP addresses and hostnames:

#127.0.0.1      localhost
#127.0.1.1      ubuntu3

192.168.72.131  ubuntu
192.168.72.132  ubuntu2
192.168.72.133  ubuntu3

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
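With the file saved, it is worth confirming that the hostname now resolves to the LAN address rather than the loopback; a quick check using the names from the file above:

    # should print 192.168.72.131, not 127.0.0.1 or 127.0.1.1
    getent hosts ubuntu

    # confirm the machine is reachable under its cluster name
    ping -c 1 ubuntu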


After rebooting, I started the Hadoop cluster again, but the same command still failed with the same error.
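For reference, the restart between attempts was roughly the following (assuming HADOOP_HOME points at the install directory; an HA setup like the one below also involves ZooKeeper and the failover controllers, which are left out here):

    # stop and restart the HDFS daemons across the cluster
    $HADOOP_HOME/sbin/stop-dfs.sh
    $HADOOP_HOME/sbin/start-dfs.sh

    # retry the command that failed
    $HADOOP_HOME/bin/hdfs dfs -ls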

So I kept digging. The error message mentions localhost:9000, which made me think about where that address could be configured. Among the Hadoop configuration files, core-site.xml was the one on my machines that carried the localhost:9000 setting.
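A quick way to find which configuration file mentions the offending address (the path assumes Hadoop's standard etc/hadoop layout):

    # search all Hadoop config files for the address from the error message
    grep -r "localhost:9000" $HADOOP_HOME/etc/hadoop/

My core-site.xml looked like this: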


        
        
<configuration>
        <property>
                <name>fs.defaultFS</name>
                <value>hdfs://ns</value>
        </property>
        <property>
                <name>ha.zookeeper.quorum</name>
                <value>ubuntu:2181,ubuntu2:2181,ubuntu3:2181</value>
        </property>
        <property>
                <name>hadoop.tmp.dir</name>
                <value>/home/xiaoye/hadoop/tmp</value>
        </property>
</configuration>

So I went ahead and commented out that last property. I made the same change on the other machines in the cluster, and after that the command ran successfully.
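For reference, disabling a property in core-site.xml is done with an XML comment, along these lines (showing the hadoop.tmp.dir entry from above):

    <!-- disabled while troubleshooting the connection error
    <property>
            <name>hadoop.tmp.dir</name>
            <value>/home/xiaoye/hadoop/tmp</value>
    </property>
    -->

Note that removing hadoop.tmp.dir makes Hadoop fall back to its built-in default under /tmp, so this is a workaround rather than a full explanation of the root cause.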
