hadoop Client: Retrying connect to server

Running hdfs namenode -format on the CentOS01 machine kept failing with the following errors:

org.apache.hadoop.ipc.Client: Retrying connect to server: CentOS02/192.168.202.102:8485. Already tried 0 time(s).
org.apache.hadoop.ipc.Client: Retrying connect to server: CentOS03/192.168.202.103:8485. Already tried 0 time(s).
org.apache.hadoop.ipc.Client: Retrying connect to server: CentOS04/192.168.202.104:8485. Already tried 0 time(s).
org.apache.hadoop.ipc.Client: Retrying connect to server: CentOS05/192.168.202.105:8485. Already tried 0 time(s).
org.apache.hadoop.ipc.Client: Retrying connect to server: CentOS06/192.168.202.106:8485. Already tried 0 time(s).
org.apache.hadoop.ipc.Client: Retrying connect to server: CentOS07/192.168.202.107:8485. Already tried 0 time(s).


Another symptom: every time CentOS01 was booted, someone had to log in to Linux locally first before SecureCRT could connect to it. Connecting with the client right after boot, without logging in, always failed.

/etc/hosts and the related network configuration looked fine, and CentOS01 could ping CentOS02 and the other machines. So why were the connections failing?



It turned out the root cause of all of this was that "Available to all users" was not checked for the network connection (in the NetworkManager connection settings on CentOS). Without that option, the connection is a per-user one that only comes up after a user logs in, so right after boot the machine has no network at all. That explains both symptoms: SecureCRT could not connect until someone logged in locally, and the JournalNodes were unreachable on port 8485 whenever the other nodes' connections had not been brought up yet. Checking "Available to all users" makes it a system connection that starts at boot.
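Equivalently, on CentOS the same effect can be had without the GUI by making the interface a system connection that comes up at boot. A minimal sketch of the ifcfg file, assuming the interface is eth0 and CentOS01's address follows the numbering in the log above (both are assumptions, not from the original post):

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth0
# (interface name eth0 is an assumption)
DEVICE=eth0
ONBOOT=yes               # bring the interface up at boot, before any user logs in
NM_CONTROLLED=yes        # let NetworkManager manage it as a system connection
BOOTPROTO=static
IPADDR=192.168.202.101   # assumed for CentOS01 from the .102-.107 numbering in the log
NETMASK=255.255.255.0
```

After editing the file, restart the network service (service network restart) or reboot, and the machine should be reachable over SSH immediately after boot, with no local login required.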



