[Big Data Error] start-dfs.sh fails: hadoop: ssh: Could not resolve hostname hadoop: Name or service not known

Scenario:

Hadoop version: 2.6.0

Linux version: CentOS 7.3
Starting pseudo-distributed Hadoop 2.6.0 fails with:

[hadoop@h2_6 ~]$ start-dfs.sh
19/03/30 18:02:06 WARN hdfs.DFSUtil: Namenode for null remains unresolved for ID null.  Check your hdfs-site.xml file to ensure namenodes are configured properly.
Starting namenodes on [hadoop]
hadoop: ssh: Could not resolve hostname hadoop: Name or service not known
The authenticity of host 'localhost (::1%1)' can't be established.
ECDSA key fingerprint is SHA256:Qtm1VfRTAGZrGdXt3Qcwp8IwFAh2VPObwfEPunM+NE8.
ECDSA key fingerprint is MD5:1a:c5:da:2c:de:20:a2:de:5a:7d:1f:58:9c:87:1e:f9.
Are you sure you want to continue connecting (yes/no)? yes
localhost: Warning: Permanently added 'localhost' (ECDSA) to the list of known hosts.
localhost: starting datanode, logging to /opt/hadoop-2.6.0/logs/hadoop-hadoop-datanode-h2_6.out
Starting secondary namenodes [0.0.0.0]
The authenticity of host '0.0.0.0 (0.0.0.0)' can't be established.
ECDSA key fingerprint is SHA256:Qtm1VfRTAGZrGdXt3Qcwp8IwFAh2VPObwfEPunM+NE8.
ECDSA key fingerprint is MD5:1a:c5:da:2c:de:20:a2:de:5a:7d:1f:58:9c:87:1e:f9.
Are you sure you want to continue connecting (yes/no)? yes
0.0.0.0: Warning: Permanently added '0.0.0.0' (ECDSA) to the list of known hosts.
0.0.0.0: starting secondarynamenode, logging to /opt/hadoop-2.6.0/logs/hadoop-hadoop-secondarynamenode-h2_6.out

Solution:
1. Check the hostname and the /etc/hosts file
# hostname
h2_6

# cat /etc/hosts
192.168.1.65  h2_6
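An alternative to editing core-site.xml (step 3 below) is to make the name resolvable: alias `hadoop` to the machine's IP in /etc/hosts. A minimal sketch, run against a throwaway copy so the real file is untouched; the IP is the one from the /etc/hosts shown above:

```shell
# Sketch: make the name "hadoop" resolvable by aliasing it to the host's IP.
# Done on a throwaway copy; on a real system you would edit /etc/hosts itself as root.
cp /etc/hosts /tmp/hosts.demo
echo "192.168.1.65  h2_6 hadoop" >> /tmp/hosts.demo
grep "hadoop" /tmp/hosts.demo
```

Either route works; the article below takes the cleaner one of making the config match the real hostname instead of adding an alias.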

2. Check the core-site.xml file
# cat /opt/hadoop-2.6.0/etc/hadoop/core-site.xml


<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop:8020</value>
    </property>

    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/hadoop-2.6.0/data/tmp</value>
    </property>
</configuration>

The `hadoop` in hdfs://hadoop:8020 (the value of fs.defaultFS) is a hostname, and it appears in neither `hostname` nor /etc/hosts, which is exactly why startup could not resolve it.
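The mismatch can also be surfaced mechanically by pulling the host out of fs.defaultFS and comparing it with `hostname`. A sketch against a sample file so it is self-contained; on a real install point `conf` at /opt/hadoop-2.6.0/etc/hadoop/core-site.xml instead:

```shell
# Write a sample core-site.xml (stand-in for the real one).
conf=/tmp/core-site-sample.xml
cat > "$conf" <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop:8020</value>
  </property>
</configuration>
EOF
# Extract the host part of the hdfs:// URI.
fs_host=$(sed -n 's|.*<value>hdfs://\([^:<]*\).*|\1|p' "$conf")
echo "fs.defaultFS host: $fs_host"     # prints: fs.defaultFS host: hadoop
# Flag the mismatch that causes the "Could not resolve hostname" error.
[ "$fs_host" = "$(hostname)" ] || echo "mismatch: $fs_host vs $(hostname)"
```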

3. Edit core-site.xml and replace the hostname
# vi /opt/hadoop-2.6.0/etc/hadoop/core-site.xml


<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://h2_6:8020</value>
    </property>

    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/hadoop-2.6.0/data/tmp</value>
    </property>
</configuration>
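After the edit it is worth confirming that no other file in the config directory (hdfs-site.xml, slaves, and so on) still carries the old name. A sketch against a throwaway directory; on the real install point `confdir` at /opt/hadoop-2.6.0/etc/hadoop:

```shell
# Build a throwaway config dir with one stale reference, then scan it.
confdir=/tmp/hadoop-conf-demo
mkdir -p "$confdir"
printf '<value>hdfs://hadoop:8020</value>\n' > "$confdir/hdfs-site.xml"
printf '<value>hdfs://h2_6:8020</value>\n'  > "$confdir/core-site.xml"
# Lists every file that still uses the old hostname.
grep -rl 'hdfs://hadoop:' "$confdir"
```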

4. Stop Hadoop and reboot the system (restarting just the Hadoop daemons is normally enough after a core-site.xml change; the reboot here is only for a clean slate)
# stop-all.sh

# reboot

5. Start Hadoop again
# start-all.sh

6. Check the Hadoop daemons
# jps
9703 SecondaryNameNode
9429 NameNode
9959 NodeManager
10267 Jps
9526 DataNode

7. Verify that hadoop commands work
# hadoop fs -ls /
