Notes on Building a Three-Host Hadoop Cluster

 

Steps

  • Install a pseudo-distributed Hadoop environment on one host; it will serve as the master node.
  • Clone the disk to the other two slave hosts, change the hostname in /etc/sysconfig/network, and edit /etc/hosts so each IP maps to its hostname (see the /etc/hosts sketch after this list).
  • Edit conf/slaves, conf/masters, and the other configuration files.
  • Copy the master's Hadoop directory to the slave hosts:  sudo scp -r /usr/hadoop/hadoop-2.7.7 slave1:/usr/hadoop
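For reference, a minimal /etc/hosts along these lines goes on all three hosts. The master address matches the logs later in this post; the two slave addresses are placeholders and must be replaced with the real ones:

192.168.139.95   master
192.168.139.96   slave1    # placeholder address
192.168.139.97   slave2    # placeholder address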

Configuration files

core-site.xml


    
    
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/hadoop/hadoop-2.7.7/tmp</value>
    </property>
</configuration>
    

hdfs-site.xml



    
    
<configuration>
    <property>
        <name>dfs.namenode.http-address</name>
        <value>master:50070</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>slave1:50090</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/usr/hadoop/hadoop-2.7.7/dfs/namenode</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/usr/hadoop/hadoop-2.7.7/dfs/datanode</value>
    </property>
</configuration>
    

mapred-site.xml

mv mapred-site.xml.template mapred-site.xml

    
    
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>

yarn-site.xml


    
    
<configuration>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>master</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>
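After editing the four files, a quick sanity check (a minimal sketch, run from /usr/hadoop/hadoop-2.7.7 on master) is to ask Hadoop which values it actually loaded:

bin/hdfs getconf -confKey fs.defaultFS      # expect hdfs://master:9000
bin/hdfs getconf -confKey dfs.replication   # expect 3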
    

masters

Create a new file named masters; it specifies the host that runs the SecondaryNameNode:

slave1

slaves

slave1
slave2
  • Create the tmp directory (do not use sudo mkdir, otherwise the owner will be wrong):
mkdir tmp 
  • Copy the installation to the other hosts:
scp -r /usr/hadoop/hadoop-2.7.7 slave1:/usr/hadoop/
scp -r /usr/hadoop/hadoop-2.7.7 slave2:/usr/hadoop/
  • Start the daemons on master (the NameNode needs a one-time format first; see the sketch after this list):
./sbin/start-dfs.sh
./sbin/start-yarn.sh
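Before the very first start-dfs.sh, HDFS has to be formatted once on master; a minimal sketch, assuming the directory layout configured in hdfs-site.xml above:

cd /usr/hadoop/hadoop-2.7.7
mkdir -p dfs/namenode dfs/datanode   # directories referenced in hdfs-site.xml
bin/hdfs namenode -format            # one-time format of the NameNode metadata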

jps on master:  NameNode, ResourceManager, Jps

jps on slave1:  DataNode, NodeManager, SecondaryNameNode, Jps

jps on slave2:  DataNode, NodeManager, Jps
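Besides jps, a way to confirm that both DataNodes actually registered with the NameNode is the report command on master (a sketch):

bin/hdfs dfsadmin -report    # should list 2 live datanodes (slave1 and slave2)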

 

Troubleshooting

SecondaryNameNode does not start

  • It needs to be started separately: sbin/hadoop-daemon.sh start secondarynamenode
  • Or use start-dfs.sh and start-yarn.sh instead of start-all.sh.
  • The error "checkpoint directory does not exist or is not accessible" appeared; the logs showed the daemon had no permission to read the directory, because it had originally been created with sudo mkdir. Giving the hadoop user ownership of the directory fixes it (see the sketch after this list): chown -R sunsi /usr/hadoop/hadoop-2.7.7/tmp
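Putting the two fixes together, a sketch of the recovery on slave1 (assuming the hadoop user is sunsi, as in the shell prompts below):

sudo chown -R sunsi /usr/hadoop/hadoop-2.7.7/tmp    # hand the checkpoint directory back to the hadoop user
sbin/hadoop-daemon.sh start secondarynamenode       # start the daemon by hand
jps                                                 # SecondaryNameNode should now be listed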

Uploading a file to HDFS fails with "Name node is in safe mode"

bin/hadoop dfsadmin -safemode leave
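In Hadoop 2.x the hadoop dfsadmin form is deprecated in favour of hdfs dfsadmin; it is also worth checking the state before forcing the NameNode out of safe mode. A sketch:

bin/hdfs dfsadmin -safemode get      # shows whether safe mode is ON or OFF
bin/hdfs dfsadmin -safemode leave    # same effect as the deprecated command above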

The master:50070 web UI shows no data, and uploading to HDFS fails with: There are 0 datanode(s) running and no node(s) are excluded in this operation

The DataNode log shows: Retrying connect to server: master/192.168.139.95:9000. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS), i.e. the DataNode cannot establish a connection to the NameNode.

Pinging the NameNode host from a DataNode works, but telnet to port 9000 returns "No route to host" at first and "Connection refused" later:

[sunsi@slave1 ~]$ telnet 192.168.139.95 9000
Trying 192.168.139.95...
telnet: connect to address 192.168.139.95: No route to host
[sunsi@slave1 ~]$ telnet 192.168.139.95 9000
Trying 192.168.139.95...
telnet: connect to address 192.168.139.95: Connection refused

Stop the firewall:

[sunsi@master usr]$ sudo service iptables stop
iptables: Setting chains to policy ACCEPT: filter          [  OK  ]
iptables: Flushing firewall rules:                         [  OK  ]
iptables: Unloading modules:                               [  OK  ]

Disable the firewall permanently (takes effect on the next boot):

[sunsi@master usr]$ sudo chkconfig iptables off

Check whether anything is listening on port 9000; no output means the port is not open:

[sunsi@master usr]$ lsof -i:9000

To open port 9000 through the firewall, edit /etc/sysconfig/iptables and add:

-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 9000 -j ACCEPT
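If the firewall is to stay enabled, the edited rules only take effect after reloading them; a sketch for the CentOS 6 style init script used here:

sudo service iptables restart    # reload the rules from /etc/sysconfig/iptables
sudo service iptables status     # the ACCEPT rule for dport 9000 should now be listed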

Restart DFS and YARN and check the listening ports again. Previously the NameNode was listening on 127.0.0.1:9000, which is reachable only from the local machine:

[sunsi@master usr]$ netstat -tpnl
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address               Foreign Address             State       PID/Program name   
tcp        0      0 0.0.0.0:111                 0.0.0.0:*                   LISTEN      -                   
tcp        0      0 192.168.139.95:50070        0.0.0.0:*                   LISTEN      5799/java           
tcp        0      0 0.0.0.0:22                  0.0.0.0:*                   LISTEN      -                   
tcp        0      0 0.0.0.0:631                 0.0.0.0:*                   LISTEN      -                   
tcp        0      0 127.0.0.1:25                0.0.0.0:*                   LISTEN      -                   
tcp        0      0 0.0.0.0:58821               0.0.0.0:*                   LISTEN      -                   
tcp        0      0 192.168.139.95:9000         0.0.0.0:*                   LISTEN      5799/java           
tcp        0      0 :::111                      :::*                        LISTEN      -                   
tcp        0      0 :::22                       :::*                        LISTEN      -                   
tcp        0      0 ::ffff:192.168.139.95:8088  :::*                        LISTEN      6116/java           
tcp        0      0 ::1:25                      :::*                        LISTEN      -                   
tcp        0      0 ::ffff:192.168.139.95:8030  :::*                        LISTEN      6116/java           
tcp        0      0 ::ffff:192.168.139.95:8031  :::*                        LISTEN      6116/java           
tcp        0      0 ::ffff:192.168.139.95:8032  :::*                        LISTEN      6116/java           
tcp        0      0 ::ffff:192.168.139.95:8033  :::*                        LISTEN      6116/java           
tcp        0      0 :::40714                    :::*                        LISTEN      -      
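About that earlier 127.0.0.1:9000 binding: it usually means the hostname master resolved to a loopback address on the master host itself, which is worth ruling out in /etc/hosts (a sketch):

grep master /etc/hosts
# a line such as "127.0.0.1   master" makes the NameNode bind to loopback only;
# master must resolve to the LAN address (192.168.139.95 here) for remote DataNodes to reach port 9000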

Uploading files to HDFS fails with:

NoRouteToHostException: No route, as well as could only be replicated to 0 nodes, instead of 1

Tutorials online all say this is caused by the firewall not being stopped:

service iptables stop

But the "permanent" disable command had already been run earlier:

[sunsi@master usr]$ sudo chkconfig iptables off

That permanent command is not enough on its own, though: chkconfig only changes what happens on the next boot, so the already-running firewall keeps blocking traffic until service iptables stop is run, and once it was, the errors went away. Before that I had even wiped the contents of dfs/namenode and dfs/datanode and re-run hdfs namenode -format (to avoid a namespaceID mismatch between NameNode and DataNode), which did not help; it was the firewall all along.
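A quick way to verify both states, i.e. whether the boot-time disable took and whether the firewall is loaded right now (a sketch):

sudo chkconfig --list iptables    # every runlevel should show "off" once the permanent disable took
sudo service iptables status      # reports that the firewall is not running when it is really stopped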

 

 

 
