Configuring Hadoop in Fully Distributed Mode

Select machine sam01 as the master node, then configure the distributed cluster files.

1. Go to the Hadoop configuration directory /usr/local/hadoop/etc/hadoop (Hadoop is installed under /usr/local here)

2. Configure core-site.xml

<configuration>
    <!-- URI of the default file system (the NameNode) -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://sam01:8020</value>
    </property>
    <!-- Base directory for Hadoop's temporary files -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/local/hadoop/tmp</value>
    </property>
</configuration>
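The two values above can be sanity-checked before they are deployed. The sketch below is a minimal example (the XML is embedded as a string rather than read from disk) that parses a Hadoop *-site.xml fragment and returns its properties as a dict:

```python
# Parse a Hadoop *-site.xml fragment and read its properties back out.
# The embedded string mirrors the core-site.xml shown above.
import xml.etree.ElementTree as ET

CORE_SITE = """
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://sam01:8020</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/local/hadoop/tmp</value>
    </property>
</configuration>
"""

def load_props(xml_text):
    """Return {name: value} for every <property> element."""
    root = ET.fromstring(xml_text)
    return {p.findtext("name"): p.findtext("value")
            for p in root.iter("property")}

props = load_props(CORE_SITE)
print(props["fs.defaultFS"])  # hdfs://sam01:8020
```

The same helper works for any of the *-site.xml files in this guide, since they all share the configuration/property/name/value layout.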

3. Configure hdfs-site.xml

<configuration>
    <!-- Local directory for NameNode metadata -->
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file://${hadoop.tmp.dir}/dfs/name</value>
    </property>
    <!-- Local directory for DataNode block storage -->
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file://${hadoop.tmp.dir}/dfs/data</value>
    </property>
    <!-- Number of block replicas -->
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <!-- Block size in bytes (128 MB) -->
    <property>
        <name>dfs.blocksize</name>
        <value>134217728</value>
    </property>
    <!-- HTTP address of the SecondaryNameNode -->
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>sam02:50090</value>
    </property>
    <!-- HTTP address of the NameNode web UI -->
    <property>
        <name>dfs.namenode.http-address</name>
        <value>sam01:50070</value>
    </property>
</configuration>
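dfs.blocksize is specified in bytes, and the raw number is easy to mistype. A quick check confirms that 134217728 is exactly 128 MB, the default HDFS block size in Hadoop 2.x:

```python
# dfs.blocksize is given in bytes; verify it is exactly 128 MB.
blocksize = 134217728
megabytes = blocksize // (1024 * 1024)  # 1 MB = 1024 * 1024 bytes
print(megabytes)  # 128
```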

4. Configure mapred-site.xml

Only the template mapred-site.xml.template ships by default, so first copy it to mapred-site.xml:

cp mapred-site.xml.template mapred-site.xml

Then tell MapReduce to run its jobs on YARN:

<configuration>
    <!-- Run MapReduce jobs on the YARN framework -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>

5. Configure yarn-site.xml

<configuration>
    <!-- Auxiliary service that serves shuffle data to reducers -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <!-- Hostname of the ResourceManager -->
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>sam01</value>
    </property>
    <!-- Implementation class of the shuffle service -->
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <!-- Client RPC address of the ResourceManager -->
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>sam01:8032</value>
    </property>
    <!-- Scheduler address used by ApplicationMasters -->
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>sam01:8030</value>
    </property>
    <!-- Address that NodeManagers report to -->
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>sam01:8031</value>
    </property>
    <!-- Admin RPC address of the ResourceManager -->
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>sam01:8033</value>
    </property>
    <!-- ResourceManager web UI -->
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>sam01:8088</value>
    </property>
</configuration>

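All five ResourceManager addresses live on sam01, so the ports must not clash. Collecting them in one place (a sketch mirroring the values above) makes a duplicate easy to spot:

```python
# ResourceManager endpoints from yarn-site.xml; every service must
# bind a distinct port on sam01.
rm_endpoints = {
    "yarn.resourcemanager.address":                  "sam01:8032",
    "yarn.resourcemanager.scheduler.address":        "sam01:8030",
    "yarn.resourcemanager.resource-tracker.address": "sam01:8031",
    "yarn.resourcemanager.admin.address":            "sam01:8033",
    "yarn.resourcemanager.webapp.address":           "sam01:8088",
}
ports = [int(addr.split(":")[1]) for addr in rm_endpoints.values()]
print(len(ports) == len(set(ports)))  # True: no port is reused
```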
6. Configure hadoop-env.sh

# The java implementation to use.
export JAVA_HOME=/usr/local/jdk


7. Configure yarn-env.sh

 #echo "run java in $JAVA_HOME"
  JAVA_HOME=/usr/local/jdk


8. Configure the slaves file, which lists the hostnames of the nodes that run the DataNode daemon

sam01
sam02
sam03
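Note that sam01 appears in the list too, so the master node also runs a DataNode. The file format is strict: one hostname per line with no extra whitespace or blank lines. Generating the content programmatically (a small sketch) keeps it clean:

```python
# Build the slaves file content: one DataNode hostname per line.
datanodes = ["sam01", "sam02", "sam03"]
slaves_text = "\n".join(datanodes) + "\n"
print(slaves_text, end="")
```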

9. Sync the Hadoop configuration to the remaining nodes

cd /usr/local
scp -r hadoop/ sam02:$PWD
scp -r hadoop/ sam03:$PWD
