Hadoop Fully Distributed Installation

1. Install the JDK and Hadoop on all three virtual machines

Copy the installation to the other nodes, for example:

scp -r /opt/hadoop [email protected]:/opt

After copying, reload the environment on each node with `source /etc/profile`.
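As a sketch, these are the lines to append to /etc/profile on every node before sourcing it (the JDK and Hadoop install paths are assumptions taken from this guide; adjust them to your machines):

```shell
# Append to /etc/profile on each node, then run `source /etc/profile`.
# Paths below follow this guide's layout and are assumptions.
export JAVA_HOME=/bigdata/jdk-1.8/jdk1.8.0_202
export HADOOP_HOME=/opt/hadoop
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
```

Putting both `bin` and `sbin` of Hadoop on the PATH lets you run `hdfs`, `hadoop-daemon.sh`, and the `start-*.sh` scripts from any directory.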

2. Deployment

Cluster plan:

|      | hadoop102 / hadoop1 | hadoop103 / hadoop2          | hadoop104 / hadoop3         |
|------|---------------------|------------------------------|-----------------------------|
| HDFS | NameNode, DataNode  | DataNode                     | SecondaryNameNode, DataNode |
| YARN | NodeManager         | ResourceManager, NodeManager | NodeManager                 |

Reference: https://blog.csdn.net/weixin_44198965/article/details/89603788
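For the hostnames in the plan to resolve, every node needs matching /etc/hosts entries. A sketch (the IPs here are examples, not taken from this guide; use your own addresses):

```
# /etc/hosts on every node -- example IPs, replace with yours
192.168.13.131 hadoop1
192.168.13.132 hadoop2
192.168.13.133 hadoop3
```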

 

2.1 Configure core-site.xml

[atguigu@hadoop102 hadoop]$ vi core-site.xml

Add the following properties to the file:

<property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop1:9000</value>
</property>
<property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/hadoop/data/tmp</value>
</property>

 

2.2 Configure hadoop-env.sh (vim hadoop-env.sh)

export JAVA_HOME=/bigdata/jdk-1.8/jdk1.8.0_202

2.3 Configure hdfs-site.xml (vim hdfs-site.xml)

<property>
    <name>dfs.replication</name>
    <value>3</value>
</property>
<property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>hadoop3:50090</value>
</property>

 

2.4 YARN configuration files

Configure yarn-env.sh:

[atguigu@hadoop102 hadoop]$ vi yarn-env.sh

export JAVA_HOME=/bigdata/jdk-1.8/jdk1.8.0_202

Configure yarn-site.xml:

[atguigu@hadoop102 hadoop]$ vi yarn-site.xml

Add the following properties to the file:

<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>
<property>
    <name>yarn.resourcemanager.hostname</name>
    <value>hadoop2</value>
</property>

MapReduce configuration files

Configure mapred-env.sh:

[atguigu@hadoop102 hadoop]$ vi mapred-env.sh

export JAVA_HOME=/bigdata/jdk-1.8/jdk1.8.0_202

Configure mapred-site.xml:

[atguigu@hadoop102 hadoop]$ cp mapred-site.xml.template mapred-site.xml
[atguigu@hadoop102 hadoop]$ vi mapred-site.xml

Add the following property to the file:

<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
</property>

2.5 Configure the slaves file (/hadoop-2.7.2/etc/hadoop/slaves)

hadoop1
hadoop2
hadoop3

Note: this file must not contain extra spaces or blank lines.
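One way to write the worker list in one step (sketched here into the current directory for illustration; in practice redirect to /hadoop-2.7.2/etc/hadoop/slaves):

```shell
# Generate the slaves file with exactly three hostnames and
# no blank lines or trailing spaces, which Hadoop does not tolerate.
cat > slaves <<'EOF'
hadoop1
hadoop2
hadoop3
EOF
```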

3. Distribute the configuration to the other virtual machines
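The distribution step can be sketched as a loop over the remaining nodes (hostnames, user, and paths are assumptions based on this guide; remove the `echo` to actually copy):

```shell
# Dry-run sketch: print the scp commands that would push the finished
# Hadoop directory to the other nodes. Drop `echo` to perform the copy.
distribute_conf() {
  for host in hadoop2 hadoop3; do
    echo scp -r /opt/hadoop "root@$host:/opt"
  done
}
distribute_conf
```

With passwordless SSH set up between the nodes, the same loop works non-interactively.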

 

4. Start the cluster

On the first start only, format the NameNode on hadoop1:

bin/hdfs namenode -format

To start a single daemon: hadoop-daemon.sh start namenode

To start all of HDFS (run on hadoop1, the NameNode host): sbin/start-dfs.sh

To start YARN (run on hadoop2, the ResourceManager host): sbin/start-yarn.sh
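After startup, the daemon layout should match the plan from section 2. As a checklist (the placement below follows this guide's configuration), run `jps` on each node and compare:

```shell
# Expected `jps` daemons per node, per the cluster plan above.
expected_hadoop1="NameNode DataNode NodeManager"
expected_hadoop2="DataNode ResourceManager NodeManager"
expected_hadoop3="SecondaryNameNode DataNode NodeManager"
echo "hadoop1: $expected_hadoop1"
echo "hadoop2: $expected_hadoop2"
echo "hadoop3: $expected_hadoop3"
```

You can also check the Hadoop 2.x web UIs: the NameNode at http://hadoop1:50070 and the ResourceManager at http://hadoop2:8088.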

 
