Hadoop Day01 Cluster Setup

  1. yum install -y lrzsz (install lrzsz so files can be uploaded to the VM)
  2. mkdir -p /home/hadoop/apps (create the install directory)
  3. rz (upload hadoop-2.8.0.tar.gz and jdk-8u181-linux-x64.tar.gz to the VM)
  4. Configure passwordless SSH login (a scripted sketch follows this list):
  • ssh-keygen -t rsa (generate a key pair; run on Hadoop01, Hadoop02, and Hadoop03)
  • ssh-copy-id Hadoop01 (run on all three nodes, so every public key lands in Hadoop01's authorized_keys)
  • scp authorized_keys Hadoop02:$PWD
  • scp authorized_keys Hadoop03:$PWD (run both from /root/.ssh on Hadoop01, so the combined key list reaches the other nodes)
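A minimal scripted version of the exchange above, assuming the three hostnames already resolve via /etc/hosts and everything runs as root (ssh-copy-id prompts once for each target's password):

# Run on each of the three nodes:
ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa   # key pair without a passphrase

# Push this node's public key to every node, including itself,
# so the start scripts can reach all workers without a password
for host in Hadoop01 Hadoop02 Hadoop03; do
    ssh-copy-id "root@$host"
done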
  5. JDK installation:
  • tar -zxvf jdk-8u181-linux-x64.tar.gz (extract the archive)
  • vi /etc/profile (open the profile file and add the environment variables):
export JAVA_HOME=/home/hadoop/apps/jdk1.8.0_181
export PATH=$JAVA_HOME/bin:$PATH
  • java -version (check that the JDK is on the PATH)
  6. Hadoop installation and deployment:
  • tar -zxvf hadoop-2.8.0.tar.gz (extract the archive)
  • vi /etc/profile (open the profile file and add the environment variables):
export HADOOP_HOME=/home/hadoop/apps/hadoop-2.8.0
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
  • source /etc/profile (reload the environment)
  • hadoop version (verify the install; expected output for both checks is sketched below)
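If both sets of environment variables took effect, the two checks should print something close to the following (exact build lines will vary):

java -version
# expect: java version "1.8.0_181" plus the runtime and VM build lines

hadoop version
# expect: Hadoop 2.8.0 plus the build, checksum, and jar location lines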
  7. Edit the configuration files (a sanity check for these files follows this step):
  • cd /home/hadoop/apps/hadoop-2.8.0/etc/hadoop (every file below lives in this directory)
  • vi hadoop-env.sh and add:
 # The java implementation to use.
export JAVA_HOME=/home/hadoop/apps/jdk1.8.0_181
  • vi core-site.xml and add:
<configuration>

  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://Hadoop01:9000</value>
  </property>

  <property>
    <name>hadoop.tmp.dir</name>
    <value>/root/hdptmp</value>
  </property>

</configuration>
  • vi hdfs-site.xml and add:
<configuration>

  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/root/hdp-meta</value>
  </property>

  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/root/hdp-blocks</value>
  </property>

  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>

  <property>
    <name>dfs.blocksize</name>
    <value>128m</value>
  </property>

  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>Hadoop01:50090</value>
  </property>

</configuration>
  • cp mapred-site.xml.template mapred-site.xml
    vi mapred-site.xml and add:
<configuration>

  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>

</configuration>
  • vi yarn-site.xml and add:
<configuration>

  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>Hadoop01</value>
  </property>

  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>

</configuration>
  • vi slaves and list every worker node, one per line:
Hadoop01
Hadoop02
Hadoop03
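
Before copying the configs to the other nodes, two quick sanity checks help: XML well-formedness and whether Hadoop resolves the values you expect. A sketch, assuming xmllint is installed (yum install -y libxml2 if not) and the PATH changes above are in effect:

cd /home/hadoop/apps/hadoop-2.8.0/etc/hadoop
for f in core-site.xml hdfs-site.xml mapred-site.xml yarn-site.xml; do
    xmllint --noout "$f" && echo "$f: OK"   # fails loudly on malformed XML
done

# Print the effective values as Hadoop resolves them
hdfs getconf -confKey fs.defaultFS       # expect hdfs://Hadoop01:9000
hdfs getconf -confKey dfs.replication    # expect 3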

  8. Once everything is configured, copy these files to Hadoop02 and Hadoop03 (a loop that performs all of these copies follows this step):

  • mkdir -p /home/hadoop/apps (create the directory on Hadoop02 and on Hadoop03)

  • scp -r /home/hadoop/apps/jdk1.8.0_181/ Hadoop02:/home/hadoop/apps/

  • scp -r /home/hadoop/apps/jdk1.8.0_181/ Hadoop03:/home/hadoop/apps/

  • scp -r /home/hadoop/apps/hadoop-2.8.0 Hadoop02:/home/hadoop/apps/

  • scp -r /home/hadoop/apps/hadoop-2.8.0 Hadoop03:/home/hadoop/apps/

  • scp -r /etc/hosts Hadoop02:/etc

  • scp -r /etc/hosts Hadoop03:/etc

  • scp -r /etc/profile Hadoop02:/etc/profile

  • scp -r /etc/profile Hadoop03:/etc/profile

  • source /etc/profile (run on Hadoop02 and on Hadoop03)

  • java -version and hadoop version (verify on both nodes)
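
The copies above collapse into one loop. A sketch, run as root on Hadoop01 and assuming identical paths on every node:

for host in Hadoop02 Hadoop03; do
    ssh "$host" "mkdir -p /home/hadoop/apps"
    scp -r /home/hadoop/apps/jdk1.8.0_181 "$host":/home/hadoop/apps/
    scp -r /home/hadoop/apps/hadoop-2.8.0 "$host":/home/hadoop/apps/
    scp /etc/hosts "$host":/etc/hosts
    scp /etc/profile "$host":/etc/profile
    # source only affects this one SSH session; interactive logins read
    # /etc/profile on their own, so this doubles as the verification step
    ssh "$host" "source /etc/profile && java -version && hadoop version"
done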
  9. Start the cluster:

  • hadoop namenode -format (format HDFS before the first start; on 2.x the preferred spelling is hdfs namenode -format)

  • start-all.sh (convenience script; on 2.x it simply calls start-dfs.sh and then start-yarn.sh)

  • jps (check the Java daemons on every node; the expected set is sketched below)

  • View the NameNode web UI at 192.168.72.101:50070 (note: this is Hadoop01's IP address)
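A quick post-start check. Daemon names below are as jps reports them on Hadoop 2.x; DataNode and NodeManager also run on Hadoop01 because it is listed in slaves:

jps
# on Hadoop01:          NameNode, SecondaryNameNode, ResourceManager,
#                       DataNode, NodeManager, Jps
# on Hadoop02/Hadoop03: DataNode, NodeManager, Jps

# Confirm that all three DataNodes registered with the NameNode
hdfs dfsadmin -report | grep "Live datanodes"
# expect: Live datanodes (3):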
