Building a Big Data Environment, Part 2 (Hadoop 2.0 Cluster Setup)

Hadoop 2.0

  1. Download the installation package: wget http://archive.apache.org/dist/hadoop/common/hadoop-2.6.0/hadoop-2.6.0.tar.gz
  2. Extract the package: tar -zxvf hadoop-2.6.0.tar.gz
  3. Add the configuration: cd hadoop-2.6.0/etc/hadoop
    1. vim hadoop-env.sh
      export JAVA_HOME=/root/jdk1.8.0_112

    2. vim yarn-env.sh
      export JAVA_HOME=/root/jdk1.8.0_112

    3. vim slaves
      slave1
      slave2

    4. vim core-site.xml

      <property>
        <name>fs.defaultFS</name>
        <value>hdfs://192.168.152.128:9000</value>
      </property>

      <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/root/hadoop-2.6.0/tmp</value>
      </property>

    5. vim hdfs-site.xml

      <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>H:9001</value>
      </property>

      <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/root/hadoop-2.6.0/dfs/name</value>
      </property>

      <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/root/hadoop-2.6.0/dfs/data</value>
      </property>

      <property>
        <name>dfs.replication</name>
        <value>3</value>
      </property>

      (Note: with only two DataNodes in this cluster, a replication factor of 2 may be more appropriate.)

    6. mv mapred-site.xml.template mapred-site.xml
      vim mapred-site.xml

      <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
      </property>

    7. vim yarn-site.xml

      <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
      </property>

      <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
      </property>

      <property>
        <name>yarn.resourcemanager.address</name>
        <value>H:8032</value>
      </property>

      <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>H:8030</value>
      </property>

      <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>H:8035</value>
      </property>

      <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>H:8033</value>
      </property>

      <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>H:8088</value>
      </property>

    8. Create the temporary and data directories (these paths must match the values set in core-site.xml and hdfs-site.xml)

      1. mkdir -p /root/hadoop-2.6.0/tmp

        mkdir -p /root/hadoop-2.6.0/dfs/name

        mkdir -p /root/hadoop-2.6.0/dfs/data
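      The directory-creation step above can be sketched as a loop. HADOOP_DIR is an assumed helper variable, not from the original post; it must match the paths configured in core-site.xml and hdfs-site.xml:

      ```shell
      # Sketch of step 8 above. HADOOP_DIR is an assumed helper variable; it must
      # match the directory paths configured in core-site.xml and hdfs-site.xml
      # (/root/hadoop-2.6.0/... in this post, i.e. $HOME/hadoop-2.6.0 for root).
      HADOOP_DIR="${HADOOP_DIR:-$HOME/hadoop-2.6.0}"
      for d in tmp dfs/name dfs/data; do
        mkdir -p "$HADOOP_DIR/$d"   # -p also creates missing parent directories
      done
      ```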

    9. Configure environment variables

      1. vi /etc/profile

        1. export HADOOP_HOME=/root/hadoop-2.6.0

          export PATH=$PATH:$HADOOP_HOME/bin

      2. Reload the environment variables: source /etc/profile

    10. Copy the installation to the slave nodes

      1. scp -r /root/hadoop-2.6.0 root@H1:/root/hadoop-2.6.0

        scp -r /root/hadoop-2.6.0 root@H2:/root/hadoop-2.6.0

        (The target hostnames must resolve via /etc/hosts and should match the entries in the slaves file.)

    11. Start the cluster

      1. Format the NameNode: in the bin directory, run ./hdfs namenode -format (the older hadoop namenode -format still works in 2.x but prints a deprecation warning)

      2. Start Hadoop: in the sbin directory, run ./start-all.sh; afterwards, running jps on each node should list the daemons (NameNode, SecondaryNameNode, and ResourceManager on the master; DataNode and NodeManager on the slaves)
