Hadoop Environment Setup

  1. Edit hadoop-env.sh

    Set JAVA_HOME to the path of your JDK installation.
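The exact value is environment-specific; assuming an OpenJDK 8 install under /usr/lib/jvm/java-8-openjdk (an example path, not from the original), the line in hadoop-env.sh would look like:

```shell
# Hadoop does not reliably inherit JAVA_HOME from the login shell,
# so hadoop-env.sh sets it explicitly.
# The path below is an example -- point it at your own JDK.
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk
```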

  2. Edit core-site.xml

    <configuration>
        <property>
            <name>fs.defaultFS</name>
            <value>hdfs://master:9000</value>
        </property>
        <property>
            <name>hadoop.tmp.dir</name>
            <value>/data/hadoop/tmp1</value>
        </property>
        <property>
            <name>io.file.buffer.size</name>
            <value>4096</value>
        </property>
    </configuration>

  3. Edit hdfs-site.xml

    <configuration>
      <property>
          <name>dfs.replication</name>
          <value>2</value>
      </property>
      <property>
          <name>dfs.namenode.name.dir</name>
          <value>file:///data/hadoop/dfs/name</value>
      </property>
      <property>
          <name>dfs.datanode.data.dir</name>
          <value>file:///data/hadoop/dfs/data</value>
      </property>
      <property>
          <name>dfs.nameservices</name>
          <value>h1</value>
      </property>
      <property>
          <name>dfs.namenode.secondary.http-address</name>
          <value>master:50090</value>
      </property>
      <property>
          <name>dfs.webhdfs.enabled</name>
          <value>true</value>
      </property>
    </configuration>

  4. Edit mapred-site.xml (the file is named mapred-site.xml, not mapper-site.xml)

    <configuration>
      <property>
         <name>mapreduce.framework.name</name>
         <value>yarn</value>
         <final>true</final>
      </property>
      <property>
         <name>mapreduce.jobtracker.http.address</name>
         <value>127.0.0.1:50030</value>
      </property>
      <property>
         <name>mapreduce.jobhistory.address</name>
         <value>127.0.0.1:10020</value>
      </property>
      <property>
         <name>mapreduce.jobhistory.webapp.address</name>
         <value>127.0.0.1:19888</value>
      </property>
      <property>
         <name>mapred.job.tracker</name>
         <value>127.0.0.1:9001</value>
      </property>
    </configuration>

  5. Edit yarn-site.xml

    <configuration>

    <!-- Site specific YARN configuration properties -->
      <property>
         <name>yarn.resourcemanager.hostname</name>
         <value>127.0.0.1</value>
      </property>
      <property>
         <name>yarn.nodemanager.aux-services</name>
         <value>mapreduce_shuffle</value>
      </property>
      <property>
         <name>yarn.resourcemanager.address</name>
         <value>127.0.0.1:8032</value>
      </property>
      <property>
         <name>yarn.resourcemanager.scheduler.address</name>
         <value>127.0.0.1:8030</value>
      </property>
      <property>
         <name>yarn.resourcemanager.resource-tracker.address</name>
         <value>127.0.0.1:8031</value>
      </property>
      <property>
         <name>yarn.resourcemanager.admin.address</name>
         <value>127.0.0.1:8033</value>
      </property>
      <property>
         <name>yarn.resourcemanager.webapp.address</name>
         <value>127.0.0.1:8088</value>
      </property>
    </configuration>

  6. Before the first start, format the NameNode: hdfs namenode -format

  7. Start the cluster: bring up HDFS first, then YARN.
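Assuming $HADOOP_HOME/sbin is on the PATH and the NameNode has already been formatted (step 6), the start sequence is typically:

```shell
# Start the HDFS daemons (NameNode, DataNode, SecondaryNameNode)
start-dfs.sh

# Then start the YARN daemons (ResourceManager, NodeManager)
start-yarn.sh
```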

  8. Verify: run jps and check that the expected daemon processes are running.


  9. Example jps output:
    2871 ResourceManager
    3000 Jps
    2554 NameNode
    2964 NodeManager
    2669 DataNode
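Beyond jps, the daemons can be checked from the command line once they are up (these commands require a running cluster, so output will vary):

```shell
# Show live DataNodes and HDFS capacity as seen by the NameNode
hdfs dfsadmin -report

# List the NodeManagers registered with the ResourceManager
yarn node -list
```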

