Setting up Hadoop 2.0.0 CDH 4.4.0

Configure the Hadoop environment.
Several configuration files are involved:
hadoop-env.sh: environment Hadoop depends on, e.g. the JDK
core-site.xml: core configuration, such as I/O settings common to HDFS and MapReduce
hdfs-site.xml: configuration for the HDFS daemons: the namenode, the secondary namenode, and the datanodes
yarn-site.xml: YARN (ResourceManager/NodeManager) configuration
masters: lists the machines that run the secondary namenode
slaves: lists the machines that run the datanodes and nodemanagers
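For a single-node (pseudo-distributed) setup, both files typically contain just one hostname per line; a sketch of their contents (file locations are under the Hadoop conf directory):

```shell
# masters — machine that runs the secondary namenode
localhost

# slaves — machines that run the datanodes/nodemanagers
localhost
```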

1. Configure JAVA_HOME in hadoop-env.sh
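For example, open hadoop-env.sh and point JAVA_HOME at the JDK install path (the path below is an assumption; use the one on your system):

```shell
# In hadoop-env.sh: tell Hadoop where the JDK lives (example path; adjust to your system)
export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk
```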
2. Configure core-site.xml

<property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
</property>

3. Configure hdfs-site.xml

<property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/usr/local/hadoop/dfs/name</value>
</property>

<property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/usr/local/hadoop/dfs/data</value>
</property>

<property>
    <name>dfs.replication</name>
    <value>1</value>
</property>

<property>
    <name>dfs.permissions</name>
    <value>false</value>
</property>

4. Configure yarn-site.xml

<property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>localhost:8031</value>
</property>

<property>
    <name>yarn.resourcemanager.address</name>
    <value>localhost:8032</value>
</property>

<property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>localhost:8030</value>
</property>

<property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>localhost:8033</value>
</property>

<property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>localhost:8088</value>
</property>

<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce.shuffle</value>
</property>

<property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>

5. Rename mapred-site.xml.template to mapred-site.xml, then configure:

<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
</property>

<property>
    <name>mapreduce.jobhistory.address</name>
    <value>localhost:10020</value>
</property>

<property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>localhost:19888</value>
</property>

6. Start Hadoop: sbin/start-all.sh (run from the Hadoop install directory)
7. Start the JobHistory server: sbin/mr-jobhistory-daemon.sh start historyserver
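After starting, you can verify that the daemons are up with the JDK's jps tool (assuming jps is on PATH):

```shell
# List running JVM processes; the Hadoop daemons should all appear
jps
# Expect NameNode, DataNode, SecondaryNameNode, ResourceManager and NodeManager
# (plus JobHistoryServer after step 7). The ResourceManager web UI is at
# http://localhost:8088 and the JobHistory UI at http://localhost:19888.
```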
