Hadoop 2.6.0 Configuration Process

The etc/hadoop directory under the Hadoop 2.6.0 installation directory contains the configuration files.

1. Configure core-site.xml

<configuration>
        <property>
                <name>fs.defaultFS</name>
                <value>hdfs://master:9000</value>
                <description>The name of the default file system</description>
        </property>
        <property>
                <name>hadoop.tmp.dir</name>
                <value>/usr/local/hadoop/hadoop-2.6.0/tmp</value>
                <description>A base for other temporary directories</description>
        </property>
        <property>
            <name>hadoop.native.lib</name>
            <value>true</value>
            <description>Should native hadoop libraries, if present, be used.</description>
        </property>
</configuration>

2. Configure hdfs-site.xml

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>master:50090</value>
        <description>The secondary namenode http server address and port.</description>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/usr/local/hadoop/hadoop-2.6.0/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/usr/local/hadoop/hadoop-2.6.0/dfs/data</value>
    </property>
    <property>
        <name>dfs.namenode.checkpoint.dir</name>
        <value>file:///usr/local/hadoop/hadoop-2.6.0/dfs/namesecondary</value>
        <description>Determines where on the local filesystem the DFS secondary namenode should store the temporary images to merge. If this is a comma-delimited list of directories then the image is replicated in all of the directories for redundancy.</description>
    </property>
</configuration>
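The local directories referenced in core-site.xml and hdfs-site.xml above should exist and be writable by the Hadoop user on the relevant nodes. A quick sketch for creating them on the master:

```shell
# Create the local storage directories configured above
mkdir -p /usr/local/hadoop/hadoop-2.6.0/tmp                # hadoop.tmp.dir
mkdir -p /usr/local/hadoop/hadoop-2.6.0/dfs/name           # NameNode metadata (master)
mkdir -p /usr/local/hadoop/hadoop-2.6.0/dfs/data           # DataNode blocks (each slave)
mkdir -p /usr/local/hadoop/hadoop-2.6.0/dfs/namesecondary  # SecondaryNameNode checkpoints
```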

3. Configure mapred-site.xml

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>

4. Configure yarn-site.xml

<configuration>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>master</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>

5. Configure hadoop-env.sh

export JAVA_HOME=/usr/lib/java/jdk1.7.0_80
export HADOOP_HOME=/usr/local/hadoop/hadoop-2.6.0
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"

6. Configure ~/.bashrc (remember to source it after editing)

export JAVA_HOME=/usr/lib/java/jdk1.7.0_80
export JRE_HOME=${JAVA_HOME}/jre
export HADOOP_HOME=/usr/local/hadoop/hadoop-2.6.0
export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop
export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_HOME}/lib/native
export HADOOP_OPTS="-Djava.library.path=${HADOOP_HOME}/lib"
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${IDEA_HOME}/bin:${SPARK_HOME}/bin:${SCALA_HOME}/bin:${JAVA_HOME}/bin:${HADOOP_HOME}/bin:$PATH
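After saving ~/.bashrc, the new variables can be loaded into the current shell and sanity-checked (assuming the paths above match your machine):

```shell
# Reload the profile in the current shell
source ~/.bashrc

# Verify that the environment took effect
echo ${HADOOP_HOME}   # expect /usr/local/hadoop/hadoop-2.6.0
hadoop version        # expect the Hadoop 2.6.0 version banner
```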

7. Configure slaves

slave01
slave02

8. scp the entire Hadoop 2.6.0 directory to every node; copy the .bashrc file to every node as well, and source it there.
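A minimal sketch of step 8, assuming passwordless SSH to the slaves is already set up and the hostnames match the slaves file:

```shell
# Push the Hadoop installation and shell profile to each worker node
for node in slave01 slave02; do
    scp -r /usr/local/hadoop/hadoop-2.6.0 ${node}:/usr/local/hadoop/
    scp ~/.bashrc ${node}:~/
done
```

The copied .bashrc takes effect the next time you log in to each node; run `source ~/.bashrc` there to apply it to an already-open session.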
9. Format the HDFS filesystem
hdfs namenode -format
10. Start HDFS for testing
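Steps 9 and 10 together, run on master. The start scripts live under sbin, which is not on the PATH set above, so they are invoked by full path; the pi job at the end is an optional smoke test:

```shell
# One-time format of the NameNode (re-running this wipes HDFS metadata)
hdfs namenode -format

# Start the HDFS and YARN daemons across the cluster
${HADOOP_HOME}/sbin/start-dfs.sh
${HADOOP_HOME}/sbin/start-yarn.sh

# jps on master should list NameNode, SecondaryNameNode, and ResourceManager;
# jps on each slave should list DataNode and NodeManager
jps

# Optional smoke test: run the bundled pi example on YARN
hadoop jar ${HADOOP_HOME}/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar pi 2 10
```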
