Apache Hadoop Installation

1. Set up the JDK environment
  >>> For the JDK/Hadoop compatibility matrix, see: http://wiki.apache.org/hadoop/HadoopJavaVersions
  >>> Use at least Oracle JDK 1.6.0_20. This build uses JDK 1.7.0_21.


  [bruce@iRobot hadoop-install]$ cat ~/.bash_profile
  # .bash_profile

  # Get the aliases and functions
  if [ -f ~/.bashrc ]; then
    . ~/.bashrc
  fi

  # User specific environment and startup programs

  PATH=$PATH:$HOME/bin

  #java settings
  export PATH
  export JAVA_HOME=/u01/app/software/jdk1.7.0_21
  export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH
  export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
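
  A quick sanity check after sourcing the profile (the echoed path should match the export above; the java -version banner depends on your exact build):
  [bruce@iRobot hadoop-install]$ source ~/.bash_profile
  [bruce@iRobot hadoop-install]$ echo $JAVA_HOME
  /u01/app/software/jdk1.7.0_21
  [bruce@iRobot hadoop-install]$ java -version        # should report 1.7.0_21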


2. Create the hadoop working directory
  [bruce@iRobot hadoop-install]$ pwd
  /home/bruce/hadoop-install

3. Download Hadoop
  Apache Hadoop binary distribution link (on 64-bit systems the native library shipped in the binary package needs to be rebuilt; otherwise you will see warnings at runtime):
  wget http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-2.5.2/hadoop-2.5.2.tar.gz

  Apache Hadoop source distribution link:
  wget http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-2.5.2/hadoop-2.5.2-src.tar.gz

  [bruce@iRobot hadoop-install]$ ls
  hadoop-2.5.2  hadoop-2.5.2-src  hadoop-2.5.2-src.tar.gz  hadoop-2.5.2.tar.gz 

4. Build Hadoop's native library:
  ---------------------------
    (1) Set up the build environment
       Install the dependency packages on RHEL:
       yum install svn autoconf automake libtool cmake ncurses-devel openssl-devel gcc

       ...(Also install Maven, protobuf, and Ant. For anything else, install whatever packages the build prompts for. On RHEL it is best to set up a yum repository first; otherwise chasing dependencies by hand is painful.)

       wget http://mirrors.hust.edu.cn/apache/maven/maven-3/3.3.3/binaries/apache-maven-3.3.3-bin.tar.gz
       wget https://github.com/google/protobuf/archive/v2.5.0.zip

       ...

       wget http://mirrors.cnnic.cn/apache/ant/source/apache-ant-1.9.6-src.zip  
       ...
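
       Before kicking off the build, confirm the toolchain is visible and at the expected versions (a minimal check; exact version strings depend on what you installed, but protoc must report 2.5.0):
       $ mvn -version
       $ ant -version
       $ cmake --version
       $ protoc --version     # must print: libprotoc 2.5.0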

    (2)Create binary distribution with native code and without documentation:
      $pwd
      /home/bruce/hadoop-install/hadoop-2.5.2-src 
      
      
      Build the native library and the binary dist, skipping javadoc:
      $ mvn package -Dmaven.javadoc.skip=true -Pdist,native -DskipTests -Dtar

      >>>> A possible failure midway (the error below is caused by an outdated protoc; protoc 2.5+ is required):
          [ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.6:run (compile-proto) on project hadoop-common: An Ant BuildException has occured: exec returned: 127 -> [Help 1]
          org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.6:run (compile-proto) on project hadoop-common: An Ant BuildException has occured: exec returned: 127


          >>> You need to ensure you have protobuf installed and protoc is on your PATH.
            Download: https://github.com/google/protobuf/tree/v2.5.0

            $wget https://github.com/google/protobuf/archive/v2.5.0.zip

          To build and install the C++ Protocol Buffer runtime and the Protocol
          Buffer compiler (protoc) execute the following:
            $ autoreconf -f -i -Wall,no-obsolete <== run this only if there is no configure script
            $ ./configure
            $ make
            $ make check
            $ sudo make install

          If "make check" fails, you can still install, but it is likely that
          some features of this library will not work correctly on your system.
          Proceed at your own risk.

          "make install" may require superuser privileges.

          For advanced usage information on configure and make, see INSTALL.txt.
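
          On RHEL, protobuf installs under /usr/local/lib, which may not be on the runtime linker path; refreshing the cache avoids "error while loading shared libraries: libprotoc..." when protoc runs (a common post-install step):
            $ sudo ldconfig
            $ protoc --version   # should print: libprotoc 2.5.0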

      >>>>>

          After a successful build, Maven prints:
          [INFO] Executed tasks
          [INFO]
          [INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-dist ---
          [INFO] Skipping javadoc generation
          [INFO] ------------------------------------------------------------------------
          [INFO] Reactor Summary:
          [INFO]
          [INFO] Apache Hadoop Main ................................. SUCCESS [  1.712 s]
          [INFO] Apache Hadoop Project POM .......................... SUCCESS [  1.096 s]
          [INFO] Apache Hadoop Annotations .......................... SUCCESS [  2.250 s]
          [INFO] Apache Hadoop Assemblies ........................... SUCCESS [  0.325 s]
          [INFO] Apache Hadoop Project Dist POM ..................... SUCCESS [  1.948 s]
          [INFO] Apache Hadoop Maven Plugins ........................ SUCCESS [  2.319 s]
          [INFO] Apache Hadoop MiniKDC .............................. SUCCESS [  1.465 s]
          [INFO] Apache Hadoop Auth ................................. SUCCESS [  1.765 s]
          [INFO] Apache Hadoop Auth Examples ........................ SUCCESS [  0.894 s]
          [INFO] Apache Hadoop Common ............................... SUCCESS [ 41.697 s]
          [INFO] Apache Hadoop NFS .................................. SUCCESS [  1.445 s]
          [INFO] Apache Hadoop Common Project ....................... SUCCESS [  0.037 s]
          [INFO] Apache Hadoop HDFS ................................. SUCCESS [ 38.363 s]
          [INFO] Apache Hadoop HttpFS ............................... SUCCESS [01:51 min]
          [INFO] Apache Hadoop HDFS BookKeeper Journal .............. SUCCESS [  1.296 s]
          [INFO] Apache Hadoop HDFS-NFS ............................. SUCCESS [  1.144 s]
          [INFO] Apache Hadoop HDFS Project ......................... SUCCESS [  0.038 s]
          [INFO] hadoop-yarn ........................................ SUCCESS [  0.032 s]
          [INFO] hadoop-yarn-api .................................... SUCCESS [  3.923 s]
          [INFO] hadoop-yarn-common ................................. SUCCESS [  4.715 s]
          [INFO] hadoop-yarn-server ................................. SUCCESS [  0.032 s]
          [INFO] hadoop-yarn-server-common .......................... SUCCESS [  1.181 s]
          [INFO] hadoop-yarn-server-nodemanager ..................... SUCCESS [  5.559 s]
          [INFO] hadoop-yarn-server-web-proxy ....................... SUCCESS [  0.643 s]
          [INFO] hadoop-yarn-server-applicationhistoryservice ....... SUCCESS [  0.882 s]
          [INFO] hadoop-yarn-server-resourcemanager ................. SUCCESS [  3.526 s]
          [INFO] hadoop-yarn-server-tests ........................... SUCCESS [  0.466 s]
          [INFO] hadoop-yarn-client ................................. SUCCESS [  0.906 s]
          [INFO] hadoop-yarn-applications ........................... SUCCESS [  0.031 s]
          [INFO] hadoop-yarn-applications-distributedshell .......... SUCCESS [  0.524 s]
          [INFO] hadoop-yarn-applications-unmanaged-am-launcher ..... SUCCESS [  0.433 s]
          [INFO] hadoop-yarn-site ................................... SUCCESS [  0.036 s]
          [INFO] hadoop-yarn-project ................................ SUCCESS [  4.647 s]
          [INFO] hadoop-mapreduce-client ............................ SUCCESS [  0.066 s]
          [INFO] hadoop-mapreduce-client-core ....................... SUCCESS [  4.092 s]
          [INFO] hadoop-mapreduce-client-common ..................... SUCCESS [  2.123 s]
          [INFO] hadoop-mapreduce-client-shuffle .................... SUCCESS [  0.561 s]
          [INFO] hadoop-mapreduce-client-app ........................ SUCCESS [  2.219 s]
          [INFO] hadoop-mapreduce-client-hs ......................... SUCCESS [  1.369 s]
          [INFO] hadoop-mapreduce-client-jobclient .................. SUCCESS [  3.182 s]
          [INFO] hadoop-mapreduce-client-hs-plugins ................. SUCCESS [  0.366 s]
          [INFO] Apache Hadoop MapReduce Examples ................... SUCCESS [  0.944 s]
          [INFO] hadoop-mapreduce ................................... SUCCESS [  3.983 s]
          [INFO] Apache Hadoop MapReduce Streaming .................. SUCCESS [ 15.878 s]
          [INFO] Apache Hadoop Distributed Copy ..................... SUCCESS [ 19.823 s]
          [INFO] Apache Hadoop Archives ............................. SUCCESS [  0.446 s]
          [INFO] Apache Hadoop Rumen ................................ SUCCESS [  0.877 s]
          [INFO] Apache Hadoop Gridmix .............................. SUCCESS [  1.000 s]
          [INFO] Apache Hadoop Data Join ............................ SUCCESS [  0.435 s]
          [INFO] Apache Hadoop Extras ............................... SUCCESS [  0.561 s]
          [INFO] Apache Hadoop Pipes ................................ SUCCESS [  8.634 s]
          [INFO] Apache Hadoop OpenStack support .................... SUCCESS [  1.004 s]
          [INFO] Apache Hadoop Client ............................... SUCCESS [  4.992 s]
          [INFO] Apache Hadoop Mini-Cluster ......................... SUCCESS [  0.090 s]
          [INFO] Apache Hadoop Scheduler Load Simulator ............. SUCCESS [02:41 min]
          [INFO] Apache Hadoop Tools Dist ........................... SUCCESS [  3.985 s]
          [INFO] Apache Hadoop Tools ................................ SUCCESS [  0.028 s]
          [INFO] Apache Hadoop Distribution ......................... SUCCESS [ 11.344 s]
          [INFO] ------------------------------------------------------------------------
          [INFO] BUILD SUCCESS
          [INFO] ------------------------------------------------------------------------
          [INFO] Total time: 08:09 min
          [INFO] Finished at: 2015-11-06T11:05:13+08:00
          [INFO] Final Memory: 184M/839M
          [INFO] ------------------------------------------------------------------------

          The rebuilt Hadoop distribution lands under /home/bruce/hadoop-install/hadoop-2.5.2-src/hadoop-dist:
          [bruce@iRobot hadoop-2.5.2-src]$ pwd
          /home/bruce/hadoop-install/hadoop-2.5.2-src
          [bruce@iRobot hadoop-2.5.2-src]$ ls -al
          total 120
          drwxr-xr-x 15 bruce oinstall  4096 Nov 15 2014 .
          drwxr-xr-x  6 bruce oinstall  4096 Nov  6 11:06 ..
          -rw-r--r--  1 bruce oinstall 10531 Nov 15 2014 BUILDING.txt
          drwxr-xr-x  2 bruce oinstall  4096 Nov  6 10:56 dev-support
          -rw-r--r--  1 bruce oinstall   439 Nov 15 2014 .gitattributes
          drwxr-xr-x  4 bruce oinstall  4096 Nov  6 10:57 hadoop-assemblies
          drwxr-xr-x  3 bruce oinstall  4096 Nov  6 11:02 hadoop-client
          drwxr-xr-x 10 bruce oinstall  4096 Nov  6 10:58 hadoop-common-project
          drwxr-xr-x  3 bruce oinstall  4096 Nov  6 11:05 hadoop-dist
          drwxr-xr-x  7 bruce oinstall  4096 Nov  6 11:00 hadoop-hdfs-project
          drwxr-xr-x 10 bruce oinstall  4096 Nov  6 11:01 hadoop-mapreduce-project
          drwxr-xr-x  4 bruce oinstall  4096 Nov  6 10:57 hadoop-maven-plugins
          drwxr-xr-x  3 bruce oinstall  4096 Nov  6 11:02 hadoop-minicluster
          drwxr-xr-x  4 bruce oinstall  4096 Nov  6 10:57 hadoop-project
          drwxr-xr-x  3 bruce oinstall  4096 Nov  6 10:57 hadoop-project-dist
          drwxr-xr-x 14 bruce oinstall  4096 Nov  6 11:05 hadoop-tools
          drwxr-xr-x  4 bruce oinstall  4096 Nov  6 11:00 hadoop-yarn-project
          -rw-r--r--  1 bruce oinstall 15458 Nov 15 2014 LICENSE.txt
          -rw-r--r--  1 bruce oinstall   101 Nov 15 2014 NOTICE.txt
          -rw-r--r--  1 bruce oinstall 18081 Nov 15 2014 pom.xml
          -rw-r--r--  1 bruce oinstall  1366 Nov 15 2014 README.txt

          The native library is under /home/bruce/hadoop-install/hadoop-2.5.2-src/hadoop-dist/target/hadoop-2.5.2/lib/native/:
          [bruce@iRobot hadoop]$ ls /home/bruce/hadoop-install/hadoop-2.5.2-src/hadoop-dist
          pom.xml  target
          [bruce@iRobot hadoop]$ ls /home/bruce/hadoop-install/hadoop-2.5.2-src/hadoop-dist/target/
          antrun  dist-layout-stitching.sh  dist-tar-stitching.sh  hadoop-2.5.2  hadoop-2.5.2.tar.gz  hadoop-dist-2.5.2.jar  maven-archiver  test-dir
          [bruce@iRobot hadoop]$ ls /home/bruce/hadoop-install/hadoop-2.5.2-src/hadoop-dist/target/hadoop-2.5.2
          bin  etc  include  lib  libexec  LICENSE.txt  NOTICE.txt  README.txt  sbin  share
          [bruce@iRobot hadoop]$ ls /home/bruce/hadoop-install/hadoop-2.5.2-src/hadoop-dist/target/hadoop-2.5.2/lib
          native
          [bruce@iRobot hadoop-2.5.2-src]$ ls -al hadoop-dist/target/hadoop-2.5.2/lib/native/
          total 4048
          drwxr-xr-x 2 bruce oinstall    4096 Nov  6 11:05 .
          drwxr-xr-x 3 bruce oinstall    4096 Nov  6 11:05 ..
          -rw-r--r-- 1 bruce oinstall  973114 Nov  6 11:05 libhadoop.a
          -rw-r--r-- 1 bruce oinstall 1487372 Nov  6 11:05 libhadooppipes.a
          lrwxrwxrwx 1 bruce oinstall      18 Nov  6 11:05 libhadoop.so -> libhadoop.so.1.0.0
          -rwxr-xr-x 1 bruce oinstall  585560 Nov  6 11:05 libhadoop.so.1.0.0
          -rw-r--r-- 1 bruce oinstall  582136 Nov  6 11:05 libhadooputils.a
          -rw-r--r-- 1 bruce oinstall  298354 Nov  6 11:05 libhdfs.a
          lrwxrwxrwx 1 bruce oinstall      16 Nov  6 11:05 libhdfs.so -> libhdfs.so.0.0.0
          -rwxr-xr-x 1 bruce oinstall  200186 Nov  6 11:05 libhdfs.so.0.0.0

       

          $ cp /home/bruce/hadoop-install/hadoop-2.5.2-src/hadoop-dist/target/hadoop-2.5.2/lib/native/* /home/bruce/hadoop-install/hadoop-2.5.2/lib/native
          

          Replace the original lib/native from hadoop-2.5.2.tar.gz with the freshly built lib/native. Remember to edit $HADOOP_HOME/etc/hadoop/hadoop-env.sh and append at the end:
          export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib:$HADOOP_HOME/lib/native"
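
          To confirm the rebuilt native code is actually loaded, Hadoop 2.x ships a checknative command (run from the distribution root; which codec lines show "true" depends on what was compiled in):
          $ bin/hadoop checknative -a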
    -----------------------



5. Create a hadoop symlink

  [bruce@iRobot hadoop-install]$ ls
  hadoop-2.5.2  hadoop-2.5.2-src  hadoop-2.5.2-src.tar.gz  hadoop-2.5.2.tar.gz

  # The benefit of a symlink: switching to another Hadoop version later only requires repointing the link at the new version (see the one-liner after the listing below)
  [bruce@iRobot hadoop-install]$ ln -s hadoop-2.5.2 hadoop
  [bruce@iRobot hadoop-install]$ ls -l
  total 158836
  lrwxrwxrwx  1 bruce oinstall        12 Nov  6 11:06 hadoop -> hadoop-2.5.2
  drwxr-xr-x 12 bruce oinstall      4096 Nov  6 12:36 hadoop-2.5.2
  drwxr-xr-x 15 bruce oinstall      4096 Nov 15 2014 hadoop-2.5.2-src
  -rw-r--r--  1 bruce oinstall  15434251 Nov 21 2014 hadoop-2.5.2-src.tar.gz
  -rw-r--r--  1 bruce oinstall 147197492 Nov 21 2014 hadoop-2.5.2.tar.gz

  # Enter the hadoop directory
  [bruce@iRobot hadoop-install]$ cd hadoop
  [bruce@iRobot hadoop]$ ls
  bin  etc  include  lib  libexec  LICENSE.txt  NOTICE.txt  README.txt  sbin  share
  [bruce@iRobot hadoop]$ pwd
  /home/bruce/hadoop-install/hadoop
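
  When a new version arrives later, repointing the symlink is all it takes (hadoop-2.6.0 here is a hypothetical example version; -f replaces the existing link, -n treats the existing link as a plain file):
  [bruce@iRobot hadoop-install]$ ln -sfn hadoop-2.6.0 hadoop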

Note:
Unless stated otherwise, all subsequent steps are run from the hadoop directory: /home/bruce/hadoop-install/hadoop


6. Configure etc/hadoop/hadoop-env.sh:
  [bruce@iRobot hadoop]$ ls -al etc/hadoop/hadoop-env.sh
  -rw-r--r-- 1 bruce oinstall 4224 Oct 22 08:53 etc/hadoop/hadoop-env.sh

  [bruce@iRobot hadoop]$ cat etc/hadoop/hadoop-env.sh

  #...(set the hadoop home)
  export HADOOP_PREFIX=/home/bruce/hadoop-install/hadoop
  export HADOOP_HOME=/home/bruce/hadoop-install/hadoop


  #...(commonly tuned: set JAVA_HOME)
  #export JAVA_HOME=${JAVA_HOME}
  export JAVA_HOME=/u01/app/software/jdk1.7.0_21

  #...(commonly tuned: adjust the Hadoop heap size)
  # The maximum amount of heap to use, in MB. Default is 1000.
  #export HADOOP_HEAPSIZE=
  #export HADOOP_NAMENODE_INIT_HEAPSIZE=""

  #...(commonly tuned: point at the Hadoop native lib location)
  # Extra Java runtime options.  Empty by default.
  export HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true -Djava.library.path=$HADOOP_HOME/lib:$HADOOP_HOME/lib/native"

  #...(commonly tuned: adjust the heap size of Hadoop client commands)
  # The following applies to multiple commands (fs, dfs, fsck, distcp etc)
  #export HADOOP_CLIENT_OPTS="-Xmx512m $HADOOP_CLIENT_OPTS"
  export HADOOP_CLIENT_OPTS="-Xmx1024m $HADOOP_CLIENT_OPTS"
  #HADOOP_JAVA_PLATFORM_OPTS="-XX:-UsePerfData $HADOOP_JAVA_PLATFORM_OPTS"



7. Verify the hadoop command
[bruce@iRobot hadoop]$ bin/hadoop version
Hadoop 2.5.2
Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r cc72e9b000545b86b75a61f4835eb86d57bfafc0
Compiled by jenkins on 2014-11-14T23:45Z
Compiled with protoc 2.5.0
From source with checksum df7537a4faa4658983d397abf4514320
This command was run using /home/bruce/hadoop-install/hadoop-2.5.2/share/hadoop/common/hadoop-common-2.5.2.jar


8. Run Hadoop
(1) Local (Standalone) Mode  -- non-distributed mode
  By default, Hadoop is configured to run in non-distributed mode, as a single Java process. This is handy for debugging.

  The example below:
  (a) creates an input directory under the current path
  (b) copies the config files into input
  (c) runs one of Hadoop's MapReduce examples, writing its results to an output directory under the current path
  (d) inspects the contents of the output directory

  [bruce@iRobot hadoop]$ mkdir input
  [bruce@iRobot hadoop]$ cp etc/hadoop/*.xml input
  [bruce@iRobot hadoop]$ ls input/
  capacity-scheduler.xml  core-site.xml  hadoop-policy.xml  hdfs-site.xml  httpfs-site.xml  kms-acls.xml  kms-site.xml  yarn-site.xml
  [bruce@iRobot hadoop]$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.2.jar grep input output 'dfs[a-z.]+'

  (.......... output omitted .......)
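
  To inspect the result (the matches depend on the copied XML files; with the stock 2.5.2 configs the grep typically yields a single "dfsadmin" hit):
  [bruce@iRobot hadoop]$ cat output/*
  1       dfsadmin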


(2)Pseudo-Distributed Mode

  Configure etc/hadoop/core-site.xml:
    <configuration>
        <property>
            <name>fs.defaultFS</name>
            <value>hdfs://localhost:9000</value>
        </property>
        <property>
            <name>hadoop.tmp.dir</name>
            <value>/uloc/hadoopdata/hadoop-${user.name}/tmp</value>
        </property>
    </configuration>




  Configure etc/hadoop/hdfs-site.xml:
    <configuration>
        <property>
            <name>dfs.replication</name>
            <value>1</value>
        </property>
        <property>
            <name>dfs.namenode.name.dir</name>
            <value>/uloc/hadoopdata/hadoop-${user.name}/name</value>
            <final>true</final>
        </property>
        <property>
            <name>dfs.datanode.data.dir</name>
            <value>/uloc/hadoopdata/hadoop-${user.name}/data</value>
            <final>true</final>
        </property>
    </configuration>



  Set up passwordless ssh

    Check that you can ssh to localhost without a password:
    $ ssh localhost

    If a password is still required, run:
    $ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
    $ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
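
    On many systems sshd (with StrictModes, its default) also requires tight permissions on these files; if ssh localhost still prompts for a password, tighten them:
    $ chmod 0700 ~/.ssh
    $ chmod 0600 ~/.ssh/authorized_keys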

  Run a Hadoop example
    
    The following instructions are to run a MapReduce job locally.  
    Format the filesystem:
      $ bin/hdfs namenode -format
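
      If formatting fails because the name directory cannot be created, pre-create the parent path used in the configs above and make it writable by the user running the daemons (a prep sketch; adjust the owner/group to your setup):
      $ sudo mkdir -p /uloc/hadoopdata
      $ sudo chown bruce:oinstall /uloc/hadoopdata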

    Start NameNode daemon and DataNode daemon:
      $ sbin/start-dfs.sh
    The hadoop daemon log output is written to the $HADOOP_LOG_DIR directory (defaults to $HADOOP_HOME/logs).
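
    You can confirm both daemons are up with jps, the JDK's process lister (pid values below are illustrative):
      $ jps
      12001 NameNode
      12102 DataNode
      12203 SecondaryNameNode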

    Browse the web interface for the NameNode; by default it is available at:
      NameNode - http://localhost:50070/

    Make the HDFS directories required to execute MapReduce jobs:
      $ bin/hdfs dfs -mkdir /user
      $ bin/hdfs dfs -mkdir /user/<username>

    Copy the input files into the distributed filesystem:
      $ bin/hdfs dfs -put etc/hadoop input

    Run some of the examples provided:
      $ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.2.jar grep input output 'dfs[a-z.]+'

    Examine the output files:
    Copy the output files from the distributed filesystem to the local filesystem and examine them:

      $ bin/hdfs dfs -get output output
      $ cat output/*
    or

    View the output files on the distributed filesystem:

      $ bin/hdfs dfs -cat output/*

    When you're done, stop the daemons with:
      $ sbin/stop-dfs.sh



(3)YARN on a Single Node
    (a) Edit the configuration files:


    Configure etc/hadoop/mapred-site.xml:
      <configuration>
        <property>
            <name>mapreduce.framework.name</name>
            <value>yarn</value>
        </property>
      </configuration>




    Configure etc/hadoop/yarn-site.xml:
      <configuration>
        <property>
            <name>yarn.nodemanager.aux-services</name>
            <value>mapreduce_shuffle</value>
        </property>
      </configuration>


    (b) Run
    The following four steps are the same as in Pseudo-Distributed Mode:
    ------------------------------------
    Format the filesystem:
      $ bin/hdfs namenode -format


    Start NameNode daemon and DataNode daemon:
      $ sbin/start-dfs.sh
    The hadoop daemon log output is written to the $HADOOP_LOG_DIR directory (defaults to $HADOOP_HOME/logs).


    Browse the web interface for the NameNode; by default it is available at:
      NameNode - http://localhost:50070/


    Make the HDFS directories required to execute MapReduce jobs:
      $ bin/hdfs dfs -mkdir /user
      $ bin/hdfs dfs -mkdir /user/<username>
    ------------------------------------


 
    Start the ResourceManager and NodeManager daemons:
      $ sbin/start-yarn.sh


    Browse the web interface for the ResourceManager; by default it is available at:
      ResourceManager - http://localhost:8088/
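
    After start-yarn.sh, jps should additionally list the YARN daemons (pids illustrative):
      $ jps
      13001 ResourceManager
      13102 NodeManager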




    Run a MapReduce job:
      Delete any directories left over from a previous run:
        $ bin/hdfs dfs -rm -f -r  input 
        $ bin/hdfs dfs -rm -f -r  output


      Copy the input files into the distributed filesystem:


        $ bin/hdfs dfs -put etc/hadoop input


      Run some of the examples provided:
        $ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.2.jar grep input output 'dfs[a-z.]+'


      View the output files on the distributed filesystem:


        $ bin/hdfs dfs -cat output/*
        
    Stop YARN with:
      $ sbin/stop-yarn.sh
(4)Fully-Distributed Mode
    With only limited resources at hand, the fully-distributed (cluster) setup is skipped for now...
 


9. References
(1) http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SingleCluster.html
(2) http://wiki.apache.org/hadoop/GettingStartedWithHadoop (somewhat dated)
(3) http://blog.163.com/captain_zmc/blog/static/20401258820131015112233418/
(4) http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/NativeLibraries.html
(5) http://www.ibm.com/developerworks/cn/opensource/os-cn-hadoop-yarn/   
(6) http://my.oschina.net/u/216368/blog/344989
(7) Hadoop default configuration references:
core-site.xml:   http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/core-default.xml
hdfs-site.xml:   http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml
mapred-site.xml: http://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/mapred-default.xml
yarn-site.xml:   http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-common/yarn-default.xml



 




