Configuring Hadoop
1: Download hadoop-1.2.1.tar.gz
Create a working directory under /home/jifeng: mkdir hadoop
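For reference, step 1 can be done entirely from the shell; a minimal sketch, where the download URL is one possible Apache archive mirror and not necessarily the one used for this post:

cd /home/jifeng
mkdir hadoop
cd hadoop
# fetch the 1.2.1 release tarball (any mirror carrying hadoop-1.2.1.tar.gz works)
wget https://archive.apache.org/dist/hadoop/core/hadoop-1.2.1/hadoop-1.2.1.tar.gz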
2: Extract the archive
[jifeng@jifeng01 hadoop]$ ls
hadoop-1.2.1.tar.gz
[jifeng@jifeng01 hadoop]$ tar zxf hadoop-1.2.1.tar.gz
[jifeng@jifeng01 hadoop]$ ls
hadoop-1.2.1  hadoop-1.2.1.tar.gz
[jifeng@jifeng01 hadoop]$
3: Modify conf/hadoop-env.sh

[jifeng@jifeng01 hadoop]$ cd hadoop-1.2.1
[jifeng@jifeng01 hadoop-1.2.1]$ ls
bin          hadoop-ant-1.2.1.jar          ivy          sbin
build.xml    hadoop-client-1.2.1.jar       ivy.xml      share
c++          hadoop-core-1.2.1.jar         lib          src
CHANGES.txt  hadoop-examples-1.2.1.jar     libexec      webapps
conf         hadoop-minicluster-1.2.1.jar  LICENSE.txt
contrib      hadoop-test-1.2.1.jar         NOTICE.txt
docs         hadoop-tools-1.2.1.jar        README.txt
[jifeng@jifeng01 hadoop-1.2.1]$ cd conf
[jifeng@jifeng01 conf]$ ls
capacity-scheduler.xml      hadoop-policy.xml      slaves
configuration.xsl           hdfs-site.xml          ssl-client.xml.example
core-site.xml               log4j.properties       ssl-server.xml.example
fair-scheduler.xml          mapred-queue-acls.xml  taskcontroller.cfg
hadoop-env.sh               mapred-site.xml        task-log4j.properties
hadoop-metrics2.properties  masters
[jifeng@jifeng01 conf]$ vi hadoop-env.sh
# Set Hadoop-specific environment variables here.

# The only required environment variable is JAVA_HOME.  All others are
# optional.  When running a distributed configuration it is best to
# set JAVA_HOME in this file, so that it is correctly defined on
# remote nodes.

# The java implementation to use.  Required.
export JAVA_HOME=/home/jifeng/jdk1.7.0_45

# Extra Java CLASSPATH elements.  Optional.
# export HADOOP_CLASSPATH=

# The maximum amount of heap to use, in MB. Default is 1000.
# export HADOOP_HEAPSIZE=2000

# Extra Java runtime options.  Empty by default.
# export HADOOP_OPTS=-server

# Command specific options appended to HADOOP_OPTS when specified
export HADOOP_NAMENODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_NAMENODE_OPTS"
"hadoop-env.sh" 57L, 2436C written
[jifeng@jifeng01 conf]$ cat hadoop-env.sh
Change the commented-out "# export JAVA_HOME" line to "export JAVA_HOME=/home/jifeng/jdk1.7.0_45".
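The same edit can be made without an editor; a sketch, assuming the stock hadoop-env.sh shipped with 1.2.1 (where the JAVA_HOME line is still commented out):

cd /home/jifeng/hadoop/hadoop-1.2.1/conf
# uncomment JAVA_HOME and point it at the local JDK
sed -i 's|^# export JAVA_HOME=.*|export JAVA_HOME=/home/jifeng/jdk1.7.0_45|' hadoop-env.sh
# confirm the change took effect
grep '^export JAVA_HOME' hadoop-env.sh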
4: Modify core-site.xml
Create a tmp directory under the hadoop directory:
[jifeng@jifeng01 hadoop]$ mkdir tmp
[jifeng@jifeng01 conf]$ vi core-site.xml
After editing, the file looks like this:
[jifeng@jifeng01 conf]$ cat core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://jifeng01:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/jifeng/hadoop/tmp</value>
    </property>
</configuration>
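Here fs.default.name is the URI clients use to reach the NameNode (host jifeng01, RPC port 9000), and hadoop.tmp.dir is the base directory under which HDFS keeps its data; it points at the tmp directory created above. A quick sanity check, assuming xmllint (from libxml2) is installed and jifeng01 is resolvable (e.g. via /etc/hosts):

# check the XML is well-formed and the NameNode host resolves
xmllint --noout core-site.xml && echo "core-site.xml is well-formed"
ping -c 1 jifeng01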
5: Modify hdfs-site.xml

After editing, the file looks like this:
[jifeng@jifeng01 conf]$ cat hdfs-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
        <description></description>
    </property>
</configuration>
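dfs.replication sets how many copies of each HDFS block are kept; 1 means no redundancy, which is acceptable for a test cluster (with two DataNodes a value of 2 would also work). Once HDFS is up, the actual replication of stored blocks can be inspected with fsck; a sketch:

# report files, blocks and their replication across the cluster
bin/hadoop fsck / -files -blocks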
6: Modify mapred-site.xml

After editing, the file looks like this:
[jifeng@jifeng01 conf]$ cat mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>mapred.job.tracker</name>
        <value>jifeng01:9001</value>
        <description>NameNode</description>
    </property>
</configuration>
[jifeng@jifeng01 conf]$
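Despite the description text reading "NameNode", mapred.job.tracker is the host and RPC port of the JobTracker, which in this setup also runs on jifeng01. After the cluster is started, one way to confirm the TaskTrackers have registered is to ask the JobTracker; a sketch using the 1.x job client:

# list TaskTrackers currently registered with the JobTracker
bin/hadoop job -list-active-trackers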
7: Edit masters and slaves

After editing, the files look like this:
[jifeng@jifeng01 conf]$ cat masters
jifeng01
[jifeng@jifeng01 conf]$ cat slaves
jifeng02
jifeng03
[jifeng@jifeng01 conf]$
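Note that masters, despite its name, lists the host where the start scripts launch the SecondaryNameNode (the NameNode and JobTracker simply run wherever start-all.sh is invoked), while slaves lists the hosts that each get a DataNode and a TaskTracker. The two files can also be written without an editor; a sketch:

echo jifeng01 > masters
printf '%s\n' jifeng02 jifeng03 > slaves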
8: Copy the Hadoop directory to the other nodes

[jifeng@jifeng01 hadoop]$ scp -r ./hadoop-1.2.1 jifeng02:/home/jifeng/hadoop
[jifeng@jifeng01 hadoop]$ scp -r ./hadoop-1.2.1 jifeng03:/home/jifeng/hadoop
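A quick spot check that the full tree arrived on each slave; a sketch, assuming SSH access as user jifeng:

ssh jifeng02 "ls /home/jifeng/hadoop/hadoop-1.2.1/conf"
ssh jifeng03 "ls /home/jifeng/hadoop/hadoop-1.2.1/conf"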
9: Format the distributed file system
[jifeng@jifeng01 hadoop-1.2.1]$ bin/hadoop namenode -format
14/07/24 10:29:43 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = jifeng01/10.3.7.214
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 1.2.1
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1503152; compiled by 'mattf' on Mon Jul 22 15:23:09 PDT 2013
STARTUP_MSG:   java = 1.7.0_45
************************************************************/
14/07/24 10:29:43 INFO util.GSet: Computing capacity for map BlocksMap
14/07/24 10:29:43 INFO util.GSet: VM type       = 64-bit
14/07/24 10:29:43 INFO util.GSet: 2.0% max memory = 932184064
14/07/24 10:29:43 INFO util.GSet: capacity      = 2^21 = 2097152 entries
14/07/24 10:29:43 INFO util.GSet: recommended=2097152, actual=2097152
14/07/24 10:29:43 INFO namenode.FSNamesystem: fsOwner=jifeng
14/07/24 10:29:43 INFO namenode.FSNamesystem: supergroup=supergroup
14/07/24 10:29:43 INFO namenode.FSNamesystem: isPermissionEnabled=true
14/07/24 10:29:43 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
14/07/24 10:29:43 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
14/07/24 10:29:43 INFO namenode.FSEditLog: dfs.namenode.edits.toleration.length = 0
14/07/24 10:29:43 INFO namenode.NameNode: Caching file names occuring more than 10 times
14/07/24 10:29:43 INFO common.Storage: Image file /home/jifeng/hadoop/tmp/dfs/name/current/fsimage of size 112 bytes saved in 0 seconds.
14/07/24 10:29:44 INFO namenode.FSEditLog: closing edit log: position=4, editlog=/home/jifeng/hadoop/tmp/dfs/name/current/edits
14/07/24 10:29:44 INFO namenode.FSEditLog: close success: truncate to 4, editlog=/home/jifeng/hadoop/tmp/dfs/name/current/edits
14/07/24 10:29:44 INFO common.Storage: Storage directory /home/jifeng/hadoop/tmp/dfs/name has been successfully formatted.
14/07/24 10:29:44 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at jifeng01/10.3.7.214
************************************************************/
[jifeng@jifeng01 hadoop-1.2.1]$
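The line to look for is "Storage directory ... has been successfully formatted". The new metadata lives under the hadoop.tmp.dir configured earlier and can be inspected directly; the file list shown in the comment is what 1.x typically creates and may vary:

ls /home/jifeng/hadoop/tmp/dfs/name/current
# typically: VERSION  edits  fsimage  fstime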
10: Start the cluster

[jifeng@jifeng01 hadoop-1.2.1]$ bin/start-all.sh
starting namenode, logging to /home/jifeng/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-jifeng-namenode-jifeng01.out
jifeng03: starting datanode, logging to /home/jifeng/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-jifeng-datanode-jifeng03.out
jifeng02: starting datanode, logging to /home/jifeng/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-jifeng-datanode-jifeng02.out
The authenticity of host 'jifeng01 (10.3.7.214)' can't be established.
RSA key fingerprint is a8:9d:34:63:fa:c2:47:4f:81:10:94:fa:4b:ba:08:55.
Are you sure you want to continue connecting (yes/no)? yes
jifeng01: Warning: Permanently added 'jifeng01,10.3.7.214' (RSA) to the list of known hosts.
jifeng@jifeng01's password:
jifeng@jifeng01's password: jifeng01: Permission denied, please try again.
jifeng01: starting secondarynamenode, logging to /home/jifeng/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-jifeng-secondarynamenode-jifeng01.out
starting jobtracker, logging to /home/jifeng/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-jifeng-jobtracker-jifeng01.out
jifeng03: starting tasktracker, logging to /home/jifeng/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-jifeng-tasktracker-jifeng03.out
jifeng02: starting tasktracker, logging to /home/jifeng/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-jifeng-tasktracker-jifeng02.out
[jifeng@jifeng01 hadoop-1.2.1]$
A password still has to be entered here.
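start-all.sh reaches every host in masters and slaves over SSH, including jifeng01 itself, which is why it prompts above. A minimal sketch of setting up passwordless login so the prompts go away, run as user jifeng on jifeng01 and assuming OpenSSH with ssh-copy-id available:

mkdir -p ~/.ssh && chmod 700 ~/.ssh
ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys   # lets jifeng01 ssh to itself
chmod 600 ~/.ssh/authorized_keys
ssh-copy-id jifeng02
ssh-copy-id jifeng03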
11: Check the daemon processes
[jifeng@jifeng01 hadoop-1.2.1]$ jps
4539 JobTracker
4454 SecondaryNameNode
4269 NameNode
4667 Jps
[jifeng@jifeng01 hadoop-1.2.1]$
[jifeng@jifeng02 hadoop]$ jps
2734 TaskTracker
2815 Jps
2647 DataNode
[jifeng@jifeng02 hadoop]$
[jifeng@jifeng03 hadoop]$ jps
4070 Jps
3878 DataNode
3993 TaskTracker
[jifeng@jifeng03 hadoop]$
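With all the expected daemons showing up in jps on each node, the cluster can also be checked from the master; a sketch using the standard 1.x admin command and the default web UI ports:

bin/hadoop dfsadmin -report    # should report 2 live datanodes
# NameNode web UI:   http://jifeng01:50070/
# JobTracker web UI: http://jifeng01:50030/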