《Hadoop The Definitive Guide》ch09 Setting Up a Hadoop Cluster

I set up a three-node cluster by following the articles below; a rough sketch of the resulting configuration follows the links.

http://yymmiinngg.iteye.com/blog/706699

http://linleran.iteye.com/blog/287993

http://www.cnblogs.com/wayne1017/archive/2007/03/20/678724.html
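
A minimal sketch of the conf/ files those articles walk through, using the master and slave addresses that appear in the start-all.sh output further down. The port numbers 9000/9001 and the heredoc style are placeholders of my own, not values taken from this cluster; the point is that every node carries the same files and they all point at the master, 231.132.236.67.

# conf/core-site.xml -- identical on every node; fs.default.name points at the master
cat > conf/core-site.xml <<'EOF'
<?xml version="1.0"?>
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://231.132.236.67:9000</value>
  </property>
</configuration>
EOF

# conf/mapred-site.xml -- the JobTracker also runs on the master
cat > conf/mapred-site.xml <<'EOF'
<?xml version="1.0"?>
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>231.132.236.67:9001</value>
  </property>
</configuration>
EOF

# conf/masters names the SecondaryNameNode host; conf/slaves lists the DataNode/TaskTracker hosts
echo 231.132.236.67 > conf/masters
printf '231.132.236.68\n231.132.236.182\n' > conf/slaves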

Note that the configuration files on the slave nodes must be identical to those on the master. I did not realize this the first time and simply replaced every IP in the slave configs with each slave's own IP; after starting Hadoop, the NameNode web UI kept showing 100% and listed no "Live Nodes".
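
A quick way to see whether the slaves actually registered with the NameNode, instead of waiting on the web UI, is dfsadmin; a sketch, run on the master after start-all.sh:

# With the broken per-node IPs the report shows 0 datanodes;
# with the shared, master-pointing configuration it should list both slaves.
hadoop dfsadmin -report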

c01s02h01ate1:nomad2 # hadoop namenode -format
12/07/08 05:31:36 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = c01s02h01ate1/231.132.236.67
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 0.20.203.0
STARTUP_MSG:   build = http://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20-security-203 -r 1099333; compiled by 'oom' on Wed May  4 07:57:50 PDT 2011
************************************************************/
12/07/08 05:31:36 INFO util.GSet: VM type       = 64-bit
12/07/08 05:31:36 INFO util.GSet: 2% max memory = 17.77875 MB
12/07/08 05:31:36 INFO util.GSet: capacity      = 2^21 = 2097152 entries
12/07/08 05:31:36 INFO util.GSet: recommended=2097152, actual=2097152
12/07/08 05:31:36 INFO namenode.FSNamesystem: fsOwner=nomad2
12/07/08 05:31:36 INFO namenode.FSNamesystem: supergroup=supergroup
12/07/08 05:31:36 INFO namenode.FSNamesystem: isPermissionEnabled=false
12/07/08 05:31:36 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
12/07/08 05:31:36 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
12/07/08 05:31:36 INFO namenode.NameNode: Caching file names occuring more than 10 times 
12/07/08 05:31:36 INFO common.Storage: Image file of size 113 saved in 0 seconds.
12/07/08 05:31:36 INFO common.Storage: Storage directory /tmp/hadoop-nomad2/dfs/name has been successfully formatted.
12/07/08 05:31:36 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at c01s02h01ate1/231.132.236.67
************************************************************/
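
Note that the image above went to /tmp/hadoop-nomad2/dfs/name, the default location derived from hadoop.tmp.dir. Since /tmp is often wiped on reboot, it is safer to pin the storage directories explicitly; a minimal hdfs-site.xml sketch, where the /local/hadoop/dfs/... paths are an assumption based on the install prefix seen in the logs, not paths used on this cluster:

# conf/hdfs-site.xml -- keep the NameNode image and DataNode blocks out of /tmp
cat > conf/hdfs-site.xml <<'EOF'
<?xml version="1.0"?>
<configuration>
  <property>
    <name>dfs.name.dir</name>
    <value>/local/hadoop/dfs/name</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/local/hadoop/dfs/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
</configuration>
EOF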
c01s02h01ate1:nomad2 # start-all.sh 
starting namenode, logging to /local/hadoop/hadoop-0.20.203.0/bin/../logs/hadoop-nomad2-namenode-c01s02h01ate1.out
231.132.236.182: starting datanode, logging to /local/hadoop/hadoop-0.20.203.0/bin/../logs/hadoop-nomad2-datanode-gemini.out
231.132.236.68: starting datanode, logging to /local/hadoop/hadoop-0.20.203.0/bin/../logs/hadoop-nomad2-datanode-c01s0201ate2.out
231.132.236.67: starting secondarynamenode, logging to /local/hadoop/hadoop-0.20.203.0/bin/../logs/hadoop-nomad2-secondarynamenode-c01s02h01ate1.out
starting jobtracker, logging to /local/hadoop/hadoop-0.20.203.0/bin/../logs/hadoop-nomad2-jobtracker-c01s02h01ate1.out
231.132.236.68: starting tasktracker, logging to /local/hadoop/hadoop-0.20.203.0/bin/../logs/hadoop-nomad2-tasktracker-c01s0201ate2.out
231.132.236.182: starting tasktracker, logging to /local/hadoop/hadoop-0.20.203.0/bin/../logs/hadoop-nomad2-tasktracker-gemini.out
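
Before checking the web pages, a quick sanity check with jps confirms that the right daemons came up on each host; the URLs below are the default 0.20.x web ports.

# On the master (231.132.236.67): expect NameNode, SecondaryNameNode, JobTracker
# On each slave (231.132.236.68, 231.132.236.182): expect DataNode, TaskTracker
jps

# Default web UIs in 0.20.x:
#   NameNode   http://231.132.236.67:50070/
#   JobTracker http://231.132.236.67:50030/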

The web interface looks like this:

[Screenshot: Hadoop web interface]

