Pseudo-distributed deployment of Hadoop 2

http://blog.sina.com.cn/s/blog_60cc026f0101hpsb.html

With the build steps from the previous articles, we can already compile and package a Hadoop distribution tailored to this machine; the output lives in hadoop-dist/target/hadoop-2.2.0.

Log in as the root user.

The configuration files live in the /usr/local/hadoop-dist/target/hadoop-2.2.0/etc/hadoop directory.

Edit hadoop-env.sh and set export JAVA_HOME=/usr/local/jdk1.7.0_45
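The JAVA_HOME edit can also be done non-interactively. A minimal sketch, run here against a throwaway stub of hadoop-env.sh so it works anywhere; on the real node you would point CONF at the etc/hadoop directory of the build instead:

```shell
# Sketch: set JAVA_HOME in hadoop-env.sh without opening an editor.
# CONF is a temp dir with a stub file for demonstration; on the real node use
# CONF=/usr/local/hadoop-dist/target/hadoop-2.2.0/etc/hadoop instead.
CONF=$(mktemp -d)
echo 'export JAVA_HOME=${JAVA_HOME}' > "$CONF/hadoop-env.sh"

# Replace whatever JAVA_HOME line is present with the JDK path from this article.
sed -i 's|^export JAVA_HOME=.*|export JAVA_HOME=/usr/local/jdk1.7.0_45|' "$CONF/hadoop-env.sh"

grep '^export JAVA_HOME=' "$CONF/hadoop-env.sh"
```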

Edit core-site.xml with the following content:

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/hadoop-dist/target/hadoop-2.2.0/tmp/hadoop-${user.name}</value>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://hadoop10:9000</value>
  </property>
</configuration>
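The file can also be written from the shell with a heredoc. A hedged sketch, writing into a temp directory here so it can run anywhere; on the real machine the target is /usr/local/hadoop-dist/target/hadoop-2.2.0/etc/hadoop/core-site.xml:

```shell
# Sketch: generate core-site.xml from the values above.
# Writes to a temp dir for demonstration; on the real node point CONF at the
# etc/hadoop directory of the Hadoop build.
CONF=$(mktemp -d)
cat > "$CONF/core-site.xml" <<'EOF'
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/hadoop-dist/target/hadoop-2.2.0/tmp/hadoop-${user.name}</value>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://hadoop10:9000</value>
  </property>
</configuration>
EOF

# The quoted 'EOF' keeps ${user.name} literal so Hadoop expands it at runtime.
grep '<name>' "$CONF/core-site.xml"
```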

Create the directory referenced by hadoop.tmp.dir above: mkdir -p /usr/local/hadoop-dist/target/hadoop-2.2.0/tmp

Rename the MapReduce template file: mv mapred-site.xml.template mapred-site.xml

Edit mapred-site.xml with the following content:

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>hadoop10:9001</value>
  </property>
</configuration>

Edit hdfs-site.xml with the following content (replication is set to 1 because a pseudo-distributed cluster has only a single DataNode):

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
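Since both of these files hold a single property, a small helper can write them. A hypothetical sketch (write_site is not part of Hadoop, just an illustration), demonstrated in a temp directory; on the real node CONF would be the etc/hadoop directory:

```shell
# Sketch: a tiny helper that writes a one-property *-site.xml file,
# used here for the two files above. write_site is a made-up name for
# illustration, not a Hadoop tool.
CONF=$(mktemp -d)
write_site() {  # usage: write_site <file> <property-name> <property-value>
  printf '<configuration>\n  <property>\n    <name>%s</name>\n    <value>%s</value>\n  </property>\n</configuration>\n' \
    "$2" "$3" > "$CONF/$1"
}

write_site mapred-site.xml mapred.job.tracker hadoop10:9001
write_site hdfs-site.xml   dfs.replication    1

grep '<value>' "$CONF/hdfs-site.xml"
```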

Run the format command; it produces output like the following:

[root@hadoop10 hadoop-2.2.0]# bin/hdfs namenode -format
13/12/28 11:38:03 INFO namenode.NameNode: STARTUP_MSG:

13/12/28 11:38:03 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
Formatting using clusterid: CID-d8783f2a-ce67-43f6-92bd-f9a452f4e78b
13/12/28 11:38:05 INFO namenode.HostFileManager: read includes:
HostSet(
)
13/12/28 11:38:05 INFO namenode.HostFileManager: read excludes:
HostSet(
)
13/12/28 11:38:05 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
13/12/28 11:38:05 INFO util.GSet: Computing capacity for map BlocksMap
13/12/28 11:38:05 INFO util.GSet: VM type = 64-bit
13/12/28 11:38:05 INFO util.GSet: 2.0% max memory = 966.7 MB
13/12/28 11:38:05 INFO util.GSet: capacity = 2^21 = 2097152 entries
13/12/28 11:38:05 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
13/12/28 11:38:05 INFO blockmanagement.BlockManager: defaultReplication = 1
13/12/28 11:38:05 INFO blockmanagement.BlockManager: maxReplication = 512
13/12/28 11:38:05 INFO blockmanagement.BlockManager: minReplication = 1
13/12/28 11:38:05 INFO blockmanagement.BlockManager: maxReplicationStreams = 2
13/12/28 11:38:05 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks = false
13/12/28 11:38:05 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
13/12/28 11:38:05 INFO blockmanagement.BlockManager: encryptDataTransfer = false
13/12/28 11:38:05 INFO namenode.FSNamesystem: fsOwner = root (auth:SIMPLE)
13/12/28 11:38:05 INFO namenode.FSNamesystem: supergroup = supergroup
13/12/28 11:38:05 INFO namenode.FSNamesystem: isPermissionEnabled = true
13/12/28 11:38:05 INFO namenode.FSNamesystem: HA Enabled: false
13/12/28 11:38:05 INFO namenode.FSNamesystem: Append Enabled: true
13/12/28 11:38:06 INFO util.GSet: Computing capacity for map INodeMap
13/12/28 11:38:06 INFO util.GSet: VM type = 64-bit
13/12/28 11:38:06 INFO util.GSet: 1.0% max memory = 966.7 MB
13/12/28 11:38:06 INFO util.GSet: capacity = 2^20 = 1048576 entries
13/12/28 11:38:07 INFO namenode.NameNode: Caching file names occuring more than 10 times
13/12/28 11:38:07 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
13/12/28 11:38:07 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
13/12/28 11:38:07 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
13/12/28 11:38:07 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
13/12/28 11:38:07 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
13/12/28 11:38:07 INFO util.GSet: Computing capacity for map Namenode Retry Cache
13/12/28 11:38:07 INFO util.GSet: VM type = 64-bit
13/12/28 11:38:07 INFO util.GSet: 0.029999999329447746% max memory = 966.7 MB
13/12/28 11:38:07 INFO util.GSet: capacity = 2^15 = 32768 entries
13/12/28 11:38:07 INFO common.Storage: Storage directory /usr/local/hadoop-dist/target/hadoop-2.2.0/tmp/hadoop-root/dfs/name has been successfully formatted.
13/12/28 11:38:07 INFO namenode.FSImage: Saving image file /usr/local/hadoop-dist/target/hadoop-2.2.0/tmp/hadoop-root/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
13/12/28 11:38:07 INFO namenode.FSImage: Image file /usr/local/hadoop-dist/target/hadoop-2.2.0/tmp/hadoop-root/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 196 bytes saved in 0 seconds.
13/12/28 11:38:07 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
13/12/28 11:38:07 INFO util.ExitUtil: Exiting with status 0
13/12/28 11:38:07 INFO namenode.NameNode: SHUTDOWN_MSG:

[root@hadoop10 hadoop-2.2.0]#

Start HDFS; the command and its output are shown below:

[root@hadoop10 hadoop-2.2.0]# sbin/start-dfs.sh
Starting namenodes on [hadoop10]
hadoop10: starting namenode, logging to /usr/local/hadoop-dist/target/hadoop-2.2.0/logs/hadoop-root-namenode-hadoop10.out
localhost: starting datanode, logging to /usr/local/hadoop-dist/target/hadoop-2.2.0/logs/hadoop-root-datanode-hadoop10.out
Starting secondary namenodes [0.0.0.0]
The authenticity of host '0.0.0.0 (0.0.0.0)' can't be established.
RSA key fingerprint is 3d:56:ae:31:73:66:9c:21:02:02:bc:5a:6b:bd:bf:75.
Are you sure you want to continue connecting (yes/no)? yes
0.0.0.0: Warning: Permanently added '0.0.0.0' (RSA) to the list of known hosts.
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop-dist/target/hadoop-2.2.0/logs/hadoop-root-secondarynamenode-hadoop10.out
[root@hadoop10 hadoop-2.2.0]# jps
5256 SecondaryNameNode
5015 NameNode
5123 DataNode
5352 Jps
[root@hadoop10 hadoop-2.2.0]#

Start YARN; the command and its output are shown below:

[root@hadoop10 hadoop-2.2.0]# sbin/start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop-dist/target/hadoop-2.2.0/logs/yarn-root-resourcemanager-hadoop10.out
localhost: starting nodemanager, logging to /usr/local/hadoop-dist/target/hadoop-2.2.0/logs/yarn-root-nodemanager-hadoop10.out
[root@hadoop10 hadoop-2.2.0]# jps
5496 NodeManager
5524 Jps
5256 SecondaryNameNode
5015 NameNode
5123 DataNode
5410 ResourceManager
[root@hadoop10 hadoop-2.2.0]#

Seeing these five Java processes means the startup succeeded. You can also verify through a browser: by default the NameNode web UI listens on port 50070 and the ResourceManager UI on port 8088.
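The five-process check can be scripted. A sketch; the jps output is stubbed with the listing above so the check runs anywhere, while on a real node you would replace the assignment with JPS_OUT=$(jps):

```shell
# Sketch: confirm all five expected daemons appear in `jps` output.
# Stubbed with the listing from this article for demonstration; on a real
# node replace the assignment with: JPS_OUT=$(jps)
JPS_OUT='5496 NodeManager
5256 SecondaryNameNode
5015 NameNode
5123 DataNode
5410 ResourceManager'

MISSING=""
for d in NameNode DataNode SecondaryNameNode ResourceManager NodeManager; do
  # -w matches whole words, so "NameNode" does not match inside "SecondaryNameNode".
  echo "$JPS_OUT" | grep -qw "$d" || MISSING="$MISSING $d"
done

if [ -z "$MISSING" ]; then echo "all daemons running"; else echo "missing:$MISSING"; fi
```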

The above only walks through configuring and starting Hadoop 2 in pseudo-distributed mode, without explaining each configuration file and its parameters; those will be covered in follow-up articles.
