Problems encountered while setting up a Hadoop cluster environment on Docker

   After Hadoop's configuration parameters were set up, running the hadoop command produced the following output:
[root@hadoop0 hadoop]# hadoop
/usr/app/hadoop/bin/hadoop: line 20: which: command not found
dirname: missing operand
Try 'dirname --help' for more information.
/usr/app/hadoop/bin/hadoop: line 27: /usr/app/hadoop/../libexec/hadoop-config.sh: No such file or directory
Usage: hadoop [--config confdir] [COMMAND | CLASSNAME]
  CLASSNAME            run the class named CLASSNAME
 or
  where COMMAND is one of:
  fs                   run a generic filesystem user client
  version              print the version
  jar <jar>            run a jar file
                       note: please use "yarn jar" to launch
                             YARN applications, not this command.
  checknative [-a|-h]  check native hadoop and compression libraries availability
  distcp <srcurl> <desturl> copy file or directories recursively
  archive -archiveName NAME -p <parent path> <src>* <dest> create a hadoop archive
  classpath            prints the class path needed to get the
                       Hadoop jar and the required libraries
  credential           interact with credential providers
  daemonlog            get/set the log level for each daemon
  trace                view and modify Hadoop tracing settings

Most commands print help when invoked w/o parameters.
 The errors all stem from the missing which command: bin/hadoop uses which to locate itself, so dirname receives no operand and the libexec path cannot be resolved. Installing which fixes it:
[root@hadoop0 bin]# yum install which
[root@hadoop0 bin]# hadoop
Usage: hadoop [--config confdir] [COMMAND | CLASSNAME]
  ...
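Missing base utilities like this are common in minimal Docker images. A small pre-flight check (a sketch of my own, not part of the Hadoop distribution) can surface them before the wrapper script fails with a cryptic error:

```shell
#!/bin/sh
# Pre-flight sketch: bin/hadoop relies on which and dirname; report
# anything missing instead of letting the script fail mid-way.
missing=""
for tool in which dirname; do
  command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
done
if [ -n "$missing" ]; then
  echo "missing:$missing"
else
  echo "all required tools present"
fi
```

On CentOS-based images the fix for a missing tool is yum install, as shown above; Debian-based images would use apt-get instead.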
 Check a container's IP address from the Docker host:
[root@localhost Desktop]# docker inspect --format='{{.NetworkSettings.IPAddress}}' 07742212ab6d
172.17.0.2
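With the IPs in hand, the nodes still need to resolve each other by hostname. A sketch of building the /etc/hosts entries — only 172.17.0.2 for hadoop0 comes from the docker inspect output above; the hadoop1/hadoop2 addresses are assumptions for a three-node cluster, so substitute whatever docker inspect reports:

```shell
#!/bin/sh
# Sketch: hostname-to-IP lines for the cluster nodes.
# hadoop1/hadoop2 addresses are assumed for illustration.
entries="172.17.0.2 hadoop0
172.17.0.3 hadoop1
172.17.0.4 hadoop2"
# Inside each container these lines would be appended to /etc/hosts;
# here they are just printed.
printf '%s\n' "$entries"
```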
 Format the NameNode (as the DEPRECATED warning below notes, hdfs namenode -format is the preferred form of this command):
[root@hadoop0 bin]# hadoop namenode -format
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

16/08/16 03:53:02 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = hadoop0/172.17.0.2
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.7.2
STARTUP_MSG:   classpath = ........
STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r b165c4fe8a74265c792ce23f546c64604acf0e41; compiled by 'jenkins' on 2016-01-26T00:08Z
STARTUP_MSG:   java = 1.8.0_102
************************************************************/
16/08/16 03:53:02 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
16/08/16 03:53:02 INFO namenode.NameNode: createNameNode [-format]
Formatting using clusterid: CID-018771b1-b7d6-4c6a-ada1-5e16b5900e90
16/08/16 03:53:03 INFO namenode.FSNamesystem: No KeyProvider found.
16/08/16 03:53:03 INFO namenode.FSNamesystem: fsLock is fair:true
16/08/16 03:53:03 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
16/08/16 03:53:03 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
16/08/16 03:53:03 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
16/08/16 03:53:03 INFO blockmanagement.BlockManager: The block deletion will start around 2016 Aug 16 03:53:03
16/08/16 03:53:03 INFO util.GSet: Computing capacity for map BlocksMap
16/08/16 03:53:03 INFO util.GSet: VM type       = 64-bit
16/08/16 03:53:03 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
16/08/16 03:53:03 INFO util.GSet: capacity      = 2^21 = 2097152 entries
16/08/16 03:53:03 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
16/08/16 03:53:03 INFO blockmanagement.BlockManager: defaultReplication         = 2
16/08/16 03:53:03 INFO blockmanagement.BlockManager: maxReplication             = 512
16/08/16 03:53:03 INFO blockmanagement.BlockManager: minReplication             = 1
16/08/16 03:53:03 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
16/08/16 03:53:03 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
16/08/16 03:53:03 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
16/08/16 03:53:03 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
16/08/16 03:53:03 INFO namenode.FSNamesystem: fsOwner             = root (auth:SIMPLE)
16/08/16 03:53:03 INFO namenode.FSNamesystem: supergroup          = supergroup
16/08/16 03:53:03 INFO namenode.FSNamesystem: isPermissionEnabled = true
16/08/16 03:53:03 INFO namenode.FSNamesystem: HA Enabled: false
16/08/16 03:53:03 INFO namenode.FSNamesystem: Append Enabled: true
16/08/16 03:53:03 INFO util.GSet: Computing capacity for map INodeMap
16/08/16 03:53:03 INFO util.GSet: VM type       = 64-bit
16/08/16 03:53:03 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
16/08/16 03:53:03 INFO util.GSet: capacity      = 2^20 = 1048576 entries
16/08/16 03:53:03 INFO namenode.FSDirectory: ACLs enabled? false
16/08/16 03:53:03 INFO namenode.FSDirectory: XAttrs enabled? true
16/08/16 03:53:03 INFO namenode.FSDirectory: Maximum size of an xattr: 16384
16/08/16 03:53:03 INFO namenode.NameNode: Caching file names occuring more than 10 times
16/08/16 03:53:03 INFO util.GSet: Computing capacity for map cachedBlocks
16/08/16 03:53:03 INFO util.GSet: VM type       = 64-bit
16/08/16 03:53:03 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
16/08/16 03:53:03 INFO util.GSet: capacity      = 2^18 = 262144 entries
16/08/16 03:53:03 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
16/08/16 03:53:03 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
16/08/16 03:53:03 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
16/08/16 03:53:03 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
16/08/16 03:53:03 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
16/08/16 03:53:03 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
16/08/16 03:53:03 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
16/08/16 03:53:03 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
16/08/16 03:53:03 INFO util.GSet: Computing capacity for map NameNodeRetryCache
16/08/16 03:53:03 INFO util.GSet: VM type       = 64-bit
16/08/16 03:53:03 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
16/08/16 03:53:03 INFO util.GSet: capacity      = 2^15 = 32768 entries
16/08/16 03:53:04 INFO namenode.FSImage: Allocated new BlockPoolId: BP-11472095-172.17.0.2-1471319583999
16/08/16 03:53:04 INFO common.Storage: Storage directory /usr/app/hadoop/tmp/dfs/name has been successfully formatted.
16/08/16 03:53:04 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
16/08/16 03:53:04 INFO util.ExitUtil: Exiting with status 0
16/08/16 03:53:04 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop0/172.17.0.2
************************************************************/
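The two lines that matter in all of that output are "has been successfully formatted" and "Exiting with status 0". In a scripted setup you can grep the captured log for them — a sketch under the assumption that the format command's output was redirected to a file (the stand-in log content below is copied from the output above so the sketch runs on its own):

```shell
#!/bin/sh
# Sketch: verify a NameNode format run from its captured log.
# In practice the log would come from: hdfs namenode -format > format.log 2>&1
log=format.log
# Stand-in log content, taken from the run above, so the sketch is self-contained:
printf '%s\n' \
  "INFO common.Storage: Storage directory /usr/app/hadoop/tmp/dfs/name has been successfully formatted." \
  "INFO util.ExitUtil: Exiting with status 0" > "$log"
if grep -q "successfully formatted" "$log" && grep -q "Exiting with status 0" "$log"; then
  result=ok
  echo "namenode format succeeded"
else
  result=fail
  echo "namenode format FAILED" >&2
fi
rm -f "$log"
```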
