Hadoop basic commands

 bin/zkServer.sh start   # start ZooKeeper on every quorum node

 

 air00:

 hdfs zkfc -formatZK   # initialize the HA failover state in ZooKeeper (run once)

 hadoop-daemon.sh start journalnode

 hadoop namenode -format mycluster   # run only on first setup

 hadoop-daemon.sh start namenode

 hadoop-daemon.sh start zkfc

 hadoop-daemon.sh start datanode

 yarn-daemon.sh start resourcemanager

 yarn-daemon.sh start nodemanager

 mr-jobhistory-daemon.sh start historyserver

 air01:

  hadoop-daemon.sh start journalnode

  hadoop-daemon.sh start datanode

  yarn-daemon.sh start nodemanager

 air02:

  hadoop-daemon.sh start journalnode

  hdfs namenode -bootstrapStandby

  hadoop-daemon.sh start namenode

  hadoop-daemon.sh start zkfc

  hadoop-daemon.sh start datanode

  yarn-daemon.sh start resourcemanager

  yarn-daemon.sh start nodemanager
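The per-host sequences above can be driven from air00 over ssh. A minimal dry-run sketch (hostnames air00/air01/air02 are the ones used in these notes; the ssh loop itself is an assumption about your setup — with RUN=echo each command is printed instead of executed):

```shell
#!/bin/sh
# Dry run: RUN=echo prints each command; set RUN= (empty) to execute for real.
RUN=echo

# JournalNodes must be running on all three hosts before the
# namenode is formatted or started.
for h in air00 air01 air02; do
  $RUN ssh "$h" hadoop-daemon.sh start journalnode
done
```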

  

Sample contents of the namenode's current/VERSION file:

namespaceID=284002105

clusterID=CID-2d58d0cc-6eb2-4f0a-b53a-6f6939960491

blockpoolID=BP-462449294-192.168.119.135-1415001575870

layoutVersion=-55
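These fields live in ${dfs.namenode.name.dir}/current/VERSION, and each datanode keeps its own copy under its data directory; after a namenode is reformatted, a datanode whose stored clusterID no longer matches will refuse to start. A small sketch comparing the two (the file paths are illustrative stand-ins, not the directories from these notes; the sample clusterID is the one listed above):

```shell
#!/bin/sh
# Compare clusterID between a namenode and a datanode VERSION file.
# Substitute your real dfs.namenode.name.dir / dfs.datanode.data.dir paths.
nn_version=/tmp/nn_VERSION
dn_version=/tmp/dn_VERSION

# Sample files standing in for the real ones:
printf 'clusterID=CID-2d58d0cc-6eb2-4f0a-b53a-6f6939960491\n' > "$nn_version"
printf 'clusterID=CID-2d58d0cc-6eb2-4f0a-b53a-6f6939960491\n' > "$dn_version"

nn_id=$(grep '^clusterID=' "$nn_version" | cut -d= -f2)
dn_id=$(grep '^clusterID=' "$dn_version" | cut -d= -f2)

if [ "$nn_id" = "$dn_id" ]; then
  echo "clusterID match: $nn_id"
else
  echo "MISMATCH: namenode=$nn_id datanode=$dn_id" >&2
fi
```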




Format the namenode

# hdfs namenode -format

 # hadoop namenode -format   (deprecated alias of the above)

Start the HDFS daemons

# hadoop-daemon.sh start namenode

# hadoop-daemon.sh start datanode

Or start both at once:

# start-dfs.sh

Start the YARN daemons

# yarn-daemon.sh start resourcemanager

# yarn-daemon.sh start nodemanager

Or start both at once:

# start-yarn.sh

Check that the daemons are running

# jps


2539 NameNode
2744 NodeManager
3075 Jps
3030 DataNode
2691 ResourceManager
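The check above can be scripted: grep the jps output for each expected daemon name. A sketch using the daemon names from the listing above (here fed a saved sample capture rather than a live jps call):

```shell
#!/bin/sh
# Verify that each expected daemon name appears in captured `jps` output.
jps_out="2539 NameNode
2744 NodeManager
3075 Jps
3030 DataNode
2691 ResourceManager"   # in real use: jps_out=$(jps)

for d in NameNode DataNode ResourceManager NodeManager; do
  if echo "$jps_out" | grep -qw "$d"; then
    echo "OK: $d"
  else
    echo "MISSING: $d" >&2
  fi
done
```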

Browse the web UIs

Open localhost:8088 for the ResourceManager web UI (the NameNode UI is on localhost:50070 in Hadoop 2.x).

hdfs dfsadmin -refreshNodes   # refresh the set of datanodes

 hdfs dfsadmin -report   # report datanode health and usage



Getting the name of the file backing the current input split, inside the map method:

// requires org.apache.hadoop.mapreduce.lib.input.FileSplit
FileSplit inputSplit = (FileSplit) context.getInputSplit();
String filename = inputSplit.getPath().getName();

Change the hostname on CentOS/RHEL:

vim /etc/sysconfig/network


NETWORKING=yes

HOSTNAME=zw_76_42

To change the hostname on Fedora 19, edit instead:

/etc/hostname
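On systemd-based Fedora the change can also be applied without a reboot; a sketch (the hostname value is a placeholder example, not one from these notes; both commands require root):

```shell
# Example hostname; replace with your own.
echo newhost > /etc/hostname        # takes effect at next boot
hostnamectl set-hostname newhost    # systemd tool: applies immediately
```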




