Notes on Common Hadoop Operations

# hdfs dfs  // equivalent to hadoop fs
ubuntu@s0:/soft/hadoop/logs$ hdfs dfs -ls -R /

ubuntu@s0:/soft/hadoop/logs$ hdfs dfs -mkdir -p /user/ubuntu/hadoop
ubuntu@s0:/soft/hadoop/logs$ hdfs dfs -ls -R /
drwxr-xr-x   - ubuntu supergroup          0 2018-10-07 21:27 /user
drwxr-xr-x   - ubuntu supergroup          0 2018-10-07 21:27 /user/ubuntu
drwxr-xr-x   - ubuntu supergroup          0 2018-10-07 21:27 /user/ubuntu/hadoop

# View the help for the -put command
ubuntu@s0:/soft/hadoop/logs$ hdfs dfs -help put
-put [-f] [-p] [-l] <localsrc> ... <dst> :
  Copy files from the local file system into fs. Copying fails if the file already
  exists, unless the -f flag is given.
  Flags:
                                                                       
  -p  Preserves access and modification times, ownership and the mode. 
  -f  Overwrites the destination if it already exists.                 
  -l  Allow DataNode to lazily persist the file to disk. Forces        
         replication factor of 1. This flag will result in reduced
         durability. Use with care.

# View the help for the -copyFromLocal command
ubuntu@s0:/soft/hadoop/logs$ hdfs dfs -help copyFromLocal
-copyFromLocal [-f] [-p] [-l] <localsrc> ... <dst> :
  Identical to the -put command.

# Upload the local file index.html to the HDFS directory /user/ubuntu/hadoop
ubuntu@s0:~$ hdfs dfs -put index.html /user/ubuntu/hadoop

# Download it to the local file system as a.html
ubuntu@s0:~$ hdfs dfs -get /user/ubuntu/hadoop/index.html a.html

# Delete the directory recursively, without prompting
ubuntu@s0:~$ hdfs dfs -rm -r -f /user/ubuntu/hadoop
18/10/07 21:44:00 INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 0 minutes, Emptier interval = 0 minutes.
Deleted /user/ubuntu/hadoop
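
For reference, the same mkdir / put / get / delete operations can also be driven from Java through the org.apache.hadoop.fs.FileSystem API. This is only a minimal sketch, assuming the Hadoop client jars are on the classpath and that fs.defaultFS in core-site.xml points at this cluster; the class name is illustrative. Note that FileSystem.delete() bypasses the trash, unlike the shell's -rm.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsShellEquivalents {
    public static void main(String[] args) throws Exception {
        // Loads core-site.xml / hdfs-site.xml from the classpath.
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // hdfs dfs -mkdir -p /user/ubuntu/hadoop
        fs.mkdirs(new Path("/user/ubuntu/hadoop"));

        // hdfs dfs -put index.html /user/ubuntu/hadoop
        fs.copyFromLocalFile(new Path("index.html"), new Path("/user/ubuntu/hadoop"));

        // hdfs dfs -get /user/ubuntu/hadoop/index.html a.html
        fs.copyToLocalFile(new Path("/user/ubuntu/hadoop/index.html"), new Path("a.html"));

        // hdfs dfs -rm -r -f /user/ubuntu/hadoop  (true = recursive; delete() does not use the trash)
        fs.delete(new Path("/user/ubuntu/hadoop"), true);

        fs.close();
    }
}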

Extra work is needed before a Java program can recognize Hadoop's hdfs URL scheme:
call the static setURLStreamHandlerFactory method of java.net.URL,
passing it an FsUrlStreamHandlerFactory instance. This registration can be done only once per JVM.
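
A minimal sketch of this pattern (the class name, the example URL, and the 4096-byte buffer are illustrative choices):

import java.io.InputStream;
import java.net.URL;
import org.apache.hadoop.fs.FsUrlStreamHandlerFactory;
import org.apache.hadoop.io.IOUtils;

public class UrlCat {
    static {
        // Register the hdfs:// handler; a JVM accepts this call exactly once.
        URL.setURLStreamHandlerFactory(new FsUrlStreamHandlerFactory());
    }

    public static void main(String[] args) throws Exception {
        InputStream in = null;
        try {
            // e.g. hdfs://s0/user/ubuntu/hadoop/index.html
            in = new URL(args[0]).openStream();
            IOUtils.copyBytes(in, System.out, 4096, false);
        } finally {
            IOUtils.closeStream(in);
        }
    }
}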

Disk seek time is about 10 ms and the disk transfer rate is roughly 100 MB per second, so about 100 MB can be streamed in one second; rounded to a power of two, that is 128 MB. The HDFS block size is chosen by this rule of thumb, so that seek time stays a small fraction of transfer time.
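
To check the block size actually in effect, a small sketch using the FileStatus API (class name and path argument are illustrative; it reads the cluster config from the classpath):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockSizeCheck {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        Path p = new Path(args[0]);  // e.g. /user/ubuntu/hadoop/index.html
        FileStatus st = fs.getFileStatus(p);
        // Block size used when this file was written, and the current default for new files.
        System.out.println("file block size   : " + st.getBlockSize() + " bytes");
        System.out.println("default block size: " + fs.getDefaultBlockSize(p) + " bytes");
    }
}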

# Combined effect of the whitelist and blacklist
include  //dfs.hosts
exclude  //dfs.hosts.exclude

include  exclude  Interpretation
no       no       Node may not connect
no       yes      Node may not connect
yes      no       Node may connect
yes      yes      Node may connect and will be decommissioned

# Commissioning and decommissioning nodes (HDFS)
1. Add the new node names to the dfs.include file, which lives in a local directory on the NameNode.
Whitelist
[s0:/soft/hadoop/etc/dfs.include.txt]
  s1
  s2
  s3
  s4
2. Add the property to hdfs-site.xml
<property>
  <name>dfs.hosts</name>
  <value>/soft/hadoop/etc/dfs.include.txt</value>
</property>
3. Refresh the node list on the NameNode
hdfs dfsadmin -refreshNodes
4. Add the node IPs (or hostnames) to the slaves file
  s1
  s2
  s3
  s4      // newly added
5. Start the DataNode on the new node individually
[s4]
hadoop-daemon.sh start datanode
[Decommissioning]
1. Add the IPs of the nodes being decommissioned to the blacklist; do not touch the whitelist
[/soft/hadoop/etc/dfs.hosts.exclude.txt]
s4
2. Configure hdfs-site.xml
<property>
  <name>dfs.hosts.exclude</name>
  <value>/soft/hadoop/etc/dfs.hosts.exclude.txt</value>
</property>
3. Refresh the node list on the NameNode
hdfs dfsadmin -refreshNodes
4. Check the web UI; the nodes being decommissioned show the status "Decommission In Progress".
5. When all nodes being decommissioned report "Decommissioned", the data migration is complete.
6. Remove the nodes from the whitelist and refresh the node list
[s0:/soft/hadoop/etc/dfs.include.txt]
hdfs dfsadmin -refreshNodes
7. Remove the decommissioned nodes from the slaves file

# Commissioning and decommissioning nodes (YARN)
1. Add the new node names to the dfs.include file, which lives in a local directory on the NameNode.
Whitelist
[s0:/soft/hadoop/etc/dfs.include.txt]
  s1
  s2
  s3
  s4
2. Add the property to yarn-site.xml
<property>
  <name>yarn.resourcemanager.nodes.include-path</name>
  <value>/soft/hadoop/etc/dfs.include.txt</value>
</property>
3. Refresh the node list on the ResourceManager
yarn rmadmin -refreshNodes
4. Add the node IPs (or hostnames) to the slaves file
  s1
  s2
  s3
  s4      // newly added
5. Start the NodeManager on the new node individually
[s4]
yarn-daemon.sh start nodemanager
[Decommissioning]
1. Add the IPs of the nodes being decommissioned to the blacklist; do not touch the whitelist
[/soft/hadoop/etc/dfs.hosts.exclude.txt]
s4
2. Configure yarn-site.xml
<property>
  <name>yarn.resourcemanager.nodes.exclude-path</name>
  <value>/soft/hadoop/etc/dfs.hosts.exclude.txt</value>
</property>
3. Refresh the node list on the ResourceManager
yarn rmadmin -refreshNodes
4. Check the web UI; the nodes being decommissioned show the status "Decommission In Progress".
5. When all nodes being decommissioned report "Decommissioned", the data migration is complete.
6. Remove the nodes from the whitelist and refresh the node list
[s0:/soft/hadoop/etc/dfs.include.txt]
yarn rmadmin -refreshNodes
7. Remove the decommissioned nodes from the slaves file
