Basic syntax: bin/hadoop fs <command> or bin/hdfs dfs <command>
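The two forms are interchangeable for HDFS paths (hadoop fs is the more general form and also works with other file systems). For example, both of the following list the HDFS root directory:
[Mark@hadoop102 hadoop-3.1.3]$ bin/hadoop fs -ls /
[Mark@hadoop102 hadoop-3.1.3]$ bin/hdfs dfs -ls /
Running bin/hadoop fs with no arguments prints the full usage summary: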
[atguigu@hadoop102 hadoop-3.1.3]$ bin/hadoop fs
Usage: hadoop fs [generic options]
[-appendToFile <localsrc> ... <dst>]
[-cat [-ignoreCrc] <src> ...]
[-checksum <src> ...]
[-chgrp [-R] GROUP PATH...]
[-chmod [-R] <MODE[,MODE]... | OCTALMODE> PATH...]
[-chown [-R] [OWNER][:[GROUP]] PATH...]
[-copyFromLocal [-f] [-p] [-l] [-d] [-t <thread count>] <localsrc> ... <dst>]
[-copyToLocal [-f] [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
[-count [-q] [-h] [-v] [-t [<storage type>]] [-u] [-x] [-e] <path> ...]
[-cp [-f] [-p | -p[topax]] [-d] <src> ... <dst>]
[-createSnapshot <snapshotDir> [<snapshotName>]]
[-deleteSnapshot <snapshotDir> <snapshotName>]
[-df [-h] [<path> ...]]
[-du [-s] [-h] [-v] [-x] <path> ...]
[-expunge]
[-find <path> ... <expression> ...]
[-get [-f] [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
[-getfacl [-R] <path>]
[-getfattr [-R] {-n name | -d} [-e en] <path>]
[-getmerge [-nl] [-skip-empty-file] <src> <localdst>]
[-head <file>]
[-help [cmd ...]]
[-ls [-C] [-d] [-h] [-q] [-R] [-t] [-S] [-r] [-u] [-e] [<path> ...]]
[-mkdir [-p] <path> ...]
[-moveFromLocal <localsrc> ... <dst>]
[-moveToLocal <src> <localdst>]
[-mv <src> ... <dst>]
[-put [-f] [-p] [-l] [-d] <localsrc> ... <dst>]
[-renameSnapshot <snapshotDir> <oldName> <newName>]
[-rm [-f] [-r|-R] [-skipTrash] [-safely] <src> ...]
[-rmdir [--ignore-fail-on-non-empty] <dir> ...]
[-setfacl [-R] [{-b|-k} {-m|-x <acl_spec>} <path>]|[--set <acl_spec> <path>]]
[-setfattr {-n name [-v value] | -x name} <path>]
[-setrep [-R] [-w] <rep> <path> ...]
[-stat [format] <path> ...]
[-tail [-f] [-s <sleep interval>] <file>]
[-test -[defsz] <path>]
[-text [-ignoreCrc] <src> ...]
[-touch [-a] [-m] [-t TIMESTAMP ] [-c] <path> ...]
[-touchz <path> ...]
[-truncate [-w] <length> <path> ...]
[-usage [cmd ...]]
Generic options supported are:
-conf <configuration file>            specify an application configuration file
-D <property=value>                   define a value for a given property
-fs <file:///|hdfs://namenode:port>   specify default filesystem URL to use, overrides 'fs.defaultFS' property from configurations.
-jt <local|resourcemanager:port>      specify a ResourceManager
-files <file1,...>                    specify a comma-separated list of files to be copied to the map reduce cluster
-libjars <jar1,...>                   specify a comma-separated list of jar files to be included in the classpath
-archives <archive1,...>              specify a comma-separated list of archives to be unarchived on the compute machines
The general command line syntax is:
command [genericOptions] [commandOptions]
1) Start the Hadoop cluster (to make the tests that follow easier)
[Mark@hadoop102 hadoop-3.1.3]$ sbin/start-dfs.sh
[Mark@hadoop103 hadoop-3.1.3]$ sbin/start-yarn.sh
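To confirm the daemons came up, you can run jps on each node; the exact process list (NameNode, DataNode, ResourceManager, NodeManager, etc.) depends on how your cluster is laid out:
[Mark@hadoop102 hadoop-3.1.3]$ jps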
2) -help: print the help for a command
[Mark@hadoop102 hadoop-3.1.3]$ hadoop fs -help rm
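If only the syntax line is needed, -usage (listed in the summary above) prints just that, without the option descriptions:
[Mark@hadoop102 hadoop-3.1.3]$ hadoop fs -usage rm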
1) -moveFromLocal: cut and paste from local to HDFS (the local copy is removed; make sure the target directory exists first, see -mkdir below)
[Mark@hadoop102 hadoop-3.1.3]$ touch kongming.txt
[Mark@hadoop102 hadoop-3.1.3]$ hadoop fs -moveFromLocal ./kongming.txt /sanguo/shuguo
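If the move succeeded, the file is now in HDFS and gone from the local directory (the error text below is the typical GNU ls message and may vary by system):
[Mark@hadoop102 hadoop-3.1.3]$ hadoop fs -ls /sanguo/shuguo
[Mark@hadoop102 hadoop-3.1.3]$ ls kongming.txt
ls: cannot access 'kongming.txt': No such file or directory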
2) -copyFromLocal: copy a file from the local file system to an HDFS path
[Mark@hadoop102 hadoop-3.1.3]$ hadoop fs -copyFromLocal README.txt /
3) -appendToFile: append a local file to the end of a file that already exists on HDFS
[Mark@hadoop102 hadoop-3.1.3]$ touch liubei.txt
[Mark@hadoop102 hadoop-3.1.3]$ vi liubei.txt
Enter the following line:
san gu mao lu
[Mark@hadoop102 hadoop-3.1.3]$ hadoop fs -appendToFile liubei.txt /sanguo/shuguo/kongming.txt
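Since kongming.txt was created empty with touch, printing it afterwards should show just the appended line (see -cat below):
[Mark@hadoop102 hadoop-3.1.3]$ hadoop fs -cat /sanguo/shuguo/kongming.txt
san gu mao lu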
4) -put: equivalent to copyFromLocal
[Mark@hadoop102 hadoop-3.1.3]$ hadoop fs -put ./zaiyiqi.txt /user/atguigu/test/
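-put also accepts the -f flag (shown in the usage summary above) to overwrite an existing target, which is handy when re-running an upload:
[Mark@hadoop102 hadoop-3.1.3]$ hadoop fs -put -f ./zaiyiqi.txt /user/atguigu/test/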
1) -copyToLocal: copy from HDFS to the local file system
[Mark@hadoop102 hadoop-3.1.3]$ hadoop fs -copyToLocal /sanguo/shuguo/kongming.txt ./
2) -get: equivalent to copyToLocal; downloads a file from HDFS to the local file system
[Mark@hadoop102 hadoop-3.1.3]$ hadoop fs -get /sanguo/shuguo/kongming.txt ./
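The local destination can also be a new file name, so the file is renamed as it is downloaded (kongming.copy.txt here is just an example name):
[Mark@hadoop102 hadoop-3.1.3]$ hadoop fs -get /sanguo/shuguo/kongming.txt ./kongming.copy.txt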
3) -getmerge: download and merge multiple files; for example, the HDFS directory /user/atguigu/test contains several files: log.1, log.2, log.3, ...
[Mark@hadoop102 hadoop-3.1.3]$ hadoop fs -getmerge /user/atguigu/test/* ./zaiyiqi.txt
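The result is a single local file containing the concatenated contents, which can be checked with ordinary Linux commands:
[Mark@hadoop102 hadoop-3.1.3]$ cat ./zaiyiqi.txt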
1) -ls: list directory contents
[Mark@hadoop102 hadoop-3.1.3]$ hadoop fs -ls /
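Adding -R (listed in the usage summary above) recurses into subdirectories:
[Mark@hadoop102 hadoop-3.1.3]$ hadoop fs -ls -R /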
2) -mkdir: create a directory on HDFS
[Mark@hadoop102 hadoop-3.1.3]$ hadoop fs -mkdir -p /sanguo/shuguo
3) -cat: display the contents of a file
[Mark@hadoop102 hadoop-3.1.3]$ hadoop fs -cat /sanguo/shuguo/kongming.txt
4) -chgrp, -chmod, -chown: same usage as in the Linux file system; change a file's group, permissions, or owner
[Mark@hadoop102 hadoop-3.1.3]$ hadoop fs -chmod 666 /sanguo/shuguo/kongming.txt
[Mark@hadoop102 hadoop-3.1.3]$ hadoop fs -chown atguigu:atguigu /sanguo/shuguo/kongming.txt
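Listing the file again should show the new mode and owner; with mode 666 the permission string would read -rw-rw-rw-:
[Mark@hadoop102 hadoop-3.1.3]$ hadoop fs -ls /sanguo/shuguo/kongming.txt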
5) -cp: copy from one HDFS path to another HDFS path
[Mark@hadoop102 hadoop-3.1.3]$ hadoop fs -cp /sanguo/shuguo/kongming.txt /zhuge.txt
6) -mv: move files within HDFS
[Mark@hadoop102 hadoop-3.1.3]$ hadoop fs -mv /zhuge.txt /sanguo/shuguo/
7) -tail: display the end of a file
[Mark@hadoop102 hadoop-3.1.3]$ hadoop fs -tail /sanguo/shuguo/kongming.txt
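As with the Linux tail command, the -f flag keeps the command running and prints new data as it is appended to the file:
[Mark@hadoop102 hadoop-3.1.3]$ hadoop fs -tail -f /sanguo/shuguo/kongming.txt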
8) -rm: delete a file or directory
[Mark@hadoop102 hadoop-3.1.3]$ hadoop fs -rm /user/atguigu/test/jinlian2.txt
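Deleting a non-empty directory requires -r (or -R); /tmp/old below is just a placeholder path for illustration:
[Mark@hadoop102 hadoop-3.1.3]$ hadoop fs -rm -r /tmp/old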
9) -rmdir: delete an empty directory
[Mark@hadoop102 hadoop-3.1.3]$ hadoop fs -mkdir /test
[Mark@hadoop102 hadoop-3.1.3]$ hadoop fs -rmdir /test
10) -du: report the size of a directory (-s prints a single summary line, -h uses human-readable units)
[Mark@hadoop102 hadoop-3.1.3]$ hadoop fs -du -s -h /user/atguigu/test
2.7 K /user/atguigu/test
[Mark@hadoop102 hadoop-3.1.3]$ hadoop fs -du -h /user/atguigu/test
1.3 K /user/atguigu/test/README.txt
15 /user/atguigu/test/jinlian.txt
1.4 K /user/atguigu/test/zaiyiqi.txt
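Note that in Hadoop 3.x, -du normally prints two numbers per entry: the file size and the total disk space consumed across all replicas (size × replication factor), followed by the path; the listing above shows only the first column.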
11) -setrep: set the replication factor of a file in HDFS
[Mark@hadoop102 hadoop-3.1.3]$ hadoop fs -setrep 10 /sanguo/shuguo/kongming.txt
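The replication factor set here is only recorded in the NameNode's metadata; whether that many replicas actually exist depends on the number of DataNodes. For example, with only 3 DataNodes at most 3 replicas are stored, and the count grows to 10 only if the cluster is later expanded to 10 or more nodes. The recorded factor can be checked with -stat and its %r format specifier:
[Mark@hadoop102 hadoop-3.1.3]$ hadoop fs -stat %r /sanguo/shuguo/kongming.txt
10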