During the environment setup we already used commands of this form: they all begin with hadoop fs, where fs stands for file system.
To see all HDFS shell commands, run hadoop fs with no arguments:
[root@localhost current]# hadoop fs
Usage: hadoop fs [generic options]
	[-appendToFile <localsrc> ... <dst>]
	[-cat [-ignoreCrc] <src> ...]
	[-checksum <src> ...]
	[-chgrp [-R] GROUP PATH...]
	[-chmod [-R] <MODE[,MODE]... | OCTALMODE> PATH...]
	[-chown [-R] [OWNER][:[GROUP]] PATH...]
	[-copyFromLocal [-f] [-p] <localsrc> ... <dst>]
	[-copyToLocal [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
	[-count [-q] <path> ...]
	[-cp [-f] [-p] <src> ... <dst>]
	[-createSnapshot <snapshotDir> [<snapshotName>]]
	[-deleteSnapshot <snapshotDir> <snapshotName>]
	[-df [-h] [<path> ...]]
	[-du [-s] [-h] <path> ...]
	[-expunge]
	[-get [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
	[-getfacl [-R] <path>]
	[-getmerge [-nl] <src> <localdst>]
	[-help [cmd ...]]
	[-ls [-d] [-h] [-R] [<path> ...]]
	[-mkdir [-p] <path> ...]
	[-moveFromLocal <localsrc> ... <dst>]
	[-moveToLocal <src> <localdst>]
	[-mv <src> ... <dst>]
	[-put [-f] [-p] <localsrc> ... <dst>]
	[-renameSnapshot <snapshotDir> <oldName> <newName>]
	[-rm [-f] [-r|-R] [-skipTrash] <src> ...]
	[-rmdir [--ignore-fail-on-non-empty] <dir> ...]
	[-setfacl [-R] [{-b|-k} {-m|-x <acl_spec>} <path>]|[--set <acl_spec> <path>]]
	[-setrep [-R] [-w] <rep> <path> ...]
	[-stat [format] <path> ...]
	[-tail [-f] <file>]
	[-test -[defsz] <path>]
	[-text [-ignoreCrc] <src> ...]
	[-touchz <path> ...]
	[-usage [cmd ...]]

Generic options supported are
-conf <configuration file>     specify an application configuration file
-D <property=value>            use value for given property
-fs <local|namenode:port>      specify a namenode
-jt <local|jobtracker:port>    specify a job tracker
-files <comma separated list of files>    specify comma separated files to be copied to the map reduce cluster
-libjars <comma separated list of jars>    specify comma separated jar files to include in the classpath.
-archives <comma separated list of archives>    specify comma separated archives to be unarchived on the compute machines.

The general command line syntax is
bin/hadoop command [genericOptions] [commandOptions]
Note: HDFS does not allow modifying file contents in place, but it does allow appending. An in-place modification would require re-arranging the blocks the file has already been split into, so it is not supported; appending works because it simply adds new data at the end, which amounts to adding blocks.
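For example, a minimal append sketch using -appendToFile from the usage output above (./notes.txt is an assumed local file):

hadoop fs -appendToFile ./notes.txt /install.log   # ./notes.txt assumed to exist locally

The contents of notes.txt end up at the end of /install.log; the existing data stays in place, HDFS only extends the file.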
Note: in hadoop commands, / refers to the HDFS root directory, not the Linux root directory. You can also write the full URI: hdfs://localhost:9000/
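To illustrate, these two commands list the same directory (assuming the NameNode address localhost:9000 configured during setup):

hadoop fs -ls /
hadoop fs -ls hdfs://localhost:9000/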
For example, listing directory contents: -ls
[root@localhost current]# hadoop fs -ls /
Found 4 items
-rw-r--r--   1 root supergroup      37667 2018-04-11 03:29 /install.log
drwx------   - root supergroup          0 2018-04-11 03:54 /tmp
drwxr-xr-x   - root supergroup          0 2018-04-11 03:54 /user
drwxr-xr-x   - root supergroup          0 2018-04-11 04:10 /wordcount
Similar to ls on Linux; the output shows file type and permissions. The 1 is the replication factor, i.e. how many copies of the file the cluster keeps (directories show - there, since only file data is replicated).
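The replication factor can be changed per file with -setrep from the usage output above; a sketch raising /install.log to two replicas:

hadoop fs -setrep 2 /install.log   # add -w to wait until re-replication completes

On a single-DataNode setup like this one the second replica can never actually be placed, so the file would simply be reported as under-replicated.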
Changing a file's owner: -chown
[root@localhost current]# hadoop fs -chown hadoop /install.log
[root@localhost current]# hadoop fs -ls /
Found 4 items
-rw-r--r--   1 hadoop supergroup      37667 2018-04-11 03:29 /install.log
drwx------   - root   supergroup          0 2018-04-11 03:54 /tmp
drwxr-xr-x   - root   supergroup          0 2018-04-11 03:54 /user
drwxr-xr-x   - root   supergroup          0 2018-04-11 04:10 /wordcount
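-chown can also set owner and group in one call, and -R applies the change recursively; a sketch (hadoop:hadoop is an assumed user/group pair):

hadoop fs -chown -R hadoop:hadoop /wordcount   # hadoop:hadoop assumed to exist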
Changing permissions: -chmod
[root@localhost current]# hadoop fs -chmod 777 /install.log
[root@localhost current]# hadoop fs -ls /
Found 4 items
-rwxrwxrwx   1 hadoop supergroup      37667 2018-04-11 03:29 /install.log
drwx------   - root   supergroup          0 2018-04-11 03:54 /tmp
drwxr-xr-x   - root   supergroup          0 2018-04-11 03:54 /user
drwxr-xr-x   - root   supergroup          0 2018-04-11 04:10 /wordcount
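As with Linux chmod, -R recurses into directories; a sketch applying more conservative permissions to a whole tree:

hadoop fs -chmod -R 755 /wordcount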
Uploading a file: -copyFromLocal (equivalent to -put)
[root@localhost ~]# hadoop fs -copyFromLocal ./anaconda-ks.cfg /
[root@localhost ~]# hadoop fs -ls /
Found 5 items
-rw-r--r--   1 root   supergroup       2388 2018-04-11 05:30 /anaconda-ks.cfg
-rwxrwxrwx   1 hadoop supergroup      37667 2018-04-11 03:29 /install.log
drwx------   - root   supergroup          0 2018-04-11 03:54 /tmp
drwxr-xr-x   - root   supergroup          0 2018-04-11 03:54 /user
drwxr-xr-x   - root   supergroup          0 2018-04-11 04:10 /wordcount
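Uploading to a path that already exists fails by default; the -f flag from the usage output overwrites. A sketch re-uploading the same file:

hadoop fs -put -f ./anaconda-ks.cfg /   # -f overwrites the existing /anaconda-ks.cfg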
Downloading a file: -copyToLocal (equivalent to -get)
[root@localhost test]# hadoop fs -get /install.log
[root@localhost test]# ls
install.log
[root@localhost test]# hadoop fs -copyToLocal /anaconda-ks.cfg
[root@localhost test]# ls
anaconda-ks.cfg  install.log
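A related download command is -getmerge, which concatenates every file under an HDFS directory into a single local file; a sketch (./result.txt is an assumed output name):

hadoop fs -getmerge /wordcount/output ./result.txt   # merges all files under /wordcount/output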
Copying a file: -cp (copies directly within HDFS)
[root@localhost test]# hadoop fs -cp /install.log /wordcount/
[root@localhost test]# hadoop fs -ls /
Found 5 items
-rw-r--r--   1 root   supergroup       2388 2018-04-11 05:30 /anaconda-ks.cfg
-rwxrwxrwx   1 hadoop supergroup      37667 2018-04-11 03:29 /install.log
drwx------   - root   supergroup          0 2018-04-11 03:54 /tmp
drwxr-xr-x   - root   supergroup          0 2018-04-11 03:54 /user
drwxr-xr-x   - root   supergroup          0 2018-04-11 05:35 /wordcount
[root@localhost test]# hadoop fs -ls /wordcount/
Found 3 items
drwxr-xr-x   - root supergroup          0 2018-04-11 04:08 /wordcount/input
-rw-r--r--   1 root supergroup      37667 2018-04-11 05:35 /wordcount/install.log
drwxr-xr-x   - root supergroup          0 2018-04-11 04:10 /wordcount/output
Checking directory and file sizes: -du
[root@localhost test]# hadoop fs -du -s -h hdfs://localhost:9000/*    # size of each file and directory under the root (the full URI must be written here)
2.3 K    hdfs://localhost:9000/anaconda-ks.cfg
36.8 K   hdfs://localhost:9000/install.log
269.6 K  hdfs://localhost:9000/tmp
0        hdfs://localhost:9000/user
36.8 K   hdfs://localhost:9000/wordcount

The full URI matters because the * glob is expanded by the local shell; a bare /* would be expanded against the Linux filesystem instead of HDFS.
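Where -du reports per-path sizes, -df (also in the usage output above) reports capacity and usage of the filesystem as a whole:

hadoop fs -df -h /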
Creating a directory: -mkdir (without the -p flag it will not create missing parent directories)
[root@localhost test]# hadoop fs -mkdir /aa
[root@localhost test]# hadoop fs -ls /
Found 6 items
drwxr-xr-x   - root   supergroup          0 2018-04-11 05:58 /aa
-rw-r--r--   1 root   supergroup       2388 2018-04-11 05:30 /anaconda-ks.cfg
-rwxrwxrwx   1 hadoop supergroup      37667 2018-04-11 03:29 /install.log
drwx------   - root   supergroup          0 2018-04-11 03:54 /tmp
drwxr-xr-x   - root   supergroup          0 2018-04-11 03:54 /user
drwxr-xr-x   - root   supergroup          0 2018-04-11 05:35 /wordcount
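With -p, missing parent directories are created as needed, so a nested path can be made in one call (/aa/bb/cc is an arbitrary example path):

hadoop fs -mkdir -p /aa/bb/cc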
Deleting files and directories: -rm (deleting a directory requires the -r flag, meaning recursive deletion)
[root@localhost test]# hadoop fs -rm -f /anaconda-ks.cfg
[root@localhost test]# hadoop fs -rm -f -r /aa
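Two related sketches (assume /aa exists again): -rmdir removes a directory only if it is already empty, and -skipTrash bypasses the trash if fs.trash.interval has been enabled (trash is off by default, in which case deletes are immediate anyway):

hadoop fs -rmdir /aa                  # fails unless /aa is empty
hadoop fs -rm -r -skipTrash /aa       # delete immediately, never via trash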
Moving files: from local to HDFS (-moveFromLocal), from HDFS to local (-moveToLocal), and within HDFS (-mv). Their usage signatures:
[-moveFromLocal <localsrc> ... <dst>]
[-moveToLocal <src> <localdst>]
[-mv <src> ... <dst>]
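A sketch of -mv, which also serves as the rename operation within HDFS (the .bak name is an arbitrary example):

hadoop fs -mv /install.log /wordcount/install.log.bak   # move and rename in one step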
Viewing the tail of a file, optionally following new content in real time (similar to tail -f on Linux):
[-tail [-f] <file>]
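A sketch of following a file as it grows (the path is hypothetical; any HDFS file works):

hadoop fs -tail -f /wordcount/output/part-r-00000   # hypothetical MapReduce output file

Like tail -f on Linux, it keeps printing new content until interrupted with Ctrl-C.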
The most frequently used commands, in summary:
1.0 View help:           hadoop fs -help
1.1 Upload a file:       hadoop fs -put <localsrc> <dst>
1.2 View file contents:  hadoop fs -cat <src>
1.3 List files:          hadoop fs -ls /
1.4 Download a file:     hadoop fs -get <src> <localdst>
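Of these, only -cat has not been demonstrated above; a quick sketch printing a file straight to the terminal (using the copy created in the -cp example):

hadoop fs -cat /wordcount/install.log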