5. Operating HDFS from the Command Line

Basic Syntax

hadoop fs [specific command]

Full Option Reference

hadoop fs

        [-appendToFile <localsrc> ... <dst>]
        [-cat [-ignoreCrc] <src> ...]
        [-checksum <src> ...]
        [-chgrp [-R] GROUP PATH...]
        [-chmod [-R] <MODE[,MODE]... | OCTALMODE> PATH...]
        [-chown [-R] [OWNER][:[GROUP]] PATH...]
        [-copyFromLocal [-f] [-p] [-l] [-d] <localsrc> ... <dst>]
        [-copyToLocal [-f] [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
        [-count [-q] [-h] [-v] [-t [<storage type>]] [-u] [-x] <path> ...]
        [-cp [-f] [-p | -p[topax]] [-d] <src> ... <dst>]
        [-createSnapshot <snapshotDir> [<snapshotName>]]
        [-deleteSnapshot <snapshotDir> <snapshotName>]
        [-df [-h] [<path> ...]]
        [-du [-s] [-h] [-x] <path> ...]
        [-expunge]
        [-find <path> ... <expression> ...]
        [-get [-f] [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
        [-getfacl [-R] <path>]
        [-getfattr [-R] {-n name | -d} [-e en] <path>]
        [-getmerge [-nl] [-skip-empty-file] <src> <localdst>]
        [-help [cmd ...]]
        [-ls [-C] [-d] [-h] [-q] [-R] [-t] [-S] [-r] [-u] [<path> ...]]
        [-mkdir [-p] <path> ...]
        [-moveFromLocal <localsrc> ... <dst>]
        [-moveToLocal <src> <localdst>]
        [-mv <src> ... <dst>]
        [-put [-f] [-p] [-l] [-d] <localsrc> ... <dst>]
        [-renameSnapshot <snapshotDir> <oldName> <newName>]
        [-rm [-f] [-r|-R] [-skipTrash] [-safely] <src> ...]
        [-rmdir [--ignore-fail-on-non-empty] <dir> ...]
        [-setfacl [-R] [{-b|-k} {-m|-x <acl_spec>} <path>]|[--set <acl_spec> <path>]]
        [-setfattr {-n name [-v value] | -x name} <path>]
        [-setrep [-R] [-w] <rep> <path> ...]
        [-stat [format] <path> ...]
        [-tail [-f] <file>]
        [-test -[defsz] <path>]
        [-text [-ignoreCrc] <src> ...]
        [-touchz <path> ...]
        [-truncate [-w] <length> <path> ...]
        [-usage [cmd ...]]

Common Commands

  1. -help: print the usage and options of a command
    hadoop fs -help ls
  2. -ls: list directory contents
    hadoop fs -ls /
  3. -mkdir: create a directory on HDFS
    hadoop fs -mkdir -p /user/hadoop/test
  4. -moveFromLocal: move (cut and paste) a local file to HDFS
    hadoop fs -moveFromLocal hello.txt /user/hadoop/test
  5. -appendToFile: append a local file to the end of an existing HDFS file
    hadoop fs -appendToFile hello.txt /user/hadoop/test/hello.txt
  6. -cat: display the contents of a file
    hadoop fs -cat /user/hadoop/test/hello.txt
  7. -tail: display the end of a file
    hadoop fs -tail /user/hadoop/test/hello.txt
  8. -chgrp, -chmod, -chown: change a file's group, permissions, and owner, with the same usage as on a Linux file system
  9. -copyFromLocal: copy a file from the local file system to an HDFS path
    hadoop fs -copyFromLocal test.txt /user/hadoop/test/
  10. -copyToLocal: copy a file from HDFS to the local file system
    hadoop fs -copyToLocal /user/hadoop/test/hello.txt hdfsHello.txt
  11. -cp: copy from one HDFS path to another HDFS path
    hadoop fs -cp /user/hadoop/test/hello.txt /user/hadoop/test/cpHello.txt
  12. -mv: move a file within HDFS
    hadoop fs -mv /user/hadoop/hello.txt /user/hadoop/test/mvHello.txt
  13. -get: same as copyToLocal; download a file from HDFS to the local file system
    hadoop fs -get /user/hadoop/test/ .
    hadoop fs -get /user/hadoop/test/test.txt .
    hadoop fs -get /user/hadoop/test/test.txt renametest.txt
  14. -getmerge: download multiple files merged into one local file
    hadoop fs -getmerge /user/hadoop/test/ getmerge.txt
  15. -put: same as copyFromLocal; upload a local file to HDFS
    hadoop fs -put hdfsHello.txt /user/hadoop/test/
  16. -rm: delete a file or directory
    hadoop fs -rm -r /user/hadoop/test/
    hadoop fs -rm /user/hadoop/hadoop-2.8.3.tar.gz
  17. -rmdir: delete an empty directory
    hadoop fs -rmdir /user/hadoop/
  18. -df: report the file system's free-space statistics
    hadoop fs -df -h /
    Filesystem           Size     Used   Available  Use%
    hdfs://bigdata1:9000 243.2 G  1.8 G  234.2 G    1%
  19. -du: report the size of a directory
    hadoop fs -du -s -h /user/hadoop/test/
    hadoop fs -du -h /user/hadoop/test/


    (image: output of the hadoop fs -du command)
  20. -count: count the directories, files, and bytes under a given path
    hadoop fs -count -v /software
    DIR_COUNT  FILE_COUNT  CONTENT_SIZE  PATHNAME
    1          1           194151339     /software
  21. -setrep: set the replication factor of a file in HDFS
    hadoop fs -setrep 10 /user/hadoop/test/jdk-8u181-linux-x64.tar.gz


    (image: output of the hadoop fs -setrep command)

    The replication factor set here is only recorded in the NameNode's metadata; whether that many replicas actually exist depends on the number of DataNodes. Since the cluster currently has only 3 machines, there can be at most 3 replicas; only when the cluster grows to 10 nodes can the replication factor of 10 actually be reached.
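    The rule in the note above can be sketched in a couple of lines: HDFS places at most one replica of a block per DataNode, so the replica count actually achieved is capped by the number of live DataNodes. This is a minimal illustration in Python, not HDFS source code, and the function name is made up for the example:

```python
def effective_replicas(requested: int, live_datanodes: int) -> int:
    """The NameNode records the requested replication factor, but a block
    gets at most one replica per DataNode, so the count actually stored
    is capped by the number of live DataNodes."""
    return min(requested, live_datanodes)

# -setrep 10 on a 3-node cluster: only 3 replicas exist for now.
print(effective_replicas(10, 3))   # 3
# Once the cluster grows to 10 DataNodes, the target can be reached.
print(effective_replicas(10, 10))  # 10
```

    Until enough nodes join, the file is simply reported as under-replicated; HDFS creates the missing replicas automatically as DataNodes become available.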
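    As a small aside on reading the `-count -v` output shown earlier: each data line carries four whitespace-separated columns, which makes it easy to consume from a script. A hedged sketch (the helper name is made up; the sample line is the one from the /software example above):

```python
def parse_count_line(line: str) -> dict:
    """Split one data line of `hadoop fs -count -v` output into its four
    columns: DIR_COUNT, FILE_COUNT, CONTENT_SIZE (bytes), PATHNAME."""
    dirs, files, size, path = line.split()
    return {
        "DIR_COUNT": int(dirs),
        "FILE_COUNT": int(files),
        "CONTENT_SIZE": int(size),
        "PATHNAME": path,
    }

row = parse_count_line("1 1 194151339 /software")
print(row["CONTENT_SIZE"])  # 194151339
```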
