【HDFS】II. HDFS Command-Line Operations

Basic syntax

bin/hadoop fs <command>

Full option list

[daxiong@hadoop hadoop-2.7.2]$ bin/hadoop fs

        [-appendToFile <localsrc> ... <dst>]
        [-cat [-ignoreCrc] <src> ...]
        [-checksum <src> ...]
        [-chgrp [-R] GROUP PATH...]
        [-chmod [-R] <MODE[,MODE]... | OCTALMODE> PATH...]
        [-chown [-R] [OWNER][:[GROUP]] PATH...]
        [-copyFromLocal [-f] [-p] <localsrc> ... <dst>]
        [-copyToLocal [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
        [-count [-q] <path> ...]
        [-cp [-f] [-p] <src> ... <dst>]
        [-createSnapshot <snapshotDir> [<snapshotName>]]
        [-deleteSnapshot <snapshotDir> <snapshotName>]
        [-df [-h] [<path> ...]]
        [-du [-s] [-h] <path> ...]
        [-expunge]
        [-get [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
        [-getfacl [-R] <path>]
        [-getmerge [-nl] <src> <localdst>]
        [-help [cmd ...]]
        [-ls [-d] [-h] [-R] [<path> ...]]
        [-mkdir [-p] <path> ...]
        [-moveFromLocal <localsrc> ... <dst>]
        [-moveToLocal <src> <localdst>]
        [-mv <src> ... <dst>]
        [-put [-f] [-p] <localsrc> ... <dst>]
        [-renameSnapshot <snapshotDir> <oldName> <newName>]
        [-rm [-f] [-r|-R] [-skipTrash] <src> ...]
        [-rmdir [--ignore-fail-on-non-empty] <dir> ...]
        [-setfacl [-R] [{-b|-k} {-m|-x <acl_spec>} <path>]|[--set <acl_spec> <path>]]
        [-setrep [-R] [-w] <rep> <path/file> ...]
        [-stat [format] <path> ...]
        [-tail [-f] <file>]
        [-test -[defsz] <path>]
        [-text [-ignoreCrc] <src> ...]
        [-touchz <path> ...]
        [-usage [cmd ...]]

Common commands in practice

  1. Start the Hadoop cluster (for the tests that follow)

    # Start the HDFS daemons
    [daxiong@hadoop102 hadoop-2.7.2]$ sbin/start-dfs.sh
    # Start the YARN daemons
    [daxiong@hadoop102 hadoop-2.7.2]$ sbin/start-yarn.sh
  2. -help: print the usage of a command

    
    # For example, look up the usage of rm
    
    [daxiong@hadoop102 bin]$ ./hadoop fs -help rm
    
    -rm [-f] [-r|-R] [-skipTrash] <src> ... :
     Delete all files that match the specified file pattern. Equivalent to the Unix
     command "rm <src>"

     -skipTrash  option bypasses trash, if enabled, and immediately deletes <src>
     -f          If the file does not exist, do not display a diagnostic message or
                 modify the exit status to reflect an error.
     -[rR]       Recursively deletes directories
  3. -ls: list directory contents

    [daxiong@hadoop102 bin]$ ./hadoop fs -ls /
    
    Found 1 items
    drwxrwx---   - daxiong supergroup          0 2018-04-08 22:49 /tmp
  4. -mkdir: create directories on HDFS

    
    # Create a /daxiong directory
    
    [daxiong@hadoop102 bin]$ ./hadoop fs -mkdir /daxiong
    
    # Create /daxiong/haha/da/da/da; creating nested paths requires the -p flag
    
    [daxiong@hadoop102 bin]$ ./hadoop fs -mkdir -p  /daxiong/haha/da/da/da
    
    
    # Verify it was created
    
    [daxiong@hadoop102 bin]$ ./hadoop fs -ls /
    Found 1 items
    drwxr-xr-x   - daxiong supergroup          0 2018-04-10 20:09 /daxiong
    
    # Verify recursively
    
    [daxiong@hadoop102 bin]$ ./hadoop fs -ls -R /
    drwxr-xr-x   - daxiong supergroup          0 2018-04-10 20:51 /daxiong
    drwxr-xr-x   - daxiong supergroup          0 2018-04-10 20:51 /daxiong/haha
    drwxr-xr-x   - daxiong supergroup          0 2018-04-10 20:54 /daxiong/haha/da
    drwxr-xr-x   - daxiong supergroup          0 2018-04-10 20:54 /daxiong/haha/da/da
    drwxr-xr-x   - daxiong supergroup          0 2018-04-10 20:54 /daxiong/haha/da/da/da
    
  5. -moveFromLocal: cut and paste from the local filesystem to HDFS (the local copy is removed)
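
The cut-and-paste semantics can be sketched as follows. The hadoop invocation assumes a running cluster and hypothetical paths, so it appears as a comment; a local `mv` below illustrates the key point, namely that the source file is removed:

```shell
# Assumes a running cluster; /daxiong and ./local.txt are hypothetical:
# hadoop fs -moveFromLocal ./local.txt /daxiong
# Unlike -put/-copyFromLocal, the local source is removed afterwards,
# just like a local mv:
printf 'hello\n' > /tmp/mfl_demo_src.txt
mv /tmp/mfl_demo_src.txt /tmp/mfl_demo_dst.txt
[ -e /tmp/mfl_demo_src.txt ] && echo "source kept" || echo "source removed"
```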

  6. -appendToFile: append a local file to the end of an existing HDFS file
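
A minimal sketch of the append semantics (the hadoop call assumes a running cluster and hypothetical paths, so it is shown as a comment; the local redirection below has the same effect on a local file):

```shell
# Assumes a running cluster; the paths are hypothetical:
# hadoop fs -appendToFile ./more.txt /daxiong/daxiong.txt
# The effect matches a local append redirection:
printf 'line1\n' >  /tmp/append_demo.txt
printf 'line2\n' >> /tmp/append_demo.txt
cat /tmp/append_demo.txt
```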

  7. -cat: print file contents

  8. -tail: print the end of a file

  9. -chgrp, -chmod, -chown: same usage as in the Linux filesystem; change a file's group, permissions, or owner
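
A sketch of the permission commands (the hadoop calls assume a running cluster and a hypothetical path, so they are comments; the local demo below uses GNU `stat` to show that the mode syntax is the familiar chmod one):

```shell
# Assumes a running cluster; the HDFS path is hypothetical:
# hadoop fs -chmod 640 /daxiong/daxiong.txt
# hadoop fs -chown daxiong:supergroup /daxiong/daxiong.txt
# The mode syntax is identical to local chmod (GNU stat used for checking):
touch /tmp/chmod_demo.txt
chmod 640 /tmp/chmod_demo.txt
stat -c '%a' /tmp/chmod_demo.txt
```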

  10. -copyFromLocal: copy a file from the local filesystem to an HDFS path

  11. -copyToLocal: copy from HDFS to the local filesystem

  12. -cp: copy from one HDFS path to another HDFS path

  13. -mv: move files within HDFS

  14. -get: same as copyToLocal; download a file from HDFS to the local filesystem

  15. -getmerge: merge and download multiple files; for example, the HDFS directory /aaa/ contains several files: log.1, log.2, log.3, …
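
The merge-download behavior can be sketched like this (the hadoop call assumes a running cluster and the hypothetical /aaa directory, so it is a comment; the local `cat` below mirrors the concatenation it performs):

```shell
# Assumes a running cluster; /aaa and its log files are hypothetical:
# hadoop fs -getmerge /aaa/ ./merged.log
# -getmerge concatenates every file under the source into one local file,
# equivalent in spirit to:
mkdir -p /tmp/getmerge_demo
printf 'from log.1\n' > /tmp/getmerge_demo/log.1
printf 'from log.2\n' > /tmp/getmerge_demo/log.2
cat /tmp/getmerge_demo/log.* > /tmp/merged_demo.log
wc -l < /tmp/merged_demo.log
```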

  16. -put: same as copyFromLocal

    
    # Put the local file /opt/moudle/hadoop-2.7.2/daxiong.txt into the /daxiong directory on HDFS
    
    [daxiong@hadoop102 bin]$ ./hadoop fs -put /opt/moudle/hadoop-2.7.2/daxiong.txt /daxiong
  17. -rm: delete files or directories

    
    # Recursively delete the /tmp directory
    
    [daxiong@hadoop102 bin]$ ./hadoop fs -rm -r /tmp
    
    18/04/10 20:08:03 INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 0 minutes, Emptier interval = 0 minutes.
    Deleted /tmp
    
  18. -rmdir: delete an empty directory
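
A sketch of the empty-directory restriction (the hadoop calls assume a running cluster and a hypothetical path, so they are comments; the local `rmdir` below has the same rule):

```shell
# Assumes a running cluster; the path is hypothetical:
# hadoop fs -mkdir /daxiong/empty
# hadoop fs -rmdir /daxiong/empty
# Like the local rmdir, it only removes empty directories:
mkdir -p /tmp/rmdir_demo/empty
rmdir /tmp/rmdir_demo/empty
[ -d /tmp/rmdir_demo/empty ] && echo "still there" || echo "removed"
```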

  19. -df: report the filesystem's free-space statistics

    [daxiong@hadoop102 bin]$ ./hadoop fs -df

    Filesystem                    Size    Used    Available  Use%
    hdfs://hadoop101:8020  31304097792  114688  17920761856    0%

  20. -du: report the size of directories and files

  21. -setrep: set the replication factor of a file in HDFS
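
A sketch of setting the replication factor (the hadoop call assumes a running cluster and a hypothetical path, so it is a comment; the arithmetic below illustrates why the factor matters: raw storage grows linearly with it):

```shell
# Assumes a running cluster; the path is hypothetical. -w waits until the
# target replication is reached:
# hadoop fs -setrep -w 2 /daxiong/daxiong.txt
# Raw cluster storage grows linearly with the replication factor:
logical_mb=10      # hypothetical logical file size in MB
replication=2
echo $((logical_mb * replication))   # raw MB consumed across all DataNodes
```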
