1、appendToFile
## Append one or more files from the local file system to a specified file in HDFS. Input can also be read from stdin.
Usage: hdfs dfs -appendToFile <localsrc> ... <dst>
· hdfs dfs -appendToFile localfile /user/hadoop/hadoopfile
· hdfs dfs -appendToFile localfile1 localfile2 /user/hadoop/hadoopfile
· hdfs dfs -appendToFile localfile hdfs://nn.example.com/hadoop/hadoopfile
· hdfs dfs -appendToFile - hdfs://nn.example.com/hadoop/hadoopfile Reads the input from stdin.
2、cat
## Display file contents.
Usage: hdfs dfs -cat URI [URI ...]
Example:
· hdfs dfs -cat hdfs://nn1.example.com/file1 hdfs://nn2.example.com/file2
· hdfs dfs -cat file:///file3 /user/hadoop/file4
3、chgrp (change group)
## Change the group association of files.
Usage: hdfs dfs -chgrp [-R] GROUP URI [URI ...]
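Example (illustrative sketch; the group hadoop and the paths are placeholders, not values from this reference):
· hdfs dfs -chgrp hadoop /user/hadoop/file1
· hdfs dfs -chgrp -R hadoop /user/hadoop/dir1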
4、chmod
## Change the permissions of files.
Usage: hdfs dfs -chmod [-R] <MODE[,MODE]... | OCTALMODE> URI [URI ...]
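Example (illustrative sketch; the modes and paths are placeholders):
· hdfs dfs -chmod 644 /user/hadoop/file1
· hdfs dfs -chmod -R u+rwx,g+rx,o-rx /user/hadoop/dir1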
5、chown
## Change the owner of files.
Usage: hdfs dfs -chown [-R] [OWNER][:[GROUP]] URI [URI ...]
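Example (illustrative sketch; the owner/group hadoop and the paths are placeholders):
· hdfs dfs -chown hadoop /user/hadoop/file1
· hdfs dfs -chown -R hadoop:hadoop /user/hadoop/dir1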
6、copyFromLocal
## Similar to put command, except that the source is restricted to a local file reference.
Usage: hdfs dfs -copyFromLocal <localsrc> URI
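Example (illustrative sketch; localfile and the HDFS path are placeholders):
· hdfs dfs -copyFromLocal localfile /user/hadoop/hadoopfile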
7、copyToLocal
## Similar to get command, except that the destination is restricted to a local file reference.
Usage: hdfs dfs -copyToLocal [-ignorecrc] [-crc] URI <localdst>
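Example (illustrative sketch; the HDFS path and localfile are placeholders):
· hdfs dfs -copyToLocal /user/hadoop/hadoopfile localfile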
8、count
## Count the number of directories, files, and bytes under the given paths. The output columns with -count are: DIR_COUNT, FILE_COUNT, CONTENT_SIZE, FILE_NAME
Usage: hdfs dfs -count [-q] [-h] <paths>
The output columns with -count -q are: QUOTA, REMAINING_QUOTA, SPACE_QUOTA, REMAINING_SPACE_QUOTA, DIR_COUNT, FILE_COUNT, CONTENT_SIZE, FILE_NAME
The -h option shows sizes in human readable format.
Example:
· hdfs dfs -count hdfs://nn1.example.com/file1 hdfs://nn2.example.com/file2
· hdfs dfs -count -q hdfs://nn1.example.com/file1
· hdfs dfs -count -q -h hdfs://nn1.example.com/file1
9、cp
## Copy files or directories. The destination can be overwritten (-f), and the original attributes such as permissions can be preserved (-p).
Usage: hdfs dfs -cp [-f] [-p | -p[topax]] URI [URI ...]
Example:
· hdfs dfs -cp /user/hadoop/file1 /user/hadoop/file2
· hdfs dfs -cp /user/hadoop/file1 /user/hadoop/file2 /user/hadoop/dir
10、du
## Display the sizes of files and directories.
Usage: hdfs dfs -du [-s] [-h] URI [URI ...]
Example:
· hdfs dfs -du /user/hadoop/dir1 /user/hadoop/file1 hdfs://nn.example.com/user/hadoop/dir1
11、dus
## Displays a summary of file lengths
## Note: This command is deprecated. Instead use hdfs dfs -du -s.
Usage: hdfs dfs -dus <args>
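Example of the recommended replacement (illustrative sketch; the path is a placeholder):
· hdfs dfs -du -s /user/hadoop/dir1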
12、expunge
## Empty the trash.
Usage: hdfs dfs -expunge
13、get
## Copy files to the local file system. Files that fail the CRC check may be copied with the -ignorecrc option. Files and CRCs may be copied using the -crc option.
Usage: hdfs dfs -get [-ignorecrc] [-crc] <src> <localdst>
Example:
· hdfs dfs -get /user/hadoop/file localfile
· hdfs dfs -get hdfs://nn.example.com/user/hadoop/file localfile
14、getfacl
## Display the Access Control Lists (ACLs) of files and directories.
Usage: hdfs dfs -getfacl [-R] <path>
Options:
· -R: List the ACLs of all files and directories recursively.
· path: File or directory to list.
Examples:
· hdfs dfs -getfacl /file
· hdfs dfs -getfacl -R /dir
15、getfattr
## Displays the extended attribute names and values (if any) for a file or directory.
Usage: hdfs dfs -getfattr [-R] -n name | -d [-e en] <path>
Options:
· -R: Recursively list the attributes for all files and directories.
· -n name: Dump the named extended attribute value.
· -d: Dump all extended attribute values associated with pathname.
· -e encoding: Encode values after retrieving them. Valid encodings are "text", "hex", and "base64". Values encoded as text strings are enclosed in double quotes ("), and values encoded as hexadecimal and base64 are prefixed with 0x and 0s, respectively.
· path: The file or directory.
Examples:
· hdfs dfs -getfattr -d /file
· hdfs dfs -getfattr -R -n user.myAttr /dir
16、getmerge
## Merge the files under a source directory into a single file on the local file system.
Usage: hdfs dfs -getmerge <src> <localdst> [addnl]
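Example (illustrative sketch; the source directory and local output file are placeholders):
· hdfs dfs -getmerge /user/hadoop/dir1 ./dir1-merged.txt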
17、ls
## List files and directories. For a file, shows stats for the file; for a directory, lists its direct children as in Unix.
Usage: hdfs dfs -ls [-R] <args>
Example:
· hdfs dfs -ls /user/hadoop/file1
18、lsr
## Recursive version of ls.
## Note: This command is deprecated. Instead use hdfs dfs -ls -R.
Usage: hdfs dfs -lsr <args>
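Example of the recommended replacement (illustrative sketch; the path is a placeholder):
· hdfs dfs -ls -R /user/hadoop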
19、mkdir
Usage: hdfs dfs -mkdir [-p] <paths>
## Takes path uri's as argument and creates directories.
Options:
· The -p option behavior is much like Unix mkdir -p, creating parent directories along the path.
Example:
· hdfs dfs -mkdir /user/hadoop/dir1 /user/hadoop/dir2
· hdfs dfs -mkdir hdfs://nn1.example.com/user/hadoop/dir hdfs://nn2.example.com/user/hadoop/dir
20、moveFromLocal
Usage: hdfs dfs -moveFromLocal <localsrc> <dst>
## Similar to put command, except that the source localsrc is deleted after it's copied.
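Example (illustrative sketch; localfile and the HDFS path are placeholders):
· hdfs dfs -moveFromLocal localfile /user/hadoop/hadoopfile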
21、moveToLocal
Usage: hdfs dfs -moveToLocal [-crc] <src> <dst>
## Displays a "Not implemented yet" message.
22、mv
Usage: hdfs dfs -mv URI [URI ...] <dest>
## Moves files from source to destination. This command allows multiple sources as well in which case the destination needs to be a directory. Moving files across file systems is not permitted.
Example:
· hdfs dfs -mv /user/hadoop/file1 /user/hadoop/file2
· hdfs dfs -mv hdfs://nn.example.com/file1 hdfs://nn.example.com/file2 hdfs://nn.example.com/file3 hdfs://nn.example.com/dir1
23、put
Usage: hdfs dfs -put <localsrc> ... <dst>
## Copy single src, or multiple srcs from local file system to the destination file system. Also reads input from stdin and writes to destination file system.
· hdfs dfs -put localfile /user/hadoop/hadoopfile
· hdfs dfs -put localfile1 localfile2 /user/hadoop/hadoopdir
· hdfs dfs -put localfile hdfs://nn.example.com/hadoop/hadoopfile
· hdfs dfs -put - hdfs://nn.example.com/hadoop/hadoopfile Reads the input from stdin.
24、rm
Usage: hdfs dfs -rm [-f] [-r|-R] [-skipTrash] URI [URI ...]
## Delete files specified as args.
Options:
· The -f option will not display a diagnostic message or modify the exit status to reflect an error if the file does not exist.
· The -R option deletes the directory and any content under it recursively.
· The -r option is equivalent to -R.
· The -skipTrash option will bypass trash, if enabled, and delete the specified file(s) immediately. This can be useful when it is necessary to delete files from an over-quota directory.
Example:
· hdfs dfs -rm hdfs://nn.example.com/file /user/hadoop/emptydir
25、rmr
Usage: hdfs dfs -rmr [-skipTrash] URI [URI ...]
## Recursive version of delete.
## Note: This command is deprecated. Instead use hdfs dfs -rm -r
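Example of the recommended replacement (illustrative sketch; the path is a placeholder):
· hdfs dfs -rm -r /user/hadoop/dir1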
26、setfacl
Usage: hdfs dfs -setfacl [-R] [-b|-k -m|-x <acl_spec> <path>]|[--set <acl_spec> <path>]
## Sets Access Control Lists (ACLs) of files and directories.
Options:
· -b: Remove all but the base ACL entries. The entries for user, group and others are retained for compatibility with permission bits.
· -k: Remove the default ACL.
· -R: Apply operations to all files and directories recursively.
· -m: Modify ACL. New entries are added to the ACL, and existing entries are retained.
· -x: Remove specified ACL entries. Other ACL entries are retained.
· --set: Fully replace the ACL, discarding all existing entries. The acl_spec must include entries for user, group, and others for compatibility with permission bits.
· acl_spec: Comma separated list of ACL entries.
· path: File or directory to modify.
Examples:
· hdfs dfs -setfacl -m user:hadoop:rw- /file
· hdfs dfs -setfacl -x user:hadoop /file
· hdfs dfs -setfacl -b /file
· hdfs dfs -setfacl -k /dir
· hdfs dfs -setfacl --set user::rw-,user:hadoop:rw-,group::r--,other::r-- /file
· hdfs dfs -setfacl -R -m user:hadoop:r-x /dir
· hdfs dfs -setfacl -m default:user:hadoop:r-x /dir
27、setfattr
Usage: hdfs dfs -setfattr -n name [-v value] | -x name <path>
## Sets an extended attribute name and value for a file or directory.
Options:
· -n name: The extended attribute name.
· -v value: The extended attribute value. There are three different encoding methods for the value. If the argument is enclosed in double quotes, then the value is the string inside the quotes. If the argument is prefixed with 0x or 0X, then it is taken as a hexadecimal number. If the argument begins with 0s or 0S, then it is taken as a base64 encoding.
· -x name: Remove the extended attribute.
· path: The file or directory.
Examples:
· hdfs dfs -setfattr -n user.myAttr -v myValue /file
· hdfs dfs -setfattr -n user.noValue /file
· hdfs dfs -setfattr -x user.myAttr /file
28、setrep
Usage: hdfs dfs -setrep [-R] [-w] <numReplicas> <path>
## Changes the replication factor of a file. If path is a directory then the command recursively changes the replication factor of all files under the directory tree rooted at path.
Options:
· The -w flag requests that the command wait for the replication to complete. This can potentially take a very long time.
· The -R flag is accepted for backwards compatibility. It has no effect.
Example:
· hdfs dfs -setrep -w 3 /user/hadoop/dir1
29、stat
Usage: hdfs dfs -stat URI [URI ...]
## Returns the stat information on the path.
Example:
· hdfs dfs -stat path
30、tail
Usage: hdfs dfs -tail [-f] URI
## Displays last kilobyte of the file to stdout.
Options:
· The -f option will output appended data as the file grows, as in Unix.
Example:
· hdfs dfs -tail pathname
31、test
Usage: hdfs dfs -test -[ezd] URI
Options:
· The -e option will check to see if the file exists, returning 0 if true.
· The -z option will check to see if the file is zero length, returning 0 if true.
· The -d option will check to see if the path is directory, returning 0 if true.
Example:
· hdfs dfs -test -e filename
32、text
Usage: hdfs dfs -text <src>
## Takes a source file and outputs the file in text format. The allowed formats are zip and TextRecordInputStream.
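Example (illustrative sketch; data.seq is a placeholder for a SequenceFile or other supported source file):
· hdfs dfs -text /user/hadoop/data.seq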
33、touchz
Usage: hdfs dfs -touchz URI [URI ...]
## Create a file of zero length.
Example:
· hdfs dfs -touchz pathname