File system (FS) shell commands are invoked in the form $HADOOP_HOME/bin/hadoop fs <args>.
All FS shell commands take URI paths as arguments.
The URI format is scheme://authority/path.
For HDFS the scheme is hdfs, e.g. hdfs://localhost:9000/; for the local file system the scheme is file, e.g. file://.
The scheme and authority are both optional; if unspecified, the default scheme from the configuration is used.
For example, /parent/child can be written as hdfs://namenode:namenodePort/parent/child, or simply as /parent/child (assuming the configuration points to namenode:namenodePort).
Most FS shell commands behave like their corresponding Unix shell commands.
For further study, see:
Hadoop HDFS Concepts series, Part 15: Mastering Shell Access to HDFS
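As a sketch of the two equivalent invocation forms described above (the namenode host, port, and paths here are hypothetical, and a running HDFS cluster is required):

```shell
# Full-URI form, naming a hypothetical namenode explicitly
hdfs dfs -ls hdfs://namenode:9000/parent/child
# Short form; the scheme and authority are resolved from fs.defaultFS in core-site.xml
hdfs dfs -ls /parent/child
```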
Usage: hdfs dfs -appendToFile <localsrc> ... <dst>
Appends one or more local files to the specified file in HDFS. Can also read input from stdin.
Example:
hdfs dfs -appendToFile localfile /user/hadoop/hadoopfile
hdfs dfs -appendToFile localfile1 localfile2 /user/hadoop/hadoopfile
hdfs dfs -appendToFile localfile hdfs://nn.example.com/hadoop/hadoopfile
hdfs dfs -appendToFile - hdfs://nn.example.com/hadoop/hadoopfile (reads the input from stdin)
Exit Code:
Returns 0 on success and 1 on error.
Usage: hdfs dfs -cat URI [URI ...]
Copies source paths to stdout, displaying their contents.
Example:
hdfs dfs -cat hdfs://nn1.example.com/file1 hdfs://nn2.example.com/file2
hdfs dfs -cat file:///file3 /user/hadoop/file4
Exit Code:
Returns 0 on success and -1 on error.
Usage: hdfs dfs -chgrp [-R] GROUP URI [URI ...]
Changes the group association of files. The user must be the owner of the file, or a superuser.
Options:
The -R option will make the change recursively through the directory structure.
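The section above gives no example; a minimal sketch with a hypothetical group and paths (requires a running HDFS cluster):

```shell
# Change the group of a single file
hdfs dfs -chgrp hadoop /user/hadoop/file1
# Recursively change the group of everything under a directory
hdfs dfs -chgrp -R hadoop /user/hadoop/dir1
```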
Usage: hdfs dfs -chmod [-R] <MODE[,MODE]... | OCTALMODE> URI [URI ...]
Changes the permissions of files. The user must be the owner of the file, or a superuser.
Options:
The -R option will make the change recursively through the directory structure.
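A minimal sketch of both octal and symbolic modes, with hypothetical paths (requires a running HDFS cluster):

```shell
# Octal mode: rwxr-xr-x
hdfs dfs -chmod 755 /user/hadoop/file1
# Symbolic mode, applied recursively: add write for owner, remove write for group
hdfs dfs -chmod -R u+w,g-w /user/hadoop/dir1
```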
Usage: hdfs dfs -chown [-R] [OWNER][:[GROUP]] URI [URI ...]
Changes the owner of files. The user must be a superuser.
Options:
The -R option will make the change recursively through the directory structure.
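A minimal sketch with a hypothetical owner, group, and paths (requires a running HDFS cluster and superuser privileges):

```shell
# Change owner and group together
hdfs dfs -chown hadoop:hadoop /user/hadoop/file1
# Recursively change only the owner
hdfs dfs -chown -R hadoop /user/hadoop/dir1
```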
Usage: hdfs dfs -copyFromLocal <localsrc> URI
Similar to the put command, except that the source is restricted to a local file reference.
Options:
The -f option will overwrite the destination if it already exists.
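A minimal sketch with hypothetical paths (requires a running HDFS cluster):

```shell
# Copy a local file into HDFS
hdfs dfs -copyFromLocal localfile /user/hadoop/hadoopfile
# Overwrite the destination if it already exists
hdfs dfs -copyFromLocal -f localfile /user/hadoop/hadoopfile
```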
Usage: hdfs dfs -copyToLocal [-ignorecrc] [-crc] URI <localdst>
Similar to the get command, except that the destination is restricted to a local file reference.
Usage: hdfs dfs -count [-q] [-h] <paths>
Counts the number of directories, files, and bytes under the given paths. The output columns with -count are: DIR_COUNT, FILE_COUNT, CONTENT_SIZE, FILE_NAME
The output columns with -count -q are: QUOTA, REMAINING_QUOTA, SPACE_QUOTA, REMAINING_SPACE_QUOTA, DIR_COUNT, FILE_COUNT, CONTENT_SIZE, FILE_NAME
The -h option shows sizes in human readable format.
Example:
hdfs dfs -count hdfs://nn1.example.com/file1 hdfs://nn2.example.com/file2
hdfs dfs -count -q hdfs://nn1.example.com/file1
hdfs dfs -count -q -h hdfs://nn1.example.com/file1
Exit Code:
Returns 0 on success and -1 on error.
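The four -count columns can be pulled apart with standard Unix tools; the sketch below feeds awk a fabricated sample line (not real cluster output) just to label the columns:

```shell
# Hypothetical `hdfs dfs -count` output: DIR_COUNT FILE_COUNT CONTENT_SIZE FILE_NAME
line='           1            2              16384 /user/hadoop/dir1'
echo "$line" | awk '{printf "dirs=%s files=%s bytes=%s path=%s\n", $1, $2, $3, $4}'
```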
Usage: hdfs dfs -cp [-f] [-p | -p[topax]] URI [URI ...] <dest>
Copies files or directories from source to destination. This command allows multiple sources as well, in which case the destination must be a directory.
Options:
The -f option will overwrite the destination if it already exists.
The -p option will preserve file attributes [topx] (timestamps, ownership, permission, ACL, XAttr). If -p is specified with no argument, it preserves timestamps, ownership, and permission.
Example:
hdfs dfs -cp /user/hadoop/file1 /user/hadoop/file2
hdfs dfs -cp /user/hadoop/file1 /user/hadoop/file2 /user/hadoop/dir
Exit Code:
Returns 0 on success and -1 on error.
Usage: hdfs dfs -du [-s] [-h] URI [URI ...]
Displays the sizes of the files and directories contained in the given directory, or the length of a file in case it is just a file.
Options:
The -s option will result in an aggregate summary of file lengths being displayed, rather than the individual files.
The -h option will format file sizes in a human-readable fashion (e.g. 64.0m instead of 67108864).
Example:
hdfs dfs -du /user/hadoop/dir1 /user/hadoop/file1 hdfs://nn.example.com/user/hadoop/dir1
Exit Code: Returns 0 on success and -1 on error.
Usage: hdfs dfs -dus <args>
Displays a summary of file lengths.
Note: This command is deprecated. Instead use hdfs dfs -du -s.
Usage: hdfs dfs -expunge
Empties the trash.
Usage: hdfs dfs -get [-ignorecrc] [-crc] <src> <localdst>
Copy files to the local file system. Files that fail the CRC check may be copied with the -ignorecrc option. Files and CRCs may be copied using the -crc option.
Example:
hdfs dfs -get /user/hadoop/file localfile
hdfs dfs -get hdfs://nn.example.com/user/hadoop/file localfile
Exit Code:
Returns 0 on success and -1 on error.
Usage: hdfs dfs -getfacl [-R] <path>
Displays the Access Control Lists (ACLs) of files and directories. If a directory has a default ACL, getfacl also displays the default ACL.
Options:
-R: List the ACLs of all files and directories recursively.
path: File or directory to list.
Examples:
hdfs dfs -getfacl /file
hdfs dfs -getfacl -R /dir
Exit Code:
Returns 0 on success and non-zero on error.
Usage: hdfs dfs -getfattr [-R] -n name | -d [-e en] <path>
Displays the extended attribute names and values (if any) for a file or directory.
Options:
-R: Recursively list the attributes for all files and directories.
-n name: Dump the named extended attribute value.
-d: Dump all extended attribute values associated with pathname.
-e encoding: Encode values after retrieving them. Valid encodings are "text", "hex", and "base64".
path: The file or directory.
Examples:
hdfs dfs -getfattr -d /file
hdfs dfs -getfattr -R -n user.myAttr /dir
Exit Code:
Returns 0 on success and non-zero on error.
Usage: hdfs dfs -getmerge <src> <localdst> [addnl]
Takes a source directory and a destination file as input, and concatenates the files under src into the destination local file. Optionally, addnl can be set to enable adding a newline character at the end of each file.
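A minimal sketch with hypothetical paths (requires a running HDFS cluster; the addnl behavior varies by Hadoop version, with newer releases using a -nl flag instead):

```shell
# Concatenate every file under an HDFS directory into one local file
hdfs dfs -getmerge /user/hadoop/dir1 ./merged.txt
```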
Usage: hdfs dfs -ls [-R] <args>
Options:
The -R option will return stat recursively through the directory structure.
For a file returns stat on the file with the following format:
permissions number_of_replicas userid groupid filesize modification_date modification_time filename
For a directory it returns the list of its direct children, as in Unix. A directory is listed as:
permissions userid groupid modification_date modification_time dirname
Example:
hdfs dfs -ls /user/hadoop/file1
Exit Code:
Returns 0 on success and -1 on error.
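Because the -ls file format above is whitespace-delimited, individual columns can be extracted with awk; the sample line below is fabricated for illustration, not real cluster output:

```shell
# Hypothetical `hdfs dfs -ls` output line for a file
line='-rw-r--r--   3 hadoop supergroup       1366 2014-07-01 10:30 /user/hadoop/file1'
# Fields: permissions, replicas, user, group, size, date, time, filename
echo "$line" | awk '{printf "replicas=%s owner=%s name=%s\n", $2, $3, $8}'
```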
Usage: hdfs dfs -lsr <args>
Recursive version of ls.
Note: This command is deprecated. Instead use hdfs dfs -ls -R
Usage: hdfs dfs -mkdir [-p] <paths>
Takes path URIs as arguments and creates directories.
Options:
The -p option behavior is much like Unix mkdir -p, creating parent directories along the path.
Example:
hdfs dfs -mkdir /user/hadoop/dir1 /user/hadoop/dir2
hdfs dfs -mkdir hdfs://nn1.example.com/user/hadoop/dir hdfs://nn2.example.com/user/hadoop/dir
Exit Code:
Returns 0 on success and -1 on error.
Usage: hdfs dfs -moveFromLocal <localsrc> <dst>
Similar to the put command, except that the source localsrc is deleted after it is copied.
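A minimal sketch with hypothetical paths (requires a running HDFS cluster); note the local source is removed on success:

```shell
# Upload localfile into HDFS, then delete the local copy
hdfs dfs -moveFromLocal localfile /user/hadoop/hadoopfile
```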
Usage: hdfs dfs -moveToLocal [-crc] <src> <dst>
Displays a "Not implemented yet" message.
Usage: hdfs dfs -mv URI [URI ...] <dest>
Moves files from source to destination. This command allows multiple sources as well, in which case the destination needs to be a directory. Moving files across file systems is not permitted.
Example:
hdfs dfs -mv /user/hadoop/file1 /user/hadoop/file2
hdfs dfs -mv hdfs://nn.example.com/file1 hdfs://nn.example.com/file2 hdfs://nn.example.com/file3 hdfs://nn.example.com/dir1
Exit Code:
Returns 0 on success and -1 on error.
Usage: hdfs dfs -put <localsrc> ... <dst>
Copy single src, or multiple srcs from local file system to the destination file system. Also reads input from stdin and writes to destination file system.
Example:
hdfs dfs -put localfile /user/hadoop/hadoopfile
hdfs dfs -put localfile1 localfile2 /user/hadoop/hadoopdir
hdfs dfs -put localfile hdfs://nn.example.com/hadoop/hadoopfile
hdfs dfs -put - hdfs://nn.example.com/hadoop/hadoopfile (reads the input from stdin)
Exit Code:
Returns 0 on success and -1 on error.
Usage: hdfs dfs -rm [-f] [-r|-R] [-skipTrash] URI [URI ...]
Delete files specified as args.
Options:
The -f option will not display a diagnostic message or modify the exit status to reflect an error if the file does not exist.
The -R option deletes the directory and any content under it recursively. The -r option is equivalent to -R.
The -skipTrash option will bypass trash, if enabled, and delete the specified file(s) immediately.
Example:
hdfs dfs -rm hdfs://nn.example.com/file /user/hadoop/emptydir
Exit Code:
Returns 0 on success and -1 on error.
Usage: hdfs dfs -rmr [-skipTrash] URI [URI ...]
Recursive version of delete.
Note: This command is deprecated. Instead use hdfs dfs -rm -r
Usage: hdfs dfs -setfacl [-R] [-b|-k -m|-x <acl_spec> <path>]|[--set <acl_spec> <path>]
Sets Access Control Lists (ACLs) of files and directories.
Options:
-b: Remove all but the base ACL entries. The entries for user, group and others are retained for compatibility with permission bits.
-k: Remove the default ACL.
-R: Apply operations to all files and directories recursively.
-m: Modify ACL. New entries are added to the ACL, and existing entries are retained.
-x: Remove specified ACL entries. Other ACL entries are retained.
--set: Fully replace the ACL, discarding all existing entries. The acl_spec must include entries for user, group, and others for compatibility with permission bits.
Examples:
hdfs dfs -setfacl -m user:hadoop:rw- /file
hdfs dfs -setfacl -x user:hadoop /file
hdfs dfs -setfacl -b /file
hdfs dfs -setfacl -k /dir
hdfs dfs -setfacl --set user::rw-,user:hadoop:rw-,group::r--,other::r-- /file
hdfs dfs -setfacl -R -m user:hadoop:r-x /dir
hdfs dfs -setfacl -m default:user:hadoop:r-x /dir
Exit Code:
Returns 0 on success and non-zero on error.
Usage: hdfs dfs -setfattr -n name [-v value] | -x name <path>
Sets an extended attribute name and value for a file or directory.
Options:
-n name: The extended attribute name.
-v value: The extended attribute value.
-x name: Remove the extended attribute.
path: The file or directory.
Examples:
hdfs dfs -setfattr -n user.myAttr -v myValue /file
hdfs dfs -setfattr -n user.noValue /file
hdfs dfs -setfattr -x user.myAttr /file
Exit Code:
Returns 0 on success and non-zero on error.
Usage: hdfs dfs -setrep [-R] [-w] <numReplicas> <path>
Changes the replication factor of a file. If path is a directory then the command recursively changes the replication factor of all files under the directory tree rooted at path.
Options:
The -w flag requests that the command wait for the replication to complete. This can potentially take a very long time.
The -R flag is accepted for backwards compatibility. It has no effect.
Example:
hdfs dfs -setrep -w 3 /user/hadoop/dir1
Exit Code:
Returns 0 on success and -1 on error.
Usage: hdfs dfs -stat URI [URI ...]
Returns the stat information on the path.
Example:
hdfs dfs -stat path
Exit Code: Returns 0 on success and -1 on error.
Usage: hdfs dfs -tail [-f] URI
Displays last kilobyte of the file to stdout.
Options:
The -f option will output appended data as the file grows, as in Unix.
Example:
hdfs dfs -tail pathname
Exit Code: Returns 0 on success and -1 on error.
Usage: hdfs dfs -test -[ezd] URI
Options:
The -e option will check to see if the file exists, returning 0 if true.
The -z option will check to see if the file is zero length, returning 0 if true.
The -d option will check to see if the path is a directory, returning 0 if true.
Example:
hdfs dfs -test -e filename
Usage: hdfs dfs -text <src>
Takes a source file and outputs the file in text format. The allowed formats are zip and TextRecordInputStream.
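A minimal sketch with a hypothetical path (requires a running HDFS cluster):

```shell
# Decode a sequence file (or other supported format) to plain text on stdout
hdfs dfs -text /user/hadoop/data.seq
```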
Usage: hdfs dfs -touchz URI [URI ...]
Create a file of zero length.
Example:
hdfs dfs -touchz pathname
Exit Code: Returns 0 on success and -1 on error.