Hadoop HDFS File Operation API

Operating HDFS with Java:

import java.io.InputStream;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

import static org.junit.Assert.assertTrue;

public void testApp() throws Exception {

    String uri = "hdfs://192.168.0.88:9000/";
    Configuration config = new Configuration();
    FileSystem fs = FileSystem.get(URI.create(uri), config);

    // List all files and directories under /test/input/ on HDFS
    FileStatus[] statuses = fs.listStatus(new Path("/test/input"));
    for (FileStatus status : statuses) {
        System.out.println(status);
    }

    // Create a file under /test/input/ on HDFS and write one line of text to it
    FSDataOutputStream os = fs.create(new Path("/test/input/test.log"));
    os.write("Hello World!".getBytes());
    os.flush();
    os.close();

    // Print the contents of the file just written to stdout;
    // the final "true" tells copyBytes to close the stream when done
    InputStream is = fs.open(new Path("/test/input/test.log"));
    IOUtils.copyBytes(is, System.out, 1024, true);

    assertTrue(true);
}

Operating HDFS with the shell:

Usage: hadoop fs [generic options]
[-appendToFile <localsrc> ... <dst>]
[-cat [-ignoreCrc] <src> ...]
[-checksum <src> ...]
[-chgrp [-R] GROUP PATH...]
[-chmod [-R] <MODE[,MODE]... | OCTALMODE> PATH...]
[-chown [-R] [OWNER][:[GROUP]] PATH...]
[-copyFromLocal [-f] [-p] <localsrc> ... <dst>]
[-copyToLocal [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
[-count [-q] <path> ...]
[-cp [-f] [-p] <src> ... <dst>]
[-createSnapshot <snapshotDir> [<snapshotName>]]
[-deleteSnapshot <snapshotDir> <snapshotName>]
[-df [-h] [<path> ...]]
[-du [-s] [-h] <path> ...]
[-expunge]
[-get [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
[-getmerge [-nl] <src> <localdst>]
[-help [cmd ...]]
[-ls [-d] [-h] [-R] [<path> ...]]
[-mkdir [-p] <path> ...]
[-moveFromLocal <localsrc> ... <dst>]
[-moveToLocal <src> <localdst>]
[-mv <src> ... <dst>]
[-put [-f] [-p] <localsrc> ... <dst>]
[-renameSnapshot <snapshotDir> <oldName> <newName>]
[-rm [-f] [-r|-R] [-skipTrash] <src> ...]
[-rmdir [--ignore-fail-on-non-empty] <dir> ...]
[-setrep [-R] [-w] <rep> <path> ...]
[-stat [format] <path> ...]
[-tail [-f] <file>]
[-test -[defsz] <path>]
[-text [-ignoreCrc] <src> ...]
[-touchz <path> ...]
[-usage [cmd ...]]
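As a sketch of how these commands fit together, the following sequence mirrors the Java example above: create a directory, upload a file, list it, print it, copy it back, and clean up. The local file name `localfile.log` and the destination path are hypothetical and assume a running HDFS cluster.

```shell
# Create a directory on HDFS (-p creates parent directories as needed)
hadoop fs -mkdir -p /test/input

# Upload a local file to HDFS, overwriting any existing copy (-f)
hadoop fs -put -f localfile.log /test/input/

# List the directory (-h prints sizes in human-readable form)
hadoop fs -ls -h /test/input

# Print the file's contents to stdout
hadoop fs -cat /test/input/localfile.log

# Copy the file back to the local filesystem
hadoop fs -get /test/input/localfile.log ./copy.log

# Remove the file, bypassing the trash
hadoop fs -rm -skipTrash /test/input/localfile.log
```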
