Hadoop provides a native Java API for file system operations: creating, renaming, and deleting files or directories, opening files for reading or writing, setting file permissions, and so on. This works well for applications running inside the Hadoop cluster, but what about the many external applications that also need to work with HDFS? How do we solve that problem? Hortonworks developed some additional APIs to support these requirements, based on standard REST functionality.
WebHDFS is built around standard HTTP operations such as GET, PUT, POST, and DELETE. Operations like OPEN, GETFILESTATUS, and LISTSTATUS use HTTP GET; others such as CREATE, MKDIRS, RENAME, and SETPERMISSION rely on HTTP PUT. The APPEND operation is based on HTTP POST, while DELETE uses HTTP DELETE.
Authentication can be based on the user.name parameter or, if security is enabled, on Kerberos. The standard URL format is as follows:
http://host:port/webhdfs/v1/?op=operation&user.name=username
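For example, a LISTSTATUS request against /tmp might look like the following sketch (the host hadoop-master, the default namenode web port 50070, and the user app are assumptions carried over from the examples later in this article):
curl -i "http://hadoop-master:50070/webhdfs/v1/tmp?op=LISTSTATUS&user.name=app"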
The default startup port is 14000; you can configure this value in httpfs-env.sh. All environment variables related to httpfs can be customized in httpfs-env.sh.
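As a minimal sketch, the port can be overridden in httpfs-env.sh via the HTTPFS_HTTP_PORT variable (the value shown here is simply the default):
# httpfs-env.sh: override the HTTP port HttpFS listens on
export HTTPFS_HTTP_PORT=14000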
Edit the core-site.xml file and add the following:
…
<property>
  <name>hadoop.proxyuser.#HTTPFSUSER#.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.#HTTPFSUSER#.groups</name>
  <value>*</value>
</property>
…
Note that #HTTPFSUSER# stands for the name of the Linux user that starts httpfs. For example, if httpfs is started by the user app, the property names become hadoop.proxyuser.app.hosts and hadoop.proxyuser.app.groups.
Edit the hdfs-site.xml file and add the following properties:
<property>
  <name>dfs.webhdfs.enabled</name>
  <value>true</value>
</property>
<property>
  <name>dfs.permissions</name>
  <value>false</value>
</property>
For more information about HDFS ports, see the Cloudera blog.
To start httpfs, execute the following command:
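In a stock Apache Hadoop 2.x distribution this is typically done with the bundled httpfs.sh script (the exact path may vary by distribution):
# start the HttpFS server (listens on port 14000 by default)
$HADOOP_HOME/sbin/httpfs.sh start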
Once everything is configured, it is well worth testing that the setup actually works. Suppose, for instance, that we need the file status of the /tmp directory in HDFS. A GETFILESTATUS call does the job; the examples below assume the hadoop-master:14000 HttpFS endpoint and the app user used throughout this article.
From the command line:
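curl -i "http://hadoop-master:14000/webhdfs/v1/tmp?op=GETFILESTATUS&user.name=app"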
In a browser, open:
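http://hadoop-master:14000/webhdfs/v1/tmp?op=GETFILESTATUS&user.name=app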
The response:
- {"FileStatus":{"pathSuffix":"","type":"DIRECTORY","length":0,"owner":"app","group":"supergroup","permission":"720","accessTime":0,"modificationTime":1391352186043,"blockSize":0,"replication":0}}
The sections below walk through several common WebHDFS operations.
5.1 Create
Create a directory /tmp/webhdfs:
curl -i -X PUT "http://hadoop-master:14000/webhdfs/v1/tmp/webhdfs?user.name=app&op=MKDIRS"
HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
Set-Cookie: hadoop.auth="u=app&p=app&t=simple&e=1393356198808&s=74NhnIdH7WceKqgTW7UJ1ia9h10="; Version=1; Path=/
Content-Type: application/json
Transfer-Encoding: chunked
Date: Tue, 25 Feb 2014 09:23:23 GMT
The equivalent Hadoop command is:
hdfs dfs -mkdir /tmp/webhdfs
Create a file
Creating a file takes two steps: the first step issues the request to the namenode, which answers with a temporary redirect; the second step performs a PUT against the URL returned in the Location header of that first response.
curl -i -X PUT "http://hadoop-master:50070/webhdfs/v1/tmp/webhdfs/webhdfs-test.txt?user.name=app&op=CREATE"
HTTP/1.1 307 TEMPORARY_REDIRECT
Cache-Control: no-cache
Expires: Thu, 01-Jan-1970 00:00:00 GMT
Date: Wed, 26 Feb 2014 14:35:29 GMT
Pragma: no-cache
Date: Wed, 26 Feb 2014 14:35:29 GMT
Pragma: no-cache
Set-Cookie: hadoop.auth="u=app&p=app&t=simple&e=1393461329238&s=as2cO3rRtvj8psr0jAhMk6YBHRY="; Path=/
Location: http://machine-2:50075/webhdfs/v1/tmp/webhdfs/webhdfs-test.txt?op=CREATE&user.name=app&namenoderpcaddress=hadoop-master:9000&overwrite=false
Content-Type: application/octet-stream
Content-Length: 0
Server: Jetty(6.1.26)
Send the data to the specified file:
curl -i -X PUT -T webhdfs-test.txt "http://machine-2:50075/webhdfs/v1/tmp/webhdfs/webhdfs-test.txt?op=CREATE&user.name=app&namenoderpcaddress=hadoop-master:9000&overwrite=false"
HTTP/1.1 100 Continue
HTTP/1.1 201 Created
Cache-Control: no-cache
Expires: Wed, 26 Feb 2014 14:37:51 GMT
Date: Wed, 26 Feb 2014 14:37:51 GMT
Pragma: no-cache
Expires: Wed, 26 Feb 2014 14:37:51 GMT
Date: Wed, 26 Feb 2014 14:37:51 GMT
Pragma: no-cache
Location: webhdfs://0.0.0.0:50070/tmp/webhdfs/webhdfs-test.txt
Content-Type: application/octet-stream
Content-Length: 0
Server: Jetty(6.1.26)
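To confirm that the file was created, a quick listing with the regular Hadoop CLI (same path as above):
hdfs dfs -ls /tmp/webhdfs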
5.2 Read a file
Read the /input file, using the -L flag so that curl follows the redirect:
[app@hadoop-master ~]$ curl -i -L "http://hadoop-master:14000/webhdfs/v1/input?op=OPEN&user.name=app"
HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
Set-Cookie: hadoop.auth="u=app&p=app&t=simple&e=1393420279602&s=dSnnx9oOwgGwVV/Q/ZmFyRjbtFU="; Version=1; Path=/
Content-Type: application/octet-stream
Content-Length: 1358
Date: Wed, 26 Feb 2014 03:11:19 GMT
Apache HBase [1] is an open-source, distributed, versioned, column-oriented
store modeled after Google's Bigtable: A Distributed Storage System for
Structured Data by Chang et al.[2] Just as Bigtable leverages the distributed
data storage provided by the Google File System, HBase provides Bigtable-like
capabilities on top of Apache Hadoop [3].
To get started using HBase, the full documentation for this release can be
found under the doc/ directory that accompanies this README. Using a browser,
open the docs/index.html to view the project home page (or browse to [1]).
The hbase 'book' at docs/book.html has a 'quick start' section and is where you
should begin your exploration of the hbase project.
The latest HBase can be downloaded from an Apache Mirror [4].
The source code can be found at [5]
The HBase issue tracker is at [6]
Apache HBase is made available under the Apache License, version 2.0 [7]
The HBase mailing lists and archives are listed here [8].
1. http://hbase.apache.org
2. http://labs.google.com/papers/bigtable.html
3. http://hadoop.apache.org
4. http://www.apache.org/dyn/closer.cgi/hbase/
5. http://hbase.apache.org/docs/current/source-repository.html
6. http://hbase.apache.org/docs/current/issue-tracking.html
7. http://hbase.apache.org/docs/current/license.html
8. http://hbase.apache.org/docs/current/mail-lists.html
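For comparison, the equivalent Hadoop command for reading the file:
hdfs dfs -cat /input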
5.3 Rename a directory
Change the value of op and add a destination parameter, as in the following example:
[app@hadoop-master ~]$ curl -i -X PUT "http://hadoop-master:14000/webhdfs/v1/tmp/webhdfs?op=RENAME&user.name=app&destination=/tmp/webhdfs-new"
HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
Set-Cookie: hadoop.auth="u=app&p=app&t=simple&e=1393420506278&s=Oz9HEzxuYvP8kfAY4SWH6h+Gb50="; Version=1; Path=/
Content-Type: application/json
Transfer-Encoding: chunked
Date: Wed, 26 Feb 2014 03:15:06 GMT
{"boolean":true}
To verify that it executed correctly, enter the following command:
[app@hadoop-master ~]$ hdfs dfs -ls /tmp
Found 5 items
drwx------   - app supergroup          0 2014-01-08 08:02 /tmp/hadoop-yarn
drwxr-xr-x   - app supergroup          0 2014-02-02 09:43 /tmp/hdfs_out
drwxr-xr-x   - app supergroup          0 2014-02-20 22:39 /tmp/hive-app
drwxr-xr-x   - app supergroup          0 2014-02-25 04:25 /tmp/jps
drwxr-xr-x   - app supergroup          0 2014-02-25 04:23 /tmp/webhdfs-new
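For reference, the same rename with the regular Hadoop CLI:
hdfs dfs -mv /tmp/webhdfs /tmp/webhdfs-new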
5.4 Delete a directory
Attempting to delete a non-empty directory throws an exception; by default, only empty directories can be deleted. (WebHDFS also supports recursive deletion via a recursive parameter; see the sketch at the end of this section.)
[app@hadoop-master ~]$ curl -i -X DELETE "http://hadoop-master:14000/webhdfs/v1/tmp/webhdfs-new?op=DELETE&user.name=app"
HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
Set-Cookie: hadoop.auth="u=app&p=app&t=simple&e=1393421161052&s=r03LOm2hO91ujcc66wNWyMJnDx4="; Version=1; Path=/
Content-Type: application/json
Transfer-Encoding: chunked
Date: Wed, 26 Feb 2014 03:26:01 GMT
{"boolean":true}
Check that it executed correctly:
[app@hadoop-master ~]$ hdfs dfs -ls /tmp
Found 4 items
drwx------   - app supergroup          0 2014-01-08 08:02 /tmp/hadoop-yarn
drwxr-xr-x   - app supergroup          0 2014-02-02 09:43 /tmp/hdfs_out
drwxr-xr-x   - app supergroup          0 2014-02-20 22:39 /tmp/hive-app
drwxr-xr-x   - app supergroup          0 2014-02-25 04:25 /tmp/jps
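To delete a directory together with its contents, WebHDFS accepts an additional recursive parameter; a sketch of such a call, assuming the same host, port, and user as above:
curl -i -X DELETE "http://hadoop-master:14000/webhdfs/v1/tmp/webhdfs-new?op=DELETE&recursive=true&user.name=app"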
Summary
WebHDFS provides a simple, standard way to execute Hadoop file system operations from clients that do not have to run inside the Hadoop cluster itself. Its key feature is that the client connects directly to the namenode and the datanodes over predefined ports, which avoids the extra layers of an HDFS proxy as well as a preconfigured Tomcat deployment. The WebHDFS and HttpFS APIs are interchangeable, so clients that cannot open firewall ports to every datanode can go through HttpFS instead.
Two common errors:
1. HTTP/1.1 405 HTTP method PUT is not supported by this URL
Change the permissions property in the hdfs-site.xml file:
<property>
  <name>dfs.permissions</name>
  <value>false</value>
</property>
2 {"RemoteException":{"exception":"AccessControlException","javaClassName":"org.apache.hadoop.security.AccessControlException","message":"Permission denied: user=dr.who, access=WRITE, inode=\"/qzhang\":Administrator:supergroup:drwxr-xr-x\n\tat
Error 2 has the same root cause as error 1, and the solution is the same as above. (The user=dr.who in the message is the default identity used when the user.name parameter is omitted, so supplying a user with write access via user.name also resolves it.)