WebHDFS Configuration and Usage
I. Configuration
1. Modify the Hadoop configuration file hdfs-site.xml and add:
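A minimal sketch of the snippet, assuming the property being enabled in this step is the standard WebHDFS switch dfs.webhdfs.enabled:

<!-- hdfs-site.xml: enable the WebHDFS REST API (assumed property for this step) -->
<property>
  <name>dfs.webhdfs.enabled</name>
  <value>true</value>
</property>

The NameNode and DataNode daemons must be restarted for the change to take effect.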
Note: this setting must be changed on every machine in the cluster.
2. In the same hdfs-site.xml file, also add:
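A minimal sketch, assuming step 2 turns off HDFS permission checking (as the error analysis in section III suggests) via dfs.permissions.enabled, called dfs.permissions in older releases:

<!-- hdfs-site.xml: disable HDFS permission checking (assumed property for this step) -->
<property>
  <name>dfs.permissions.enabled</name>
  <value>false</value>
</property>

Disabling permission checking lets any client that can reach the cluster write to HDFS, so it is only appropriate for test environments.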
Note: this setting must also be changed on every machine in the cluster.
II. Operating Hadoop HDFS with curl
1. Create and write to a file
First send the CREATE request to the NameNode; the response is an HTTP 307 redirect whose Location header contains a DataNode URL:
curl -i -X PUT "http://<NAMENODE_HOST>:50070/webhdfs/v1/<PATH>?op=CREATE"
Note: the address returned by this command is the DataNode's.
Then upload the local file with a second PUT to the URL from the Location header:
curl -i -X PUT -T <LOCAL_FILE> "http://<DATANODE_HOST>:50075/webhdfs/v1/<PATH>?op=CREATE&namenoderpcaddress=<NAMENODE_RPC_ADDRESS>&overwrite=false"
Note: the address filled in here is the DataNode's.
2. Other operations
The remaining operations use the same URL pattern, http://<HOST>:<PORT>/webhdfs/v1/<PATH>?op=..., with the HTTP verb chosen by the operation. Deleting a file uses DELETE:
curl -i -X DELETE "http://<HOST>:<PORT>/webhdfs/v1/<PATH>?op=DELETE"
Read and status queries (for example op=OPEN, op=GETFILESTATUS, op=LISTSTATUS) are plain GET requests:
curl -i "http://<HOST>:<PORT>/webhdfs/v1/<PATH>?op=<OPERATION>"
Other metadata operations (for example op=MKDIRS, op=RENAME) are sent with PUT:
curl -i -X PUT "http://<HOST>:<PORT>/webhdfs/v1/<PATH>?op=<OPERATION>"
III. Error Analysis
1. URL problem, e.g. when uploading a file:
[root@Client04 qzhang]# curl -i -X PUT "http://xx.xx.xx.xx:50070/webhdfs/v1/qzhang?op=CREATE"
HTTP/1.1 307 TEMPORARY_REDIRECT
Cache-Control: no-cache
Expires: Mon, 21 Oct 2013 03:25:55 GMT
Date: Mon, 21 Oct 2013 03:25:55 GMT
Pragma: no-cache
Expires: Mon, 21 Oct 2013 03:25:55 GMT
Date: Mon, 21 Oct 2013 03:25:55 GMT
Pragma: no-cache
Content-Type: application/octet-stream
Location: http://Slave01:50075/webhdfs/v1/qzhang?op=CREATE&namenoderpcaddress=Master:9100&overwrite=false
Content-Length: 0
Server: Jetty(6.1.26)
[root@Client04 qzhang]# curl -i -X PUT -T '/home/qzhang/桌面/mapred-site.xml' "http://Slave01:50075/webhdfs/v1/qzhang/mapred-site.xml?op=CREATE&namenoderpcaddress=Master:9100&overwrite=false"
HTTP/1.1 100 Continue
HTTP/1.1 405 HTTP method PUT is not supported by this URL
Date: Mon, 21 Oct 2013 03:27:45 GMT
Pragma: no-cache
Date: Mon, 21 Oct 2013 03:27:45 GMT
Pragma: no-cache
Content-Length: 0
Server: Jetty(6.1.26)
Error analysis: hdfs-site.xml is not correctly configured on DataNode Slave01, so WebHDFS is not available there. WebHDFS must be configured on every machine in the cluster, because the file data is uploaded directly to a DataNode.
2. Permission problem
[root@Client04 qzhang]# curl -i -X PUT -T '/home/qzhang/桌面/mapred-site.xml' "http://Slave02:50075/webhdfs/v1/qzhang/mapred-site.xml?op=CREATE&namenoderpcaddress=Master:9100&overwrite=false"
HTTP/1.1 100 Continue
HTTP/1.1 403 Forbidden
Cache-Control: no-cache
Expires: Mon, 21 Oct 2013 03:05:25 GMT
Date: Mon, 21 Oct 2013 03:05:25 GMT
Pragma: no-cache
Expires: Mon, 21 Oct 2013 03:05:25 GMT
Date: Mon, 21 Oct 2013 03:05:25 GMT
Pragma: no-cache
Content-Type: application/json
Transfer-Encoding: chunked
Server: Jetty(6.1.26)
{"RemoteException":{"exception":"AccessControlException","javaClassName":"org.apache.hadoop.security.AccessControlException","message":"Permission denied: user=dr.who, access=WRITE, inode=\"/qzhang\":Administrator:supergroup:drwxr-xr-x\n\tat org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:234)\n\tat org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:214)\n\tat org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:158)\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5178)\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5160)\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:5134)\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:2054)\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2007)\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:1958)\n\tat org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:491)\n\tat org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:301)\n\tat org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:59570)\n\tat org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)\n\tat org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)\n\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2048)\n\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)\n\tat java.security.AccessController.doPrivileged(Native Method)\n\tat javax.security.auth.Subject.doAs(Subject.java:415)\n\tat org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1483)\n\tat org.apache.hadoop.ipc.Server$Handler.run(Server.java:2042)\n"}}
Error analysis: a machine outside the cluster has no permission to write to HDFS (the request runs as the default web user dr.who). This can be resolved by configuring hdfs-site.xml as in step 2 of the Configuration section, i.e. disabling HDFS permission checking.