Mounting HDFS to a Local Directory via NFSv3 -- Part 2: Fixing hdfs-nfs Gateway Errors
4.8 Summary - Main Command List
The basic sequence is as follows:
cd /home/hdfs/hadoop-2.7.1
sbin/stop-dfs.sh
service nfs stop
service rpcbind stop
sbin/hadoop-daemon.sh --script /home/hdfs/hadoop-2.7.1/bin/hdfs start portmap
sbin/hadoop-daemon.sh --script /home/hdfs/hadoop-2.7.1/bin/hdfs start nfs3
rpcinfo -p ip-172-30-0-129
showmount -e ip-172-30-0-129
sbin/start-dfs.sh
mount -t nfs -o vers=3,proto=tcp,nolock,noacl,sync ip-172-30-0-129:/ /mnt/hdfs
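The sequence above can be wrapped in one small script. A minimal sketch, using the paths and hostname from this walkthrough (`/home/hdfs/hadoop-2.7.1`, `ip-172-30-0-129`, `/mnt/hdfs`); adjust these for your own environment:

```shell
#!/bin/sh
# Sketch of the gateway restart sequence from the list above.
# The defaults below are the values used in this walkthrough.
HADOOP_HOME="${HADOOP_HOME:-/home/hdfs/hadoop-2.7.1}"
NN_HOST="${NN_HOST:-ip-172-30-0-129}"
MOUNT_POINT="${MOUNT_POINT:-/mnt/hdfs}"

restart_nfs_gateway() {
    cd "$HADOOP_HOME" || return 1
    sbin/stop-dfs.sh                   # stop HDFS first
    service nfs stop                   # the system NFS server must be off
    service rpcbind stop               # system rpcbind conflicts with Hadoop's portmap
    sbin/hadoop-daemon.sh --script "$HADOOP_HOME/bin/hdfs" start portmap
    sbin/hadoop-daemon.sh --script "$HADOOP_HOME/bin/hdfs" start nfs3
    rpcinfo -p "$NN_HOST"              # mountd/nfs should be registered now
    showmount -e "$NN_HOST"            # the export list should show "/"
    sbin/start-dfs.sh                  # bring HDFS back up
    mount -t nfs -o vers=3,proto=tcp,nolock,noacl,sync "$NN_HOST:/" "$MOUNT_POINT"
}
```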
4.9 Main Error List
Error 1:
[root@ip-172-30-0-129 hadoop-2.7.1]# mount -t nfs -o vers=3,proto=tcp,nolock,noacl,sync ip-172-30-0-129:/ /mnt/hdfs
mount.nfs: mounting ip-172-30-0-129:/ failed, reason given by server: No such file or directory
Cause: the HDFS service was not started.
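A quick way to confirm this cause before retrying the mount is to check whether the NameNode JVM is actually running. A sketch using `jps`, which ships with the JDK:

```shell
# Confirm the NameNode process is up before attempting the mount (sketch).
hdfs_is_up() {
    # jps lists running JVM processes; NameNode must appear among them
    jps 2>/dev/null | grep -q NameNode
}

if hdfs_is_up; then
    echo "NameNode is running -- safe to mount"
else
    echo "NameNode is not running -- run sbin/start-dfs.sh first"
fi
```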
Error 2:
[root@ip-172-30-0-129 hadoop-2.7.1]# mount -t nfs -o vers=3,proto=tcp,nolock,noacl,sync ip-172-30-0-129:/ /mnt/hdfs
mount.nfs: mount system call failed
Cause: portmap and the NFS gateway may both have started successfully, but the mount cannot connect to the HDFS service; the likely causes are insufficient permissions or a configuration error.
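One way to rule out the configuration side is to print the effective proxy-user settings the gateway depends on. A sketch using `hdfs getconf`; the property names assume the gateway runs as user `nfsserver`, as in the minimal configuration in section 4.10:

```shell
# Print the proxy-user settings the NFS gateway needs (sketch).
# "nfsserver" is the assumed gateway user -- substitute your own.
show_proxyuser_conf() {
    for key in hadoop.proxyuser.nfsserver.groups \
               hadoop.proxyuser.nfsserver.hosts; do
        printf '%s = %s\n' "$key" \
            "$(hdfs getconf -confKey "$key" 2>/dev/null || echo '<unset>')"
    done
}
show_proxyuser_conf
```

If either key prints `<unset>`, the mount will be refused even though the gateway daemons are up.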
Error 3:
[root@ip-172-30-0-129 hadoop-2.7.1]# mount -t nfs -o proto=tcp,nolock,noacl,sync 172.30.0.129:/ /mnt/hdfs
mount.nfs: access denied by server while mounting 172.30.0.129:/
Cause: check the configuration; the properties are split across two XML files (core-site.xml and hdfs-site.xml), so make sure each property is in the right file.
Error 4:
[root@ip-172-30-0-129 hadoop-2.7.1]# showmount -e 172.30.0.129
clnt_create: RPC: Program not registered
Cause: the gateway's mountd RPC program is not registered with portmap; usually the Hadoop portmap or nfs3 gateway is not running, or the system rpcbind is still occupying the port.
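A quick check for this situation is to ask `rpcinfo` whether mountd is registered. A sketch; after a successful gateway start, the program list should include portmapper, mountd and nfs:

```shell
# Check that the RPC program the mount needs is registered (sketch).
nfs_programs_registered() {
    # $1: host running the Hadoop portmap
    rpcinfo -p "$1" 2>/dev/null | grep -q mountd
}
```

Usage: `nfs_programs_registered ip-172-30-0-129 && echo registered`.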
Error 5:
If you check hadoop-***-nfs3-***.log and see a message similar to:
2016-01-23 04:27:16,788 ERROR org.apache.hadoop.hdfs.nfs.mount.RpcProgramMountd: Can't get handle for export:/
java.net.ConnectException: Call From ip-172-30-0-129/172.30.0.129 to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
This indicates that when starting the NFS gateway, you should switch to the user set in the configuration (a non-root user).
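A sketch of starting the nfs3 daemon as that user; the user name `hdfs` and the install path here are assumptions taken from this walkthrough, so substitute the proxy user configured in your core-site.xml:

```shell
# Start the nfs3 gateway as the configured non-root user (sketch).
# "hdfs" and the install path are assumptions -- adjust for your setup.
start_nfs3_as() {
    # $1: the gateway user from the proxyuser configuration
    su - "$1" -c "/home/hdfs/hadoop-2.7.1/sbin/hadoop-daemon.sh --script /home/hdfs/hadoop-2.7.1/bin/hdfs start nfs3"
}
```

Usage: `start_nfs3_as hdfs`.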
Other things to check:
a.) SELinux should be disabled: sestatus -v
b.) Firewall: make sure the relevant ports are open: service iptables status
c.) NFS configuration in /etc/sysconfig/nfs: check that versions 3 and 4 are allowed
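The three checks above can be run in one go. A sketch; the `service` command is the RHEL/CentOS 6 era form used throughout this walkthrough, and the grep pattern for /etc/sysconfig/nfs is only a heuristic for spotting version-related settings:

```shell
# Run the environment checks from the list above (sketch).
check_env() {
    echo "== SELinux (should be disabled) =="
    sestatus -v 2>/dev/null || echo "sestatus not available"
    echo "== firewall (required ports must be open) =="
    service iptables status 2>/dev/null || echo "iptables service not available"
    echo "== NFS version settings in /etc/sysconfig/nfs =="
    grep -iE 'v2|v3|v4|vers' /etc/sysconfig/nfs 2>/dev/null \
        || echo "no explicit version settings found"
}
check_env
```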
References:
a. http://hortonworks.com/community/forums/topic/nfs-to-hdfs-gateway-error-user-hdfs-is-not-allowed-to-impersonate-root/
b. http://stackoverflow.com/questions/25073792/error-e0902-exception-occured-user-root-is-not-allowed-to-impersonate-root
c. http://bbs.csdn.net/topics/391861078
d. http://duguyiren3476.iteye.com/blog/2209242
4.10 Minimal Configuration
In core-site.xml:

  <property>
    <name>hadoop.proxyuser.nfsserver.groups</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.nfsserver.hosts</name>
    <value>*</value>
  </property>
In hdfs-site.xml:

  <property>
    <name>nfs.dump.dir</name>
    <value>/tmp/.hdfs-nfs</value>
  </property>
  <property>
    <name>nfs.rtmax</name>
    <value>1048576</value>
    <description>This is the maximum size in bytes of a READ request supported by the NFS gateway. If you change this, make sure you also update the nfs mount's rsize (add rsize=# of bytes to the mount directive).</description>
  </property>
  <property>
    <name>nfs.wtmax</name>
    <value>65536</value>
    <description>This is the maximum size in bytes of a WRITE request supported by the NFS gateway. If you change this, make sure you also update the nfs mount's wsize (add wsize=# of bytes to the mount directive).</description>
  </property>
  <property>
    <name>nfs.exports.allowed.hosts</name>
    <value>* rw</value>
    <description>Allow all hosts read-write access to the export.</description>
  </property>
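Once everything is configured and mounted, a simple smoke test is to write a file through the NFS mount and read it back. A sketch; the function takes the mount point as an argument, and assumes the mounting user has write permission at that path in HDFS:

```shell
# Smoke-test the mounted HDFS (sketch): sequential write, then read back.
smoke_test_mount() {
    mp="$1"
    f="$mp/nfs-smoke-test.$$"
    printf 'hello-hdfs\n' > "$f" || return 1       # sequential write via the gateway
    [ "$(cat "$f")" = "hello-hdfs" ] || return 1    # read it back and compare
    rm -f "$f"
    echo "mount at $mp looks writable and readable"
}
```

Usage: `smoke_test_mount /mnt/hdfs`. Note that the gateway supports sequential writes, so the test writes the file in a single stream.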