Hadoop NFS Gateway

1. Add the following to Hadoop's core-site.xml:

<property>
    <name>hadoop.proxyuser.root.groups</name>
    <value>*</value>
    <description>Allow users in all groups to be proxied</description>
</property>

<property>
    <name>hadoop.proxyuser.root.hosts</name>
    <value>localhost</value>
    <description>Hostnames allowed to mount the export</description>
</property>
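If the NFS gateway process runs as a user other than root, the "root" segment of the property names should be replaced with that user's name. A minimal sketch, assuming a hypothetical gateway user named nfsserver:

<property>
    <name>hadoop.proxyuser.nfsserver.groups</name>
    <value>*</value>
</property>

<property>
    <name>hadoop.proxyuser.nfsserver.hosts</name>
    <value>localhost</value>
</property>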


2. Add the following to Hadoop's hdfs-site.xml:

<property>
    <name>nfs.dump.dir</name>
    <value>/tmp/.hdfs-nfs</value>
</property>

<property>
    <name>nfs.rtmax</name>
    <value>1048576</value>
    <description>This is the maximum size in bytes of a READ request supported by the NFS gateway. If you change this, make sure you also update the nfs mount's rsize (add rsize=# of bytes to the mount directive).</description>
</property>

<property>
    <name>nfs.wtmax</name>
    <value>65536</value>
    <description>This is the maximum size in bytes of a WRITE request supported by the NFS gateway. If you change this, make sure you also update the nfs mount's wsize (add wsize=# of bytes to the mount directive).</description>
</property>

<property>
    <name>nfs.exports.allowed.hosts</name>
    <value>* rw</value>
    <description>Allow all hosts read/write access to the export</description>
</property>
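If giving every host read/write access is too permissive, the value can be narrowed to specific hosts or networks. A sketch, assuming a hypothetical 192.168.0.0/22 subnet that should get read/write access while everything else is excluded:

<property>
    <name>nfs.exports.allowed.hosts</name>
    <value>192.168.0.0/22 rw</value>
</property>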



3. Restart Hadoop:
/etc/init.d/hadoop-hdfs-namenode restart
/etc/init.d/hadoop-hdfs-datanode restart
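The init scripts above assume a packaged installation; on a plain tarball install the equivalent restart would typically go through the daemon scripts instead. A sketch, assuming $HADOOP_HOME points at the install directory:

$HADOOP_HOME/sbin/hadoop-daemon.sh stop namenode
$HADOOP_HOME/sbin/hadoop-daemon.sh start namenode
$HADOOP_HOME/sbin/hadoop-daemon.sh stop datanode
$HADOOP_HOME/sbin/hadoop-daemon.sh start datanode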

4. Stop the system NFS and rpcbind services:
service nfs stop
service rpcbind stop
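On systemd-based distributions the service commands above may not be available; the equivalent would typically be something like:

systemctl stop nfs-server
systemctl stop rpcbind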

5. Start Hadoop's portmap and nfs3 services:
hdfs portmap start

hdfs nfs3 start
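Before mounting, it is worth verifying that the gateway has registered its services with the portmapper. A quick check, run from any client and replacing $nfs_server with the gateway host:

rpcinfo -p $nfs_server
showmount -e $nfs_server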


6. Mount the export:
mount -t nfs -o vers=3,proto=tcp,nolock,noacl $server:/  $mount_point
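Once mounted, HDFS can be browsed and written like a local filesystem. A minimal usage sketch, assuming a hypothetical gateway host named hdfs-gw and a mount point of /hdfs:

mkdir -p /hdfs
mount -t nfs -o vers=3,proto=tcp,nolock,noacl hdfs-gw:/ /hdfs
ls /hdfs
cp /etc/hosts /hdfs/tmp/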

7. Unmount:
umount $mount_point
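If the unmount fails with a "device is busy" error because some process still has files open on the mount, a lazy unmount can detach it anyway:

umount -l $mount_point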
