Enabling NFS on HDFS to Support Remote Mounts

      I ran into several problems while enabling NFS on HDFS, so the complete procedure is recorded here for future reference:

      1. Configure core-site.xml (the two hadoop.proxyuser properties below are the newly added part)

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop:9000</value>
    </property>
    <property>
        <name>hadoop.proxyuser.hadoop.groups</name>
        <value>*</value>
        <description>
            The 'nfsserver' user is allowed to proxy all members of the 'users-group1' and
            'users-group2' groups. Note that in most cases you will need to include the
            group "root" because the user "root" (which usually belongs to the "root" group)
            will generally be the user that initially executes the mount on the NFS client
            system. Set this to '*' to allow the nfsserver user to proxy any group.
        </description>
    </property>
    <property>
        <name>hadoop.proxyuser.hadoop.hosts</name>
        <value>*</value>
        <description>
            This is the host where the NFS gateway is running. Set this to '*' to allow
            requests from any hosts to be proxied.
        </description>
    </property>
</configuration>
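
A quick way to confirm that these proxyuser settings are actually picked up (a sanity check I am adding here, not part of the original write-up) is hdfs getconf:

hdfs getconf -confKey hadoop.proxyuser.hadoop.groups   # should print *
hdfs getconf -confKey hadoop.proxyuser.hadoop.hosts    # should print *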
   	  
      
      
    
    

2. Edit hdfs-site.xml

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/home/hadoop/data/namenode</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/home/hadoop/data/datanode</value>
    </property>
    <property>
        <name>dfs.permissions</name>
        <value>false</value>
    </property>
    <property>
        <name>dfs.datanode.address</name>
        <value>0.0.0.0:50010</value>
    </property>
    <property>
        <name>dfs.namenode.accesstime.precision</name>
        <value>3600000</value>
        <description>
            The access time for an HDFS file is precise up to this value. The default
            value is 1 hour. Setting a value of 0 disables access times for HDFS.
        </description>
    </property>
    <property>
        <name>nfs.dump.dir</name>
        <value>/tmp/.hdfs-nfs</value>
    </property>
    <property>
        <name>nfs.exports.allowed.hosts</name>
        <value>* rw</value>
    </property>
    <property>
        <name>nfs.rtmax</name>
        <value>1048576</value>
        <description>
            This is the maximum size in bytes of a READ request supported by the NFS
            gateway. If you change this, make sure you also update the NFS mount's rsize
            (add rsize=# of bytes to the mount directive).
        </description>
    </property>
    <property>
        <name>nfs.wtmax</name>
        <value>65536</value>
        <description>
            This is the maximum size in bytes of a WRITE request supported by the NFS
            gateway. If you change this, make sure you also update the NFS mount's wsize
            (add wsize=# of bytes to the mount directive).
        </description>
    </property>
</configuration>
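
Before starting the gateway it is worth making sure the dump directory configured above exists and is writable by the gateway user. The gateway can normally create it on its own, but pre-creating it (an extra precaution of mine, not in the original steps) rules out permission problems:

mkdir -p /tmp/.hdfs-nfs
chown hadoop:hadoop /tmp/.hdfs-nfs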
    
      

      3. Create the hadoop user and set its password (here the password is simply "hadoop", entered at the prompt)

$ useradd -d /home/hadoop -s /bin/bash hadoop
$ passwd hadoop
hadoop

Change the owner of /usr/hadoop to hadoop and create a scratch directory:

chown -R hadoop:hadoop /usr/hadoop
cd /usr/hadoop
mkdir tmp
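
A quick check (my addition) that the user and the ownership are in place:

id hadoop                 # the user should exist
ls -ld /usr/hadoop        # owner and group should both be hadoop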

    4. Add the hadoop user to the sudoers file
        [root@hadoop ~]# visudo
        Around line 91 of the file, add the following entry:
        hadoop  ALL=(ALL)       NOPASSWD:ALL
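
        To confirm passwordless sudo works (a check I am adding, not in the original):

        su - hadoop -c 'sudo -n true' && echo "passwordless sudo OK"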

        
       5. Disable SELinux and the firewall
       vi /etc/selinux/config
 
       SELINUX=disabled
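
       The firewall half of this step is not shown in the original commands; on a CentOS 7 system with firewalld (an assumption of mine, consistent with the systemctl commands used later) it would be:

       systemctl stop firewalld
       systemctl disable firewalld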

       

       6. Reboot the machine
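
       As a side note (my addition): the change in /etc/selinux/config only takes effect after a reboot. If you need the current session to be permissive before rebooting, you can run:

       setenforce 0      # takes effect immediately, for this boot only
       getenforce        # should now report Permissive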

       7. Run the following commands one at a time

cd /usr/hadoop-2.7.2
systemctl stop nfs
systemctl stop portmap

su hadoop   # must NOT be root here, otherwise the mount will fail later
/usr/hadoop-2.7.2/sbin/hadoop-daemon.sh --script /usr/hadoop-2.7.2/bin/hdfs stop nfs3

sudo su
/usr/hadoop-2.7.2/sbin/hadoop-daemon.sh --script /usr/hadoop-2.7.2/bin/hdfs stop portmap
/usr/hadoop-2.7.2/sbin/stop-dfs.sh
/usr/hadoop-2.7.2/sbin/stop-yarn.sh
/usr/hadoop-2.7.2/sbin/hadoop-daemon.sh --script /usr/hadoop-2.7.2/bin/hdfs start portmap

su hadoop   # again, must NOT be root, otherwise the mount will fail
/usr/hadoop-2.7.2/sbin/hadoop-daemon.sh --script /usr/hadoop-2.7.2/bin/hdfs start nfs3

sudo su
/usr/hadoop-2.7.2/sbin/start-dfs.sh
/usr/hadoop-2.7.2/sbin/start-yarn.sh

# verify that the gateway has registered with portmap and exports HDFS
rpcinfo -p localhost
showmount -e localhost
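
For convenience, the whole sequence above can be wrapped in one script. This is only a sketch under the same assumptions as the steps above (Hadoop 2.7.2 under /usr/hadoop-2.7.2, the hadoop user configured earlier, run as root); the script name restart-hdfs-nfs.sh is made up for illustration:

#!/bin/bash
# restart-hdfs-nfs.sh -- hypothetical wrapper around the restart sequence above.
# Run as root. Assumes Hadoop 2.7.2 under /usr/hadoop-2.7.2.
# No "set -e" on purpose: it is fine if some stop commands find nothing to stop.
H=/usr/hadoop-2.7.2

# stop the system NFS services so they do not conflict with the Hadoop gateway
systemctl stop nfs
systemctl stop portmap

# stop any gateway components still running (nfs3 as hadoop, portmap as root)
su hadoop -c "$H/sbin/hadoop-daemon.sh --script $H/bin/hdfs stop nfs3"
"$H"/sbin/hadoop-daemon.sh --script "$H"/bin/hdfs stop portmap

"$H"/sbin/stop-dfs.sh
"$H"/sbin/stop-yarn.sh

# start portmap as root first, then nfs3 as the proxied hadoop user
"$H"/sbin/hadoop-daemon.sh --script "$H"/bin/hdfs start portmap
su hadoop -c "$H/sbin/hadoop-daemon.sh --script $H/bin/hdfs start nfs3"

"$H"/sbin/start-dfs.sh
"$H"/sbin/start-yarn.sh

# verify the gateway is registered and exporting
rpcinfo -p localhost
showmount -e localhost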

     8. Mount the NFS export
      mount -t nfs -o vers=3,proto=tcp,nolock,noacl,sync localhost:/  /home/hadoop/tmp/
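
      A quick sanity check of the mount (my addition, assuming the mount point above):

      df -h /home/hadoop/tmp/    # the filesystem should show as localhost:/
      ls /home/hadoop/tmp/       # should list the HDFS root directory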

     Following the steps above should keep you out of trouble. The single most important point is that the nfs3 service must NOT be started as the root user; otherwise mounting keeps failing with:
      mount.nfs: mount system call failed
     
