Installing the NFS Gateway on Hadoop 2.7

1. First, finish installing Hadoop 2.7.

2. cd /usr/local/hadoop

Add the following to etc/hadoop/core-site.xml:

<!-- The NFS settings begin -->
 <property>
   <name>hadoop.proxyuser.hadoop.groups</name>
   <value>*</value>
   <description>
          The 'hadoop' user (the proxy user configured here) is allowed to proxy
       all members of the listed groups. Note that in most cases you will need to
       include the group "root", because the user "root" (which usually belongs to
       the "root" group) will generally be the user that initially executes the
       mount on the NFS client system. Set this to '*' to allow the 'hadoop' user
       to proxy any group.
   </description>
 </property>
 <property>
   <name>hadoop.proxyuser.hadoop.hosts</name>
   <value>*</value>
   <description>
              This is the host where the NFS gateway is running. Set this to '*' to
              allow requests from any host to be proxied.
   </description>
 </property>
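
To confirm that HDFS will actually pick up the new keys, you can read them back with hdfs getconf; this is just a sanity check on the edited file, run from /usr/local/hadoop:

$ bin/hdfs getconf -confKey hadoop.proxyuser.hadoop.groups
*
$ bin/hdfs getconf -confKey hadoop.proxyuser.hadoop.hosts
*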



3. Add the following to etc/hadoop/hdfs-site.xml:


<property>
   <name>dfs.namenode.accesstime.precision</name>
   <value>3600000</value>
   <description>The access time for an HDFS file is precise up to this value.
       The default value is 1 hour. Setting a value of 0 disables
       access times for HDFS.
  </description>
 </property>
 <property>    
   <name>nfs.dump.dir</name>
   <value>/tmp/.hdfs-nfs</value>
 </property>
 <property>
   <name>nfs.exports.allowed.hosts</name>
   <value>* rw</value>
 </property>
 <property>
   <name>nfs.superuser</name>
   <value>hadoop</value>
 </property>
 <property>
   <name>nfs.metrics.percentiles.intervals</name>
   <value>100</value>
    <description>Enable the latency histograms for read, write and
          commit requests. The value is the rolling-window length in
          seconds (100-second windows in this example).
    </description>
 </property>



4. Distribute the two modified files to every node in the cluster, for example with the loop below.
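
A minimal sketch for the distribution step, assuming passwordless ssh and that etc/hadoop/slaves lists every node (both assumptions depend on your setup):

for h in $(cat etc/hadoop/slaves); do
  scp etc/hadoop/core-site.xml etc/hadoop/hdfs-site.xml $h:/usr/local/hadoop/etc/hadoop/
done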


5. Run the following commands as root:

# yum -y install rpcbind
# sbin/hadoop-daemon.sh --script /usr/local/hadoop/bin/hdfs start portmap
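
Hadoop's portmap binds the standard portmapper port (111), so if the system rpcbind service is already running you will likely need to stop it first; afterwards, jps run as root should show a Portmap process. A sketch for a RHEL/CentOS 6-style system (adjust to your service manager):

# service rpcbind stop
# jps | grep Portmap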


6. Run the following command as the hadoop user:

$ sbin/hadoop-daemon.sh --script /usr/local/hadoop/bin/hdfs start nfs3
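
The gateway takes a moment to start. jps run as the hadoop user should then show an Nfs3 process; startup errors, if any, go to the daemon log under logs/ (the file name below assumes the usual hadoop-daemon.sh naming for a user called hadoop):

$ jps | grep Nfs3
$ tail -n 50 logs/hadoop-hadoop-nfs3-*.log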



7. As root, run rpcinfo -p m-10-140-60-85. If the output looks like the following, the NFS gateway is working:

[root@m-10-140-60-85 hadoop]#  rpcinfo -p m-10-140-60-85
   program vers proto   port  service
    100005    2   tcp   4242  mountd
    100000    2   udp    111  portmapper
    100000    2   tcp    111  portmapper
    100005    1   tcp   4242  mountd
    100003    3   tcp   2049  nfs
    100005    1   udp   4242  mountd
    100005    3   udp   4242  mountd
    100005    3   tcp   4242  mountd
    100005    2   udp   4242  mountd



8. As root on another server, check the export list of the NFS gateway (m-10-140-60-85 is the gateway host):

# showmount -e m-10-140-60-85
Export list for m-10-140-60-85:
/ *



9. Create the /hadoop-nfs directory and mount the remote NFS gateway onto it:

mkdir /hadoop-nfs
mount -t nfs -o vers=3,proto=tcp,nolock,noacl,sync m-10-140-60-85:/ /hadoop-nfs
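
If the mount succeeded, the HDFS namespace is now browsable through the mount point:

# ls /hadoop-nfs
# df -h /hadoop-nfs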

10. After every gateway has been set up the same way, put the gateway IPs in one file and the NFS client IPs in a file named nfs_hosts, then use the paste command to join the two files into nfs_pair (a sketch follows); the result looks like this:
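
A minimal sketch of the paste step, assuming the gateway IPs are kept in a file named nfs_gateways (the original does not name this file, so the name here is illustrative):

paste nfs_gateways nfs_hosts > nfs_pair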

[root@m-10-140-60-85 setupHadoop]# cat nfs_pair 
10.140.60.85	10.140.60.48
10.140.60.86	10.140.60.50
10.140.60.87	10.140.60.51
10.140.60.88	10.140.60.53
10.140.60.89	10.140.60.54
10.140.60.90	10.140.60.55
10.140.60.91	10.140.60.56
10.140.60.92	10.140.60.59
10.140.60.95	10.140.60.60
10.140.60.96	10.140.60.61
10.140.60.49	10.140.60.62

11. Create the /hadoop-nfs directory on all the nfs_hosts machines:

 ./upgrade.sh common nfs_hosts "mkdir /hadoop-nfs"
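
If you do not have the author's upgrade.sh helper at hand, a plain ssh loop does the same thing (assuming passwordless ssh as root):

for h in $(cat nfs_hosts); do ssh $h "mkdir -p /hadoop-nfs"; done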

12. Generate the mount commands as follows:

cat nfs_pair | awk -F ' ' '{print "ssh "$2"  \"mount -o hard,nolock " $1":/ /hadoop-nfs\""}'



13. Run the generated mount commands, for example as shown below.
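
For example, pipe the output of the step-12 command straight into a shell (again assuming passwordless ssh from the gateway side to each client):

cat nfs_pair | awk '{print "ssh "$2"  \"mount -o hard,nolock " $1":/ /hadoop-nfs\""}' | sh -x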


14. Verify that the mount succeeded on each client:

./upgrade.sh common nfs_hosts "ls /hadoop-nfs"

