Setting Up NameNode Federation

I. NameNode Federation

  • A NameNode receives client requests

  • A NameNode caches about 1000 MB of metadata in memory

  • Problems Federation solves:
    (1) spreading the load across multiple NameNodes
    (2) caching more metadata overall

    Setting up the NameNode Federation

    1. Plan
    NameNodes: bigdata112, bigdata113
    DataNodes: bigdata114, bigdata115

    2. Configure on bigdata112
core-site.xml

<property>
    <name>hadoop.tmp.dir</name>
    <value>/root/training/hadoop-2.7.3/tmp</value>
</property>

hdfs-site.xml

<property>
    <name>dfs.nameservices</name>
    <value>ns1,ns2</value>
</property>

<property>
    <name>dfs.namenode.rpc-address.ns1</name>
    <value>192.168.157.112:9000</value>
</property>

<property>
    <name>dfs.namenode.http-address.ns1</name>
    <value>192.168.157.112:50070</value>
</property>

<property>
    <name>dfs.namenode.secondary.http-address.ns1</name>
    <value>192.168.157.112:50090</value>
</property>

<property>
    <name>dfs.namenode.rpc-address.ns2</name>
    <value>192.168.157.113:9000</value>
</property>

<property>
    <name>dfs.namenode.http-address.ns2</name>
    <value>192.168.157.113:50070</value>
</property>

<property>
    <name>dfs.namenode.secondary.http-address.ns2</name>
    <value>192.168.157.113:50090</value>
</property>

<property>
    <name>dfs.replication</name>
    <value>2</value>
</property>

<property>
    <name>dfs.permissions</name>
    <value>false</value>
</property>

mapred-site.xml

<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
</property>

yarn-site.xml

<property>
    <name>yarn.resourcemanager.hostname</name>
    <value>192.168.157.112</value>
</property>

<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>

slaves

bigdata114
bigdata115

Configure the routing rules (viewFS)
Add the following directly to core-site.xml.
Note: xdl1 is the name of the federation (the mount table).

<property>
    <name>fs.viewfs.mounttable.xdl1.homedir</name>
    <value>/home</value>
</property>

<property>
    <name>fs.viewfs.mounttable.xdl1.link./movie</name>
    <value>hdfs://192.168.157.112:9000/movie</value>
</property>

<property>
    <name>fs.viewfs.mounttable.xdl1.link./mp3</name>
    <value>hdfs://192.168.157.113:9000/mp3</value>
</property>

<property>
    <!-- fs.default.name still works but is deprecated; the current key is fs.defaultFS -->
    <name>fs.default.name</name>
    <value>viewfs://xdl1</value>
</property>

Note: if there are many routing rules, core-site.xml becomes hard to maintain.
In that case you can keep the rules in a separate XML file (mountTable.xml) and include it from core-site.xml:
http://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/ViewFs.html
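As a sketch of that approach (the file name mountTable.xml is just a convention; the XInclude mechanism is the one described in the ViewFs guide linked above), core-site.xml can pull the rules in like this:

```xml
<?xml version="1.0"?>
<configuration xmlns:xi="http://www.w3.org/2001/XInclude">
  <!-- Pull the fs.viewfs.mounttable.* rules in from a separate file -->
  <xi:include href="mountTable.xml" />

  <!-- ...the rest of core-site.xml stays here... -->
</configuration>
```

mountTable.xml itself is then an ordinary <configuration> file holding only the mount-table properties.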

3. Copy the installation to the other nodes
scp -r hadoop-2.7.3/ root@bigdata113:/root/training
scp -r hadoop-2.7.3/ root@bigdata114:/root/training
scp -r hadoop-2.7.3/ root@bigdata115:/root/training

4. Format each NameNode (on bigdata112 and on bigdata113) separately, using the same cluster ID:
hdfs namenode -format -clusterId xdl1
5. Start the cluster (e.g. with start-all.sh)

6. Following the routing rules, create the corresponding directory on each NameNode:
hadoop fs -mkdir hdfs://192.168.157.112:9000/movie
hadoop fs -mkdir hdfs://192.168.157.113:9000/mp3

7. Work with HDFS
[root@bigdata112 training]# hdfs dfs -ls /
Found 2 items
-r-xr-xr-x - root root 0 2018-10-05 01:11 /movie
-r-xr-xr-x - root root 0 2018-10-05 01:11 /mp3

Note: what you are listing here is the viewFS mount table, not a single NameNode's namespace.
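The routing behind that listing can be illustrated with a small, self-contained sketch (a toy model, not Hadoop's actual implementation): a viewfs path is resolved to its backing NameNode by longest-prefix match against the mount links.

```python
# Toy illustration of viewFS routing (not Hadoop's code):
# resolve a path against the mount-table links by longest-prefix match.

MOUNT_TABLE = {
    "/movie": "hdfs://192.168.157.112:9000/movie",
    "/mp3":   "hdfs://192.168.157.113:9000/mp3",
}

def resolve(path: str) -> str:
    """Map a viewfs:// path to its physical hdfs:// location."""
    best = None
    for link in MOUNT_TABLE:
        # A link matches if the path is the link itself or lies under it.
        if path == link or path.startswith(link + "/"):
            if best is None or len(link) > len(best):
                best = link
    if best is None:
        raise FileNotFoundError(f"no mount point for {path}")
    # Append the remainder of the path to the link's target.
    return MOUNT_TABLE[best] + path[len(best):]

print(resolve("/mp3/song.mp3"))
# -> hdfs://192.168.157.113:9000/mp3/song.mp3
```

Here /mp3/song.mp3 resolves to bigdata113's namespace while anything under /movie goes to bigdata112, which is exactly how the two mount rules above split the namespace between the federated NameNodes.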
