1. Configuration files for a basic (non-HA) HDFS cluster
1. core-site.xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop2:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/hadoop-2.9.2/data</value>
    </property>
</configuration>
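As a quick sanity check of fs.defaultFS, a client can talk to the NameNode through Hadoop's Java FileSystem API. A minimal sketch, assuming hadoop-client 2.9.2 on the classpath; the URI is passed explicitly in case core-site.xml is not on the client's classpath:

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsSmokeTest {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Connect directly to the NameNode named in fs.defaultFS above.
        FileSystem fs = FileSystem.get(URI.create("hdfs://hadoop2:9000"), conf);
        // List the root directory to confirm the NameNode answers RPC calls.
        for (FileStatus status : fs.listStatus(new Path("/"))) {
            System.out.println(status.getPath());
        }
        fs.close();
    }
}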
2. Configuration files for an HA HDFS cluster built on ZooKeeper
1. zkdata1/zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/root/zkdata
clientPort=3001
server.1=<hostname>:3002:3003
server.2=<hostname>:4002:4003
server.3=<hostname>:5002:5003

The zkdata2 and zkdata3 instances use the same settings apart from dataDir and clientPort (presumably 4001 and 5001, matching the quorum lists used later). Each dataDir must also contain a myid file whose content matches that instance's server.N number.
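Before wiring the ensemble into Hadoop, it is worth confirming that it accepts connections. A minimal sketch using the ZooKeeper Java client; the connect string assumes all three instances run on hadoop1, matching the quorum list in core-site.xml below:

import org.apache.zookeeper.ZooKeeper;

public class ZkSmokeTest {
    public static void main(String[] args) throws Exception {
        // Connect string lists the three client ports from the zoo.cfg files.
        ZooKeeper zk = new ZooKeeper("hadoop1:3001,hadoop1:4001,hadoop1:5001", 5000, event -> {});
        Thread.sleep(1000); // crude wait for the session to establish (sketch only)
        System.out.println("session state: " + zk.getState());
        System.out.println("children of /: " + zk.getChildren("/", false));
        zk.close();
    }
}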
2. Hadoop core-site.xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://ns</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/hadoop-2.9.2/data</value>
    </property>
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>hadoop1:3001,hadoop1:4001,hadoop1:5001</value>
    </property>
</configuration>
3. Hadoop hdfs-site.xml
<configuration>
    <property>
        <name>dfs.nameservices</name>
        <value>ns</value>
    </property>
    <property>
        <name>dfs.ha.namenodes.ns</name>
        <value>nn1,nn2</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.ns.nn1</name>
        <value>hadoop2:9000</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.ns.nn1</name>
        <value>hadoop2:50070</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.ns.nn2</name>
        <value>hadoop3:9000</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.ns.nn2</name>
        <value>hadoop3:50070</value>
    </property>
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://hadoop2:8485;hadoop3:8485;hadoop4:8485/ns</value>
    </property>
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/root/journal</value>
    </property>
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>dfs.client.failover.proxy.provider.ns</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence</value>
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/root/.ssh/id_rsa</value>
    </property>
</configuration>
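Client-side, the logical URI hdfs://ns is resolved through these same properties: the ConfiguredFailoverProxyProvider tries nn1 and nn2 and sticks with whichever NameNode is active. A minimal sketch that sets the relevant subset programmatically (normally a client simply reads the same core-site.xml and hdfs-site.xml from its classpath):

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HaClientSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Mirror the server-side HA definitions so "ns" can be resolved.
        conf.set("fs.defaultFS", "hdfs://ns");
        conf.set("dfs.nameservices", "ns");
        conf.set("dfs.ha.namenodes.ns", "nn1,nn2");
        conf.set("dfs.namenode.rpc-address.ns.nn1", "hadoop2:9000");
        conf.set("dfs.namenode.rpc-address.ns.nn2", "hadoop3:9000");
        conf.set("dfs.client.failover.proxy.provider.ns",
                "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");
        FileSystem fs = FileSystem.get(URI.create("hdfs://ns"), conf);
        // The same call keeps working after a NameNode failover.
        System.out.println(fs.exists(new Path("/")));
        fs.close();
    }
}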
3. Setting up a YARN cluster
1. mapred-site.xml
Note: this configuration file does not exist under etc/ by default; copy mapred-site.xml.template and rename the copy to mapred-site.xml.
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
2. yarn-site.xml
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>Hadoop</value>
    </property>
</configuration>
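Once the ResourceManager and NodeManagers are running, the cluster can be checked from code. A minimal sketch with the YARN client API, assuming the yarn-site.xml above is on the client's classpath:

import org.apache.hadoop.yarn.api.records.NodeReport;
import org.apache.hadoop.yarn.api.records.NodeState;
import org.apache.hadoop.yarn.client.api.YarnClient;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class YarnSmokeTest {
    public static void main(String[] args) throws Exception {
        YarnClient yarn = YarnClient.createYarnClient();
        yarn.init(new YarnConfiguration()); // picks up yarn-site.xml from the classpath
        yarn.start();
        // One report per registered NodeManager; an empty list means none joined.
        for (NodeReport node : yarn.getNodeReports(NodeState.RUNNING)) {
            System.out.println(node.getNodeId() + " -> " + node.getNodeState());
        }
        yarn.stop();
    }
}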
4. Configuration files for the HA Hadoop cluster (final version)
1. core-site.xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://ns</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/root/hadoop-2.9.2/data</value>
    </property>
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>zk:3001,zk:4001,zk:5001</value>
    </property>
</configuration>
2. hdfs-site.xml
<configuration>
    <property>
        <name>dfs.nameservices</name>
        <value>ns</value>
    </property>
    <property>
        <name>dfs.ha.namenodes.ns</name>
        <value>nn1,nn2</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.ns.nn1</name>
        <value>hadoop22:9000</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.ns.nn1</name>
        <value>hadoop22:50070</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.ns.nn2</name>
        <value>hadoop23:9000</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.ns.nn2</name>
        <value>hadoop23:50070</value>
    </property>
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://hadoop26:8485;hadoop27:8485;hadoop28:8485/ns</value>
    </property>
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/root/journal</value>
    </property>
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>dfs.client.failover.proxy.provider.ns</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence</value>
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/root/.ssh/id_rsa</value>
    </property>
</configuration>
3. yarn-site.xml
<configuration>
    <property>
        <name>yarn.resourcemanager.ha.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.resourcemanager.cluster-id</name>
        <value>yrc</value>
    </property>
    <property>
        <name>yarn.resourcemanager.ha.rm-ids</name>
        <value>rm1,rm2</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname.rm1</name>
        <value>hadoop24</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname.rm2</name>
        <value>hadoop25</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address.rm1</name>
        <value>hadoop24:8088</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address.rm2</name>
        <value>hadoop25:8088</value>
    </property>
    <property>
        <name>yarn.resourcemanager.zk-address</name>
        <value>zk:3001,zk:4001,zk:5001</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>
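With yarn.resourcemanager.ha.enabled set, YARN clients retry against rm1 and rm2 until they reach the active ResourceManager, so application code does not change. A minimal sketch that mirrors the HA subset of the file above programmatically, assuming the client machine does not carry this yarn-site.xml:

import org.apache.hadoop.yarn.client.api.YarnClient;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class YarnHaClientSketch {
    public static void main(String[] args) throws Exception {
        YarnConfiguration conf = new YarnConfiguration();
        // Mirror the RM HA definitions so the client can locate the active RM.
        conf.set("yarn.resourcemanager.ha.enabled", "true");
        conf.set("yarn.resourcemanager.cluster-id", "yrc");
        conf.set("yarn.resourcemanager.ha.rm-ids", "rm1,rm2");
        conf.set("yarn.resourcemanager.hostname.rm1", "hadoop24");
        conf.set("yarn.resourcemanager.hostname.rm2", "hadoop25");
        YarnClient yarn = YarnClient.createYarnClient();
        yarn.init(conf);
        yarn.start();
        // Succeeds against whichever ResourceManager is currently active.
        System.out.println("applications: " + yarn.getApplications().size());
        yarn.stop();
    }
}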
4. mapred-site.xml (does not exist by default; copy it from the template)
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
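With mapreduce.framework.name set to yarn, any standard MapReduce driver is submitted to the YARN cluster rather than run locally. The classic word-count job serves as an end-to-end check; the /input and /output paths below are placeholders:

import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {
    // Emit (word, 1) for every token in the input line.
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();
        @Override
        protected void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Sum the counts collected for each word.
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) sum += val.get();
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration(); // reads mapred-site.xml from the classpath
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path("/input"));    // placeholder input path
        FileOutputFormat.setOutputPath(job, new Path("/output")); // placeholder output path
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}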