Hadoop 2.0 HDFS Fully Distributed Installation (HA)

          NN-1   NN-2   DN     ZK     ZKFC   JNN
node08     *                           *      *
node09            *      *      *      *      *
node10                   *      *             *
node11                   *      *

【1】Build on top of the Hadoop 1.0 HDFS fully distributed cluster setup (see: the Hadoop 1.0 HDFS fully distributed cluster setup guide).
【2】Configure hdfs-site.xml:

<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>node08:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>node09:8020</value>
</property>
<property>
  <name>dfs.namenode.http-address.mycluster.nn1</name>
  <value>node08:50070</value>
</property>
<property>
  <name>dfs.namenode.http-address.mycluster.nn2</name>
  <value>node09:50070</value>
</property>

<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://node08:8485;node09:8485;node10:8485/mycluster</value>
</property>

<property>
  <name>dfs.journalnode.edits.dir</name>
  <value>/var/sxt/hadoop/ha/jn</value>
</property>

<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>

<property>
  <name>dfs.ha.fencing.methods</name>
  <value>sshfence</value>
</property>
<property>
  <name>dfs.ha.fencing.ssh.private-key-files</name>
  <value>/root/.ssh/id_dsa</value>
</property>

<property>
  <name>dfs.ha.automatic-failover.enabled</name>
  <value>true</value>
</property>
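Hand-edited Hadoop XML is easy to break (an unclosed tag silently disables a property or stops the daemon from starting). A quick well-formedness check can catch this before starting anything; the sketch below uses only the Python standard library, with a small hypothetical fragment of hdfs-site.xml inlined for illustration:

```python
import xml.etree.ElementTree as ET

# Hypothetical fragment of hdfs-site.xml; Hadoop config files use
# <configuration> as the document root, so we wrap properties in it.
snippet = """<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>mycluster</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.mycluster</name>
    <value>nn1,nn2</value>
  </property>
</configuration>"""

# fromstring raises ParseError if any tag is unclosed or mismatched
root = ET.fromstring(snippet)

# Collect name -> value pairs for a quick sanity read-back
props = {p.findtext("name"): p.findtext("value") for p in root.iter("property")}
print(props["dfs.nameservices"])            # mycluster
print(props["dfs.ha.namenodes.mycluster"])  # nn1,nn2
```

To check a real file, replace `ET.fromstring(snippet)` with `ET.parse("/path/to/hdfs-site.xml").getroot()`.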

【3】Configure core-site.xml:
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://mycluster</value>
</property>

<property>
  <name>ha.zookeeper.quorum</name>
  <value>node09:2181,node10:2181,node11:2181</value>
</property>

<property>
  <name>hadoop.tmp.dir</name>
  <value>/var/sxt/hadoop/ha</value>
</property>

【4】Start the JournalNode on node08, node09, and node10:
	Command: hadoop-daemon.sh start journalnode
【5】Format the NameNode on node08:
	hdfs namenode -format
【6】Start the NameNode on node08:
	hadoop-daemon.sh start namenode
【7】On node09, copy the NameNode metadata from node08 (bootstrap the standby):
	hdfs namenode -bootstrapStandby
【8】Format the ZooKeeper failover state on node08:
	hdfs zkfc -formatZK
【9】Start the cluster:
	start-dfs.sh
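Once start-dfs.sh returns, it is worth confirming that the HA pair actually came up in the expected roles. A sketch, assuming a running cluster with the nn1/nn2 NameNode IDs configured above:

```shell
# On each node, confirm the expected daemons from the role table are running
jps

# Query the failover controller's view of each NameNode;
# exactly one should report "active", the other "standby"
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2
```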

<-----------------Installation complete------------------------->
Test:
http://192.168.88.18:50070

Create a directory: hdfs dfs -mkdir -p /user/root
<---------------------------------------------------->
Problem:
With Hadoop HA, when the active NameNode host goes down, failover to the standby NameNode does not happen automatically. The cause is that sshfence cannot reach the dead host, so fencing never succeeds and the standby refuses to take over. Adding shell(/bin/true) as a second fencing method lets fencing "succeed" and failover proceed:
<property>
	<name>dfs.ha.fencing.methods</name>
	<value>sshfence
		shell(/bin/true)
	</value>
</property>

Start ZKFC:
hadoop-daemon.sh start zkfc
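With ZKFC running on both NameNode hosts, automatic failover can be exercised manually by simulating a crash. A sketch, assuming the node08/node09 layout above with nn1 currently active:

```shell
# On the active NameNode host (say node08), find the NameNode PID
jps | grep NameNode

# Simulate a crash (replace <NameNode-PID> with the PID from jps above)
kill -9 <NameNode-PID>

# Within a few seconds ZKFC should promote the standby; expect "active"
hdfs haadmin -getServiceState nn2

# Restart the killed NameNode; it should rejoin as standby
hadoop-daemon.sh start namenode
hdfs haadmin -getServiceState nn1
```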
