Hadoop Beginner's Guide: Environment Setup (Part 4)

Please credit the original source when reposting.

Hadoop setup

阿牛's resource downloads
Hadoop Beginner's Guide: Environment Setup (Part 1)
Hadoop Beginner's Guide: Environment Setup (Part 2)
Hadoop Beginner's Guide: Environment Setup (Part 3)
Hadoop Beginner's Guide: Environment Setup (Part 4)

Machine configuration

[Figure 1: machine configuration]

+ This step applies to all machines.
There are four machines in total.
Hadoop is installed under /opt/soft/hadoop.
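
If the Hadoop archive from the download link above has not been unpacked to that path yet, a minimal sketch for every node looks like this; the archive name hadoop-2.5.1.tar.gz is only an assumption, substitute the version you actually downloaded.

  # run on every node; the archive name below is an assumption
  mkdir -p /opt/soft
  tar -xzf hadoop-2.5.1.tar.gz -C /opt/soft
  mv /opt/soft/hadoop-2.5.1 /opt/soft/hadoop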

node1

+ vi ~/.bash_profile
  export HADOOP_PREFIX=/opt/soft/hadoop
+ source ~/.bash_profile
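
A quick optional check that the variable is visible in the current shell and points at a working install:

  echo $HADOOP_PREFIX
  $HADOOP_PREFIX/bin/hadoop version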

# cd /opt/soft/hadoop/etc/hadoop

# vi mapred-env.sh
  export JAVA_HOME=/usr/java/jdk1.7.0_79
# vi hadoop-env.sh
  export JAVA_HOME=/usr/java/jdk1.7.0_79
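
Both files point at the same JDK, so it is worth confirming that this path really exists on every machine before going further:

  /usr/java/jdk1.7.0_79/bin/java -version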

# vi slaves
  node2
  node3
  node4

+ mkdir -p /opt/data/hadoop
+ mkdir -p /opt/data/journalnode
# vi hdfs-site.xml

  <!-- nameservice ID: the Apache docs use mycluster; here the value is hadoop -->
  <property>
    <name>dfs.nameservices</name>
    <value>hadoop</value>
  </property>

  <!-- the NameNodes that make up the nameservice -->
  <property>
    <name>dfs.ha.namenodes.hadoop</name>
    <value>nn1,nn2</value>
  </property>

  <!-- RPC addresses of the NameNodes -->
  <property>
    <name>dfs.namenode.rpc-address.hadoop.nn1</name>
    <value>node1:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.hadoop.nn2</name>
    <value>node2:8020</value>
  </property>

  <!-- web UI addresses -->
  <property>
    <name>dfs.namenode.http-address.hadoop.nn1</name>
    <value>node1:50070</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.hadoop.nn2</name>
    <value>node2:50070</value>
  </property>

  <!-- JournalNodes that hold the shared edit log -->
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://node2:8485;node3:8485;node4:8485/hadoop</value>
  </property>

  <property>
    <name>dfs.client.failover.proxy.provider.hadoop</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>

  <!-- fencing: cut off the old active NameNode over SSH during a failover -->
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/root/.ssh/id_dsa</value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.connect-timeout</name>
    <value>60000</value>
  </property>

  <!-- mkdir -p /opt/data/journalnode -->
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/opt/data/journalnode</value>
  </property>

  <!-- automatic failover -->
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>

  <!-- location of the ZooKeeper nodes -->
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>node1:2181,node2:2181,node3:2181</value>
  </property>

  <!-- and in your core-site.xml file:
    <property>
      <name>fs.defaultFS</name>
      <value>hdfs://hadoop</value>
    </property>
  -->
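
The sshfence method configured above only works if each NameNode host can reach the other one over SSH with the key named in dfs.ha.fencing.ssh.private-key-files. A quick check, assuming the key pair set up in the earlier parts:

  # on node1
  ssh -i /root/.ssh/id_dsa root@node2 hostname
  # on node2
  ssh -i /root/.ssh/id_dsa root@node1 hostname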


# vi core-site.xml

  <property>
    <name>ipc.client.connect.max.retries</name>
    <value>20</value>
  </property>

  <property>
    <name>ipc.client.connect.retry.interval</name>
    <value>5000</value>
    <description>
      Indicates the number of milliseconds a client will wait for
      before retrying to establish a server connection.
    </description>
  </property>

  <property>
    <name>ha.zookeeper.quorum</name>
    <value>node1:2181,node2:2181,node3:2181</value>
  </property>

  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/data/hadoop</value>
  </property>

  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop</value>
  </property>
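
Once both files are saved, Hadoop itself can tell you which values it actually picks up; a small optional sanity check:

  cd /opt/soft/hadoop
  bin/hdfs getconf -confKey fs.defaultFS          # should print hdfs://hadoop
  bin/hdfs getconf -confKey ha.zookeeper.quorum   # should print node1:2181,node2:2181,node3:2181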



The HDFS high-availability setup is now complete.
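
So far the files have only been edited on node1, so they still have to be copied to node2, node3 and node4, and the cluster has to be formatted and started once. The sequence below is only a sketch of the usual first-start procedure for a QJM-based HA cluster on Hadoop 2.x; it assumes the ZooKeeper cluster from the earlier parts is already running and that passwordless root SSH is in place.

  # on node1: copy the configuration to the other nodes
  for n in node2 node3 node4; do
    scp -r /opt/soft/hadoop/etc/hadoop/* root@$n:/opt/soft/hadoop/etc/hadoop/
  done

  cd /opt/soft/hadoop

  # on node2, node3 and node4: start the JournalNodes
  sbin/hadoop-daemon.sh start journalnode

  # on node1: format and start the first NameNode
  bin/hdfs namenode -format
  sbin/hadoop-daemon.sh start namenode

  # on node2: pull the metadata from nn1 for the standby NameNode
  bin/hdfs namenode -bootstrapStandby

  # on node1: initialise the HA state in ZooKeeper, then start HDFS
  bin/hdfs zkfc -formatZK
  sbin/start-dfs.sh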
YARN setup

cd /opt/soft/hadoop/etc/hadoop
# vi yarn-site.xml

  <property>
    <name>yarn.resourcemanager.ha.enabled</name>
    <value>true</value>
  </property>

  <!-- this is the YARN cluster ID, not the HDFS nameservice ID -->
  <property>
    <name>yarn.resourcemanager.cluster-id</name>
    <value>hadoop-yarn</value>
  </property>

  <!-- logical IDs of the ResourceManagers, one per ResourceManager you run -->
  <property>
    <name>yarn.resourcemanager.ha.rm-ids</name>
    <value>rm1,rm2</value>
  </property>

  <property>
    <name>yarn.resourcemanager.hostname.rm1</name>
    <value>node3</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm2</name>
    <value>node4</value>
  </property>

  <!-- ZooKeeper quorum; keep it consistent with your ZK cluster -->
  <property>
    <name>yarn.resourcemanager.zk-address</name>
    <value>node1:2181,node2:2181,node3:2181</value>
  </property>

  <!-- easy to miss: the auxiliary service must be mapreduce_shuffle -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
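
yarn-site.xml also has to be copied to the other machines (the same scp loop as in the HDFS section works). In Hadoop 2.x, start-yarn.sh only starts the ResourceManager on the host it is run on plus the NodeManagers, so the second ResourceManager is started by hand; a sketch:

  cd /opt/soft/hadoop

  # on node3
  sbin/start-yarn.sh

  # on node4: start the standby ResourceManager manually
  sbin/yarn-daemon.sh start resourcemanager

  # from any node: check which ResourceManager is active
  bin/yarn rmadmin -getServiceState rm1
  bin/yarn rmadmin -getServiceState rm2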
 

These are my study notes from 2016; leave a comment and let's learn from each other.

Follow my WeChat official account and grow from a rookie into an old hand.


你假笨
Join QQ group 462563010 to study together.
You are also welcome to follow spring4all.
