Hadoop HA + ZooKeeper + YARN

I. Installed versions:

JDK 1.8.0_111-b14
Hadoop 2.7.3
ZooKeeper 3.5.2

II. Installation steps:

    JDK installation and the cluster's prerequisite environment setup are not covered again here; see https://my.oschina.net/u/2500254/blog/806114

1. Hadoop configuration

    The Hadoop configuration mainly involves four files: core-site.xml, hdfs-site.xml, mapred-site.xml, and yarn-site.xml. Each file's settings are described in detail below.

  1. core-site.xml

    <configuration>
        <property>
            <name>fs.defaultFS</name>
            <value>hdfs://cluster1</value>
            <description>Logical name of the HDFS NameNode service (the NameNode HA nameservice); it must match dfs.nameservices in hdfs-site.xml.</description>
        </property>
        <property>
            <name>hadoop.tmp.dir</name>
            <value>/home/hadoop/bigdata/tmp</value>
            <description>Default base directory for NameNode and DataNode data in HDFS; the two paths can also be set separately in hdfs-site.xml.</description>
        </property>
        <property>
            <name>ha.zookeeper.quorum</name>
            <value>m7-psdc-kvm01:2181,m7-psdc-kvm02:2181,m7-psdc-kvm03:2181</value>
            <description>Addresses and ports of the ZooKeeper ensemble; the ensemble should have an odd number of nodes.</description>
        </property>
    </configuration>
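    A quick way to confirm that clients pick up the HA nameservice is to query the effective value with hdfs getconf (run from the Hadoop install directory, assuming HADOOP_CONF_DIR points at these files):

bin/hdfs getconf -confKey fs.defaultFS      # should print hdfs://cluster1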


  2. hdfs-site.xml (the key configuration)

    <configuration>
        <property>
            <name>dfs.name.dir</name>
            <value>/home/hadoop/bigdata/nn</value>
            <description>Directory where the NameNode stores its data.</description>
        </property>
        <property>
            <name>dfs.data.dir</name>
            <value>/home/hadoop/bigdata/dn</value>
            <description>Directory where the DataNode stores its data.</description>
        </property>
        <property>
            <name>dfs.replication</name>
            <value>2</value>
            <description>Number of block replicas; the default is 3.</description>
        </property>
        <property>
            <name>dfs.nameservices</name>
            <value>cluster1</value>
            <description>Logical name of the HDFS NameNode service (the NameNode HA nameservice).</description>
        </property>
        <property>
            <name>dfs.ha.namenodes.cluster1</name>
            <value>ns1,ns2</value>
            <description>Logical names of the NameNodes within this nameservice.</description>
        </property>
        <property>
            <name>dfs.namenode.rpc-address.cluster1.ns1</name>
            <value>m7-psdc-kvm01:8020</value>
            <description>RPC address and port of NameNode ns1.</description>
        </property>
        <property>
            <name>dfs.namenode.http-address.cluster1.ns1</name>
            <value>m7-psdc-kvm01:50070</value>
            <description>Web address and port of NameNode ns1.</description>
        </property>
        <property>
            <name>dfs.namenode.rpc-address.cluster1.ns2</name>
            <value>m7-psdc-kvm02:9000</value>
            <description>RPC address and port of NameNode ns2.</description>
        </property>
        <property>
            <name>dfs.namenode.http-address.cluster1.ns2</name>
            <value>m7-psdc-kvm02:50070</value>
            <description>Web address and port of NameNode ns2.</description>
        </property>
        <property>
            <name>dfs.namenode.shared.edits.dir</name>
            <value>qjournal://m7-psdc-kvm01:8485;m7-psdc-kvm02:8485;m7-psdc-kvm03:8485;m7-psdc-kvm04:8485/cluster1</value>
            <description>URI of the JournalNode group the NameNodes read from and write to. The active NameNode writes its edit log to these JournalNodes, while the standby NameNode reads those edits and applies them to its in-memory namespace.</description>
        </property>
        <property>
            <name>dfs.journalnode.edits.dir</name>
            <value>/home/hadoop/bigdata/journal</value>
            <description>Local directory on each JournalNode where edit logs and other state are stored.</description>
        </property>
        <property>
            <name>dfs.ha.automatic-failover.enabled</name>
            <value>true</value>
            <description>Enable automatic failover. It relies on the ZooKeeper ensemble and the ZKFailoverController (ZKFC), a ZooKeeper client that monitors NameNode state; every node running a NameNode must also run a ZKFC.</description>
        </property>
        <property>
            <name>dfs.client.failover.proxy.provider.cluster1</name>
            <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
            <description>Java class that HDFS clients use to locate the active NameNode.</description>
        </property>
        <property>
            <name>dfs.ha.fencing.methods</name>
            <value>sshfence</value>
            <description>Fencing method used to prevent split-brain in the HA cluster (two masters serving clients at the same time, leaving the system in an inconsistent state). In HDFS HA the JournalNodes allow only one NameNode to write, so two active NameNodes cannot both commit edits; however, during a failover the previous active NameNode may still be handling client RPC requests, so a fencing mechanism is needed to kill it. The common method is sshfence, which also requires the SSH key (dfs.ha.fencing.ssh.private-key-files) and a connect timeout.</description>
        </property>
        <property>
            <name>dfs.ha.fencing.ssh.private-key-files</name>
            <value>/home/hadoop/.ssh/id_rsa</value>
            <description>Private key used for the SSH fencing connection.</description>
        </property>
        <property>
            <name>dfs.ha.fencing.ssh.connect-timeout</name>
            <value>30000</value>
            <description>SSH connect timeout in milliseconds.</description>
        </property>
    </configuration>
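    Once both NameNodes and the ZKFCs are running (see section III), the HA state can be checked from the command line using the ns1/ns2 ids defined above:

bin/hdfs haadmin -getServiceState ns1    # prints active or standby
bin/hdfs haadmin -getServiceState ns2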


     

  3. mapred-site.xml

    <configuration>
        <property>
            <name>mapreduce.framework.name</name>
            <value>yarn</value>
            <description>Run MapReduce on YARN, a fundamental difference from Hadoop 1.</description>
        </property>
        <property>
            <name>mapreduce.jobhistory.address</name>
            <value>m7-psdc-kvm01:10020</value>
            <description>IPC address and port of the MapReduce JobHistory Server.</description>
        </property>
        <property>
            <name>mapreduce.jobhistory.webapp.address</name>
            <value>m7-psdc-kvm01:19888</value>
            <description>Web address for viewing completed MapReduce jobs on the history server; the JobHistory Server must be running for this to work.</description>
        </property>
        <property>
            <name>mapreduce.jobhistory.done-dir</name>
            <value>/home/hadoop/bigdata/done</value>
            <description>Directory where the MR JobHistory Server keeps logs of completed jobs; default: /mr-history/done.</description>
        </property>
        <property>
            <name>mapreduce.jobhistory.intermediate-done-dir</name>
            <value>hdfs://cluster1/mapred/tmp</value>
            <description>Directory where logs of running MapReduce jobs are staged; default: /mr-history/tmp.</description>
        </property>
    </configuration>
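    Note that the JobHistory Server is not started by start-yarn.sh; a minimal sketch of starting it on m7-psdc-kvm01 with the stock Hadoop 2.7.3 scripts:

sbin/mr-jobhistory-daemon.sh start historyserver    # web UI then answers on port 19888
jps                                                 # should now list a JobHistoryServer process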
    
  4. yarn-site.xml

    <configuration>
        <property>
            <name>yarn.resourcemanager.ha.enabled</name>
            <value>true</value>
        </property>
        <property>
            <name>yarn.resourcemanager.cluster-id</name>
            <value>yarn-cluster1</value>
        </property>
        <property>
            <name>yarn.resourcemanager.ha.rm-ids</name>
            <value>rm1,rm2</value>
        </property>
        <property>
            <name>yarn.resourcemanager.hostname.rm1</name>
            <value>m7-psdc-kvm03</value>
        </property>
        <property>
            <name>yarn.resourcemanager.hostname.rm2</name>
            <value>m7-psdc-kvm02</value>
        </property>
        <property>
            <name>yarn.resourcemanager.recovery.enabled</name>
            <value>true</value>
        </property>
        <property>
            <name>yarn.resourcemanager.store.class</name>
            <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
        </property>
        <property>
            <name>yarn.resourcemanager.zk-address</name>
            <value>m7-psdc-kvm01:2181,m7-psdc-kvm02:2181,m7-psdc-kvm03:2181</value>
        </property>
        <property>
            <name>yarn.resourcemanager.ha.automatic-failover.embedded</name>
            <value>true</value>
        </property>
        <property>
            <name>yarn.resourcemanager.admin.address.rm1</name>
            <value>m7-psdc-kvm03:8033</value>
        </property>
        <property>
            <name>yarn.resourcemanager.admin.address.rm2</name>
            <value>m7-psdc-kvm02:8033</value>
        </property>
        <property>
            <name>yarn.resourcemanager.address.rm1</name>
            <value>m7-psdc-kvm03:8032</value>
        </property>
        <property>
            <name>yarn.resourcemanager.address.rm2</name>
            <value>m7-psdc-kvm02:8032</value>
        </property>
        <property>
            <name>yarn.resourcemanager.resource-tracker.address.rm1</name>
            <value>m7-psdc-kvm03:8031</value>
        </property>
        <property>
            <name>yarn.resourcemanager.resource-tracker.address.rm2</name>
            <value>m7-psdc-kvm02:8031</value>
        </property>
        <property>
            <name>yarn.resourcemanager.webapp.address.rm1</name>
            <value>m7-psdc-kvm03:8088</value>
        </property>
        <property>
            <name>yarn.resourcemanager.webapp.address.rm2</name>
            <value>m7-psdc-kvm02:8088</value>
        </property>
        <property>
            <name>yarn.resourcemanager.scheduler.address.rm1</name>
            <value>m7-psdc-kvm03:8030</value>
        </property>
        <property>
            <name>yarn.resourcemanager.scheduler.address.rm2</name>
            <value>m7-psdc-kvm02:8030</value>
        </property>
        <property>
            <name>yarn.nodemanager.aux-services</name>
            <value>mapreduce_shuffle</value>
            <description>Auxiliary service the NodeManagers run for the MapReduce shuffle.</description>
        </property>
        <property>
            <name>yarn.nodemanager.pmem-check-enabled</name>
            <value>false</value>
        </property>
        <property>
            <name>yarn.log-aggregation-enable</name>
            <value>true</value>
        </property>
        <property>
            <name>yarn.nodemanager.delete.debug-delay-sec</name>
            <value>86400</value>
        </property>
        <property>
            <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
            <value>org.apache.hadoop.mapred.ShuffleHandler</value>
        </property>
        <property>
            <name>yarn.nodemanager.resource.memory-mb</name>
            <value>102400</value>
        </property>
        <property>
            <name>yarn.nodemanager.resource.cpu-vcores</name>
            <value>20</value>
        </property>
        <property>
            <name>yarn.scheduler.maximum-allocation-mb</name>
            <value>102400</value>
        </property>
    </configuration>
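    With both ResourceManagers up, their HA roles can be verified with the rm1/rm2 ids defined above:

bin/yarn rmadmin -getServiceState rm1    # prints active or standby
bin/yarn rmadmin -getServiceState rm2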

2. ZooKeeper configuration

    The ZooKeeper configuration mainly involves two files: zoo.cfg and myid.

  1. conf/zoo.cfg: first copy zoo_sample.cfg to zoo.cfg

    cp zoo_sample.cfg zoo.cfg
  2. vi zoo.cfg

    dataDir: where ZooKeeper keeps its snapshot data
    dataLogDir: where ZooKeeper writes its transaction logs

    initLimit=10
    syncLimit=5
    clientPort=2181
    tickTime=2000
    dataDir=/usr/zookeeper/tmp/data
    dataLogDir=/usr/zookeeper/tmp/log
    server.1=master:2888:3888
    server.2=slave1:2888:3888
    server.3=slave2:2888:3888
  3. On each of the [master, slave1, slave2] nodes, create a file named myid in the dataDir directory:

vi myid

    On master write: 1

    On slave1 write: 2

    On slave2 write: 3

    For example:

[hadoop@master data]$ vi myid 

1
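    Equivalently, the file can be written with one command per node (a sketch, assuming the dataDir from zoo.cfg above):

echo 1 > /usr/zookeeper/tmp/data/myid    # on master
echo 2 > /usr/zookeeper/tmp/data/myid    # on slave1
echo 3 > /usr/zookeeper/tmp/data/myid    # on slave2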

III. Starting the cluster

 1. Starting the ZooKeeper ensemble

    1. Start ZooKeeper on all three nodes:

bin/zkServer.sh start
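    If passwordless ssh between the nodes is set up, the three daemons can also be started from a single node (a sketch; the install path is the one shown in the zkServer.sh output below, and JAVA_HOME must be visible to non-interactive shells):

for h in master slave1 slave2; do
    ssh $h "/usr/local/zookeeper-3.5.2-alpha/bin/zkServer.sh start"
done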

    2. Check the ensemble status with zkServer.sh status; there should be one leader and two followers.

[hadoop@master hadoop-2.7.3]$ zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.5.2-alpha/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: follower
[hadoop@slave1 root]$ zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.5.2-alpha/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: leader
[hadoop@slave2 root]$ zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.5.2-alpha/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: follower

    3. Verify ZooKeeper (optional): run zkCli.sh

[hadoop@slave1 root]$ zkCli.sh
Connecting to localhost:2181
2016-12-18 02:05:03,115 [myid:] - INFO  [main:Environment@109] - Client environment:zookeeper.version=3.5.2-alpha-1750793, built on 06/30/2016 13:15 GMT
2016-12-18 02:05:03,118 [myid:] - INFO  [main:Environment@109] - Client environment:host.name=salve1
2016-12-18 02:05:03,118 [myid:] - INFO  [main:Environment@109] - Client environment:java.version=1.8.0_111
2016-12-18 02:05:03,120 [myid:] - INFO  [main:Environment@109] - Client environment:java.vendor=Oracle Corporation
2016-12-18 02:05:03,120 [myid:] - INFO  [main:Environment@109] - Client environment:java.home=/usr/local/jdk1.8.0_111/jre
2016-12-18 02:05:03,120 [myid:] - INFO  [main:Environment@109] - Client environment:java.class.path=/usr/local/zookeeper-3.5.2-alpha/bin/../build/classes:/usr/local/zookeeper-3.5.2-alpha/bin/../build/lib/*.jar:/usr/local/zookeeper-3.5.2-alpha/bin/../lib/slf4j-log4j12-1.7.5.jar:/usr/local/zookeeper-3.5.2-alpha/bin/../lib/slf4j-api-1.7.5.jar:/usr/local/zookeeper-3.5.2-alpha/bin/../lib/servlet-api-2.5-20081211.jar:/usr/local/zookeeper-3.5.2-alpha/bin/../lib/netty-3.10.5.Final.jar:/usr/local/zookeeper-3.5.2-alpha/bin/../lib/log4j-1.2.17.jar:/usr/local/zookeeper-3.5.2-alpha/bin/../lib/jline-2.11.jar:/usr/local/zookeeper-3.5.2-alpha/bin/../lib/jetty-util-6.1.26.jar:/usr/local/zookeeper-3.5.2-alpha/bin/../lib/jetty-6.1.26.jar:/usr/local/zookeeper-3.5.2-alpha/bin/../lib/javacc.jar:/usr/local/zookeeper-3.5.2-alpha/bin/../lib/jackson-mapper-asl-1.9.11.jar:/usr/local/zookeeper-3.5.2-alpha/bin/../lib/jackson-core-asl-1.9.11.jar:/usr/local/zookeeper-3.5.2-alpha/bin/../lib/commons-cli-1.2.jar:/usr/local/zookeeper-3.5.2-alpha/bin/../zookeeper-3.5.2-alpha.jar:/usr/local/zookeeper-3.5.2-alpha/bin/../src/java/lib/*.jar:/usr/local/zookeeper-3.5.2-alpha/bin/../conf:.:/usr/local/jdk1.8.0_111/lib/dt.jar:/usr/local/jdk1.8.0_111/lib/tools.jar:/usr/local/zookeeper-3.5.2-alpha/bin:/usr/local/hadoop-2.7.3/bin
2016-12-18 02:05:03,120 [myid:] - INFO  [main:Environment@109] - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2016-12-18 02:05:03,121 [myid:] - INFO  [main:Environment@109] - Client environment:java.io.tmpdir=/tmp
2016-12-18 02:05:03,121 [myid:] - INFO  [main:Environment@109] - Client environment:java.compiler=
2016-12-18 02:05:03,121 [myid:] - INFO  [main:Environment@109] - Client environment:os.name=Linux
2016-12-18 02:05:03,121 [myid:] - INFO  [main:Environment@109] - Client environment:os.arch=amd64
2016-12-18 02:05:03,121 [myid:] - INFO  [main:Environment@109] - Client environment:os.version=3.10.0-327.22.2.el7.x86_64
2016-12-18 02:05:03,121 [myid:] - INFO  [main:Environment@109] - Client environment:user.name=hadoop
2016-12-18 02:05:03,121 [myid:] - INFO  [main:Environment@109] - Client environment:user.home=/home/hadoop
2016-12-18 02:05:03,121 [myid:] - INFO  [main:Environment@109] - Client environment:user.dir=/tmp/hsperfdata_hadoop
2016-12-18 02:05:03,121 [myid:] - INFO  [main:Environment@109] - Client environment:os.memory.free=52MB
2016-12-18 02:05:03,123 [myid:] - INFO  [main:Environment@109] - Client environment:os.memory.max=228MB
2016-12-18 02:05:03,123 [myid:] - INFO  [main:Environment@109] - Client environment:os.memory.total=57MB
2016-12-18 02:05:03,146 [myid:] - INFO  [main:ZooKeeper@855] - Initiating client connection, connectString=localhost:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@593634ad
Welcome to ZooKeeper!
2016-12-18 02:05:03,171 [myid:localhost:2181] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@1113] - Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
JLine support is enabled
2016-12-18 02:05:03,243 [myid:localhost:2181] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@948] - Socket connection established, initiating session, client: /127.0.0.1:56184, server: localhost/127.0.0.1:2181
2016-12-18 02:05:03,252 [myid:localhost:2181] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@1381] - Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x200220f5fe30060, negotiated timeout = 30000

WATCHER::

WatchedEvent state:SyncConnected type:None path:null
[zk: localhost:2181(CONNECTED) 0] 

2. Starting the Hadoop cluster

    1. First-time startup

        1.1 Start the JournalNode daemons on all three nodes, then run jps; a JournalNode process should appear.

sbin/hadoop-daemon.sh start journalnode
jps

JournalNode

        1.2 Format the NameNode on master (either NameNode node will do), then start the NameNode on that node.

bin/hdfs namenode -format
sbin/hadoop-daemon.sh start namenode

        1.3 On the other NameNode node, slave1, pull the metadata over from master:

bin/hdfs namenode -bootstrapStandby

        1.4 Stop all HDFS services:

sbin/stop-dfs.sh

        1.5 Initialize the HA state in ZooKeeper (zkfc):

bin/hdfs zkfc -formatZK
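        formatZK creates the HA state znode under /hadoop-ha in ZooKeeper; it can be inspected from any node (optional check):

zkCli.sh -server master:2181
# then, at the zk prompt:
ls /hadoop-ha    # should list [cluster1]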

        1.6 Start HDFS:

sbin/start-dfs.sh

        1.7 Start YARN:

sbin/start-yarn.sh
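        After the first startup it is worth confirming that automatic failover actually works; a rough sketch (the pid below is a placeholder for whichever NameNode is currently active):

bin/hdfs haadmin -getServiceState ns1    # find the active NameNode, say it is ns1
jps                                      # on that node, note the NameNode pid
kill -9 <namenode-pid>                   # simulate a crash of the active NameNode
bin/hdfs haadmin -getServiceState ns2    # after a few seconds it should report active
sbin/hadoop-daemon.sh start namenode     # bring the killed NameNode back; it rejoins as standby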

    2. Subsequent startups

        2.1 Just start HDFS and YARN directly; the NameNode, DataNode, JournalNode, and DFSZKFailoverController daemons all start automatically.

        

sbin/start-dfs.sh

 

        2.2 Start YARN:

sbin/start-yarn.sh
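        To check that YARN can actually schedule work after startup, a sample job from the bundled examples jar can be submitted (path as shipped in the Hadoop 2.7.3 distribution):

bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar pi 5 10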

IV. Check the processes on each node

    4.1 master

[hadoop@master hadoop-2.7.3]$ jps
26544 QuorumPeerMain
25509 JournalNode
25704 DFSZKFailoverController
26360 Jps
25306 DataNode
25195 NameNode
25886 ResourceManager
25999 NodeManager

    4.2 slave1

[hadoop@slave1 root]$ jps
2289 DFSZKFailoverController
9400 QuorumPeerMain
2601 Jps
2060 DataNode
2413 NodeManager
2159 JournalNode
1983 NameNode

    4.3 slave2

[hadoop@slave2 root]$ jps
11984 DataNode
12370 Jps
2514 QuorumPeerMain
12083 JournalNode
12188 NodeManager

V. Web UIs

    http://master:8088/cluster/cluster, the YARN resource management page

    http://master:50070/dfshealth.html#tab-overview, the HDFS page for the NameNode on master

    http://slave1:50070/dfshealth.html#tab-overview, the HDFS page for the NameNode on slave1
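    The active/standby role shown on the 50070 pages can also be read from the NameNode JMX endpoint, which is handy for scripting (assuming the standard NameNodeStatus MBean of Hadoop 2.7):

curl 'http://master:50070/jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus'
curl 'http://slave1:50070/jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus'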

 

 

 

 

