(18) Big Data Study Notes: HA

Topic: High Availability (HA)

I. Keep all server clocks in sync

Run date -s 2019-04-21 on every machine to set its clock to 2019-04-21 00:00:00 (passing only a date resets the time-of-day to midnight).
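The manual sync above can be applied to every machine in one pass; below is a sketch assuming the node1..node5 hostnames used in the configs that follow (in production an NTP service such as chrony is the safer choice):

```shell
# Dry run: print the command that would sync each node's clock over ssh.
# Remove the `echo` to actually execute (requires root and passwordless ssh).
for node in node1 node2 node3 node4 node5; do
  echo ssh root@"$node" "date -s '2019-04-21 00:00:00'"
done
```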

II. Hadoop HA

1. HDFS HA

All Hadoop configuration files live under /usr/local/hadoop-2.8.4/etc/hadoop
    
-------core-site.xml------------------------

    <configuration>
      <property>
        <name>fs.defaultFS</name>
        <value>hdfs://mycluster</value>
      </property>
      <property>
        <name>ha.zookeeper.quorum</name>
        <value>node3:2181,node4:2181,node5:2181</value>
      </property>
      <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/hadoop</value>
      </property>
    </configuration>

    
--hdfs-site.xml----------------------------

    <configuration>
      <property>
        <name>dfs.nameservices</name>
        <value>mycluster</value>
      </property>
      <property>
        <name>dfs.ha.namenodes.mycluster</name>
        <value>nn1,nn2</value>
      </property>
      <property>
        <name>dfs.namenode.rpc-address.mycluster.nn1</name>
        <value>node1:8020</value>
      </property>
      <property>
        <name>dfs.namenode.rpc-address.mycluster.nn2</name>
        <value>node2:8020</value>
      </property>
      <property>
        <name>dfs.namenode.http-address.mycluster.nn1</name>
        <value>node1:50070</value>
      </property>
      <property>
        <name>dfs.namenode.http-address.mycluster.nn2</name>
        <value>node2:50070</value>
      </property>
      <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://node5:8485;node3:8485;node4:8485/mycluster</value>
      </property>
      <property>
        <name>dfs.client.failover.proxy.provider.mycluster</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
      </property>
      <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence</value>
      </property>
      <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/root/.ssh/id_dsa</value>
      </property>
      <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
      </property>
    </configuration>

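The sshfence method only works if each NameNode can ssh into the other without a password using the private key configured above. A dry-run sketch of the one-time key setup (hostnames and key path taken from the configs; remove the `echo` prefixes to execute as root):

```shell
# One-time setup for sshfence: generate the key on each NameNode and push
# the public key to both NameNodes. Dry run -- each line only prints the
# command it would run.
echo ssh-keygen -t dsa -P "''" -f /root/.ssh/id_dsa
for nn in node1 node2; do
  echo ssh-copy-id -i /root/.ssh/id_dsa.pub root@"$nn"
done
```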
    
--yarn-site.xml-----------------------------------

    <configuration>
      <property>
        <name>yarn.resourcemanager.recovery.enabled</name>
        <value>true</value>
      </property>
      <property>
        <name>yarn.resourcemanager.store.class</name>
        <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
      </property>
      <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
      </property>
      <property>
        <name>yarn.resourcemanager.ha.enabled</name>
        <value>true</value>
      </property>
      <property>
        <name>yarn.resourcemanager.cluster-id</name>
        <value>yarncluster</value>
      </property>
      <property>
        <name>yarn.resourcemanager.ha.rm-ids</name>
        <value>rm1,rm2</value>
      </property>
      <property>
        <name>yarn.resourcemanager.hostname.rm1</name>
        <value>node1</value>
      </property>
      <property>
        <name>yarn.resourcemanager.hostname.rm2</name>
        <value>node2</value>
      </property>
      <property>
        <name>yarn.resourcemanager.zk-address</name>
        <value>node3,node4,node5</value>
      </property>
      <property>
        <name>yarn.scheduler.maximum-allocation-mb</name>
        <value>32768</value>
      </property>
      <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>32768</value>
      </property>
      <property>
        <name>yarn.scheduler.minimum-allocation-mb</name>
        <value>4096</value>
      </property>
      <property>
        <name>yarn.nodemanager.resource.cpu-vcores</name>
        <value>24</value>
      </property>
      <property>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
      </property>
      <property>
        <name>yarn.nodemanager.remote-app-log-dir</name>
        <value>/tmp/yarn-logs</value>
      </property>
    </configuration>

After editing, distribute the configuration files to every node. Start ZooKeeper first; then start-all.sh brings up everything else.
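On the very first start, the cluster also has to be formatted before start-all.sh will work. The sketch below shows the usual order; each command must be run on the node named in its comment, and the `run` wrapper only echoes unless RUN=1 is set, so the script is a safe dry run as written:

```shell
#!/usr/bin/env bash
# First-time HDFS/YARN HA bring-up sketch. `run` echoes unless RUN=1.
run() { if [ "${RUN:-0}" = 1 ]; then "$@"; else echo "WOULD RUN: $*"; fi; }

run zkServer.sh start                   # node3/node4/node5: ZooKeeper first
run hadoop-daemon.sh start journalnode  # node3/node4/node5: JournalNodes
run hdfs namenode -format               # node1: format the first NameNode
run hadoop-daemon.sh start namenode     # node1
run hdfs namenode -bootstrapStandby     # node2: copy metadata from node1
run hdfs zkfc -formatZK                 # node1: init failover state in ZK
run start-all.sh                        # node1: start HDFS and YARN
run hdfs haadmin -getServiceState nn1   # verify: prints active or standby
run yarn rmadmin -getServiceState rm1   # verify the ResourceManagers too
```

After this one-time initialization, routine restarts need only ZooKeeper plus start-all.sh, as noted above.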

III. HBase HA

Edit the configuration files, distribute them to all nodes, then start.
Note: two HMasters should run; the second one must be started manually.
Note: the HBase release must match the Hadoop version —
hbase-2.1.4 pairs with hadoop-2.8.4.
Usually, copy Hadoop's core-site.xml and hdfs-site.xml into HBase's conf directory.
Edit hbase-env.sh
Edit hbase-site.xml

--hbase-env.sh--------------------
    export JAVA_HOME=/usr/java/jdk1.8.0_201
    export HBASE_MANAGES_ZK=false
    # disable HBase's bundled ZooKeeper and use the cluster's ZooKeeper instead
--hbase-site.xml--------------------

    <configuration>
      <property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
      </property>
      <property>
        <name>hbase.rootdir</name>
        <value>hdfs://mycluster/hbase</value>
      </property>
      <property>
        <name>hbase.zookeeper.quorum</name>
        <value>node3,node4,node5</value>
      </property>
      <property>
        <name>hbase.zookeeper.property.clientPort</name>
        <value>2181</value>
      </property>
      <property>
        <name>zookeeper.session.timeout</name>
        <value>120000</value>
      </property>
      <property>
        <name>hbase.zookeeper.property.tickTime</name>
        <value>6000</value>
      </property>
    </configuration>

Start HBase
The backup master must be started separately on the second server:
hbase-daemon.sh start master
The web UI shows the cluster status:
http://192.168.109.132:16010/master-status
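To confirm the failover pair is healthy, ZooKeeper can be queried directly: the active master registers the /hbase/master znode and standbys appear under /hbase/backup-masters. A dry-run sketch (remove the `echo` prefixes to execute; assumes zkCli.sh is on the PATH):

```shell
# Dry run: print the ZooKeeper queries that reveal the HBase master roles.
echo zkCli.sh -server node3:2181 get /hbase/master
echo zkCli.sh -server node3:2181 ls /hbase/backup-masters
```

Killing the active HMaster and reloading the status page above should show the backup master taking over.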
