Installing an HBase Cluster from a Tarball (CDH 5.0.2): Fully Distributed HBase Cluster Configuration

I. ZooKeeper must be configured and running first; otherwise HBase will not start.

II. Configure the HBase cluster

1. Configure hbase-env.sh; the settings below are the minimum required:

[hadoop@zk1 conf]$ vim hbase-env.sh

export JAVA_HOME=/usr/java/jdk1.7.0_60
export HBASE_HOME=/home/hadoop/hbase

export HBASE_OPTS="-XX:+UseConcMarkSweepGC"

export HBASE_OPTS="$HBASE_OPTS -Xloggc:${HBASE_HOME}/logs/gc-hbase.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=1 -XX:GCLogFileSize=512M"

export HBASE_MANAGES_ZK=false
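Setting HBASE_MANAGES_ZK=false tells HBase to use the external ZooKeeper quorum rather than start its own. Before launching HBase, it is worth confirming that every quorum member answers the four-letter `ruok` command. A minimal sketch, assuming the quorum hosts from this setup and the default client port 2181:

```shell
# Ask each ZooKeeper server "ruok"; a healthy server answers "imok".
# Host names and port 2181 are assumptions from this walkthrough.
check_zk() {
  for host in "$@"; do
    if [ "$(echo ruok | nc -w 2 "$host" 2181)" = "imok" ]; then
      echo "$host ok"
    else
      echo "$host not responding"
    fi
  done
}
# check_zk zk1 zk2 zk3
```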

 

2. Configure hbase-site.xml; any item annotated below as a default value can be omitted:

[hadoop@zk1 conf]$ vim hbase-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
        <property>
                <name>hbase.rootdir</name>
                <value>hdfs://hbasecluster/hbase</value>
        </property>
        <property>
                <name>hbase.zookeeper.quorum</name>
                <value>zk1,zk2,zk3</value>
        </property>
        <property>
                <name>hbase.cluster.distributed</name>
                <value>true</value>
        </property>

        <!-- the three properties above must be set in a fully distributed cluster -->

        <property>
                <name>hbase.tmp.dir</name>
                <value>/tmp/hbase</value><!-- a local path, not a URI; note /tmp may be cleared on reboot -->
        </property>

        <property>
                <name>hbase.master.port</name>
                <value>60000</value><!-- 60000 is the default value -->
        </property>
        <property>
                <name>hbase.master.info.port</name>
                <value>60010</value><!--60010 is the default value-->
        </property>

        <property>
                <name>hbase.regionserver.port</name>
                <value>60020</value><!-- 60020 is the default value -->
        </property>
        <property>
                <name>hbase.regionserver.info.port</name>
                <value>60030</value><!--60030 is the default value-->
        </property>

        <property>
                <name>zookeeper.session.timeout</name>
                <value>2000</value>
                <description>Session timeout, in milliseconds, between HBase daemons and ZooKeeper. The default is 90000; a value as low as 2000 means any pause longer than 2 s can expire a regionserver's session.</description>
        </property>
</configuration>
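When the same file is later copied to many hosts, it is handy to spot-check values from the shell. A small sed helper, sketched under the assumption that each `<value>` sits on the line directly after its `<name>`, as in the file above:

```shell
# Print the <value> that follows a given <name> in an hbase-site.xml.
# Assumes the layout above: <value> on the line after <name>.
get_prop() {
  # $1 = property name, $2 = path to hbase-site.xml
  sed -n "/<name>$1<\/name>/{n;s/.*<value>\(.*\)<\/value>.*/\1/p;}" "$2"
}
# get_prop hbase.rootdir /home/hadoop/hbase/conf/hbase-site.xml
```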

3. Configure the regionservers list file

[hadoop@zk1 conf]$ vim regionservers

dn1
dn2
dn3
dn4
dn5
dn6

 

After completing the three items above, copy the configuration to the other HBase hosts with scp. At this point HBase can be started, but only a single master runs, with no failover support.
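The copying step can be scripted by reading the host list already maintained in conf. A sketch, where the hadoop user and the /home/hadoop/hbase layout are assumptions from this walkthrough:

```shell
# Copy a directory to every host listed in a file (one host per line).
push_conf() {
  # $1 = file of target hosts (e.g. conf/regionservers), $2 = directory to copy
  while read -r host; do
    [ -n "$host" ] || continue   # skip blank lines
    scp -r "$2" "hadoop@${host}:$(dirname "$2")/"
  done < "$1"
}
# push_conf /home/hadoop/hbase/conf/regionservers /home/hadoop/hbase/conf
```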

Enabling failover for the HBase master is simple. Continue with step 4: all that is needed is a backup-masters file listing the backup master hosts.

4. Configure the backup-masters file for master HA

[hadoop@zk1 conf]$ vim backup-masters

zk1
zk2
zk3
"backup-masters" [新] 3L, 12C 已写入

Distribute the configuration again with scp and restart the HBase cluster; the HMaster processes on the other two nodes now start automatically.

[hadoop@zk1 conf]$ start-hbase.sh
starting master, logging to /home/hadoop/hbase/logs/hbase-hadoop-master-zk1.hadoop.software.yinghuan.com.out
dn3: regionserver running as process 1964. Stop it first.
dn6: regionserver running as process 1943. Stop it first.
dn5: regionserver running as process 1996. Stop it first.
dn4: regionserver running as process 1969. Stop it first.
dn2: regionserver running as process 1942. Stop it first.
dn1: starting regionserver, logging to /home/hadoop/hbase/logs/hbase-hadoop-regionserver-data1.hadoop.software.yinghuan.com.out
zk1: master running as process 3641. Stop it first.
zk2: starting master, logging to /home/hadoop/hbase/logs/hbase-hadoop-master-zk2.hadoop.software.yinghuan.com.out
zk3: starting master, logging to /home/hadoop/hbase/logs/hbase-hadoop-master-zk3.hadoop.software.yinghuan.com.out

Note that zk1 appears both as the primary master and in the backup-masters list, which is why the master on zk1 was started twice; removing zk1 from the backup-masters file fixes this.
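After fixing the list, a quick way to verify that exactly one HMaster runs per master host is to count the HMaster entries that jps reports over ssh. A sketch, assuming the passwordless ssh between hosts that start-hbase.sh already relies on:

```shell
# Count HMaster JVMs on each host via ssh + jps.
check_masters() {
  for host in "$@"; do
    echo "$host: $(ssh "$host" jps 2>/dev/null | grep -c HMaster) HMaster"
  done
}
# check_masters zk1 zk2 zk3   # expect "1 HMaster" on each host
```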
