HBase Fully Distributed Installation

Fully distributed deployment (Fully Distributed)

Prerequisites

JDK version: as the compatibility table below shows, we need at least JDK 1.8.

HBase Version | JDK 7 | JDK 8 | JDK 9 (Non-LTS) | JDK 10 (Non-LTS) | JDK 11
2.0+          | No    | Yes   | HBASE-20264     | HBASE-20264      | HBASE-21110
1.2+          | Yes   | Yes   | HBASE-20264     | HBASE-20264      | HBASE-21110
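Before installing, it can help to gate the rollout on the JDK version. A minimal sketch; the helper name `jdk_ok` and the accepted patterns are our own, matching the JDK 8 requirement above:

```shell
jdk_ok() {
  # Accept JDK 8 in legacy (1.8.x) or new-style (8, 8.x) version strings
  case "$1" in
    1.8.*|8|8.*) return 0 ;;
    *) return 1 ;;
  esac
}

# The JDK used later in this guide:
jdk_ok "1.8.0_111" && echo "compatible"
```

On a real node you would feed it the version string parsed out of `java -version 2>&1`.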

Node layout

Host    | Master | RegionServer | Backup Master
hdfs-01 | Yes    | No           | No
hdfs-02 | No     | Yes          | No
hdfs-03 | No     | Yes          | Yes
hdfs-04 | No     | Yes          | Yes

Download

wget http://mirrors.tuna.tsinghua.edu.cn/apache/hbase/1.3.5/hbase-1.3.5-bin.tar.gz

Extract

tar -zxvf hbase-1.3.5-bin.tar.gz -C /data/servers/
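A step worth adding between download and extraction: verifying the tarball. Apache mirrors publish a `.sha512` file next to each release tarball; on a real node you would fetch it and run `sha512sum -c hbase-1.3.5-bin.tar.gz.sha512`. A self-contained demo of the same mechanism on a throwaway file:

```shell
# Create a file, record its checksum, then verify it the same way
# you would verify the downloaded tarball.
tmp=$(mktemp)
echo "payload" > "$tmp"
sha512sum "$tmp" > "$tmp.sha512"
sha512sum -c "$tmp.sha512"
```

Verification catches truncated or corrupted downloads before they turn into confusing extraction errors.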

Configuration

Enter the conf directory and edit hbase-env.sh

cd /data/servers/hbase-1.3.5/conf/

vim hbase-env.sh
# Set environment variables here.
# The java implementation to use.
export JAVA_HOME=/data/java/jdk1.8.0_111
# Do not use the bundled ZooKeeper; an existing ZooKeeper ensemble is already available, so use that instead
export HBASE_MANAGES_ZK=false
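The same two settings can also be appended non-interactively, which is handy when scripting the rollout. A sketch; `CONF` points at a temp file for this demo, whereas on a real node it would be /data/servers/hbase-1.3.5/conf/hbase-env.sh:

```shell
# Non-interactive equivalent of the vim edit above.
CONF=$(mktemp)
cat >> "$CONF" <<'EOF'
export JAVA_HOME=/data/java/jdk1.8.0_111
# Use the external ZooKeeper ensemble instead of the bundled one
export HBASE_MANAGES_ZK=false
EOF
grep HBASE_MANAGES_ZK "$CONF"
```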

Edit hbase-site.xml

<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://hdfs-01:8020/hbase</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>web-app-02:2181,hdfs-01:2181,hdfs-02:2181</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/data/servers/data_hbase_zookeeper</value>
  </property>
  <property>
    <name>hbase.unsafe.stream.capability.enforce</name>
    <value>false</value>
    <description>
      Controls whether HBase will check for stream capabilities (hflush/hsync).

      Disable this if you intend to run on LocalFileSystem, denoted by a rootdir
      with the 'file://' scheme, but be mindful of the NOTE below.

      WARNING: Setting this to false blinds you to potential data loss and
      inconsistent system state in the event of process and/or node failures. If
      HBase is complaining of an inability to use hsync or hflush it's most
      likely not a false positive.
    </description>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
</configuration>
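A mistyped host in hbase.zookeeper.quorum is a common cause of a master that starts and then hangs. A small sketch that splits the quorum value from the config above into host/port pairs (POSIX shell; 2181 is the default ZooKeeper client port):

```shell
# Split the quorum string on commas, then on the colon in each entry.
QUORUM="web-app-02:2181,hdfs-01:2181,hdfs-02:2181"
for peer in $(echo "$QUORUM" | tr ',' ' '); do
  host=${peer%%:*}
  port=${peer##*:}
  echo "zk peer: $host port: $port"
done
```

On a real node, checking each pair with `nc -z "$host" "$port"` before starting HBase catches unreachable peers early.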

Configure backup masters

Edit the backup-masters file (it does not exist by default; create it manually)

vim backup-masters
# add the following hosts
hdfs-03
hdfs-04

Configure regionservers

vim regionservers
# add the following hosts
hdfs-02
hdfs-03
hdfs-04
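start-hbase.sh logs into every host listed in conf/regionservers over passwordless SSH, so key-based login must work before the first start. A sketch that reads the same list (a demo copy is created here); on a real node, replace the echo with `ssh -o BatchMode=yes "$host" true` to verify each login:

```shell
# Demo copy of the regionservers file written above.
RS=$(mktemp)
printf 'hdfs-02\nhdfs-03\nhdfs-04\n' > "$RS"
# One line per regionserver host; this is the loop start-hbase.sh
# effectively performs.
while read -r host; do
  echo "would ssh to: $host"
done < "$RS"
```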

In Hadoop's conf/hdfs-site.xml, raise dfs.datanode.max.transfer.threads to at least 4096 (16384 is recommended when running HBase). This parameter caps the number of threads a DataNode may use to transfer block data; in older Hadoop versions it was named dfs.datanode.max.xcievers. Restart the DataNodes afterwards so the change takes effect.


<property>
  <name>dfs.datanode.max.transfer.threads</name>
  <value>16384</value>
</property>

scp the configured HBase directory to the other hosts

scp -r hbase-1.3.5/ hdfs-02:$PWD
scp -r hbase-1.3.5/ hdfs-03:$PWD
scp -r hbase-1.3.5/ hdfs-04:$PWD
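The three scp commands above can be collapsed into a loop; the echo makes this a dry run that only prints the commands, and dropping it performs the actual copy:

```shell
for host in hdfs-02 hdfs-03 hdfs-04; do
  echo scp -r hbase-1.3.5/ "$host:$PWD"
done
```

Using $PWD on the destination assumes the same directory layout (/data/servers) exists on every host, as it does in this cluster.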

Start HBase on hdfs-01

bin/start-hbase.sh

Check the processes. HMaster is the entry that confirms HBase is up; the other daemons belong to the Hadoop, ZooKeeper, and Flink services already running on this host.

jps -l
24864 org.apache.zookeeper.server.quorum.QuorumPeerMain
25920 org.apache.hadoop.hdfs.server.datanode.DataNode
15936 org.apache.hadoop.hbase.master.HMaster
51937 org.apache.flink.runtime.taskexecutor.TaskManagerRunner
26435 org.apache.hadoop.hdfs.tools.DFSZKFailoverController
31942 org.apache.hadoop.hdfs.server.namenode.NameNode
22602 org.apache.hadoop.util.RunJar
27466 org.apache.hadoop.yarn.server.nodemanager.NodeManager
8501 sun.tools.jps.Jps

Open the master node's web UI

http://hdfs-01:16010

Access HBase from the shell

root@hdfs-01:/data/servers/hbase-1.3.5# bin/hbase shell
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/data/servers/hbase-1.3.5/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/data/servers/hadoop-2.6.0-cdh5.14.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell; enter 'help' for list of supported commands.
Type "exit" to leave the HBase Shell
Version 1.3.5, rb59afe7b1dc650ff3a86034477b563734e8799a9, Wed Jun  5 15:57:14 PDT 2019
hbase(main):001:0> create 'test', 'cf'
0 row(s) in 1.4310 seconds

=> Hbase::Table - test
hbase(main):002:0> list 'test'
TABLE
test
1 row(s) in 0.0180 seconds

=> ["test"]
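To tear down the demo table, note that HBase requires a table to be disabled before it can be dropped. A sketch of the rest of the session, continuing the prompt numbering above (timing output omitted):

```
hbase(main):003:0> disable 'test'
hbase(main):004:0> drop 'test'
hbase(main):005:0> exit
```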

