Download hbase-2.2.1-bin.tar.gz and run the installation (extraction) command:
[hadoop@hadoop01 ~]$ tar -zxvf hbase-2.2.1-bin.tar.gz
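Optionally, you can confirm the download is intact by comparing the tarball's SHA-512 checksum with the one published on the Apache mirror (this assumes the matching .sha512 file was downloaded alongside the tarball):
[hadoop@hadoop01 ~]$ sha512sum hbase-2.2.1-bin.tar.gz
[hadoop@hadoop01 ~]$ cat hbase-2.2.1-bin.tar.gz.sha512
The two values should match.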
Check the installation directory:
[hadoop@hadoop01 ~]$ cd hbase-2.2.1
[hadoop@hadoop01 hbase-2.2.1]$ ls -l
total 872
drwxr-xr-x. 4 hadoop hadoop 4096 Sep 10 14:26 bin
-rw-r--r--. 1 hadoop hadoop 128367 Sep 9 09:16 CHANGES.md
drwxr-xr-x. 2 hadoop hadoop 178 Sep 10 14:26 conf
drwxr-xr-x. 7 hadoop hadoop 80 Sep 10 14:28 hbase-webapps
-rw-rw-r--. 1 hadoop hadoop 262 May 2 2018 LEGAL
drwxrwxr-x. 6 hadoop hadoop 8192 Sep 21 23:30 lib
-rw-rw-r--. 1 hadoop hadoop 129312 Sep 10 14:31 LICENSE.txt
-rw-rw-r--. 1 hadoop hadoop 519346 Sep 10 14:31 NOTICE.txt
-rw-r--r--. 1 hadoop hadoop 1477 Sep 9 09:16 README.txt
-rw-r--r--. 1 hadoop hadoop 82873 Sep 9 09:16 RELEASENOTES.md
Go into the conf directory under the installation directory and edit the hbase-env.sh, hbase-site.xml, and regionservers files:
[hadoop@hadoop01 hbase-2.2.1]$ cd conf
[hadoop@hadoop01 conf]$ ls
hadoop-metrics2-hbase.properties hbase-site.xml
hbase-env.cmd log4j.properties
hbase-env.sh regionservers
hbase-policy.xml
The hbase-env.sh file:
[hadoop@hadoop01 conf]$ gedit hbase-env.sh
Changes to make:
export JAVA_HOME=/usr/java/jdk1.8.0_11/
export HBASE_MANAGES_ZK=false #false means HBase uses an external (standalone) ZooKeeper ensemble
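The JDK path above is specific to this environment; if you are unsure where your own JDK is installed, one optional way to check (not part of the original steps) is:
[hadoop@hadoop01 conf]$ readlink -f $(which java)
The printed path ends in bin/java; JAVA_HOME is the directory above bin.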
The hbase-site.xml file:
[hadoop@hadoop01 conf]$ gedit hbase-site.xml
Contents to edit:
<configuration>
  <property>
    <!-- host and port of the HBase master -->
    <name>hbase.master</name>
    <value>hadoop01:16010</value>
  </property>
  <property>
    <!-- shared directory where HBase persists its data -->
    <name>hbase.rootdir</name>
    <value>hdfs://hadoop01:9000/hbase</value>
  </property>
  <property>
    <!-- whether to run in distributed mode; false means standalone -->
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <!-- ZooKeeper quorum addresses -->
    <name>hbase.zookeeper.quorum</name>
    <value>hadoop01,hadoop02,hadoop03</value>
  </property>
  <property>
    <!-- maximum clock skew allowed between nodes -->
    <name>hbase.master.maxclockskew</name>
    <value>180000</value>
  </property>
  <property>
    <name>hbase.unsafe.stream.capability.enforce</name>
    <value>false</value>
    <description>
      Controls whether HBase will check for stream capabilities (hflush/hsync).
      Disable this if you intend to run on LocalFileSystem, denoted by a rootdir
      with the 'file://' scheme, but be mindful of the NOTE below.
      WARNING: Setting this to false blinds you to potential data loss and
      inconsistent system state in the event of process and/or node failures.
      If HBase is complaining of an inability to use hsync or hflush it's most
      likely not a false positive.
    </description>
  </property>
</configuration>
Note: our test environment runs HBase 2.2.1, so even though this is a cluster deployment we simply set this parameter to false and restart the HBase Master, after which everything works normally. Using an HBase version earlier than 2.0.0 would also avoid this error.
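Since hbase.rootdir points at HDFS, it is worth confirming that the NameNode address used above is reachable before starting HBase (a quick sanity check, assuming the hdfs command is on the PATH from the earlier Hadoop setup; HBase creates the /hbase directory itself on first start):
[hadoop@hadoop01 conf]$ hdfs dfs -ls hdfs://hadoop01:9000/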
The regionservers file:
[root@hadoop01 conf]# gedit regionservers
Edit its contents to:
hadoop02
hadoop03
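start-hbase.sh starts a region server on every host listed in this file over SSH, so passwordless SSH from hadoop01 to those hosts should already be in place; an optional quick check:
[hadoop@hadoop01 conf]$ ssh hadoop02 hostname
[hadoop@hadoop01 conf]$ ssh hadoop03 hostname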
Configure environment variables:
[hadoop@hadoop01 ~]$ gedit ~/.bash_profile
Add the following lines to the file:
#hbase
export HBASE_HOME=/home/hadoop/hbase-2.2.1
export PATH=$HBASE_HOME/bin:$PATH
export HADOOP_CLASSPATH=$HBASE_HOME/lib/*
[hadoop@hadoop01 ~]$ source ~/.bash_profile
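After sourcing the profile, confirm the variables took effect and that the hbase command resolves on the PATH:
[hadoop@hadoop01 ~]$ echo $HBASE_HOME
[hadoop@hadoop01 ~]$ hbase version
echo should print /home/hadoop/hbase-2.2.1 and hbase version should report 2.2.1.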
Copy the installation to the other nodes:
[hadoop@hadoop01 ~]$ scp -r hbase-2.2.1 hadoop02:~/
[hadoop@hadoop01 ~]$ scp -r hbase-2.2.1 hadoop03:~/
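If more region servers are added later, a small loop avoids repeating the scp line (the hostnames below are just the two used above; note that ~/.bash_profile still has to be updated on each node separately):
[hadoop@hadoop01 ~]$ for node in hadoop02 hadoop03; do scp -r ~/hbase-2.2.1 ${node}:~/; done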
Commands to start and stop HBase:
[hadoop@hadoop01 ~]$ cd ~/hbase-2.2.1
[hadoop@hadoop01 hbase-2.2.1]$ bin/start-hbase.sh --start HBase
[hadoop@hadoop01 hbase-2.2.1]$ jps --show running processes
16720 ResourceManager
16866 NodeManager
19427 QuorumPeerMain --ZooKeeper process
22387 Jps
16455 SecondaryNameNode
16075 NameNode
22062 HMaster --HBase process
[hadoop@hadoop01 hbase-2.2.1]$ bin/stop-hbase.sh
Note: if startup reports the following exception:
java.lang.IllegalStateException: The procedure WAL relies on the ability to hsync for proper operation during component failures, but the underlying filesystem does not support doing so. Please check the config value of 'hbase.procedure.store.wal.use.hsync' to set the desired level of robustness and ensure the config value of 'hbase.wal.dir' points to a FileSystem mount that can provide it.
add the hbase.unsafe.stream.capability.enforce property shown above to hbase-site.xml.
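After adding the property, the whole cluster does not have to be restarted; HBase ships per-daemon scripts, so restarting only the master (as mentioned in the note above) can be done roughly like this:
[hadoop@hadoop01 hbase-2.2.1]$ bin/hbase-daemon.sh stop master
[hadoop@hadoop01 hbase-2.2.1]$ bin/hbase-daemon.sh start master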
HBase web UI: http://192.168.1.100:16010/master-status
Note: starting with HBase 1.0, the web UI port changed to 16010; 192.168.1.100 is the IP address of my Linux host.
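If only terminal access is available, a quick reachability check of the same URL (an optional aside) is:
[hadoop@hadoop01 ~]$ curl -I http://192.168.1.100:16010/master-status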
Start and test HBase:
[hadoop@hadoop01 hbase-2.2.1]$ jps --check whether the HMaster process has started
11264 HMaster
11573 Jps
8566 DataNode
8426 NameNode
9642 QuorumPeerMain
8795 SecondaryNameNode
9053 ResourceManager
9197 NodeManager
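HRegionServer does not appear in this listing because, per the regionservers file, the region servers run on hadoop02 and hadoop03; they can be checked remotely (assuming jps is on the PATH for non-interactive SSH on those hosts):
[hadoop@hadoop01 hbase-2.2.1]$ ssh hadoop02 jps
[hadoop@hadoop01 hbase-2.2.1]$ ssh hadoop03 jps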
[hadoop@hadoop01 hbase-2.2.1]$ hbase shell --enter the HBase shell
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hadoop/hadoop-3.2.0/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoop/hbase-2.2.1/lib/client-facing-thirdparty/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell
Use "help" to get list of supported commands.
Use "exit" to quit this interactive shell.
For Reference, please visit: http://hbase.apache.org/2.0/book.html#shell
Version 2.2.1, rf93aaf770cce81caacbf22174dfee2860dbb4810, Tue Sep 10 14:28:27 CST 2019
Took 0.0027 seconds
hbase(main):001:0> create 'test1','f11' --create the test1 table
Created table test1
Took 2.8262 seconds
=> Hbase::Table - test1
hbase(main):002:0> list --list tables
TABLE
test1
1 row(s)
Took 0.0274 seconds
=> ["test1"]
hbase(main):003:0> put 'test1','id001','f11:uid','001' --insert data
Took 0.2699 seconds
hbase(main):004:0> scan 'test1' --view the data
ROW COLUMN+CELL
id001 column=f11:uid, timestamp=1569686397247, value=001
1 row(s)
Took 0.0697 seconds
hbase(main):005:0> describe 'test1' --view the table structure
Table test1 is ENABLED
test1
COLUMN FAMILIES DESCRIPTION
{NAME => 'f11', VERSIONS => '1', EVICT_BLOCKS_ON_CLOSE => 'false', NEW_VERSION_BEHAVIOR => 'false', KEEP_DELETED_CELLS => 'FALSE', CACHE_DATA_ON_WRITE => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'ROW', CACHE_INDEX_ON_WRITE => 'false', IN_MEMORY => 'false', CACHE_BLOOMS_ON_WRITE => 'false', PREFETCH_BLOCKS_ON_OPEN => 'false', COMPRESSION => 'NONE', BLOCKCACHE => 'true', BLOCKSIZE => '65536'}
1 row(s)
QUOTAS
0 row(s)
Took 0.1465 seconds
hbase(main):006:0> drop 'test1' --drop the table
ERROR: Table test1 is enabled. Disable it first.
For usage try 'help "drop"'
Took 0.0235 seconds
Note: the drop failed here. As the message indicates, the table is still enabled and cannot be dropped; test1 must be disabled first before the drop can succeed.
hbase(main):008:0> disable 'test1'
Took 1.3230 seconds
hbase(main):009:0> drop 'test1'
Took 0.9026 seconds
hbase(main):010:0> list
TABLE
0 row(s)
Took 0.0111 seconds
=> []
hbase(main):011:0> exit --exit the shell
[hadoop@hadoop01 hbase-2.2.1]$
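The same smoke test can also be run non-interactively by feeding the commands to hbase shell on stdin — a minimal sketch using the same table and column family names as above:
[hadoop@hadoop01 hbase-2.2.1]$ hbase shell <<'EOF'
create 'test1','f11'
put 'test1','id001','f11:uid','001'
scan 'test1'
disable 'test1'
drop 'test1'
EOF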