HBase 1.2.6 Installation

1. Environment information:

Linux distribution:

[hadoop@master bin]$ cat /etc/redhat-release 
CentOS Linux release 7.1.1503 (Core) 


hosts file:

[root@master ~]# cat /etc/hosts
#127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
#::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

10.0.1.118 master        # master node
10.0.1.227 slave-1       # backup master / worker node
10.0.1.226 slave-2       # worker node

Hadoop version:

[hadoop@master bin]$ hadoop version
Hadoop 2.8.1
Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r 20fe5304904fc2f5a18053c389e43cd26f7a70fe
Compiled by vinodkv on 2017-06-02T06:14Z
Compiled with protoc 2.5.0
From source with checksum 60125541c2b3e266cbf3becc5bda666
This command was run using /home/hadoop/hadoop-2.8.1/share/hadoop/common/hadoop-common-2.8.1.jar

2. Download HBase

URL: http://mirrors.hust.edu.cn/apache/hbase/stable/hbase-1.2.6-bin.tar.gz


2.1 Extract

[hadoop@master ~]$ tar -xvf hbase-1.2.6-bin.tar.gz 

2.2 Configure

[hadoop@master conf]$ cd /home/hadoop/hbase-1.2.6/conf
[hadoop@master conf]$ vi hbase-site.xml

<configuration>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
  <property>
    <!-- hosts running the ZooKeeper ensemble -->
    <name>hbase.zookeeper.quorum</name>
    <value>master,slave-1,slave-2</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/home/hadoop/zookeeper</value>
  </property>
  <property>
    <!-- must match the Hadoop HDFS configuration -->
    <name>hbase.rootdir</name>
    <value>hdfs://master:9000/hbase</value>
  </property>
  <property>
    <!-- run in fully distributed mode -->
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
</configuration>
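Before distributing the file, it is worth checking that hbase-site.xml is well-formed XML, since a stray character will keep HMaster from starting. A minimal sketch: it writes the properties above into a throwaway directory under /tmp and parses the file with Python's stdlib; on the cluster, point CONF at $HBASE_HOME/conf instead.

```shell
# Demo dir; use "$HBASE_HOME/conf" on the real cluster.
CONF=/tmp/hbase-conf-demo
mkdir -p "$CONF"
cat > "$CONF/hbase-site.xml" <<'EOF'
<configuration>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>master,slave-1,slave-2</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/home/hadoop/zookeeper</value>
  </property>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://master:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
</configuration>
EOF
# Any XML parser works for a well-formedness check.
python3 -c "import xml.etree.ElementTree as ET; ET.parse('/tmp/hbase-conf-demo/hbase-site.xml'); print('hbase-site.xml is well-formed')"
```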

[hadoop@master conf]$ vi hbase-env.sh
export JAVA_HOME=/usr/java/jdk1.8.0_131/
# Tell HBase whether it should manage its own instance of ZooKeeper or not.
# export HBASE_MANAGES_ZK=true   # defaults to true (HBase manages the bundled ZooKeeper); set to false to use an external ensemble

[hadoop@master conf]$ vi regionservers   # list the slave nodes
slave-1
slave-2

[hadoop@master conf]$ vi backup-masters  # list the backup master node
slave-1

[root@master ~]# vi /etc/profile    # environment variables

export TZ='Asia/Shanghai'
export JAVA_HOME=/usr/java/jdk1.8.0_131/
export HADOOP_HOME=/home/hadoop/hadoop-2.8.1/
export HIVE_HOME=/home/hadoop/apache-hive-2.1.1
export HBASE_HOME=/home/hadoop/hbase-1.2.6        # new: HBase install directory
export PATH=$PATH:$HADOOP_HOME/bin:$JAVA_HOME/bin:$HIVE_HOME/bin:$HBASE_HOME/bin  # new: add $HBASE_HOME/bin to PATH

[root@master ~]# source /etc/profile   # apply the changes
[root@master ~]# echo $HBASE_HOME
/home/hadoop/hbase-1.2.6
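A quick way to confirm the /etc/profile changes took effect is to loop over the three home variables and check that each points at a real directory. A sketch: the demo substitutes throwaway directories under /tmp so it can run anywhere; on the cluster, drop the three assignments and let the exported values be checked.

```shell
# Throwaway dirs standing in for the real install paths above.
demo=/tmp/env-check-demo
mkdir -p "$demo/jdk" "$demo/hadoop" "$demo/hbase"
JAVA_HOME=$demo/jdk HADOOP_HOME=$demo/hadoop HBASE_HOME=$demo/hbase

for v in JAVA_HOME HADOOP_HOME HBASE_HOME; do
  eval "d=\$$v"                 # indirect lookup of the variable named in $v
  if [ -d "$d" ]; then
    echo "$v OK: $d"
  else
    echo "$v MISSING: $d"
  fi
done
```

With the demo directories in place, each iteration prints an "OK" line.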
[root@master ~]#  


3. Distribute:

[hadoop@master ~]$ scp -r hbase-1.2.6/ hadoop@slave-1:~/  # copy the HBase directory to slave-1
[hadoop@master ~]$ scp -r hbase-1.2.6/ hadoop@slave-2:~/  # copy the HBase directory to slave-2
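The two scp commands can be folded into a loop, which keeps the node list in one place when the cluster grows. In the sketch below, SCP defaults to a dry-run echo so the loop can be exercised off-cluster; setting SCP=scp performs the actual copy.

```shell
# Dry-run by default: prints the scp commands instead of running them.
SCP=${SCP:-echo scp}
for node in slave-1 slave-2; do
  $SCP -r hbase-1.2.6/ "hadoop@$node:~/"
done
# -> scp -r hbase-1.2.6/ hadoop@slave-1:~/
# -> scp -r hbase-1.2.6/ hadoop@slave-2:~/
```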

4. Verify:

[hadoop@master bin]$ ./start-hbase.sh  # start HBase (Hadoop must already be running; the PermSize warnings below can be ignored)
slave-1: starting zookeeper, logging to /home/hadoop/hbase-1.2.6/bin/../logs/hbase-hadoop-zookeeper-slave-1.out
slave-2: starting zookeeper, logging to /home/hadoop/hbase-1.2.6/bin/../logs/hbase-hadoop-zookeeper-slave-2.out
master: starting zookeeper, logging to /home/hadoop/hbase-1.2.6/bin/../logs/hbase-hadoop-zookeeper-master.out
starting master, logging to /home/hadoop/hbase-1.2.6/logs/hbase-hadoop-master-master.out
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
slave-1: starting regionserver, logging to /home/hadoop/hbase-1.2.6/bin/../logs/hbase-hadoop-regionserver-slave-1.out
slave-2: starting regionserver, logging to /home/hadoop/hbase-1.2.6/bin/../logs/hbase-hadoop-regionserver-slave-2.out
slave-1: Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0
slave-1: Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
slave-2: Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0
slave-2: Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
slave-1: starting master, logging to /home/hadoop/hbase-1.2.6/bin/../logs/hbase-hadoop-master-slave-1.out
slave-1: Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0
slave-1: Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
[hadoop@master bin]$ jps
25088 SecondaryNameNode
41751 Jps
24889 NameNode
41561 HQuorumPeer
41627 HMaster          # the HBase master started successfully
25244 ResourceManager
[hadoop@master bin]$
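The jps check can be scripted so startup is verifiable non-interactively. A small sketch, run here against a captured sample of the output above (on the cluster, replace the sample with `jps_out=$(jps)`):

```shell
# Sample of the jps output shown above; replace with: jps_out=$(jps)
jps_out='41627 HMaster
41561 HQuorumPeer'
for proc in HMaster HQuorumPeer; do
  if echo "$jps_out" | grep -qw "$proc"; then
    echo "$proc is running"
  else
    echo "$proc NOT running"
  fi
done
# -> HMaster is running
# -> HQuorumPeer is running
```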

5. Log in to the web UI

http://10.0.1.118:16010/master-status
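The same status page can be probed from the shell before opening a browser. A hedged sketch: the IP and port are the ones from this setup, and `curl -sf` exits non-zero when nothing answers, so the script prints one line either way.

```shell
# Probe the HMaster info server; prints one line either way.
URL='http://10.0.1.118:16010/master-status'
if curl -sf --max-time 5 "$URL" >/dev/null; then
  echo "master web UI is up"
else
  echo "master web UI not reachable yet"
fi
```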


6. Errors

2017-09-23 11:46:14,680 INFO  [main-SendThread(slave-2:2181)] zookeeper.ClientCnxn: Opening socket connection to server slave-2/10.0.1.226:2181. Will not attempt to authenticate using SASL (unknown error)
2017-09-23 11:46:14,804 WARN  [main-SendThread(slave-2:2181)] zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
        at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
        at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
2017-09-23 11:46:14,948 INFO  [main-SendThread(master:2181)] zookeeper.ClientCnxn: Opening socket connection to server master/10.0.1.118:2181. Will not attempt to authenticate using SASL (unknown error)
2017-09-23 11:46:14,949 INFO  [main-SendThread(master:2181)] zookeeper.ClientCnxn: Socket connection established to master/10.0.1.118:2181, initiating session
2017-09-23 11:46:14,992 INFO  [main-SendThread(master:2181)] zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
2017-09-23 11:46:15,920 INFO  [main-SendThread(slave-1:2181)] zookeeper.ClientCnxn: Opening socket connection to server slave-1/10.0.1.227:2181. Will not attempt to authenticate using SASL (unknown error)
2017-09-23 11:46:15,922 WARN  [main-SendThread(slave-1:2181)] zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
        at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
        at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
2017-09-23 11:46:17,195 INFO  [main-SendThread(slave-2:2181)] zookeeper.ClientCnxn: Opening socket connection to server slave-2/10.0.1.226:2181. Will not attempt to authenticate using SASL (unknown error)
2017-09-23 11:46:17,197 WARN  [main-SendThread(slave-2:2181)] zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused

Fix:

1. Check which address the port is listening on:

[hadoop@master logs]$ netstat -an | grep 2181
tcp        0      0 0.0.0.0:2181            0.0.0.0:*               LISTEN

For some reason the port was initially bound to an IPv6 address, so I disabled IPv6 (search results suggest this is a known bug).

2. Delete the stale HBase metadata left in HDFS:

[hadoop@master logs]$ hadoop fs -rm -R -f /hbase

3. Reconfigure and redistribute to the other nodes.

This problem is strange and I could not reproduce it: even after deleting the data, HBase would often still fail on restart with "connection refused" to the master. In that case, restart Hadoop first and then restart HBase. Because of the errors I modified the configuration and reinstalled many times; in the end I always deleted everything, reconfigured from scratch, and redistributed. That resolved the issue, but I never pinned down the root cause, so I am recording the troubleshooting journey here. Some people blamed NTP/clock skew, but that seems unrelated since the clocks were in sync.
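The IPv6 symptom in step 1 can be detected mechanically: a `:::2181` listener means the process bound only the IPv6 wildcard. The heuristic below runs against a captured netstat line (a sample IPv6-only bind here; on the cluster, feed in the real `netstat -an | grep 2181` output). A commonly cited alternative to disabling IPv6 system-wide is starting the JVM with `-Djava.net.preferIPv4Stack=true`, for example via HBASE_OPTS in hbase-env.sh.

```shell
# Sample line; on the cluster: line=$(netstat -an | grep ':2181' | grep LISTEN)
line=':::2181                 :::*                    LISTEN'
case "$line" in
  *':::2181'*)      echo 'listener bound to IPv6 only: IPv4 clients will see connection refused' ;;
  *'0.0.0.0:2181'*) echo 'listener bound to the IPv4 wildcard: looks fine' ;;
  *)                echo 'no listener on 2181: ZooKeeper is not running' ;;
esac
# -> listener bound to IPv6 only: IPv4 clients will see connection refused
```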






