HBase 2.0 Standalone Setup

1. Install the JDK

Download the JDK RPM package and install it:

rpm -ivh xx.rpm

Check/set JAVA_HOME:

export JAVA_HOME=/usr/java/jdk1.8.0_171-amd64/
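Since the RPM installs under /usr/java/, JAVA_HOME is just the path of the java binary with the trailing /bin/java stripped. A minimal sketch (the derive_java_home helper is hypothetical, pure string handling, so it runs even on a host without a JDK):

```shell
#!/bin/sh
# Hypothetical helper: derive a JAVA_HOME value from the resolved path of
# the java binary (e.g. the output of `readlink -f "$(which java)"`).
derive_java_home() {
    java_bin="$1"                          # e.g. /usr/java/jdk1.8.0_171-amd64/bin/java
    printf '%s\n' "${java_bin%/bin/java}"  # strip the trailing /bin/java
}

derive_java_home /usr/java/jdk1.8.0_171-amd64/bin/java
# → /usr/java/jdk1.8.0_171-amd64
```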


2. Install Hadoop

Hadoop installation reference: https://www.jianshu.com/p/68087004baa0

[root@localhost conf]# wget http://mirrors.hust.edu.cn/apache/hadoop/common/hadoop-3.0.2/hadoop-3.0.2.tar.gz
[root@localhost conf]# tar -zxvf hadoop-3.0.2.tar.gz



Configure the Hadoop environment variables:

vim /etc/profile
 
export HADOOP_HOME=/home/dev/hbase/hadoop-3.0.2
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin

 

  • Apply the environment variables:
source /etc/profile
  • Edit hadoop-env.sh:
cd $HADOOP_HOME/etc/hadoop
vim hadoop-env.sh
  • Change export JAVA_HOME=${JAVA_HOME} to the actual JDK path:
export JAVA_HOME=/usr/java/jdk1.8.0_171-amd64/

 

If the logs show "Failed to load native-hadoop with error:", the fix is to add the following to hadoop-env.sh:

export HADOOP_OPTS="-Djava.library.path=${HADOOP_HOME}/lib/native"

  • Edit core-site.xml; inside the configuration element add:

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://192.168.85.173:9000</value>
  </property>
</configuration>
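The same file can be written non-interactively with a heredoc. A sketch, using a scratch directory by default (HADOOP_CONF_DIR and the /tmp path are assumptions for illustration; on the real host point it at $HADOOP_HOME/etc/hadoop):

```shell
#!/bin/sh
# Sketch: generate core-site.xml in one step instead of editing it by hand.
# HADOOP_CONF_DIR defaults to a scratch path here.
HADOOP_CONF_DIR="${HADOOP_CONF_DIR:-/tmp/hadoop-conf}"
NAMENODE_URI="hdfs://192.168.85.173:9000"

mkdir -p "$HADOOP_CONF_DIR"
cat > "$HADOOP_CONF_DIR/core-site.xml" <<EOF
<?xml version="1.0"?>
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>$NAMENODE_URI</value>
  </property>
</configuration>
EOF

# verify the address made it into the file
grep -q "$NAMENODE_URI" "$HADOOP_CONF_DIR/core-site.xml" && echo "core-site.xml written"
```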


Create the Hadoop storage directories:

[root@localhost hadoop-3.0.2]# mkdir -p /data/hadoop/data/namenode

[root@localhost hadoop-3.0.2]# mkdir -p /data/hadoop/data/datanode

 

 

vi hdfs-site.xml

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.name.dir</name>
    <value>file:///data/hadoop/data/namenode</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>file:///data/hadoop/data/datanode</value>
  </property>
</configuration>
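Before formatting the NameNode it is worth checking that the directories behind dfs.name.dir and dfs.data.dir exist and are writable. A small sketch (DATA_ROOT defaults to a scratch path so it can be tried anywhere; the setup above uses /data/hadoop/data):

```shell
#!/bin/sh
# Sketch: sanity-check the NameNode/DataNode storage directories before
# starting HDFS. DATA_ROOT is a scratch stand-in for /data/hadoop/data.
DATA_ROOT="${DATA_ROOT:-/tmp/hadoop-data}"

check_dir() {
    d="$1"
    mkdir -p "$d" || return 1                              # create if missing
    [ -w "$d" ] || { echo "not writable: $d" >&2; return 1; }
    echo "ok: $d"
}

check_dir "$DATA_ROOT/namenode"
check_dir "$DATA_ROOT/datanode"
```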




 

 

  • Create mapred-site.xml (Hadoop 3.x usually ships this file directly; copy it from the template only if it is missing):
cp mapred-site.xml.template mapred-site.xml
  • Edit mapred-site.xml; inside the configuration element add:

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>

  • Edit yarn-site.xml; inside the configuration element add:

<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>

Hadoop 3.0.2

If the startup script reports the following errors:

ERROR: Attempting to launch hdfs namenode as root
ERROR: but there is no HDFS_NAMENODE_USER defined. Aborting launch.
Starting datanodes
ERROR: Attempting to launch hdfs datanode as root
ERROR: but there is no HDFS_DATANODE_USER defined. Aborting launch.
Starting secondary namenodes [localhost.localdomain]
ERROR: Attempting to launch hdfs secondarynamenode as root
ERROR: but there is no HDFS_SECONDARYNAMENODE_USER defined. Aborting launch.

 

Solution

These errors are caused by missing user definitions, so edit both the start and stop scripts:

$ vim sbin/start-dfs.sh
$ vim sbin/stop-dfs.sh

Add the following near the top:

 

HDFS_DATANODE_USER=root
HADOOP_SECURE_DN_USER=hdfs
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root
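Patching the scripts by hand works, but re-running the setup would duplicate the lines. A sketch of doing it idempotently with sed, shown against a scratch stand-in file rather than the real sbin/start-dfs.sh:

```shell
#!/bin/sh
# Sketch: insert the HDFS_*_USER definitions right after the shebang of
# start-dfs.sh, skipping the insert if it was already done.
script=/tmp/start-dfs.sh
printf '#!/usr/bin/env bash\necho starting dfs\n' > "$script"   # stand-in script

add_user_defs() {
    f="$1"
    grep -q '^HDFS_NAMENODE_USER=' "$f" && return 0  # already patched, no-op
    # append the four definitions after line 1 (the shebang)
    sed -i '1a\
HDFS_DATANODE_USER=root\
HADOOP_SECURE_DN_USER=hdfs\
HDFS_NAMENODE_USER=root\
HDFS_SECONDARYNAMENODE_USER=root' "$f"
}

add_user_defs "$script"
add_user_defs "$script"   # second run changes nothing
grep -c '^HDFS_NAMENODE_USER=' "$script"   # → 1
```

The same pattern applies to stop-dfs.sh and to the YARN start/stop scripts.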

 

 

 

 

 

3) Start the ResourceManager and NodeManager daemons:

# cd /home/dev/hbase/hadoop-3.0.2
# sbin/start-yarn.sh

If startup reports the following error:

Starting resourcemanager
ERROR: Attempting to launch yarn resourcemanager as root
ERROR: but there is no YARN_RESOURCEMANAGER_USER defined. Aborting launch.

 

Solution

This, too, is caused by missing user definitions, so edit both the start and stop scripts:

$ vim sbin/start-yarn.sh 

$ vim sbin/stop-yarn.sh 

Add at the top:

 

YARN_RESOURCEMANAGER_USER=root
HADOOP_SECURE_DN_USER=yarn
YARN_NODEMANAGER_USER=root

 

3. Install HBase

HBase download and configuration reference: https://www.cnblogs.com/gispathfinder/p/9084211.html

Open http://www.apache.org/dyn/closer.cgi/hbase/ and pick a mirror, e.g. http://mirrors.hust.edu.cn/apache/hbase/2.0.0/hbase-2.0.0-bin.tar.gz

wget http://mirrors.hust.edu.cn/apache/hbase/2.0.0/hbase-2.0.0-bin.tar.gz

tar -zxvf hbase-2.0.0-bin.tar.gz

 

Edit the configuration files:

vi conf/hbase-env.sh

export JAVA_HOME=/usr/java/jdk1.8.0_171-amd64/
export HBASE_MANAGES_ZK=false

(HBASE_MANAGES_ZK=false makes HBase use the external ZooKeeper installed in section 4 instead of its bundled one.)

 

Create the temporary storage directories:

mkdir -p /data/hbase

[root@localhost conf]# mkdir -p /data/hbase/data/tmp

[root@localhost conf]# mkdir -p /data/hbase/zookeeper/tmp

 

 

Edit hbase-site.xml:

<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://192.168.85.173:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>192.168.85.173</value>
  </property>
  <property>
    <name>hbase.temp.dir</name>
    <value>/data/hbase/data/tmp</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/data/hbase/zookeeper/tmp</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
  <property>
    <name>hbase.master.info.port</name>
    <value>60010</value>
  </property>
</configuration>

 

java.lang.IllegalStateException: The procedure WAL relies on the ability to hsync for proper operation during component failures, but the underlying filesystem does not support doing so. Please check the config value of 'hbase.procedure.store.wal.use.hsync' to set the desired level of robustness and ensure the config value of 'hbase.wal.dir' points to a FileSystem mount that can provide it.

If HBase fails to start with the hsync error above, add this property to hbase-site.xml:

<property>
  <name>hbase.unsafe.stream.capability.enforce</name>
  <value>false</value>
</property>
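Rather than editing by hand, the property can be spliced in just before the closing tag with sed. A sketch against a scratch copy of hbase-site.xml (the conf path is a stand-in; point it at the real HBase conf directory on the actual host):

```shell
#!/bin/sh
# Sketch: append hbase.unsafe.stream.capability.enforce=false to an
# existing hbase-site.xml just before </configuration>.
conf=/tmp/hbase-conf
mkdir -p "$conf"
cat > "$conf/hbase-site.xml" <<'EOF'
<configuration>
</configuration>
EOF

# replace the closing tag with the new property plus the closing tag
sed -i 's#</configuration>#  <property>\n    <name>hbase.unsafe.stream.capability.enforce</name>\n    <value>false</value>\n  </property>\n</configuration>#' "$conf/hbase-site.xml"

grep -q 'hbase.unsafe.stream.capability.enforce' "$conf/hbase-site.xml" && echo patched
```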

 

Edit regionservers:

[root@centoshadoop conf]# vi /opt/hadoop/hbase-2.0.0/conf/regionservers

192.168.85.173

4. Install ZooKeeper

[root@localhost zk]# mkdir -p /data/zookeeper-3.4.12/db

[root@localhost zk]# mkdir -p /data/zookeeper-3.4.12/log

[root@localhost zk]# ls

zookeeper-3.4.12  zookeeper-3.4.12.tar.gz

[root@localhost zk]# cd zookeeper-3.4.12

[root@localhost zookeeper-3.4.12]# cd conf/

[root@localhost conf]# cp zoo_sample.cfg zoo.cfg

[root@localhost conf]# vi zoo.cfg

tickTime=2000
initLimit=10
syncLimit=5
clientPort=2181
dataDir=/data/zookeeper-3.4.12/db
dataLogDir=/data/zookeeper-3.4.12/log
# purge old snapshots and transaction logs automatically
autopurge.snapRetainCount=20
autopurge.purgeInterval=48
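The whole zoo.cfg can be written in one step with a heredoc. A sketch using a scratch ZK_HOME (an assumption for illustration; on the real host it is the ZooKeeper install directory, and the dataDir/dataLogDir values follow the mkdir commands at the start of this section):

```shell
#!/bin/sh
# Sketch: generate the zoo.cfg described above. ZK_HOME defaults to a
# scratch path here.
ZK_HOME="${ZK_HOME:-/tmp/zookeeper-3.4.12}"
mkdir -p "$ZK_HOME/conf"
cat > "$ZK_HOME/conf/zoo.cfg" <<'EOF'
tickTime=2000
initLimit=10
syncLimit=5
clientPort=2181
dataDir=/data/zookeeper-3.4.12/db
dataLogDir=/data/zookeeper-3.4.12/log
# purge old snapshots and transaction logs automatically
autopurge.snapRetainCount=20
autopurge.purgeInterval=48
EOF

grep -q '^clientPort=2181' "$ZK_HOME/conf/zoo.cfg" && echo "zoo.cfg written"
```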

 

 

 

 

 

 

5. Startup

Start Hadoop:

[root@hbase173 hadoop-3.0.2]# sbin/start-all.sh

 

Start the history server:

[root@hbase173 hadoop-3.0.2]# sbin/mr-jobhistory-daemon.sh start historyserver

 

Start ZooKeeper:

[root@hbase173 zookeeper-3.4.12]# cd /home/dev/hbase/zookeeper-3.4.12/bin

 

[root@hbase173 bin]# ./zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /opt/hadoop/zookeeper-3.4.12/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@centoshadoop bin]#

Start HBase:

[root@hbase173 bin]# cd /home/dev/hbase/hbase-2.0.0/bin
[root@hbase173 bin]# ./start-hbase.sh

 

[root@centoshadoop bin]# jps

 

4097 HRegionServer

10321 HMaster

9412 SecondaryNameNode

10166 QuorumPeerMain

10071 JobHistoryServer

11037 Jps

9246 DataNode

9119 NameNode

9599 ResourceManager

9887 NodeManager

 

[root@centoshadoop bin]#

 

 

 

 

 

6. Using the HBase Tools

Test: view HBase status information in a browser.

http://192.168.85.173:16030/ (the RegionServer info UI; the master UI listens on the hbase.master.info.port configured above, 60010)

7. Troubleshooting

 

java.io.IOException: NameNode is not formatted.

 

Solution:

Stop Hadoop:

[root@localhost hadoop-3.0.2]# sbin/stop-all.sh

Format the NameNode (note: formatting wipes HDFS metadata, so only do this on a fresh install):

[root@localhost hadoop-3.0.2]# hdfs namenode -format

Restart Hadoop:

[root@localhost hadoop-3.0.2]# sbin/start-all.sh

Upgrading from Hadoop 2.7.6 to 3.0.2 requires upgrading the NameNode metadata, otherwise the NameNode will not start. Run:

hdfs namenode -upgrade

DataNode will not start

Delete the DataNode's temporary files and restart. Locate them with:

find / -name tmp

SSH: permission denied (publickey,gssapi-keyex,gssapi-with-mic,password)

This came up while configuring passwordless login: running ssh against the machine itself failed with the error above.

Steps taken before the error (while setting up the cluster):

$ su hadoop
$ cd /home/hadoop

Configure passwordless login:

$ ssh-keygen -t rsa

Press Enter at every prompt to accept the defaults.

ls -a   # a hidden .ssh directory now exists

$ ssh localhost   # succeeds
$ ssh node1      # fails: permission denied (publickey,gssapi-keyex,gssapi-with-mic,password)

Fix

The missing step was copying the generated public key into authorized_keys.

Either copy it:

$ cp id_rsa.pub authorized_keys

or append it with output redirection (safer if authorized_keys already has entries):

$ cat id_rsa.pub >> authorized_keys

After restarting, verify that passwordless login now works.
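The append-and-permission step can be sketched end to end against a scratch directory with a fake key (ssh_dir is a stand-in for ~/.ssh; on a real host use the generated id_rsa.pub, and note sshd also requires strict modes on the directory and file):

```shell
#!/bin/sh
# Sketch: install a public key into authorized_keys with the permissions
# sshd requires. The key below is fake, for illustration only.
ssh_dir=/tmp/demo-ssh
mkdir -p "$ssh_dir"
chmod 700 "$ssh_dir"                                   # sshd ignores keys in lax dirs
echo "ssh-rsa AAAA...fake-key hadoop@node1" > "$ssh_dir/id_rsa.pub"

cat "$ssh_dir/id_rsa.pub" >> "$ssh_dir/authorized_keys"  # append, never overwrite
chmod 600 "$ssh_dir/authorized_keys"                     # must not be group/world writable

ls -l "$ssh_dir/authorized_keys"
```

For remote machines, ssh-copy-id hadoop@node1 performs the append and the permission fixing in one step.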
