Hadoop+Hive+HBase+Kylin Cluster Maintenance Manual (May 25, 2018)

If you install the whole Hadoop stack by hand, day-to-day operation involves a lot of commands and configuration edits, so I am recording them here for quick reference. As an aside: after later building another cluster with Cloudera Manager, I found it genuinely fast and easy to use, but you lose sight of the underlying details and principles, so it cuts both ways.

For developers, building a vanilla Hadoop environment yourself is well worth it for learning and mastery; Cloudera is better suited to production use. (Original source: shuquaner.com.)

Cluster Startup and Shutdown

1. Load the environment variables

If the environment variables are not loaded automatically, source them by hand first; otherwise the commands below will not be found:

[bdp@BI01 opt]$ source ~/.bash_profile

2. Environment variable configuration

My environment variables look like this:

# Hadoop env
export HADOOP_HOME="/home/bdp/opt/hadoop-2.7.4"
export HIVE_HOME="/home/bdp/opt/hive-1.2.2"
export HBASE_HOME="/home/bdp/opt/hbase-1.2.6"
export KYLIN_HOME="/home/bdp/opt/kylin-2.2.0"
export PATH="$HADOOP_HOME/bin:$HIVE_HOME/bin:$HBASE_HOME/bin:$PATH"
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
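After sourcing, a quick way to confirm that the variables actually loaded is a small helper like the following sketch (the helper name is my own; the variable names are the ones exported above):

```shell
#!/bin/sh
# Report which of the named environment variables are empty or unset.
check_vars() {
    missing=""
    for v in "$@"; do
        eval val=\"\$$v\"
        [ -n "$val" ] || missing="$missing $v"
    done
    echo "missing:$missing"
}

# e.g. check_vars HADOOP_HOME HIVE_HOME HBASE_HOME KYLIN_HOME
```

An empty list after `missing:` means everything is set.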

3. Stopping the cluster

3.1. Working directory

My directory layout looks like this:

[bdp@BI01 opt]$ pwd
/home/bdp/opt
[bdp@BI01 opt]$ ls
Backup  hadoop-2.7.4  hbase-1.2.6  hive-1.2.2  kylin-2.2.0   zookeeper-3.4.10

Note: all of the commands below are run from /home/bdp/opt.

3.2. Stop Kylin and HBase

./kylin-2.2.0/bin/kylin.sh stop
./hbase-1.2.6/bin/stop-hbase.sh 

3.3. Stop Hive

ps -eaf|grep -i hive

kill -15 25402
kill -15 24916

After the ps command you will normally find two Hive-related processes, HiveMetaStore and HiveServer2; the two kill -15 commands above send them SIGTERM (the PIDs are from my session and will differ on yours):

bdp      24916     1  0 May24 ?        00:00:47 /usr/java/jdk1.8.0_144/bin/java -Xmx40000m -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/home/bdp/opt/hadoop-2.7.4/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/home/bdp/opt/hadoop-2.7.4 -Dhadoop.id.str=bdp -Dhadoop.root.logger=INFO,console -Djava.library.path=/home/bdp/opt/hadoop-2.7.4/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Xms1G -Xmx40G -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.util.RunJar /home/bdp/opt/hive-1.2.2/lib/hive-service-1.2.2.jar org.apache.hadoop.hive.metastore.HiveMetaStore
bdp      25402     1  0 May24 ?        00:00:52 /usr/java/jdk1.8.0_144/bin/java -Xmx40000m -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/home/bdp/opt/hadoop-2.7.4/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/home/bdp/opt/hadoop-2.7.4 -Dhadoop.id.str=bdp -Dhadoop.root.logger=INFO,console -Djava.library.path=/home/bdp/opt/hadoop-2.7.4/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Xms1G -Xmx40G -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.util.RunJar /home/bdp/opt/hive-1.2.2/lib/hive-service-1.2.2.jar org.apache.hive.service.server.HiveServer2
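Instead of copying PIDs by hand, the same shutdown can be scripted by matching the Hive main classes on the process command line. A sketch (the helper name is my own; the class names are the ones visible in the ps output above):

```shell
#!/bin/sh
# Send SIGTERM to every process whose full command line matches the
# given pattern; print how many processes were signalled.
stop_by_pattern() {
    n=0
    for p in $(pgrep -f "$1"); do
        kill -15 "$p" 2>/dev/null && n=$((n + 1))
    done
    echo "$n"
}

# e.g.
# stop_by_pattern org.apache.hive.service.server.HiveServer2
# stop_by_pattern org.apache.hadoop.hive.metastore.HiveMetaStore
```

Stop HiveServer2 before the metastore, since it depends on it.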

3.4. Stop Hadoop and the history server

./hadoop-2.7.4/sbin/stop-all.sh 
./hadoop-2.7.4/sbin/mr-jobhistory-daemon.sh stop historyserver

3.5. Stop ZooKeeper individually on each node

./zookeeper-3.4.10/bin/zkServer.sh stop

4. Starting the cluster

4.1. Start ZooKeeper on each node

[bdp@BI01 opt]$ ./zookeeper-3.4.10/bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /home/bdp/opt/zookeeper-3.4.10/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[bdp@BI01 opt]$ jps
27134 Jps
27087 QuorumPeerMain

The jps command lists the running Java processes. If jps is not recognized, the JDK's bin directory is missing from your PATH.
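In that case, adding the JDK's bin directory to PATH fixes it. For example (the JDK path below is the one that appears in the ps output elsewhere in this manual):

```shell
# Append to ~/.bash_profile, then re-source it.
export JAVA_HOME=/usr/java/jdk1.8.0_144
export PATH="$JAVA_HOME/bin:$PATH"
```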

4.2. Start Hadoop and the history server

On the master node, start Hadoop:

[bdp@BI01 opt]$ ./hadoop-2.7.4/sbin/start-all.sh
[bdp@BI01 opt]$ jps
27698 NameNode
27940 SecondaryNameNode
28165 ResourceManager
27468 QuorumPeerMain
28445 Jps

On the data nodes, check that the expected processes have started:

[bdp@BI02 opt]$ jps
30278 DataNode
30589 Jps
30157 QuorumPeerMain
30415 NodeManager

Still on the master node, start the history server:

[bdp@BI01 opt]$ ./hadoop-2.7.4/sbin/mr-jobhistory-daemon.sh start historyserver
starting historyserver, logging to /home/bdp/opt/hadoop-2.7.4/logs/mapred-bdp-historyserver-BI01.out
[bdp@BI01 opt]$ jps
30146 Jps
30086 JobHistoryServer
29560 SecondaryNameNode
29320 NameNode
29784 ResourceManager
27468 QuorumPeerMain

4.3. Start Hive

Hive depends on a relational database, so first check that MySQL is running and that the hive metastore database and its tables exist.

[bdp@BI01 opt]$ mysql -uroot -p
Enter password: 
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 44441
Server version: 5.1.73 Source distribution

Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| hivemeta           |
| mysql              |
| test               |
+--------------------+
4 rows in set (0.00 sec)

Start the Hive metastore service:

[bdp@BI01 opt]$ nohup ./hive-1.2.2/bin/hive --service metastore > ./hive-1.2.2/logs/metastore.log 2>&1 &
[1] 30270
[bdp@BI01 opt]$ 
[bdp@BI01 opt]$ ps -eaf|grep hive
bdp      30270 26240 99 12:29 pts/3    00:00:10 /usr/java/jdk1.8.0_144/bin/java -Xmx40000m -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/home/bdp/opt/hadoop-2.7.4/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/home/bdp/opt/hadoop-2.7.4 -Dhadoop.id.str=bdp -Dhadoop.root.logger=INFO,console -Djava.library.path=/home/bdp/opt/hadoop-2.7.4/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Xms1G -Xmx40G -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.util.RunJar /home/bdp/opt/hive-1.2.2/lib/hive-service-1.2.2.jar org.apache.hadoop.hive.metastore.HiveMetaStore
bdp      30521 26240  0 12:30 pts/3    00:00:00 grep hive

Start the HiveServer2 service:

[bdp@BI01 opt]$ nohup ./hive-1.2.2/bin/hive --service hiveserver2 2>&1 &
[2] 30527
[bdp@BI01 opt]$ nohup: ignoring input and appending output to `nohup.out'

[bdp@BI01 opt]$ ps -eaf|grep hive
bdp      30270 26240  9 12:29 pts/3    00:00:10 /usr/java/jdk1.8.0_144/bin/java -Xmx40000m -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/home/bdp/opt/hadoop-2.7.4/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/home/bdp/opt/hadoop-2.7.4 -Dhadoop.id.str=bdp -Dhadoop.root.logger=INFO,console -Djava.library.path=/home/bdp/opt/hadoop-2.7.4/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Xms1G -Xmx40G -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.util.RunJar /home/bdp/opt/hive-1.2.2/lib/hive-service-1.2.2.jar org.apache.hadoop.hive.metastore.HiveMetaStore
bdp      30527 26240 99 12:31 pts/3    00:00:13 /usr/java/jdk1.8.0_144/bin/java -Xmx40000m -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/home/bdp/opt/hadoop-2.7.4/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/home/bdp/opt/hadoop-2.7.4 -Dhadoop.id.str=bdp -Dhadoop.root.logger=INFO,console -Djava.library.path=/home/bdp/opt/hadoop-2.7.4/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Xms1G -Xmx40G -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.util.RunJar /home/bdp/opt/hive-1.2.2/lib/hive-service-1.2.2.jar org.apache.hive.service.server.HiveServer2

4.4. Start HBase

Note: if the clock skew between nodes exceeds 30s, HBase may fail to start; re-sync the system clocks first.
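The 30s threshold corresponds to HBase's hbase.master.maxclockskew setting (30000 ms by default). As a rough sketch, the skew between two nodes can be checked by comparing `date +%s` on each and taking the absolute difference (the helper name is my own):

```shell
#!/bin/sh
# Absolute difference in seconds between two epoch timestamps,
# e.g. the outputs of `date +%s` taken on two nodes.
skew() {
    d=$(( $1 - $2 ))
    [ "$d" -ge 0 ] || d=$(( -d ))
    echo "$d"
}

# e.g. skew "$(ssh BI01 date +%s)" "$(ssh BI02 date +%s)"
```

Anything approaching 30 means the clocks need re-syncing before HBase will come up cleanly.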

[bdp@BI01 opt]$ ./hbase-1.2.6/bin/start-hbase.sh 
[bdp@BI01 opt]$ jps
30086 JobHistoryServer
29560 SecondaryNameNode
29320 NameNode
29784 ResourceManager
32153 Jps
27468 QuorumPeerMain
31917 HMaster
30270 RunJar
30527 RunJar

On the data nodes, check that the region server has started:

[root@BI03 ~] jps          
10356 HRegionServer
10580 Jps
8054 QuorumPeerMain
8632 DataNode
8767 NodeManager

If you have several nodes, check every one of them. With Cloudera Manager you can see all of this in the web UI, which is far more convenient and intuitive.
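Checking many nodes can also be scripted. A hedged sketch: capture the jps output per node (e.g. over ssh) and report which expected daemons are absent (the helper name, hostnames, and daemon lists are examples):

```shell
#!/bin/sh
# Given a captured jps listing, print the expected daemon names
# that do not appear in it.
missing_daemons() {
    out="$1"; shift
    missing=""
    for d in "$@"; do
        echo "$out" | grep -qw "$d" || missing="$missing $d"
    done
    echo "missing:$missing"
}

# e.g., for each worker node:
# missing_daemons "$(ssh BI02 jps)" DataNode NodeManager HRegionServer QuorumPeerMain
```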

4.5. Start Kylin

[bdp@BI01 opt]$ ./kylin-2.2.0/bin/kylin.sh start

Once everything is up, the process lists on the master node and data nodes look like this:

[bdp@BI01 opt]$ jps
20864 HMaster          --HBase
18504 NameNode          --Hadoop
18969 ResourceManager    --YARN
18745 SecondaryNameNode  --Hadoop
31835 RunJar            --Kylin
25084 RunJar        --Hive
32492 Jps
29949 QuorumPeerMain    --Zookeeper
4677 JobHistoryServer   --History Server
[bdp@BI02 ~]$ jps
14641 QuorumPeerMain    --Zookeeper
18979 HRegionServer   --HBase
18584 DataNode        --HDFS
18719 NodeManager         --YARN
19375 Jps

Maintenance

1. Web UIs

http://bi01:50070/dfshealth.html
Component: HDFS
Node: NameNode
Default port: 50070
Config key: dfs.namenode.http-address
Purpose: HTTP service port; log in to this web console to check the state of the HDFS cluster.

http://bi01:50090/status.html
Configured in hdfs-site.xml as:

<property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>BI01:50090</value>
</property>

http://bi01:8088/cluster
Configured in yarn-site.xml as:

<property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>master:8088</value>
</property>

Basic operations:

1. ZooKeeper

1.1. At startup every node tries to connect to the others, so the nodes started first will initially fail to reach the rest and log errors; until the whole ensemble is up, this is normal.
1.2. zkServer.sh not only starts ZooKeeper but can also report its status when given the status argument, e.g.: ./zkServer.sh status
1.3. Once the ensemble is up, exactly one node will be the leader and the others followers.
1.4. From ZooKeeper's bin directory, run zkCli.sh to enter the command-line client:
./zkCli.sh -server 10.12.154.78:2181
The -server option (a single dash) specifies the ZooKeeper node to connect to, which may be the leader or a follower; 10.12.154.78 is its IP or hostname and 2181 is the client service port. Once inside, type help and press Enter to list the supported commands:

[bdp@BI01 opt]$ ./zookeeper-3.4.10/bin/zkCli.sh
Connecting to localhost:2181
Welcome to ZooKeeper!
JLine support is enabled

WATCHER::

WatchedEvent state:SyncConnected type:None path:null          
[zk: localhost:2181(CONNECTED) 0] help
ZooKeeper -server host:port cmd args
        stat path [watch]
        set path data [version]
        ls path [watch]
        delquota [-n|-b] path
        ls2 path [watch]
        setAcl path acl
        setquota -n|-b val path
        history
        redo cmdno
        printwatches on|off
        delete path [version]
        sync path
        listquota path
        rmr path
        get path [watch]
        create [-s] [-e] path data acl
        addauth scheme auth
        quit
        getAcl path
        close
        connect host:port
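For example, a short session that creates, reads, updates, and deletes a scratch znode, following the create/get/set/delete syntax listed above (/demo is an invented name; outputs omitted):

```
[zk: localhost:2181(CONNECTED) 0] create /demo hello
[zk: localhost:2181(CONNECTED) 1] get /demo
[zk: localhost:2181(CONNECTED) 2] set /demo world
[zk: localhost:2181(CONNECTED) 3] delete /demo
```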

A detailed reference for the parameters in ZooKeeper's zoo.cfg:
https://www.cnblogs.com/xiohao/p/5541093.html

2. HBase

Web management URL:
http://172.20.20.40:16010

Startup order for Hadoop+ZooKeeper+HBase in HA mode:
http://blog.csdn.net/u011414200/article/details/50437356

3. Hive

Hive log: by default, Hive writes its logs under /tmp/<current user>.
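So, to follow the logs of the current user's Hive services (hive.log is the default file name under that directory in the stock log4j configuration):

```shell
#!/bin/sh
# Default Hive log location: /tmp/<current user>/hive.log
HIVE_LOG="/tmp/$(id -un)/hive.log"
echo "$HIVE_LOG"
# tail -f "$HIVE_LOG"   # follow it live
```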
Data types:
http://blog.csdn.net/xiaoqi0531/article/details/54667393
Mapping HBase data into Hive:
http://blog.csdn.net/jameshadoop/article/details/42162669

4. Kylin

After Kylin has started, visit http://hostname:7070/kylin. The default username/password is ADMIN/KYLIN.

5. Hadoop

Change the permissions of a directory on HDFS:
[bdp@BI01 hadoop]$ hdfs dfs -chmod 777 /mr-history
This opens up the history server's directory; without it, applications fail to run.

