Setting Up a Hadoop-HA + ZooKeeper + Yarn + Hive Environment

Prerequisite: a working Hadoop-HA + ZooKeeper + Yarn environment has already been set up.

Cluster layout (node01, node02, node03, node04):
NameNode:                     NameNode01, NameNode02, NameNode03
DataNode:                     DataNode01, DataNode02, DataNode03
JournalNode:                  JournalNode01, JournalNode02, JournalNode03
ZooKeeper:                    ZooKeeper01, ZooKeeper02, ZooKeeper03
ZooKeeperFailoverController:  ZooKeeperFailoverController01, ZooKeeperFailoverController02, ZooKeeperFailoverController03
ResourceManager:              ResourceManager01, ResourceManager02
NodeManager:                  NodeManager01, NodeManager02, NodeManager03
Hive/MySQL:                   MySQL Server (node01), MetaStore Server (node03), Hive CLI (node04)
  1. Configure the MySQL service on node01

Install MySQL:
yum install mysql-server -y
Start the MySQL service:
service mysqld start
Log in to MySQL (a fresh install has no root password, so running mysql with no arguments is enough):
mysql
Grant remote access, drop the other accounts, and flush the privileges:
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY '123123' WITH GRANT OPTION;
DELETE FROM mysql.user WHERE host != '%';
flush privileges;
Log in again to verify the new password:
mysql -u root -p
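Optionally, check from node03 that the remote-access grant works before moving on. This is only a quick sketch; it assumes the mysql client package is installed on node03 and reuses the password set above:
# on node03 (install the client first if it is missing: yum install mysql -y)
mysql -h node01 -u root -p123123 -e "SELECT 1;"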

  2. Install Hive on node03 and node04

tar -zxvf apache-hive-2.3.4-bin.tar.gz -C /opt/hive/
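The tar command expects the target directory to exist. If /opt/hive is not there yet, create it first on both nodes (the path simply mirrors the -C argument above):
# on node03 and node04, before extracting
mkdir -p /opt/hive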

  3. Configure Hive on node03 and node04

On node03, edit /opt/hive/apache-hive-2.3.4-bin/conf/hive-site.xml:
cp /opt/hive/apache-hive-2.3.4-bin/conf/hive-default.xml.template /opt/hive/apache-hive-2.3.4-bin/conf/hive-site.xml
vim /opt/hive/apache-hive-2.3.4-bin/conf/hive-site.xml
Set the following properties:


    
<property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/hive</value>
</property>
<property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://node01:3306/hive?createDatabaseIfNotExist=true</value>
</property>
<property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
</property>
<property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
</property>
<property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>123123</value>
</property>
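hive.metastore.warehouse.dir points at a path on HDFS. Hive normally creates it on first use, but you can pre-create it and open group write access; a small optional sketch, assuming HDFS is already up:
# on any node with an HDFS client
hdfs dfs -mkdir -p /hive
hdfs dfs -chmod g+w /hive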
  

On node04, edit /opt/hive/apache-hive-2.3.4-bin/conf/hive-site.xml:
cp /opt/hive/apache-hive-2.3.4-bin/conf/hive-default.xml.template /opt/hive/apache-hive-2.3.4-bin/conf/hive-site.xml
vim /opt/hive/apache-hive-2.3.4-bin/conf/hive-site.xml
Set the following properties:


    
<property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/hive</value>
</property>
<property>
    <name>hive.metastore.uris</name>
    <value>thrift://node03:9083</value>
</property>

  4. Add the MySQL JDBC driver on node03

mv mysql-connector-java-5.1.32-bin.jar /opt/hive/apache-hive-2.3.4-bin/lib/

  5. Configure environment variables on node03 and node04

On node03 and node04, edit /etc/profile:
vim /etc/profile
Add:

export HIVE_HOME=/opt/hive/apache-hive-2.3.4-bin
export PATH=$PATH:$HIVE_HOME/bin

Then reload it on node03 and node04:
. /etc/profile
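A quick sanity check that the variables took effect (run on node03 and node04):
echo $HIVE_HOME
which hive
hive --version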

  6. Initialize the metastore database

On node03, run:
schematool -dbType mysql -initSchema
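To confirm the schema really landed in MySQL, you can either ask schematool for the schema version or list the metastore tables directly; a sketch reusing the password from step 1:
# on node03
schematool -dbType mysql -info
# or inspect the backing database from node01
mysql -u root -p123123 -e "USE hive; SHOW TABLES;"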

  7. Start the Hive metastore server

On node03, run:
hive --service metastore
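This command runs the metastore in the foreground and holds the terminal. If you prefer it in the background, a common pattern is the following (the log path is only an example):
# on node03
nohup hive --service metastore > /tmp/hive-metastore.log 2>&1 &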

  8. Start the Hive client

On node04, run:
hive
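A small smoke test from node04 confirms the client can reach the metastore on node03 and that new tables land under the /hive warehouse directory (the table name here is arbitrary):
# on node04
hive -e "CREATE TABLE IF NOT EXISTS test01 (id INT, name STRING); SHOW TABLES;"
# the table directory should now appear under the warehouse path
hdfs dfs -ls /hive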

  9. Configure Hadoop on node01, node02, node03, and node04

On node01, node02, node03, and node04, edit /opt/hadoop/hadoop-3.1.1/etc/hadoop/core-site.xml:
vim /opt/hadoop/hadoop-3.1.1/etc/hadoop/core-site.xml
Add the following properties inside the <configuration> element:


<property>
    <name>hadoop.proxyuser.root.groups</name>
    <value>*</value>
</property>
<property>
    <name>hadoop.proxyuser.root.hosts</name>
    <value>*</value>
</property>
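Rather than editing core-site.xml by hand on every node, you can change it once (here on node01) and copy it out, assuming passwordless ssh between the nodes is already configured:
# on node01, after editing core-site.xml
for n in node02 node03 node04; do
  scp /opt/hadoop/hadoop-3.1.1/etc/hadoop/core-site.xml $n:/opt/hadoop/hadoop-3.1.1/etc/hadoop/
done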

  10. Reload the Hadoop configuration

On node01, node02, and node03, run the following, adjusting the host in the -fs URI so that each NameNode is refreshed (the proxy-user settings can be reloaded without restarting HDFS):
hdfs dfsadmin -fs hdfs://node01:8020 -refreshSuperUserGroupsConfiguration

  11. Start HiveServer2

On node03, run:
hiveserver2
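Like the metastore, hiveserver2 runs in the foreground. To keep it running in the background and confirm it is listening on its default port 10000, one option is (the log path is only an example):
# on node03
nohup hiveserver2 > /tmp/hiveserver2.log 2>&1 &
netstat -lntp | grep 10000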

  12. Start the Hive client (Beeline)

On node04, run:
beeline
!connect jdbc:hive2://node03:10000 root 1
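Beeline can also connect and run a statement non-interactively; a sketch using the same URL and user as above (assuming HiveServer2 is left at its default NONE authentication, the password value is not actually checked):
# on node04
beeline -u jdbc:hive2://node03:10000 -n root -p 1 -e "SHOW DATABASES;"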

  13. Check the running processes

On node01, node02, node03, and node04, run:
jps
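Note that jps only lists Java processes: the metastore and HiveServer2 both show up under the generic name RunJar, and mysqld will not appear at all. For example, on node03 you could check the Hive services with:
# on node03
jps | grep RunJar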
