Linux Hadoop Pseudo-Distributed Installation (Hive on Spark)

Contents

    • 1. JDK
    • 2. Hadoop
    • 3. MySQL + Hive
      • 3.1 MySQL 8 Installation
      • 3.2 Hive Installation
    • 4. Spark
      • 4.1 Maven Installation
      • 4.2 Scala Installation
      • 4.3 Building and Installing Spark
    • 5. Zookeeper
    • 6. HBase

Version summary:

  • jdk: jdk-8u391-linux-x64.tar.gz
  • hadoop: hadoop-3.3.1.tar.gz
  • hive: apache-hive-3.1.2-bin.tar.gz
  • mysql: mysql-8.0.27-1.el7.x86_64.rpm-bundle.tar
  • maven: apache-maven-3.5.4-bin.tar.gz
  • scala: scala-2.11.12.tgz
  • spark: spark-2.3.0.tgz
  • zookeeper: zookeeper-3.4.10.tar.gz
  • hbase: hbase-2.4.12-bin.tar.gz

1. JDK

JDK download


# Extract
[root@sole install]# tar -zxvf jdk-8u391-linux-x64.tar.gz -C /opt/software/
##########################################################################################
# Edit environment variables
[root@sole ~]# vi /etc/profile.d/my.sh
# JAVA_HOME
export JAVA_HOME=/opt/software/jdk1.8.0_391
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib:$CLASSPATH
export JAVA_PATH=${JAVA_HOME}/bin:${JRE_HOME}/bin
export PATH=$PATH:${JAVA_PATH}

# Reload
[root@sole ~]# source /etc/profile
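As a quick local sanity check that the snippet expands the way you expect, the exports can be sourced in isolation before relying on them; this is just a sketch (paths assumed from the steps above), not part of the install itself:

```shell
# Write the same exports to a temp file and source it in a subshell,
# then confirm the composed JAVA_PATH resolves to the expected value
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
export JAVA_HOME=/opt/software/jdk1.8.0_391
export JRE_HOME=${JAVA_HOME}/jre
export JAVA_PATH=${JAVA_HOME}/bin:${JRE_HOME}/bin
EOF
resolved=$(. "$tmp"; printf '%s' "$JAVA_PATH")
rm -f "$tmp"
echo "$resolved"
# prints /opt/software/jdk1.8.0_391/bin:/opt/software/jdk1.8.0_391/jre/bin
```

On the real machine, `java -version` after `source /etc/profile` is the definitive check.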

2. Hadoop

Hadoop download


# Extract
[root@sole install]# tar -zxvf hadoop-3.3.1.tar.gz -C /opt/software/
##########################################################################################
# Edit environment variables
[root@sole ~]# vi /etc/profile.d/my.sh
# HADOOP_HOME
export HADOOP_HOME=/opt/software/hadoop-3.3.1
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
# Reload
[root@sole ~]# source /etc/profile
# Edit the Hadoop configuration
[root@sole software]# cd hadoop-3.3.1/etc/hadoop/

[root@sole hadoop]# vi hadoop-env.sh
export JAVA_HOME=/opt/software/jdk1.8.0_391
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
[root@sole hadoop]# vi core-site.xml
<configuration>

<property>
    <name>fs.defaultFS</name>
    <value>hdfs://sole:9000</value>
</property>

<property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/software/hadoop-3.3.1/hadoopdata</value>
</property>
</configuration>
[root@sole hadoop]# vi hdfs-site.xml
<configuration>

<property>
    <name>dfs.replication</name>
    <value>1</value>
</property>
<property>
    <name>dfs.namenode.name.dir</name>
    <value>/opt/software/hadoop-3.3.1/tmp/name</value>
</property>
<property>
    <name>dfs.datanode.data.dir</name>
    <value>/opt/software/hadoop-3.3.1/tmp/data</value>
</property>
</configuration>
[root@sole hadoop]# vi yarn-site.xml
<configuration>

<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>
<property>
    <name>yarn.resourcemanager.address</name>
    <value>sole:18040</value>
</property>
<property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>sole:18030</value>
</property>
<property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>sole:18025</value>
</property>
<property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>sole:18141</value>
</property>
<property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>sole:18088</value>
</property>
</configuration>
[root@sole hadoop]# vi mapred-site.xml
<configuration>

<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
</property>
</configuration>

# Format the NameNode, from the ${HADOOP_HOME}/sbin directory
[root@sole sbin]# pwd
/opt/software/hadoop-3.3.1/sbin
[root@sole sbin]# hdfs namenode -format

# Start the services
[root@sole sbin]# start-dfs.sh
[root@sole sbin]# start-yarn.sh
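Once both scripts return, `jps` should show five daemons. A sketch of that check, run here against simulated `jps` output (process IDs are illustrative) so the loop itself is clear:

```shell
# Expected daemons after start-dfs.sh + start-yarn.sh on a pseudo-distributed node;
# jps_out stands in for the real `jps` output
jps_out='1201 NameNode
1388 DataNode
1576 SecondaryNameNode
1790 ResourceManager
1984 NodeManager
2100 Jps'
missing=0
for d in NameNode DataNode SecondaryNameNode ResourceManager NodeManager; do
  echo "$jps_out" | grep -q "$d" || { echo "missing: $d"; missing=1; }
done
echo "missing=$missing"
# prints missing=0
```

On the real machine, replace the simulated string with `jps_out=$(jps)`.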



3. MySQL + Hive

3.1 MySQL 8 Installation

# Remove any existing mariadb packages; if MySQL was installed before, fully uninstall the old MySQL services first
[root@sole ~]# rpm -qa | grep mariadb
[root@sole ~]# yum remove mariadb-libs

MySQL 8.0.27 tar bundle download



[root@sole ~]# tar -xvf mysql-8.0.27-1.el7.x86_64.rpm-bundle.tar



# Install these dependencies up front to avoid errors during installation
[root@sole ~]# yum -y install libaio
[root@sole ~]# yum install openssl-devel.x86_64 openssl.x86_64 -y
[root@sole ~]# yum -y install autoconf
[root@sole ~]# yum install perl.x86_64 perl-devel.x86_64 -y
[root@sole ~]# yum install perl-JSON.noarch -y
[root@sole ~]# yum install perl-Test-Simple -y
[root@sole ~]# yum install net-tools -y
# Install MySQL (this order satisfies the RPM dependencies)
[root@sole ~]# rpm -ivh mysql-community-common-8.0.27-1.el7.x86_64.rpm
[root@sole ~]# rpm -ivh mysql-community-client-plugins-8.0.27-1.el7.x86_64.rpm
[root@sole ~]# rpm -ivh mysql-community-libs-8.0.27-1.el7.x86_64.rpm
[root@sole ~]# rpm -ivh mysql-community-client-8.0.27-1.el7.x86_64.rpm
[root@sole ~]# rpm -ivh mysql-community-server-8.0.27-1.el7.x86_64.rpm
[root@sole ~]# rpm -ivh mysql-community-libs-compat-8.0.27-1.el7.x86_64.rpm
[root@sole ~]# rpm -ivh mysql-community-embedded-compat-8.0.27-1.el7.x86_64.rpm
[root@sole ~]# rpm -ivh mysql-community-devel-8.0.27-1.el7.x86_64.rpm
# Initialize and start MySQL
[root@sole ~]# mysqld --initialize --console
[root@sole ~]# chown -R mysql:mysql /var/lib/mysql/
[root@sole ~]# systemctl start mysqld.service
[root@sole ~]# systemctl status mysqld.service
[root@sole ~]# cat /var/log/mysqld.log | grep password    # view the temporary password
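The `grep password` above prints the whole log line; the password itself is the last field of the "A temporary password is generated" line. A sketch of pulling out just that field, run against a sample line (the log format shown is assumed from MySQL 8; the password value is made up):

```shell
# Simulated mysqld.log line; on the real machine, pipe the actual log through
# the same grep | awk to capture the temporary root password
sample='2023-11-01T00:00:00.000000Z 6 [Note] [MY-010454] [Server] A temporary password is generated for root@localhost: Xk9!pQ2rLm&a'
temp_pw=$(printf '%s\n' "$sample" | grep 'temporary password' | awk '{print $NF}')
echo "$temp_pw"
# prints Xk9!pQ2rLm&a
```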


# Change the password & allow remote login
mysql> alter user 'root'@'localhost' identified by 'root';
mysql> CREATE USER 'root'@'%' IDENTIFIED BY 'root';
mysql> GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' WITH GRANT OPTION;
mysql> FLUSH PRIVILEGES;
# Change the MySQL character set: append the following configuration at the end
[root@sole ~]# vi /etc/my.cnf

[mysql.server]
default-character-set = utf8
[client]
default-character-set = utf8


# Restart the service after adding the configuration
[root@sole ~]# service mysqld restart

3.2 Hive Installation

Hive download


# Extract
[root@sole install]# tar -zxvf apache-hive-3.1.2-bin.tar.gz -C /opt/software/
##########################################################################################
# Edit environment variables
[root@sole ~]# vi /etc/profile.d/my.sh
# HIVE_HOME
export HIVE_HOME=/opt/software/apache-hive-3.1.2-bin
export PATH=$HIVE_HOME/bin:$PATH
# Reload
[root@sole ~]# source /etc/profile
# Edit the Hive configuration under ${HIVE_HOME}/conf
[root@sole conf]# cp hive-env.sh.template hive-env.sh
[root@sole conf]# vi hive-env.sh

export JAVA_HOME=/opt/software/jdk1.8.0_391
export HADOOP_HOME=/opt/software/hadoop-3.3.1
export HIVE_CONF_DIR=/opt/software/apache-hive-3.1.2-bin/conf
export HIVE_AUX_JARS_PATH=/opt/software/apache-hive-3.1.2-bin/lib
[root@sole conf]# vi hive-site.xml

<configuration>

<property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/user/hive/warehouse</value>
</property>

<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://sole:3306/hive?createDatabaseIfNotExist=true&amp;useSSL=false&amp;useUnicode=true&amp;characterEncoding=UTF-8&amp;allowPublicKeyRetrieval=true</value>
</property>

<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.cj.jdbc.Driver</value>
</property>

<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>root</value>
</property>

<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>root</value>
</property>

<property>
  <name>hive.metastore.schema.verification</name>
  <value>false</value>
</property>

<property>
  <name>system:user.name</name>
  <value>root</value>
  <description>user name</description>
</property>

<property>
  <name>hive.server2.thrift.bind.host</name>
  <value>sole</value>
  <description>Bind host on which to run the HiveServer2 Thrift service.</description>
</property>

<property>
  <name>hive.server2.thrift.port</name>
  <value>11000</value>
</property>

<property>
  <name>hive.metastore.uris</name>
  <value>thrift://sole:9083</value>
</property>

<property>
    <name>spark.yarn.jars</name>
    <value>hdfs:///spark/spark-2.3.0-jars/*.jar</value>
</property>

<property>
    <name>hive.execution.engine</name>
    <value>spark</value>
</property>

<property>
    <name>hive.spark.client.connect.timeout</name>
    <value>100000ms</value>
</property>

</configuration>
# MySQL JDBC driver
[root@sole install]# cp mysql-connector-j-8.0.33.jar /opt/software/apache-hive-3.1.2-bin/lib/

MySQL driver download

Maven download

<dependency>
  <groupId>com.mysql</groupId>
  <artifactId>mysql-connector-j</artifactId>
  <version>8.0.33</version>
</dependency>
# Resolve the guava.jar version conflict between Hadoop and Hive:
# delete guava-19.0.jar from ${HIVE_HOME}/lib,
# then copy ${HADOOP_HOME}/share/hadoop/common/lib/guava-27.0-jre.jar into ${HIVE_HOME}/lib

[root@sole install]# cd /opt/software/apache-hive-3.1.2-bin/lib/
[root@sole lib]# rm -f guava-19.0.jar 
[root@sole lib]# cp /opt/software/hadoop-3.3.1/share/hadoop/common/lib/guava-27.0-jre.jar ./
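The swap can be rehearsed against dummy files first, which makes the end state explicit: after the delete and copy, `ls ${HIVE_HOME}/lib | grep guava` should show only the 27.0 jar. This sketch only mimics the file operations in a temp directory (the real paths are ${HIVE_HOME}/lib and ${HADOOP_HOME}/share/hadoop/common/lib):

```shell
# Stand-in directories for the Hive and Hadoop lib folders
hive_lib=$(mktemp -d); hadoop_lib=$(mktemp -d)
touch "$hive_lib/guava-19.0.jar" "$hadoop_lib/guava-27.0-jre.jar"
# Same two operations as above: remove the old guava, copy in Hadoop's
rm -f "$hive_lib/guava-19.0.jar"
cp "$hadoop_lib/guava-27.0-jre.jar" "$hive_lib/"
result=$(ls "$hive_lib")
rm -rf "$hive_lib" "$hadoop_lib"
echo "$result"
# prints guava-27.0-jre.jar
```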

# Initialize the metastore database
[root@sole bin]# ./schematool -dbType mysql -initSchema

# Start the metastore & hiveserver2 services
[root@sole bin]# nohup hive --service metastore>hive.log 2>&1 &
[root@sole bin]# nohup hive --service hiveserver2>/dev/null 2>&1 &



4. Spark

Spark/Maven download
Scala download



4.1 Maven安装

# Extract
[root@sole install]# tar -zxvf apache-maven-3.5.4-bin.tar.gz -C /opt/software
[root@sole software]# mv apache-maven-3.5.4/ maven-3.5.4
##########################################################################################
# Edit environment variables
[root@sole ~]# vi /etc/profile.d/my.sh
# MAVEN_HOME
export MAVEN_HOME=/opt/software/maven-3.5.4
export PATH=$MAVEN_HOME/bin:$PATH

# Reload the environment variables and test Maven
[root@sole ~]# source /etc/profile
[root@sole ~]# mvn -v


# Configure the Aliyun mirror
[root@sole software]# vi maven-3.5.4/conf/settings.xml
<mirror>
	<id>alimaven</id>
	<name>aliyun maven</name>
	<url>http://maven.aliyun.com/nexus/content/groups/public/</url>
	<mirrorOf>central</mirrorOf>
</mirror>

4.2 Scala安装

# Extract
[root@sole install]# tar -zxvf scala-2.11.12.tgz -C /opt/software
##########################################################################################
# Edit environment variables
[root@sole ~]# vi /etc/profile.d/my.sh
#SCALA_HOME
export SCALA_HOME=/opt/software/scala-2.11.12
export PATH=$SCALA_HOME/bin:$PATH

# Reload and test
[root@sole ~]# source /etc/profile
[root@sole ~]# scala -version



4.3 Building and Installing Spark

# Extract
[root@sole install]# tar -zxvf spark-2.3.0.tgz -C /opt/software
# Build
[root@sole software]# cd spark-2.3.0/
[root@sole spark-2.3.0]# ./dev/make-distribution.sh --name without-hive --tgz -Pyarn -Phadoop-2.7 -Dhadoop.version=3.3.1 -Pparquet-provided -Porc-provided -Phadoop-provided



# Extract the built distribution
[root@sole spark-2.3.0]# tar -zxvf spark-2.3.0-bin-without-hive.tgz -C /opt/software/
# Configure
[root@sole ~]# vi /etc/profile.d/my.sh

# SPARK_HOME
export SPARK_HOME=/opt/software/spark-2.3.0-bin-without-hive
export SPARK_CLASSPATH=$SPARK_HOME/jars
export PATH=$SPARK_HOME/bin:$PATH
##########################################################################################
[root@sole spark-2.3.0-bin-without-hive]# cd conf/
[root@sole conf]# cp spark-defaults.conf.template spark-defaults.conf
[root@sole conf]# vi spark-defaults.conf

spark.master                     yarn
spark.home                       /opt/software/spark-2.3.0-bin-without-hive
spark.eventLog.enabled           true
spark.eventLog.dir               hdfs://sole:9000/tmp/spark
spark.serializer                 org.apache.spark.serializer.KryoSerializer
spark.executor.memory            1g
spark.driver.memory              1g
spark.executor.extraJavaOptions  -XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
spark.yarn.archive               hdfs:///spark/jars/spark2.3.0-without-hive-libs.jar
spark.yarn.jars                  hdfs:///spark/jars/spark2.3.0-without-hive-libs.jar
##########################################################################################
[root@sole conf]# cp spark-env.sh.template spark-env.sh
[root@sole conf]# vi spark-env.sh

export SPARK_DIST_CLASSPATH=$(hadoop classpath)
export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop/
##########################################################################################
# Package and upload jars, then replace/copy the ones Hive needs
[root@sole spark-2.3.0-bin-without-hive]# hadoop fs -mkdir -p hdfs://sole:9000/tmp/spark
[root@sole spark-2.3.0-bin-without-hive]# hadoop fs -mkdir -p /spark/spark-2.3.0-jars
[root@sole spark-2.3.0-bin-without-hive]# jar cv0f spark2.3.0-without-hive-libs.jar -C ./jars/ .
[root@sole spark-2.3.0-bin-without-hive]# hadoop fs -mkdir -p /spark/jars
[root@sole spark-2.3.0-bin-without-hive]# hadoop fs -put spark2.3.0-without-hive-libs.jar /spark/jars/
[root@sole spark-2.3.0-bin-without-hive]# hadoop fs -put ./jars/* /spark/spark-2.3.0-jars/
[root@sole spark-2.3.0-bin-without-hive]# cp jars/scala-library-2.11.8.jar ../apache-hive-3.1.2-bin/lib/
[root@sole spark-2.3.0-bin-without-hive]# cp jars/spark-core_2.11-2.3.0.jar ../apache-hive-3.1.2-bin/lib/
[root@sole spark-2.3.0-bin-without-hive]# cp jars/spark-network-common_2.11-2.3.0.jar ../apache-hive-3.1.2-bin/lib/

Build reference

# Start the Hive services and test
[root@sole ~]# nohup hive --service metastore>hive.log 2>&1 &
[root@sole ~]# nohup hive --service hiveserver2>/dev/null 2>&1 &
[root@sole ~]# hive



5. Zookeeper

Zookeeper download


# Extract
[root@sole install]# tar -zxvf zookeeper-3.4.10.tar.gz -C /opt/software/
##########################################################################################
# Edit environment variables
[root@sole ~]# vi /etc/profile.d/my.sh
# ZK_HOME
export ZK_HOME=/opt/software/zookeeper-3.4.10
export PATH=$ZK_HOME/bin:$ZK_HOME/sbin:$PATH
# Reload
[root@sole ~]# source /etc/profile
# Configure
[root@sole install]# cd /opt/software/zookeeper-3.4.10/
# Create a mydata directory under the ZooKeeper root; the myid file must live
# inside it, since mydata is the dataDir configured in zoo.cfg below
[root@sole zookeeper-3.4.10]# mkdir mydata
[root@sole zookeeper-3.4.10]# echo "1" > mydata/myid
##########################################################################################
[root@sole zookeeper-3.4.10]# cd /opt/software/zookeeper-3.4.10/conf
[root@sole conf]# cp zoo_sample.cfg zoo.cfg
[root@sole conf]# vi zoo.cfg

# Cluster entries in zoo.cfg follow the format: server.N=YYY:A:B
# N: the server number (the value stored in myid);
# YYY: the server address/hostname;
# A: the port Followers use to talk to the Leader, i.e. internal communication (default 2888);
# B: the leader-election port (default 3888);
dataDir=/opt/software/zookeeper-3.4.10/mydata
server.1=sole:2888:3888
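Note that ZooKeeper resolves myid relative to dataDir, so the id file must sit inside the directory that dataDir names. A minimal sketch of the expected layout, simulated in a temp directory:

```shell
# Stand-in for the ZooKeeper root; mydata mirrors the dataDir in zoo.cfg
zk_root=$(mktemp -d)
mkdir "$zk_root/mydata"
echo "1" > "$zk_root/mydata/myid"
# The number in myid must match the server.N entry (server.1=sole:2888:3888)
myid=$(cat "$zk_root/mydata/myid")
rm -rf "$zk_root"
echo "$myid"
# prints 1
```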
# Start the service
[root@sole ~]# zkServer.sh start



6. HBase

HBase download


# Extract
[root@sole install]# tar -zxvf hbase-2.4.12-bin.tar.gz -C /opt/software/
##########################################################################################
# Edit environment variables
[root@sole ~]# vi /etc/profile.d/my.sh
# HBASE_HOME
export HBASE_HOME=/opt/software/hbase-2.4.12
export PATH=$HBASE_HOME/bin:$PATH
# Reload
[root@sole ~]# source /etc/profile
# Configure
[root@sole ~]# cd /opt/software/hbase-2.4.12/conf/
[root@sole conf]# vi hbase-env.sh

export JAVA_HOME=/opt/software/jdk1.8.0_391
[root@sole conf]# vi hbase-site.xml
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://sole:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.tmp.dir</name>
    <value>./tmp</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/opt/software/zookeeper-3.4.10/mydata</value>
  </property>
  <property>
    <name>hbase.unsafe.stream.capability.enforce</name>
    <value>false</value>
  </property>
</configuration>
# Start the service
[root@sole ~]# start-hbase.sh



PS: If anything here is wrong or poorly written, please leave your comments or suggestions below. If this post helped you, a like would be much appreciated!


Original author: wsjslient

