Hive Installation, Basic Usage, and Spark SQL Integration

Setting Up the Hive Environment

  1. Download Hive: http://archive-primary.cloudera.com/cdh5/cdh/5/hive-1.1.0-cdh5.7.0.tar.gz
    wget http://archive-primary.cloudera.com/cdh5/cdh/5/hive-1.1.0-cdh5.7.0.tar.gz

  2. Extract the archive
    tar -zxvf hive-1.1.0-cdh5.7.0.tar.gz -C ../apps/

  3. System environment variables (vim ~/.bash_profile)
    export HIVE_HOME=/root/apps/hive-1.1.0-cdh5.7.0
    export PATH=$HIVE_HOME/bin:$PATH
    source ~/.bash_profile
  4. Configuration
    4.1 Export HADOOP_HOME in $HIVE_HOME/conf/hive-env.sh
    4.2 Copy the MySQL JDBC driver jar into $HIVE_HOME/lib
    4.3 vim hive-site.xml, adding the metastore connection settings:

    <configuration>
        <property>
            <name>javax.jdo.option.ConnectionURL</name>
            <value>jdbc:mysql://spark003:3306/hive?createDatabaseIfNotExist=true</value>
        </property>
        <property>
            <name>javax.jdo.option.ConnectionDriverName</name>
            <value>com.mysql.jdbc.Driver</value>
        </property>
        <property>
            <name>javax.jdo.option.ConnectionUserName</name>
            <value>root</value>
        </property>
        <property>
            <name>javax.jdo.option.ConnectionPassword</name>
            <value>123456</value>
        </property>
    </configuration>
  5. Start Hive: $HIVE_HOME/bin/hive
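
Once the CLI comes up, a quick smoke test confirms the MySQL-backed metastore is wired up. A minimal sketch, assuming the MySQL instance on spark003 is reachable with the credentials configured above:

    # Run a trivial statement; on first contact Hive creates its metastore tables
    $HIVE_HOME/bin/hive -e "show databases;"
    # Check that the metastore schema landed in the `hive` database in MySQL
    mysql -uroot -p123456 -e "use hive; show tables;"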

Basic Hive Usage

Create a table

create table test_table(name string);
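
A quick way to confirm the table exists is Hive's -e flag, which runs a single statement non-interactively; a sketch:

    hive -e "desc test_table;"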

Load local data into the Hive table (LOCAL mode)

load data local inpath '/home/hadoop/data/hello.txt' into table test_table;
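
The LOCAL keyword makes Hive copy the file from the local filesystem; without it, the path is resolved on HDFS and the file is moved into the table's warehouse directory. A quick check that the rows arrived:

    hive -e "select count(1) from test_table;"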

Query the table and count word frequencies:
select * from test_table;

select word, count(1) from test_table lateral view explode(split(name, '\t')) wc as word group by word;
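
For reference: split(name, '\t') turns each line into an array of words, explode emits one row per array element, and the GROUP BY tallies the duplicates. The intermediate step can be inspected on its own; a sketch, assuming hello.txt holds tab-separated words:

    hive -e "select explode(split(name, '\t')) as word from test_table;"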

A small case study

create table emp(
empno int,
ename string,
job string,
mgr int,
sal double,
comm double,
deptno int
)row format delimited fields terminated by '\t';

create table dept(
deptno int,
dname string,
location string
)row format delimited fields terminated by '\t';
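
Both tables expect tab-delimited input files, matching the row format clause above. Hypothetical sample lines for illustration (fields separated by tab characters; \N is Hive's default marker for NULL):

    # emp.txt
    7369	SMITH	CLERK	7902	800.0	\N	20
    # dept.txt
    10	ACCOUNTING	NEW YORK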

load data local inpath '/home/hadoop/data/emp.txt' into table emp;
load data local inpath '/home/hadoop/data/dept.txt' into table dept;

Analysis:
Count the number of employees in each department:
select deptno,count(1) from emp group by deptno;
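
Since dept is loaded as well, the same count can be joined against department names; a sketch building on the two tables above:

    hive -e "select d.dname, count(1) from emp e join dept d on e.deptno = d.deptno group by d.dname;"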

Integrating Spark SQL with Hive (spark-shell)

  1. Copy Hive's configuration file hive-site.xml into Spark's conf directory, and add the metastore URL property:

    <property>
        <name>hive.metastore.uris</name>
        <value>thrift://spark001:9083</value>
    </property>

  2. Copy the MySQL JDBC driver jar into Spark's lib directory:
[root@spark001 lib]# pwd
/root/apps/spark-2.2.0-bin-2.6.0-cdh5.7.0/lib
[root@spark001 lib]# ll
total 972
-rw-r--r--. 1 root root 992805 Oct 23 23:59 mysql-connector-java-5.1.41.jar
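
The jar can be copied straight from the Hive lib directory populated in step 4.2 of the installation (filename as shown in the listing above):

    cp $HIVE_HOME/lib/mysql-connector-java-5.1.41.jar /root/apps/spark-2.2.0-bin-2.6.0-cdh5.7.0/lib/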

  3. Modify the configuration in spark-env.sh

Edit the file (vim spark-env.sh) and add the following:

export JAVA_HOME=/root/apps/jdk1.8.0_144
export SPARK_HOME=/root/apps/spark-2.2.0-bin-2.6.0-cdh5.7.0
export SCALA_HOME=/root/apps/scala-2.11.8
# The line below is the new addition (HADOOP_CONF_DIR should point at the directory holding your Hadoop configuration files)
export HADOOP_CONF_DIR=/root/apps/spark-2.2.0-bin-2.6.0-cdh5.7.0/etc/hadoop
  4. Start the services
    Start Hadoop: start-all.sh
    Start Spark: start-all.sh (Spark's own script, under $SPARK_HOME/sbin)
    Start the MySQL database backing the metastore: service mysqld restart
    Start the Hive metastore service: hive --service metastore (runs in the foreground; use a separate terminal or append & to background it)
    Start the Hive CLI: hive
    Start the spark-shell CLI: spark-shell
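
Before testing, it is worth verifying that the metastore service is actually listening; a sketch (9083 is the thrift port configured in hive-site.xml above):

    netstat -tlnp | grep 9083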

  5. A quick test
    Create a local file test.csv with the following contents:
    0001,spark
    0002,hive
    0003,hbase
    0004,hadoop

Run the following Hive commands:

hive> show databases;
hive> create database databases1;
hive> create table if not exists test(userid string,username string) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' STORED AS textfile;
hive> load data local inpath "/root/test.csv" into table test;
hive> select * from test;

Run the query from spark-shell:

spark.sql("select * from databases1.test").show
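
Given the four rows loaded from test.csv, the call above should print something close to:

    +------+--------+
    |userid|username|
    +------+--------+
    |  0001|   spark|
    |  0002|    hive|
    |  0003|   hbase|
    |  0004|  hadoop|
    +------+--------+

spark.sql returns a DataFrame, so any DataFrame operation (filter, count, write) can be chained onto the same query.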
