Environment Setup (07): Hive Environment Setup

1. Download Hive

wget http://archive.cloudera.com/cdh5/cdh/5/hive-1.1.0-cdh5.7.0.tar.gz

2. Extract Hive

tar -zxvf hive-1.1.0-cdh5.7.0.tar.gz -C ~/app/
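
To confirm the extraction, the unpacked directory should now show up under ~/app (the name simply mirrors the tarball):

ls ~/app/ | grep hive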

3. Configure Hive environment variables

vi ~/.bash_profile
export HIVE_HOME=/home/hadoop/app/hive-1.1.0-cdh5.7.0
export PATH=$HIVE_HOME/bin:$PATH
source ~/.bash_profile
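
A quick way to confirm the variables took effect in the current shell (the expected output paths are just what this setup would produce):

echo $HIVE_HOME
which hive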

4. Install MySQL

wget https://repo.mysql.com/mysql57-community-release-el6-8.noarch.rpm

sudo yum localinstall mysql57-community-release-el6-8.noarch.rpm

sudo yum -y install mysql-community-server

sudo service mysqld start

sudo grep 'temporary password' /var/log/mysqld.log

mysql -uroot -p'/cnKITWl3gzS'

mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY 'MyNewPass4!';

mysql -uroot -p'MyNewPass4!'
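
Optionally, instead of letting Hive connect as root, a dedicated metastore account can be created. A minimal sketch; the hive user name and HivePass4! password below are made up, and if you use them, point ConnectionUserName/ConnectionPassword in hive-site.xml at this account instead of root:

mysql -uroot -p'MyNewPass4!'

mysql> CREATE USER 'hive'@'localhost' IDENTIFIED BY 'HivePass4!';
mysql> GRANT ALL PRIVILEGES ON sparksql.* TO 'hive'@'localhost';
mysql> FLUSH PRIVILEGES;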

5. hive-env.sh

  • File path: /home/hadoop/app/hive-1.1.0-cdh5.7.0/conf
vi hive-env.sh
HADOOP_HOME=/home/hadoop/app/hadoop-2.6.0-cdh5.7.0
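
If hive-env.sh is not present yet, it can be copied from the template that ships in the same conf directory before editing:

cd /home/hadoop/app/hive-1.1.0-cdh5.7.0/conf
cp hive-env.sh.template hive-env.sh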

6. hive-site.xml

  • File path: /home/hadoop/app/hive-1.1.0-cdh5.7.0/conf
  • This file does not exist by default; it needs to be created

<?xml version="1.0"?>
<configuration>
    <property>
        <name>javax.jdo.option.ConnectionURL</name>
        <value>jdbc:mysql://localhost:3306/sparksql?createDatabaseIfNotExist=true</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionDriverName</name>
        <value>com.mysql.jdbc.Driver</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionUserName</name>
        <value>root</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionPassword</name>
        <value>MyNewPass4!</value>
    </property>
</configuration>
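
With the connection properties in place, the metastore schema can also be initialized explicitly with schematool. This is optional here, since (as step 9 shows) this Hive version creates the tables on first start anyway:

$HIVE_HOME/bin/schematool -dbType mysql -initSchema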
7. Copy the MySQL driver JAR to $HIVE_HOME/lib

cp ~/software/mysql/mysql-connector-java-5.1.29.jar /home/hadoop/app/hive-1.1.0-cdh5.7.0/lib
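
A quick check that the driver landed in the lib directory:

ls $HIVE_HOME/lib | grep mysql-connector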

8. Start Hive

[hadoop@hadoop001 lib]$ hive
which: no hbase in (/home/hadoop/app/hive-1.1.0-cdh5.7.0/bin:/home/hadoop/app/hadoop-2.6.0-cdh5.7.0/bin:/home/hadoop/app/jdk1.7.0_51/bin:/home/hadoop/app/hadoop-2.6.0-cdh5.7.0/bin:/home/hadoop/app/jdk1.7.0_51/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hadoop/bin:/home/hadoop/bin)

Logging initialized using configuration in jar:file:/home/hadoop/app/hive-1.1.0-cdh5.7.0/lib/hive-common-1.1.0-cdh5.7.0.jar!/hive-log4j.properties
WARNING: Hive CLI is deprecated and migration to Beeline is recommended.
hive>
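
A minimal smoke test at the prompt; the table name is only an example:

hive> create table test_hive(id int, name string);
hive> show tables;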

9. View the tables automatically created in MySQL

mysql -uroot -p'MyNewPass4!'

mysql> use sparksql
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> show tables;
+---------------------------+
| Tables_in_sparksql        |
+---------------------------+
| BUCKETING_COLS            |
| CDS                       |
| COLUMNS_V2                |
| DATABASE_PARAMS           |
| DBS                       |
| FUNCS                     |
| FUNC_RU                   |
| GLOBAL_PRIVS              |
| PARTITIONS                |
| PARTITION_KEYS            |
| PARTITION_KEY_VALS        |
| PARTITION_PARAMS          |
| PART_COL_STATS            |
| ROLES                     |
| SDS                       |
| SD_PARAMS                 |
| SEQUENCE_TABLE            |
| SERDES                    |
| SERDE_PARAMS              |
| SKEWED_COL_NAMES          |
| SKEWED_COL_VALUE_LOC_MAP  |
| SKEWED_STRING_LIST        |
| SKEWED_STRING_LIST_VALUES |
| SKEWED_VALUES             |
| SORT_COLS                 |
| TABLE_PARAMS              |
| TAB_COL_STATS             |
| TBLS                      |
| VERSION                   |
+---------------------------+
29 rows in set (0.00 sec)
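
To see how the metastore records Hive databases and tables (for example the test_hive table from step 8, if it was created), DBS and TBLS can be joined:

mysql> SELECT d.NAME AS db_name, t.TBL_NAME FROM DBS d JOIN TBLS t ON d.DB_ID = t.DB_ID;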
