【Hive】Common Operations

Connecting to Hive

First start hiveserver2, then connect with beeline.

Connect:
[root@hadoop1 conf]# beeline  --hiveconf hive.server2.logging.operation.level=NONE
Beeline version 1.6.3 by Apache Hive
beeline> !connect jdbc:hive2://hadoop1:10000
Connecting to jdbc:hive2://hadoop1:10000
Enter username for jdbc:hive2://hadoop1:10000: root
Enter password for jdbc:hive2://hadoop1:10000: ******

List databases

0: jdbc:hive2://hadoop1:10000> show databases;
+----------------+--+
| database_name  |
+----------------+--+
| default        |
+----------------+--+
1 row selected (5.894 seconds)

Create a database

0: jdbc:hive2://hadoop1:10000> create database anasys;
0: jdbc:hive2://hadoop1:10000> show databases;
+----------------+--+
| database_name  |
+----------------+--+
| anasys         |
| default        |
+----------------+--+
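A few other database-level statements worth knowing (this is standard HiveQL; the database name `anasys` just follows the example above):

```sql
-- Avoid an error if the database already exists
CREATE DATABASE IF NOT EXISTS anasys;

-- Show the database's properties, including its location on HDFS
DESCRIBE DATABASE anasys;

-- Drop the database; CASCADE also drops any tables inside it
DROP DATABASE IF EXISTS anasys CASCADE;
```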

Create a table and load data

0: jdbc:hive2://hadoop1:10000> use anasys;
0: jdbc:hive2://hadoop1:10000> create table emp(id int, name string, job string, mgr int, hiredate string, salary double, bonus double, deptid int) row format delimited fields terminated by '\t';
0: jdbc:hive2://hadoop1:10000> load data local inpath "/opt/files/emp.txt" into table emp;
0: jdbc:hive2://hadoop1:10000> select * from emp;
+---------+-----------+------------+----------+---------------+-------------+------------+-------------+--+
| emp.id  | emp.name  |  emp.job   | emp.mgr  | emp.hiredate  | emp.salary  | emp.bonus  | emp.deptid  |
+---------+-----------+------------+----------+---------------+-------------+------------+-------------+--+
| 7369    | SMITH     | CLERK      | 7902     | 1980-12-17    | 800.0       | NULL       | 20          |
| 7499    | ALLEN     | SALESMAN   | 7698     | 1981-2-20     | 1600.0      | 300.0      | 30          |
| 7521    | WARD      | SALESMAN   | 7698     | 1981-2-22     | 1250.0      | 500.0      | 30          |
| 7566    | JONES     | MANAGER    | 7839     | 1981-4-2      | 2975.0      | NULL       | 20          |
| 7654    | MARTIN    | SALESMAN   | 7698     | 1981-9-28     | 1250.0      | 1400.0     | 30          |
| 7698    | BLAKE     | MANAGER    | 7839     | 1981-5-1      | 2850.0      | NULL       | 30          |
| 7782    | CLARK     | MANAGER    | 7839     | 1981-6-9      | 2450.0      | NULL       | 10          |
| 7788    | SCOTT     | ANALYST    | 7566     | 1987-4-19     | 3000.0      | NULL       | 20          |
| 7839    | KING      | PRESIDENT  | NULL     | 1981-11-17    | 5000.0      | NULL       | 10          |
| 7844    | TURNER    | SALESMAN   | 7698     | 1981-9-8      | 1500.0      | 0.0        | 30          |
| 7876    | ADAMS     | CLERK      | 7788     | 1987-5-23     | 1100.0      | NULL       | 20          |
| 7900    | JAMAES    | CLERK      | 7698     | 1981-12-3     | 950.0       | NULL       | 30          |
| 7902    | FORD      | ANALYST    | 7566     | 1981-12-3     | 3000.0      | NULL       | 20          |
| 7934    | MILLER    | CLERK      | 7782     | 1982-1-23     | 1300.0      | NULL       | 10          |
+---------+-----------+------------+----------+---------------+-------------+------------+-------------+--+
0: jdbc:hive2://hadoop1:10000>
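For the LOAD DATA step above to parse correctly, /opt/files/emp.txt must be tab-delimited to match the table's `fields terminated by '\t'` clause; missing values are written as `\N`, Hive's default NULL marker in text files. Reconstructed from the SELECT output above, the first couple of lines of the file would look roughly like this:

```
7369	SMITH	CLERK	7902	1980-12-17	800.0	\N	20
7499	ALLEN	SALESMAN	7698	1981-2-20	1600.0	300.0	30
```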

After I wrote some HQL, running it through MapReduce blue-screened my machine, and switching to the Spark execution engine made beeline crash on startup. Searching online suggested the crash happens because my Spark build already bundles some Hive components that conflict, and the fix is to install a Spark build without Hive. Reinstalling Spark felt like too much hassle, so instead I used spark-sql to work with Hive directly, which works fine even for complex HQL.
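As an example of the kind of HQL that runs fine through spark-sql (or through beeline once the engine issues are sorted out), here is a hypothetical aggregation over the `emp` table above:

```sql
-- Average salary per department, keeping only departments
-- whose average salary exceeds 2000
SELECT deptid,
       COUNT(*)    AS headcount,
       AVG(salary) AS avg_salary
FROM   anasys.emp
GROUP  BY deptid
HAVING AVG(salary) > 2000;
```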
