Hive Basic Operations

Basic database operations (similar to MySQL commands):

# Enter the Hive CLI
-sh-4.1$ hive

# List databases
hive> show databases;

# List tables
hive> show tables;

# Describe a table
hive> describe [table_name];

# Show the partitions of a table
hive> show partitions [table_name];

# Truncate table data (use with caution: all of the table's data files are removed; note that TRUNCATE applies to managed tables and will normally raise an error on external tables)
hive> truncate table [table_name];

Creating and dropping databases

# ----------------------------------- Official documentation excerpt --------------------------------------------------------
CREATE (DATABASE|SCHEMA) [IF NOT EXISTS] database_name
  [COMMENT database_comment]
  [LOCATION hdfs_path]
  [WITH DBPROPERTIES (property_name=property_value, ...)];

DROP (DATABASE|SCHEMA) [IF EXISTS] database_name [RESTRICT|CASCADE];

To drop the tables in the database as well, use DROP DATABASE ... CASCADE.
# ----------------------------------- Official documentation excerpt --------------------------------------------------------
# Keywords
COMMENT: comment describing the database
LOCATION: HDFS path where the database's data is stored
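For example, a minimal sketch of creating and dropping a database (the database name and HDFS path below are illustrative, not from the official docs):

# Create a database with a comment and an explicit HDFS location
CREATE DATABASE IF NOT EXISTS demo_db
  COMMENT 'demo database'
  LOCATION '/user/hive/warehouse/demo_db.db';

# Drop it; CASCADE also removes any tables still inside the database
DROP DATABASE IF EXISTS demo_db CASCADE;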

Creating and dropping tables

# Table creation syntax
# ----------------------------------- Official documentation excerpt --------------------------------------------------------
CREATE [TEMPORARY] [EXTERNAL] TABLE [IF NOT EXISTS] [db_name.]table_name    -- (Note: TEMPORARY available in Hive 0.14.0 and later)
  [(col_name data_type [COMMENT col_comment], ... [constraint_specification])]
  [COMMENT table_comment]
  [PARTITIONED BY (col_name data_type [COMMENT col_comment], ...)]
  [CLUSTERED BY (col_name, col_name, ...) [SORTED BY (col_name [ASC|DESC], ...)] INTO num_buckets BUCKETS]
  [SKEWED BY (col_name, col_name, ...)                  -- (Note: Available in Hive 0.10.0 and later)]
     ON ((col_value, col_value, ...), (col_value, col_value, ...), ...)
     [STORED AS DIRECTORIES]
  [
   [ROW FORMAT row_format] 
   [STORED AS file_format]
     | STORED BY 'storage.handler.class.name' [WITH SERDEPROPERTIES (...)]  -- (Note: Available in Hive 0.6.0 and later)
  ]
  [LOCATION hdfs_path]
  [TBLPROPERTIES (property_name=property_value, ...)]   -- (Note: Available in Hive 0.6.0 and later)
  [AS select_statement];   -- (Note: Available in Hive 0.5.0 and later; not supported for external tables)
 
CREATE [TEMPORARY] [EXTERNAL] TABLE [IF NOT EXISTS] [db_name.]table_name
  LIKE existing_table_or_view_name
  [LOCATION hdfs_path];
 
data_type
  : primitive_type
  | array_type
  | map_type
  | struct_type
  | union_type  -- (Note: Available in Hive 0.7.0 and later)
 
primitive_type
  : TINYINT
  | SMALLINT
  | INT
  | BIGINT
  | BOOLEAN
  | FLOAT
  | DOUBLE
  | DOUBLE PRECISION -- (Note: Available in Hive 2.2.0 and later)
  | STRING
  | BINARY      -- (Note: Available in Hive 0.8.0 and later)
  | TIMESTAMP   -- (Note: Available in Hive 0.8.0 and later)
  | DECIMAL     -- (Note: Available in Hive 0.11.0 and later)
  | DECIMAL(precision, scale)  -- (Note: Available in Hive 0.13.0 and later)
  | DATE        -- (Note: Available in Hive 0.12.0 and later)
  | VARCHAR     -- (Note: Available in Hive 0.12.0 and later)
  | CHAR        -- (Note: Available in Hive 0.13.0 and later)
 
array_type
  : ARRAY < data_type >
 
map_type
  : MAP < primitive_type, data_type >
 
struct_type
  : STRUCT < col_name : data_type [COMMENT col_comment], ...>
 
union_type
   : UNIONTYPE < data_type, data_type, ... >  -- (Note: Available in Hive 0.7.0 and later)
 
row_format
  : DELIMITED [FIELDS TERMINATED BY char [ESCAPED BY char]] [COLLECTION ITEMS TERMINATED BY char]
        [MAP KEYS TERMINATED BY char] [LINES TERMINATED BY char]
        [NULL DEFINED AS char]   -- (Note: Available in Hive 0.13 and later)
  | SERDE serde_name [WITH SERDEPROPERTIES (property_name=property_value, property_name=property_value, ...)]
 
file_format:
  : SEQUENCEFILE
  | TEXTFILE    -- (Default, depending on hive.default.fileformat configuration)
  | RCFILE      -- (Note: Available in Hive 0.6.0 and later)
  | ORC         -- (Note: Available in Hive 0.11.0 and later)
  | PARQUET     -- (Note: Available in Hive 0.13.0 and later)
  | AVRO        -- (Note: Available in Hive 0.14.0 and later)
  | INPUTFORMAT input_format_classname OUTPUTFORMAT output_format_classname
 
constraint_specification:
  : [, PRIMARY KEY (col_name, ...) DISABLE NOVALIDATE ]
    [, CONSTRAINT constraint_name FOREIGN KEY (col_name, ...) REFERENCES table_name(col_name, ...) DISABLE NOVALIDATE ]
# ----------------------------------- Official documentation excerpt --------------------------------------------------------
# Keywords:
EXTERNAL: creates an external table
TEMPORARY: creates a temporary table
IF NOT EXISTS: create the table only if it does not already exist
PARTITIONED BY: partition columns; on HDFS each partition is simply a separate sub-directory
CLUSTERED BY (col_name, col_name, ...) [SORTED BY (col_name [ASC|DESC], ...)] INTO num_buckets BUCKETS: bucketing (by rows), which can improve the performance of joins between large and small tables (Impala has no equivalent construct)
SKEWED BY: skewed columns; can improve performance for tables where one or more columns contain heavily skewed values. By listing the values that occur most frequently (the severe skew), Hive records the skewed column names and values in the metastore and can use them to optimize joins. If STORED AS DIRECTORIES is also specified (list bucketing), Hive creates sub-directories for the skewed values, so queries are optimized further. (Impala has no equivalent construct.)
STORED AS: file storage format of the table data
LOCATION: HDFS location of the table's data files
LIKE: copies the table schema only, without copying the data
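A minimal sketch combining several of these keywords (the table names, columns, and HDFS path are illustrative):

# External table over raw text files, partitioned by day
CREATE EXTERNAL TABLE IF NOT EXISTS demo_db.page_views (
  user_id   BIGINT,
  url       STRING,
  view_time TIMESTAMP
)
COMMENT 'page view events'
PARTITIONED BY (dt STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
STORED AS TEXTFILE
LOCATION '/data/page_views';

# Managed table, bucketed by user_id and stored as ORC
CREATE TABLE IF NOT EXISTS demo_db.user_stats (
  user_id BIGINT,
  pv      BIGINT
)
CLUSTERED BY (user_id) INTO 32 BUCKETS
STORED AS ORC;

# Copy only the schema of an existing table, not its data
CREATE TABLE demo_db.page_views_like LIKE demo_db.page_views;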

# Drop a table (managed table: data files are deleted; external table: data files are kept)
hive> drop table [table_name];
hive> drop table if exists [table_name];
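A rough illustration of the difference (the table names and HDFS path are hypothetical):

# External table: DROP removes only the metadata; the files under /data/ext_logs stay on HDFS
CREATE EXTERNAL TABLE ext_logs (line STRING) LOCATION '/data/ext_logs';
DROP TABLE ext_logs;

# Managed (internal) table: DROP deletes both the metadata and the data files in the warehouse directory
CREATE TABLE managed_logs (line STRING);
DROP TABLE managed_logs;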

Hive characteristics

  1. Hive does not support modifying the data already stored in a table, but the table schema can be altered without affecting the data;
  2. By default there is no prompt showing which database you are currently in (setting hive.cli.print.current.db=true makes the CLI display it);
  3. Bash shell commands can be run inside the Hive CLI by prefixing them with ! and ending them with ; (see the examples after this list);
  4. Hadoop dfs commands can be run inside Hive by dropping the leading "hadoop" keyword and terminating the command with a semicolon;
  5. Compared with MySQL, Hive traditionally lacks row-level insert, update, and delete, and does not support transactions (ACID support for these only arrived in later releases); instead it adds extensions aimed at higher performance in the Hadoop setting;
  6. Without row-level insert, update, and delete, the only ways to load data into a table are bulk load operations or simply writing files into the correct directory;
  7. Hive queries can use regular expressions to select columns;
    e.g. hive> select `price.*` from table_name; -- selects all columns whose names start with "price"
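A few illustrative commands for points 3, 4, and 7 (the table name is hypothetical; on Hive 0.13 and later the regex-column syntax also requires hive.support.quoted.identifiers to be set to none):

# 3. Run a bash command from inside the Hive CLI
hive> !pwd;

# 4. Run a Hadoop dfs command (drop the leading "hadoop")
hive> dfs -ls /user/hive/warehouse;

# 7. Select every column whose name starts with "price" (note the backquotes)
hive> set hive.support.quoted.identifiers=none;
hive> select `price.*` from table_name;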
