Hive Interactive Mode

Hive Interactive Shell Commands

When $HIVE_HOME/bin/hive is run without either the -e or -f option, it enters interactive shell mode.

Use ";" (semicolon) to terminate commands. Comments in scripts can be specified using the "--" prefix.
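For example, a two-line script fragment showing both conventions (the table name is hypothetical):

  -- count the rows in a sample table
  select count(*) from my_table;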

quit
exit
    Use quit or exit to leave the interactive shell.

reset
    Resets the configuration to the default values (as of Hive 0.10: see HIVE-3202).

set <key>=<value>
    Sets the value of a particular configuration variable (key).
    Note: If you misspell the variable name, the CLI will not show an error.

set
    Prints a list of configuration variables that are overridden by the user or Hive.

set -v
    Prints all Hadoop and Hive configuration variables.

add FILE[S] <filepath> <filepath>*
add JAR[S] <filepath> <filepath>*
add ARCHIVE[S] <filepath> <filepath>*
    Adds one or more files, jars, or archives to the list of resources in the distributed cache. See Hive Resources below for more information.

list FILE[S]
list JAR[S]
list ARCHIVE[S]
    Lists the resources already added to the distributed cache. See Hive Resources below for more information.

list FILE[S] <filepath>*
list JAR[S] <filepath>*
list ARCHIVE[S] <filepath>*
    Checks whether the given resources are already added to the distributed cache. See Hive Resources below for more information.

delete FILE[S] <filepath>*
delete JAR[S] <filepath>*
delete ARCHIVE[S] <filepath>*
    Removes the resource(s) from the distributed cache.

! <command>
    Executes a shell command from the Hive shell.

dfs <dfs command>
    Executes a dfs command from the Hive shell.

<query string>
    Executes a Hive query and prints results to standard output.

source FILE <filepath>
    Executes a script file inside the CLI.

Sample Usage:

  hive> set mapred.reduce.tasks=32;
  hive> set;
  hive> select a.* from tab1 a;
  hive> !ls;
  hive> dfs -ls;
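The source command is not shown above; a minimal sketch, assuming a hypothetical script file /tmp/setup.hql:

  hive> source /tmp/setup.hql;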

Logging

Hive uses log4j for logging. These logs are not emitted to the standard output by default but are instead captured to a log file specified by Hive's log4j properties file. By default, Hive uses hive-log4j.default in the conf/ directory of the Hive installation, which writes logs to /tmp/<userid>/hive.log and uses the WARN level.

It is often desirable to emit the logs to the standard output and/or change the logging level for debugging purposes. These can be done from the command line as follows:

 $HIVE_HOME/bin/hive --hiveconf hive.root.logger=INFO,console

hive.root.logger specifies the logging level as well as the log destination. Specifying console as the target sends the logs to the standard error (instead of the log file).
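The same switch can be used to raise the verbosity further; for example, the following (note that DEBUG output is voluminous) sends DEBUG-level logs to the console:

 $HIVE_HOME/bin/hive --hiveconf hive.root.logger=DEBUG,console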

Hive Resources

Hive can manage the addition of resources to a session where those resources need to be made available at query execution time. The resources can be files, jars, or archives. Any locally accessible file can be added to the session.

Once a resource is added to a session, Hive queries can refer to it by its name (in map/reduce/transform clauses) and the resource is available locally at execution time on the entire Hadoop cluster. Hive uses Hadoop's Distributed Cache to distribute the added resources to all the machines in the cluster at query execution time.

Usage:

   ADD { FILE[S] | JAR[S] | ARCHIVE[S] } <filepath1> [<filepath2>]*
   LIST { FILE[S] | JAR[S] | ARCHIVE[S] } [<filepath1> <filepath2> ..]
   DELETE { FILE[S] | JAR[S] | ARCHIVE[S] } [<filepath1> <filepath2> ..] 
  • FILE resources are just added to the distributed cache. Typically, this might be something like a transform script to be executed.
  • JAR resources are also added to the Java classpath. This is required in order to reference objects they contain, such as UDFs; see the sketch after this list. See Hive Plugins for more information about custom UDFs.
  • ARCHIVE resources are automatically unarchived as part of distributing them.
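
As a sketch of the JAR case, a custom UDF might be registered and used as follows; the jar path, class name, and table are hypothetical:

  hive> add JAR /tmp/my_udfs.jar;
  hive> create temporary function my_lower as 'com.example.hive.udf.MyLower';
  hive> select my_lower(title) from titles limit 10;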

Example:

  hive> add FILE /tmp/tt.py;
  hive> list FILES;
  /tmp/tt.py
  hive> select transform(a.networkid)
               using 'python tt.py' as nn
        from networks a where a.ds = '2009-01-04' limit 10;

It is not necessary to add files to the session if the files used in a transform script are already available on all machines in the Hadoop cluster under the same path name. For example:

  • ... MAP a.networkid USING 'wc -l' ... 
    Here wc is an executable available on all machines.
  • ... MAP a.networkid USING '/home/nfsserv1/hadoopscripts/tt.py' ... 
    Here tt.py may be accessible via an NFS mount point that's configured identically on all the cluster nodes.

Note that Hive configuration parameters can also specify jars, files, and archives. See Configuration Variables for more information.
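For instance, hive.aux.jars.path can point a session at extra jars. A sketch, assuming a hypothetical local path (some deployments set this in hive-site.xml or hive-env.sh instead):

 $HIVE_HOME/bin/hive --hiveconf hive.aux.jars.path=file:///opt/hive/auxlib/my_udfs.jar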

Beeline CLI for HiveServer2

HiveServer2 (introduced in Hive 0.11) has a new CLI called Beeline, which is a JDBC client based on SQLLine. See Beeline – New Command Line Shell in the HiveServer2 documentation.
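
A typical Beeline connection to a HiveServer2 instance on the default port 10000 (the user name here is hypothetical):

 $HIVE_HOME/bin/beeline -u jdbc:hive2://localhost:10000 -n hiveuser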

HCatalog CLI

Version: HCatalog is installed with Hive, starting with Hive release 0.11.0.

Many (but not all) hcat commands can be issued as hive commands, and vice versa. See the HCatalog Command Line Interface document in the HCatalog manual for more information.
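
As a sketch of this interchangeability, the same DDL could be issued through either CLI (the table is hypothetical):

  hcat -e "create table web_logs (ip string, ts string);"
  hive -e "describe web_logs;"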
