Learning Hive

1 Hive Metastore

1.1 Basic Concepts


The Hive Metastore can be configured in three ways:


Embedded Metastore Database (Derby): embedded mode

Local Metastore Server: local metastore

Remote Metastore Server: remote metastore

1.2 Metadata and the Role of the Metastore


Metadata is the meta-information about the databases, tables, and other objects created in Hive.

Metadata is stored in a relational database, such as Derby or MySQL.

The metastore service works as follows: clients connect to the metastore service, and the metastore in turn connects to the MySQL database to read and write metadata. With a metastore service, multiple clients can connect concurrently, and these clients do not need to know the MySQL username and password; they only need to connect to the metastore service.


1.3 Differences Between the Three Configuration Modes


Embedded mode uses the embedded Derby database to store metadata and does not require a separate metastore service. It is the default and is simple to configure, but only one client can connect at a time, so it is suitable for experimentation, not for production.


Local and remote metastores both use an external database to store metadata. The currently supported databases are MySQL, Postgres, Oracle, and MS SQL Server. Here we use MySQL.


The difference between a local and a remote metastore: a local metastore does not require a standalone metastore service, because the metastore runs in the same process as Hive. A remote metastore requires a standalone metastore service, and each client's configuration file points at that service. The remote metastore service and Hive run in different processes.


In production, a remote metastore is the recommended configuration for the Hive Metastore.

1.4 Configuration Files

The relevant settings go in hivemetastore-site.xml or hive-site.xml:


hive.metastore.uris: Hive connects to one of these URIs to make metadata requests to a remote metastore (comma-separated list of URIs).

javax.jdo.option.ConnectionURL: JDBC connection string for the data store which contains metadata.

javax.jdo.option.ConnectionDriverName: JDBC driver class name for the data store which contains metadata.

hive.metastore.local: local or remote metastore (removed as of Hive 0.10: if hive.metastore.uris is empty, local mode is assumed; otherwise remote).

hive.metastore.warehouse.dir: URI of the default location for native tables.

javax.jdo.option.ConnectionUserName: username used to connect to the metadata database.

javax.jdo.option.ConnectionPassword: password used to connect to the metadata database.
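To make the split between client and server settings concrete, here is a minimal sketch of a remote-metastore configuration. The host names, database name, and credentials (metastore-host, mysql-host, hive, hiveuser, hivepass) are illustrative placeholders, not values from this article; 9083 is the default metastore port.

```xml
<!-- Client side (hive-site.xml): clients only need the metastore URI,
     never the MySQL credentials. -->
<property>
  <name>hive.metastore.uris</name>
  <value>thrift://metastore-host:9083</value>
</property>

<!-- Metastore server side (hivemetastore-site.xml): the JDBC settings
     for the backing MySQL database. -->
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://mysql-host:3306/hive</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hiveuser</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>hivepass</value>
</property>
```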


Data Nucleus Auto Start

Configuring datanucleus.autoStartMechanism is highly recommended; see HIVE-4762 for more details.

  <property>
    <name>datanucleus.autoStartMechanism</name>
    <value>SchemaTable</value>
  </property>

 

2 Hive Data Storage

Hive's structure is much like that of a conventional database: it holds multiple databases, and each database holds multiple tables.

The data storage location is configured by:

 

  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/user/hive/warehouse</value>
    <description>location of default database for the warehouse</description>
  </property>

  

Hive stores its data under /user/hive/warehouse. This is an HDFS path; the HDFS settings themselves are configured in core-site.xml and hdfs-site.xml under $HADOOP_HOME/etc/hadoop.

core-site.xml:

  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:/home/leesf/program/hadoop/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>

hdfs-site.xml:

  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/home/leesf/program/hadoop/tmp/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/home/leesf/program/hadoop/tmp/dfs/data</value>
  </property>
  <property>
    <name>dfs.http.address</name>
    <value>192.168.65.128:50070</value>
  </property>
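The warehouse directory above determines where a managed table's data lands in HDFS. A small sketch of the layout rule (the database and table names used here are hypothetical): tables in the default database live directly under the warehouse directory, while every other database gets a "db-name.db" subdirectory.

```java
// Sketch of how Hive derives a managed table's HDFS location
// from hive.metastore.warehouse.dir.
public class WarehousePath {
    static String tableLocation(String warehouseDir, String db, String table) {
        if (db.equals("default")) {
            // Tables in the default database sit directly under the warehouse dir.
            return warehouseDir + "/" + table;
        }
        // Other databases get a "<db>.db" subdirectory.
        return warehouseDir + "/" + db + ".db/" + table;
    }

    public static void main(String[] args) {
        System.out.println(tableLocation("/user/hive/warehouse", "default", "t1"));
        // -> /user/hive/warehouse/t1
        System.out.println(tableLocation("/user/hive/warehouse", "test", "t1"));
        // -> /user/hive/warehouse/test.db/t1
    }
}
```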

3 Hive Operations

3.1 List all databases: hive> show databases;

3.2 Use the default database: hive> use default;

3.3 Show information about a database: hive> describe database default;

3.4 Display the current database in the prompt: hive> set hive.cli.print.current.db=true;

3.5 Print column headers in query results: hive> set hive.cli.print.header=true;

3.6 Create a database: hive> create database test;

3.7 Switch to the new database: hive> use test;



4 HiveServer2 and Beeline

There are two ways to connect to Hive:

the Hive CLI, or HiveServer2 together with Beeline:

$ $HIVE_HOME/bin/hive

or

 $ $HIVE_HOME/bin/hiveserver2


 $ $HIVE_HOME/bin/beeline -u jdbc:hive2://$HS2_HOST:$HS2_PORT

The Hive CLI is now deprecated in favor of Beeline, as it lacks the multi-user support, security features, and other capabilities of HiveServer2.

If Beeline fails to connect with an error such as:

Error: Could not open client transport with JDBC Uri: jdbc:hive2://localhost:10000/default: Failed to open new session: java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): User: root is not allowed to impersonate anonymous (state=08S01,code=0)


then add the following to Hadoop's core-site.xml and restart the Hadoop services:

  <property>
    <name>hadoop.proxyuser.root.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.root.groups</name>
    <value>*</value>
  </property>


The root in the property names above is the username you connect with through Beeline; replace it with your own user.

Connection URL for Remote or Embedded Mode

The JDBC connection URL format has the prefix jdbc:hive2:// and the Driver class is org.apache.hive.jdbc.HiveDriver. Note that this is different from the old HiveServer.

For a remote server, the URL format is jdbc:hive2://<host>:<port>/<db>;initFile=<file> (the default port for HiveServer2 is 10000).

For an embedded server, the URL format is jdbc:hive2:///;initFile=<file> (no host or port).
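Putting the two URL formats together, here is a minimal sketch in Java. The host, port, and database name in main are illustrative defaults, not values mandated by HiveServer2; opening an actual connection additionally requires the Hive JDBC driver jar on the classpath.

```java
// Sketch: constructing HiveServer2 JDBC URLs for remote and embedded mode.
public class Hive2Url {
    static String remoteUrl(String host, int port, String db) {
        return "jdbc:hive2://" + host + ":" + port + "/" + db;
    }

    static String embeddedUrl(String db) {
        return "jdbc:hive2:///" + db; // no host or port
    }

    public static void main(String[] args) {
        System.out.println(remoteUrl("localhost", 10000, "default"));
        // With org.apache.hive.jdbc.HiveDriver on the classpath, a connection
        // would then be opened with something like:
        //   Connection conn = DriverManager.getConnection(
        //       remoteUrl("localhost", 10000, "default"), user, password);
    }
}
```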

4.2 Configuring a Username and Password for HiveServer2

In hive-site.xml, configure:

 

  <property>
    <name>hive.server2.authentication</name>
    <value>NONE</value>
    <description>
      Expects one of [nosasl, none, ldap, kerberos, pam, custom].
      Client authentication types.
        NONE: no authentication check
        LDAP: LDAP/AD based authentication
        KERBEROS: Kerberos/GSSAPI authentication
        CUSTOM: Custom authentication provider
                (Use with property hive.server2.custom.authentication.class)
        PAM: Pluggable authentication module
        NOSASL: Raw transport
    </description>
  </property>

Set the value to CUSTOM, then write your own Java authentication class, package it into a jar, place it under Hive's lib directory, and configure:

  <property>
    <name>hive.server2.custom.authentication.class</name>
    <value>test.SampleAuthenticator</value>
    <description>
      Custom authentication class. Used when property
      'hive.server2.authentication' is set to 'CUSTOM'. Provided class
      must be a proper implementation of the interface
      org.apache.hive.service.auth.PasswdAuthenticationProvider. HiveServer2
      will call its Authenticate(user, passed) method to authenticate requests.
      The implementation may optionally implement Hadoop's
      org.apache.hadoop.conf.Configurable class to grab Hive's Configuration object.
    </description>
  </property>

The Java authentication class is shown below:

package test;

import java.util.Hashtable;
import javax.security.sasl.AuthenticationException;
import org.apache.hive.service.auth.PasswdAuthenticationProvider;

/*
 Build and install:
 javac -cp $HIVE_HOME/lib/hive-service-0.12.0-cdh5.0.0-beta-2.jar SampleAuthenticator.java -d .
 jar cf sampleauth.jar test
 cp sampleauth.jar $HIVE_HOME/lib/.
*/

public class SampleAuthenticator implements PasswdAuthenticationProvider {

  // In-memory user/password store; a real implementation would consult
  // an external credential source.
  private final Hashtable<String, String> store;

  public SampleAuthenticator() {
    store = new Hashtable<>();
    store.put("user1", "passwd1");
    store.put("user2", "passwd2");
  }

  @Override
  public void Authenticate(String user, String password)
      throws AuthenticationException {
    String storedPasswd = store.get(user);
    if (storedPasswd != null && storedPasswd.equals(password)) {
      return;
    }
    throw new AuthenticationException("SampleAuthenticator: Error validating user");
  }
}
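The credential check above can be exercised on its own. This is a standalone sketch of the same logic, without the PasswdAuthenticationProvider interface (which would require the hive-service jar on the classpath); the user/password pairs are the illustrative ones from the class above.

```java
import java.util.Hashtable;

// Standalone demo of the lookup-and-compare check SampleAuthenticator performs.
public class AuthCheckDemo {
    static final Hashtable<String, String> STORE = new Hashtable<>();
    static {
        STORE.put("user1", "passwd1");
        STORE.put("user2", "passwd2");
    }

    static boolean authenticate(String user, String password) {
        String stored = STORE.get(user);
        return stored != null && stored.equals(password);
    }

    public static void main(String[] args) {
        System.out.println(authenticate("user1", "passwd1")); // true
        System.out.println(authenticate("user1", "wrong"));   // false
    }
}
```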
