Download the Impala JDBC driver
https://downloads.cloudera.com/connectors/impala_jdbc_2.5.41.1061.zip
Place the two files TCLIServiceClient.jar and ImpalaJDBC4.jar in Hive's local directory; only the HiveServer2 node needs to be configured.
Connecting
beeline -d "com.cloudera.impala.jdbc41.Driver" -u "jdbc:impala://note01:21050"
beeline -d "com.cloudera.impala.jdbc41.Driver" -u "jdbc:impala://note01:21050" --isolation=default
For a Kerberos-secured Impala, make sure local Kerberos authentication has already been initialized (e.g. with kinit):
beeline -d "com.cloudera.impala.jdbc41.Driver" -u "jdbc:impala://note01:21050/;AuthMech=1;KrbServiceName=impala;KrbRealm=ZYD.COM;KrbHostFQDN=note01;" --isolation=default
Parameter descriptions:
AuthMech: the authentication mechanism; 1 selects Kerberos authentication.
KrbServiceName: the Kerberos service principal name of the Impala server.
KrbHostFQDN: the fully qualified domain name (FQDN) of the host running the Impala service being connected to.
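As a sketch, the parameters above can be assembled into the same Kerberos JDBC URL used in the beeline command. The helper class and method names below are hypothetical, not part of the driver:

```java
public class ImpalaKerberosUrl {
    // Hypothetical helper: builds a Kerberos-enabled Impala JDBC URL
    // from the parameters described above.
    public static String buildUrl(String host, int port, String realm, String fqdn) {
        return "jdbc:impala://" + host + ":" + port
                + "/;AuthMech=1"            // 1 = Kerberos authentication
                + ";KrbServiceName=impala"  // Impala's Kerberos service principal name
                + ";KrbRealm=" + realm
                + ";KrbHostFQDN=" + fqdn;
    }

    public static void main(String[] args) {
        System.out.println(buildUrl("note01", 21050, "ZYD.COM", "note01"));
    }
}
```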
With the impala-jdbc driver, the Kerberos authentication code block and the Connection instantiation code block are tightly coupled. Besides running them in order (Kerberos authentication first, Connection instantiation second), the Connection must be created inside the doAs method, as shown below:
// Kerberos authentication code block
// 1. login using the keytab
System.setProperty("java.security.krb5.realm", "XXX.COM");
System.setProperty("java.security.krb5.kdc", "kdcXXX");
Configuration conf = new Configuration();
conf.set("hadoop.security.authentication", "Kerberos");
UserGroupInformation.setConfiguration(conf);
UserGroupInformation ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI("test", "test.keytab");

// create the Connection inside UserGroupInformation's doAs method
// 2. create the Impala JDBC connection
Class.forName(JDBCDriverName);
conn = (Connection) ugi.doAs(new PrivilegedExceptionAction<Object>() {
    public Object run() {
        Connection tcon = null;
        try {
            tcon = DriverManager.getConnection(connectionUrl);
        } catch (SQLException e) {
            e.printStackTrace();
        }
        return tcon;
    }
});
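The doAs mechanics can be seen in isolation with the JDK's own javax.security.auth.Subject, used here only as a stand-in for Hadoop's UserGroupInformation: whatever runs inside run() executes under the given identity, which is why the Connection must be created there. This is an illustrative sketch, not Impala code:

```java
import java.security.PrivilegedExceptionAction;
import javax.security.auth.Subject;

public class DoAsSketch {
    static String createConnectionUnderIdentity() throws Exception {
        // stand-in for the Kerberos-authenticated identity (the UGI in the real code)
        Subject subject = new Subject();
        // work placed inside run() executes under that identity;
        // in the real code, DriverManager.getConnection(url) goes here
        return Subject.doAs(subject, new PrivilegedExceptionAction<String>() {
            public String run() {
                return "connection-created-inside-doAs";
            }
        });
    }

    public static void main(String[] args) throws Exception {
        System.out.println(createConnectionUnderIdentity());
    }
}
```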
With the hive-jdbc driver, the Kerberos authentication code block and the Connection instantiation code block are only loosely coupled; it is enough to run them in order (Kerberos authentication first, Connection instantiation second), as shown below:
// first run the Kerberos authentication code block
// 1. login using the keytab
System.setProperty("java.security.krb5.realm", "XXX.COM");
System.setProperty("java.security.krb5.kdc", "kdcXXX");
Configuration conf = new Configuration();
conf.set("hadoop.security.authentication", "Kerberos");
UserGroupInformation.setConfiguration(conf);
UserGroupInformation ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI("test", "test.keytab");

// then run the Connection instantiation code block
try {
    Class.forName(driverName);
    Connection conn = DriverManager.getConnection(url);
    Statement stmt = conn.createStatement();
    String sql = "show databases";
    ResultSet rs = stmt.executeQuery(sql);
    while (rs.next()) {
        System.out.println(rs.getString(1));
    }
} catch (Exception e) {
    e.printStackTrace();
}
Impala connection pool
Taking the Druid connection pool as an example: create a class that extends DruidDataSource, override the getConnection-related methods, and embed the Kerberos authentication code block in them. That way authentication succeeds and a Connection is returned.
public class DruidDataSourceWrapper extends DruidDataSource {
    // expose the parent class's getConnection(long) method under a new name
    public DruidPooledConnection superGetConnection(long maxWaitMillis) throws SQLException {
        return super.getConnection(maxWaitMillis);
    }

    /**
     * Override the parent's getConnection(long) method, wrapping the
     * Kerberos authentication code block around it.
     */
    @Override
    public DruidPooledConnection getConnection(final long maxWaitMillis) throws SQLException {
        // Kerberos authentication code block
        // 1. login using the keytab
        System.setProperty("java.security.krb5.realm", "XXX.COM");
        System.setProperty("java.security.krb5.kdc", "kdcXXX");
        Configuration conf = new Configuration();
        conf.set("hadoop.security.authentication", "Kerberos");
        UserGroupInformation.setConfiguration(conf);
        final UserGroupInformation ugi;
        try {
            ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI("test", "test.keytab");
        } catch (IOException e) {
            throw new SQLException("Kerberos login failed", e);
        }
        // create the connection inside UserGroupInformation's doAs method
        final DruidDataSourceWrapper _this = this;
        try {
            return ugi.doAs(new PrivilegedExceptionAction<DruidPooledConnection>() {
                public DruidPooledConnection run() throws SQLException {
                    // call the parent's getConnection(long) method
                    return _this.superGetConnection(maxWaitMillis);
                }
            });
        } catch (IOException | InterruptedException e) {
            throw new SQLException("doAs failed", e);
        }
    }
}
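Why the extra superGetConnection method? Inside the anonymous PrivilegedExceptionAction, a plain super call cannot reach the parent pool's implementation, so the wrapper exposes the parent method under its own name and calls it through a captured reference. A minimal stand-alone sketch of this delegation pattern (BasePool and WrappedPool are hypothetical stand-ins, not Druid classes):

```java
public class SuperDelegationSketch {
    static class BasePool {
        public String getConnection(long maxWaitMillis) {
            return "raw-conn";
        }
    }

    static class WrappedPool extends BasePool {
        // expose the parent implementation under a new name so the
        // inner class below can invoke it through a captured reference
        public String superGetConnection(long maxWaitMillis) {
            return super.getConnection(maxWaitMillis);
        }

        @Override
        public String getConnection(final long maxWaitMillis) {
            final WrappedPool _this = this;
            // in the real wrapper this call sits inside ugi.doAs(...)
            return "authenticated:" + _this.superGetConnection(maxWaitMillis);
        }
    }
}
```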
Hive connection pool
The hive-jdbc case is simpler: just keep the two code blocks in order, i.e. make sure the Kerberos authentication code block runs before the connection pool is instantiated.
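One simple way to guarantee that ordering is to perform the Kerberos login in a static initializer of the class that builds the pool, since static initializers run in textual order before later static fields. In this sketch, kerberosLogin and createPool are hypothetical placeholders for the UGI keytab login and the pool setup:

```java
import java.util.ArrayList;
import java.util.List;

public class HivePoolHolder {
    static final List<String> events = new ArrayList<>();

    static {
        // runs exactly once, before the pool field below is initialized
        kerberosLogin();
    }

    // initialized after the static block above, so login always comes first
    static final String pool = createPool();

    static void kerberosLogin() {
        // stand-in for UserGroupInformation.loginUserFromKeytabAndReturnUGI(...)
        events.add("kerberos-login");
    }

    static String createPool() {
        // stand-in for building the hive-jdbc connection pool
        events.add("pool-created");
        return "pool";
    }
}
```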
Reference:
https://segmentfault.com/a/1190000019658767