Environment:
The hive-1.1.0-cdh5.7.0 package is placed under /root.
CDH version: cdh5.7.0
Goal:
Register a custom UDF named sayhello in the Hive source code, then recompile Hive.
1. Write the UDF
(1) Use IDEA + Maven and add the relevant configuration to pom.xml.
The important dependencies are hadoop-common, hive-exec, and hive-jdbc.
Below is my pom.xml; adjust the project coordinates at the top for your own project, the rest can be pasted in as-is.
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>bigData</groupId>
    <artifactId>hive_train</artifactId>
    <version>1.0</version>
    <packaging>jar</packaging>
    <name>hive_train</name>
    <url>http://maven.apache.org</url>
    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <hadoop.version>2.6.0-cdh5.7.0</hadoop.version>
        <hive.version>1.1.0-cdh5.7.0</hive.version>
    </properties>
    <repositories>
        <repository>
            <id>cloudera</id>
            <url>https://repository.cloudera.com/artifactory/cloudera-repos</url>
        </repository>
    </repositories>
    <pluginRepositories>
        <pluginRepository>
            <id>jeesite-repos</id>
            <name>Jeesite Repository</name>
            <url>http://maven.aliyun.com/nexus/content/groups/public</url>
        </pluginRepository>
    </pluginRepositories>
    <dependencies>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-common</artifactId>
            <version>${hadoop.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hive</groupId>
            <artifactId>hive-exec</artifactId>
            <version>${hive.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hive</groupId>
            <artifactId>hive-jdbc</artifactId>
            <version>${hive.version}</version>
        </dependency>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.10</version>
            <scope>test</scope>
        </dependency>
    </dependencies>
</project>
(2) In Maven's lifecycle panel, run clean and then package so the dependencies are downloaded; on a slow connection this can take a while.
If the downloads have not finished, the build will fail. (A command-line equivalent is sketched below.)
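If you prefer the command line over IDEA's Maven panel, the same can be done from the project root; a minimal sketch using standard Maven (no project-specific flags assumed):
cd hive_train          # the UDF project from step (1)
mvn clean package      # downloads dependencies, then builds the jar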
(3) Create the class and implement a UDF named sayhello.java:
package com.wxkdata.bigdata.hello;
import org.apache.hadoop.hive.ql.exec.Description;
import org.apache.hadoop.hive.ql.exec.UDF;
import org.apache.hadoop.io.Text;
/**
 * A simple UDF: sayhello
 */
@Description(name = "sayhello",
value = "_FUNC_(input_str) - returns Hello : input_str ",
extended = "Example:\n "
+ " > SELECT _FUNC_('wxk') FROM src LIMIT 1;\n"
+ " 'Hello : wxk'\n")
public class sayhello extends UDF {
    public Text evaluate(Text input) {
        return new Text("Hello: " + input);
    }
}
2. Download the source
hive-1.1.0-cdh5.7.0-src.tar.gz
http://archive.cloudera.com/cdh5/cdh/5/hive-1.1.0-cdh5.7.0-src.tar.gz
After extracting, put it under /root for convenience, as sketched below.
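For reference, a download-and-extract sketch (assumes wget is available and you are working as root; the URL is the one above):
cd /root
wget http://archive.cloudera.com/cdh5/cdh/5/hive-1.1.0-cdh5.7.0-src.tar.gz
tar -zxvf hive-1.1.0-cdh5.7.0-src.tar.gz   # extracts to /root/hive-1.1.0-cdh5.7.0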
3. Modify the source
(1) Add sayhello.java
Put sayhello.java into the /root/hive-1.1.0-cdh5.7.0/ql/src/java/org/apache/hadoop/hive/ql/udf directory.
vi sayhello.java
Change "package com.wxkdata.bigdata.hello;" to "package org.apache.hadoop.hive.ql.udf;" (this step can also be scripted, see below).
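The copy and the package-line rewrite can also be scripted; a sketch, where the source path of sayhello.java is a placeholder for wherever your IDEA project keeps it:
cp /path/to/your/project/sayhello.java /root/hive-1.1.0-cdh5.7.0/ql/src/java/org/apache/hadoop/hive/ql/udf/
cd /root/hive-1.1.0-cdh5.7.0/ql/src/java/org/apache/hadoop/hive/ql/udf
# rewrite the package declaration in place
sed -i 's|^package com.wxkdata.bigdata.hello;|package org.apache.hadoop.hive.ql.udf;|' sayhello.java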
(2) Modify FunctionRegistry.java
vi /root/hive-1.1.0-cdh5.7.0/ql/src/java/org/apache/hadoop/hive/ql/exec/FunctionRegistry.java
Below the long block of imports at the top of the file, add an import for our UDF:
import org.apache.hadoop.hive.ql.udf.sayhello;
Then, in the static block near the top of the file, add: system.registerUDF("sayhello", sayhello.class, false);
As follows:
static {
    system.registerGenericUDF("concat", GenericUDFConcat.class);
    system.registerUDF("sayhello", sayhello.class, false);
    system.registerUDF("substr", UDFSubstr.class, false);
    ...
4. Recompile Hive
Run the Maven build from the source root (a typical invocation is sketched below); when it finishes, check that every module in the build summary reports SUCCESS.
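A build invocation commonly used for the Hive 1.x source tree; -Phadoop-2 and -Pdist are the standard Hive 1.x profiles, but verify them against the pom.xml of your checkout:
cd /root/hive-1.1.0-cdh5.7.0
mvn clean package -DskipTests -Phadoop-2 -Pdist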
The recompiled package is produced by default at:
/root/hive-1.1.0-cdh5.7.0/packaging/target/apache-hive-1.1.0-cdh5.7.0-bin.tar.gz
5. Either redeploy from scratch, or just drop the recompiled hive-exec-1.1.0-cdh5.7.0.jar into the existing Hive deployment. Both approaches work.
5.1 Find the jar in the build output and replace the original one.
[root@hadoop002 lib]# pwd
/root/hive-1.1.0-cdh5.7.0/packaging/target/apache-hive-1.1.0-cdh5.7.0-bin/apache-hive-1.1.0-cdh5.7.0-bin/lib
[root@hadoop002 lib]# ll hive-exec-1.1.0-cdh5.7.0.jar
-rw-r--r--. 1 root root 19276386 Sep 5 19:06 hive-exec-1.1.0-cdh5.7.0.jar
Rename the original jar (change its suffix) so it is kept as a backup:
[root@hadoop002 lib]# mv hive-exec-1.1.0-cdh5.7.0.jar hive-exec-1.1.0-cdh5.7.0.jar.bak
Copy the new jar into the original Hive deployment:
[root@hadoop002 lib]# cp hive-exec-1.1.0-cdh5.7.0.jar /opt/software/hive-1.1.0-cdh5.7.0/lib/
Check:
[root@hadoop002 lib]# ll hive-exec-1.1.0-cdh5.7.0.*
-rw-r--r-- 1 root root 19276386 Sep 24 19:45 hive-exec-1.1.0-cdh5.7.0.jar
-rw-r--r-- 1 root root 19272159 Mar 24 2016 hive-exec-1.1.0-cdh5.7.0.jar.bak
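Optionally, confirm the replacement jar really contains the new class (assumes unzip is installed; jar tf works too):
unzip -l /opt/software/hive-1.1.0-cdh5.7.0/lib/hive-exec-1.1.0-cdh5.7.0.jar | grep -i sayhello
# should list org/apache/hadoop/hive/ql/udf/sayhello.class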
5.2 Re-extract and redeploy: unpack the freshly built tarball (sketched below), then configure it.
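A minimal extraction sketch, assuming the deployment directory /opt/software used elsewhere in this post:
tar -zxvf /root/hive-1.1.0-cdh5.7.0/packaging/target/apache-hive-1.1.0-cdh5.7.0-bin.tar.gz -C /opt/software/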
(1) Configure $HIVE_HOME/conf/hive-env.sh
Add HADOOP_HOME:
HADOOP_HOME=/opt/software/hadoop-2.6.0-cdh5.7.0
(2) Configure $HIVE_HOME/conf/hive-site.xml. The last few properties below are optional, not required.
[root@hadoop002 conf]# cat hive-site.xml
<configuration>
    <property>
        <name>javax.jdo.option.ConnectionURL</name>
        <value>jdbc:mysql://localhost:3306/hive3?createDatabaseIfNotExist=true</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionDriverName</name>
        <value>com.mysql.jdbc.Driver</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionUserName</name>
        <value>root</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionPassword</name>
        <value>password</value>
    </property>
    <property>
        <name>hive.support.concurrency</name>
        <value>true</value>
    </property>
    <property>
        <name>hive.enforce.bucketing</name>
        <value>true</value>
    </property>
    <property>
        <name>hive.exec.dynamic.partition.mode</name>
        <value>nonstrict</value>
    </property>
    <property>
        <name>hive.txn.manager</name>
        <value>org.apache.hadoop.hive.ql.lockmgr.DbTxnManager</value>
    </property>
    <property>
        <name>hive.compactor.initiator.on</name>
        <value>true</value>
    </property>
    <property>
        <name>hive.in.test</name>
        <value>true</value>
    </property>
    <property>
        <name>hive.cli.print.current.db</name>
        <value>true</value>
    </property>
</configuration>
(3) Copy the MySQL JDBC driver into $HIVE_HOME/lib, because the compiled package does not include it by default:
cp mysql-connector-java-5.1.27-bin.jar $HIVE_HOME/lib
6. Test:
hive (default)> show functions ;
sayhello appears in the list.
hive (default)> desc function extended sayhello;
OK
sayhello(input_str) - returns Hello : input_str
Example:
> SELECT sayhello('wxk') FROM src LIMIT 1;
'Hello : wxk'
Time taken: 0.024 seconds, Fetched: 5 row(s)
hive (default)> select ename , sayhello(ename) from emp;
OK
SMITH Hello: SMITH
ALLEN Hello: ALLEN
WARD Hello: WARD
JONES Hello: JONES
MARTIN Hello: MARTIN
BLAKE Hello: BLAKE
CLARK Hello: CLARK
SCOTT Hello: SCOTT
KING Hello: KING
TURNER Hello: TURNER
ADAMS Hello: ADAMS
JAMES Hello: JAMES
FORD Hello: FORD
MILLER Hello: MILLER
HIVE Hello: HIVE
The result is correct. Our UDF is effectively registered inside Hive itself, behaving just like one of Hive's built-in functions.