Development environment: JDK 1.7 + IntelliJ IDEA 2016 + hive-1.1.0
Production environment where the UDF runs: CDH 5.8.0 + hive-1.1.0
1. Add all of the Hive-related jars to the project,
or pull in the CDH artifacts through Maven instead (pom.xml):
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.enn</groupId>
    <artifactId>hive-udf</artifactId>
    <version>1.0-SNAPSHOT</version>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
        <java.version>1.7</java.version>
        <cdh.version>1.1.0-cdh5.8.0</cdh.version>
        <maven.compiler.source>1.7</maven.compiler.source>
        <maven.compiler.target>1.7</maven.compiler.target>
        <encoding>UTF-8</encoding>
    </properties>

    <repositories>
        <repository>
            <id>cloudera</id>
            <url>https://repository.cloudera.com/artifactory/cloudera-repos/</url>
        </repository>
    </repositories>

    <dependencies>
        <dependency><groupId>org.apache.hive</groupId><artifactId>hive-accumulo-handler</artifactId><version>${cdh.version}</version></dependency>
        <dependency><groupId>org.apache.hive</groupId><artifactId>hive-ant</artifactId><version>${cdh.version}</version></dependency>
        <dependency><groupId>org.apache.hive</groupId><artifactId>hive-beeline</artifactId><version>${cdh.version}</version></dependency>
        <dependency><groupId>org.apache.hive</groupId><artifactId>hive-cli</artifactId><version>${cdh.version}</version></dependency>
        <dependency><groupId>org.apache.hive</groupId><artifactId>hive-common</artifactId><version>${cdh.version}</version></dependency>
        <dependency><groupId>org.apache.hive</groupId><artifactId>hive-contrib</artifactId><version>${cdh.version}</version></dependency>
        <dependency><groupId>org.apache.hive</groupId><artifactId>hive-exec</artifactId><version>${cdh.version}</version></dependency>
    </dependencies>

    <build>
        <finalName>hive-udf</finalName>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.1</version>
                <configuration>
                    <source>1.7</source>
                    <target>1.7</target>
                    <encoding>UTF-8</encoding>
                </configuration>
            </plugin>
            <plugin>
                <artifactId>maven-assembly-plugin</artifactId>
                <configuration>
                    <descriptorRefs>
                        <descriptorRef>jar-with-dependencies</descriptorRef>
                    </descriptorRefs>
                </configuration>
                <executions>
                    <execution>
                        <id>make-assembly</id>
                        <phase>package</phase>
                        <goals>
                            <goal>single</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</project>
2. Write the class WeekTransform.java with the following content:
import org.apache.hadoop.hive.ql.exec.UDF;
import org.apache.hadoop.io.Text;

import java.text.SimpleDateFormat;
import java.util.Calendar;
import java.util.Date;

/**
 * A Hive date-conversion UDF: returns the day of the week that a date falls on.
 * User: yangjf
 * Date: 2016/9/1 8:8
 */
public class WeekTransform extends UDF {
    public Integer evaluate(final Text str) {
        if (str == null) {
            return null;
        }
        try {
            Date nowDate = new SimpleDateFormat("yyyy-MM-dd").parse(str.toString());
            Calendar calendar = Calendar.getInstance();
            calendar.setTime(nowDate);
            // Calendar.DAY_OF_WEEK: Sunday = 1, Monday = 2, ..., Saturday = 7
            return calendar.get(Calendar.DAY_OF_WEEK);
        } catch (Exception e) {
            // Unparseable input: return null instead of failing the whole query
            return null;
        }
    }

    // public static void main(String[] args) {
    //     System.out.println(new WeekTransform().evaluate(new Text("2016-08-31")));
    // }
}
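Before packaging, the class can be sanity-checked locally. The WeekTransformTest harness below is only an illustrative sketch (it is not part of the original project); it relies on the Sunday = 1 convention explained later:

import org.apache.hadoop.io.Text;

public class WeekTransformTest {
    public static void main(String[] args) {
        WeekTransform udf = new WeekTransform();
        // 2016-08-31 was a Wednesday, so this prints 4 (Sunday = 1, ..., Saturday = 7)
        System.out.println(udf.evaluate(new Text("2016-08-31")));
        // Unparseable input prints null rather than throwing
        System.out.println(udf.evaluate(new Text("not-a-date")));
    }
}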
3. Compile and package the project in IDEA to produce hiveUdf.jar.
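If you prefer the command line to IDEA, the pom.xml above can build the jar as well; with the assembly plugin configured as shown, a plain

mvn clean package

produces a jar-with-dependencies artifact under target/, which can then be renamed to hiveUdf.jar.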
Method 1: Using the UDF from Xshell
1. Open Xshell and type hive to enter the Hive command line.
2. Upload hiveUdf.jar to the current user's home directory.
3. Put hiveUdf.jar from the Linux directory into the HDFS directory /user/hive/test/:
hdfs dfs -put /user/e_lvbin/test/hiveUdf.jar /user/hive/test/
4. Add the jar to the session:
(1) Option 1 -----> add jar hdfs://localhost:9000/user/hive/test/hiveUdf.jar;
(2) Option 2 -----> add jar hdfs://nameservice/user/hive/test/hiveUdf.jar;
Note: nameservice is the alias defined in core-site.xml for the HDFS entry point, used for NameNode HA.
5. Create the temporary function weektransform (any other name can be used):
create temporary function weektransform as 'WeekTransform';
Note: 'WeekTransform' is the Java class written above; if the class has a package, the fully qualified name must be used.
6. Once created, the function can be seen with:
>show functions;
You will see default.weektransform in the list.
7. Use the function:
>select weektransform('2016-08-31');
It returns 4, meaning Wednesday: Sunday counts as 1, so counting forward Wednesday is 4.
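In day-to-day use the function is applied to a date column rather than a literal. A minimal sketch, assuming a hypothetical table t_orders with a yyyy-MM-dd string column order_date:

>select order_date, weektransform(order_date) as week_day from t_orders limit 10;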
Method 2: Using the custom Hive function from HUE
1. Open HUE and go to the Hive query editor.
2. Click 'session', i.e. the area marked with the red box in the screenshot.
3. Select the hiveUdf.jar uploaded to HDFS, enter a name for the custom function (any name will do), and fill in the fully qualified name of the UDF class inside the jar.
4. Use the weektransform function.
Method 3: Using the UDF in an Oozie workflow
1. Upload hiveUdf.jar to the Oozie directory.
2. Create a workflow and add two Hive actions (one runs the hiveudf.txt script, the other runs ykt_modelToApp_predict.txt).
3. Write the hiveudf.txt script with the following content:
use origin_ennenergy_test;
add jar hdfs://nameservice/user/hive/test/hiveUdf.jar;
4. Write the other script, ykt_modelToApp_predict.txt, with the following content:
use origin_ennenergy_test;
create table t_test_weektransform as select weektransform('2016-08-31');
5. Write the hive-site.xml file (see Note 1).
6. Add hiveudf.txt first, then ykt_modelToApp_predict.txt.
Notes:
(1) A hive-site.xml file must be provided.
(2) Every action needs hiveUdf.jar added.
(3) The txt files must be encoded as UTF-8 without BOM.
(4) The first time the UDF is used, the temporary function has to be created:
create temporary function weektransform as 'WeekTransform';
If the weektransform temporary function already exists for the current user, the create statement does not need to be repeated in the ykt_modelToApp_predict.txt script.
If the cluster or Hive is restarted, you must run
create temporary function weektransform as 'WeekTransform';
again right after
add jar hdfs://nameservice/user/hive/test/hiveUdf.jar;
Reason: the UDF is only valid for the current user!
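Putting these notes together, a single self-contained script that also survives a Hive or cluster restart would look like this (same database, jar, and table names as above):

use origin_ennenergy_test;
add jar hdfs://nameservice/user/hive/test/hiveUdf.jar;
create temporary function weektransform as 'WeekTransform';
create table t_test_weektransform as select weektransform('2016-08-31');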
Encoding format screenshot:
Workflow diagram:
Note 1: hive-site.xml content:
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <property><name>hive.metastore.uris</name><value>thrift://host12.master.cluster.enn.cn:9083,thrift://host13.master.cluster.enn.cn:9083</value></property>
  <property><name>hive.metastore.client.socket.timeout</name><value>300</value></property>
  <property><name>hive.metastore.warehouse.dir</name><value>/user/hive/warehouse</value></property>
  <property><name>hive.warehouse.subdir.inherit.perms</name><value>true</value></property>
  <property><name>hive.conf.restricted.list</name><value>hive.enable.spark.execution.engine</value></property>
  <property><name>hive.auto.convert.join</name><value>true</value></property>
  <property><name>hive.auto.convert.join.noconditionaltask.size</name><value>20971520</value></property>
  <property><name>hive.optimize.bucketmapjoin.sortedmerge</name><value>false</value></property>
  <property><name>hive.smbjoin.cache.rows</name><value>10000</value></property>
  <property><name>mapred.reduce.tasks</name><value>-1</value></property>
  <property><name>hive.exec.reducers.bytes.per.reducer</name><value>67108864</value></property>
  <property><name>hive.exec.copyfile.maxsize</name><value>33554432</value></property>
  <property><name>hive.vectorized.groupby.checkinterval</name><value>4096</value></property>
  <property><name>hive.vectorized.groupby.flush.percent</name><value>0.1</value></property>
  <property><name>hive.compute.query.using.stats</name><value>false</value></property>
  <property><name>hive.vectorized.execution.enabled</name><value>true</value></property>
  <property><name>hive.vectorized.execution.reduce.enabled</name><value>false</value></property>
  <property><name>hive.merge.mapfiles</name><value>true</value></property>
  <property><name>hive.merge.mapredfiles</name><value>false</value></property>
  <property><name>hive.cbo.enable</name><value>false</value></property>
  <property><name>hive.fetch.task.conversion</name><value>minimal</value></property>
  <property><name>hive.fetch.task.conversion.threshold</name><value>268435456</value></property>
  <property><name>hive.limit.pushdown.memory.usage</name><value>0.1</value></property>
  <property><name>hive.merge.sparkfiles</name><value>true</value></property>
  <property><name>hive.merge.smallfiles.avgsize</name><value>16777216</value></property>
  <property><name>hive.merge.size.per.task</name><value>268435456</value></property>
  <property><name>hive.optimize.reducededuplication</name><value>true</value></property>
  <property><name>hive.optimize.reducededuplication.min.reducer</name><value>4</value></property>
  <property><name>hive.map.aggr</name><value>true</value></property>
  <property><name>hive.map.aggr.hash.percentmemory</name><value>0.5</value></property>
  <property><name>hive.optimize.sort.dynamic.partition</name><value>false</value></property>
  <property><name>spark.executor.memory</name><value>4294967296</value></property>
  <property><name>spark.driver.memory</name><value>2147483648</value></property>
  <property><name>spark.executor.cores</name><value>1</value></property>
  <property><name>spark.yarn.driver.memoryOverhead</name><value>512</value></property>
  <property><name>spark.yarn.executor.memoryOverhead</name><value>1024</value></property>
  <property><name>spark.dynamicAllocation.enabled</name><value>true</value></property>
  <property><name>spark.dynamicAllocation.initialExecutors</name><value>1</value></property>
  <property><name>spark.dynamicAllocation.minExecutors</name><value>1</value></property>
  <property><name>spark.dynamicAllocation.maxExecutors</name><value>2147483647</value></property>
  <property><name>hive.metastore.execute.setugi</name><value>true</value></property>
  <property><name>hive.support.concurrency</name><value>true</value></property>
  <property><name>hive.zookeeper.quorum</name><value>host19.slave.cluster.enn.cn,host16.slave.cluster.enn.cn,host14.slave.cluster.enn.cn,host15.slave.cluster.enn.cn,host17.slave.cluster.enn.cn</value></property>
  <property><name>hive.zookeeper.client.port</name><value>2181</value></property>
  <property><name>hive.zookeeper.namespace</name><value>hive_zookeeper_namespace_hive</value></property>
  <property><name>hbase.zookeeper.quorum</name><value>host19.slave.cluster.enn.cn,host16.slave.cluster.enn.cn,host14.slave.cluster.enn.cn,host15.slave.cluster.enn.cn,host17.slave.cluster.enn.cn</value></property>
  <property><name>hbase.zookeeper.property.clientPort</name><value>2181</value></property>
  <property><name>hive.cluster.delegation.token.store.class</name><value>org.apache.hadoop.hive.thrift.MemoryTokenStore</value></property>
  <property><name>hive.server2.enable.doAs</name><value>true</value></property>
  <property><name>hive.server2.use.SSL</name><value>false</value></property>
  <property><name>spark.shuffle.service.enabled</name><value>true</value></property>
  <property><name>hive.cli.print.current.db</name><value>true</value></property>
  <property><name>hive.exec.reducers.max</name><value>60</value></property>
  <property><name>hive.enable.spark.execution.engine</name><value>true</value></property>
</configuration>
Everything above has been tested and works; adjust it to suit your own needs.
If you have any questions, please leave a comment! Corrections for anything lacking are welcome. Thank you!