Using Hudi with Spark
This post walks through operating Hudi tables via the Spark shell and Spark SQL.
Hudi tables can also be operated through the Spark ThriftServer; see the companion article on operating Hudi tables via the Spark ThriftServer.
Building Hudi
We use the following software environment:
- Scala 2.12
- Flink 1.15
- Spark 3.3
- Hudi 0.13.1
When building Hudi, dependency downloads can be very slow, so switch to a mirror in mainland China. Edit the Maven settings.xml file and add the following to the mirrors section:
settings.xml
<mirror>
    <id>alimaven</id>
    <mirrorOf>*,!confluent</mirrorOf>
    <name>aliyun maven</name>
    <url>https://maven.aliyun.com/repository/public</url>
</mirror>
Then check out the 0.13.1 release of the Hudi project and run the following from the project root:
mvn clean package -Dflink1.15 -Dscala2.12 -Dspark3.3 -DskipTests -Pflink-bundle-shade-hive3 -T 4
The Spark Hudi bundle produced by the build is located under hudi/packaging/hudi-spark-bundle/target; copy the hudi-spark3.x-bundle_2.12-0.xx.x.jar from there and keep it for later use.
Environment configuration
You need to disable the yarn.timeline-service.enabled setting of the YARN component and restart YARN after the change.
Alternatively, add spark.hadoop.yarn.timeline-service.enabled=false to spark-defaults.conf. This is the recommended approach, since it avoids modifying the global YARN configuration.
Next, copy the hudi-spark3.x-bundle_2.12-0.xx.x.jar produced by the Hudi build into the ${SPARK_HOME}/jars directory.
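For example, a minimal sketch (the jar file name assumes the Spark 3.3 / Scala 2.12 / Hudi 0.13.1 build from above, and ${SPARK_HOME} is assumed to point at the Spark installation):
# copy the Hudi Spark bundle built earlier into Spark's jars directory
cp hudi/packaging/hudi-spark-bundle/target/hudi-spark3.3-bundle_2.12-0.13.1.jar ${SPARK_HOME}/jars/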
Spark Shell
Start a Hudi-enabled spark-shell with:
./spark-shell \
--master yarn \
--conf 'spark.serializer=org.apache.spark.serializer.KryoSerializer' \
--conf 'spark.sql.catalog.spark_catalog=org.apache.spark.sql.hudi.catalog.HoodieCatalog' \
--conf 'spark.sql.extensions=org.apache.spark.sql.hudi.HoodieSparkSessionExtension'
If you are using Hudi 0.11.x, run this instead:
./spark-shell \
--master yarn \
--conf 'spark.serializer=org.apache.spark.serializer.KryoSerializer'
Before running jobs, it is recommended to import the following:
import org.apache.hudi.QuickstartUtils._
import scala.collection.JavaConversions._
import org.apache.spark.sql.SaveMode._
import org.apache.hudi.DataSourceReadOptions._
import org.apache.hudi.DataSourceWriteOptions._
import org.apache.hudi.config.HoodieWriteConfig._
Insert data
import org.apache.spark.sql._
import org.apache.spark.sql.types._
val fields = Array(
StructField("id", IntegerType, true),
StructField("name", StringType, true),
StructField("price", DoubleType, true),
StructField("ts", LongType, true)
)
val simpleSchema = StructType(fields)
val data = Seq(Row(2, "a2", 200.0, 100L))
val df = spark.createDataFrame(data, simpleSchema)
df.write.format("hudi").
option(PRECOMBINE_FIELD_OPT_KEY, "ts").
option(RECORDKEY_FIELD_OPT_KEY, "id").
option(TABLE_NAME, "hudi_mor_tbl_shell").
option(TABLE_TYPE_OPT_KEY, "MERGE_ON_READ").
mode(Append).
save("hdfs:///hudi/hudi_mor_tbl_shell")
Verify:
val df = spark.
read.
format("hudi").
load("hdfs:///hudi/hudi_mor_tbl_shell")
df.createOrReplaceTempView("hudi_mor_tbl_shell")
spark.sql("select * from hudi_mor_tbl_shell").show()
Regular query
val df = spark.
read.
format("hudi").
load("hdfs:///hudi/hudi_mor_tbl_shell")
df.createOrReplaceTempView("hudi_mor_tbl_shell")
spark.sql("select * from hudi_mor_tbl_shell").show()
Incremental query
First insert or update one more record (see the insert/update data sections), then run:
spark.
read.
format("hudi").
load("hdfs:///hudi/hudi_mor_tbl_shell").
createOrReplaceTempView("hudi_mor_tbl_shell")
val commits = spark.sql("select distinct(_hoodie_commit_time) as commitTime from hudi_mor_tbl_shell order by commitTime desc").map(k => k.getString(0)).take(50)
val beginTime = commits(commits.length - 1)
val idf = spark.read.format("hudi").
option(QUERY_TYPE_OPT_KEY, QUERY_TYPE_INCREMENTAL_OPT_VAL).
option(BEGIN_INSTANTTIME_OPT_KEY, beginTime).
load("hdfs:///hudi/hudi_mor_tbl_shell")
idf.createOrReplaceTempView("hudi_mor_tbl_shell_incremental")
spark.sql("select `_hoodie_commit_time`, id, name, price, ts from hudi_mor_tbl_shell_incremental").show()
Only the data written by the most recent insert/update is returned.
Update data
import org.apache.spark.sql._
import org.apache.spark.sql.types._
val fields = Array(
StructField("id", IntegerType, true),
StructField("name", StringType, true),
StructField("price", DoubleType, true),
StructField("ts", LongType, true)
)
val simpleSchema = StructType(fields)
val data = Seq(Row(2, "a2", 400.0, 2222L))
val df = spark.createDataFrame(data, simpleSchema)
df.write.format("hudi").
option(PRECOMBINE_FIELD_OPT_KEY, "ts").
option(RECORDKEY_FIELD_OPT_KEY, "id").
option(TABLE_NAME, "hudi_mor_tbl_shell").
option(TABLE_TYPE_OPT_KEY, "MERGE_ON_READ").
mode(Append).
save("hdfs:///hudi/hudi_mor_tbl_shell")
Verify with a regular query.
Insert overwrite
import org.apache.spark.sql._
import org.apache.spark.sql.types._
val fields = Array(
StructField("id", IntegerType, true),
StructField("name", StringType, true),
StructField("price", DoubleType, true),
StructField("ts", LongType, true)
)
val simpleSchema = StructType(fields)
val data = Seq(Row(99, "a99", 20.0, 900L))
val df = spark.createDataFrame(data, simpleSchema)
df.write.format("hudi").
option(OPERATION.key(),"insert_overwrite").
option(PRECOMBINE_FIELD.key(), "ts").
option(RECORDKEY_FIELD.key(), "id").
option(TBL_NAME.key(), "hudi_mor_tbl_shell").
option(TABLE_TYPE_OPT_KEY, "MERGE_ON_READ").
mode(Append).
save("hdfs:///hudi/hudi_mor_tbl_shell")
Verify with a regular query; only the newly written record remains.
Delete data
import org.apache.spark.sql._
import org.apache.spark.sql.types._
val fields = Array(
StructField("id", IntegerType, true),
StructField("name", StringType, true),
StructField("price", DoubleType, true),
StructField("ts", LongType, true)
)
val simpleSchema = StructType(fields)
val data = Seq(Row(2, "a2", 400.0, 2222L))
val df = spark.createDataFrame(data, simpleSchema)
df.write.format("hudi").
option(OPERATION_OPT_KEY,"delete").
option(PRECOMBINE_FIELD_OPT_KEY, "ts").
option(RECORDKEY_FIELD_OPT_KEY, "id").
option(TABLE_NAME, "hudi_mor_tbl_shell").
mode(Append).
save("hdfs:///hudi/hudi_mor_tbl_shell")
Verify with a regular query.
Spark SQL
Start a Hudi-enabled spark-sql with:
./spark-sql \
--master yarn \
--conf 'spark.serializer=org.apache.spark.serializer.KryoSerializer' \
--conf 'spark.sql.extensions=org.apache.spark.sql.hudi.HoodieSparkSessionExtension' \
--conf 'spark.sql.catalog.spark_catalog=org.apache.spark.sql.hudi.catalog.HoodieCatalog'
If you are using Hudi 0.11.x, run this instead:
./spark-sql \
--master yarn \
--conf 'spark.serializer=org.apache.spark.serializer.KryoSerializer' \
--conf 'spark.sql.extensions=org.apache.spark.sql.hudi.HoodieSparkSessionExtension'
Create a table
create table hudi_mor_tbl (
id int,
name string,
price double,
ts bigint
) using hudi
tblproperties (
type = 'mor',
primaryKey = 'id',
preCombineField = 'ts'
)
location 'hdfs:///hudi/hudi_mor_tbl';
Verify:
show tables;
Insert data
SQL:
insert into hudi_mor_tbl select 1, 'a1', 20, 1000;
Verify:
select * from hudi_mor_tbl;
Regular query
SQL:
select * from hudi_mor_tbl;
Update data
SQL:
update hudi_mor_tbl set price = price * 2, ts = 1111 where id = 1;
Verify:
select * from hudi_mor_tbl;
Insert overwrite
SQL:
insert overwrite hudi_mor_tbl select 99, 'a99', 20.0, 900;
Verify:
select * from hudi_mor_tbl;
Only the newly written record remains.
Delete data
SQL:
delete from hudi_mor_tbl where id % 2 = 1;
Verify:
select * from hudi_mor_tbl;
Kerberos and permission configuration
For example, suppose the hudi user operates the Hudi table, jobs are submitted to the default queue, and the table path is hdfs:///hudi/t1.
Create the hudi user in Ranger, then grant it read/write access to the HDFS directories /hudi/t1, /tmp, and /user/hudi (or grant read/write on the root directory), as well as permission to submit to the YARN default queue.
If Kerberos is enabled, create the [email protected] principal in Kerberos and generate the corresponding keytab; after running kinit, the user can operate the table.
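A minimal sketch of these steps on an MIT Kerberos KDC (the keytab path /etc/security/keytabs/hudi.keytab and the EXAMPLE.COM realm are placeholders; substitute your own principal and realm):
# on the KDC host: create the principal and export its keytab
kadmin.local -q "addprinc -randkey hudi@EXAMPLE.COM"
kadmin.local -q "xst -k /etc/security/keytabs/hudi.keytab hudi@EXAMPLE.COM"
# on the client host: authenticate as hudi before starting spark-shell / spark-sql
kinit -kt /etc/security/keytabs/hudi.keytab hudi@EXAMPLE.COM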
FAQ
spark-sql or spark-shell fails to start with NoClassDefFoundError: org/apache/hadoop/shaded/javax/ws/rs/core/NoContentException
Error log:
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/shaded/javax/ws/rs/core/NoContentException
at org.apache.hadoop.yarn.util.timeline.TimelineUtils.(TimelineUtils.java:60)
at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.serviceInit(YarnClientImpl.java:200)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:191)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:62)
at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:222)
at org.apache.spark.SparkContext.(SparkContext.scala:585)
at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2704)
at org.apache.spark.sql.SparkSession$Builder.$anonfun$getOrCreate$2(SparkSession.scala:953)
at scala.Option.getOrElse(Option.scala:189)
at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:947)
at org.apache.spark.sql.hive.thriftserver.SparkSQLEnv$.init(SparkSQLEnv.scala:54)
at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.(SparkSQLCLIDriver.scala:327)
at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:159)
at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:958)
at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1046)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1055)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.shaded.javax.ws.rs.core.NoContentException
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 27 more
Cause: a mismatch between the Hadoop and Spark versions.
Solution: disable the YARN timeline service; see the Environment configuration section above for how.
Reference:
https://github.com/apache/kyuubi/issues/2904
Creating a table fails with CreateHoodieTableCommand: Failed to create catalog table in metastore: org.apache.hudi.hadoop.realtime.HoodieParquetRealtimeInputFormat
The original error message does not reveal the root cause, so add extra logging in:
hudi-spark-datasource/hudi-spark-common/src/main/scala/org/apache/spark/sql/hudi/command/CreateHoodieTableCommand.scala
Around line 85, change the catch block to:
case NonFatal(e) => {
logWarning(s"Failed to create catalog table in metastore: ${e.getMessage}")
logWarning(s"Failed to create catalog table in metastore: ${e.getClass}")
logWarning(s"Failed to create catalog table in metastore: ${e.getStackTrace.mkString("Array(", ", ", ")")}")
}
Note: the latest code already fixes the vague error message; I submitted the fix to the community, see https://issues.apache.org/jira/browse/HUDI-6394.
Rebuild, replace the jar, and run again; a more detailed error log appears:
23/06/09 17:15:54 WARN CreateHoodieTableCommand: Failed to create catalog table in metastore: org.apache.hudi.hadoop.realtime.HoodieParquetRealtimeInputFormat
23/06/09 17:15:54 WARN CreateHoodieTableCommand: Failed to create catalog table in metastore: class java.lang.ClassNotFoundException
23/06/09 17:15:54 WARN CreateHoodieTableCommand: Failed to create catalog table in metastore: Array(...stack trace, reproduced in full below...)
The error is a ClassNotFoundException, and the class that cannot be found is org.apache.hudi.hadoop.realtime.HoodieParquetRealtimeInputFormat. A quick search shows that this class is shipped in the hudi-hadoop-mr-bundle package. The stack trace from the warning log, formatted for readability:
java.net.URLClassLoader.findClass(URLClassLoader.java:381)
java.lang.ClassLoader.loadClass(ClassLoader.java:424)
java.lang.ClassLoader.loadClass(ClassLoader.java:357)
java.lang.Class.forName0(Native Method)
java.lang.Class.forName(Class.java:348)
org.apache.spark.util.Utils$.classForName(Utils.scala:218)
org.apache.spark.sql.hive.client.HiveClientImpl$.toInputFormat(HiveClientImpl.scala:1041)
org.apache.spark.sql.hive.client.HiveClientImpl$.$anonfun$toHiveTable$8(HiveClientImpl.scala:1080)
scala.Option.map(Option.scala:230)
org.apache.spark.sql.hive.client.HiveClientImpl$.toHiveTable(HiveClientImpl.scala:1080)
org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$createTable$1(HiveClientImpl.scala:554)
scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$withHiveState$1(HiveClientImpl.scala:294)
org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:225)
org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:224)
org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:274)
org.apache.spark.sql.hive.client.HiveClientImpl.createTable(HiveClientImpl.scala:552)
org.apache.spark.sql.hudi.command.CreateHoodieTableCommand$.createHiveDataSourceTable(CreateHoodieTableCommand.scala:198)
org.apache.spark.sql.hudi.command.CreateHoodieTableCommand$.createTableInCatalog(CreateHoodieTableCommand.scala:169)
org.apache.spark.sql.hudi.command.CreateHoodieTableCommand.run(CreateHoodieTableCommand.scala:83)
org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:75)
org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:73)
org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:84)
org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.$anonfun$applyOrElse$1(QueryExecution.scala:98)
org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$6(SQLExecution.scala:109)
org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:169)
org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:95)
org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:779)
org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:98)
org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:94)
org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:584)
org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:176)
org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:584)
org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:30)
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:267)
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:263)
org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:30)
org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:30)
org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:560)
org.apache.spark.sql.execution.QueryExecution.eagerlyExecuteCommands(QueryExecution.scala:94)
org.apache.spark.sql.execution.QueryExecution.commandExecuted$lzycompute(QueryExecution.scala:81)
org.apache.spark.sql.execution.QueryExecution.commandExecuted(QueryExecution.scala:79)
org.apache.spark.sql.Dataset.(Dataset.scala:220)
org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:100)
org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:779)
org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:97)
org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:622)
org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:779)
org.apache.spark.sql.SparkSession.sql(SparkSession.scala:617)
org.apache.spark.sql.SQLContext.sql(SQLContext.scala:651)
org.apache.spark.sql.hive.thriftserver.SparkSQLDriver.run(SparkSQLDriver.scala:67)
org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:384)
org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.$anonfun$processLine$1(SparkSQLCLIDriver.scala:504)
org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.$anonfun$processLine$1$adapted(SparkSQLCLIDriver.scala:498)
scala.collection.Iterator.foreach(Iterator.scala:943)
scala.collection.Iterator.foreach$(Iterator.scala:943)
scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
scala.collection.IterableLike.foreach(IterableLike.scala:74)
scala.collection.IterableLike.foreach$(IterableLike.scala:73)
scala.collection.AbstractIterable.foreach(Iterable.scala:56)
org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processLine(SparkSQLCLIDriver.scala:498)
org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:286)
org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:498)
org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:958)
org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1046)
org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1055)
org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Copy the hudi-hadoop-mr-bundle-0.13.1.jar produced by the Hudi build into the lib or auxlib directory of the Hive installation and restart the Hive metastore service; the error then disappears.
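For example, a sketch (the Hive installation directory /usr/lib/hive is an assumption; adjust it to your environment and restart the metastore with whatever service manager you use):
# copy the Hudi Hadoop MR bundle built earlier into Hive's lib directory
cp hudi/packaging/hudi-hadoop-mr-bundle/target/hudi-hadoop-mr-bundle-0.13.1.jar /usr/lib/hive/lib/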
The spark-sql / spark-shell command line is long because the required Hudi conf options must be added every time; can it be simplified?
Yes: put the Hudi configuration into the spark-defaults.conf file. For example, for Hudi 0.13.1 add the following to spark-defaults.conf:
spark.serializer=org.apache.spark.serializer.KryoSerializer
spark.sql.catalog.spark_catalog=org.apache.spark.sql.hudi.catalog.HoodieCatalog
spark.sql.extensions=org.apache.spark.sql.hudi.HoodieSparkSessionExtension
After this change, starting spark-shell only requires:
./spark-shell --master yarn
For spark-sql:
./spark-sql --master yarn
This is much simpler than the startup commands shown earlier.