Inserting data into Hive from Spark with dynamic partitions

Calling spark.sql to insert into Hive fails. The statement being executed is:

spark.sql("INSERT INTO default.test_table_partition partition(province,city) SELECT xxx,xxx md5(province),md5(city)  FROM test_table")

Because every partition column is dynamic, it fails with the following error:


Exception in thread "main" org.apache.spark.SparkException: Dynamic partition strict mode requires at least one static partition column. To turn this off set hive.exec.dynamic.partition.mode=nonstrict
	at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.run(InsertIntoHiveTable.scala:314)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:66)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:61)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:77)
	at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:183)
	at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:183)
	at org.apache.spark.sql.Dataset$$anonfun$54.apply(Dataset.scala:2841)
	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
	at org.apache.spark.sql.Dataset.withAction(Dataset.scala:2840)
	at org.apache.spark.sql.Dataset.<init>(Dataset.scala:183)
	at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:68)
	at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:632)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:775)
	at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
	at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:119)
	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

The error message itself points to the fix: either make at least one partition column static, or switch dynamic partitioning to nonstrict mode. For the latter, add the following to the Spark configuration:

.config("hive.exec.dynamici.partition",true)
.config("hive.exec.dynamic.partition.mode","nonstrict")

val spark = SparkSession
  .builder()
  //  .master("local[2]")
  .appName("WeiBoAccount-Verified")
  .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .config("hive.exec.dynamic.partition", true)
  .config("hive.exec.dynamic.partition.mode", "nonstrict")
  .enableHiveSupport()
  .getOrCreate()
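
If you cannot change where the SparkSession is constructed, the same properties can also be set at runtime with SQL SET statements (assuming Hive support is enabled) before the insert runs:

// Runtime alternative: set the Hive properties via SQL, then run the insert.
spark.sql("SET hive.exec.dynamic.partition=true")
spark.sql("SET hive.exec.dynamic.partition.mode=nonstrict")
spark.sql("INSERT INTO default.test_table_partition PARTITION(province, city) " +
  "SELECT xxx, xxx, md5(province), md5(city) FROM test_table")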

 

Notes on the relevant parameters:

hive.exec.dynamic.partition — whether dynamic partitioning is enabled: true (on) or false (off). It defaults to false in older Hive releases (since Hive 0.9.0 the default is true).

hive.exec.dynamic.partition.mode — the mode used once dynamic partitioning is enabled, either strict or nonstrict. strict requires at least one static partition column in the insert; nonstrict has no such requirement, as illustrated below. You can look up the trade-offs of each yourself.
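
For instance, a minimal sketch of both forms, reusing the table from above (the 'guangdong' value is a hypothetical placeholder):

// strict mode: at least one static partition column ('province' pinned here),
// so only 'city' is resolved dynamically from the select list.
spark.sql("INSERT INTO default.test_table_partition PARTITION(province='guangdong', city) " +
  "SELECT xxx, xxx, md5(city) FROM test_table WHERE province = 'guangdong'")

// nonstrict mode: all partition columns may be dynamic.
spark.sql("INSERT INTO default.test_table_partition PARTITION(province, city) " +
  "SELECT xxx, xxx, md5(province), md5(city) FROM test_table")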

hive.exec.max.dynamic.partitions — the maximum total number of dynamic partitions allowed to be created. It can be raised manually; the default is 1000.
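
If an insert would create more partitions than this, it fails at runtime. The limit (and the related per-node limit hive.exec.max.dynamic.partitions.pernode, default 100) can be raised the same way as the other properties; the values below are hypothetical and should be sized to your data:

// Hypothetical limits; size these to the real partition count of your data.
spark.sql("SET hive.exec.max.dynamic.partitions=5000")
spark.sql("SET hive.exec.max.dynamic.partitions.pernode=1000")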

 
