Hudi Integration with Spark: Concurrency Control and Parallel Writing

Original: Hudi (10): Hudi Integration with Spark, Concurrency Control (CSDN blog)

Contents

0. Related Articles

1. Concurrency Control Supported by Hudi

1.1. MVCC

1.2. OPTIMISTIC CONCURRENCY

2. Enabling Concurrent Writes

3. Concurrent Writes with the Spark DataFrame API

4. Concurrent Writes with Delta Streamer


0. Related Articles

Hudi Article Index

1. Concurrency Control Supported by Hudi

1.1. MVCC

        For table operations such as compaction, cleaning, and commits, Hudi uses Multi-Version Concurrency Control (MVCC) to provide snapshot isolation between these table-service writes and concurrent queries. Under the MVCC model, Hudi can run any number of such operations concurrently and guarantees that no conflicts occur. This is Hudi's default model. In MVCC mode, all table services run in the same writer process, which rules out conflicts and avoids race conditions.


1.2. OPTIMISTIC CONCURRENCY

        For write operations (upsert, insert, etc.), optimistic concurrency control (OCC) allows multiple writers to write data into the same table. Hudi supports file-level optimistic concurrency: for any two commits (writes) to the same table, if they do not write to overlapping files that are being changed, both writes are allowed to succeed. This feature is experimental and requires ZooKeeper or the Hive Metastore to acquire locks.


2. Enabling Concurrent Writes

  • To enable optimistic concurrent writes, the following properties must be set:

hoodie.write.concurrency.mode=optimistic_concurrency_control
hoodie.cleaner.policy.failed.writes=LAZY
hoodie.write.lock.provider=<lock provider class, see below>

  • Hudi's lock service supports two providers, using either ZooKeeper or the Hive Metastore:

Related ZooKeeper parameters:


hoodie.write.lock.provider=org.apache.hudi.client.transaction.lock.ZookeeperBasedLockProvider
hoodie.write.lock.zookeeper.url
hoodie.write.lock.zookeeper.port
hoodie.write.lock.zookeeper.lock_key
hoodie.write.lock.zookeeper.base_path

Related Hive Metastore parameters (the Hive Metastore URI is taken from the Hadoop configuration loaded at runtime); a short usage sketch follows the parameter list:


hoodie.write.lock.provider=org.apache.hudi.hive.HiveMetastoreBasedLockProvider
hoodie.write.lock.hivemetastore.database
hoodie.write.lock.hivemetastore.table
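
Section 3 below demonstrates the ZooKeeper provider with the Spark DataFrame API. For comparison, a minimal sketch of the same write configured for the Hive Metastore provider could look like the following; the database and table names ("default", "hudi_trips_cow") are only placeholders for whichever Hive entity should hold the lock, and df / tableName / basePath are the values defined in section 3:

// minimal sketch: same write as in section 3, but with the Hive Metastore lock provider;
// "default" / "hudi_trips_cow" are placeholder database/table names used only for the lock
df.write.format("hudi").
  options(getQuickstartWriteConfigs).
  option(PRECOMBINE_FIELD_OPT_KEY, "ts").
  option(RECORDKEY_FIELD_OPT_KEY, "uuid").
  option(PARTITIONPATH_FIELD_OPT_KEY, "partitionpath").
  option("hoodie.write.concurrency.mode", "optimistic_concurrency_control").
  option("hoodie.cleaner.policy.failed.writes", "LAZY").
  option("hoodie.write.lock.provider", "org.apache.hudi.hive.HiveMetastoreBasedLockProvider").
  option("hoodie.write.lock.hivemetastore.database", "default").
  option("hoodie.write.lock.hivemetastore.table", "hudi_trips_cow").
  option(TABLE_NAME, tableName).
  mode(Append).
  save(basePath)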

3. Concurrent Writes with the Spark DataFrame API

(1) Start spark-shell


spark-shell \
  --conf 'spark.serializer=org.apache.spark.serializer.KryoSerializer' \
  --conf 'spark.sql.catalog.spark_catalog=org.apache.spark.sql.hudi.catalog.HoodieCatalog' \
  --conf 'spark.sql.extensions=org.apache.spark.sql.hudi.HoodieSparkSessionExtension'

(2) Write the code (the key part is the hoodie-related write options)


import org.apache.hudi.QuickstartUtils._
import scala.collection.JavaConversions._
import org.apache.spark.sql.SaveMode._
import org.apache.hudi.DataSourceReadOptions._
import org.apache.hudi.DataSourceWriteOptions._
import org.apache.hudi.config.HoodieWriteConfig._

val tableName = "hudi_trips_cow"
val basePath = "file:///tmp/hudi_trips_cow"
val dataGen = new DataGenerator

val inserts = convertToStringList(dataGen.generateInserts(10))
val df = spark.read.json(spark.sparkContext.parallelize(inserts, 2))

df.write.format("hudi").
  options(getQuickstartWriteConfigs).
  option(PRECOMBINE_FIELD_OPT_KEY, "ts").
  option(RECORDKEY_FIELD_OPT_KEY, "uuid").
  option(PARTITIONPATH_FIELD_OPT_KEY, "partitionpath").
  option("hoodie.write.concurrency.mode", "optimistic_concurrency_control").
  option("hoodie.cleaner.policy.failed.writes", "LAZY").
  option("hoodie.write.lock.provider", "org.apache.hudi.client.transaction.lock.ZookeeperBasedLockProvider").
  option("hoodie.write.lock.zookeeper.url", "hadoop1,hadoop2,hadoop3").
  option("hoodie.write.lock.zookeeper.port", "2181").
  option("hoodie.write.lock.zookeeper.lock_key", "test_table").
  option("hoodie.write.lock.zookeeper.base_path", "/multiwriter_test").
  option(TABLE_NAME, tableName).
  mode(Append).
  save(basePath)

(3) Use the ZooKeeper client to verify that ZooKeeper was actually used


/opt/module/apache-zookeeper-3.5.7/bin/zkCli.sh
[zk: localhost:2181(CONNECTED) 0] ls /

(4) The corresponding znodes are created in ZooKeeper; the node under /multiwriter_test is the lock_key specified in the code

[zk: localhost:2181(CONNECTED) 1] ls /multiwriter_test
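
(5) Optionally, read the table back in the same spark-shell to confirm the data was written; this is a minimal sketch using the standard Hudi DataFrame read, with the field names taken from the write options above:

// read the table back from basePath and show a few records
val tripsDF = spark.read.format("hudi").load(basePath)
tripsDF.select("uuid", "partitionpath", "ts").show(10, false)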

4. Concurrent Writes with Delta Streamer

Building on the earlier DeltaStreamer example, use Delta Streamer to consume data from Kafka and write it into Hudi, this time adding the concurrent-write parameters.

1) Go to the configuration directory, copy and edit the configuration file to add the parameters, then upload it to HDFS


cd /opt/module/hudi-props/
cp kafka-source.properties kafka-multiwriter-source.properties
vim kafka-multiwriter-source.properties

hoodie.write.concurrency.mode=optimistic_concurrency_control
hoodie.cleaner.policy.failed.writes=LAZY
hoodie.write.lock.provider=org.apache.hudi.client.transaction.lock.ZookeeperBasedLockProvider
hoodie.write.lock.zookeeper.url=hadoop1,hadoop2,hadoop3
hoodie.write.lock.zookeeper.port=2181
hoodie.write.lock.zookeeper.lock_key=test_table2
hoodie.write.lock.zookeeper.base_path=/multiwriter_test2

hadoop fs -put /opt/module/hudi-props/kafka-multiwriter-source.properties /hudi-props

2) Run Delta Streamer


spark-submit \
  --class org.apache.hudi.utilities.deltastreamer.HoodieDeltaStreamer \
  /opt/module/spark-3.2.2/jars/hudi-utilities-bundle_2.12-0.12.0.jar \
  --props hdfs://hadoop1:8020/hudi-props/kafka-multiwriter-source.properties \
  --schemaprovider-class org.apache.hudi.utilities.schema.FilebasedSchemaProvider \
  --source-class org.apache.hudi.utilities.sources.JsonKafkaSource \
  --source-ordering-field userid \
  --target-base-path hdfs://hadoop1:8020/tmp/hudi/hudi_test_multi \
  --target-table hudi_test_multi \
  --op INSERT \
  --table-type MERGE_ON_READ

3) Check whether new znodes were created in ZooKeeper


/opt/module/apache-zookeeper-3.5.7-bin/bin/zkCli.sh
[zk: localhost:2181(CONNECTED) 0] ls /
[zk: localhost:2181(CONNECTED) 1] ls /multiwriter_test2
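
4) Optionally, query the target table from spark-shell to confirm the ingested data; a minimal sketch, where the path is simply the --target-base-path used above:

// snapshot read of the MERGE_ON_READ table written by Delta Streamer
val multiDF = spark.read.format("hudi").load("hdfs://hadoop1:8020/tmp/hudi/hudi_test_multi")
multiDF.show(10, false)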


Note: for links to other Hudi-related articles, see the Hudi Article Index.

