Reading MySQL Concurrently with Spark

When Spark reads a very large amount of data from MySQL, the job gets stuck at `Added broadcast_0_piece0 in memory on cdh-master` and makes no further progress.

19/09/18 14:21:17 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on cdh-slave01:38815 (size: 10.2 KB, free: 2004.6 MB)

1. Checked the code; nothing obviously wrong.

import java.util.Properties

// Today at 00:00, e.g. 2019-09-18 00:00:00
val today = DateUtils.formatTimeZone(DateUtils.getTodayDate)
// 00:00 forty days ago, e.g. 2019-08-08 00:00:00
val lastFortyDay = DateUtils.formatTimeZone(DateUtils.getDate(-40))

val prop = new Properties()
prop.put("user", "root")
prop.put("password", "root")
prop.put("driver", "com.mysql.jdbc.Driver")

val url = "jdbc:mysql://192.168.100.47:3306/origin?zeroDateTimeBehavior=convertToNull"
val table = "user"

// TIME is the datetime column in the MySQL table; $lastFortyDay = 40 days ago, $today = today
val df = spark.read.jdbc(url, table, prop)
df.where(s"`TIME` between '$lastFortyDay' and '$today'").show()

2. No matter how many executors are requested, they all end up killed for timing out, leaving a single executor still reading from MySQL with no visible progress.

sudo -uhdfs spark-submit \
--class com.sm.analysis.TestRead \
--master yarn \
--deploy-mode client \
--driver-memory 4G \
--driver-cores 4 \
--num-executors 8 \
--executor-memory 4G \
--executor-cores 5 \
--jars /usr/java/jdk1.8.0_211/lib/mysql-connector-java-5.1.47.jar \
--conf spark.default.parallelism=80 \
/data4/jars/original-analysis-1.0-SNAPSHOT.jar

3. After several tests it is fairly certain the cause is simply the data volume (and possibly MySQL throttling the single long-running query): `spark.read.jdbc(url, table, prop)` scans the whole table through one JDBC connection in one task. The fix is to raise the parallelism of the Spark query so each task pulls a smaller slice of data and the overall read finishes faster.

Since the filter is on the TIME column, the `predicates` overload of the Spark SQL JDBC reader is used, where each predicate string defines one partition.

    // Today at 00:00, e.g. 2019-09-18 00:00:00
    val today = DateUtils.formatTimeZone(DateUtils.getTodayDate)
    // 00:00 of 40 / 30 / 20 / 10 days ago
    val lastFortyDay  = DateUtils.formatTimeZone(DateUtils.getDate(-40))
    val lastThirtyDay = DateUtils.formatTimeZone(DateUtils.getDate(-30))
    val lastTwentyDay = DateUtils.formatTimeZone(DateUtils.getDate(-20))
    val lastTenDay    = DateUtils.formatTimeZone(DateUtils.getDate(-10))

    val prop = new Properties()
    prop.put("user", "root")
    prop.put("password", "root")
    prop.put("driver", "com.mysql.jdbc.Driver")

    val url = "jdbc:mysql://192.168.100.47:3306/origin?zeroDateTimeBehavior=convertToNull"
    val table = "user"

    // Each predicate becomes the WHERE clause of its own partition, so this
    // read runs as 4 parallel JDBC queries instead of 1 full-table scan.
    // Note: BETWEEN is inclusive on both ends, so rows exactly on a boundary
    // timestamp land in two adjacent partitions.
    val predicates = Array(
      lastFortyDay -> lastThirtyDay,
      lastThirtyDay -> lastTwentyDay,
      lastTwentyDay -> lastTenDay,
      lastTenDay -> today
    ).map {
      case (start, end) =>
        s"`TIME` between '$start' and '$end'"
    }

    val df = spark.read.jdbc(url, table, predicates, prop)

    df.show()

Also note that if `.coalesce(1)` is applied after the partitioned read, the read still effectively runs on a single executor: `coalesce` narrows the plan without a shuffle, so the upstream JDBC scan collapses into one task, as in the sketch below.
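
A minimal sketch of the difference between `coalesce` and `repartition` here, assuming the same `url`, `table`, `predicates`, and `prop` as above:

    // coalesce(1) avoids a shuffle, so the JDBC scan itself is collapsed
    // into a single task and the parallel predicates buy nothing.
    val singleTaskRead = spark.read.jdbc(url, table, predicates, prop).coalesce(1)

    // repartition(1) inserts a shuffle: the scan still runs one task per
    // predicate, and only the shuffled result is merged into one partition.
    val parallelRead = spark.read.jdbc(url, table, predicates, prop).repartition(1)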

Another situation seen before: when the Spark job's plan is very long and consists entirely of transformation operators, the job can also sit waiting and never finish; in that case it is best to persist an intermediate result at a suitable point, as sketched below.
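
A minimal sketch of breaking up a long chain with an intermediate cache; `transformA` and `transformB` are hypothetical stand-ins for the real transformation steps:

    import org.apache.spark.storage.StorageLevel

    // Materialize the expensive intermediate result once instead of
    // recomputing the whole transformation chain for every downstream use.
    val intermediate = transformA(df).persist(StorageLevel.MEMORY_AND_DISK)
    intermediate.count()   // trigger an action so the cache is actually populated

    val result = transformB(intermediate)
    result.show()

    intermediate.unpersist()   // release the cache once downstream work is done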

4. Other ways to increase the read parallelism.

    val prop = new Properties()
    prop.put("user", "root")
    prop.put("password", "root")
    prop.put("driver", "com.mysql.jdbc.Driver")

    val url = "jdbc:mysql://192.168.100.47:3306/origin?zeroDateTimeBehavior=convertToNull"
    val table = "user"
    // Partition on the numeric ID column: Spark splits the range
    // [lowerBound, upperBound] into numPartitions strides. The bounds only
    // control the stride; rows outside the range still land in the first
    // or last partition rather than being filtered out.
    val columnName = "ID"
    val lowerBound = 0
    val upperBound = 100000
    val numPartitions = 10

    val df = spark.read.jdbc(url, table, columnName, lowerBound, upperBound, numPartitions, prop)
    df.limit(10).show()
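
The same partitioned read can also be written with the DataFrameReader option API; a sketch assuming the same connection settings (the fetchsize value is an illustrative choice, not a tuned one):

    val df2 = spark.read
      .format("jdbc")
      .option("url", url)
      .option("dbtable", table)
      .option("user", "root")
      .option("password", "root")
      .option("driver", "com.mysql.jdbc.Driver")
      .option("partitionColumn", "ID")   // numeric column (date/timestamp supported in newer Spark versions)
      .option("lowerBound", "0")
      .option("upperBound", "100000")
      .option("numPartitions", "10")
      .option("fetchsize", "10000")      // JDBC fetch size hint; effect depends on the driver
      .load()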

 
