Spark RDD basic transformation operations -- the coalesce operation

10. The coalesce operation

Create an RDD consisting of the numbers 1 to 100 with 10 partitions. Then call coalesce to merge the partitions down to 5, then attempt to expand the count to 7, and observe the effect of each operation.
scala>  val rddData1 = sc.parallelize(1 to 100,10)
rddData1: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[17] at parallelize at <console>:24

scala>  rddData1.partitions.length
res4: Int = 10

scala>  val rddData2 = rddData1.coalesce(5)
rddData2: org.apache.spark.rdd.RDD[Int] = CoalescedRDD[18] at coalesce at <console>:26

scala>  rddData2.partitions.length
res5: Int = 5

scala>  val rddData3 = rddData2.coalesce(7)
rddData3: org.apache.spark.rdd.RDD[Int] = CoalescedRDD[19] at coalesce at <console>:28

scala>  rddData3.partitions.length
res6: Int = 5

Explanation:
val rddData1 = sc.parallelize(1 to 100, 10): creates an RDD with 10 partitions.

rddData1.partitions.length: shows that rddData1 has 10 partitions.

val rddData2 = rddData1.coalesce(5): merges the partitions down to 5.

coalesce converts a source RDD with more partitions into a target RDD with fewer partitions. If the requested partition count is greater than the source RDD's, the original count is kept and the operation has no effect: in val rddData3 = rddData2.coalesce(7) above, rddData2 has 5 partitions, the target rddData3 asks for 7, and the result still has 5 partitions. This is because coalesce does not shuffle by default, so it can only narrow partitions, never widen them; to actually increase the partition count, pass shuffle = true (or use repartition).
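The behavior above follows from coalesce's signature, coalesce(numPartitions: Int, shuffle: Boolean = false). A minimal self-contained sketch of the difference (run in local mode rather than the spark-shell, so the SparkContext is created explicitly; the app name is arbitrary):

```scala
import org.apache.spark.sql.SparkSession

object CoalesceDemo {
  def main(args: Array[String]): Unit = {
    // Local-mode session so the example runs outside the spark-shell.
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("coalesce-demo")
      .getOrCreate()
    val sc = spark.sparkContext

    val rdd = sc.parallelize(1 to 100, 5)

    // Default coalesce (shuffle = false) cannot increase partitions: still 5.
    val same = rdd.coalesce(7)
    println(same.partitions.length)          // 5

    // With shuffle = true, coalesce performs a full shuffle and can grow the count.
    val grown = rdd.coalesce(7, shuffle = true)
    println(grown.partitions.length)         // 7

    // repartition(n) is shorthand for coalesce(n, shuffle = true).
    val repartitioned = rdd.repartition(7)
    println(repartitioned.partitions.length) // 7

    spark.stop()
  }
}
```

Note the trade-off: shrinking partitions with the default coalesce avoids a shuffle entirely, while growing them (or using repartition) always triggers one.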
