[Spark Java API] Transformation (1): mapPartitions, mapPartitionsWithIndex

mapPartitions


Official documentation:

 Return a new RDD by applying a function to each partition of this RDD.

**mapPartitions calls the partition function once per partition and builds the new RDD from the iterators it returns. It is similar to map, but when the mapping repeatedly needs an expensive object, mapPartitions is much more efficient than map. For example, when writing all elements of an RDD to a database over JDBC, map would have to open a connection for every single element, which is very costly; with mapPartitions only one connection per partition is needed.**
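A minimal sketch of that connection-per-partition pattern, in the same Spark 1.x Java style as the examples below. The JDBC URL, credentials, table, and the idea of returning a per-partition row count are illustrative assumptions, not part of this article; a pure write with no result would more commonly use foreachPartition.

// given a JavaRDD<Integer> javaRDD like the one built in the example below
JavaRDD<Integer> rowsWritten = javaRDD.mapPartitions(new FlatMapFunction<Iterator<Integer>, Integer>() {
    @Override
    public Iterable<Integer> call(Iterator<Integer> it) throws Exception {
        // one connection per partition instead of one per element (URL/credentials are placeholders)
        Connection conn = DriverManager.getConnection("jdbc:mysql://host/db", "user", "pwd");
        int count = 0;
        try {
            PreparedStatement ps = conn.prepareStatement("INSERT INTO t(value) VALUES (?)");
            while (it.hasNext()) {
                ps.setInt(1, it.next());
                ps.executeUpdate();
                count++;
            }
        } finally {
            conn.close();
        }
        return Collections.singletonList(count); // rows written by this partition
    }
});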

Method signatures:

def mapPartitions[U](f: FlatMapFunction[java.util.Iterator[T], U]): JavaRDD[U]
def mapPartitions[U](f: FlatMapFunction[java.util.Iterator[T], U],
    preservesPartitioning: Boolean): JavaRDD[U]

**The first overload is implemented in terms of the second, with preservesPartitioning set to false. The second overload lets us specify preservesPartitioning, which indicates whether the partitioner of the parent RDD should be kept. The Iterator passed to the FlatMapFunction is an Iterator over all the elements of one partition of this RDD.**
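A hedged sketch of the second overload (again against a JavaRDD<Integer> javaRDD as in the example below): the flag only matters when the parent RDD actually carries a partitioner, e.g. after a key-based shuffle, so on a freshly parallelized collection it has no visible effect.

JavaRDD<Integer> preserved = javaRDD.mapPartitions(new FlatMapFunction<Iterator<Integer>, Integer>() {
    @Override
    public Iterable<Integer> call(Iterator<Integer> it) throws Exception {
        LinkedList<Integer> out = new LinkedList<Integer>();
        while (it.hasNext())
            out.add(it.next() * 10); // purely element-wise, so keeping the partitioner is safe
        return out;
    }
}, true); // preservesPartitioning = true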

Example:

List<Integer> data = Arrays.asList(1, 2, 4, 3, 5, 6, 7);
// the RDD has two partitions
JavaRDD<Integer> javaRDD = javaSparkContext.parallelize(data, 2);
// compute the sum of each partition
JavaRDD<Integer> mapPartitionsRDD = javaRDD.mapPartitions(new FlatMapFunction<Iterator<Integer>, Integer>() {
    @Override
    public Iterable<Integer> call(Iterator<Integer> integerIterator) throws Exception {
        int isum = 0;
        while (integerIterator.hasNext())
            isum += integerIterator.next();
        LinkedList<Integer> linkedList = new LinkedList<Integer>();
        linkedList.add(isum);
        return linkedList;
    }
});

System.out.println("mapPartitionsRDD~~~~~~~~~~~~~~~~~~~~~~" + mapPartitionsRDD.collect());

mapPartitionsWithIndex


Official documentation:

 Return a new RDD by applying a function to each partition of this RDD, while tracking the index of the original partition.

**mapPartitionsWithIndex is essentially the same as mapPartitions, except that the partition function takes two arguments: the first is the index of the partition currently being processed, and the second is an Iterator over the elements of that partition.**

Method signature:

def mapPartitionsWithIndex[R](f: JFunction2[jl.Integer, java.util.Iterator[T], java.util.Iterator[R]],
    preservesPartitioning: Boolean = false): JavaRDD[R]

Source code analysis:

def mapPartitions[U: ClassTag](f: Iterator[T] => Iterator[U],
    preservesPartitioning: Boolean = false): RDD[U] = withScope {
  val cleanedF = sc.clean(f)
  new MapPartitionsRDD(this,
    (context: TaskContext, index: Int, iter: Iterator[T]) => cleanedF(iter),
    preservesPartitioning)
}

def mapPartitionsWithIndex[U: ClassTag](f: (Int, Iterator[T]) => Iterator[U],
    preservesPartitioning: Boolean = false): RDD[U] = withScope {
  val cleanedF = sc.clean(f)
  new MapPartitionsRDD(this,
    (context: TaskContext, index: Int, iter: Iterator[T]) => cleanedF(index, iter),
    preservesPartitioning)
}

**As the source shows, mapPartitions already receives the index of the partition being processed; it simply does not pass it on to the user's partition function, whereas mapPartitionsWithIndex does.**

Example:

List<Integer> data = Arrays.asList(1, 2, 4, 3, 5, 6, 7);
// the RDD has two partitions
JavaRDD<Integer> javaRDD = javaSparkContext.parallelize(data, 2);
// emit the partition index, element value, and element sequence number
JavaRDD<String> mapPartitionsWithIndexRDD = javaRDD.mapPartitionsWithIndex(
        new Function2<Integer, Iterator<Integer>, Iterator<String>>() {
    @Override
    public Iterator<String> call(Integer v1, Iterator<Integer> v2) throws Exception {
        LinkedList<String> linkedList = new LinkedList<String>();
        int i = 0;
        while (v2.hasNext())
            linkedList.add(Integer.toString(v1) + "|" + v2.next().toString() + Integer.toString(i++));
        return linkedList.iterator();
    }
}, false);

System.out.println("mapPartitionsWithIndexRDD~~~~~~~~~~~~~~~~~~~~~~" + mapPartitionsWithIndexRDD.collect());
