Spark Notes: A Java groupByKey Example

It seems reduceByKey can only express operations that are commutative and associative. If you want to collect all the values for a key in one place and run arbitrary logic over them, you have to switch to groupByKey.
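
For contrast: a per-key sum is both commutative and associative, so reduceByKey handles it directly and can even pre-aggregate on each partition before the shuffle, which groupByKey cannot. A minimal sketch, assuming a JavaPairRDD<Integer, Integer> named pairs like the firstRDD built in the sample below (Function2 lives in org.apache.spark.api.java.function):

JavaPairRDD<Integer, Integer> sums = pairs.reduceByKey(
        new Function2<Integer, Integer, Integer>() {
    @Override
    public Integer call(Integer a, Integer b) throws Exception {
        // Merging with a + b is safe in any order and any grouping,
        // so Spark may combine values partition-locally first.
        return a + b;
    }
});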

For example, below I want to collect all the values for the same key and run some computation over them (such as concatenating them into a string, or deduplicating them):

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.api.java.function.PairFunction;
import scala.Tuple2;

import java.util.Arrays;
import java.util.List;

public class SparkSample {

    public static void main(String[] args) {
        SparkConf sparkConf = new SparkConf();
        sparkConf.setAppName("Spark_GroupByKey_Sample");
        sparkConf.setMaster("local");

        JavaSparkContext context = new JavaSparkContext(sparkConf);

        List<Integer> data = Arrays.asList(1, 1, 2, 2, 1);
        JavaRDD<Integer> distData = context.parallelize(data);

        // Turn each element n into the pair (n, n * n).
        JavaPairRDD<Integer, Integer> firstRDD = distData.mapToPair(
                new PairFunction<Integer, Integer, Integer>() {
            @Override
            public Tuple2<Integer, Integer> call(Integer integer) throws Exception {
                return new Tuple2<>(integer, integer * integer);
            }
        });

        // Gather all values that share the same key into one Iterable.
        JavaPairRDD<Integer, Iterable<Integer>> secondRDD = firstRDD.groupByKey();

        // Concatenate each key's values into a space-separated string.
        List<Tuple2<Integer, String>> reslist = secondRDD.map(
                new Function<Tuple2<Integer, Iterable<Integer>>, Tuple2<Integer, String>>() {
            @Override
            public Tuple2<Integer, String> call(Tuple2<Integer, Iterable<Integer>> integerIterableTuple2)
                    throws Exception {
                int key = integerIterableTuple2._1();
                StringBuilder sb = new StringBuilder();
                for (Integer integer : integerIterableTuple2._2()) {
                    sb.append(integer).append(" ");
                }
                return new Tuple2<>(key, sb.toString().trim());
            }
        }).collect();

        for (Tuple2<Integer, String> str : reslist) {
            System.out.println(str._1() + "\t" + str._2());
        }
        context.stop();
    }
}
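
In local mode the five input elements become the pairs (1,1), (1,1), (2,4), (2,4), (1,1), so key 1 groups to [1, 1, 1] and key 2 to [4, 4]. The printed result should look like the following (the key order is not guaranteed and may differ between runs):

1	1 1 1
2	4 4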


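The deduplication case mentioned at the top works the same way; only the body of call changes. A sketch under the same setup, reusing secondRDD from the sample (Set, LinkedHashSet, and ArrayList come from java.util; LinkedHashSet keeps the values in encounter order):

// Drop duplicate values per key by funnelling them through a set.
JavaRDD<Tuple2<Integer, List<Integer>>> deduped = secondRDD.map(
        new Function<Tuple2<Integer, Iterable<Integer>>, Tuple2<Integer, List<Integer>>>() {
    @Override
    public Tuple2<Integer, List<Integer>> call(Tuple2<Integer, Iterable<Integer>> t)
            throws Exception {
        Set<Integer> seen = new LinkedHashSet<>();
        for (Integer v : t._2()) {
            seen.add(v);
        }
        return new Tuple2<>(t._1(), new ArrayList<>(seen));
    }
});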