Implementing group_concat in Spark SQL

Environment: Spark 2.0.1

The approach below appears to require at least Spark 1.6; it has not been verified on older versions (user yanshichuan1 reports that Spark 1.5.1 works as well, thanks).

Table structure and contents:

+-------+---+
|   name|age|
+-------+---+
|Michael| 29|
|   Andy| 30|
| Justin| 19|
| Justin| 20|
|     LI| 20|
+-------+---+
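
The examples below assume a DataFrame named parquetFile holding this table. A minimal sketch of how it might be loaded (the file name people.parquet is a hypothetical placeholder), using the SQLContext API this post is written against:

import org.apache.spark.sql.SQLContext

val sqlContext = new SQLContext(sc)  // sc is an existing SparkContext; Spark 2.0+ would prefer SparkSession
// Hypothetical path; any Parquet file with columns (name: string, age: int) works
val parquetFile = sqlContext.read.parquet("people.parquet")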


parquetFile.registerTempTable("people")  // deprecated in Spark 2.0; createOrReplaceTempView is the newer equivalent
sqlContext.sql("select concat_ws(',', collect_set(name)) as names, age from people group by age").show()
+---------+---+
|    names|age|
+---------+---+
|LI,Justin| 20|
|   Justin| 19|
|  Michael| 29|
|     Andy| 30|
+---------+---+
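
Note that collect_set deduplicates values within each group and gives no ordering guarantee, whereas MySQL's group_concat keeps duplicates. If duplicates should be preserved, collect_list is the closer match; a minimal variant of the query above:

sqlContext.sql("select concat_ws(',', collect_list(name)) as names, age from people group by age").show()

The same result can be obtained through the DataFrame API: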
import org.apache.spark.sql.functions._
parquetFile.groupBy("age")
           .agg(collect_set("name"))
           .show()
+---+-----------------+
|age|collect_set(name)|
+---+-----------------+
| 20|     [LI, Justin]|
| 19|         [Justin]|
| 29|        [Michael]|
| 30|           [Andy]|
+---+-----------------+
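
The DataFrame API result above is an array column; to mirror the group_concat string output, concat_ws can be applied on top of collect_set. A minimal sketch using the same functions import:

parquetFile.groupBy("age")
           .agg(concat_ws(",", collect_set("name")).as("names"))
           .show()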


Source: http://blog.csdn.net/liliwei0213/article/details/52813576
