Spark RDD

  • Launch spark-shell (see the local-mode note after this list)
    bin/spark-shell --master spark://bigdata.eclipse.com:7077
  • Using reduceByKey, groupByKey, sortByKey, and join
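If the standalone master above is not available, the same examples also run in a local spark-shell (a minimal alternative, assuming a standard Spark distribution; local[2] uses two worker threads):
    bin/spark-shell --master local[2]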
1. reduceByKey
val rdd = sc.textFile("/data/wc.input")
val reduceBykey = rdd.flatMap(line => line.split(" ")).map(x => (x, 1)).reduceByKey((a, b) => a + b).collect

2. groupByKey
val rdd = sc.textFile("/data/wc.input")
val groupBykey = rdd.flatMap(line => line.split(" ")).map(x => (x, 1)).groupByKey().map(x => (x._1, x._2.sum)).collect

3. sortByKey
val rdd = sc.textFile("/data/wc.input")
val groupBykey = rdd.flatMap(line => line.split(" ")).map(x => (x, 1)).groupByKey().map(x => (x._1, x._2.sum))
1) Ascending by key
groupBykey.sortByKey().collect
2) Descending by key
groupBykey.sortByKey(false).collect
3) Descending by value (swap key and value, sort, then swap back)
groupBykey.map(x => (x._2, x._1)).sortByKey(false).map(x => (x._2, x._1)).collect

4. join
val rdd1 = sc.parallelize(List("1,spark", "2,sqoop")).map(_.split(",")).map(x => (x(0), x(1)))
val rdd2 = sc.parallelize(List("1,hadoop", "2,yarn")).map(_.split(",")).map(x => (x(0), x(1)))
rdd1.join(rdd2).collect
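As a quick sanity check, here is a self-contained sketch of the examples above. The input lines and the variable name lines are hypothetical stand-ins (not from the original post), and sc.parallelize replaces sc.textFile so it runs without HDFS; expected results appear in the comments.

// Hypothetical stand-in for /data/wc.input: two lines of space-separated words
val lines = sc.parallelize(List("hadoop spark spark", "spark yarn"))

// Word count via reduceByKey
lines.flatMap(_.split(" ")).map(x => (x, 1)).reduceByKey(_ + _).collect
// Array((hadoop,1), (yarn,1), (spark,3))   -- element order may vary

// Descending by value: swap to (count, word), sort, swap back
lines.flatMap(_.split(" ")).map(x => (x, 1)).reduceByKey(_ + _)
  .map(x => (x._2, x._1)).sortByKey(false).map(x => (x._2, x._1)).collect
// Array((spark,3), (hadoop,1), (yarn,1))   -- ties keep arbitrary order

// The join in section 4 is deterministic in content:
// rdd1.join(rdd2).collect gives Array((1,(spark,hadoop)), (2,(sqoop,yarn)))   -- key order may vary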
  • Grouped sorting (sort the values within each key)
1) Method 1
rdd.map(line => line.split(" ")).map(x => (x(0),x(1).toInt)).groupByKey().map(x => (x._1,x._2.toList.sorted.reverse)).collect
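A quick check of Method 1 with a hypothetical input (the RDD name demo and its contents are illustrative, not from the original post); each line is "<key> <value>":

// Hypothetical input: one "<key> <value>" pair per line
val demo = sc.parallelize(List("a 3", "a 1", "b 2", "a 2", "b 5"))
demo.map(line => line.split(" ")).map(x => (x(0), x(1).toInt)).groupByKey().map(x => (x._1, x._2.toList.sorted.reverse)).collect
// Array((a,List(3, 2, 1)), (b,List(5, 2)))   -- key order may vary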

2) Method 2
Use :paste mode so spark-shell accepts the multi-line block:
rdd.map(_.split(" ")).map(x => (x(0), x(1).toInt)).groupByKey()
  .map(x => {
    val xx = x._1                       // the key
    val xy = x._2.toList.sorted.reverse // its values, descending
    (xx, xy)
  }).collect
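The same logic reads more cleanly with mapValues, which transforms only the value side of each pair and leaves the key (and the partitioner) untouched; a minimal sketch, reusing the hypothetical demo RDD from the sketch above:

// Equivalent formulation: mapValues touches only the value side of each pair
demo.map(_.split(" ")).map(x => (x(0), x(1).toInt))
  .groupByKey()
  .mapValues(_.toList.sorted.reverse)
  .collect
// Array((a,List(3, 2, 1)), (b,List(5, 2)))   -- same result as Methods 1 and 2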
