Spark | RDD Programming

Creating RDDs

val rdd = sc.parallelize(Array(1,2,3,4), 4)   // distribute a local collection across 4 partitions
rdd.count()
rdd.foreach(print)
rdd.foreach(println)

val rdd = sc.textFile("")   // read a text file into an RDD; pass the file path here

Difference between val and var in Scala
val: once assigned, the binding cannot be changed to point at another value // reassignment to val
var: after assignment, it can be reassigned to another value of the same type
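
A minimal illustration in the spark-shell (the names a and b are just for demonstration):

val a = 1
a = 2          // error: reassignment to val

var b = 1
b = 2          // ok, b is now 2
b = "spark"    // error: type mismatch (b is an Int)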

Basic operations

Transformation
map()          takes a function, applies it to every element of the RDD, and returns a new RDD
filter()       takes a predicate and returns a new RDD containing only the elements that satisfy it
flatMap()      maps each input element to zero or more output elements and flattens the results

val lines = sc.parallelize(Array("hello", "world", "hello", "spark", "!"))
val mapResult = lines.map(word => (word, 1))
mapResult.foreach(println)

val filterResult = lines.filter(word=>word.contains("hello"))
filterResult.foreach(println)

val input = sc.textFile("/data/spark/demo/hellospark")
input.foreach(println)
val flatMapResult = input.flatMap(line=>line.split(" "))
flatMapResult.foreach(println)

Set operations

val lineA = sc.parallelize(Array("coffee", "coffee", "panda", "monkey", "tea"))
val lineB = sc.parallelize(Array("coffee", "monkey", "kitty"))
val distinctResult = lineA.distinct()       // remove duplicates
val unionResult = lineA.union(lineB)        // union (keeps duplicates)
val interResult = lineA.intersection(lineB) // intersection
val subResult = lineA.subtract(lineB)       // difference (elements in lineA but not in lineB)
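
Collecting these RDDs shows what each operation keeps (element order within an RDD is not guaranteed):

distinctResult.collect()   // Array(coffee, panda, monkey, tea)
unionResult.collect()      // Array(coffee, coffee, panda, monkey, tea, coffee, monkey, kitty)
interResult.collect()      // Array(coffee, monkey)
subResult.collect()        // Array(panda, tea)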

Action

reduce()       takes a function that combines two elements of the same type and returns a new element of that type
collect()      returns all of the RDD's elements to the driver
take(n)        returns n elements from the RDD; the result is not sorted
top(n)         returns the largest n elements (sorted)
foreach()      applies a function to every element of the RDD

val rdd = sc.parallelize(Array(1, 2, 3, 3))
rdd.collect()                  // Array(1, 2, 3, 3)
rdd.reduce((x, y) => x + y)    // 9
rdd.top(1)                     // Array(3)
rdd.foreach(println)
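
take(n) from the list above is not demonstrated; on the same rdd it looks like this:

rdd.take(2)   // Array(1, 2) -- the first two elements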

Key-value pairs

val rddResult = sc.parallelize(Array((1, 2), (3, 4), (3, 6)))

reduceByKey(func)     combine values that share the same key with func
                      rddResult.reduceByKey((x, y) => x + y)   // {(1, 2), (3, 10)}

groupByKey()          group values that share the same key
                      rddResult.groupByKey()                   // {(1, [2]), (3, [4, 6])}

mapValues(func)       apply func to each value, keys unchanged
                      rddResult.mapValues(x => x + 1)          // {(1, 3), (3, 5), (3, 7)}

flatMapValues(func)   apply func to each value and flatten the results, often used for tokenization
                      rddResult.flatMapValues(x => x to 5)     // {(1, 2), (1, 3), (1, 4), (1, 5), (3, 4), (3, 5)}

keys                  return an RDD of just the keys

values                return an RDD of just the values

sortByKey()           return an RDD sorted by key
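
keys, values, and sortByKey have no examples above; on the same rddResult they give:

rddResult.keys.collect()          // Array(1, 3, 3)
rddResult.values.collect()        // Array(2, 4, 6)
rddResult.sortByKey().collect()   // Array((1,2), (3,4), (3,6))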

val rdd = sc.textFile("/data/spark/demo/hellospark")
rdd.foreach(println)

val rdd2 = rdd.map(line => (line.split(" ")(0), line))   // key each line by its first word
rdd2.foreach(println)

combineByKey()

combineByKey(createCombiner, mergeValue, mergeCombiners) aggregates values per key with three functions: create the initial accumulator for a key, fold another value for that key into the accumulator, and merge accumulators across partitions.

Computing the average score per name
val scores = sc.parallelize(Array(
  ("jack", 80.0),
  ("jack", 90.0),
  ("jack", 85.0),
  ("mike", 85.0),
  ("mike", 85.0),
  ("mike", 90.0)))

val scoreResult = scores.combineByKey(
  score => (1, score),                                                        // createCombiner: (count, sum)
  (c1: (Int, Double), newScore) => (c1._1 + 1, c1._2 + newScore),             // mergeValue
  (c1: (Int, Double), c2: (Int, Double)) => (c1._1 + c2._1, c1._2 + c2._2))   // mergeCombiners
val average = scoreResult.map { case (name, (num, score)) => (name, score / num) }
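
Collecting the result gives the per-name averages, roughly:

average.collect()   // Array((jack,85.0), (mike,86.666...))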
