Spark RDD key-value transformations and actions, with examples

Creating a key-value RDD

>>> kvRDD1 = sc.parallelize([(3,6),(6,9),(3,4),(5,6),(1,2)])

Transformation: extracting keys and values

>>> kvRDD1.collect()
[(3, 6), (6, 9), (3, 4), (5, 6), (1, 2)]
>>> kvRDD1.keys().collect()
[3, 6, 3, 5, 1]
>>> kvRDD1.values().collect()
[6, 9, 4, 6, 2]

filter: keep only the pairs whose key (or value) satisfies a predicate

>>> kvRDD1.filter(lambda keyValue:keyValue[0]<5).collect()
[(3, 6), (3, 4), (1, 2)]
>>> kvRDD1.filter(lambda keyValue:keyValue[1]<5).collect()
[(3, 4), (1, 2)]

mapValues: apply a function to the value of each (key, value) pair, leaving the keys unchanged

>>> kvRDD1.mapValues(lambda x:x**2).collect()
[(3, 36), (6, 81), (3, 16), (5, 36), (1, 4)]

sortByKey: sorts by key, in ascending order by default

>>> kvRDD1.sortByKey(ascending=True).collect()
[(1, 2), (3, 6), (3, 4), (5, 6), (6, 9)]

reduceByKey: reduces the values that share the same key; here, values with the same key are summed

>>> kvRDD1.reduceByKey(lambda x,y:x+y).collect()
[(5, 6), (1, 2), (6, 9), (3, 10)]
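For readers without a Spark shell handy, the group-then-fold behaviour of reduceByKey can be sketched in plain Python; the `reduce_by_key` helper below is illustrative only, not part of the RDD API:

```python
import functools
from collections import defaultdict

def reduce_by_key(pairs, func):
    # Group values by key, then fold each group with func,
    # mimicking RDD.reduceByKey (Spark does not guarantee key order).
    grouped = defaultdict(list)
    for k, v in pairs:
        grouped[k].append(v)
    return [(k, functools.reduce(func, vs)) for k, vs in grouped.items()]

kv = [(3, 6), (6, 9), (3, 4), (5, 6), (1, 2)]
print(sorted(reduce_by_key(kv, lambda x, y: x + y)))
# [(1, 2), (3, 10), (5, 6), (6, 9)]
```

Sorting is only for a stable display; Spark's own result order depends on partitioning, as the shell output above shows.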

Transformations on multiple key-value RDDs

>>> kvRDD2 = sc.parallelize([(3,6),(3,8),(6,12)])
>>> kvRDD1 = sc.parallelize([(3,6),(6,9),(3,4),(5,6),(1,2)])

join: inner-joins two RDDs on matching keys

>>> kvRDD1.join(kvRDD2).collect()
[(3, (6, 6)), (3, (6, 8)), (3, (4, 6)), (3, (4, 8)), (6, (9, 12))]
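A rough local sketch of these inner-join semantics on plain lists (the `inner_join` helper is hypothetical, shown only to clarify the output shape):

```python
def inner_join(left, right):
    # Emit (k, (left_value, right_value)) for every pair of entries
    # that share a key, like RDD.join.
    return [(k, (lv, rv)) for k, lv in left for k2, rv in right if k == k2]

kvRDD1 = [(3, 6), (6, 9), (3, 4), (5, 6), (1, 2)]
kvRDD2 = [(3, 6), (3, 8), (6, 12)]
print(sorted(inner_join(kvRDD1, kvRDD2)))
# [(3, (4, 6)), (3, (4, 8)), (3, (6, 6)), (3, (6, 8)), (6, (9, 12))]
```

Note that key 3 appears twice on each side, so the join produces all 2×2 combinations for it; keys 1 and 5 are dropped because they have no match.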

leftOuterJoin: if a key on the left has no match on the right, the right-side value in the joined pair is None

>>> kvRDD1.leftOuterJoin(kvRDD2).collect()
[(1, (2, None)), (3, (6, 6)), (3, (6, 8)), (3, (4, 6)), (3, (4, 8)), (5, (6, None)), (6, (9, 12))]
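The None-filling behaviour can also be sketched locally; `left_outer_join` below is an illustrative helper matching the output shape shown above:

```python
def left_outer_join(left, right):
    # Index right-side values by key, then pair each left entry with every
    # match, or with None when the key is absent (like RDD.leftOuterJoin).
    rights = {}
    for k, v in right:
        rights.setdefault(k, []).append(v)
    return [(k, (lv, rv)) for k, lv in left for rv in rights.get(k, [None])]

kvRDD1 = [(3, 6), (6, 9), (3, 4), (5, 6), (1, 2)]
kvRDD2 = [(3, 6), (3, 8), (6, 12)]
print(sorted(left_outer_join(kvRDD1, kvRDD2)))
# [(1, (2, None)), (3, (4, 6)), (3, (4, 8)), (3, (6, 6)), (3, (6, 8)), (5, (6, None)), (6, (9, 12))]
```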

rightOuterJoin: conversely, every key from the right RDD is kept; here each right-side key has a match, so no None appears

>>> kvRDD1.rightOuterJoin(kvRDD2).collect()
[(3, (6, 6)), (3, (6, 8)), (3, (4, 6)), (3, (4, 8)), (6, (9, 12))]

subtractByKey: removes the pairs whose key also appears in the other RDD

>>> kvRDD1.subtractByKey(kvRDD2).collect()
[(1, 2), (5, 6)]
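The same filtering can be reproduced with a set of the right-hand keys; `subtract_by_key` is an illustrative helper, not an RDD method:

```python
def subtract_by_key(left, right):
    # Keep only the pairs whose key never occurs in `right`,
    # like RDD.subtractByKey.
    right_keys = {k for k, _ in right}
    return [(k, v) for k, v in left if k not in right_keys]

kvRDD1 = [(3, 6), (6, 9), (3, 4), (5, 6), (1, 2)]
kvRDD2 = [(3, 6), (3, 8), (6, 12)]
print(subtract_by_key(kvRDD1, kvRDD2))  # [(5, 6), (1, 2)]
```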

Key-value actions

>>> kvRDD1.first()
(3, 6)
>>> kvRDD1.take(3)
[(3, 6), (6, 9), (3, 4)]
>>> kvRDD1.first()[0]
3
>>> kvRDD1.first()[1]
6
>>> kvRDD1.countByKey()
defaultdict(<class 'int'>, {3: 2, 6: 1, 5: 1, 1: 1})
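countByKey collects the per-key counts to the driver as a dict-like object; the same tally can be reproduced locally with collections.Counter:

```python
from collections import Counter

kv = [(3, 6), (6, 9), (3, 4), (5, 6), (1, 2)]
counts = Counter(k for k, _ in kv)  # count how often each key occurs
print(dict(counts))  # {3: 2, 6: 1, 5: 1, 1: 1}
```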

lookup: which values are stored under a given key?

>>> kvRDD1.lookup(3)
[6, 4]
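lookup amounts to filtering by key and keeping only the values; a local sketch (the `lookup` helper here is illustrative):

```python
def lookup(pairs, key):
    # Return every value stored under `key`, in encounter order,
    # like RDD.lookup.
    return [v for k, v in pairs if k == key]

kv = [(3, 6), (6, 9), (3, 4), (5, 6), (1, 2)]
print(lookup(kv, 3))  # [6, 4]
```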

 
