Spark's joins are analogous to MySQL's joins: a MySQL join connects tables in a query, while a Spark join connects pair RDD datasets by key. Spark provides four main join operations: join, leftOuterJoin, rightOuterJoin, and fullOuterJoin.
join: equivalent to MySQL's INNER JOIN; a key is returned only when it exists in both datasets.
leftOuterJoin: equivalent to MySQL's LEFT JOIN; returns every key from the left dataset, paired with the matching right-side value where one exists and None otherwise.
rightOuterJoin: equivalent to MySQL's RIGHT JOIN; returns every key from the right dataset, paired with the matching left-side value where one exists and None otherwise.
fullOuterJoin: returns every key from both datasets; whichever side is missing a key is filled with None.
Let's look at an example in code:
from pyspark import SparkConf, SparkContext

# Set an explicit local master so the script also runs without spark-submit flags.
conf = SparkConf().setMaster("local[*]").setAppName("join_demo")
sc = SparkContext(conf=conf)

def func_join():
    # Two pair RDDs keyed by field name; "name" and "age" appear in both.
    a = sc.parallelize([("name", "Alice"), ("age", 20), ("job", "student"), ("fav", "basket")])
    b = sc.parallelize([("name", "Bob"), ("age", 22), ("address", "WuHan")])
    print("join:{}".format(a.join(b).collect()))
    print("leftOuterJoin:{}".format(a.leftOuterJoin(b).collect()))
    print("rightOuterJoin:{}".format(a.rightOuterJoin(b).collect()))
    print("fullOuterJoin:{}".format(a.fullOuterJoin(b).collect()))

func_join()
sc.stop()
"""
result:
join:[('name', ('Alice', 'Bob')), ('age', (20, 22))]
leftOuterJoin:[('fav', ('basket', None)), ('name', ('Alice', 'Bob')), ('job', ('student', None)), ('age', (20, 22))]
rightOuterJoin:[('name', ('Alice', 'Bob')), ('age', (20, 22)), ('address', (None, 'WuHan'))]
fullOuterJoin:[('fav', ('basket', None)), ('name', ('Alice', 'Bob')), ('job', ('student', None)), ('age', (20, 22)), ('address', (None, 'WuHan'))]
"""