Resilient Distributed Datasets (RDDs) are the elements that run and operate on multiple nodes and are processed in parallel across a cluster. RDDs are immutable: once an RDD is created, it cannot be changed. RDDs are also fault tolerant, so they recover automatically when a failure occurs. Multiple operations can be applied on these RDDs to accomplish a task. From a developer's point of view, an RDD can be seen as a Spark object that itself lives in memory: reading a file yields an RDD, a computation on that file is an RDD, the result set is an RDD, and partitions, dependencies between data, and key-value map data can all be regarded as RDDs.
There are two kinds of operations on these RDDs (see the short sketch after this list):
Transformation - These operations are applied to an RDD to create a new RDD. filter, groupBy and map are examples of transformations.
Action - These operations are applied to an RDD and instruct Spark to perform the computation and send the result back to the driver.
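The sketch below is a minimal, self-contained illustration of this difference (the file name lazy_demo.py and the app name are only for illustration): map is a transformation that merely records the lineage, while collect is an action that triggers the computation and returns the result to the driver.
----------------------------------------lazy_demo.py---------------------------------------
from pyspark import SparkContext

sc = SparkContext("local", "Lazy demo app")
nums = sc.parallelize([1, 2, 3, 4])

# Transformation: nothing is computed yet, Spark only records how to build the new RDD.
squares = nums.map(lambda x: x * x)

# Action: the computation actually runs and the result is sent back to the driver.
print("Squares -> %s" % squares.collect())   # Squares -> [1, 4, 9, 16]

sc.stop()
----------------------------------------lazy_demo.py---------------------------------------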
To apply any operation in PySpark, we first need to create a PySpark RDD. The following code block shows the details of the PySpark RDD class −
class pyspark.RDD (
jrdd,
ctx,
jrdd_deserializer = AutoBatchedSerializer(PickleSerializer())
)
Let us see how to run a few basic operations using PySpark. The following code in a Python file creates an RDD named words, which stores a set of words; we will first count its elements with count() and then retrieve them with collect().
words = sc.parallelize (
["scala",
"java",
"hadoop",
"spark",
"akka",
"spark vs hadoop",
"pyspark",
"pyspark and spark"]
)
----------------------------------------count.py---------------------------------------
from pyspark import SparkContext
sc = SparkContext("local", "count app")
words = sc.parallelize (
["scala",
"java",
"hadoop",
"spark",
"akka",
"spark vs hadoop",
"pyspark",
"pyspark and spark"]
)
counts = words.count()
print("Number of elements in RDD -> %i" % (counts))
----------------------------------------count.py---------------------------------------
Command − The command for count() is −
$SPARK_HOME/bin/spark-submit count.py
Output − The output for the above command is −
Number of elements in RDD -> 8
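A closely related action, countByValue() (part of the standard RDD API), returns how often each distinct element occurs instead of only the total. A short sketch, with an illustrative file name:
----------------------------------------count_by_value.py---------------------------------------
from pyspark import SparkContext

sc = SparkContext("local", "countByValue app")
words = sc.parallelize(["spark", "hadoop", "spark", "pyspark"])

# count() gives the total number of elements,
# countByValue() a dict-like mapping of each element to its frequency.
print(words.count())                 # 4
print(dict(words.countByValue()))    # e.g. {'spark': 2, 'hadoop': 1, 'pyspark': 1}

sc.stop()
----------------------------------------count_by_value.py---------------------------------------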
----------------------------------------collect.py---------------------------------------
from pyspark import SparkContext
sc = SparkContext("local", "Collect app")
words = sc.parallelize (
["scala",
"java",
"hadoop",
"spark",
"akka",
"spark vs hadoop",
"pyspark",
"pyspark and spark"]
)
coll = words.collect()
print "Elements in RDD -> %s" % (coll)
----------------------------------------collect.py---------------------------------------
Command − The command for collect() is −
$SPARK_HOME/bin/spark-submit collect.py
Output − The output for the above command is −
Elements in RDD -> [
'scala',
'java',
'hadoop',
'spark',
'akka',
'spark vs hadoop',
'pyspark',
'pyspark and spark'
]
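Note that collect() transfers every element back to the driver, which can exhaust driver memory on large RDDs. The sketch below (illustrative file name) uses the standard take() and first() actions, which only fetch a bounded number of elements:
----------------------------------------take.py---------------------------------------
from pyspark import SparkContext

sc = SparkContext("local", "Take app")
words = sc.parallelize(["scala", "java", "hadoop", "spark"])

# take(n) returns only the first n elements to the driver,
# avoiding the full transfer that collect() performs.
print(words.take(2))    # ['scala', 'java']
print(words.first())    # 'scala'

sc.stop()
----------------------------------------take.py---------------------------------------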
foreach(f) - Applies the given function to each element of the RDD; nothing is returned to the driver. In the following example, we call a print function inside foreach, which prints all the elements in the RDD.
----------------------------------------foreach.py---------------------------------------
from pyspark import SparkContext
sc = SparkContext("local", "ForEach app")
words = sc.parallelize (
["scala",
"java",
"hadoop",
"spark",
"akka",
"spark vs hadoop",
"pyspark",
"pyspark and spark"]
)
def f(x): print(x)
words.foreach(f)
----------------------------------------foreach.py---------------------------------------
Command − The command for foreach(f) is −
$SPARK_HOME/bin/spark-submit foreach.py
Output − The output for the above command is −
scala
java
hadoop
spark
akka
spark vs hadoop
pyspark
pyspark and spark
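Because foreach runs on the executors, the print() output above is visible in the driver console only when running with the local master; in cluster mode it ends up in the executor logs. A common alternative for collecting side effects is an accumulator, sketched below with an illustrative file name:
----------------------------------------foreach_accumulator.py---------------------------------------
from pyspark import SparkContext

sc = SparkContext("local", "Accumulator app")
words = sc.parallelize(["scala", "java", "hadoop", "spark"])

# The accumulator is updated on the executors and read back on the driver,
# so the result is visible regardless of where foreach actually runs.
counter = sc.accumulator(0)

def visit(word):
    counter.add(1)

words.foreach(visit)
print("Elements visited -> %i" % counter.value)   # Elements visited -> 4

sc.stop()
----------------------------------------foreach_accumulator.py---------------------------------------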
filter(f) - Returns a new RDD containing only the elements that satisfy the function inside filter. In the following example, we keep only the strings that contain 'spark'.
----------------------------------------filter.py---------------------------------------
from pyspark import SparkContext
sc = SparkContext("local", "Filter app")
words = sc.parallelize (
["scala",
"java",
"hadoop",
"spark",
"akka",
"spark vs hadoop",
"pyspark",
"pyspark and spark"]
)
words_filter = words.filter(lambda x: 'spark' in x)
filtered = words_filter.collect()
print("Fitered RDD -> %s" % (filtered))
----------------------------------------filter.py----------------------------------------
Command − The command for filter(f) is −
$SPARK_HOME/bin/spark-submit filter.py
Output − The output for the above command is −
Filtered RDD -> [
'spark',
'spark vs hadoop',
'pyspark',
'pyspark and spark'
]
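Since filter() returns a new RDD, transformations can be chained and only the final action triggers the computation. A brief sketch (illustrative file name):
----------------------------------------filter_chain.py---------------------------------------
from pyspark import SparkContext

sc = SparkContext("local", "Filter chain app")
words = sc.parallelize(
   ["scala", "java", "hadoop", "spark", "akka",
    "spark vs hadoop", "pyspark", "pyspark and spark"]
)

# filter() is lazy; count() is the action that runs the whole chain.
spark_count = words.filter(lambda x: 'spark' in x).count()
print("Strings containing 'spark' -> %i" % spark_count)   # 4

sc.stop()
----------------------------------------filter_chain.py---------------------------------------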
map(f, preservesPartitioning=False) - Returns a new RDD by applying a function to each element of the RDD. In the following example, we form key-value pairs by mapping every string to the value 1.
----------------------------------------map.py---------------------------------------
from pyspark import SparkContext
sc = SparkContext("local", "Map app")
words = sc.parallelize (
["scala",
"java",
"hadoop",
"spark",
"akka",
"spark vs hadoop",
"pyspark",
"pyspark and spark"]
)
words_map = words.map(lambda x: (x, 1))
mapping = words_map.collect()
print("Key value pair -> %s" % (mapping))
----------------------------------------map.py---------------------------------------
Command − The command for map(f, preservesPartitioning=False) is −
$SPARK_HOME/bin/spark-submit map.py
Output − The output of the above command is −
Key value pair -> [
('scala', 1),
('java', 1),
('hadoop', 1),
('spark', 1),
('akka', 1),
('spark vs hadoop', 1),
('pyspark', 1),
('pyspark and spark', 1)
]
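The (word, 1) pairs produced by map are usually the first half of a word count. The sketch below (illustrative file name) completes it with the standard flatMap and reduceByKey operations:
----------------------------------------wordcount.py---------------------------------------
from pyspark import SparkContext
from operator import add

sc = SparkContext("local", "WordCount app")
words = sc.parallelize(
   ["scala", "java", "hadoop", "spark", "akka",
    "spark vs hadoop", "pyspark", "pyspark and spark"]
)

# Split every string into words, map each word to (word, 1),
# then sum the counts per key.
counts = (words.flatMap(lambda line: line.split(" "))
               .map(lambda w: (w, 1))
               .reduceByKey(add)
               .collect())
print("Word counts -> %s" % counts)

sc.stop()
----------------------------------------wordcount.py---------------------------------------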
reduce(f) - Performs the specified commutative and associative binary operation on the elements of the RDD and returns the result. In the following example, we import add from the operator module and apply it to 'nums' to carry out a simple addition operation.
----------------------------------------reduce.py---------------------------------------
from pyspark import SparkContext
from operator import add
sc = SparkContext("local", "Reduce app")
nums = sc.parallelize([1, 2, 3, 4, 5])
adding = nums.reduce(add)
print("Adding all the elements -> %i" % (adding))
----------------------------------------reduce.py---------------------------------------
Command − The command for reduce(f) is −
$SPARK_HOME/bin/spark-submit reduce.py
Output − The output of the above command is −
Adding all the elements -> 15
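Any commutative and associative function can be passed to reduce, not just operator.add. A short sketch (illustrative file name):
----------------------------------------reduce_lambda.py---------------------------------------
from pyspark import SparkContext

sc = SparkContext("local", "Reduce lambda app")
nums = sc.parallelize([1, 2, 3, 4, 5])

# Multiplication and max are both commutative and associative,
# so they are safe to use with reduce.
product = nums.reduce(lambda a, b: a * b)
maximum = nums.reduce(lambda a, b: a if a > b else b)
print("Product -> %i, Max -> %i" % (product, maximum))   # Product -> 120, Max -> 5

sc.stop()
----------------------------------------reduce_lambda.py---------------------------------------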
join(other, numPartitions = None) - Joins two RDDs of key-value pairs. In the following example, there are two pairs of elements in two different RDDs. After joining these two RDDs, we get an RDD whose elements have the matching keys along with their values.
----------------------------------------join.py---------------------------------------
from pyspark import SparkContext
sc = SparkContext("local", "Join app")
x = sc.parallelize([("spark", 1), ("hadoop", 4)])
y = sc.parallelize([("spark", 2), ("hadoop", 5)])
joined = x.join(y)
final = joined.collect()
print("Join RDD -> %s" % (final))
----------------------------------------join.py---------------------------------------
Command − The command for join(other, numPartitions = None) is −
$SPARK_HOME/bin/spark-submit join.py
Output − The output for the above command is −
Join RDD -> [
('spark', (1, 2)),
('hadoop', (4, 5))
]
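join() keeps only the keys present in both RDDs. When unmatched keys must be preserved, the standard leftOuterJoin() can be used instead, as in this sketch (illustrative file name):
----------------------------------------outer_join.py---------------------------------------
from pyspark import SparkContext

sc = SparkContext("local", "Outer join app")
x = sc.parallelize([("spark", 1), ("hadoop", 4), ("akka", 7)])
y = sc.parallelize([("spark", 2), ("hadoop", 5)])

# Every key from x is kept; missing values from y become None.
joined = x.leftOuterJoin(y)
print("Left outer join RDD -> %s" % joined.collect())
# e.g. [('spark', (1, 2)), ('hadoop', (4, 5)), ('akka', (7, None))]

sc.stop()
----------------------------------------outer_join.py---------------------------------------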
cache() - Persists this RDD with the default storage level (MEMORY_ONLY). You can also check whether the RDD is cached or not.
----------------------------------------cache.py---------------------------------------
from pyspark import SparkContext
sc = SparkContext("local", "Cache app")
words = sc.parallelize (
["scala",
"java",
"hadoop",
"spark",
"akka",
"spark vs hadoop",
"pyspark",
"pyspark and spark"]
)
words.cache()
caching = words.persist().is_cached
print("Words got chached > %s" % (caching))
----------------------------------------cache.py---------------------------------------
Command − The command for cache() is −
$SPARK_HOME/bin/spark-submit cache.py
Output − The output for the above program is −
Words got cached -> True
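persist() also accepts an explicit storage level, and unpersist() releases the cached data when it is no longer needed. A brief sketch using the standard StorageLevel class (illustrative file name):
----------------------------------------persist.py---------------------------------------
from pyspark import SparkContext, StorageLevel

sc = SparkContext("local", "Persist app")
nums = sc.parallelize(range(1000))

# Ask Spark to keep the RDD in memory and spill to disk if needed.
nums.persist(StorageLevel.MEMORY_AND_DISK)
print("Storage level -> %s" % nums.getStorageLevel())

# Release the cached data once it is no longer needed.
nums.unpersist()

sc.stop()
----------------------------------------persist.py---------------------------------------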