Reposted from http://homepage.cs.latrobe.edu.au/zhe/ZhenHeSparkRDDAPIExamples.html#aggregate
Our research group has a very strong focus on using and improving Apache Spark to solve real-world problems. In order to do this we need to have a very solid understanding of the capabilities of Spark. So one of the first things we have done is to go through the entire Spark RDD API and write examples to test their functionality. This has been a very useful exercise and we would like to share the examples with everyone.

Authors of examples: Matthias Langer and Zhen He
Email addresses: [email protected], [email protected]

These examples have only been tested for Spark version 1.4. We assume the functionality of Spark is stable and therefore the examples should be valid for later releases. If you find any errors in the examples we would love to hear about them so we can fix them up. So please email us to let us know.

The RDD API By Example

RDD is short for Resilient Distributed Dataset. RDDs are the workhorse of the Spark system. As a user, one can consider a RDD as a handle for a collection of individual data partitions, which are the result of some computation. However, an RDD is actually more than that. On cluster installations, separate data partitions can be on separate nodes. Using the RDD as a handle one can access all partitions and perform computations and transformations using the contained data. Whenever a part of a RDD or an entire RDD is lost, the system is able to reconstruct the data of lost partitions by using lineage information. Lineage refers to the sequence of transformations used to produce the current RDD. As a result, Spark is able to recover automatically from most failures.

All RDDs available in Spark derive either directly or indirectly from the class RDD. This class comes with a large set of methods that perform operations on the data within the associated partitions. The class RDD is abstract. Whenever one uses a RDD, one is actually using a concrete implementation of RDD. These implementations have to overwrite some core functions to make the RDD behave as expected.

One reason why Spark has lately become a very popular system for processing big data is that it does not impose restrictions regarding what data can be stored within RDD partitions. The RDD API already contains many useful operations. But, because the creators of Spark had to keep the core API of RDDs common enough to handle arbitrary data-types, many convenience functions are missing. The basic RDD API considers each data item as a single value. However, users often want to work with key-value pairs. Therefore Spark extended the interface of RDD to provide additional functions (PairRDDFunctions), which explicitly work on key-value pairs. Currently, there are four extensions to the RDD API available in Spark. They are as follows:

DoubleRDDFunctions
This extension contains many useful methods for aggregating numeric values. They become available if the data items of an RDD are implicitly convertible to the Scala data-type double.

PairRDDFunctions
Methods defined in this interface extension become available when the data items have a two-component tuple structure. Spark will interpret the first tuple item (i.e. tuplename._1) as the key and the second item (i.e. tuplename._2) as the associated value.

OrderedRDDFunctions
Methods defined in this interface extension become available if the data items are two-component tuples where the key is implicitly sortable.

SequenceFileRDDFunctions
This extension contains several methods that allow users to create Hadoop sequence files from RDDs. The data items must be two-component key-value tuples as required by the PairRDDFunctions. However, there are additional requirements considering the convertibility of the tuple components to Writable types.

Since Spark will make methods with extended functionality automatically available to users when the data items fulfill the above described requirements, we decided to list all possible available functions in strictly alphabetical order. We will append one of the following tags to the function name to indicate it belongs to an extension that requires the data items to conform to a certain format or type.

[Double] - DoubleRDDFunctions
[Ordered] - OrderedRDDFunctions
[Pair] - PairRDDFunctions
[SeqFile] - SequenceFileRDDFunctions
aggregate
The aggregate function allows the user to apply two different reduce functions to the RDD. The first reduce function is applied within each partition to reduce the data within each partition into a single result. The second reduce function is used to combine the different reduced results of all partitions together to arrive at one final result. The ability to have two separate reduce functions for intra-partition versus cross-partition reducing adds a lot of flexibility. For example, the first reduce function can be the max function and the second one can be the sum function. The user also specifies an initial value, which is applied both within each partition and when the partition results are combined. Listing Variants
def aggregate[U: ClassTag](zeroValue: U)(seqOp: (U, T) => U, combOp: (U, U) => U): U
Examples 1
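The original example code is missing from this copy of the page; a sketch along the following lines (sample values assumed) reproduces the behaviour discussed below:

val z = sc.parallelize(List("12", "23", "345", ""), 2)
// Within each partition, reduce to the length of the shorter string (kept as a string);
// across partitions, concatenate the per-partition results.
z.aggregate("")((x, y) => math.min(x.length, y.length).toString, (x, y) => x + y)
// One partition holds ("12", "23"), the other ("345", "").
// Partition 1: min(0,2) = 0 -> "0", then min(1,2) = 1 -> "1"
// Partition 2: min(0,3) = 0 -> "0", then min(1,0) = 0 -> "0"
// Combined result (combine order may vary): "10"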
The main issue with the code above is that the result of the inner min is a string of length 1. The zero in the output is due to the empty string being the last string in the list. We see this result because we are not recursively reducing any further within the partition for the final string.
Examples 2
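Again the example code is absent here; a minimal sketch of the max-then-sum pattern mentioned above (sample values assumed):

val z = sc.parallelize(List(1, 2, 3, 4, 5, 6), 2)
// Take the maximum within each partition, then sum the partition maxima.
// The zero value 0 is used both in the intra-partition step and in the combining step.
z.aggregate(0)(math.max(_, _), _ + _)
// Partitions (1,2,3) and (4,5,6) yield maxima 3 and 6; 0 + 3 + 6 = 9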
aggregateByKey [Pair]
Works like the aggregate function except the aggregation is applied to the values with the same key. Also, unlike the aggregate function, the initial value is not applied in the second reduce step (the combining of per-partition results).
def aggregateByKey[U](zeroValue: U)(seqOp: (U, V) ⇒ U, combOp: (U, U) ⇒ U)(implicit arg0: ClassTag[U]): RDD[(K, U)]
def aggregateByKey[U](zeroValue: U, numPartitions: Int)(seqOp: (U, V) ⇒ U, combOp: (U, U) ⇒ U)(implicit arg0: ClassTag[U]): RDD[(K, U)] def aggregateByKey[U](zeroValue: U, partitioner: Partitioner)(seqOp: (U, V) ⇒ U, combOp: (U, U) ⇒ U)(implicit arg0: ClassTag[U]): RDD[(K, U)] Example
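A sketch (keys and values assumed) of a per-key max within partitions followed by a sum across partitions:

val pairRDD = sc.parallelize(List(("cat", 2), ("cat", 5), ("mouse", 4), ("cat", 12), ("dog", 12), ("mouse", 2)), 2)
// For each key: per-partition maximum of the values, then sum of the partition maxima.
pairRDD.aggregateByKey(0)(math.max(_, _), _ + _).collect
// e.g. Array((dog,12), (cat,17), (mouse,6))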
cartesian
Computes the cartesian product between two RDDs (i.e. Each item of the first RDD is joined with each item of the second RDD) and returns them as a new RDD.
(Warning: Be careful when using this function! Memory consumption can quickly become an issue!)
def cartesian[U: ClassTag](other: RDD[U]): RDD[(T, U)]
Example
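A minimal sketch (input values assumed):

val x = sc.parallelize(List(1, 2, 3))
val y = sc.parallelize(List(6, 7, 8))
x.cartesian(y).collect
// e.g. Array((1,6), (1,7), (1,8), (2,6), (2,7), (2,8), (3,6), (3,7), (3,8))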
checkpoint Will create a checkpoint when the RDD is computed next. Checkpointed RDDs are stored as a binary file within the checkpoint directory which can be specified using the Spark context. (Warning: Spark applies lazy evaluation. Checkpointing will not occur until an action is invoked.) Important note: the directory "my_directory_name" should exist on all slaves. As an alternative you could use an HDFS directory URL as well. Listing Variants
def checkpoint()
Example
coalesce, repartition
Coalesces the associated data into a given number of partitions.
repartition(numPartitions) is simply an abbreviation for
coalesce(numPartitions, shuffle = true).
def coalesce(numPartitions: Int, shuffle: Boolean = false): RDD[T]
def repartition(numPartitions: Int): RDD[T]
Example
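A sketch of shrinking and growing the partition count (input assumed):

val y = sc.parallelize(1 to 10, 10)
y.coalesce(2, false).partitions.length   // merge down to 2 partitions without a shuffle
// res: Int = 2
y.repartition(5).partitions.length       // equivalent to coalesce(5, shuffle = true)
// res: Int = 5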
cogroup [Pair], groupWith [Pair]
A very powerful set of functions that allow grouping up to 3 key-value RDDs together using their keys.
def cogroup[W](other: RDD[(K, W)]): RDD[(K, (Iterable[V], Iterable[W]))]
def cogroup[W](other: RDD[(K, W)], numPartitions: Int): RDD[(K, (Iterable[V], Iterable[W]))]
def cogroup[W](other: RDD[(K, W)], partitioner: Partitioner): RDD[(K, (Iterable[V], Iterable[W]))]
def cogroup[W1, W2](other1: RDD[(K, W1)], other2: RDD[(K, W2)]): RDD[(K, (Iterable[V], Iterable[W1], Iterable[W2]))]
def cogroup[W1, W2](other1: RDD[(K, W1)], other2: RDD[(K, W2)], numPartitions: Int): RDD[(K, (Iterable[V], Iterable[W1], Iterable[W2]))]
def cogroup[W1, W2](other1: RDD[(K, W1)], other2: RDD[(K, W2)], partitioner: Partitioner): RDD[(K, (Iterable[V], Iterable[W1], Iterable[W2]))]
def groupWith[W](other: RDD[(K, W)]): RDD[(K, (Iterable[V], Iterable[W]))]
def groupWith[W1, W2](other1: RDD[(K, W1)], other2: RDD[(K, W2)]): RDD[(K, (Iterable[V], Iterable[W1], Iterable[W2]))]
Examples
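A sketch of grouping two pair RDDs by their common keys (values assumed):

val a = sc.parallelize(List(1, 2, 1, 3), 1)
val b = a.map((_, "b"))
val c = a.map((_, "c"))
b.cogroup(c).collect
// e.g. Array(
//   (1, (CompactBuffer(b, b), CompactBuffer(c, c))),
//   (2, (CompactBuffer(b), CompactBuffer(c))),
//   (3, (CompactBuffer(b), CompactBuffer(c))))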
collect, toArray
Converts the RDD into a Scala array and returns it. The second variant takes a partial function (f: PartialFunction[T, U]) and returns a new RDD containing the result of applying it to every element for which it is defined.
def collect(): Array[T]
def collect[U: ClassTag](f: PartialFunction[T, U]): RDD[U] def toArray(): Array[T] Example
collectAsMap [Pair]
Similar to collect, but works on key-value RDDs and converts them into a Scala map to preserve their key-value structure.
def collectAsMap(): Map[K, V]
Example
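A minimal sketch (values assumed):

val a = sc.parallelize(List(1, 2, 1, 3), 1)
val b = a.zip(a)
b.collectAsMap
// res: scala.collection.Map[Int,Int] = Map(2 -> 2, 1 -> 1, 3 -> 3)
// Note: duplicate keys collapse into a single map entry.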
combineByKey [Pair]
Very efficient implementation that combines the values of a RDD consisting of two-component tuples by applying multiple aggregators one after another.
def combineByKey[C](createCombiner: V => C, mergeValue: (C, V) => C, mergeCombiners: (C, C) => C): RDD[(K, C)]
def combineByKey[C](createCombiner: V => C, mergeValue: (C, V) => C, mergeCombiners: (C, C) => C, numPartitions: Int): RDD[(K, C)]
def combineByKey[C](createCombiner: V => C, mergeValue: (C, V) => C, mergeCombiners: (C, C) => C, partitioner: Partitioner, mapSideCombine: Boolean = true, serializerClass: String = null): RDD[(K, C)]
Example
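A sketch that gathers all values per key into a list (animal names and keys assumed):

val a = sc.parallelize(List("dog", "cat", "gnu", "salmon", "rabbit", "turkey", "wolf", "bear", "bee"), 3)
val b = sc.parallelize(List(1, 1, 2, 2, 2, 1, 2, 2, 2), 3)
val c = b.zip(a)   // (1,dog), (1,cat), (2,gnu), ...
val d = c.combineByKey(
  (v: String) => List(v),                          // createCombiner: start a list from the first value
  (x: List[String], y: String) => y :: x,          // mergeValue: prepend further values within a partition
  (x: List[String], y: List[String]) => x ::: y)   // mergeCombiners: concatenate the per-partition lists
d.collect
// e.g. Array((1,List(cat, dog, turkey)), (2,List(gnu, rabbit, salmon, bee, bear, wolf)))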
compute
Executes dependencies and computes the actual representation of the RDD. This function should not be called directly by users.
def compute(split: Partition, context: TaskContext): Iterator[T]
context, sparkContext
Returns the SparkContext that was used to create the RDD.
def context: SparkContext
def sparkContext: SparkContext
Example
count
Returns the number of items stored within a RDD.
def count(): Long
Example
countApprox
def countApprox(timeout: Long, confidence: Double = 0.95): PartialResult[BoundedDouble]
countApproxDistinct Computes the approximate number of distinct values. For large RDDs which are spread across many nodes, this function may execute faster than other counting methods. The parameter relativeSD controls the accuracy of the computation. Listing Variants
def countApproxDistinct(relativeSD: Double = 0.05): Long
Example
countApproxDistinctByKey [Pair] Similar to countApproxDistinct, but computes the approximate number of distinct values for each distinct key. Hence, the RDD must consist of two-component tuples. For large RDDs which are spread across many nodes, this function may execute faster than other counting methods. The parameter relativeSD controls the accuracy of the computation. Listing Variants
def countApproxDistinctByKey(relativeSD: Double = 0.05): RDD[(K, Long)]
def countApproxDistinctByKey(relativeSD: Double, numPartitions: Int): RDD[(K, Long)] def countApproxDistinctByKey(relativeSD: Double, partitioner: Partitioner): RDD[(K, Long)] Example
countByKey [Pair]
def countByKey(): Map[K, Long]
Example
countByKeyApprox [Pair]
def countByKeyApprox(timeout: Long, confidence: Double = 0.95): PartialResult[Map[K, BoundedDouble]]
countByValue Returns a map that contains all unique values of the RDD and their respective occurrence counts. (Warning: This operation will finally aggregate the information in a single reducer.) Listing Variants
def countByValue(): Map[T, Long]
Example
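A minimal sketch (values assumed):

val b = sc.parallelize(List(1, 2, 3, 4, 5, 6, 7, 8, 2, 4, 2, 1, 1, 1, 1, 1))
b.countByValue
// e.g. Map(1 -> 6, 2 -> 3, 4 -> 2, 3 -> 1, 5 -> 1, 6 -> 1, 7 -> 1, 8 -> 1)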
countByValueApprox
def countByValueApprox(timeout: Long, confidence: Double = 0.95): PartialResult[Map[T, BoundedDouble]]
dependencies Returns the dependencies of this RDD, i.e. the parent RDDs it was derived from. Listing Variants
final def dependencies: Seq[Dependency[_]]
Example
distinct Returns a new RDD that contains each unique value only once. Listing Variants
def distinct(): RDD[T]
def distinct(numPartitions: Int): RDD[T] Example
first Looks for the very first data item of the RDD and returns it. Listing Variants
def first(): T
Example
filter Evaluates a boolean function for each data item of the RDD and puts the items for which the function returned true into the resulting RDD. Listing Variants
def filter(f: T => Boolean): RDD[T]
Example
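A minimal sketch (input assumed):

val a = sc.parallelize(1 to 10, 3)
a.filter(_ % 2 == 0).collect
// res: Array[Int] = Array(2, 4, 6, 8, 10)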
When you provide a filter function, it must be able to handle all data items contained in the RDD. Scala provides so-called partial functions to deal with mixed data-types. (Tip: Partial functions are very useful if you have some data which may be bad and which you do not want to handle, but for the good (matching) data you want to apply some kind of map function. The following article is good. It teaches you about partial functions in a very nice way and explains why case has to be used for partial functions: article)
Examples for mixed data without partial functions
This fails because some components of a are not implicitly comparable against integers. Collect uses the isDefinedAt property of a function-object to determine whether the test-function is compatible with each data item. Only data items that pass this test (=filter) are then mapped using the function-object. Examples for mixed data with partial functions
Be careful! The above code works because it only checks the type itself! If you use operations on this type, you have to explicitly declare what type you want instead of Any. Otherwise the compiler does (apparently) not know what bytecode it should produce:
filterByRange [Ordered] Returns an RDD containing only the items in the key range specified. From our testing, it appears this only works if your data is in key value pairs and it has already been sorted by key. Listing Variants
def filterByRange(lower: K, upper: K): RDD[P]
Example
filterWith (deprecated) This is an extended version of filter. It takes two function arguments. The first argument must conform to Int => A and is executed once per partition; it transforms the partition index into a value of type A. The second function must conform to (T, A) => Boolean, where T is a data item of the RDD and A is the transformed partition index; it has to return either true or false (i.e. apply the filter). Listing Variants
def filterWith[A: ClassTag](constructA: Int => A)(p: (T, A) => Boolean): RDD[T]
Example
flatMap Similar to map, but allows emitting more than one item in the map function. Listing Variants
def flatMap[U: ClassTag](f: T => TraversableOnce[U]): RDD[U]
Example
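A minimal sketch (input assumed):

val a = sc.parallelize(1 to 3, 1)
a.flatMap(1 to _).collect   // each item x expands to the range 1..x
// res: Array[Int] = Array(1, 1, 2, 1, 2, 3)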
flatMapValues Very similar to mapValues, but collapses the inherent structure of the values during mapping. Listing Variants
def flatMapValues[U](f: V => TraversableOnce[U]): RDD[(K, U)]
Example
flatMapWith (deprecated) Similar to flatMap, but allows accessing the partition index or a derivative of the partition index from within the flatMap-function. Listing Variants
def flatMapWith[A: ClassTag, U: ClassTag](constructA: Int => A, preservesPartitioning: Boolean = false)(f: (T, A) => Seq[U]): RDD[U]
Example
fold Aggregates the values of each partition. The aggregation variable within each partition is initialized with zeroValue. Listing Variants
def fold(zeroValue: T)(op: (T, T) => T): T
Example
foldByKey [Pair] Very similar to fold, but performs the folding separately for each key of the RDD. This function is only available if the RDD consists of two-component tuples. Listing Variants
def foldByKey(zeroValue: V)(func: (V, V) => V): RDD[(K, V)]
def foldByKey(zeroValue: V, numPartitions: Int)(func: (V, V) => V): RDD[(K, V)] def foldByKey(zeroValue: V, partitioner: Partitioner)(func: (V, V) => V): RDD[(K, V)] Example
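A sketch that concatenates all values sharing a key (words assumed, keyed by their length):

val a = sc.parallelize(List("dog", "cat", "owl", "gnu", "ant"), 2)
val b = a.map(x => (x.length, x))
b.foldByKey("")(_ + _).collect
// All words have length 3, so everything folds under key 3 (concatenation order may vary):
// e.g. Array((3,dogcatowlgnuant))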
foreach Executes a function (with no return value) on each data item. Listing Variants
def foreach(f: T => Unit)
Example
foreachPartition Executes a function (with no return value) once per partition. Access to the data items contained in the partition is provided via the iterator argument. Listing Variants
def foreachPartition(f: Iterator[T] => Unit)
Example
foreachWith (Deprecated) Similar to foreach, but the first function argument constructs a per-partition value from the partition index; that value is then passed to the second function together with each data item. Listing Variants
def foreachWith[A: ClassTag](constructA: Int => A)(f: (T, A) => Unit)
Example
fullOuterJoin [Pair] Performs the full outer join between two paired RDDs. Listing Variants
def fullOuterJoin[W](other: RDD[(K, W)], numPartitions: Int): RDD[(K, (Option[V], Option[W]))]
def fullOuterJoin[W](other: RDD[(K, W)]): RDD[(K, (Option[V], Option[W]))] def fullOuterJoin[W](other: RDD[(K, W)], partitioner: Partitioner): RDD[(K, (Option[V], Option[W]))] Example
generator, setGenerator Allows setting a string that is attached to the end of the RDD's name when printing the dependency graph. Listing Variants
@transient var generator
def setGenerator(_generator: String)
getCheckpointFile
Returns the path to the checkpoint file, or None if the RDD has not yet been checkpointed. Listing Variants
def getCheckpointFile: Option[String]
Example
preferredLocations Returns the hosts which are preferred by this RDD. The actual preference of a specific host depends on various assumptions. Listing Variants
final def preferredLocations(split: Partition): Seq[String]
getStorageLevel Retrieves the currently set storage level of the RDD. This can only be used to assign a new storage level if the RDD does not have a storage level set yet. The example below shows the error you will get, when you try to reassign the storage level. Listing Variants
def getStorageLevel
Example
glom Assembles an array that contains all elements of the partition and embeds it in an RDD. Each returned array contains the contents of one partition. Listing Variants
def glom(): RDD[Array[T]]
Example
groupBy Groups the data items using the key returned by the user-supplied discriminator function. Listing Variants
def groupBy[K: ClassTag](f: T => K): RDD[(K, Iterable[T])]
def groupBy[K: ClassTag](f: T => K, numPartitions: Int): RDD[(K, Iterable[T])] def groupBy[K: ClassTag](f: T => K, p: Partitioner): RDD[(K, Iterable[T])] Example
groupByKey [Pair] Very similar to groupBy, but instead of supplying a function, the key-component of each pair will automatically be presented to the partitioner. Listing Variants
def groupByKey(): RDD[(K, Iterable[V])]
def groupByKey(numPartitions: Int): RDD[(K, Iterable[V])] def groupByKey(partitioner: Partitioner): RDD[(K, Iterable[V])] Example
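A sketch grouping words by their length (input assumed):

val a = sc.parallelize(List("dog", "tiger", "lion", "cat", "spider", "eagle"), 2)
a.keyBy(_.length).groupByKey.collect
// e.g. Array((3,CompactBuffer(dog, cat)), (4,CompactBuffer(lion)), (5,CompactBuffer(tiger, eagle)), (6,CompactBuffer(spider)))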
histogram [Double] These functions take an RDD of doubles and create a histogram with either even spacing (the number of buckets equals bucketCount) or arbitrary spacing based on custom bucket boundaries supplied by the user via an array of double values. The result types of the two variants differ slightly: the first function returns a tuple consisting of two arrays. The first array contains the computed bucket boundary values and the second array contains the corresponding count of values (i.e. the histogram). The second variant just returns the histogram as an array of longs. Listing Variants
def histogram(bucketCount: Int): Pair[Array[Double], Array[Long]]
def histogram(buckets: Array[Double], evenBuckets: Boolean = false): Array[Long] Example with even spacing
Example with custom spacing
id Retrieves the ID that has been assigned to the RDD by its SparkContext. Listing Variants
val id: Int
Example
intersection Returns the elements in the two RDDs which are the same. Listing Variants
def intersection(other: RDD[T], numPartitions: Int): RDD[T]
def intersection(other: RDD[T], partitioner: Partitioner)(implicit ord: Ordering[T] = null): RDD[T] def intersection(other: RDD[T]): RDD[T] Example
isCheckpointed Indicates whether the RDD has been checkpointed. The flag is only set once the checkpoint has actually been created. Listing Variants
def isCheckpointed: Boolean
Example
iterator Returns a compatible iterator object for a partition of this RDD. This function should never be called directly. Listing Variants
final def iterator(split: Partition, context: TaskContext): Iterator[T]
join [Pair] Performs an inner join using two key-value RDDs. Please note that the keys must be generally comparable to make this work. Listing Variants
def join[W](other: RDD[(K, W)]): RDD[(K, (V, W))]
def join[W](other: RDD[(K, W)], numPartitions: Int): RDD[(K, (V, W))] def join[W](other: RDD[(K, W)], partitioner: Partitioner): RDD[(K, (V, W))] Example
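A sketch joining two RDDs keyed by word length (inputs assumed); only keys present in both RDDs survive:

val a = sc.parallelize(List("dog", "salmon", "rat", "elephant"), 3).keyBy(_.length)
val b = sc.parallelize(List("dog", "cat", "gnu", "salmon", "rabbit"), 3).keyBy(_.length)
a.join(b).collect
// "elephant" (key 8) has no partner in b and is dropped, e.g.
// Array((6,(salmon,salmon)), (6,(salmon,rabbit)), (3,(dog,dog)), (3,(dog,cat)), (3,(dog,gnu)),
//   (3,(rat,dog)), (3,(rat,cat)), (3,(rat,gnu)))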
keyBy Constructs two-component tuples (key-value pairs) by applying a function on each data item. The result of the function becomes the key and the original data item becomes the value of the newly created tuples. Listing Variants
def keyBy[K](f: T => K): RDD[(K, T)]
Example
keys [Pair] Extracts the keys from all contained tuples and returns them in a new RDD. Listing Variants
def keys: RDD[K]
Example
leftOuterJoin [Pair] Performs a left outer join using two key-value RDDs. Please note that the keys must be generally comparable to make this work correctly. Listing Variants
def leftOuterJoin[W](other: RDD[(K, W)]): RDD[(K, (V, Option[W]))]
def leftOuterJoin[W](other: RDD[(K, W)], numPartitions: Int): RDD[(K, (V, Option[W]))] def leftOuterJoin[W](other: RDD[(K, W)], partitioner: Partitioner): RDD[(K, (V, Option[W]))] Example
lookup [Pair] Scans the RDD for all entries with the provided key and returns their values as a Scala sequence. Listing Variants
def lookup(key: K): Seq[V]
Example
map Applies a transformation function on each item of the RDD and returns the result as a new RDD. Listing Variants
def map[U: ClassTag](f: T => U): RDD[U]
Example
mapPartitions This is a specialized map that is called only once for each partition. The entire content of the respective partition is available as a sequential stream of values via the input argument (Iterator[T]). The custom function must return yet another Iterator[U]. The combined result iterators are automatically converted into a new RDD. Please note that the tuples (3,4) and (6,7) are missing from the following result due to the partitioning we chose. Listing Variants
def mapPartitions[U: ClassTag](f: Iterator[T] => Iterator[U], preservesPartitioning: Boolean = false): RDD[U]
Example 1
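The original example code is not in this copy; a sketch (1 to 9 in 3 partitions assumed) that produces the behaviour described in the note above:

val a = sc.parallelize(1 to 9, 3)

// Pair each element with its successor, but only within a partition.
def pairWithNext(iter: Iterator[Int]): Iterator[(Int, Int)] =
  if (iter.isEmpty) Iterator.empty
  else {
    var res = List[(Int, Int)]()
    var prev = iter.next()
    while (iter.hasNext) {
      val cur = iter.next()
      res = (prev, cur) :: res
      prev = cur
    }
    res.reverse.iterator
  }

a.mapPartitions(pairWithNext).collect
// With partitions (1,2,3), (4,5,6), (7,8,9) the boundary pairs (3,4) and (6,7) never appear:
// e.g. Array((1,2), (2,3), (4,5), (5,6), (7,8), (8,9))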
Example 2
The above program can also be written using flatMap as follows. Example 2 using flatMap
mapPartitionsWithContext (deprecated and developer API) Similar to mapPartitions, but allows accessing information about the processing state within the mapper. Listing Variants
def mapPartitionsWithContext[U: ClassTag](f: (TaskContext, Iterator[T]) => Iterator[U], preservesPartitioning: Boolean = false): RDD[U]
Example
mapPartitionsWithIndex Similar to mapPartitions, but takes two parameters. The first parameter is the index of the partition and the second is an iterator through all the items within this partition. The output is an iterator containing the list of items after applying whatever transformation the function encodes. Listing Variants
def mapPartitionsWithIndex[U: ClassTag](f: (Int, Iterator[T]) => Iterator[U], preservesPartitioning: Boolean = false): RDD[U]
Example
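A sketch that tags every element with the index of the partition holding it (input assumed):

val x = sc.parallelize(1 to 10, 3)
x.mapPartitionsWithIndex((index, iter) => iter.map(v => s"[partID: $index, val: $v]")).collect
// e.g. Array([partID: 0, val: 1], [partID: 0, val: 2], [partID: 0, val: 3],
//   [partID: 1, val: 4], ..., [partID: 2, val: 10])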
mapPartitionsWithSplit This method has been marked as deprecated in the API. So, you should not use this method anymore. Deprecated methods will not be covered in this document. Listing Variants
def mapPartitionsWithSplit[U: ClassTag](f: (Int, Iterator[T]) => Iterator[U], preservesPartitioning: Boolean = false): RDD[U]
mapValues [Pair] Takes the values of a RDD that consists of two-component tuples, and applies the provided function to transform each value. Then, it forms new two-component tuples using the key and the transformed value and stores them in a new RDD. Listing Variants
def mapValues[U](f: V => U): RDD[(K, U)]
Example
mapWith (deprecated) This is an extended version of map. It takes two function arguments. The first argument must conform to Int => A and is executed once per partition; it maps the partition index to some value of type A. This is where it is convenient to run initialization code once per partition, for example creating a random number generator object. The second function must conform to (T, A) => U, where T is a data item of the RDD and A is the transformed partition index. Finally, the function has to return a transformed data item of type U. Listing Variants
def mapWith[A: ClassTag, U: ClassTag](constructA: Int => A, preservesPartitioning: Boolean = false)(f: (T, A) => U): RDD[U]
Example
max Returns the largest element in the RDD Listing Variants
def max()(implicit ord: Ordering[T]): T
Example
mean [Double], meanApprox [Double] Calls stats and extracts the mean component. The approximate version of the function can finish somewhat faster in some scenarios. However, it trades accuracy for speed. Listing Variants
def mean(): Double
def meanApprox(timeout: Long, confidence: Double = 0.95): PartialResult[BoundedDouble] Example
min Returns the smallest element in the RDD Listing Variants
def min()(implicit ord: Ordering[T]): T
Example
name, setName Allows a RDD to be tagged with a custom name. Listing Variants
@transient var name: String
def setName(_name: String) Example
partitionBy [Pair] Repartitions a key-value RDD using its keys. The partitioner implementation is supplied as the argument. Listing Variants
def partitionBy(partitioner: Partitioner): RDD[(K, V)]
partitioner Specifies a function pointer to the default partitioner that will be used for the groupBy, subtract, reduceByKey (from PairRDDFunctions) and other functions. Listing Variants
@transient val partitioner: Option[Partitioner]
partitions Returns an array of the partition objects associated with this RDD. Listing Variants
final def partitions: Array[Partition]
Example
persist, cache These functions can be used to adjust the storage level of a RDD. When freeing up memory, Spark will use the storage level identifier to decide which partitions should be kept. The parameterless variants persist() and cache() are just abbreviations for persist(StorageLevel.MEMORY_ONLY). (Warning: Once the storage level has been changed, it cannot be changed again!) Listing Variants
def cache(): RDD[T]
def persist(): RDD[T] def persist(newLevel: StorageLevel): RDD[T] Example
pipe Takes the RDD data of each partition and sends it via stdin to a shell-command. The resulting output of the command is captured and returned as a RDD of string values. Listing Variants
def pipe(command: String): RDD[String]
def pipe(command: String, env: Map[String, String]): RDD[String] def pipe(command: Seq[String], env: Map[String, String] = Map(), printPipeContext: (String => Unit) => Unit = null, printRDDElement: (T, String => Unit) => Unit = null): RDD[String] Example
randomSplit Randomly splits an RDD into multiple smaller RDDs according to a weights Array which specifies the percentage of the total data elements that is assigned to each smaller RDD. Note the actual size of each smaller RDD is only approximately equal to the percentage specified by the weights Array. The second example below shows the number of items in each smaller RDD does not exactly match the weights Array. An optional random seed can be specified. This function is useful for splitting data into a training set and a testing set for machine learning. Listing Variants
def randomSplit(weights: Array[Double], seed: Long = Utils.random.nextLong): Array[RDD[T]]
Example
reduce This function provides the well-known reduce functionality in Spark. Please note that any function f you provide should be commutative and associative in order to generate reproducible results. Listing Variants
def reduce(f: (T, T) => T): T
Example
reduceByKey [Pair], reduceByKeyLocally [Pair], reduceByKeyToDriver [Pair] This function provides the well-known reduce functionality in Spark, applied separately to the values of each key. Please note that any function f you provide should be commutative and associative in order to generate reproducible results. Listing Variants
def reduceByKey(func: (V, V) => V): RDD[(K, V)]
def reduceByKey(func: (V, V) => V, numPartitions: Int): RDD[(K, V)] def reduceByKey(partitioner: Partitioner, func: (V, V) => V): RDD[(K, V)] def reduceByKeyLocally(func: (V, V) => V): Map[K, V] def reduceByKeyToDriver(func: (V, V) => V): Map[K, V] Example
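A sketch that concatenates all values sharing a key (words assumed, keyed by their length):

val a = sc.parallelize(List("dog", "tiger", "lion", "cat", "panther", "eagle"), 2)
val b = a.map(x => (x.length, x))
b.reduceByKey(_ + _).collect
// e.g. Array((4,lion), (3,dogcat), (7,panther), (5,tigereagle))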
repartition This function changes the number of partitions to the number specified by the numPartitions parameter Listing Variants
def repartition(numPartitions: Int)(implicit ord: Ordering[T] = null): RDD[T]
Example
repartitionAndSortWithinPartitions [Ordered] Repartition the RDD according to the given partitioner and, within each resulting partition, sort records by their keys. Listing Variants
def repartitionAndSortWithinPartitions(partitioner: Partitioner): RDD[(K, V)]
Example
rightOuterJoin [Pair] Performs a right outer join using two key-value RDDs. Please note that the keys must be generally comparable to make this work correctly. Listing Variants
def rightOuterJoin[W](other: RDD[(K, W)]): RDD[(K, (Option[V], W))]
def rightOuterJoin[W](other: RDD[(K, W)], numPartitions: Int): RDD[(K, (Option[V], W))] def rightOuterJoin[W](other: RDD[(K, W)], partitioner: Partitioner): RDD[(K, (Option[V], W))] Example
sample Randomly selects a fraction of the items of a RDD and returns them in a new RDD. Listing Variants
def sample(withReplacement: Boolean, fraction: Double, seed: Int): RDD[T]
Example
sampleByKey [Pair] Randomly samples the key value pair RDD according to the fraction of each key you want to appear in the final RDD. Listing Variants
def sampleByKey(withReplacement: Boolean, fractions: Map[K, Double], seed: Long = Utils.random.nextLong): RDD[(K, V)]
Example
sampleByKeyExact [Pair, experimental] This is labelled as experimental and so we do not document it. Listing Variants
def sampleByKeyExact(withReplacement: Boolean, fractions: Map[K, Double], seed: Long = Utils.random.nextLong): RDD[(K, V)]
saveAsHadoopFile [Pair], saveAsHadoopDataset [Pair], saveAsNewAPIHadoopFile [Pair] Saves the RDD in a Hadoop compatible format using any Hadoop outputFormat class the user specifies. Listing Variants
def saveAsHadoopDataset(conf: JobConf)
def saveAsHadoopFile[F <: OutputFormat[K, V]](path: String)(implicit fm: ClassTag[F])
def saveAsHadoopFile[F <: OutputFormat[K, V]](path: String, codec: Class[_ <: CompressionCodec])(implicit fm: ClassTag[F])
def saveAsHadoopFile(path: String, keyClass: Class[_], valueClass: Class[_], outputFormatClass: Class[_ <: OutputFormat[_, _]], codec: Class[_ <: CompressionCodec])
def saveAsHadoopFile(path: String, keyClass: Class[_], valueClass: Class[_], outputFormatClass: Class[_ <: OutputFormat[_, _]], conf: JobConf = new JobConf(self.context.hadoopConfiguration), codec: Option[Class[_ <: CompressionCodec]] = None)
def saveAsNewAPIHadoopFile[F <: NewOutputFormat[K, V]](path: String)(implicit fm: ClassTag[F])
def saveAsNewAPIHadoopFile(path: String, keyClass: Class[_], valueClass: Class[_], outputFormatClass: Class[_ <: NewOutputFormat[_, _]], conf: Configuration = self.context.hadoopConfiguration)
saveAsObjectFile Saves the RDD in binary format. Listing Variants
def saveAsObjectFile(path: String)
Example
saveAsSequenceFile [SeqFile] Saves the RDD as a Hadoop sequence file. Listing Variants
def saveAsSequenceFile(path: String, codec: Option[Class[_ <: CompressionCodec]] = None)
Example
saveAsTextFile Saves the RDD as text files, writing one element per line. Listing Variants
def saveAsTextFile(path: String)
def saveAsTextFile(path: String, codec: Class[_ <: CompressionCodec]) Example without compression
Example with compression
Example writing into HDFS
stats [Double] Simultaneously computes the mean, variance and the standard deviation of all values in the RDD. Listing Variants
def stats(): StatCounter
Example
sortBy This function sorts the input RDD's data and stores it in a new RDD. The first parameter requires you to specify a function which maps the input data into the key that you want to sortBy. The second parameter (optional) specifies whether you want the data to be sorted in ascending or descending order. Listing Variants
def sortBy[K](f: (T) ⇒ K, ascending: Boolean = true, numPartitions: Int = this.partitions.size)(implicit ord: Ordering[K], ctag: ClassTag[K]): RDD[T]
Example
sortByKey [Ordered] This function sorts the input RDD's data and stores it in a new RDD. The output RDD is a shuffled RDD because it stores data that is output by a reducer which has been shuffled. The implementation of this function is actually very clever. First, it uses a range partitioner to partition the data in ranges within the shuffled RDD. Then it sorts these ranges individually with mapPartitions using standard sort mechanisms. Listing Variants
def sortByKey(ascending: Boolean = true, numPartitions: Int = self.partitions.size): RDD[P]
Example
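A sketch sorting a zipped RDD by its string keys (inputs assumed):

val a = sc.parallelize(List("dog", "cat", "owl", "gnu", "ant"), 2)
val b = sc.parallelize(1 to 5, 2)
val c = a.zip(b)
c.sortByKey(true).collect
// e.g. Array((ant,5), (cat,2), (dog,1), (gnu,4), (owl,3))
c.sortByKey(false).collect
// e.g. Array((owl,3), (gnu,4), (dog,1), (cat,2), (ant,5))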
stdev [Double], sampleStdev [Double] Calls stats and extracts either the stdev component or the corrected sampleStdev component. Listing Variants
def stdev(): Double
def sampleStdev(): Double Example
subtract Performs the well known standard set subtraction operation: A - B Listing Variants
def subtract(other: RDD[T]): RDD[T]
def subtract(other: RDD[T], numPartitions: Int): RDD[T] def subtract(other: RDD[T], p: Partitioner): RDD[T] Example
subtractByKey [Pair] Very similar to subtract, but instead of supplying a function, the key-component of each pair will be automatically used as criterion for removing items from the first RDD. Listing Variants
def subtractByKey[W: ClassTag](other: RDD[(K, W)]): RDD[(K, V)]
def subtractByKey[W: ClassTag](other: RDD[(K, W)], numPartitions: Int): RDD[(K, V)] def subtractByKey[W: ClassTag](other: RDD[(K, W)], p: Partitioner): RDD[(K, V)] Example
sum [Double], sumApprox [Double] Computes the sum of all values contained in the RDD. The approximate version of the function can finish somewhat faster in some scenarios. However, it trades accuracy for speed. Listing Variants
def sum(): Double
def sumApprox(timeout: Long, confidence: Double = 0.95): PartialResult[BoundedDouble] Example
take Extracts the first n items of the RDD and returns them as an array. (Note: This sounds very easy, but it is actually quite a tricky problem for the implementors of Spark because the items in question can be in many different partitions.) Listing Variants
def take(num: Int): Array[T]
Example
takeOrdered Orders the data items of the RDD using their inherent implicit ordering function and returns the first n items as an array. Listing Variants
def takeOrdered(num: Int)(implicit ord: Ordering[T]): Array[T]
Example
takeSample Behaves differently from sample in the following respects: it returns an exact number of items (given by the num parameter) rather than a fraction, and it returns the result as an Array to the driver instead of an RDD.
Listing Variants
def takeSample(withReplacement: Boolean, num: Int, seed: Int): Array[T]
Example
toDebugString Returns a string that contains debug information about the RDD and its dependencies. Listing Variants
def toDebugString: String
Example
toJavaRDD Embeds this RDD object within a JavaRDD object and returns it. Listing Variants
def toJavaRDD() : JavaRDD[T]
Example
toLocalIterator Converts the RDD into a scala iterator at the master node. Listing Variants
def toLocalIterator: Iterator[T]
Example
top Utilizes the implicit ordering of T to determine the top k values and returns them as an array. Listing Variants
def top(num: Int)(implicit ord: Ordering[T]): Array[T]
Example
toString Assembles a human-readable textual description of the RDD. Listing Variants
override def toString: String
Example
treeAggregate Computes the same thing as aggregate, except it aggregates the elements of the RDD in a multi-level tree pattern. Another difference is that it does not use the initial value for the second reduce function (combOp). By default a tree of depth 2 is used, but this can be changed via the depth parameter. Listing Variants
def treeAggregate[U](zeroValue: U)(seqOp: (U, T) ⇒ U, combOp: (U, U) ⇒ U, depth: Int = 2)(implicit arg0: ClassTag[U]): U
Example
treeReduce Works like reduce except reduces the elements of the RDD in a multi-level tree pattern. Listing Variants
def treeReduce(f: (T, T) ⇒ T, depth: Int = 2): T
Example
union, ++ Performs the standard set operation: A union B Listing Variants
def ++(other: RDD[T]): RDD[T]
def union(other: RDD[T]): RDD[T] Example
unpersist Dematerializes the RDD (i.e. Erases all data items from hard-disk and memory). However, the RDD object remains. If it is referenced in a computation, Spark will regenerate it automatically using the stored dependency graph. Listing Variants
def unpersist(blocking: Boolean = true): RDD[T]
Example
values Extracts the values from all contained tuples and returns them in a new RDD. Listing Variants
def values: RDD[V]
Example
variance [Double], sampleVariance [Double] Calls stats and extracts either the variance component or the corrected sampleVariance component. Listing Variants
def variance(): Double
def sampleVariance(): Double Example
zip Joins two RDDs by pairing the i-th element of one RDD with the i-th element of the other. The resulting RDD will consist of two-component tuples which are interpreted as key-value pairs by the methods provided by the PairRDDFunctions extension. Listing Variants
def zip[U: ClassTag](other: RDD[U]): RDD[(T, U)]
Example
zipPartitions Similar to zip, but provides more control over the zipping process. Listing Variants
def zipPartitions[B: ClassTag, V: ClassTag](rdd2: RDD[B])(f: (Iterator[T], Iterator[B]) => Iterator[V]): RDD[V]
def zipPartitions[B: ClassTag, V: ClassTag](rdd2: RDD[B], preservesPartitioning: Boolean)(f: (Iterator[T], Iterator[B]) => Iterator[V]): RDD[V]
def zipPartitions[B: ClassTag, C: ClassTag, V: ClassTag](rdd2: RDD[B], rdd3: RDD[C])(f: (Iterator[T], Iterator[B], Iterator[C]) => Iterator[V]): RDD[V]
def zipPartitions[B: ClassTag, C: ClassTag, V: ClassTag](rdd2: RDD[B], rdd3: RDD[C], preservesPartitioning: Boolean)(f: (Iterator[T], Iterator[B], Iterator[C]) => Iterator[V]): RDD[V]
def zipPartitions[B: ClassTag, C: ClassTag, D: ClassTag, V: ClassTag](rdd2: RDD[B], rdd3: RDD[C], rdd4: RDD[D])(f: (Iterator[T], Iterator[B], Iterator[C], Iterator[D]) => Iterator[V]): RDD[V]
def zipPartitions[B: ClassTag, C: ClassTag, D: ClassTag, V: ClassTag](rdd2: RDD[B], rdd3: RDD[C], rdd4: RDD[D], preservesPartitioning: Boolean)(f: (Iterator[T], Iterator[B], Iterator[C], Iterator[D]) => Iterator[V]): RDD[V]
Example
zipWithIndex Zips the elements of the RDD with their element indexes. The indexes start from 0. If the RDD is spread across multiple partitions then a Spark job is started to perform this operation. Listing Variants
def zipWithIndex(): RDD[(T, Long)]
Example
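A minimal sketch (inputs assumed):

val z = sc.parallelize(List("A", "B", "C", "D"))
z.zipWithIndex().collect
// res: Array[(String, Long)] = Array((A,0), (B,1), (C,2), (D,3))

// With multiple partitions, a separate Spark job computes each partition's starting offset,
// so the indexes remain consecutive across partition boundaries:
val r = sc.parallelize(100 to 120, 5)
r.zipWithIndex().collect
// e.g. Array((100,0), (101,1), (102,2), ..., (120,20))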
zipWithUniqueId This is different from zipWithIndex since it just gives a unique id to each data element, but the ids may not match the index number of the data element. This operation does not start a Spark job even if the RDD is spread across multiple partitions. Compare the results of the example below with that of the 2nd example of zipWithIndex. You should be able to see the difference. Listing Variants
def zipWithUniqueId(): RDD[(T, Long)]
Example