Before learning the conversions, let's first review the basic concepts behind these datasets.
With the concepts in hand, we can now write code that creates each kind of dataset and converts between them.
Creating an RDD
from pyspark.sql import SparkSession

if __name__ == '__main__':
    spark = SparkSession \
        .builder \
        .appName("rddData") \
        .master("local[*]") \
        .getOrCreate()

    # Method 1: parallelize an in-memory collection
    data = [1, 2, 3, 4, 5]
    rdd1 = spark.sparkContext.parallelize(data)
    print(rdd1.collect())
    # [1, 2, 3, 4, 5]

    # Method 2: read a text file, one element per line
    rdd2 = spark.sparkContext.textFile("/home/llh/data/people.txt")
    print(rdd2.collect())
    # ['Jack 27', 'Rose 24', 'Andy 32']

    spark.stop()
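Both creation APIs also take an optional partition hint that controls how the data is split across workers. Below is a minimal sketch reusing the spark session and data from above; the partition counts are arbitrary choices for illustration, not requirements:

# Sketch: controlling partitioning at creation time (counts chosen arbitrarily)
rdd1 = spark.sparkContext.parallelize(data, 4)  # split the list into 4 partitions
rdd2 = spark.sparkContext.textFile("/home/llh/data/people.txt", minPartitions=2)
print(rdd1.getNumPartitions())
# 4
print(rdd1.glom().collect())  # glom() groups the elements by partition
# e.g. [[1], [2], [3], [4, 5]]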
Creating a DataFrame
from pyspark.sql import SparkSession

if __name__ == '__main__':
    spark = SparkSession \
        .builder \
        .appName("rddDataFrame") \
        .master("local[*]") \
        .getOrCreate()

    # Read the space-separated file into named, typed columns
    df = spark.read.csv("/home/llh/data/people.txt", sep=" ",
                        schema="name STRING, age INT")
    df.show()
    # +----+---+
    # |name|age|
    # +----+---+
    # |Jack| 27|
    # |Rose| 24|
    # |Andy| 32|
    # +----+---+

    spark.stop()
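A DataFrame does not have to come from a file; createDataFrame also accepts local data directly. A minimal sketch assuming the same spark session, with tuples that simply mirror people.txt:

# Sketch: building a DataFrame from in-memory tuples instead of a file
df2 = spark.createDataFrame([("Jack", 27), ("Rose", 24), ("Andy", 32)],
                            schema=["name", "age"])
df2.printSchema()
# root
#  |-- name: string (nullable = true)
#  |-- age: long (nullable = true)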
Converting an RDD to a DataFrame
from pyspark.sql import SparkSession
from pyspark.sql import Row

if __name__ == '__main__':
    spark = SparkSession \
        .builder \
        .appName("rddRDD") \
        .master("local[*]") \
        .getOrCreate()

    data = [1, 2, 3]
    rdd1 = spark.sparkContext.parallelize(data)
    print(rdd1.collect())
    # [1, 2, 3]

    # RDD -> DataFrame: wrap each element in a Row, then name the column
    rdd2 = rdd1.map(lambda x: Row(x))
    df = spark.createDataFrame(rdd2, schema=['num'])
    df.show()
    # +---+
    # |num|
    # +---+
    # |  1|
    # |  2|
    # |  3|
    # +---+

    spark.stop()
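For RDDs of tuples there is also the toDF shortcut, which becomes available on RDDs once a SparkSession exists. A minimal sketch that produces the same table as above, reusing rdd1 from the example:

# Sketch: the toDF() shortcut - wrap each element in a tuple, then name the column
df2 = rdd1.map(lambda x: (x,)).toDF(["num"])
df2.show()
# prints the same |num| table as above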
Converting a DataFrame to an RDD
from pyspark.sql import SparkSession

if __name__ == '__main__':
    spark = SparkSession \
        .builder \
        .appName("rddDataFrame") \
        .master("local[*]") \
        .getOrCreate()

    df = spark.read.text("/home/llh/data/people.txt")
    # DataFrame -> RDD: the .rdd property returns an RDD of Row objects
    rdd = df.rdd
    print(rdd.collect())
    # [Row(value='Jack 27'), Row(value='Rose 24'), Row(value='Andy 32')]

    spark.stop()
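Note that df.rdd yields Row objects rather than plain values. A minimal sketch of unwrapping them, assuming the df read above; the column is named value because the DataFrame came from spark.read.text:

# Sketch: unwrapping Row objects back into plain Python values
pairs = df.rdd.map(lambda row: row.value.split(" "))
print(pairs.collect())
# [['Jack', '27'], ['Rose', '24'], ['Andy', '32']]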
That covers how RDDs and DataFrames are created and converted into each other.