Spark SQL: Dataset

A Dataset is a strongly typed collection of data, and it requires the corresponding type information to be supplied.
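
That type information is carried by an Encoder. Importing spark.implicits._ from an active SparkSession brings encoders for common Scala types and case classes into scope; a minimal sketch (assuming a session named spark, which spark-shell provides automatically):

import spark.implicits._         // encoders for primitives, tuples, and case classes

val numDS = Seq(1, 2, 3).toDS()  // Dataset[Int]; the Int encoder comes from the implicits
numDS.map(_ * 2).collect()       // Array(2, 4, 6): the transformation stays typed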

Creating a Dataset

1) Create a case class

scala> case class Person(name: String, age: Long)
defined class Person

2) Create the Dataset

scala> val caseClassDS = Seq(Person("Andy", 32)).toDS()
caseClassDS: org.apache.spark.sql.Dataset[Person] = [name: string, age: bigint]

Converting an RDD to a Dataset

Spark SQL can automatically convert an RDD that contains case class instances into a Dataset: the case class defines the table's schema, and the case class's attribute names become the column names via reflection.

1) Create an RDD

scala> val peopleRDD = sc.textFile("examples/src/main/resources/people.txt")
peopleRDD: org.apache.spark.rdd.RDD[String] = examples/src/main/resources/people.txt MapPartitionsRDD[3] at textFile at <console>:27

2) Create a case class

scala> case class Person(name: String, age: Long)
defined class Person

3) Convert the RDD to a Dataset

scala> peopleRDD.map(line => { val para = line.split(","); Person(para(0), para(1).trim.toLong) }).toDS()
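
To verify the conversion, the result can be bound to a value and inspected; a sketch (assuming people.txt holds comma-separated name,age lines, as in the Spark examples):

val peopleDS = peopleRDD.map { line =>
  val para = line.split(",")
  Person(para(0), para(1).trim.toLong)  // parse each line into the case class
}.toDS()

peopleDS.printSchema()  // shows the columns derived from Person's fields
peopleDS.show()         // renders the parsed rows as a table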

Converting a Dataset to an RDD

Simply call the rdd method.

1) Create a Dataset

scala> val DS = Seq(Person("Andy", 32)).toDS()
DS: org.apache.spark.sql.Dataset[Person] = [name: string, age: bigint]

2) Convert the Dataset to an RDD

scala> DS.rdd
res11: org.apache.spark.rdd.RDD[Person] = MapPartitionsRDD[15] at rdd at <console>:28
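
The recovered RDD contains plain Person objects, so ordinary RDD operations apply (a minimal sketch):

DS.rdd.map(_.name).collect()  // Array("Andy"): each element is a full Person instance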

Interoperating Between DataFrame and Dataset

1. Converting a DataFrame to a Dataset

1) Create a DataFrame

scala> val df = spark.read.json("examples/src/main/resources/people.json")
df: org.apache.spark.sql.DataFrame = [age: bigint, name: string]

2) Create a case class

scala> case class Person(name: String, age: Long)
defined class Person

3) Convert the DataFrame to a Dataset

scala> df.as[Person]
res14: org.apache.spark.sql.Dataset[Person] = [age: bigint, name: string]
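
Once typed, transformations can use ordinary Scala lambdas with compile-time-checked field access. One caveat: people.json in the Spark examples contains a record with no age, so decoding that record into a Long field fails at runtime; a hypothetical SafePerson with Option[Long] sidesteps this (a sketch):

case class SafePerson(name: String, age: Option[Long])  // Option absorbs the missing age

spark.read.json("examples/src/main/resources/people.json")
  .as[SafePerson]
  .filter(_.age.exists(_ > 21))  // typed predicate over the optional field
  .show()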

2. Converting a Dataset to a DataFrame

1) Create a case class

scala> case class Person(name: String, age: Long)
defined class Person

2) Create a Dataset

scala> val ds = Seq(Person("Andy", 32)).toDS()
ds: org.apache.spark.sql.Dataset[Person] = [name: string, age: bigint]

3) Convert the Dataset to a DataFrame

scala> val df = ds.toDF
df: org.apache.spark.sql.DataFrame = [name: string, age: bigint]

4) Show the result

scala> df.show
+----+---+
|name|age|
+----+---+
|Andy| 32|
+----+---+

Dataset to DataFrame

This one is simple, since it just wraps each case class instance in a Row.

(1) Import the implicit conversions

import spark.implicits._

(2) Convert

val testDF = testDS.toDF
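
After toDF, the rows are untyped Row objects, so fields are read back by name or position rather than as case-class members (a sketch, assuming testDS holds the Coltest records defined in the next subsection):

testDF.map(row => row.getAs[String]("col1")).show()  // generic Row access, not row.col1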

DataFrame to Dataset

(1) Import the implicit conversions

import spark.implicits._

(2) Create a case class

case class Coltest(col1: String, col2: Int) extends Serializable // defines the column names and types

(3) Convert

val testDS = testDF.as[Coltest]

In other words, once the type of each column has been given, the as method turns the DataFrame into a Dataset, which is extremely convenient when you have a DataFrame and need to work with its individual fields in a typed way. When using these operations, be sure to import spark.implicits._ first; otherwise toDF and toDS will not be available.
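
Putting the pieces together, here is a self-contained sketch of the full round trip in a standalone application (names such as DatasetDemo are illustrative):

import org.apache.spark.sql.SparkSession

// top-level case class, so Spark can derive an encoder for it
case class Coltest(col1: String, col2: Int)

object DatasetDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("DatasetDemo")
      .master("local[*]")     // local mode, for the sketch only
      .getOrCreate()
    import spark.implicits._  // required for toDS / toDF / as[T]

    val testDS = Seq(Coltest("a", 1), Coltest("b", 2)).toDS() // typed Dataset[Coltest]
    val testDF = testDS.toDF()                                // untyped Dataset[Row]
    val backDS = testDF.as[Coltest]                           // back to the typed view

    backDS.filter(_.col2 > 1).show()                          // prints only ("b", 2)
    spark.stop()
  }
}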
