Spark from Beginner to Master 24: DataFrames in Spark SQL

1. Creating a DataFrame

1.1 Creating a DataFrame from a case class

(1) Define a case class (this serves as the table schema)

scala> case class Emp(empno:Int, ename:String, job:String, mgr:String,
hiredate:String, sal:Int, comm:String, deptno:Int)
defined class Emp

(2) Create an RDD

scala> val lines = sc.textFile("hdfs://localhost:9000/input/emp.csv").map(_.split(","))
lines: org.apache.spark.rdd.RDD[Array[String]] = MapPartitionsRDD[2] at map at <console>:24

(3) Map the RDD to the case class

scala> val allEmp = lines.map(x => Emp(x(0).toInt, x(1), x(2), x(3), x(4), x(5).toInt, x(6), x(7).toInt))
allEmp: org.apache.spark.rdd.RDD[Emp] = MapPartitionsRDD[3] at map at <console>:28

(4) Convert to a DataFrame

scala> val allEmpDF = allEmp.toDF
allEmpDF: org.apache.spark.sql.DataFrame = [empno: int, ename: string ... 6 more fields]
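In the spark-shell the implicit encoders behind toDF are already in scope. For a standalone application, a minimal sketch of the extra setup could look like the following (the application name is only an illustration; allEmp is the RDD[Emp] from step (3)):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("EmpDataFrame")     // hypothetical name, used here only for illustration
  .getOrCreate()
import spark.implicits._       // brings the toDF conversion for RDDs of case classes into scope

val allEmpDF = allEmp.toDF()   // same conversion as above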

(5) Query data with the DataFrame

scala> allEmpDF.show


1.2 Creating a DataFrame with SparkSession

(1) Define the schema with StructType

scala> import org.apache.spark.sql.types._
import org.apache.spark.sql.types._

scala> val myschema = StructType(List(StructField("empno",DataTypes.IntegerType),
StructField("ename",DataTypes.StringType),
StructField("job",DataTypes.StringType),
StructField("mgr",DataTypes.StringType),
StructField("hiredate",DataTypes.StringType),
StructField("sal",DataTypes.IntegerType),
StructField("comm",DataTypes.StringType),
StructField("deptno",DataTypes.IntegerType)))
myschema: org.apache.spark.sql.types.StructType = StructType(StructField(empno,IntegerType,true),
StructField(ename,StringType,true), StructField(job,StringType,true), StructField(mgr,StringType,true),
StructField(hiredate,StringType,true), StructField(sal,IntegerType,true), StructField(comm,StringType,true),
StructField(deptno,IntegerType,true))
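For reference, an equivalent way to build the same schema is StructType's add method; nullability defaults to true, matching the output above (a sketch):

import org.apache.spark.sql.types._

val myschema = new StructType()
  .add("empno", IntegerType)
  .add("ename", StringType)
  .add("job", StringType)
  .add("mgr", StringType)
  .add("hiredate", StringType)
  .add("sal", IntegerType)
  .add("comm", StringType)
  .add("deptno", IntegerType)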

(2) Create an RDD

scala> val lines = sc.textFile("hdfs://localhost:9000/input/emp.csv").map(_.split(","))
lines: org.apache.spark.rdd.RDD[Array[String]] = MapPartitionsRDD[2] at map at <console>:27

(3) Map the RDD data to Row objects

scala> import org.apache.spark.sql.Row
import org.apache.spark.sql.Row

scala> val rowRDD = lines.map(x => Row(x(0).toInt, x(1), x(2), x(3), x(4), x(5).toInt, x(6), x(7).toInt))
rowRDD: org.apache.spark.rdd.RDD[org.apache.spark.sql.Row] = MapPartitionsRDD[3] at map at <console>:30

(4) Create the DataFrame

scala> val df = spark.createDataFrame(rowRDD,myschema)
df: org.apache.spark.sql.DataFrame = [empno: int, ename: string ... 6 more fields]

(5) Query data with the DataFrame

scala> df.show

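In Spark 2.x the same emp.csv can also be loaded directly through the DataFrameReader, reusing myschema and skipping the RDD[Row] step entirely; a sketch:

val empDF = spark.read
  .schema(myschema)
  .csv("hdfs://localhost:9000/input/emp.csv")
empDF.show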

1.3 Creating a DataFrame from a JSON file

First copy the sample people.json that ships with Spark to a local directory:

# cp $SPARK_HOME/examples/src/main/resources/people.json /root/input/

scala> val df = spark.read.json("file:///root/input/people.json")
df: org.apache.spark.sql.DataFrame = [age: bigint, name: string]
scala> df.show

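The schema here is inferred from the JSON data itself (age comes back as bigint/long), and df.printSchema should print something like:

root
 |-- age: long (nullable = true)
 |-- name: string (nullable = true)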

2. DataFrame Operations

Note that the df used in this section is the employee DataFrame built in section 1.2 (from emp.csv), not the people.json DataFrame from section 1.3.

(*) Query the names of all employees

scala> df.select($"ename").show


(*) Query the name and salary of every employee, adding 100 to each salary

scala> df.select($"ename",$"sal",$"sal"+100).show

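The computed column appears under a generated name such as (sal + 100); an alias gives it a readable name (a sketch; new_sal is an arbitrary alias):

df.select($"ename", $"sal", ($"sal" + 100).as("new_sal")).show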

(*) Query employees whose salary is greater than 2000

scala> df.filter($"sal">2000).show

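filter also accepts a SQL-style expression string, which is equivalent here (a sketch):

df.filter("sal > 2000").show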

(*) Count the number of employees in each department

scala> df.groupBy($"deptno").count.show

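Beyond count, other aggregates from org.apache.spark.sql.functions can be computed in the same pass; the column names follow the emp data above and the aliases are only illustrative (a sketch):

import org.apache.spark.sql.functions._

df.groupBy($"deptno")
  .agg(count($"ename").as("cnt"), sum($"sal").as("total_sal"), avg($"sal").as("avg_sal"))
  .show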

(*) Use SQL statements with a DataFrame

Register the DataFrame as a view (table):

scala> df.createOrReplaceTempView("emp")

Run queries:

scala> spark.sql("select * from emp").show


scala> spark.sql("select * from emp where deptno = 10").show


scala> spark.sql("select deptno,sum(sal) from emp group by deptno").show

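Since spark.sql returns an ordinary DataFrame, SQL and the DataFrame API can be mixed freely; the 9000 threshold below is only an illustration (a sketch):

val deptSal = spark.sql("select deptno, sum(sal) as total_sal from emp group by deptno")
deptSal.filter($"total_sal" > 9000).show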

3. Global Temporary View

The views used above are temporary views whose lifetime is tied to a single Spark session. If you need a temporary view that can be shared across different sessions and stays available for the lifetime of the application, you must create a global temporary view. When querying it, prefix the name with global_temp, because global temporary views are bound to the system-reserved database global_temp.

(1) Create an ordinary temporary view and a global temporary view

scala> df.createOrReplaceTempView("emp1")
scala> df.createGlobalTempView("emp2")

(2) Run the queries in the current session; both return the results correctly

scala> spark.sql("select * from emp1").show


scala> spark.sql("select * from global_temp.emp2").show


(3) Open a new session with newSession and run the same queries: the ordinary temporary view fails with an error, while the global temporary view returns the result correctly

scala> spark.newSession.sql("select * from emp1").show


scala> spark.newSession.sql("select * from global_temp.emp2").show

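Because global temporary views live in the reserved global_temp database, they can also be listed through the catalog API (a sketch):

spark.catalog.listTables("global_temp").show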
