Method 1: use reflection to infer the schema of an RDD that contains objects of a specific type
import org.apache.spark.sql.SparkSession

def inferReflection(spark: SparkSession) = {
  val rdd = spark.sparkContext.textFile("D:\\ssc\\spark\\people.txt")
  // implicit conversions needed for RDD => DF
  import spark.implicits._
  // split each line, map it to the People case class, and let reflection infer the schema
  val pDF = rdd.map(_.split(",")).map(line => People(line(0).toInt, line(1), line(2).toInt)).toDF()
  pDF.printSchema()
  pDF.show()
}
Obtaining the schema by reflecting on a case class like this is simple and general, but it does not fit scenarios such as building an external data source, where the columns are not known ahead of time.
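Method 1 relies on a case class whose fields supply the column names and types. A minimal sketch of the People class assumed by the code above (the field names are an assumption; the types follow the parsing in the code):

case class People(id: Int, name: String, age: Int)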
Method 2: through the programmatic interface, build a schema with StructType and apply it to an existing RDD
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.{DataFrame, Row, SparkSession}
import org.apache.spark.sql.types.{IntegerType, StringType, StructField, StructType}

def program(spark: SparkSession) = {
  // create the RDD
  val rdd = spark.sparkContext.textFile("D:\\ssc\\spark\\people.txt")
  // 1. RDD[String] => RDD[Row]
  val infoRDD: RDD[Row] = rdd.map(line => {
    val strArr = line.split(",")
    val id = strArr(0).trim.toInt
    val name = strArr(1).trim
    val age = strArr(2).trim.toInt
    Row(id, name, age)
  })
  // 2. build the schema explicitly
  val schema: StructType = StructType(
    StructField("id", IntegerType, true) ::
      StructField("name", StringType, true) ::
      StructField("age", IntegerType, true) :: Nil
  )
  // 3. apply the schema to the RDD[Row] to get a DataFrame
  val df: DataFrame = spark.createDataFrame(infoRDD, schema)
  df.printSchema()
  df.show()
}
Steps:
On the official documentation, each data source page has a SQL tab showing how to expose that source directly through SQL by creating a temporary view.
Example from the official docs:
CREATE TEMPORARY VIEW jsonTable
USING org.apache.spark.sql.json
OPTIONS (
path "examples/src/main/resources/people.json"
)
SELECT * FROM jsonTable
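The same source can also be accessed through the DataFrameReader API instead of a SQL view; a minimal sketch using the same JSON path as above:

val jsonDF = spark.read.format("json").load("examples/src/main/resources/people.json")
jsonDF.createOrReplaceTempView("jsonTable")
spark.sql("SELECT * FROM jsonTable").show()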
Cross-source joins
Cross-source joins are one of Spark's most convenient features: pull data from different sources into Spark, process it, and then write the result back out.
Example: take one table from Hive and one from MySQL and join them.
1. JDBC
val jdbcDF = spark.read
.format("jdbc")
.option("url", "jdbc:mysql://ruozedata001:6619")
.option("dbtable", "ruozedata.dept")
.option("user", "root")
.option("password", "mysqladminroot")
.load()
2. Hive
To connect to Hive from IDEA, the following is needed:
1) Start the Hive metastore: hive --service metastore -p 9083 &
2) To run from Windows, copy the Hive configuration file (hive-site.xml) from the cluster into the project's resources directory.
3. The code needs to enable Hive support with enableHiveSupport():
val spark = SparkSession.builder
.master("local")
.appName(this.getClass.getSimpleName)
.enableHiveSupport()
.getOrCreate()
val hiveDF = spark.sql("select * from default.emp")
Note: when running on Windows, be sure to add the following setting, otherwise the client cannot locate the HDFS blocks:
hdfs-site.xml
<property>
<name>dfs.client.use.datanode.hostname</name>
<value>true</value>
</property>
4. Join
val joinDF: DataFrame = jdbcDF.join(hiveDF, "deptno")
+------+----------+--------+-----+------+---------+----+----------+------+------+
|deptno| dname| loc|empno| ename| job| mgr| hiredate| sal| comm|
+------+----------+--------+-----+------+---------+----+----------+------+------+
| 10|ACCOUNTING|NEW YORK| 7934|MILLER| CLERK|7782| 1982-1-23|1300.0| null|
| 10|ACCOUNTING|NEW YORK| 7839| KING|PRESIDENT|null|1981-11-17|5000.0| null|
| 10|ACCOUNTING|NEW YORK| 7782| CLARK| MANAGER|7839| 1981-6-9|2450.0| null|
| 20| RESEARCH| DALLAS| 7902| FORD| ANALYST|7566| 1981-12-3|3000.0| null|
| 20| RESEARCH| DALLAS| 7876| ADAMS| CLERK|7788| 1987-5-23|1100.0| null|
| 20| RESEARCH| DALLAS| 7788| SCOTT| ANALYST|7566| 1987-4-19|3000.0| null|
| 20| RESEARCH| DALLAS| 7566| JONES| MANAGER|7839| 1981-4-2|2975.0| null|
| 20| RESEARCH| DALLAS| 7369| SMITH| CLERK|7902|1980-12-17| 800.0| null|
| 30| SALES| CHICAGO| 7900| JAMES| CLERK|7698| 1981-12-3| 950.0| null|
| 30| SALES| CHICAGO| 7844|TURNER| SALESMAN|7698| 1981-9-8|1500.0| 0.0|
| 30| SALES| CHICAGO| 7698| BLAKE| MANAGER|7839| 1981-5-1|2850.0| null|
| 30| SALES| CHICAGO| 7654|MARTIN| SALESMAN|7698| 1981-9-28|1250.0|1400.0|
| 30| SALES| CHICAGO| 7521| WARD| SALESMAN|7698| 1981-2-22|1250.0| 500.0|
| 30| SALES| CHICAGO| 7499| ALLEN| SALESMAN|7698| 1981-2-20|1600.0| 300.0|
+------+----------+--------+-----+------+---------+----+----------+------+------+
5. If we don't need all of the data, we can select just the columns we need and convert to a strongly typed Dataset:
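The .as[Result] conversion below needs a Result case class whose fields match the selected columns. A minimal sketch (the field types here are an assumption; the Dataset plan later in this section suggests they end up as strings in this setup, so adjust them to your actual column types):

case class Result(empno: String, ename: String, deptno: String, dname: String, sal: String)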
import spark.implicits._
val ds: Dataset[Result] = joinDF.select("empno","ename","deptno","dname","sal").as[Result]
ds.show()
+-----+------+------+----------+------+
|empno| ename|deptno| dname| sal|
+-----+------+------+----------+------+
| 7934|MILLER| 10|ACCOUNTING|1300.0|
| 7839| KING| 10|ACCOUNTING|5000.0|
| 7782| CLARK| 10|ACCOUNTING|2450.0|
| 7902| FORD| 20| RESEARCH|3000.0|
| 7876| ADAMS| 20| RESEARCH|1100.0|
| 7788| SCOTT| 20| RESEARCH|3000.0|
| 7566| JONES| 20| RESEARCH|2975.0|
| 7369| SMITH| 20| RESEARCH| 800.0|
| 7900| JAMES| 30| SALES| 950.0|
| 7844|TURNER| 30| SALES|1500.0|
| 7698| BLAKE| 30| SALES|2850.0|
| 7654|MARTIN| 30| SALES|1250.0|
| 7521| WARD| 30| SALES|1250.0|
| 7499| ALLEN| 30| SALES|1600.0|
+-----+------+------+----------+------+
6. Save the result
ds.write.format("orc").save("./dataset/emp_result")
ds.write.format("jdbc")
.option("url", "jdbc:mysql://ruozedata001:6619")
.option("dbtable", "ruozedata.emp_result")
.option("user", "root")
.option("password", "mysqladminroot").mode("overwrite").save()
Comparing SQL, DF, and DS
Question: spark.read.load() does not specify a read format, so what is the default format?
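The default is parquet, taken from the spark.sql.sources.default configuration; it can be checked on the session, e.g.:

println(spark.conf.get("spark.sql.sources.default"))   // "parquet" unless overridden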
What we want to compare now is how early SQL, DF, and DS catch syntax errors and analysis errors.
In the previous step we converted joinDF into the Dataset ds; now let's look at the plans generated when each selects the columns it needs:
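The code that produced the two plans below is not shown above; presumably it was something like the following (a sketch), selecting a column by name on the DataFrame versus mapping over a typed field on the Dataset:

joinDF.select("empno").explain(true)   // DataFrame: the column name is only resolved at analysis time
ds.map(_.empno).explain(true)          // Dataset: the field access is checked at compile time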
00 Project [empno#6]
01 +- Join Inner, (cast(deptno#0 as decimal(10,0)) = cast(deptno#13 as decimal(10,0)))
02 :- Project [deptno#0]
03 : +- Filter isnotnull(deptno#0)
04 : +- Relation[deptno#0,dname#1,loc#2] JDBCRelation(ruozedata.dept) [numPartitions=1]
05 +- Project [empno#6, deptno#13]
06 +- Filter isnotnull(deptno#13)
07 +- HiveTableRelation `ruozedata_hive`.`emp`, org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, [empno#6, ename#7, job#8, mgr#9, hiredate#10, sal#11, comm#12, deptno#13]
---------------------------------
00 SerializeFromObject [staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, input[0, java.lang.String, true], true, false) AS value#79]
01 +- MapElements com.ruozedata.bigdata.SourceJoinApp$$$Lambda$2060/1996620584@54997f67, class com.ruozedata.bigdata.SourceJoinApp$Result, [StructField(empno,StringType,true), StructField(ename,StringType,true), StructField(deptno,StringType,true), StructField(dname,StringType,true), StructField(sal,StringType,true)], obj#78: java.lang.String
02 +- DeserializeToObject newInstance(class com.ruozedata.bigdata.SourceJoinApp$Result), obj#77: com.ruozedata.bigdata.SourceJoinApp$Result
03 +- Project [empno#6, ename#7, deptno#0, dname#1, sal#11]
04 +- Join Inner, (cast(deptno#0 as decimal(10,0)) = cast(deptno#13 as decimal(10,0)))
05 :- Project [deptno#0, dname#1]
06 : +- Filter isnotnull(deptno#0)
07 : +- Relation[deptno#0,dname#1,loc#2] JDBCRelation(ruozedata.dept) [numPartitions=1]
08 +- Project [empno#6, ename#7, sal#11, deptno#13]
09 +- Filter isnotnull(deptno#13)
10 +- HiveTableRelation `ruozedata_hive`.`emp`, org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, [empno#6, ename#7, job#8, mgr#9, hiredate#10, sal#11, comm#12, deptno#13]
Based on this, we can compare how SQL, DF, and DS behave when selecting columns in a Spark SQL application:
| | SQL | DF | DS |
|---|---|---|---|
| Syntax Errors | runtime | compile | compile |
| Analysis Errors | runtime | runtime | compile |
So the order of preference is DS > DF > SQL.
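For instance, with a hypothetical misspelled name (not from the original code), the difference shows up like this:

joinDF.select("enamee").show()        // compiles, fails only at runtime with an AnalysisException
ds.map(_.enamee).show()               // does not compile: value enamee is not a member of Result
spark.sql("seelct * from emp").show() // even the syntax error only surfaces at runtime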
Interview question: what are the differences among RDD, DS, and DF?
Interview question: what is the default persist storage level for an RDD versus a Dataset/DataFrame?
Interview question: what is the difference between cache in Spark RDD and in Spark SQL?
Catalog: accessing metadata through spark.catalog

import org.apache.spark.sql.{Dataset, SparkSession}
import org.apache.spark.sql.catalog.{Catalog, Database}

val spark = SparkSession.builder()
  .master("local[2]")
  .appName(this.getClass.getSimpleName)
  .enableHiveSupport()
  .getOrCreate()
// get the metadata handle
val catalog: Catalog = spark.catalog
// list all databases
val dbList: Dataset[Database] = catalog.listDatabases()
dbList.show(false)
// current database
println("----->" + catalog.currentDatabase)
import spark.implicits._
// show just the database names
dbList.map(_.name).show()
// switch the current database
catalog.setCurrentDatabase("ruozedata_hive")
// list the tables in the current database
catalog.listTables().show(false)
// filter tables by name
val listTable = catalog.listTables()
listTable.filter('name like "ruozedata%").show(false)
println("-------------cache---------------------")
// check whether a table is cached; isCached/cacheTable take a table name
// ("emp" here, in the current database ruozedata_hive), not a database name
println(catalog.isCached("emp"))
catalog.cacheTable("emp")
println(catalog.isCached("emp"))
catalog.uncacheTable("emp")
// list all functions
catalog.listFunctions().show(1000, false)
Note: catalog.cacheTable is a lazy method.
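It only marks the table as cacheable; nothing is materialized until an action scans the table. A minimal sketch to populate the cache right away (using the emp table from above):

catalog.cacheTable("emp")
spark.table("emp").count()   // the action actually materializes the cached data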
Register a UDF and then list it:
spark.udf.register("udf_string_length",(word:String) => {
word.split(",").length
})
catalog.listFunctions().filter('name === "udf_string_length").show(false)
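Once registered, the UDF can be called from SQL, e.g.:

spark.sql("select udf_string_length('hello,spark,sql') as len").show()   // len = 3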