Apache Spark 2.0 introduced SparkSession, which gives users a single, unified entry point to Spark's functionality and lets them write Spark programs against the DataFrame and Dataset APIs. Most importantly, it cuts down the number of concepts users have to keep in mind, making it much easier to interact with Spark.
In this article we look at how to use SparkSession in Spark 2.0. For more on SparkSession, see: "SparkSession: A New Entry Point", "Introducing Spark 2.0: Creating and Using the Related APIs", and "Apache Spark 2.0.0 Released and Its Features".
Before version 2.0, interacting with Spark required creating a SparkConf and a SparkContext first:
//set up the spark configuration and create contexts
val sparkConf = new SparkConf()
  .setAppName("SparkSessionZipsExample")
  .setMaster("local")
  .set("spark.some.config.option", "some-value")
// your handle to SparkContext to access other contexts like SQLContext
val sc = new SparkContext(sparkConf)
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
In Spark 2.0, however, the same thing can be done through SparkSession, without explicitly creating a SparkConf, SparkContext, or SQLContext, because those objects are encapsulated inside the SparkSession. Using the builder design pattern, getOrCreate() instantiates a new SparkSession together with its underlying contexts if none exists yet, and otherwise returns the existing one.
// Create a SparkSession. No need to create SparkContext
// You automatically get it as part of the SparkSession
val warehouseLocation = "file:${system:user.dir}/spark-warehouse"
val spark = SparkSession
  .builder()
  .appName("SparkSessionZipsExample")
  .config("spark.sql.warehouse.dir", warehouseLocation)
  .enableHiveSupport()
  .getOrCreate()
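Because the old contexts are wrapped inside the session, they remain reachable from the spark object whenever a lower-level API is needed. A minimal sketch, using nothing beyond the session built above:
// the underlying SparkContext and SQLContext are exposed as fields of the session
val sc = spark.sparkContext            // e.g. for RDDs, accumulators, broadcast variables
val sqlContext = spark.sqlContext      // kept mainly for backwards compatibility
val rddSum = sc.parallelize(1 to 10).sum()  // a plain RDD operation through the wrapped SparkContext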
With the spark object created above, we can now call its public methods.
//set new runtime options
spark.conf.set("spark.sql.shuffle.partitions", 6)
spark.conf.set("spark.executor.memory", "2g")
//get all settings
val configMap: Map[String, String] = spark.conf.getAll
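A single option can also be read back from the runtime configuration; a small sketch:
// read one option back by key
val shufflePartitions = spark.conf.get("spark.sql.shuffle.partitions")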
//fetch metadata from the catalog
scala> spark.catalog.listDatabases.show(false)
+--------------+---------------------+--------------------------------------------------------+
|name |description |locationUri |
+--------------+---------------------+--------------------------------------------------------+
|default |Default Hive database|hdfs://iteblogcluster/user/iteblog/hive/warehouse |
+--------------+---------------------+--------------------------------------------------------+
scala> spark.catalog.listTables.show(false)
+----------------------------------------+--------+-----------+---------+-----------+
|name |database|description|tableType|isTemporary|
+----------------------------------------+--------+-----------+---------+-----------+
|iteblog |default |null |MANAGED |false |
|table2 |default |null |EXTERNAL |false |
|test |default |null |MANAGED |false |
+----------------------------------------+--------+-----------+---------+-----------+
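The catalog also exposes per-table metadata. For example, the columns of the iteblog table listed above could be inspected like this (a sketch, output omitted):
spark.catalog.listColumns("iteblog").show(false)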
scala> val numDS = spark.range(5, 100, 5)
numDS: org.apache.spark.sql.Dataset[Long] = [id: bigint]
scala> numDS.orderBy(desc("id")).show(5)
+---+
| id|
+---+
| 95|
| 90|
| 85|
| 80|
| 75|
+---+
only showing top 5 rows
scala> numDS.describe().show()
+-------+------------------+
|summary| id|
+-------+------------------+
| count| 19|
| mean| 50.0|
| stddev|28.136571693556885|
| min| 5|
| max| 95|
+-------+------------------+
scala> val langPercentDF = spark.createDataFrame(List(("Scala", 35),
| ("Python", 30), ("R", 15), ("Java", 20)))
langPercentDF: org.apache.spark.sql.DataFrame = [_1: string, _2: int]
scala> val lpDF = langPercentDF.withColumnRenamed("_1", "language").withColumnRenamed("_2", "percent")
lpDF: org.apache.spark.sql.DataFrame = [language: string, percent: int]
scala> lpDF.orderBy(desc("percent")).show(false)
+--------+-------+
|language|percent|
+--------+-------+
|Scala |35 |
|Python |30 |
|Java |20 |
|R |15 |
+--------+-------+
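As an aside, the same DataFrame can be built in one step with toDF, which names the columns up front instead of renaming _1 and _2 afterwards. A minimal sketch (spark.implicits._ is imported automatically in the shell and must be imported explicitly in standalone code; lpDF2 is just an illustrative name):
import spark.implicits._
val lpDF2 = List(("Scala", 35), ("Python", 30), ("R", 15), ("Java", 20))
  .toDF("language", "percent")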
val df = sparkSession.read.option("header", "true")
  .csv("src/main/resources/sales.csv")
This looks very much like reading data with SQLContext: code previously written against SQLContext can now be written against SparkSession instead. The complete snippet:
package com.iteblog

import org.apache.spark.sql.SparkSession

/**
 * Spark Session example
 */
object SparkSessionExample {

  def main(args: Array[String]) {
    val sparkSession = SparkSession.builder
      .master("local")
      .appName("spark session example")
      .getOrCreate()

    val df = sparkSession.read.option("header", "true").csv("src/main/resources/sales.csv")

    df.show()
  }
}
// read the json file and create the dataframe
scala> val jsonFile = "/user/iteblog.json"
jsonFile: String = /user/iteblog.json
scala> val zipsDF = spark.read.json(jsonFile)
zipsDF: org.apache.spark.sql.DataFrame = [_id: string, city: string ... 3 more fields]
scala> zipsDF.filter(zipsDF.col("pop") > 40000).show(10, false)
+-----+----------+-----------------------+-----+-----+
|_id |city |loc |pop |state|
+-----+----------+-----------------------+-----+-----+
|01040|HOLYOKE |[-72.626193, 42.202007]|43704|MA |
|01085|MONTGOMERY|[-72.754318, 42.129484]|40117|MA |
|01201|PITTSFIELD|[-73.247088, 42.453086]|50655|MA |
|01420|FITCHBURG |[-71.803133, 42.579563]|41194|MA |
|01701|FRAMINGHAM|[-71.425486, 42.300665]|65046|MA |
|01841|LAWRENCE |[-71.166997, 42.711545]|45555|MA |
|01902|LYNN |[-70.941989, 42.469814]|41625|MA |
|01960|PEABODY |[-70.961194, 42.532579]|47685|MA |
|02124|DORCHESTER|[-71.072898, 42.287984]|48560|MA |
|02146|BROOKLINE |[-71.128917, 42.339158]|56614|MA |
+-----+----------+-----------------------+-----+-----+
only showing top 10 rows
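The same predicate can also be written as a SQL-style string expression, or with the $-column syntax that spark.implicits provides; a quick sketch of the two equivalent forms:
// equivalent ways to express pop > 40000
zipsDF.filter("pop > 40000").show(10, false)
zipsDF.filter($"pop" > 40000).show(10, false)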
// Now create an SQL table and issue SQL queries against it without
// using the sqlContext but through the SparkSession object.
// Creates a temporary view of the DataFrame
scala> zipsDF.createOrReplaceTempView("zips_table")
scala> zipsDF.cache()
res3: zipsDF.type = [_id: string, city: string ... 3 more fields]
scala> val resultsDF = spark.sql("SELECT city, pop, state, _id FROM zips_table")
resultsDF: org.apache.spark.sql.DataFrame = [city: string, pop: bigint ... 2 more fields]
scala> resultsDF.show(10)
+------------+-----+-----+-----+
| city| pop|state| _id|
+------------+-----+-----+-----+
| AGAWAM|15338| MA|01001|
| CUSHMAN|36963| MA|01002|
| BARRE| 4546| MA|01005|
| BELCHERTOWN|10579| MA|01007|
| BLANDFORD| 1240| MA|01008|
| BRIMFIELD| 3706| MA|01010|
| CHESTER| 1688| MA|01011|
|CHESTERFIELD| 177| MA|01012|
| CHICOPEE|23396| MA|01013|
| CHICOPEE|31495| MA|01020|
+------------+-----+-----+-----+
only showing top 10 rows
scala> spark.sql("DROP TABLE IF EXISTS iteblog_hive")
res5: org.apache.spark.sql.DataFrame = []
scala> spark.table("zips_table").write.saveAsTable("iteblog_hive")
16/08/24 21:52:59 WARN HiveMetaStore: Location: hdfs://iteblogcluster/user/iteblog/hive/warehouse/iteblog_hive specified for non-external table:iteblog_hive
scala> val resultsHiveDF = spark.sql("SELECT city, pop, state, _id FROM iteblog_hive WHERE pop > 40000")
resultsHiveDF: org.apache.spark.sql.DataFrame = [city: string, pop: bigint ... 2 more fields]
scala> resultsHiveDF.show(10)
+----------+-----+-----+-----+
| city| pop|state| _id|
+----------+-----+-----+-----+
| HOLYOKE|43704| MA|01040|
|MONTGOMERY|40117| MA|01085|
|PITTSFIELD|50655| MA|01201|
| FITCHBURG|41194| MA|01420|
|FRAMINGHAM|65046| MA|01701|
| LAWRENCE|45555| MA|01841|
| LYNN|41625| MA|01902|
| PEABODY|47685| MA|01960|
|DORCHESTER|48560| MA|02124|
| BROOKLINE|56614| MA|02146|
+----------+-----+-----+-----+
only showing top 10 rows
As you can see, the results from the DataFrame API, Spark SQL, and the Hive query are identical.
This article is translated from: https://databricks.com/blog/2016/08/15/how-to-use-sparksession-in-apache-spark-2-0.html