Implementing PushDown for Parquet in SparkSQL

Hive supports PushDown as well. PushDown can dramatically reduce the amount of data read, and therefore dramatically improve processing efficiency.

SparkSQL implements PushDown, and implementing PushDown for Parquet files is particularly valuable.

PushDown is a SQL optimization technique, typically applied to queries. A typical scenario:

Suppose a DataFrame query such as df.select(a,b,c).filter(by a).filter(by b).select(c).filter(by c). In the optimizer phase, the multiple filters need to be merged (CombineFilters) and the operators reordered, for example moving some filters in front of the select (PushPredicateThroughAggregate/Generate/Join/Project). Before a filter runs, the query would otherwise touch a large volume of data, while after the filter only a small subset remains. SQL optimization therefore aims to operate on that small subset from the very start, instead of loading all of the data only to discard most of it.
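A minimal sketch of this scenario (the Parquet path and the integer columns a, b, c are illustrative; assumes a Spark 1.x SQLContext, matching the source quoted later in this article):

import org.apache.spark.SparkContext
import org.apache.spark.sql.SQLContext

val sc = new SparkContext("local[*]", "pushdown-demo")
val sqlContext = new SQLContext(sc)
import sqlContext.implicits._

// Hypothetical Parquet file with integer columns a, b, c.
val df = sqlContext.read.parquet("/path/to/data.parquet")

val query = df.select($"a", $"b", $"c")
  .filter($"a" > 1) // filter by a
  .filter($"b" > 2) // filter by b
  .select($"c")
  .filter($"c" > 3) // filter by c

// explain(true) prints the plan after each phase; in the optimized plan the
// three Filters are combined and pushed below the projections, next to the
// Parquet relation.
query.explain(true)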

PushDown itself has both a SQL-syntax level and a physical-execution level.

At the syntax level, SparkSQL and Hive each have their own implementation.

Let's look at the source code of QueryExecution:

/**
 * The primary workflow for executing relational queries using Spark. Designed to allow easy
 * access to the intermediate phases of query execution for developers.
 *
 * While this is not a public class, we should avoid changing the function names for the sake of
 * changing them, because a lot of developers use the feature for debugging.
 */
class QueryExecution(val sqlContext: SQLContext, val logical: LogicalPlan) {

  def assertAnalyzed(): Unit = sqlContext.analyzer.checkAnalysis(analyzed)

  lazy val analyzed: LogicalPlan = sqlContext.analyzer.execute(logical)

  lazy val withCachedData: LogicalPlan = {
    assertAnalyzed()
    sqlContext.cacheManager.useCachedData(analyzed)
  }

  lazy val optimizedPlan: LogicalPlan = sqlContext.optimizer.execute(withCachedData)

  lazy val sparkPlan: SparkPlan = {
    SQLContext.setActive(sqlContext)
    sqlContext.planner.plan(optimizedPlan).next()
  }

  // executedPlan should not be used to initialize any SparkPlan. It should be
  // only used for execution.
  lazy val executedPlan: SparkPlan = sqlContext.prepareForExecution.execute(sparkPlan)

  /** Internal version of the RDD. Avoids copies and has no schema */
  lazy val toRdd: RDD[InternalRow] = executedPlan.execute()

  protected def stringOrError[A](f: => A): String =
    try f.toString catch { case e: Throwable => e.toString }

  def simpleString: String = {
    s"""== Physical Plan ==
       |${stringOrError(executedPlan)}
      """.stripMargin.trim
  }

  override def toString: String = {
    def output =
      analyzed.output.map(o => s"${o.name}: ${o.dataType.simpleString}").mkString(", ")

    s"""== Parsed Logical Plan ==
       |${stringOrError(logical)}
       |== Analyzed Logical Plan ==
       |${stringOrError(output)}
       |${stringOrError(analyzed)}
       |== Optimized Logical Plan ==
       |${stringOrError(optimizedPlan)}
       |== Physical Plan ==
       |${stringOrError(executedPlan)}
    """.stripMargin.trim
  }
}

In its implementation, QueryExecution chains these phases together into a workflow: each lazy val builds on the result of the previous one.
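For example, every DataFrame exposes its QueryExecution, so each intermediate phase of the workflow can be inspected directly (reusing query from the sketch above):

// Each lazy val triggers the phases before it on first access.
val qe = query.queryExecution
println(qe.logical)       // parsed logical plan
println(qe.analyzed)      // after the analyzer
println(qe.optimizedPlan) // after the optimizer -- push-down happens here
println(qe.executedPlan)  // physical plan handed to the engine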

A SQL statement is translated through the following pipeline (a code sketch follows the list):

1 Basic syntax translation

2 Parser translation

3 Logical plan

4 Optimization

5 Physical execution plan

6 Execution on the engine
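The same pipeline can be driven from raw SQL text; explain(true) prints the plan produced by each of the stages above in order (registering the temp table t here is illustrative):

// Register the DataFrame from the earlier sketch as a temporary table.
df.registerTempTable("t")

// Parsed -> Analyzed -> Optimized -> Physical, matching the steps above.
sqlContext.sql("SELECT c FROM t WHERE a > 1 AND b > 2 AND c > 3").explain(true)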

Take, for example, the DataFrame query df.select(a,b,c).filter(by a).filter(by b).select(c).filter(by c): before execution it is turned into a syntax tree, then parsed and optimized. In the optimization phase the Filters are merged, and the merge takes the order of the Filters into account.

Now look at the source of the Optimizer in spark.sql.catalyst:

package org.apache.spark.sql.catalyst.optimizer

abstract class Optimizer extends RuleExecutor[LogicalPlan]

object DefaultOptimizer extends Optimizer {
  val batches =
    // SubQueries are only needed for analysis and can be removed before execution.
    Batch("Remove SubQueries", FixedPoint(100),
      EliminateSubQueries) ::
    Batch("Aggregate", FixedPoint(100),
      ReplaceDistinctWithAggregate,
      RemoveLiteralFromGroupExpressions) ::
    Batch("Operator Optimizations", FixedPoint(100),
      // Operator push down
      SetOperationPushDown,
      SamplePushDown,
      PushPredicateThroughJoin,
      PushPredicateThroughProject,
      PushPredicateThroughGenerate,
      PushPredicateThroughAggregate,
      ColumnPruning,
      // Operator combine
      ProjectCollapsing,
      CombineFilters,
      CombineLimits,
      // Constant folding
      NullPropagation,
      OptimizeIn,
      ConstantFolding,
      LikeSimplification,
      BooleanSimplification,
      RemoveDispensableExpressions,
      SimplifyFilters,
      SimplifyCasts,
      SimplifyCaseConversionExpressions) ::
    Batch("Decimal Optimizations", FixedPoint(100),
      DecimalAggregates) ::
    Batch("LocalRelation", FixedPoint(100),
      ConvertToLocalRelation) :: Nil
}
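To watch one of these rules in isolation, Catalyst's internal test DSL can be used to build a tiny logical plan and apply CombineFilters by hand. This is a sketch against the Spark 1.6-era internals quoted above, not public API:

import org.apache.spark.sql.catalyst.dsl.expressions._
import org.apache.spark.sql.catalyst.dsl.plans._
import org.apache.spark.sql.catalyst.optimizer.CombineFilters
import org.apache.spark.sql.catalyst.plans.logical.LocalRelation

val relation = LocalRelation('a.int, 'b.int, 'c.int)
val stacked  = relation.where('a > 1).where('b > 2) // two adjacent Filter nodes

// CombineFilters rewrites the pair into a single Filter((a > 1) && (b > 2)).
println(CombineFilters(stacked))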

PushDown pushes operations down to the leaf nodes of the plan tree; this is why it is called predicate pushdown (Predicate pushdown). Once an operation sits at a leaf node, it executes directly against the data source.

The following figure illustrates the PushDown process:

[Figure 1: the PushDown process]
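In code, pushing a predicate down to a Parquet source looks like the sketch below (the path and columns are illustrative; spark.sql.parquet.filterPushdown is the configuration switch that controls this behavior):

// Ensure Parquet filter push-down is enabled.
sqlContext.setConf("spark.sql.parquet.filterPushdown", "true")

val users  = sqlContext.read.parquet("/data/users.parquet")
val adults = users.filter($"age" >= 18).select($"name")

// The Parquet scan in the physical plan reports the pushed predicate, e.g.
// "PushedFilters: [GreaterThanOrEqual(age,18)]"; row groups whose statistics
// cannot contain age >= 18 are skipped at the data source.
adults.explain()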
