Building Flink Batch Job DAG

How Flink constructs the Batch Job DAG and generates the physical execution plan

  • Flink defines Batch Jobs through the DataSet API (a minimal example program follows this list); the implementation mainly involves the following internal data structures:
    1. [flink-java] org.apache.flink.api.java.operators.Operator
    2. [flink-core] org.apache.flink.api.common.operators.Operator
    3. [flink-optimizer] org.apache.flink.optimizer.dag.OptimizerNode
    4. [flink-optimizer] org.apache.flink.optimizer.plan.PlanNode / Channel
    5. [flink-runtime] org.apache.flink.runtime.jobgraph.JobVertex / IntermediateDataSet / JobEdge
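
    A minimal DataSet program that exercises this stack end to end might look as follows. It is only a sketch (class and job names are illustrative), but each API call maps onto one of the structures above:

      import org.apache.flink.api.common.functions.MapFunction;
      import org.apache.flink.api.java.DataSet;
      import org.apache.flink.api.java.ExecutionEnvironment;
      import org.apache.flink.api.java.io.DiscardingOutputFormat;

      public class MinimalBatchJob {
          public static void main(String[] args) throws Exception {
              ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

              // fromElements/map create java.operators.Operator subclasses
              // (DataSource, MapOperator); output() registers a DataSink on the environment.
              DataSet<String> upper = env
                      .fromElements("a", "b", "c")
                      .map(new MapFunction<String, String>() {
                          @Override
                          public String map(String value) {
                              return value.toUpperCase();
                          }
                      });
              upper.output(new DiscardingOutputFormat<String>());

              // execute() collects the sinks into a Plan (common.operators.Operator DAG),
              // runs the optimizer, and submits the resulting JobGraph.
              env.execute("minimal-batch-job");
          }
      }
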
  • The overall flow is as follows (a sketch that drives steps 2 to 4 programmatically follows this list):
    1. DataSet API => ExecutionEnvironment: List<DataSink<?>> sinks
    2. ExecutionEnvironment#execute => Plan: List<GenericDataSinkBase<?>> sinks
    3. PlanExecutor#executePlan(p) => RemoteExecutor#executePlan => RemoteExecutor#executePlanWithJars => ClusterClient#run => ClusterClient#getOptimizedPlan => Optimizer#compile: OptimizedPlan
      a. GraphCreatingVisitor => (Plan) program.accept(graphCreator); => OptimizerNode [(DagConnection: ExecutionMode, InterestingProperties, ShipStrategyType, TempMode)(AbstractOperatorDescriptor)]
      b. IdAndEstimatesVisitor=> (OptimizerNode) rootNode.accept(new IdAndEstimatesVisitor(this.statistics));
      c. BranchesVisitor => rootNode.accept(branchingVisitor);
      d. InterestingPropertyVisitor => rootNode.accept(propsVisitor);
      e. List<PlanNode> bestPlan = rootNode.getAlternativePlans(this.costEstimator); => PlanNode (DriverStrategy, LocalProperties, GlobalProperties)
      f. PlanFinalizer#createFinalPlan(bestPlanSinks, program.getJobName(), program);
      g. BinaryUnionReplacer => plan.accept(new BinaryUnionReplacer());
      h. RangePartitionRewriter => plan.accept(new RangePartitionRewriter(plan, executionConfig));
    4. ClusterClient#getJobGraph => JobGraphGenerator#compileJobGraph((OptimizedPlan) optPlan) => JobGraph (JobVertex / IntermediateDataSet / JobEdge)
      a. creates the job vertex and sets the driver strategy.
      b. connects all of the current node's predecessors to the current node.
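
    Steps 2 to 4 can also be driven by hand, which is handy for inspecting the OptimizedPlan or JobGraph without submitting anything. A sketch, assuming the Flink 1.x flink-optimizer classes referenced above (the constructor arguments shown are the usual defaults):

      import org.apache.flink.api.common.Plan;
      import org.apache.flink.api.java.ExecutionEnvironment;
      import org.apache.flink.configuration.Configuration;
      import org.apache.flink.optimizer.DataStatistics;
      import org.apache.flink.optimizer.Optimizer;
      import org.apache.flink.optimizer.costs.DefaultCostEstimator;
      import org.apache.flink.optimizer.plan.OptimizedPlan;
      import org.apache.flink.optimizer.plantranslate.JobGraphGenerator;
      import org.apache.flink.runtime.jobgraph.JobGraph;

      public class CompilePipeline {
          public static JobGraph compile(ExecutionEnvironment env) {
              // Step 2: the sinks collected on the ExecutionEnvironment become a Plan.
              Plan plan = env.createProgramPlan("compile-demo");

              // Step 3: Optimizer#compile runs the visitors listed above
              // (GraphCreatingVisitor, IdAndEstimatesVisitor, ...) and picks the
              // cheapest alternative, producing an OptimizedPlan of PlanNodes/Channels.
              Optimizer optimizer =
                      new Optimizer(new DataStatistics(), new DefaultCostEstimator(), new Configuration());
              OptimizedPlan optimizedPlan = optimizer.compile(plan);

              // Step 4: JobGraphGenerator translates PlanNodes/Channels into
              // JobVertex / IntermediateDataSet / JobEdge.
              return new JobGraphGenerator().compileJobGraph(optimizedPlan);
          }
      }
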
  • Detailed analysis of the internal data structures
    1. [flink-java] org.apache.flink.api.java.operators.Operator
      • Constructed directly from the DataSet API calls
      • Each java.operators.Operator subclass's translateToDataFlow method defines the logic for translating it into a common.operators.Operator
    2. [flink-core] org.apache.flink.api.common.operators.Operator
      • Describes a computation operator in the Batch DAG
      • Core members: parallelism, configuration, resource specification, identity (id, name), CompilerHints (hints to the compiler), and OperatorInformation with the output type (TypeInformation)
    3. [flink-optimizer] org.apache.flink.optimizer.dag.OptimizerNode
      • The representation of a computation operator in the Optimizer DAG
      • Translated (almost) one-to-one from common.operators.Operator, carrying the additional information the optimizer needs
      • GraphCreatingVisitor is responsible for converting common.operators.Operator into OptimizerNode
      • Core members:
        • DagConnection: ExecutionMode, InterestingProperties, ShipStrategyType, TempMode
        • AbstractOperatorDescriptor
    4. [flink-optimizer] org.apache.flink.optimizer.plan.PlanNode /Channel
      • The physical execution DAG of the Batch Job
      • The conversion from OptimizerNode to PlanNode relies on OptimizerNode#getAlternativePlans, which in turn relies on the instantiate methods of the two AbstractOperatorDescriptor subclasses:
        • OperatorDescriptorSingle#instantiate(Channel in, SingleInputNode node)
        • OperatorDescriptorDual#instantiate(Channel in1, Channel in2, TwoInputNode node)
      • Core members (the traversal sketch after this list prints these for a concrete plan):
        OptimizerNode template;
        DriverStrategy driverStrategy;
        LocalProperties localProps; GlobalProperties globalProps; (ship strategy / local strategy)
        Iterable<Channel> getInputs()
        List<NamedChannel> broadcastInputs
        List<Channel> outChannels;
        
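    To see the PlanNode / Channel members listed above for a concrete program, the OptimizedPlan can be walked with a Visitor. A sketch, assuming the Flink 1.x flink-optimizer accessors (getNodeName, getDriverStrategy, getInputs, getShipStrategy, getLocalStrategy):

      import org.apache.flink.optimizer.plan.Channel;
      import org.apache.flink.optimizer.plan.OptimizedPlan;
      import org.apache.flink.optimizer.plan.PlanNode;
      import org.apache.flink.util.Visitor;

      public class DumpOptimizedPlan {
          // Prints, for every PlanNode, the chosen DriverStrategy and the
          // ship / local strategy of each input Channel.
          public static void dump(OptimizedPlan optimizedPlan) {
              optimizedPlan.accept(new Visitor<PlanNode>() {
                  @Override
                  public boolean preVisit(PlanNode node) {
                      System.out.println(node.getNodeName() + " -> " + node.getDriverStrategy());
                      for (Channel in : node.getInputs()) {
                          System.out.println("    input: ship=" + in.getShipStrategy()
                                  + ", local=" + in.getLocalStrategy());
                      }
                      return true; // keep traversing towards the sources
                  }

                  @Override
                  public void postVisit(PlanNode node) {}
              });
          }
      }
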
  • Other important data structures
    • DriverStrategy: how the operator itself is executed (e.g. MAP, SORTED_GROUP_REDUCE, HYBRID_HASH_BUILD_FIRST)
    • LocalProperties; GlobalProperties: the data properties the optimizer tracks, i.e. sort order / grouping within one parallel instance vs. partitioning across parallel instances (see the sketch below)
    • DamBehavior: whether a strategy pipelines records or dams (materializes) the stream: PIPELINED, MATERIALIZING, FULL_DAM
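
    As a small illustration of these properties: GlobalProperties records how data is spread across parallel instances (partitioning), while LocalProperties records ordering and grouping within one instance. A sketch, assuming the Flink 1.x flink-optimizer / flink-core classes (FieldList(0) refers to the first tuple field):

      import org.apache.flink.api.common.operators.util.FieldList;
      import org.apache.flink.optimizer.dataproperties.GlobalProperties;

      public class PropertiesExample {
          public static void main(String[] args) {
              GlobalProperties globalProps = new GlobalProperties();
              // Declare that the data is hash-partitioned on field 0.
              globalProps.setHashPartitioned(new FieldList(0));

              System.out.println(globalProps.getPartitioning());       // HASH_PARTITIONED
              System.out.println(globalProps.getPartitionedFields());  // [0]
          }
      }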
