Flink: generating the JobGraph from the StreamGraph

When a JobGraph needs to be generated from a StreamGraph, this is done by the createJobGraph() method of StreamingJobGraphGenerator.

public static JobGraph createJobGraph(StreamGraph streamGraph, @Nullable JobID jobID) {
   return new StreamingJobGraphGenerator(streamGraph, jobID).createJobGraph();
}
private JobGraph createJobGraph() {

   // make sure that all vertices start immediately
   jobGraph.setScheduleMode(ScheduleMode.EAGER);

   // Generate deterministic hashes for the nodes in order to identify them across
   // submission iff they didn't change.
   Map<Integer, byte[]> hashes = defaultStreamGraphHasher.traverseStreamGraphAndGenerateHashes(streamGraph);

   // Generate legacy version hashes for backwards compatibility
   List<Map<Integer, byte[]>> legacyHashes = new ArrayList<>(legacyStreamGraphHashers.size());
   for (StreamGraphHasher hasher : legacyStreamGraphHashers) {
      legacyHashes.add(hasher.traverseStreamGraphAndGenerateHashes(streamGraph));
   }

   Map<Integer, List<Tuple2<byte[], byte[]>>> chainedOperatorHashes = new HashMap<>();

   setChaining(hashes, legacyHashes, chainedOperatorHashes);

   setPhysicalEdges();

   setSlotSharingAndCoLocation();

   configureCheckpointing();

   JobGraphGenerator.addUserArtifactEntries(streamGraph.getEnvironment().getCachedFiles(), jobGraph);

   // set the ExecutionConfig last when it has been finalized
   try {
      jobGraph.setExecutionConfig(streamGraph.getExecutionConfig());
   }
   catch (IOException e) {
      throw new IllegalConfigurationException("Could not serialize the ExecutionConfig." +
            "This indicates that non-serializable types (like custom serializers) were registered");
   }

   return jobGraph;
}

At the very start of building the JobGraph, its ScheduleMode is set to EAGER, so that all vertices are started immediately when the job is deployed.

After that, the traverseStreamGraphAndGenerateHashes() method of StreamGraphHasher generates a hash for every StreamNode.

@Override
public Map<Integer, byte[]> traverseStreamGraphAndGenerateHashes(StreamGraph streamGraph) {
   // The hash function used to generate the hash
   final HashFunction hashFunction = Hashing.murmur3_128(0);
   final Map<Integer, byte[]> hashes = new HashMap<>();

   Set<Integer> visited = new HashSet<>();
   Queue<StreamNode> remaining = new ArrayDeque<>();

   // We need to make the source order deterministic. The source IDs are
   // not returned in the same order, which means that submitting the same
   // program twice might result in different traversal, which breaks the
   // deterministic hash assignment.
   List<Integer> sources = new ArrayList<>();
   for (Integer sourceNodeId : streamGraph.getSourceIDs()) {
      sources.add(sourceNodeId);
   }
   Collections.sort(sources);

   //
   // Traverse the graph in a breadth-first manner. Keep in mind that
   // the graph is not a tree and multiple paths to nodes can exist.
   //

   // Start with source nodes
   for (Integer sourceNodeId : sources) {
      remaining.add(streamGraph.getStreamNode(sourceNodeId));
      visited.add(sourceNodeId);
   }

   StreamNode currentNode;
   while ((currentNode = remaining.poll()) != null) {
      // Generate the hash code. Because multiple path exist to each
      // node, we might not have all required inputs available to
      // generate the hash code.
      if (generateNodeHash(currentNode, hashFunction, hashes, streamGraph.isChainingEnabled())) {
         // Add the child nodes
         for (StreamEdge outEdge : currentNode.getOutEdges()) {
            StreamNode child = outEdge.getTargetVertex();

            if (!visited.contains(child.getId())) {
               remaining.add(child);
               visited.add(child.getId());
            }
         }
      } else {
         // We will revisit this later.
         visited.remove(currentNode.getId());
      }
   }

   return hashes;
}

Here, the IDs of all sources are fetched and sorted, and the hash computation starts from them; whenever a node's hash has been computed, all of its downstream nodes are collected and added to the queue of nodes awaiting a visit.

The hash value itself is produced in the generateDeterministicHash() method of StreamGraphHasherV2.

private byte[] generateDeterministicHash(
      StreamNode node,
      Hasher hasher,
      Map<Integer, byte[]> hashes,
      boolean isChainingEnabled) {

   // Include stream node to hash. We use the current size of the computed
   // hashes as the ID. We cannot use the node's ID, because it is
   // assigned from a static counter. This will result in two identical
   // programs having different hashes.
   generateNodeLocalHash(hasher, hashes.size());

   // Include chained nodes to hash
   for (StreamEdge outEdge : node.getOutEdges()) {
      if (isChainable(outEdge, isChainingEnabled)) {

         // Use the hash size again, because the nodes are chained to
         // this node. This does not add a hash for the chained nodes.
         generateNodeLocalHash(hasher, hashes.size());
      }
   }

   byte[] hash = hasher.hash().asBytes();

   // Make sure that all input nodes have their hash set before entering
   // this loop (calling this method).
   for (StreamEdge inEdge : node.getInEdges()) {
      byte[] otherHash = hashes.get(inEdge.getSourceId());

      // Sanity check
      if (otherHash == null) {
         throw new IllegalStateException("Missing hash for input node "
               + inEdge.getSourceVertex() + ". Cannot generate hash for "
               + node + ".");
      }

      for (int j = 0; j < hash.length; j++) {
         hash[j] = (byte) (hash[j] * 37 ^ otherHash[j]);
      }
   }

   if (LOG.isDebugEnabled()) {
      String udfClassName = "";
      if (node.getOperator() instanceof AbstractUdfStreamOperator) {
         udfClassName = ((AbstractUdfStreamOperator<?, ?>) node.getOperator())
               .getUserFunction().getClass().getName();
      }

      LOG.debug("Generated hash '" + byteToHexString(hash) + "' for node " +
            "'" + node.toString() + "' {id: " + node.getId() + ", " +
            "parallelism: " + node.getParallelism() + ", " +
            "user function: " + udfClassName + "}");
   }

   return hash;
}

Three ingredients go into this hash: the current size of the map holding the already-generated hashes (a deterministic stand-in for the node's ID), the node's chainable output edges, and the hashes of the nodes on its input edges.
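
The generateNodeLocalHash() helper is not quoted above; roughly paraphrased from the Flink source, it merely feeds that position counter into the Guava Hasher:

// Paraphrased sketch of StreamGraphHasherV2#generateNodeLocalHash(): the
// node-local contribution is the traversal position (hashes.size()), not the
// node's auto-assigned ID, so two identical programs get identical hashes.
private void generateNodeLocalHash(Hasher hasher, int id) {
   hasher.putInt(id);
}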

After the hashes of all StreamNodes have been generated in this way, if the user explicitly assigned hashes to particular nodes, the StreamGraphUserHashHasher among the legacyStreamGraphHashers records those user-specified hashes into legacyHashes.
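
For reference, those user-specified hashes originate from the DataStream API; a minimal sketch (the operator name is made up):

import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class UidExample {
   public static void main(String[] args) throws Exception {
      StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

      env.fromElements("a", "b", "c")
         .map(new MapFunction<String, String>() {
            @Override
            public String map(String value) { return value.toUpperCase(); }
         })
         // uid() pins the operator's identity across program changes (it feeds
         // the deterministic hasher); setUidHash() would instead supply a
         // 32-hex-character hash directly, which is what
         // StreamGraphUserHashHasher copies into legacyHashes.
         .uid("to-upper")
         .print();

      env.execute("uid example");
   }
}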

Once the hashes of all StreamNodes have been computed, setChaining() is called to build the operator chains.

private void setChaining(Map<Integer, byte[]> hashes, List<Map<Integer, byte[]>> legacyHashes, Map<Integer, List<Tuple2<byte[], byte[]>>> chainedOperatorHashes) {
   for (Integer sourceNodeId : streamGraph.getSourceIDs()) {
      createChain(sourceNodeId, sourceNodeId, hashes, legacyHashes, 0, chainedOperatorHashes);
   }
}

private List<StreamEdge> createChain(
      Integer startNodeId,
      Integer currentNodeId,
      Map<Integer, byte[]> hashes,
      List<Map<Integer, byte[]>> legacyHashes,
      int chainIndex,
      Map<Integer, List<Tuple2<byte[], byte[]>>> chainedOperatorHashes) {

   if (!builtVertices.contains(startNodeId)) {

      List<StreamEdge> transitiveOutEdges = new ArrayList<StreamEdge>();

      List<StreamEdge> chainableOutputs = new ArrayList<StreamEdge>();
      List<StreamEdge> nonChainableOutputs = new ArrayList<StreamEdge>();

      for (StreamEdge outEdge : streamGraph.getStreamNode(currentNodeId).getOutEdges()) {
         if (isChainable(outEdge, streamGraph)) {
            chainableOutputs.add(outEdge);
         } else {
            nonChainableOutputs.add(outEdge);
         }
      }

      for (StreamEdge chainable : chainableOutputs) {
         transitiveOutEdges.addAll(
               createChain(startNodeId, chainable.getTargetId(), hashes, legacyHashes, chainIndex + 1, chainedOperatorHashes));
      }

      for (StreamEdge nonChainable : nonChainableOutputs) {
         transitiveOutEdges.add(nonChainable);
         createChain(nonChainable.getTargetId(), nonChainable.getTargetId(), hashes, legacyHashes, 0, chainedOperatorHashes);
      }

      List<Tuple2<byte[], byte[]>> operatorHashes =
         chainedOperatorHashes.computeIfAbsent(startNodeId, k -> new ArrayList<>());

      byte[] primaryHashBytes = hashes.get(currentNodeId);

      for (Map<Integer, byte[]> legacyHash : legacyHashes) {
         operatorHashes.add(new Tuple2<>(primaryHashBytes, legacyHash.get(currentNodeId)));
      }

      chainedNames.put(currentNodeId, createChainedName(currentNodeId, chainableOutputs));
      chainedMinResources.put(currentNodeId, createChainedMinResources(currentNodeId, chainableOutputs));
      chainedPreferredResources.put(currentNodeId, createChainedPreferredResources(currentNodeId, chainableOutputs));

      StreamConfig config = currentNodeId.equals(startNodeId)
            ? createJobVertex(startNodeId, hashes, legacyHashes, chainedOperatorHashes)
            : new StreamConfig(new Configuration());

      setVertexConfig(currentNodeId, config, chainableOutputs, nonChainableOutputs);

      if (currentNodeId.equals(startNodeId)) {

         config.setChainStart();
         config.setChainIndex(0);
         config.setOperatorName(streamGraph.getStreamNode(currentNodeId).getOperatorName());
         config.setOutEdgesInOrder(transitiveOutEdges);
         config.setOutEdges(streamGraph.getStreamNode(currentNodeId).getOutEdges());

         for (StreamEdge edge : transitiveOutEdges) {
            connect(startNodeId, edge);
         }

         config.setTransitiveChainedTaskConfigs(chainedConfigs.get(startNodeId));

      } else {

         Map<Integer, StreamConfig> chainedConfs = chainedConfigs.get(startNodeId);

         if (chainedConfs == null) {
            chainedConfigs.put(startNodeId, new HashMap<Integer, StreamConfig>());
         }
         config.setChainIndex(chainIndex);
         StreamNode node = streamGraph.getStreamNode(currentNodeId);
         config.setOperatorName(node.getOperatorName());
         chainedConfigs.get(startNodeId).put(currentNodeId, config);
      }

      config.setOperatorID(new OperatorID(primaryHashBytes));

      if (chainableOutputs.isEmpty()) {
         config.setChainEnd();
      }
      return transitiveOutEdges;

   } else {
      return new ArrayList<>();
   }
}

In setChaining(), all sources are iterated over, and createChain() is called starting from each source to build the chains.

This method first fetches all output edges of the current node and calls isChainable() on each to decide whether the edge's downstream node can be chained.

public static boolean isChainable(StreamEdge edge, StreamGraph streamGraph) {
   StreamNode upStreamVertex = edge.getSourceVertex();
   StreamNode downStreamVertex = edge.getTargetVertex();

   StreamOperator<?> headOperator = upStreamVertex.getOperator();
   StreamOperator<?> outOperator = downStreamVertex.getOperator();

   return downStreamVertex.getInEdges().size() == 1
         && outOperator != null
         && headOperator != null
         && upStreamVertex.isSameSlotSharingGroup(downStreamVertex)
         && outOperator.getChainingStrategy() == ChainingStrategy.ALWAYS
         && (headOperator.getChainingStrategy() == ChainingStrategy.HEAD ||
            headOperator.getChainingStrategy() == ChainingStrategy.ALWAYS)
         && (edge.getPartitioner() instanceof ForwardPartitioner)
         && upStreamVertex.getParallelism() == downStreamVertex.getParallelism()
         && streamGraph.isChainingEnabled();
}

There are quite a few conditions for an edge to be chainable (a sketch of how the DataStream API influences them follows the list):

  1. The downstream node has exactly one input edge (the current one).
  2. Both the upstream and the downstream operator are non-null.
  3. The upstream and downstream nodes are in the same slot sharing group.
  4. The downstream operator's chaining strategy is ALWAYS.
  5. The upstream operator's chaining strategy is HEAD or ALWAYS.
  6. The edge's partitioner is a ForwardPartitioner.
  7. The upstream and downstream parallelisms are equal.
  8. Chaining is enabled on the StreamGraph being translated.
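
Several of these conditions map directly to DataStream API calls. A minimal sketch (the pipeline itself is made up for illustration):

import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ChainingKnobs {
   public static void main(String[] args) throws Exception {
      StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
      // env.disableOperatorChaining();  // would fail condition 8 for every edge

      env.fromElements(1, 2, 3)
         .map(new MapFunction<Integer, Integer>() {
            @Override
            public Integer map(Integer v) { return v + 1; }
         })
         .startNewChain()   // strategy HEAD: the edge *into* this operator fails condition 4
         .map(new MapFunction<Integer, Integer>() {
            @Override
            public Integer map(Integer v) { return v * 2; }
         })
         .disableChaining() // strategy NEVER: fails condition 4 as a target, condition 5 as a source
         .rebalance()       // non-forward partitioner: fails condition 6
         .print();

      env.execute("chaining knobs");
   }
}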

If all of these checks pass, the output edge is eligible for chaining and is added to chainableOutputs; otherwise it goes into nonChainableOutputs.

Once every output edge of the current node has been classified, the two lists are traversed and createChain() is called recursively for each edge. For chainable edges the start node stays the original startNodeId, while for non-chainable edges the target node itself becomes the start node of a new chain (and chainIndex resets to 0).
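
As a concrete illustration (a made-up pipeline, not taken from the Flink sources above): with uniform parallelism, a keyed job splits into exactly two chains, because only the keyBy edge is non-chainable:

import org.apache.flink.api.common.functions.ReduceFunction;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ChainShape {
   public static void main(String[] args) throws Exception {
      StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

      env.fromElements(Tuple2.of("a", 1), Tuple2.of("a", 2))
         // The source -> reduce edge uses a KeyGroupStreamPartitioner, not a
         // ForwardPartitioner, so createChain() restarts with the reducer as
         // both startNodeId and currentNodeId (a new chain).
         .keyBy(0)
         .reduce(new ReduceFunction<Tuple2<String, Integer>>() {
            @Override
            public Tuple2<String, Integer> reduce(Tuple2<String, Integer> a, Tuple2<String, Integer> b) {
               return Tuple2.of(a.f0, a.f1 + b.f1);
            }
         })
         // The reduce -> print edge is a forward edge with equal parallelism,
         // so the sink chains onto the reducer: createChain() recurses with
         // the reducer still as startNodeId.
         .print();

      env.execute("chain shape");
   }
}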

Afterwards, if the current node is the start node of its chain, a JobVertex is created for it, its StreamConfig is populated, and connect() links it to all downstream nodes that could not be chained.

JobEdge jobEdge;
if (partitioner instanceof ForwardPartitioner || partitioner instanceof RescalePartitioner) {
   jobEdge = downStreamVertex.connectNewDataSetAsInput(
      headVertex,
      DistributionPattern.POINTWISE,
      ResultPartitionType.PIPELINED_BOUNDED);
} else {
   jobEdge = downStreamVertex.connectNewDataSetAsInput(
         headVertex,
         DistributionPattern.ALL_TO_ALL,
         ResultPartitionType.PIPELINED_BOUNDED);
}

It is in this method that a JobEdge is created between the upstream and downstream nodes that could not be chained. The partitioner determines the distribution pattern with which the downstream vertex connects to the upstream IntermediateDataSet, which the upstream vertex also creates here.
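
Seen from the API side, the two branches can be exercised as follows (a minimal sketch; only the choice of partitioner matters here):

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class DistributionPatterns {
   public static void main(String[] args) throws Exception {
      StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

      // rescale() installs a RescalePartitioner: the JobEdge becomes POINTWISE,
      // so each upstream subtask connects to only a subset of downstream subtasks.
      env.fromElements(1, 2, 3).rescale().print();

      // rebalance() installs a RebalancePartitioner, which takes the else-branch:
      // the JobEdge becomes ALL_TO_ALL, every upstream subtask connects to every
      // downstream subtask.
      env.fromElements(4, 5, 6).rebalance().print();

      env.execute("distribution patterns");
   }
}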

After setChaining() returns, execution goes back to createJobGraph(), which calls setPhysicalEdges() to register, on each downstream vertex, the physical edges over which it receives input from upstream.
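
setPhysicalEdges() is not quoted above; roughly paraphrased from the Flink source, it groups the non-chained edges collected by connect() by their target vertex and stores them in each vertex's StreamConfig:

// Paraphrased sketch of StreamingJobGraphGenerator#setPhysicalEdges().
// physicalEdgesInOrder was filled by connect() with every non-chained edge.
private void setPhysicalEdges() {
   Map<Integer, List<StreamEdge>> physicalInEdgesInOrder = new HashMap<>();

   // Group the physical (non-chained) edges by the vertex that receives them.
   for (StreamEdge edge : physicalEdgesInOrder) {
      physicalInEdgesInOrder
            .computeIfAbsent(edge.getTargetId(), k -> new ArrayList<>())
            .add(edge);
   }

   // Record each head vertex's physical input edges in its StreamConfig.
   for (Map.Entry<Integer, List<StreamEdge>> inEdges : physicalInEdgesInOrder.entrySet()) {
      vertexConfigs.get(inEdges.getKey()).setInPhysicalEdges(inEdges.getValue());
   }
}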

Back in createJobGraph(), setSlotSharingAndCoLocation() then configures each vertex's slot sharing group and any CoLocationGroup constraints (used, for example, for iteration head/tail pairs).
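
The slot sharing group itself is user-settable; a minimal sketch (the group name "heavy" is made up):

import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class SlotSharingExample {
   public static void main(String[] args) throws Exception {
      StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

      env.fromElements(1, 2, 3)
         .map(new MapFunction<Integer, Integer>() {
            @Override
            public Integer map(Integer v) { return v * v; }
         })
         // Put this operator (and, by default, everything downstream) into its
         // own slot sharing group; operators in different groups never share a
         // slot. Note that isChainable() also requires equal groups (condition 3).
         .slotSharingGroup("heavy")
         .print();

      env.execute("slot sharing example");
   }
}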

Next, createJobGraph() calls configureCheckpointing() to set up the checkpoint-related properties of the vertices.

// collect the vertices that receive "trigger checkpoint" messages.
// currently, these are all the sources
List<JobVertexID> triggerVertices = new ArrayList<>();

// collect the vertices that need to acknowledge the checkpoint
// currently, these are all vertices
List<JobVertexID> ackVertices = new ArrayList<>(jobVertices.size());

// collect the vertices that receive "commit checkpoint" messages
// currently, these are all vertices
List<JobVertexID> commitVertices = new ArrayList<>(jobVertices.size());

for (JobVertex vertex : jobVertices.values()) {
   if (vertex.isInputVertex()) {
      triggerVertices.add(vertex.getID());
   }
   commitVertices.add(vertex.getID());
   ackVertices.add(vertex.getID());
}

Three kinds of vertices are distinguished here, but only source vertices become trigger vertices; every vertex ends up in the other two lists.

Whether a vertex counts as a source is decided by isInputVertex(), i.e. by whether it has any input edges.

According to the comments:

Trigger vertices receive the "trigger checkpoint" messages.

Ack vertices have to acknowledge each checkpoint.

Commit vertices receive the "commit checkpoint" messages.
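
These lists only matter once checkpointing is enabled; a minimal sketch of the user-facing switch (the interval is arbitrary):

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CheckpointingExample {
   public static void main(String[] args) throws Exception {
      StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

      // Checkpoint every 10 seconds. configureCheckpointing() later packages this
      // interval, together with the trigger/ack/commit vertex lists, into the
      // JobGraph's checkpoint settings.
      env.enableCheckpointing(10_000L);

      env.fromElements(1, 2, 3).print();
      env.execute("checkpointing example");
   }
}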

Finally, back in createJobGraph(), the JobGraph takes over the StreamGraph's ExecutionConfig, and with that the JobGraph generation is complete.
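
Putting it all together, the generator is normally reached through the StreamGraph itself; a minimal sketch (API as of the Flink version discussed here):

import org.apache.flink.runtime.jobgraph.JobGraph;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.graph.StreamGraph;

public class BuildJobGraph {
   public static void main(String[] args) {
      StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
      env.fromElements(1, 2, 3).print();

      // getStreamGraph() builds the StreamGraph from the recorded transformations;
      // getJobGraph() delegates to StreamingJobGraphGenerator.createJobGraph(),
      // the method walked through in this article.
      StreamGraph streamGraph = env.getStreamGraph();
      JobGraph jobGraph = streamGraph.getJobGraph();
      System.out.println("job vertices: " + jobGraph.getNumberOfVertices());
   }
}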
