Flink 1.14 SourceReader: Introductory Concepts and Source Code Analysis (Part 3)

Table of Contents

SourceReader Concept

SourceReader Source Code Methods

void start();

InputStatus pollNext(ReaderOutput<T> output) throws Exception;

List<SplitT> snapshotState(long checkpointId);

CompletableFuture<Void> isAvailable();

void addSplits(List<SplitT> splits);

References


SourceReader Concept

SourceReader is a component that runs on the Task Manager and is responsible for reading the source splits assigned by the SplitEnumerator.

SourceReader exposes a pull-based processing interface. The Flink task repeatedly calls pollNext(ReaderOutput) in a loop to poll records from the SourceReader. The return value of pollNext(ReaderOutput) indicates the SourceReader's status:

  • MORE_AVAILABLE - the SourceReader has records available right now.
  • NOTHING_AVAILABLE - the SourceReader has no records available at the moment, but more may become available in the future.
  • END_OF_INPUT - the SourceReader has processed all records and reached the end of its data. This means the SourceReader can be closed and the task can finish.

pollNext(ReaderOutput) takes a ReaderOutput as its argument. For performance reasons, and only when necessary, a SourceReader may emit multiple records in a single pollNext() call. For example, sometimes an external system works at the granularity of blocks: a block can contain many records, but the source can only checkpoint at block boundaries. In that case the SourceReader can emit all records of one block to the downstream at once through the ReaderOutput.

However, unless it is necessary, a SourceReader implementation should avoid emitting multiple records in a single pollNext(ReaderOutput) call, because the task thread that polls the SourceReader runs in an event loop and must not block.
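To make this contract concrete, here is a minimal, hypothetical pollNext() sketch; recordQueue and noMoreData are assumed fields of this example reader, not part of the Flink API:

// A hypothetical SourceReader<String, MySplit> fragment illustrating the pollNext contract.
@Override
public InputStatus pollNext(ReaderOutput<String> output) {
    String record = recordQueue.poll();          // non-blocking read from an internal buffer (assumed field)
    if (record != null) {
        output.collect(record);                  // emit exactly one record per call
        return InputStatus.MORE_AVAILABLE;       // ask the runtime to call us again right away
    }
    return noMoreData
            ? InputStatus.END_OF_INPUT           // everything has been read; the reader can be closed
            : InputStatus.NOTHING_AVAILABLE;     // nothing now, but more may arrive later
}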

When a SourceReader is created, the corresponding SourceReaderContext is provided to the Source, and the Source passes that context on to the SourceReader instance. Through the SourceReaderContext, the SourceReader can send SourceEvents to its SplitEnumerator. A typical design pattern for a Source is to have the SourceReaders report their local information to the SplitEnumerator, which then makes decisions with a global view.
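As a rough illustration of that pattern, the sketch below defines a hypothetical SourceEvent and shows both sides of the exchange. sendSourceEventToCoordinator and handleSourceEvent are the real Flink hooks; ReaderLoadEvent, readerContext, pendingSplitCount and the load-based decision are assumptions for this example:

import org.apache.flink.api.connector.source.SourceEvent;

// Hypothetical event carrying reader-local information to the enumerator.
public class ReaderLoadEvent implements SourceEvent {
    private final int pendingSplits;

    public ReaderLoadEvent(int pendingSplits) {
        this.pendingSplits = pendingSplits;
    }

    public int getPendingSplits() {
        return pendingSplits;
    }
}

// Reader side (inside a SourceReader that holds its SourceReaderContext as readerContext):
//     readerContext.sendSourceEventToCoordinator(new ReaderLoadEvent(pendingSplitCount));

// Enumerator side: react to the event with a global view across all readers.
@Override
public void handleSourceEvent(int subtaskId, SourceEvent sourceEvent) {
    if (sourceEvent instanceof ReaderLoadEvent) {
        int load = ((ReaderLoadEvent) sourceEvent).getPendingSplits();
        // e.g. prefer assigning the next split to the least loaded subtask (illustrative only)
    }
}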

The SourceReader API is a low-level API that lets implementers handle splits themselves and use their own threading model to fetch and hand over records. To make implementing a SourceReader easier, Flink provides the SourceReaderBase class, which significantly reduces the amount of work needed to write a SourceReader.

Connector developers are strongly encouraged to build on SourceReaderBase rather than write a SourceReader from scratch.

As a quick aside, there are two ways to create a DataStream from a source; they differ in the kind of source they accept:

  • env.fromSource
  • env.addSource
// fromSource takes a Source (the new, FLIP-27 source API)
final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

Source mySource = new MySource(....);

DataStream stream = env.fromSource(
        mySource,
        WatermarkStrategy.noWatermarks(), // no watermarks
        "MySourceName");
..

// addSource takes a SourceFunction (the legacy source API)
final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

DataStream<..> stream = env.addSource(new MySource(...));

SourceReader Source Code Methods

void start();

Checks whether any splits have been assigned; if there are currently no assigned splits, it sends a request to obtain one.


    /** Start the reader. */
    void start();


    // FileSourceReader's implementation
    @Override
    public void start() {
        // we request a split only if we did not get splits during the checkpoint restore
        if (getNumberOfCurrentlyAssignedSplits() == 0) {
            context.sendSplitRequest(); // sends a split request to the SplitEnumerator; it is handled in the enumerator's handleSplitRequest method
        }
    }
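For context, the request sent by context.sendSplitRequest() ends up in the enumerator's handleSplitRequest. Below is a minimal, hypothetical enumerator-side sketch; assignSplit and signalNoMoreSplits are real SplitEnumeratorContext methods, while pendingSplits, MySplit and the context field name are assumptions of this example:

// Hypothetical SplitEnumerator fragment: hand out the next pending split on request.
@Override
public void handleSplitRequest(int subtaskId, @Nullable String requesterHostname) {
    MySplit next = pendingSplits.poll();         // pendingSplits: an assumed local queue of unassigned splits
    if (next != null) {
        context.assignSplit(next, subtaskId);    // assign one split to the requesting reader
    } else {
        context.signalNoMoreSplits(subtaskId);   // tell the reader that no further splits will come
    }
}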

InputStatus pollNext(ReaderOutput<T> output) throws Exception;

Responsible for polling the next readable record into the SourceOutput. The implementation must make sure this method is non-blocking, and ideally a single call emits only one record.


    /**
     * Poll the next available record into the {@link SourceOutput}.
     *
     * <p>The implementation must make sure this method is non-blocking.
     *
     * <p>Although the implementation can emit multiple records into the given SourceOutput, it is
     * recommended not doing so. Instead, emit one record into the SourceOutput and return a
     * {@link InputStatus#MORE_AVAILABLE} to let the caller thread know there are more records
     * available.
     *
     * @return The InputStatus of the SourceReader after the method invocation.
     */
    InputStatus pollNext(ReaderOutput<T> output) throws Exception;


    // The pollNext method used by FileSourceReader to read data lives in its parent class, SourceReaderBase.
    @Override
    public InputStatus pollNext(ReaderOutput<T> output) throws Exception {
        // make sure we have a fetch we are working on, or move to the next
        // The batch of splits currently being read from the fetcher.
        // RecordsWithSplitIds represents data pulled from the fetcher into the SourceReader.
        // A RecordsWithSplitIds may contain multiple splits, but a FileRecords instance represents only one split.
        RecordsWithSplitIds<E> recordsWithSplitId = this.currentFetch;
        if (recordsWithSplitId == null) {
            // If there is no current fetch, get the next batch of splits.
            recordsWithSplitId = getNextFetch(output);
            if (recordsWithSplitId == null) {
                // If still nothing was fetched, check whether more data may arrive later.
                return trace(finishedOrAvailableLater());
            }
        }

        // we need to loop here, because we may have to go across splits
        while (true) {
            // Process one record.
            // Get the next record from the split.
            final E record = recordsWithSplitId.nextRecordFromSplit();
            if (record != null) {
                // emit the record.
                // A record was obtained; increment the record counter by 1.
                numRecordsInCounter.inc(1);
                // Emit the record to the output.
                // currentSplitOutput is the downstream output for the current split.
                // currentSplitContext.state is the reader's reading state.
                recordEmitter.emitRecord(record, currentSplitOutput, currentSplitContext.state);
                LOG.trace("Emitted record: {}", record);

                // We always emit MORE_AVAILABLE here, even though we do not strictly know whether
                // more is available. If nothing more is available, the next invocation will find
                // this out and return the correct status.
                // That means we emit the occasional 'false positive' for availability, but this
                // saves us doing checks for every record. Ultimately, this is cheaper.
                return trace(InputStatus.MORE_AVAILABLE);
            } else if (!moveToNextSplit(recordsWithSplitId, output)) {
                // All splits in this fetch have been fully read (no more splits in this batch); fetch the next batch.
                // The fetch is done and we just discovered that and have not emitted anything, yet.
                // We need to move to the next fetch. As a shortcut, we call pollNext() here again,
                // rather than emitting nothing and waiting for the caller to call us again.
                return pollNext(output);
            }
            // else fall through the loop
        }
    }

The getNextFetch method fetches the next batch of splits.

@Nullable
private RecordsWithSplitIds<E> getNextFetch(final ReaderOutput<T> output) {
    // Check whether any fetcher has reported an error.
    splitFetcherManager.checkErrors();

    LOG.trace("Getting next source data batch from queue");
    // elementsQueue buffers the splits fetched by the fetcher threads;
    // take one batch of splits from this queue.
    final RecordsWithSplitIds<E> recordsWithSplitId = elementsQueue.poll();
    // If the queue returned nothing, or the fetched batch has no split to move to, return null.
    if (recordsWithSplitId == null || !moveToNextSplit(recordsWithSplitId, output)) {
        // No element available, set to available later if needed.
        return null;
    }

    // Update the current fetch.
    currentFetch = recordsWithSplitId;
    return recordsWithSplitId;
}

The finishedOrAvailableLater method checks whether more data may still arrive and returns the corresponding status.

private InputStatus finishedOrAvailableLater() {
    // Check whether all fetchers have shut down.
    final boolean allFetchersHaveShutdown = splitFetcherManager.maybeShutdownFinishedFetchers();
    // If the reader may still be assigned more splits, or not all fetchers have shut down yet,
    // return NOTHING_AVAILABLE: records may become available in the future.
    if (!(noMoreSplitsAssignment && allFetchersHaveShutdown)) {
        return InputStatus.NOTHING_AVAILABLE;
    }
    if (elementsQueue.isEmpty()) {
        // If the buffer queue is empty, return END_OF_INPUT.
        // We may reach here because of exceptional split fetcher, check it.
        splitFetcherManager.checkErrors();
        return InputStatus.END_OF_INPUT;
    } else {
        // We can reach this case if we just processed all data from the queue and finished a
        // split, and concurrently the fetcher finished another split, whose data is then in the queue.
        // In that case, return MORE_AVAILABLE.
        return InputStatus.MORE_AVAILABLE;
    }
}

The moveToNextSplit method advances to the next split to read.

private boolean moveToNextSplit(
    RecordsWithSplitIds<E> recordsWithSplitIds, ReaderOutput<T> output) {
    // Get the ID of the next split.
    final String nextSplitId = recordsWithSplitIds.nextSplit();
    if (nextSplitId == null) {
        // If there is none, the current fetch is finished.
        LOG.trace("Current fetch is finished.");
        finishCurrentFetch(recordsWithSplitIds, output);
        return false;
    }

    // Get the context of the current split.
    // Map<String, SplitContext<T, SplitStateT>> splitStates holds each split ID and the split's state.
    currentSplitContext = splitStates.get(nextSplitId);
    checkState(currentSplitContext != null, "Have records for a split that was not registered");
    // Get the output for the current split.
    // After the SourceOperator receives splits from the SourceCoordinator, it creates an Output
    // per split; currentSplitOutput is the output of the current split.
    currentSplitOutput = currentSplitContext.getOrCreateSplitOutput(output);
    LOG.trace("Emitting records from fetch for split {}", nextSplitId);
    return true;
}

List<SplitT> snapshotState(long checkpointId);

Responsible for taking a checkpoint of the source's state.


    /**
     * Checkpoint on the state of the source.
     *
     * @return the state of the source.
     */
    List<SplitT> snapshotState(long checkpointId);


    public List<SplitT> snapshotState(long checkpointId) {
        List<SplitT> splits = new ArrayList<>();
        this.splitStates.forEach((id, context) -> {
            splits.add(this.toSplitType(id, context.state));
        });
        return splits;
    }
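snapshotState relies on the per-split state held in splitStates and on the SourceReaderBase abstract methods initializedState and toSplitType. The following is a hedged sketch of what a concrete reader might implement, using hypothetical MySplit / MySplitState types that track a read offset (the types and their accessors are assumptions for illustration):

// Hypothetical types: MySplit carries a path and a start offset;
// MySplitState additionally tracks how far the reader has progressed.
@Override
protected MySplitState initializedState(MySplit split) {
    // Called when a split is handed to the reader: wrap it into mutable per-split state.
    return new MySplitState(split.path(), split.startOffset());
}

@Override
protected MySplit toSplitType(String splitId, MySplitState splitState) {
    // Called from snapshotState: turn the mutable state back into an immutable split,
    // so that after recovery the reader resumes from the recorded offset.
    return new MySplit(splitId, splitState.path(), splitState.currentOffset());
}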

CompletableFuture<Void> isAvailable();

    /**
     * Returns a future that signals that data is available from the reader.
     *
     * <p>Once the future completes, the runtime will keep calling the {@link
     * #pollNext(ReaderOutput)} method until that method returns a status other than {@link
     * InputStatus#MORE_AVAILABLE}. After that, the runtime will again call this method to
     * obtain the next future. Once that completes, it will again call {@link
     * #pollNext(ReaderOutput)} and so on.
     *
     * <p>The contract is the following: If the reader has data available, then all futures
     * previously returned by this method must eventually complete. Otherwise the source might
     * stall indefinitely.
     *
     * <p>It is not a problem to have occasional "false positives", meaning to complete a future
     * even if no data is available. However, one should not use an "always complete" future in
     * cases no data is available, because that will result in busy waiting loops calling {@code
     * pollNext(...)} even though no data is available.
     *
     * @return a future that will be completed once there is a record available to poll.
     */
    CompletableFuture<Void> isAvailable();


    // Returns a future that signals whether the reader has data available to read.
    // Once the future completes, Flink keeps calling pollNext(ReaderOutput) until that method
    // returns something other than InputStatus#MORE_AVAILABLE. After that, isAvailable is called
    // again to obtain the next future; once it completes, pollNext(ReaderOutput) is called again, and so on.
    public CompletableFuture<Void> isAvailable() {
        return this.currentFetch != null
                ? FutureCompletingBlockingQueue.AVAILABLE
                : this.elementsQueue.getAvailabilityFuture();
    }

void addSplits(List<SplitT> splits);

    /**
     * Adds a list of splits for this reader to read. This method is called when the enumerator
     * assigns a split via {@link SplitEnumeratorContext#assignSplit(SourceSplit, int)} or {@link
     * SplitEnumeratorContext#assignSplits(SplitsAssignment)}.
     *
     * @param splits The splits assigned by the split enumerator.
     */
    // Adds a list of splits for this reader to read. This method is called when the enumerator
    // assigns splits via SplitEnumeratorContext#assignSplit(SourceSplit, int) or
    // SplitEnumeratorContext#assignSplits(SplitsAssignment).
    void addSplits(List<SplitT> splits);

In the SourceReaderBase implementation, the fetcher's job is to pull split data and buffer it in the SourceReader.

@Override
public void addSplits(List<SplitT> splits) {
    LOG.info("Adding split(s) to reader: {}", splits);
    // Initialize the state for each split.
    splits.forEach(
        s ->
        splitStates.put(
            s.splitId(), new SplitContext<>(s.splitId(), initializedState(s))));
    // Hand over the splits to the split fetcher to start fetch.
    splitFetcherManager.addSplits(splits);
}

addSplits hands the fetch work over to the SplitFetcherManager; its addSplits method is shown below:

@Override
public void addSplits(List<SplitT> splitsToAdd) {
    // Get a currently running fetcher, if any.
    SplitFetcher<E, SplitT> fetcher = getRunningFetcher();
    if (fetcher == null) {
        // If there is none, create a new fetcher.
        fetcher = createSplitFetcher();
        // Add the splits to the fetchers.
        // Hand the splits to the newly created fetcher.
        fetcher.addSplits(splitsToAdd);
        // Start this fetcher; it joins the set of running fetchers.
        startFetcher(fetcher);
    } else {
        // If a running fetcher was found, call its addSplits method.
        fetcher.addSplits(splitsToAdd);
    }
}

Finally, let's look at SplitFetcher's addSplits method:

public void addSplits(List<SplitT> splitsToAdd) {
    // Wrap the work into an AddSplitsTask; the splitReader abstracts over how different
    // data formats are read. Enqueue the wrapped task.
    enqueueTask(new AddSplitsTask<>(splitReader, splitsToAdd, assignedSplits));
    // Wake up the fetcher so it reads data with the SplitReader.
    // The logic that reads the splits and buffers the data into the elementsQueue lives in the
    // fetch task and is not analyzed further here.
    wakeUp(true);
}

References

Data Sources | Apache Flink

Flink Source Code: The New Source Architecture - Jianshu

Flink's New Source Architecture (Part 2) - Zhihu
