Android 13 - Media Framework (6) - NuPlayer

In the previous post we looked at how NuPlayer is used through NuPlayerDriver; in this one we will dig into some of NuPlayer's implementation details.
P.S.: I used to think NuPlayer was not very capable because many local videos would not play with it. After re-reading the code these past few days I realized its feature set is actually quite complete; the playback failures were the Extractor's fault, and NuPlayer has plenty of details worth learning!

1、NuPlayer Structure

[Figure 1: NuPlayer structure]

I split NuPlayer into four parts for study:

  • Source: the data source, covering both IO and demuxing;
  • Decoder: the decoding component, implemented on top of MediaCodec;
  • Render: renders the decoded data and handles A/V sync;
  • Controller: the controller of the three parts above, which is NuPlayer itself.

2、Implementation of the NuPlayer Control Interfaces

2.1、setDataSourceAsync

setDataSourceAsync creates a Source of the matching type from the url passed in; here is the most commonly used overload:

void NuPlayer::setDataSourceAsync(
        const sp<IMediaHTTPService> &httpService,
        const char *url,
        const KeyedVector<String8, String8> *headers) {
    // 1. Handled through the async message mechanism
    sp<AMessage> msg = new AMessage(kWhatSetDataSource, this);
    size_t len = strlen(url);
    // 2. The notify message the Source uses to call back into NuPlayer
    sp<AMessage> notify = new AMessage(kWhatSourceNotify, this);

    sp<Source> source;
    // 3. Create the matching Source type based on the url
    if (IsHTTPLiveURL(url)) {
        source = new HTTPLiveSource(notify, httpService, url, headers);
        mDataSourceType = DATA_SOURCE_TYPE_HTTP_LIVE;
    } else if (!strncasecmp(url, "rtsp://", 7)) {
        source = new RTSPSource(
                notify, httpService, url, headers, mUIDValid, mUID);
        mDataSourceType = DATA_SOURCE_TYPE_RTSP;
    } else if ((!strncasecmp(url, "http://", 7)
                || !strncasecmp(url, "https://", 8))
                    && ((len >= 4 && !strcasecmp(".sdp", &url[len - 4]))
                    || strstr(url, ".sdp?"))) {
        source = new RTSPSource(
                notify, httpService, url, headers, mUIDValid, mUID, true);
        mDataSourceType = DATA_SOURCE_TYPE_RTSP;
    } else {
        sp<GenericSource> genericSource =
                new GenericSource(notify, mUIDValid, mUID, mMediaClock);

        status_t err = genericSource->setDataSource(httpService, url, headers);

        if (err == OK) {
            source = genericSource;
        } else {
            ALOGE("Failed to set data source!");
        }
        mDataSourceType = DATA_SOURCE_TYPE_GENERIC_URL;
    }
    msg->setObject("source", source);
    msg->post();
}
  1. NuPlayer uses the Android async message mechanism to handle calls from the upper layer;
  2. An AMessage is created with its target set to NuPlayer itself; this is how the Source calls back into NuPlayer;
  3. A Source is created according to the url:
    • if IsHTTPLiveURL() matches (typically an http(s) url pointing at an .m3u8 playlist), it is treated as a live/HLS source and an HTTPLiveSource is created;
    • if the url starts with rtsp://, an RTSPSource is created;
    • if the url starts with http:// or https:// and ends with .sdp (or contains ".sdp?"), an RTSPSource is created as well, but with different constructor parameters from the case above;
    • if none of the above match, a GenericSource is created, which is typically what plays local files.
  4. The matching mDataSourceType is set.

As mentioned in the previous post, setDataSource has to behave synchronously; once NuPlayer has created the Source it calls back to NuPlayerDriver:

        case kWhatSetDataSource:
        {
            CHECK(mSource == NULL);
            status_t err = OK;
            sp<RefBase> obj;
            CHECK(msg->findObject("source", &obj));
            if (obj != NULL) {
                Mutex::Autolock autoLock(mSourceLock);
                mSource = static_cast<Source *>(obj.get());
            } else {
                err = UNKNOWN_ERROR;
            }
            CHECK(mDriver != NULL);
            sp<NuPlayerDriver> driver = mDriver.promote();
            if (driver != NULL) {
                driver->notifySetDataSourceCompleted(err);
            }
            break;
        }
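
On the NuPlayerDriver side this asynchronous callback is what makes the blocking behaviour possible: the driver waits on a condition variable until notifySetDataSourceCompleted arrives. Below is a minimal sketch of that pattern; the member and state names (mLock, mCondition, mAsyncResult, mState, STATE_*) follow the NuPlayerDriver style, but the bodies are illustrative assumptions rather than a verbatim copy of the AOSP code.

// Sketch (not verbatim AOSP): blocking setDataSource built on the async callback above.
status_t NuPlayerDriver::setDataSource(
        const sp<IMediaHTTPService> &httpService,
        const char *url,
        const KeyedVector<String8, String8> *headers) {
    Mutex::Autolock autoLock(mLock);

    mState = STATE_SET_DATASOURCE_PENDING;
    mPlayer->setDataSourceAsync(httpService, url, headers);  // returns immediately

    // Block until NuPlayer handles kWhatSetDataSource and calls
    // notifySetDataSourceCompleted(err) below.
    while (mState == STATE_SET_DATASOURCE_PENDING) {
        mCondition.wait(mLock);
    }
    return mAsyncResult;
}

void NuPlayerDriver::notifySetDataSourceCompleted(status_t err) {
    Mutex::Autolock autoLock(mLock);

    mAsyncResult = err;
    mState = (err == OK) ? STATE_UNPREPARED : STATE_IDLE;
    mCondition.broadcast();  // wake the thread blocked in setDataSource()
}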

Once setDataSource has completed, Source-related methods such as setBufferingSettings become callable.

2.2、prepareAsync

The prepare step is very simple: it just calls the Source's prepareAsync method:

        case kWhatPrepare:
        {
            ALOGV("onMessageReceived kWhatPrepare");
            mSource->prepareAsync();
            break;
        }

When the Source finishes its prepareAsync work, it posts a message back:

        case Source::kWhatPrepared:
        {
            ALOGV("NuPlayer::onSourceNotify Source::kWhatPrepared source: %p", mSource.get());
            if (mSource == NULL) {
                // This is a stale notification from a source that was
                // asynchronously preparing when the client called reset().
                // We handled the reset, the source is gone.
                break;
            }

            int32_t err;
            CHECK(msg->findInt32("err", &err));

            if (err != OK) {
                // shut down potential secure codecs in case client never calls reset
                mDeferredActions.push_back(
                        new FlushDecoderAction(FLUSH_CMD_SHUTDOWN /* audio */,
                                               FLUSH_CMD_SHUTDOWN /* video */));
                processDeferredActions();
            } else {
                mPrepared = true;
            }

            sp<NuPlayerDriver> driver = mDriver.promote();
            if (driver != NULL) {
                // notify duration first, so that it's definitely set when
                // the app received the "prepare complete" callback.
                int64_t durationUs;
                if (mSource->getDuration(&durationUs) == OK) {
                    driver->notifyDuration(durationUs);
                }
                driver->notifyPrepareCompleted(err);
            }

            break;
        }

Note the handling for a reset issued while prepareAsync is still in flight: if the Source has already been destroyed by the reset and is NULL, the stale notification is dropped instead of being forwarded upward.
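
On the Source side, reporting completion is just a matter of posting through the notify message that was handed to it in setDataSourceAsync. Here is a rough sketch using the notifyPrepared helper declared on NuPlayer::Source; the body below is reconstructed for illustration rather than quoted from AOSP.

// Sketch (not verbatim AOSP): how a Source reports prepare completion.
void NuPlayer::Source::notifyPrepared(status_t err) {
    // dupNotify() clones the kWhatSourceNotify message whose target is NuPlayer.
    sp<AMessage> notify = dupNotify();
    notify->setInt32("what", kWhatPrepared);
    notify->setInt32("err", err);
    notify->post();  // lands in NuPlayer::onSourceNotify(), handled above
}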

2.3、start

        case kWhatStart:
        {
            ALOGV("kWhatStart");
            if (mStarted) {
                // do not resume yet if the source is still buffering
                if (!mPausedForBuffering) {
                    onResume();
                }
            } else {
                onStart();
            }
            mPausedByClient = false;
            break;
        }

If playback has not started yet, onStart is called; if we are in the paused state, onResume is called instead; but if pause was triggered by a Source callback due to insufficient buffering, nothing is done here.

Let's look at onStart first:

void NuPlayer::onStart(int64_t startPositionUs, MediaPlayerSeekMode mode) {
    // 1. Start the source
    if (!mSourceStarted) {
        mSourceStarted = true;
        mSource->start();
    }
    // 2. If a start position was given, seek to it first
    if (startPositionUs > 0) {
        performSeek(startPositionUs, mode);
        if (mSource->getFormat(false /* audio */) == NULL) {
            return;
        }
    }
    // 3. Initialize the playback state
    mOffloadAudio = false;
    mAudioEOS = false;
    mVideoEOS = false;
    mStarted = true;
    mPaused = false;

    uint32_t flags = 0;

    if (mSource->isRealTime()) {
        flags |= Renderer::FLAG_REAL_TIME;
    }
    // 4. Check the audio/video formats
    bool hasAudio = (mSource->getFormat(true /* audio */) != NULL);
    bool hasVideo = (mSource->getFormat(false /* audio */) != NULL);
    if (!hasAudio && !hasVideo) {
        ALOGE("no metadata for either audio or video source");
        mSource->stop();
        mSourceStarted = false;
        notifyListener(MEDIA_ERROR, MEDIA_ERROR_UNKNOWN, ERROR_MALFORMED);
        return;
    }
    ALOGV_IF(!hasAudio, "no metadata for audio source");  // video only stream

    sp<MetaData> audioMeta = mSource->getFormatMeta(true /* audio */);

    audio_stream_type_t streamType = AUDIO_STREAM_MUSIC;
    if (mAudioSink != NULL) {
        streamType = mAudioSink->getAudioStreamType();
    }
    // 5. Check whether the current audio format can be offloaded; DRM-protected content must not be offloaded
    mOffloadAudio =
        canOffloadStream(audioMeta, hasVideo, mSource->isStreaming(), streamType)
                && (mPlaybackSettings.mSpeed == 1.f && mPlaybackSettings.mPitch == 1.f);

    // Modular DRM: Disabling audio offload if the source is protected
    if (mOffloadAudio && mIsDrmProtected) {
        mOffloadAudio = false;
    }

    if (mOffloadAudio) {
        flags |= Renderer::FLAG_OFFLOAD_AUDIO;
    }
    // 6. Create the Renderer and a dedicated RendererLooper to handle its events
    sp<AMessage> notify = new AMessage(kWhatRendererNotify, this);
    ++mRendererGeneration;
    notify->setInt32("generation", mRendererGeneration);
    mRenderer = new Renderer(mAudioSink, mMediaClock, notify, flags);
    mRendererLooper = new ALooper;
    mRendererLooper->setName("NuPlayerRenderer");
    mRendererLooper->start(false, false, ANDROID_PRIORITY_AUDIO);
    mRendererLooper->registerHandler(mRenderer);
    // 7. Apply the initial renderer settings
    status_t err = mRenderer->setPlaybackSettings(mPlaybackSettings);
    if (err != OK) {
        mSource->stop();
        mSourceStarted = false;
        notifyListener(MEDIA_ERROR, MEDIA_ERROR_UNKNOWN, err);
        return;
    }
    float rate = getFrameRate();
    if (rate > 0) {
        mRenderer->setVideoFrameRate(rate);
    }
    // 8. If audio/video decoders already exist, bind them to the renderer
    if (mVideoDecoder != NULL) {
        mVideoDecoder->setRenderer(mRenderer);
    }
    if (mAudioDecoder != NULL) {
        mAudioDecoder->setRenderer(mRenderer);
    }
    startPlaybackTimer("onstart");
    // 9. Kick off postScanSources
    postScanSources();
}

Although onStart is fairly long, it is well structured: it starts the Source, creates and starts the Renderer, and creates and starts the Decoders:

  1. Start the Source and apply the requested start position, if any;
  2. Check whether the stream has an audio or a video track; if it has neither, it cannot be played;
  3. Determine whether the current audio format supports offload mode (DRM-protected content is never offloaded); how the audio is output affects how the Renderer works;
  4. Create the Renderer and a RendererLooper to handle its events, and apply the initial Renderer settings;
  5. Call postScanSources to create the Decoders.
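
One detail worth calling out: the notify message created for the Renderer carries a "generation" number (hence the ++mRendererGeneration), so that events posted by a Renderer that has since been torn down can be recognized and dropped. Below is a sketch of the receiving side, reconstructed here just to show the idea; the real code is the kWhatRendererNotify handler in NuPlayer.cpp.

        // Sketch (reconstructed, not verbatim AOSP): stale-event filtering by generation.
        case kWhatRendererNotify:
        {
            int32_t requesterGeneration = mRendererGeneration - 1;
            CHECK(msg->findInt32("generation", &requesterGeneration));
            if (requesterGeneration != mRendererGeneration) {
                // Stale event from a renderer that has already been replaced.
                ALOGV("got message from old renderer, generation(%d:%d)",
                      requesterGeneration, mRendererGeneration);
                return;
            }
            // ... dispatch Renderer::kWhatEOS / kWhatFlushComplete / kWhatPosition, etc.
            break;
        }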

Next, let's see how postScanSources creates the Decoders:

        case kWhatScanSources:
        {
            bool rescan = false;
            // 1. Create the video decoder
            if (mSurface != NULL) {
                if (instantiateDecoder(false, &mVideoDecoder) == -EWOULDBLOCK) {
                    rescan = true;
                }
            }
            // 2. Create the audio decoder
            // Don't try to re-open audio sink if there's an existing decoder.
            if (mAudioSink != NULL && mAudioDecoder == NULL) {
                if (instantiateDecoder(true, &mAudioDecoder) == -EWOULDBLOCK) {
                    rescan = true;
                }
            }
            // 3. If either decoder is not ready yet, rescan later
            if (rescan) {
                msg->post(100000LL);
                mScanSourcesPending = true;
            }
            break;
        }
  1. If the Surface is not NULL, instantiateDecoder is called to create the video decoder;
  2. If mAudioSink is not NULL, the audio decoder is created;
  3. If either decoder could not be created yet, kWhatScanSources keeps being re-posted (with a 100 ms delay) until both the video and the audio decoder have been created.

From this we can roughly guess that playback may start with only video or only audio, and the other track can be picked up later mid-playback, as long as A/V sync can cope with it.

status_t NuPlayer::instantiateDecoder(
        bool audio, sp<DecoderBase> *decoder, bool checkAudioModeChange) {
    // 1. If the decoder has already been created, do not create it again
    if (*decoder != NULL || (audio && mFlushingAudio == SHUT_DOWN)) {
        return OK;
    }
    // 2. Get the format; if it is not available yet, bail out and wait for the next scan
    sp<AMessage> format = mSource->getFormat(audio);
    if (format == NULL) {
        return UNKNOWN_ERROR;
    } else {
        status_t err;
        if (format->findInt32("err", &err) && err) {
            return err;
        }
    }

    format->setInt32("priority", 0 /* realtime */);

    if (mDataSourceType == DATA_SOURCE_TYPE_RTP) {
        ALOGV("instantiateDecoder: set decoder error free on stream corrupt.");
        format->setInt32("corrupt-free", true);
    }
    // 3. For video: create the CCDecoder and fill in the video decoder's config format
    if (!audio) {
        AString mime;
        CHECK(format->findString("mime", &mime));

        sp<AMessage> ccNotify = new AMessage(kWhatClosedCaptionNotify, this);
        if (mCCDecoder == NULL) {
            mCCDecoder = new CCDecoder(ccNotify);
        }

        if (mSourceFlags & Source::FLAG_SECURE) {
            format->setInt32("secure", true);
        }

        if (mSourceFlags & Source::FLAG_PROTECTED) {
            format->setInt32("protected", true);
        }

        float rate = getFrameRate();
        if (rate > 0) {
            format->setFloat("operating-rate", rate * mPlaybackSettings.mSpeed);
        }
    }

    Mutex::Autolock autoLock(mDecoderLock);
    if (audio) {
        sp<AMessage> notify = new AMessage(kWhatAudioNotify, this);
        ++mAudioDecoderGeneration;
        notify->setInt32("generation", mAudioDecoderGeneration);

        if (checkAudioModeChange) {
            determineAudioModeChange(format);
        }
        // 4. Create the audio decoder: DecoderPassThrough in offload mode, a regular Decoder otherwise
        if (mOffloadAudio) {
            mSource->setOffloadAudio(true /* offload */);

            const bool hasVideo = (mSource->getFormat(false /*audio */) != NULL);
            format->setInt32("has-video", hasVideo);
            *decoder = new DecoderPassThrough(notify, mSource, mRenderer);
            ALOGV("instantiateDecoder audio DecoderPassThrough  hasVideo: %d", hasVideo);
        } else {
            mSource->setOffloadAudio(false /* offload */);

            *decoder = new Decoder(notify, mSource, mPID, mUID, mRenderer);
            ALOGV("instantiateDecoder audio Decoder");
        }
        mAudioDecoderError = false;
    } else {
        sp<AMessage> notify = new AMessage(kWhatVideoNotify, this);
        ++mVideoDecoderGeneration;
        notify->setInt32("generation", mVideoDecoderGeneration);
        // 5. Create the video decoder
        *decoder = new Decoder(
                notify, mSource, mPID, mUID, mRenderer, mSurface, mCCDecoder);
        mVideoDecoderError = false;

        // enable FRC if high-quality AV sync is requested, even if not
        // directly queuing to display, as this will even improve textureview
        // playback.
        {
            if (property_get_bool("persist.sys.media.avsync", false)) {
                format->setInt32("auto-frc", 1);
            }
        }
    }
    // 6. Call init() on the newly created decoder
    (*decoder)->init();

    // Modular DRM
    if (mIsDrmProtected) {
        format->setPointer("crypto", mCrypto.get());
        ALOGV("instantiateDecoder: mCrypto: %p (%d) isSecure: %d", mCrypto.get(),
                (mCrypto != NULL ? mCrypto->getStrongCount() : 0),
                (mSourceFlags & Source::FLAG_SECURE) != 0);
    }
    // 7. Call configure() on the newly created decoder
    (*decoder)->configure(format);
    return OK;
}
  1. If the decoder has already been created, it is not created again;
  2. Get the format; if there is no format yet, bail out and wait for the next scan;
  3. For a video decoder, create the CCDecoder and fill in the video decoder's config format;
  4. Create the audio decoder: a DecoderPassThrough in offload mode, a regular Decoder otherwise;
  5. Create the video decoder;
  6. Call init() on the newly created decoder;
  7. Call configure() on the newly created decoder.

When the video and audio decoders are created, the Renderer is passed in as a constructor argument, so decoder and renderer are bound together at this point.

NuPlayerDecoder has no start interface: the start happens automatically inside the configure call, so once instantiateDecoder returns, the decoder is already up and running.
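
A rough sketch of why configure is enough, assuming a simplified NuPlayer::Decoder::onConfigure; the real implementation in NuPlayerDecoder.cpp registers async MediaCodec callbacks and does much more error handling, so this only illustrates the create / configure / start sequence:

// Sketch (not verbatim AOSP): configure() ends up starting the MediaCodec.
void NuPlayer::Decoder::onConfigure(const sp<AMessage> &format) {
    AString mime;
    CHECK(format->findString("mime", &mime));

    // Create the codec by MIME type on the decoder's own looper.
    mCodec = MediaCodec::CreateByType(mCodecLooper, mime, false /* encoder */);
    if (mCodec == NULL) {
        handleError(UNKNOWN_ERROR);
        return;
    }

    status_t err = mCodec->configure(
            format, mSurface, NULL /* crypto */, 0 /* flags */);
    if (err == OK) {
        // There is no separate start() on NuPlayerDecoder:
        // the underlying MediaCodec is started right here.
        err = mCodec->start();
    }
    if (err != OK) {
        handleError(err);
    }
}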

2.4、pause

The pause method is simple: pausing both the Source and the Renderer is enough; the Decoder can no longer get data from the Source, so it naturally stops on its own.

void NuPlayer::onPause() {
    updatePlaybackTimer(true /* stopping */, "onPause");
    if (mPaused) {
        return;
    }
    mPaused = true;
    if (mSource != NULL) {
        mSource->pause();
    } else {
        ALOGW("pause called when source is gone or not set");
    }
    if (mRenderer != NULL) {
        mRenderer->pause();
    } else {
        ALOGW("pause called when renderer is gone or not set");
    }
}

Now for onResume, which we saw in the start handler; it just resumes the Source and the Renderer:

void NuPlayer::onResume() {
    if (!mPaused || mResetting) {
        ALOGD_IF(mResetting, "resetting, onResume discarded");
        return;
    }
    mPaused = false;
    if (mSource != NULL) {
        mSource->resume();
    } else {
        ALOGW("resume called when source is gone or not set");
    }
    // |mAudioDecoder| may have been released due to the pause timeout, so re-create it if
    // needed.
    if (audioDecoderStillNeeded() && mAudioDecoder == NULL) {
        instantiateDecoder(true /* audio */, &mAudioDecoder);
    }
    if (mRenderer != NULL) {
        mRenderer->resume();
    } else {
        ALOGW("resume called when renderer is gone or not set");
    }
}

2.5、resetAsync

void NuPlayer::resetAsync() {
    sp<Source> source;
    {
        Mutex::Autolock autoLock(mSourceLock);
        source = mSource;
    }

    if (source != NULL) {
        source->disconnect();
    }

    (new AMessage(kWhatReset, this))->post();
}

Since the Source may be blocked, resetAsync first calls Source::disconnect so that the reset can finish faster.

        case kWhatReset:
        {
            mResetting = true;
            // 1. flush
            mDeferredActions.push_back(
                    new FlushDecoderAction(
                        FLUSH_CMD_SHUTDOWN /* audio */,
                        FLUSH_CMD_SHUTDOWN /* video */));
            // 2. reset
            mDeferredActions.push_back(
                    new SimpleAction(&NuPlayer::performReset));
            processDeferredActions();
            break;
        }

The reset process consists of two steps:

  1. flush: flush the data held in the decoders and the renderer so that playback stops immediately;
  2. reset: release the resources of the decoders, the renderer and the source.

Let's look at processDeferredActions first. It uses a deferred-execution mechanism, and at first I did not understand why it is needed instead of simply posting messages in order. Reading the comment carefully, it says that deferred actions are not executed while the player is in an intermediate state. That is still a bit cryptic, but the comment also gives an example: while a decoder is flushing or shutting down, the deferred actions are not processed.

My understanding is this: the next few Actions have to run strictly in order, but we are currently inside onMessageReceived, so blocking with postAndAwaitResponse is out of the question; and if the messages were simply posted in order, the work each Action kicks off is asynchronous, so there is no guarantee the previous Action has finished by the time the next one runs. Hence the DeferredActions mechanism: before an Action is executed the state is checked, and if it is not as expected, execution is postponed; once the previous Action has finished, processDeferredActions is called again to run the remaining deferred actions, which guarantees the ordering.
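
For reference, the deferred actions are just small command objects queued on mDeferredActions. A minimal sketch of the pattern, assuming a simplified interface (the real Action, SimpleAction, FlushDecoderAction, etc. live in NuPlayer.h / NuPlayer.cpp and carry a bit more state):

// Sketch (not verbatim AOSP): the command objects queued on mDeferredActions.
struct NuPlayer::Action : public RefBase {
    Action() {}
    virtual void execute(NuPlayer *player) = 0;
};

// Wraps a NuPlayer member function so it can be queued now and run later,
// e.g. new SimpleAction(&NuPlayer::performReset).
struct NuPlayer::SimpleAction : public Action {
    typedef void (NuPlayer::*ActionFunc)();

    explicit SimpleAction(ActionFunc func) : mFunc(func) {}

    virtual void execute(NuPlayer *player) {
        (player->*mFunc)();
    }

private:
    ActionFunc mFunc;
};

processDeferredActions then drains this queue whenever the player is no longer in an intermediate state: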

void NuPlayer::processDeferredActions() {
    while (!mDeferredActions.empty()) {
        // We won't execute any deferred actions until we're no longer in
        // an intermediate state, i.e. one or more decoders are currently
        // flushing or shutting down.
        if (mFlushingAudio != NONE || mFlushingVideo != NONE) {
            // We're currently flushing, postpone the reset until that's
            // completed.
            ALOGV("postponing action mFlushingAudio=%d, mFlushingVideo=%d",
                  mFlushingAudio, mFlushingVideo);
            break;
        }
        sp<Action> action = *mDeferredActions.begin();
        mDeferredActions.erase(mDeferredActions.begin());
        action->execute(this);
    }
}

Next, the flush flow. Its core is a call down into the decoders with the needShutdown parameter set to true:

void NuPlayer::performDecoderFlush(FlushCommand audio, FlushCommand video) {
    ALOGV("performDecoderFlush audio=%d, video=%d", audio, video);

    if ((audio == FLUSH_CMD_NONE || mAudioDecoder == NULL)
            && (video == FLUSH_CMD_NONE || mVideoDecoder == NULL)) {
        return;
    }

    if (audio != FLUSH_CMD_NONE && mAudioDecoder != NULL) {
        flushDecoder(true /* audio */, (audio == FLUSH_CMD_SHUTDOWN));
    }

    if (video != FLUSH_CMD_NONE && mVideoDecoder != NULL) {
        flushDecoder(false /* audio */, (video == FLUSH_CMD_SHUTDOWN));
    }
}

The core of flushDecoder is calling the Decoder's signalFlush method and setting mFlushingAudio / mFlushingVideo to FLUSHING_DECODER_SHUTDOWN, which will end up terminating the decoder. For a plain flush the two flags are set to FLUSHING_DECODER instead, and the decoder keeps running.

void NuPlayer::flushDecoder(bool audio, bool needShutdown) {
    ALOGV("[%s] flushDecoder needShutdown=%d",
          audio ? "audio" : "video", needShutdown);

    const sp<DecoderBase> &decoder = getDecoder(audio);
    if (decoder == NULL) {
        ALOGI("flushDecoder %s without decoder present",
             audio ? "audio" : "video");
        return;
    }

    // Make sure we don't continue to scan sources until we finish flushing.
    ++mScanSourcesGeneration;
    if (mScanSourcesPending) {
        if (!needShutdown) {
            mDeferredActions.push_back(
                    new SimpleAction(&NuPlayer::performScanSources));
        }
        mScanSourcesPending = false;
    }

    decoder->signalFlush();

    FlushStatus newStatus =
        needShutdown ? FLUSHING_DECODER_SHUTDOWN : FLUSHING_DECODER;

    mFlushComplete[audio][false /* isDecoder */] = (mRenderer == NULL);
    mFlushComplete[audio][true /* isDecoder */] = false;
    if (audio) {
        ALOGE_IF(mFlushingAudio != NONE,
                "audio flushDecoder() is called in state %d", mFlushingAudio);
        mFlushingAudio = newStatus;
    } else {
        ALOGE_IF(mFlushingVideo != NONE,
                "video flushDecoder() is called in state %d", mFlushingVideo);
        mFlushingVideo = newStatus;
    }
}

Because mFlushingAudio and mFlushingVideo are no longer NONE, performReset is skipped for the moment once flushDecoder has run. We learned earlier that reset blocks on the caller side, so does this mean we are stuck here?

Of course not. When the decoder finishes flushing, it calls back into NuPlayer. The renderer is flushed as part of the same process and posts its own callback, whose handling is similar to the decoder's; a reconstructed sketch of that branch follows the decoder code below.

else if (what == DecoderBase::kWhatFlushCompleted) {
                ALOGV("decoder %s flush completed", audio ? "audio" : "video");

                handleFlushComplete(audio, true /* isDecoder */);
                finishFlushIfPossible();
            }
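
The renderer-side branch looks roughly like this; it is reconstructed from the kWhatRendererNotify handler rather than quoted verbatim, and it feeds the same handleFlushComplete with isDecoder set to false:

            // Sketch (reconstructed, not verbatim AOSP): renderer flush callback.
            } else if (what == Renderer::kWhatFlushComplete) {
                int32_t audio;
                CHECK(msg->findInt32("audio", &audio));

                ALOGV("renderer %s flush completed.", audio ? "audio" : "video");
                handleFlushComplete(audio, false /* isDecoder */);
                finishFlushIfPossible();
            }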

mFlushComplete is a set of flags indexed by two booleans, mFlushComplete[audio][isDecoder], which gives four combinations:

  • audio decoder
  • audio renderer
  • video decoder
  • video renderer

For each track (audio or video), handleFlushComplete only does its real work once both the decoder and the renderer flush callbacks for that track have come up. If the current FlushStatus is FLUSHING_DECODER, the flush is now finished; if it is FLUSHING_DECODER_SHUTDOWN, it goes on to call the decoder's initiateShutdown method to release the decoder's resources and moves the status to SHUTTING_DOWN_DECODER.

void NuPlayer::handleFlushComplete(bool audio, bool isDecoder) {
    // We wait for both the decoder flush and the renderer flush to complete
    // before entering either the FLUSHED or the SHUTTING_DOWN_DECODER state.

    mFlushComplete[audio][isDecoder] = true;
    if (!mFlushComplete[audio][!isDecoder]) {
        return;
    }

    FlushStatus *state = audio ? &mFlushingAudio : &mFlushingVideo;
    switch (*state) {
        case FLUSHING_DECODER:
        {
            *state = FLUSHED;
            break;
        }

        case FLUSHING_DECODER_SHUTDOWN:
        {
            *state = SHUTTING_DOWN_DECODER;

            ALOGV("initiating %s decoder shutdown", audio ? "audio" : "video");
            getDecoder(audio)->initiateShutdown();
            break;
        }

        default:
            // decoder flush completes only occur in a flushing state.
            LOG_ALWAYS_FATAL_IF(isDecoder, "decoder flush in invalid state %d", *state);
            break;
    }
}

Because the status is now SHUTTING_DOWN_DECODER, finishFlushIfPossible does nothing at this point; NuPlayer then waits for the decoder to finish shutting down and post its callback.

else if (what == DecoderBase::kWhatShutdownCompleted) {
                ALOGV("%s shutdown completed", audio ? "audio" : "video");
                if (audio) {
                    Mutex::Autolock autoLock(mDecoderLock);
                    mAudioDecoder.clear();
                    mAudioDecoderError = false;
                    ++mAudioDecoderGeneration;

                    CHECK_EQ((int)mFlushingAudio, (int)SHUTTING_DOWN_DECODER);
                    mFlushingAudio = SHUT_DOWN;
                } else {
                    Mutex::Autolock autoLock(mDecoderLock);
                    mVideoDecoder.clear();
                    mVideoDecoderError = false;
                    ++mVideoDecoderGeneration;

                    CHECK_EQ((int)mFlushingVideo, (int)SHUTTING_DOWN_DECODER);
                    mFlushingVideo = SHUT_DOWN;
                }

                finishFlushIfPossible();
            } 

After receiving kWhatShutdownCompleted, NuPlayer releases the decoder and then runs finishFlushIfPossible.
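
A sketch of finishFlushIfPossible's gating logic, reconstructed for illustration rather than copied verbatim:

// Sketch (not verbatim AOSP): only proceed once neither track is mid-flush.
void NuPlayer::finishFlushIfPossible() {
    // If either track is still flushing or shutting down, keep waiting.
    if (mFlushingAudio != NONE && mFlushingAudio != FLUSHED
            && mFlushingAudio != SHUT_DOWN) {
        return;
    }
    if (mFlushingVideo != NONE && mFlushingVideo != FLUSHED
            && mFlushingVideo != SHUT_DOWN) {
        return;
    }

    // Both tracks are done: clear the flush bookkeeping and run whatever
    // actions were deferred (performReset, in the reset case).
    mFlushingAudio = NONE;
    mFlushingVideo = NONE;
    clearFlushComplete();

    processDeferredActions();
}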

finishFlushIfPossible resets the mFlushingAudio and mFlushingVideo states and then runs the remaining deferred actions; the next one up is performReset, which wraps up the reset:

void NuPlayer::performReset() {
    ALOGV("performReset");

    CHECK(mAudioDecoder == NULL);
    CHECK(mVideoDecoder == NULL);

    updatePlaybackTimer(true /* stopping */, "performReset");
    updateRebufferingTimer(true /* stopping */, true /* exiting */);

    cancelPollDuration();

    ++mScanSourcesGeneration;
    mScanSourcesPending = false;

    if (mRendererLooper != NULL) {
        if (mRenderer != NULL) {
            mRendererLooper->unregisterHandler(mRenderer->id());
        }
        mRendererLooper->stop();
        mRendererLooper.clear();
    }
    mRenderer.clear();
    ++mRendererGeneration;

    if (mSource != NULL) {
        mSource->stop();

        Mutex::Autolock autoLock(mSourceLock);
        mSource.clear();
    }

    if (mDriver != NULL) {
        sp<NuPlayerDriver> driver = mDriver.promote();
        if (driver != NULL) {
            driver->notifyResetComplete();
        }
    }

    mStarted = false;
    mPrepared = false;
    mResetting = false;
    mSourceStarted = false;

    // Modular DRM
    if (mCrypto != NULL) {
        // decoders will be flushed before this so their mCrypto would go away on their own
        // TODO change to ALOGV
        ALOGD("performReset mCrypto: %p (%d)", mCrypto.get(),
                (mCrypto != NULL ? mCrypto->getStrongCount() : 0));
        mCrypto.clear();
    }
    mIsDrmProtected = false;
}

performReset stops and destroys the RendererLooper and the Renderer, stops and destroys the Source, and finally calls back to notify NuPlayerDriver that the reset has completed.

[Figure 2]

2.6、seekToAsync

If seekToAsync is called when prepare has completed but playback has not started yet, the seek handler calls start for us and then pauses, so that one frame at the target position gets decoded and shown as a preview.

        case kWhatSeek:
        {
            int64_t seekTimeUs;
            int32_t mode;
            int32_t needNotify;
            CHECK(msg->findInt64("seekTimeUs", &seekTimeUs));
            CHECK(msg->findInt32("mode", &mode));
            CHECK(msg->findInt32("needNotify", &needNotify));

            if (!mStarted) {
                // Seek before the player is started. In order to preview video,
                // need to start the player and pause it. This branch is called
                // only once if needed. After the player is started, any seek
                // operation will go through normal path.
                // Audio-only cases are handled separately.
                onStart(seekTimeUs, (MediaPlayerSeekMode)mode);
                if (mStarted) {
                    onPause();
                    mPausedByClient = true;
                }
                if (needNotify) {
                    notifyDriverSeekComplete();
                }
                break;
            }

            mDeferredActions.push_back(
                    new FlushDecoderAction(FLUSH_CMD_FLUSH /* audio */,
                                           FLUSH_CMD_FLUSH /* video */));

            mDeferredActions.push_back(
                    new SeekAction(seekTimeUs, (MediaPlayerSeekMode)mode));

            // After a flush without shutdown, decoder is paused.
            // Don't resume it until source seek is done, otherwise it could
            // start pulling stale data too soon.
            mDeferredActions.push_back(
                    new ResumeDecoderAction(needNotify));

            processDeferredActions();
            break;
        }

Handling a seek consists of three Actions: FlushDecoderAction, SeekAction and ResumeDecoderAction. We already covered FlushDecoderAction in the previous subsection; the difference here is that it does not go down the shutdown path (FLUSH_CMD_FLUSH instead of FLUSH_CMD_SHUTDOWN).

The core of SeekAction is calling the Source's seekTo:

void NuPlayer::performSeek(int64_t seekTimeUs, MediaPlayerSeekMode mode) {
    ALOGV("performSeek seekTimeUs=%lld us (%.2f secs), mode=%d",
          (long long)seekTimeUs, seekTimeUs / 1E6, mode);

    if (mSource == NULL) {
        // This happens when reset occurs right before the loop mode
        // asynchronously seeks to the start of the stream.
        LOG_ALWAYS_FATAL_IF(mAudioDecoder != NULL || mVideoDecoder != NULL,
                "mSource is NULL and decoders not NULL audio(%p) video(%p)",
                mAudioDecoder.get(), mVideoDecoder.get());
        return;
    }
    mPreviousSeekTimeUs = seekTimeUs;
    mSource->seekTo(seekTimeUs, mode);
    ++mTimedTextGeneration;

    // everything's flushed, continue playback.
}

Once the seek is done, resume is called right away to continue playback; without it the screen would simply stay black. The resume step mainly drives the Decoders: it calls signalResume on them, and after signalResume completes, the decoders start accepting data again and playback continues.

void NuPlayer::performResumeDecoders(bool needNotify) {
    if (needNotify) {
        mResumePending = true;
        if (mVideoDecoder == NULL) {
            // if audio-only, we can notify seek complete now,
            // as the resume operation will be relatively fast.
            finishResume();
        }
    }

    if (mVideoDecoder != NULL) {
        // When there is continuous seek, MediaPlayer will cache the seek
        // position, and send down new seek request when previous seek is
        // complete. Let's wait for at least one video output frame before
        // notifying seek complete, so that the video thumbnail gets updated
        // when seekbar is dragged.
        mVideoDecoder->signalResume(needNotify);
    }

    if (mAudioDecoder != NULL) {
        mAudioDecoder->signalResume(false /* needNotify */);
    }
}

void NuPlayer::finishResume() {
    if (mResumePending) {
        mResumePending = false;
        notifyDriverSeekComplete();
    }
}

One more note on seekToAsync's third parameter, needNotify. It exists because seekToAsync is not only called by us explicitly; NuPlayerDriver may also call it on its own. When we call it ourselves, the completion has to be reported back to the upper layer via a callback, and the seek issued when prepare is called again after stop also needs that callback. The automatic case is when start is called after playback has completed: the player seeks back to position 0, and there is no need to notify the upper layer.

One more bit of my own understanding: we saw earlier that prepare is an asynchronous process during which reset needs some special handling, and seek is asynchronous too, so does reset or stop during a seek need special handling as well? The answer is no: the seek runs on the Looper, and the reset and stop messages have to wait until the seek has been handled, so everything executes in order and there is no real concurrency here.


3、Summary

That wraps up this look at NuPlayer. Its asynchronous processing ideas and the overall player control flow deserve careful study; looking back at the player I once wrote myself, where every kind of handling was slower than I'd like, I was clearly still too green.

To finish, here is a recap of what each key method needs to do:

  • setDataSourceAsync:
    1. create Source
  • prepareAsync
    1. Source.prepareAsync
  • start
    1. create Render
    2. create Decoder,start Decoder and Render
  • pause
    1. pause Source
    2. pause Render
  • start (resume)
    1. resume Source
    2. resume Render
  • seekToAsync
    1. flush Decoder (pause) and Render
    2. seek Source
    3. resume Decoder and Render
  • resetAsync
    1. disconnect Source
    2. flush Decoder and Render
    3. shut down Decoder
    4. release Decoder
    5. stop and release Render
    6. stop and release Source

Let's also compare the two ways of pausing:
pause works by stopping the Source from feeding data and stopping the Renderer from rendering; the Decoder itself does not need to be paused;
the pause that happens during a flush is achieved the other way around, by no longer feeding the Decoder any data, so the Source does not need to be paused.
