In the previous two articles we walked through the setDataSource and prepare stages, which gave us mVideoTrack, mAudioTrack, mVideoSource and mAudioSource. The first two come from setDataSource, the latter two from prepare.
status_t AwesomePlayer::setDataSource_l(const sp<MediaExtractor> &extractor) {
    ...
    if (!haveVideo && !strncasecmp(mime.string(), "video/", 6)) {
        setVideoSource(extractor->getTrack(i));
    } else if (!haveAudio && !strncasecmp(mime.string(), "audio/", 6)) {
        setAudioSource(extractor->getTrack(i));
    }
    ...
}
void AwesomePlayer::setVideoSource(sp<MediaSource> source) {
    CHECK(source != NULL);
    mVideoTrack = source;
}

void AwesomePlayer::setAudioSource(sp<MediaSource> source) {
    CHECK(source != NULL);
    mAudioTrack = source;
}
mVideoSource = OMXCodec::Create(
        mClient.interface(), mVideoTrack->getFormat(),
        false, // createEncoder
        mVideoTrack,
        NULL, flags, USE_SURFACE_ALLOC ? mNativeWindow : NULL);

mAudioSource = OMXCodec::Create(
        mClient.interface(), mAudioTrack->getFormat(),
        false, // createEncoder
        mAudioTrack);
With mVideoTrack and mAudioTrack we located and initialized the corresponding decoders. Now let's look at how MediaPlayer actually plays. The upper-layer interface plumbing has already been covered; if anything there is unclear, go back to the setDataSource article. Here we go straight to the AwesomePlayer implementation. Let's first get a feel for the overall sequence, starting with play_l():
status_t AwesomePlayer::play_l() {
    modifyFlags(SEEK_PREVIEW, CLEAR);
    ...
    modifyFlags(PLAYING, SET);
    modifyFlags(FIRST_FRAME, SET);             // set the PLAYING and FIRST_FRAME flags
    ...
    if (mAudioSource != NULL) {                // when mAudioSource is non-NULL, set up the AudioPlayer
        if (mAudioPlayer == NULL) {
            if (mAudioSink != NULL) {
(1)             mAudioPlayer = new AudioPlayer(mAudioSink, allowDeepBuffering, this);
                mAudioPlayer->setSource(mAudioSource);
                seekAudioIfNecessary_l();
            }
        }
        CHECK(!(mFlags & AUDIO_RUNNING));
        if (mVideoSource == NULL) {            // audio-only content: start playback right away
            ...
(2)         status_t err = startAudioPlayer_l(
                    false /* sendErrorNotification */);
            modifyFlags((PLAYING | FIRST_FRAME), CLEAR);
            ...
            return err;
        }
    }
    ...
    if (mVideoSource != NULL) {                // with video, post an event to the queue and wait for it to be handled
        // Kick off video playback
(3)     postVideoEvent_l();
        if (mAudioSource != NULL && mVideoSource != NULL) {   // with both audio and video, check that they stay in sync
(4)         postVideoLagEvent_l();
        }
    }
    ...
    return OK;
}
In play_l we can see that an AudioPlayer is instantiated first to handle the audio. If the content is audio-only, playback starts immediately at step (2); since we are playing a local video here, step (2) is skipped and we go straight to steps (3) and (4). Let's look at postVideoEvent_l(), which is similar to what we saw when discussing prepareAsync_l:
void AwesomePlayer::postVideoEvent_l(int64_t delayUs) {
    ...
    mVideoEventPending = true;
    mQueue.postEventWithDelay(mVideoEvent, delayUs < 0 ? 10000 : delayUs);
}
mVideoEvent was already created when AwesomePlayer was constructed:
mVideoEvent = new AwesomeEvent(this, &AwesomePlayer::onVideoEvent);
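The pattern here is a small event object that stores a pointer to the player and a member-function pointer, and calls that method when the event queue fires the event. Below is a minimal, self-contained sketch of the idea; AwesomePlayerStub and AwesomeEventSketch are hypothetical names for illustration, not the real classes in AwesomePlayer.cpp (the real fire() also receives the TimedEventQueue and a timestamp).

#include <cstdint>
#include <cstdio>

struct AwesomePlayerStub {               // hypothetical stand-in for AwesomePlayer
    void onVideoEvent() { std::printf("onVideoEvent fired\n"); }
};

struct Event {
    virtual ~Event() {}
    virtual void fire(int64_t nowUs) = 0;
};

struct AwesomeEventSketch : public Event {
    typedef void (AwesomePlayerStub::*Method)();

    AwesomeEventSketch(AwesomePlayerStub *player, Method method)
        : mPlayer(player), mMethod(method) {}

    virtual void fire(int64_t /*nowUs*/) {
        (mPlayer->*mMethod)();           // dispatch back into the player
    }

private:
    AwesomePlayerStub *mPlayer;
    Method mMethod;
};

int main() {
    AwesomePlayerStub player;
    AwesomeEventSketch event(&player, &AwesomePlayerStub::onVideoEvent);
    event.fire(0);                       // the event queue would do this after the requested delay
    return 0;
}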
So let's look at the onVideoEvent method:
void AwesomePlayer::onVideoEvent() {
    ...
    if (!mVideoBuffer) {
        ...
        for (;;) {
(1)         status_t err = mVideoSource->read(&mVideoBuffer, &options);   // mVideoSource is the OMXCodec
            ...
            options.clearSeekTo();
            ++mStats.mNumVideoFramesDecoded;
            ...
        }
    }
    ...
(2) status_t err = startAudioPlayer_l();
    ...
    if ((mNativeWindow != NULL)
            && (mVideoRendererIsPreview || mVideoRenderer == NULL)) {
        mVideoRendererIsPreview = false;
(3)     initRenderer_l();
    }
    if (mVideoRenderer != NULL) {
        mSinceLastDropped++;
(4)     mVideoRenderer->render(mVideoBuffer);
    }
    ...
(5) postVideoEvent_l();
}
As we can see, read() decodes one sample at a time to produce mVideoBuffer, which is then rendered to the SurfaceTexture.
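Conceptually the video path is a self-rescheduling loop: every time mVideoEvent fires, one decoded frame is pulled from the decoder, handed to the renderer, and the event is re-posted. The following stripped-down sketch shows only that cycle; readFrame and renderFrame are hypothetical stand-ins, and real A/V sync, frame dropping and error handling are omitted.

#include <chrono>
#include <cstdint>
#include <cstdio>
#include <thread>

struct Frame { int64_t timeUs; };

// Pretend decoder: produces one frame per call until the stream ends.
bool readFrame(Frame *frame, int64_t *nextDelayUs) {
    static int64_t t = 0;
    if (t > 100000) return false;        // pretend the stream ends after 100 ms
    frame->timeUs = t;
    t += 33333;                          // ~30 fps content
    *nextDelayUs = 10000;                // postVideoEvent_l defaults to ~10 ms
    return true;
}

void renderFrame(const Frame &frame) {
    std::printf("render frame @ %lld us\n", (long long)frame.timeUs);
}

int main() {
    // Each iteration models one onVideoEvent(): read -> render -> re-post.
    Frame frame;
    int64_t delayUs = 0;
    while (readFrame(&frame, &delayUs)) {
        renderFrame(frame);
        std::this_thread::sleep_for(std::chrono::microseconds(delayUs));
    }
    return 0;
}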
The read method:
status_t OMXCodec::read(
        MediaBuffer **buffer, const ReadOptions *options) {
    ...
    if (mInitialBufferSubmit) {
        mInitialBufferSubmit = false;

        if (seeking) {
            CHECK(seekTimeUs >= 0);
            mSeekTimeUs = seekTimeUs;
            mSeekMode = seekMode;

            // There's no reason to trigger the code below, there's
            // nothing to flush yet.
            seeking = false;
            mPaused = false;
        }

        drainInputBuffers();      // input side, corresponds to emptyBuffer

        if (mState == EXECUTING) {
            // Otherwise mState == RECONFIGURING and this code will trigger
            // after the output port is reenabled.
            fillOutputBuffers();  // output side, corresponds to fillBuffer
        }
    }
    ...
    size_t index = *mFilledBuffers.begin();
    mFilledBuffers.erase(mFilledBuffers.begin());

    BufferInfo *info = &mPortBuffers[kPortIndexOutput].editItemAt(index);
    CHECK_EQ((int)info->mStatus, (int)OWNED_BY_US);
    info->mStatus = OWNED_BY_CLIENT;

    info->mMediaBuffer->add_ref();
    if (mSkipCutBuffer != NULL) {
        mSkipCutBuffer->submit(info->mMediaBuffer);
    }
    *buffer = info->mMediaBuffer;
}
Before digging into read, let's first revisit OMXCodec::start from the prepare stage, because it is closely tied to what read does. The start method:
status_t OMXCodec::start(MetaData *meta) {
    Mutex::Autolock autoLock(mLock);
    ...
    sp<MetaData> params = new MetaData;
    if (mQuirks & kWantsNALFragments) {
        params->setInt32(kKeyWantsNALFragments, true);
    }
    if (meta) {
        int64_t startTimeUs = 0;
        int64_t timeUs;
        if (meta->findInt64(kKeyTime, &timeUs)) {
            startTimeUs = timeUs;
        }
        params->setInt64(kKeyTime, startTimeUs);
    }

    status_t err = mSource->start(params.get());   // taking MP4 as the example, this is MPEG4Source
    if (err != OK) {
        return err;
    }

    mCodecSpecificDataIndex = 0;
    mInitialBufferSubmit = true;
    mSignalledEOS = false;
    mNoMoreOutputData = false;
    mOutputPortSettingsHaveChanged = false;
    mSeekTimeUs = -1;
    mSeekMode = ReadOptions::SEEK_CLOSEST_SYNC;
    mTargetTimeUs = -1;
    mFilledBuffers.clear();
    mPaused = false;

    return init();
}
status_t OMXCodec::init() {
    ...
    err = allocateBuffers();

    if (mQuirks & kRequiresLoadedToIdleAfterAllocation) {
        err = mOMX->sendCommand(mNode, OMX_CommandStateSet, OMX_StateIdle);
        CHECK_EQ(err, (status_t)OK);
        setState(LOADED_TO_IDLE);   // send the command to move the component to Idle;
                                    // after two callbacks the component ends up in OMX_StateExecuting
    }
    ...
}
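The two callbacks mentioned above are OMX_EventCmdComplete notifications: one when Loaded→Idle completes, one when Idle→Executing completes. The sketch below shows roughly how OMXCodec tracks this handshake with its own state enum; it is a simplification for illustration, not the real onCmdComplete/onStateChange code.

#include <cstdio>

// Simplified OMXCodec-style states for the Loaded -> Idle -> Executing handshake.
enum State { LOADED, LOADED_TO_IDLE, IDLE_TO_EXECUTING, EXECUTING };

struct CodecSketch {
    State mState = LOADED;

    void start() {
        std::printf("sendCommand(StateSet, OMX_StateIdle)\n");
        mState = LOADED_TO_IDLE;
    }

    // Called when the component reports (via OMX_EventCmdComplete) that a
    // state transition has finished.
    void onCmdComplete() {
        if (mState == LOADED_TO_IDLE) {
            std::printf("sendCommand(StateSet, OMX_StateExecuting)\n");
            mState = IDLE_TO_EXECUTING;
        } else if (mState == IDLE_TO_EXECUTING) {
            mState = EXECUTING;          // now ready for emptyBuffer/fillBuffer
            std::printf("component is now EXECUTING\n");
        }
    }
};

int main() {
    CodecSketch codec;
    codec.start();
    codec.onCmdComplete();   // first callback: Idle reached
    codec.onCmdComplete();   // second callback: Executing reached
    return 0;
}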
Since we are using MP4 as the example, mSource is an MPEG4Source, implemented in MPEG4Extractor.cpp. Let's see what its start method does:
status_t MPEG4Source::start(MetaData *params) {
    Mutex::Autolock autoLock(mLock);
    ...
    mGroup = new MediaBufferGroup;

    int32_t max_size;
    CHECK(mFormat->findInt32(kKeyMaxInputSize, &max_size));

    mGroup->add_buffer(new MediaBuffer(max_size));

    mSrcBuffer = new uint8_t[max_size];

    mStarted = true;

    return OK;
}
So it simply sets up an input buffer of the maximum sample size.
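MediaBufferGroup is essentially a small pool: buffers are added once, and acquiring one blocks until a free buffer is available. Here is a minimal sketch of that idea in standard C++; BufferPool is a hypothetical class for illustration, not the real MediaBufferGroup implementation.

#include <condition_variable>
#include <cstddef>
#include <cstdint>
#include <mutex>
#include <vector>

class BufferPool {
public:
    BufferPool(size_t count, size_t maxSize) {
        for (size_t i = 0; i < count; ++i) {
            mBuffers.push_back({std::vector<uint8_t>(maxSize), true});
        }
    }

    // Blocks until a free buffer is available, then hands it out.
    std::vector<uint8_t> *acquire() {
        std::unique_lock<std::mutex> lock(mLock);
        for (;;) {
            for (auto &entry : mBuffers) {
                if (entry.free) {
                    entry.free = false;
                    return &entry.data;
                }
            }
            mCondition.wait(lock);
        }
    }

    void release(std::vector<uint8_t> *buffer) {
        std::lock_guard<std::mutex> lock(mLock);
        for (auto &entry : mBuffers) {
            if (&entry.data == buffer) {
                entry.free = true;
                mCondition.notify_one();
                return;
            }
        }
    }

private:
    struct Entry { std::vector<uint8_t> data; bool free; };
    std::vector<Entry> mBuffers;
    std::mutex mLock;
    std::condition_variable mCondition;
};

int main() {
    BufferPool pool(1, 64 * 1024);        // one buffer of kKeyMaxInputSize bytes, as in MPEG4Source::start
    std::vector<uint8_t> *buf = pool.acquire();
    pool.release(buf);
    return 0;
}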
Now let's look at allocateBuffers():
status_t OMXCodec::allocateBuffers() {
    status_t err = allocateBuffersOnPort(kPortIndexInput);   // configure the input port: buffer count, size, etc. (OMX_PARAM_PORTDEFINITIONTYPE)
    if (err != OK) {
        return err;
    }
    return allocateBuffersOnPort(kPortIndexOutput);          // configure the output port and dequeue buffers to the OMX side
}
OMX_PARAM_PORTDEFINITIONTYPE holds the component's port configuration:
typedef struct OMX_PARAM_PORTDEFINITIONTYPE {
OMX_U32 nSize; /**< Size of the structure in bytes */
OMX_VERSIONTYPE nVersion; /**< OMX specification version information */
OMX_U32 nPortIndex; /**< Port number the structure applies to */
OMX_DIRTYPE eDir; /**< Direction (input or output) of this port */
OMX_U32 nBufferCountActual; /**< The actual number of buffers allocated on this port */
OMX_U32 nBufferCountMin; /**< The minimum number of buffers this port requires */
OMX_U32 nBufferSize; /**< Size, in bytes, for buffers to be used for this channel */
OMX_BOOL bEnabled; /**< Ports default to enabled and are enabled/disabled by
OMX_CommandPortEnable/OMX_CommandPortDisable.
When disabled a port is unpopulated. A disabled port
is not populated with buffers on a transition to IDLE. */
OMX_BOOL bPopulated; /**< Port is populated with all of its buffers as indicated by
nBufferCountActual. A disabled port is always unpopulated.
An enabled port is populated on a transition to OMX_StateIdle
and unpopulated on a transition to loaded. */
OMX_PORTDOMAINTYPE eDomain; /**< Domain of the port. Determines the contents of metadata below. */
union {
OMX_AUDIO_PORTDEFINITIONTYPE audio;
OMX_VIDEO_PORTDEFINITIONTYPE video;
OMX_IMAGE_PORTDEFINITIONTYPE image;
OMX_OTHER_PORTDEFINITIONTYPE other;
} format;
OMX_BOOL bBuffersContiguous;
OMX_U32 nBufferAlignment;
} OMX_PARAM_PORTDEFINITIONTYPE;
Where do the values in OMX_PARAM_PORTDEFINITIONTYPE come from? They come from the decoder itself, including the buffer size and count for the input and output ports.
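Before getParameter can be queried, the caller has to fill in nSize, nVersion and nPortIndex so the component knows which structure and port are being asked about. The sketch below shows what an InitOMXParams-style helper boils down to; the stand-in types are only for illustration (the real definitions live in the OpenMAX IL headers), and the version value is an assumption, not the exact constant Android uses.

#include <cstdint>
#include <cstring>

typedef uint32_t OMX_U32;                          // stand-in for the real OMX_Types.h typedefs
typedef union { OMX_U32 nVersion; } OMX_VERSIONTYPE;

struct PortDefinitionSketch {                      // only the fields needed here are modeled
    OMX_U32 nSize;
    OMX_VERSIONTYPE nVersion;
    OMX_U32 nPortIndex;
    OMX_U32 nBufferCountActual;
    OMX_U32 nBufferSize;
};

// Zero the structure, then record its size and the OMX spec version so the
// component can validate the request.
template <typename T>
void InitParams(T *params) {
    std::memset(params, 0, sizeof(T));
    params->nSize = sizeof(T);
    params->nVersion.nVersion = 1;                 // illustrative; the real helper packs major/minor/revision/step
}

int main() {
    PortDefinitionSketch def;
    InitParams(&def);
    def.nPortIndex = 0;                            // e.g. kPortIndexInput in OMXCodec
    // mOMX->getParameter(mNode, OMX_IndexParamPortDefinition, &def, sizeof(def))
    // would then fill in nBufferCountActual, nBufferSize, and so on.
    return 0;
}

With the structure initialized this way, allocateBuffersOnPort queries the component and allocates accordingly: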
status_t OMXCodec::allocateBuffersOnPort(OMX_U32 portIndex) {
    if (mNativeWindow != NULL && portIndex == kPortIndexOutput) {
        return allocateOutputBuffersFromNativeWindow();   // taken for the output port: allocate the output buffers and dequeue them to OMX
    }

    OMX_PARAM_PORTDEFINITIONTYPE def;
    InitOMXParams(&def);
    def.nPortIndex = portIndex;

    err = mOMX->getParameter(
            mNode, OMX_IndexParamPortDefinition, &def, sizeof(def));   // query the port configuration from the component (see the structure above for the fields)
    if (err != OK) {
        return err;
    }

    size_t totalSize = def.nBufferCountActual * def.nBufferSize;   // buffer size and count for this port, as returned by getParameter
    mDealer[portIndex] = new MemoryDealer(totalSize, "OMXCodec");

    for (OMX_U32 i = 0; i < def.nBufferCountActual; ++i) {
        sp<IMemory> mem = mDealer[portIndex]->allocate(def.nBufferSize);
        CHECK(mem.get() != NULL);

        BufferInfo info;
        info.mData = NULL;
        info.mSize = def.nBufferSize;

        IOMX::buffer_id buffer;
        if (portIndex == kPortIndexInput
                && ((mQuirks & kRequiresAllocateBufferOnInputPorts)
                    || (mFlags & kUseSecureInputBuffers))) {
            if (mOMXLivesLocally) {
                mem.clear();

                err = mOMX->allocateBuffer(
                        mNode, portIndex, def.nBufferSize, &buffer,
                        &info.mData);   // allocate memory for the input port and point info.mData at the buffer header on mNode
                ...
            }
        }
        ...
        info.mBuffer = buffer;
        info.mStatus = OWNED_BY_US;
        info.mMem = mem;
        info.mMediaBuffer = NULL;

        mPortBuffers[portIndex].push(info);   // each BufferInfo is pushed into Vector<BufferInfo> mPortBuffers[2] for later use in read(); index 0 is the input port, 1 is the output port
        ...
    }
    ...
}
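Each BufferInfo carries an ownership state (the mStatus field above), and the rest of the flow is essentially about moving buffers between these states: the codec owns a buffer (OWNED_BY_US), hands it to the component via emptyBuffer/fillBuffer (OWNED_BY_COMPONENT), gets it back in the *_BUFFER_DONE callback, and finally hands a filled output buffer to the caller of read() (OWNED_BY_CLIENT). A compact sketch of those transitions, using the same state names but with the surrounding logic stripped away:

#include <cassert>
#include <cstdio>

// Ownership states as used by OMXCodec's BufferInfo (logic simplified here).
enum BufferStatus {
    OWNED_BY_US,          // held by OMXCodec, free to submit
    OWNED_BY_COMPONENT,   // handed to the OMX component via emptyBuffer/fillBuffer
    OWNED_BY_CLIENT,      // returned to the caller of read()
};

struct BufferInfoSketch {
    BufferStatus mStatus = OWNED_BY_US;
};

int main() {
    BufferInfoSketch info;

    // fillBuffer(): the empty output buffer is given to the component.
    assert(info.mStatus == OWNED_BY_US);
    info.mStatus = OWNED_BY_COMPONENT;

    // FILL_BUFFER_DONE callback: the component returns it, now holding a frame.
    info.mStatus = OWNED_BY_US;

    // read(): the filled buffer is handed to AwesomePlayer as mVideoBuffer.
    info.mStatus = OWNED_BY_CLIENT;

    // MediaBuffer::release() by the client eventually makes it OWNED_BY_US again.
    info.mStatus = OWNED_BY_US;
    std::printf("buffer is back with the codec\n");
    return 0;
}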
Having reviewed the start method, let's get back to read:
status_t OMXCodec::read(
        MediaBuffer **buffer, const ReadOptions *options) {
    if (mInitialBufferSubmit) {
        mInitialBufferSubmit = false;
        ...
        drainInputBuffers();

        if (mState == EXECUTING) {
            // Otherwise mState == RECONFIGURING and this code will trigger
            // after the output port is reenabled.
            fillOutputBuffers();
        }
    }
    ...
    size_t index = *mFilledBuffers.begin();
    mFilledBuffers.erase(mFilledBuffers.begin());

    BufferInfo *info = &mPortBuffers[kPortIndexOutput].editItemAt(index);
    CHECK_EQ((int)info->mStatus, (int)OWNED_BY_US);
    info->mStatus = OWNED_BY_CLIENT;

    info->mMediaBuffer->add_ref();
    if (mSkipCutBuffer != NULL) {
        mSkipCutBuffer->submit(info->mMediaBuffer);
    }
    *buffer = info->mMediaBuffer;
}
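The part elided above is how read() waits: the FILL_BUFFER_DONE handler (running on the OMX callback thread) pushes the index of the filled output buffer onto mFilledBuffers and signals mBufferFilled, while read() sleeps on that condition until the list is non-empty. The following self-contained sketch reproduces that hand-off with standard C++ threads instead of the Android primitives; the function names are hypothetical.

#include <condition_variable>
#include <cstddef>
#include <cstdio>
#include <list>
#include <mutex>
#include <thread>

std::mutex gLock;
std::condition_variable gBufferFilled;   // plays the role of mBufferFilled
std::list<size_t> gFilledBuffers;        // plays the role of mFilledBuffers

// Models on_message(FILL_BUFFER_DONE): the callback thread queues the index
// of the output buffer the component just filled, then signals read().
void onFillBufferDone(size_t bufferIndex) {
    std::lock_guard<std::mutex> lock(gLock);
    gFilledBuffers.push_back(bufferIndex);
    gBufferFilled.notify_one();
}

// Models the waiting part of OMXCodec::read(): block until a filled output
// buffer is available, then pop the oldest one.
size_t waitForFilledBuffer() {
    std::unique_lock<std::mutex> lock(gLock);
    gBufferFilled.wait(lock, [] { return !gFilledBuffers.empty(); });
    size_t index = gFilledBuffers.front();
    gFilledBuffers.pop_front();
    return index;
}

int main() {
    std::thread omxCallbackThread([] { onFillBufferDone(0); });
    size_t index = waitForFilledBuffer();
    std::printf("read() got output buffer #%zu\n", index);
    omxCallbackThread.join();
    return 0;
}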
Let's look at drainInputBuffers first; its main job is to pull source data from the MediaSource.
void OMXCodec::drainInputBuffers() {
    CHECK(mState == EXECUTING || mState == RECONFIGURING);

    if (mFlags & kUseSecureInputBuffers) {
        Vector<BufferInfo> *buffers = &mPortBuffers[kPortIndexInput];   // mPortBuffers holds the input/output BufferInfo entries saved in allocateBuffersOnPort
        for (size_t i = 0; i < buffers->size(); ++i) {   // loop over every input buffer that can be filled; the count depends on the component and differs between vendors' decoders
            if (!drainAnyInputBuffer()                   // fill a buffer with source data and hand it to the decoder
                    || (mFlags & kOnlySubmitOneInputBufferAtOneTime)) {
                break;
            }
        }
    }
    ...
}
bool OMXCodec::drainAnyInputBuffer() {
    return drainInputBuffer((BufferInfo *)NULL);
}

bool OMXCodec::drainInputBuffer(BufferInfo *info) {
    ...
    for (;;) {
        MediaBuffer *srcBuffer;
        if (mSeekTimeUs >= 0) {
            if (mLeftOverBuffer) {
                mLeftOverBuffer->release();
                mLeftOverBuffer = NULL;
            }

            MediaSource::ReadOptions options;
            options.setSeekTo(mSeekTimeUs, mSeekMode);

            mSeekTimeUs = -1;
            mSeekMode = ReadOptions::SEEK_CLOSEST_SYNC;   // seek mode
            mBufferFilled.signal();

            err = mSource->read(&srcBuffer, &options);    // read from the MediaSource; for MPEG-4 this is implemented in MPEG4Extractor.cpp, which uses the seek mode and seek time to look the sample up in the sample table and stores it in srcBuffer
        }
        ...
        if (mFlags & kUseSecureInputBuffers) {
            info = findInputBufferByDataPointer(srcBuffer->data());   // find the BufferInfo whose mData points at this source data
            CHECK(info != NULL);
        }
        ...
    }
    ...
    err = mOMX->emptyBuffer(
            mNode, info->mBuffer, 0, offset,
            flags, timestampUs);   // the corresponding component call is OMX_EmptyThisBuffer; the callback message is EmptyBufferDone
    if (err != OK) {
        setState(ERROR);
        return false;
    }

    info->mStatus = OWNED_BY_COMPONENT;   // the buffer now belongs to the component
    ...
}
From the analysis above, once emptyBuffer has been called an EmptyBufferDone callback follows shortly afterwards (within a few milliseconds). Let's look at how OMXCodec handles that callback:
void OMXCodec::on_message(const omx_message &msg) {
    ...
    case omx_message::EMPTY_BUFFER_DONE:
    {
        ...
        IOMX::buffer_id buffer = msg.u.extended_buffer_data.buffer;
        CODEC_LOGV("EMPTY_BUFFER_DONE(buffer: %p)", buffer);

        Vector<BufferInfo> *buffers = &mPortBuffers[kPortIndexInput];
        size_t i = 0;
        while (i < buffers->size() && (*buffers)[i].mBuffer != buffer) {
            ++i;
        }
        BufferInfo* info = &buffers->editItemAt(i);   // look up the BufferInfo in Vector<BufferInfo> by buffer_id

        info->mStatus = OWNED_BY_US;                  // the buffer is ours again
        info->mMediaBuffer->release();                // release the MediaBuffer
        info->mMediaBuffer = NULL;
        ...
        if (mState != ERROR
                && mPortStatus[kPortIndexInput] != SHUTTING_DOWN) {
            CHECK_EQ((int)mPortStatus[kPortIndexInput], (int)ENABLED);

            if (mFlags & kUseSecureInputBuffers) {
                drainAnyInputBuffer();                // hand the next chunk of input to the component
            } else {
                drainInputBuffer(&buffers->editItemAt(i));
            }
        }
        ...
    }
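So the input side is a ping-pong: drainInputBuffer submits one chunk of demuxed data via emptyBuffer, and when EMPTY_BUFFER_DONE returns the same buffer the handler immediately refills it with the next chunk. A tiny simulation of that loop (hypothetical names; in the real code the callback arrives asynchronously on the OMX message thread rather than being called inline):

#include <cstdio>
#include <queue>

std::queue<int> gSamples;                 // pretend these are demuxed samples

void emptyBuffer(int sample);             // forward declaration

// Models OMXCodec::drainInputBuffer(): read one sample and hand it over.
bool drainInputBuffer() {
    if (gSamples.empty()) return false;   // end of stream
    int sample = gSamples.front();
    gSamples.pop();
    emptyBuffer(sample);                  // the buffer is now OWNED_BY_COMPONENT
    return true;
}

// Models the EMPTY_BUFFER_DONE branch of on_message(): the buffer is ours
// again, so refill it with the next piece of input.
void onEmptyBufferDone() {
    drainInputBuffer();
}

void emptyBuffer(int sample) {
    std::printf("decode sample %d\n", sample);
    onEmptyBufferDone();                  // the real callback is asynchronous
}

int main() {
    for (int i = 0; i < 5; ++i) gSamples.push(i);
    drainInputBuffer();                   // the first submit comes from read()
    return 0;
}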
After emptyBuffer the next step is fillOutputBuffer:
void OMXCodec::fillOutputBuffer(BufferInfo *info) {
    CHECK_EQ((int)info->mStatus, (int)OWNED_BY_US);

    if (mNoMoreOutputData) {
        CODEC_LOGV("There is no more output data available, not "
             "calling fillOutputBuffer");   // nothing left to output, so bail out
        return;
    }

    if (info->mMediaBuffer != NULL) {
        sp<GraphicBuffer> graphicBuffer = info->mMediaBuffer->graphicBuffer();
        if (graphicBuffer != 0) {
            // When using a native buffer we need to lock the buffer before
            // giving it to OMX.
            CODEC_LOGV("Calling lockBuffer on %p", info->mBuffer);
            int err = mNativeWindow->lockBuffer(mNativeWindow.get(),
                    graphicBuffer.get());   // lock the buffer in preparation for rendering the frame
            if (err != 0) {
                CODEC_LOGE("lockBuffer failed w/ error 0x%08x", err);
                setState(ERROR);
                return;
            }
        }
    }

    CODEC_LOGV("Calling fillBuffer on buffer %p", info->mBuffer);
    status_t err = mOMX->fillBuffer(mNode, info->mBuffer);   // ask the component to fill this output buffer
    ...
    info->mStatus = OWNED_BY_COMPONENT;
}
Once fillBuffer has produced mVideoBuffer, the frame can be displayed by mVideoRenderer->render(mVideoBuffer) in AwesomePlayer's onVideoEvent method.
That is the playback flow. This completes the walkthrough of local media playback. Many of the finer details still deserve deeper study on your own, and I will add to this series later if anything needs to be supplemented.