1. The Video Playback Flow
On Android, the default multimedia framework is OpenCORE. OpenCORE's strengths are cross-platform portability and extensive field testing, which make it comparatively stable; its weakness is that it is large and complex, and costs considerable time to maintain. Starting with Android 2.0, Google introduced the architecturally simpler Stagefright, which has been gradually replacing OpenCORE (Note 1).
[Figure 1] Stagefright's position in the Android multimedia architecture.
[Figure 2] The modules covered by Stagefright (Note 2).
Let us first look at how Stagefright plays back a video file.
Stagefright lives in Android as a shared library (libstagefright.so); its AwesomePlayer module handles video/audio playback (Note 3). AwesomePlayer exposes many APIs for the upper layers (Java/JNI) to call, so we will use a simple program to illustrate the video playback flow.
In Java, to play a video file we would write:
MediaPlayer mp = new MediaPlayer();
mp.setDataSource(PATH_TO_FILE); ...... (1)
mp.prepare(); ........................ (2)、(3)
mp.start(); .......................... (4)
In Stagefright, the corresponding handling looks like this:
(1) Assign the file's absolute path to mUri
status_t AwesomePlayer::setDataSource(const char* uri, ...)
{
return setDataSource_l(uri, ...);
}
status_t AwesomePlayer::setDataSource_l(const char* uri, ...)
{
mUri = uri;
}
(2) Start mQueue, which serves as the event handler
status_t AwesomePlayer::prepare()
{
return prepare_l();
}
status_t AwesomePlayer::prepare_l()
{
prepareAsync_l();
while (mFlags & PREPARING)
{
mPreparedCondition.wait(mLock);
}
}
status_t AwesomePlayer::prepareAsync_l()
{
mQueue.start();
mFlags |= PREPARING;
mAsyncPrepareEvent = new AwesomeEvent(this, &AwesomePlayer::onPrepareAsyncEvent);
mQueue.postEvent(mAsyncPrepareEvent);
}
(3) onPrepareAsyncEvent is triggered
void AwesomePlayer::onPrepareAsyncEvent()
{
finishSetDataSource_l();
initVideoDecoder(); ...... (3.3)
initAudioDecoder();
}
status_t AwesomePlayer::finishSetDataSource_l()
{
dataSource = DataSource::CreateFromURI(mUri.string(), ...);
sp<MediaExtractor> extractor = MediaExtractor::Create(dataSource); ..... (3.1)
return setDataSource_l(extractor); ......................... (3.2)
}
(3.1) Parse the file specified by mUri, and choose the matching extractor according to its header
sp<MediaExtractor> MediaExtractor::Create(const sp<DataSource> &source, ...)
{
source->sniff(&tmp, ...);
mime = tmp.string();
if (!strcasecmp(mime, MEDIA_MIMETYPE_CONTAINER_MPEG4))
{
return new MPEG4Extractor(source);
}
else if (!strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_MPEG))
{
return new MP3Extractor(source);
}
else if (!strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_AMR_NB))
{
return new AMRExtractor(source);
}
}
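The sniff call deserves a quick sketch of its own. Conceptually, every registered sniffer inspects the stream header and reports a MIME type together with a confidence value, and the most confident answer wins. The code below only illustrates that idea; the names and the signature are simplified stand-ins, not the actual DataSource API:
#include <cstddef>
#include <cstdint>
#include <string>
#include <vector>

typedef bool (*SniffFunc)(const uint8_t *header, size_t size, std::string *mime, float *confidence);

// Ask every sniffer and keep the MIME type reported with the highest confidence.
static bool sniffBest(const std::vector<SniffFunc> &sniffers, const uint8_t *header, size_t size, std::string *mime)
{
float best = 0.0f;
for (size_t i = 0; i < sniffers.size(); ++i)
{
std::string candidate;
float confidence = 0.0f;
if (sniffers[i](header, size, &candidate, &confidence) && confidence > best)
{
best = confidence;
*mime = candidate;
}
}
return best > 0.0f;
}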
(3.2) Use the extractor to separate the file's audio and video tracks (mVideoTrack/mAudioTrack)
status_t AwesomePlayer::setDataSource_l(const sp<MediaExtractor> &extractor)
{
for (size_t i = 0; i < extractor->countTracks(); ++i)
{
sp<MetaData> meta = extractor->getTrackMetaData(i);
CHECK(meta->findCString(kKeyMIMEType, &mime));
if(!haveVideo && !strncasecmp(mime, "video/", 6))
{
setVideoSource(extractor->getTrack(i));
haveVideo = true;
}
else if (!haveAudio && !strncasecmp(mime, "audio/", 6))
{
setAudioSource(extractor->getTrack(i));
haveAudio = true;
}
}
}
void AwesomePlayer::setVideoSource(sp<MediaSource> source)
{
mVideoTrack = source;
}
(3.3) Choose the video decoder (mVideoSource) according to the codec type recorded in mVideoTrack
status_t AwesomePlayer::initVideoDecoder()
{
mVideoSource = OMXCodec::Create(mClient.interface(), mVideoTrack->getFormat(), false, mVideoTrack);
}
(4) Post mVideoEvent into mQueue; decoding and playback begin, and mVideoRenderer draws the result
status_t AwesomePlayer::play()
{
return play_l();
}
status_t AwesomePlayer::play_l()
{
postVideoEvent_l();
}
void AwesomePlayer::postVideoEvent_l(int64_t delayUs)
{
mQueue.postEventWithDelay(mVideoEvent, delayUs);
}
void AwesomePlayer::onVideoEvent()
{
mVideoSource->read(&mVideoBuffer, &options);
[Check Timestamp]
mVideoRenderer->render(mVideoBuffer);
postVideoEvent_l();
}
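Note that onVideoEvent re-posts mVideoEvent at the end, so decode-then-render repeats frame after frame as a self-sustaining loop running on mQueue; this loop is also where A/V synchronization hooks in later (see part 6).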
2. Choosing the Video Decoder
Let us look at how Stagefright chooses a suitable video decoder based on the media file's type.
(1) The video decoder is decided in initVideoDecoder, which runs inside onPrepareAsyncEvent. OMXCodec::Create() returns the video decoder and assigns it to mVideoSource.
status_t AwesomePlayer::initVideoDecoder()
{
mVideoSource = OMXCodec::Create(mClient.interface(), mVideoTrack->getFormat(), false, mVideoTrack);
}
sp<MediaSource> OMXCodec::Create(&omx, &meta, createEncoder, &source, matchComponentName)
{
meta->findCString(kKeyMIMEType, &mime);
findMatchingCodecs(mime, ..., &matchingCodecs); ........ (2)
for (size_t i = 0; i < matchingCodecs.size(); ++i)
{
componentName = matchingCodecs[i].string();
softwareCodec = InstantiateSoftwareCodec(componentName, ...); ..... (3)
if (softwareCodec != NULL) return softwareCodec;
err = omx->allocateNode(componentName, ..., &node); ... (4)
if (err == OK)
{
codec = new OMXCodec(..., componentName, ...); ...... (5)
return codec;
}
}
}
(2) Pick the suitable components out of kDecoderInfo according to mVideoTrack's MIME type
void OMXCodec::findMatchingCodecs(mime, ..., matchingCodecs)
{
for (int index = 0;; ++index)
{
componentName = GetCodec(kDecoderInfo, sizeof(kDecoderInfo)/sizeof(kDecoderInfo[0]), mime, index);
if (componentName == NULL) break; // stop once there are no more matches
matchingCodecs->push(String8(componentName));
}
}
static const CodecInfo kDecoderInfo[] =
{
...
{ MEDIA_MIMETYPE_VIDEO_MPEG4, "OMX.qcom.video.decoder.mpeg4" },
{ MEDIA_MIMETYPE_VIDEO_MPEG4, "OMX.TI.Video.Decoder" },
{ MEDIA_MIMETYPE_VIDEO_MPEG4, "M4vH263Decoder" },
...
};
GetCodec walks kDecoderInfo by MIME type, picks out every matching component name in order, and stores them in matchingCodecs.
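The body of GetCodec is not shown above, but given that description it can be sketched as a simple table walk. The following is a reconstruction for illustration (assuming CodecInfo is a { mime, codec } pair of strings, as the table above suggests), not necessarily the verbatim source:
static const char *GetCodec(const CodecInfo *info, size_t numInfos, const char *mime, int index)
{
for (size_t i = 0; i < numInfos; ++i)
{
if (!strcasecmp(mime, info[i].mime))
{
if (index == 0) return info[i].codec; // the index-th match for this MIME type
--index;
}
}
return NULL; // no more components registered for this MIME type
}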
(3) Following the order of the components in matchingCodecs, first check whether each one is a software decoder
static sp<MediaSource> InstantiateSoftwareCodec(name, ...)
{
FactoryInfo kFactoryInfo[] =
{
...
FACTORY_REF(M4vH263Decoder)
...
};
for (i = 0; i < sizeof(kFactoryInfo)/sizeof(kFactoryInfo[0]); ++i)
{
if (!strcmp(name, kFactoryInfo[i].name))
return (*kFactoryInfo[i].CreateFunc)(source);
}
}
All software decoders are listed in kFactoryInfo; the name passed in is used to find the matching decoder.
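FACTORY_REF is a small macro trick. Most plausibly it expands to a { name, create-function } table entry along the following lines; this is a sketch of the pattern, not guaranteed verbatim source:
#define FACTORY_CREATE(name) \
static sp<MediaSource> Make##name(const sp<MediaSource> &source) \
{ \
return new name(source); \
}
#define FACTORY_REF(name) { #name, Make##name },

FACTORY_CREATE(M4vH263Decoder) // generates MakeM4vH263Decoder
With that expansion, FACTORY_REF(M4vH263Decoder) becomes { "M4vH263Decoder", MakeM4vH263Decoder }, which is exactly the name/CreateFunc pair that InstantiateSoftwareCodec compares against and invokes.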
(4) If the component is not a software decoder, try to allocate the corresponding OMX component
status_t OMX::allocateNode(name, ..., node)
{
mMaster->makeComponentInstance(name, &OMXNodeInstance::kCallbacks, instance, handle);
}
OMX_ERRORTYPE OMXMaster::makeComponentInstance(name, ...)
{
plugin->makeComponentInstance(name, ...);
}
OMX_ERRORTYPE OMXPVCodecsPlugin::makeComponentInstance(name, ...)
{
return OMX_MasterGetHandle(..., name, ...);
}
OMX_ERRORTYPE OMX_MasterGetHandle(...)
{
return OMX_GetHandle(...);
}
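For context: OMX_MasterGetHandle belongs to OpenCORE's OMX master core (hence the OMXPVCodecsPlugin hop in the chain), while OMX_GetHandle is the standard OpenMAX IL entry point. At the end of this call chain, the vendor's OpenMAX IL core instantiates the actual, typically hardware-accelerated, decoder component.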
(5) If the component is an OMX decoder, return it; otherwise go on and check the next component
3. The Video Buffer Transfer Flow
Next we look at how Stagefright passes buffers back and forth with the OMX video decoder.
(1) At the very beginning, OMXCodec uses the read function to send undecoded data to the decoder and to ask the decoder to send the decoded data back
status_t OMXCodec::read(...)
{
if(mInitialBufferSubmit)
{
mInitialBufferSubmit = false;
drainInputBuffers(); //----- OMX_EmptyThisBuffer (empty the input buffers)
fillOutputBuffers(); //----- OMX_FillThisBuffer (fill the output buffers)
}
...
}
void OMXCodec::drainInputBuffers()
{
Vector<BufferInfo> *buffers = &mPortBuffers[kPortIndexInput];
for (i = 0; i < buffers->size(); ++i)
{
drainInputBuffer(&buffers->editItemAt(i));
}
}
void OMXCodec::drainInputBuffer(BufferInfo *info)
{
mOMX->emptyBuffer(...);
}
void OMXCodec::fillOutputBuffers()
{
Vector<BufferInfo> *buffers = &mPortBuffers[kPortIndexOutput];
for (i = 0; i < buffers->size(); ++i)
{
fillOutputBuffer(&buffers->editItemAt(i));
}
}
void OMXCodec::fillOutputBuffer(BufferInfo *info)
{
mOMX->fillBuffer(...);
}
(2) After the decoder reads the data on the input port it starts decoding, then sends back EmptyBufferDone to notify OMXCodec
void OMXCodec::on_message(const omx_message &msg)
{
switch (msg.type)
{
case omx_message::EMPTY_BUFFER_DONE:
{
IOMX::buffer_id buffer = msg.u.extended_buffer_data.buffer;
// look up the BufferInfo matching this buffer id, then submit
// the next chunk of undecoded data on it
drainInputBuffer(&buffers->editItemAt(i));
}
}
}
After receiving EMPTY_BUFFER_DONE, OMXCodec sends the next chunk of undecoded data to the decoder.
(3) The decoder delivers the decoded data to the output port and sends back FillBufferDone to notify OMXCodec
void OMXCodec::on_message(const omx_message &msg)
{
switch (msg.type)
{
case omx_message::FILL_BUFFER_DONE:
{
IOMX::buffer_id buffer = msg.u.extended_buffer_data.buffer;
// record the decoded buffer for read() and request further output
fillOutputBuffer(info);
mFilledBuffers.push_back(i);
mBufferFilled.signal();
}
}
}
After receiving FILL_BUFFER_DONE, OMXCodec puts the decoded data into mFilledBuffers, signals mBufferFilled, and asks the decoder to keep sending data.
(4) The latter half of the read function waits on the mBufferFilled signal. Once mFilledBuffers has been filled, read assigns the data to the buffer pointer and returns it to AwesomePlayer
status_t OMXCodec::read(MediaBuffer **buffer, ...)
{
...
while (mFilledBuffers.empty())
{
mBufferFilled.wait(mLock);
}
size_t index = *mFilledBuffers.begin(); // take the first filled buffer
mFilledBuffers.erase(mFilledBuffers.begin());
BufferInfo *info = &mPortBuffers[kPortIndexOutput].editItemAt(index);
info->mMediaBuffer->add_ref();
*buffer = info->mMediaBuffer;
}
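Note the add_ref: read hands out a reference-counted MediaBuffer, so the caller (AwesomePlayer) becomes responsible for calling release() on it when done; that is exactly what onVideoEvent does when it drops a late frame, as shown in part 6.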
4. Video Rendering
Besides fetching decoded data through OMXCodec::read, AwesomePlayer::onVideoEvent must also hand that data (mVideoBuffer) to the video renderer so that it gets drawn on the screen.
(1) Before the data in mVideoBuffer can be drawn, mVideoRenderer must first be created
void AwesomePlayer::onVideoEvent()
{
...
if (mVideoRenderer == NULL)
{
initRenderer_l();
}
...
}
void AwesomePlayer::initRenderer_l()
{
if (!strncmp("OMX.", component, 4))
{
mVideoRenderer = new AwesomeRemoteRenderer(mClient.interface()->createRenderer(mISurface, component, ...)); .......... (2)
}
else
{
mVideoRenderer = new AwesomeLocalRenderer(..., component, mISurface); ............................ (3)
}
}
(2) If the video decoder is an OMX component, create an AwesomeRemoteRenderer as mVideoRenderer
As the code in step (1) above shows, what AwesomeRemoteRenderer wraps is really created by OMX::createRenderer. createRenderer first tries to build a hardware renderer, SharedVideoRenderer (libstagefrighthw.so); if that fails, it falls back to a software renderer, SoftwareRenderer (surface).
sp<IOMXRenderer> OMX::createRenderer(...)
{
VideoRenderer *impl = NULL;
libHandle = dlopen("libstagefrighthw.so", RTLD_NOW);
if (libHandle)
{
CreateRendererFunc func = dlsym(libHandle, ...);
impl = (*func)(...); <----------------- Hardware Renderer
}
if (!impl)
{
impl = new SoftwareRenderer(...); <---- Software Renderer
}
}
(3) If the video decoder is a software component, create an AwesomeLocalRenderer as mVideoRenderer
AwesomeLocalRenderer's constructor calls its own init function, which does exactly the same thing as OMX::createRenderer.
void AwesomeLocalRenderer::init(...)
{
mLibHandle = dlopen("libstagefrighthw.so", RTLD_NOW);
if(mLibHandle)
{
CreateRendererFunc func = dlsym(...);
mTarget = (*func)(...); <---------------- Hardware Renderer
}
if(mTarget == NULL)
{
mTarget = new SoftwareRenderer(...); <--- Software Renderer
}
}
(4) Once mVideoRenderer has been created, the decoded data can be handed to it
void AwesomePlayer::onVideoEvent()
{
if (!mVideoBuffer)
{
mVideoSource->read(&mVideoBuffer, ...);
}
[Check Timestamp]
if (mVideoRenderer == NULL)
{
initRenderer_l();
}
mVideoRenderer->render(mVideoBuffer); <----- Render Data
}
5. The Audio Playback Flow
Now we turn to the audio processing flow. In Stagefright, audio is handled by AudioPlayer, which is created in AwesomePlayer::play_l.
(1) When the upper-layer application requests audio/video playback, an AudioPlayer is created along the way and started
status_t AwesomePlayer::play_l()
{
...
mAudioPlayer = new AudioPlayer(mAudioSink, ...);
mAudioPlayer->start(...);
...
}
(2) During startup, AudioPlayer first reads the first chunk of decoded data and opens the audio output
status_t AudioPlayer::start(...)
{
mSource->read(&mFirstBuffer);
if(mAudioSink.get() != NULL)
{
mAudioSink->open(..., &AudioPlayer::AudioSinkCallback, ...);
mAudioSink->start();
}
else
{
mAudioTrack = new AudioTrack(..., &AudioPlayer::AudioCallback, ...);
mAudioTrack->start();
}
}
Judging from the code of AudioPlayer::start, AudioPlayer does not seem to pass mFirstBuffer to the audio output.
(3) While opening the audio output, AudioPlayer registers a callback function with it; from then on, every time the callback is invoked, AudioPlayer reads decoded data from the audio decoder
size_t AudioPlayer::AudioSinkCallback(audioSink, buffer, size, ...)
{
return fillBuffer(buffer, size);
}
void AudioPlayer::AudioCallback(..., info)
{
buffer = info;
fillBuffer(buffer->raw, buffer->size);
}
size_t AudioPlayer::fillBuffer(data, size)
{
mSource->read(&mInputBuffer, ...);
memcpy(data, mInputBuffer->data(), ...);
}
The reading of decoded audio data is thus driven by the callback function; how the audio output drives that callback, however, is not yet evident from the code. On the other hand, the snippet above shows that once fillBuffer has copied the data (mInputBuffer) into data, the audio output presumably consumes data.
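Although the wiring is not visible in these excerpts, the usual arrangement is a pull model: the audio output side runs its own thread and invokes the registered callback whenever the hardware needs another block of PCM data. The skeleton below only illustrates that idea; every name in it is a hypothetical stand-in rather than Stagefright or AudioFlinger code:
#include <cstddef>

typedef size_t (*FillCallback)(void *buffer, size_t size, void *cookie);

struct AudioOutSketch
{
FillCallback callback; // would point at AudioPlayer::fillBuffer in our scenario
void *cookie;
bool playing;

void threadLoop(void *hwBuffer, size_t hwBufferSize)
{
while (playing)
{
// 1. block until the driver can accept more data (stubbed out here)
// 2. pull the next block of decoded PCM through the callback
size_t filled = callback(hwBuffer, hwBufferSize, cookie);
if (filled == 0)
{
playing = false; // end of stream
break;
}
// 3. write hwBuffer to the audio hardware (stubbed out here)
}
}
};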
As for the audio decoder's working flow, it is the same as the video decoder's; see part 3, the Video Buffer Transfer Flow, above.
6. Audio/Video Synchronization
Having covered the audio and video flows, we now look at the problem of audio/video synchronization. OpenCORE's approach is to set up a master clock that audio and video each use as the reference for their output. In Stagefright, by contrast, audio output is driven by the callback function, and video paces itself against the audio timestamps. The details follow:
(1) When the callback drives AudioPlayer to read decoded data, AudioPlayer obtains two timestamps, mPositionTimeMediaUs and mPositionTimeRealUs
size_t AudioPlayer::fillBuffer(data, size)
{
...
mSource->read(&mInputBuffer, ...);
mInputBuffer->meta_data()->findInt64(kKeyTime, &mPositionTimeMediaUs);
mPositionTimeRealUs = ((mNumFramesPlayed + size_done / mFrameSize) * 1000000) / mSampleRate;
...
}
mPositionTimeMediaUs is the timestamp carried inside the data itself; mPositionTimeRealUs is the actual time at which this data is played (derived from the frame count and the sample rate).
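A quick worked example: with mSampleRate = 44100 and mNumFramesPlayed + size_done / mFrameSize = 22050 frames, mPositionTimeRealUs = 22050 * 1000000 / 44100 = 500000 µs. In other words, half a second of audio has actually been pushed to the output, regardless of what media timestamp that audio carries.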
(2) Video in Stagefright then takes the difference between these two timestamps obtained from AudioPlayer as its playback reference
void AwesomePlayer::onVideoEvent()
{
...
mVideoSource->read(&mVideoBuffer, ...);
mVideoBuffer->meta_data()->findInt64(kKeyTime, &timeUs);
mAudioPlayer->getMediaTimeMapping(&realTimeUs, &mediaTimeUs);
mTimeSourceDeltaUs = realTimeUs - mediaTimeUs;
nowUs = ts->getRealTimeUs() - mTimeSourceDeltaUs;
latenessUs = nowUs - timeUs;
...
}
AwesomePlayer obtains realTimeUs (that is, mPositionTimeRealUs) and mediaTimeUs (that is, mPositionTimeMediaUs) from AudioPlayer and computes their difference, mTimeSourceDeltaUs.
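To make the computation concrete, suppose mediaTimeUs = 1000000 and realTimeUs = 1100000, so mTimeSourceDeltaUs = 100000. If the time source now reads getRealTimeUs() = 1300000 and the current frame carries timeUs = 1150000, then nowUs = 1300000 - 100000 = 1200000 and latenessUs = 1200000 - 1150000 = 50000: the frame is 50 ms late.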
(3) Finally, the video data is scheduled accordingly
void AwesomePlayer::onVideoEvent()
{
...
if (latenessUs > 40000)
{
mVideoBuffer->release();
mVideoBuffer = NULL;
postVideoEvent_l();
return;
}
if (latenessUs < -10000)
{
postVideoEvent_l(10000);
return;
}
mVideoRenderer->render(mVideoBuffer);
...
}
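In other words, a frame that is more than 40 ms behind the audio is dropped outright, a frame that is more than 10 ms early causes onVideoEvent to re-post itself 10 ms later and check again, and anything in between is rendered immediately.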