The Android Stagefright framework

On Android, the default multimedia framework is OpenCORE. OpenCORE's strength is that it takes care of cross-platform portability and has been verified in many deployments, so it is relatively stable; its drawback is that it is large and complex and takes considerable time to maintain. Starting from Android 2.0, Google introduced Stagefright, whose architecture is somewhat simpler, and it has been gradually replacing OpenCORE (Note 1).

[Figure 1] Stagefright's position in the Android multimedia architecture.

[Figure 2] The modules Stagefright covers (Note 2).

Let's first look at how Stagefright plays a video file.

Stagefright exists in Android as a shared library (libstagefright.so); its AwesomePlayer module can be used to play video/audio (Note 3). AwesomePlayer provides many APIs for upper-layer applications (Java/JNI) to call; we will use a simple program to explain the video playback flow.

In Java, to play a video file we would write:

MediaPlayer mp = new MediaPlayer();
mp.setDataSource(PATH_TO_FILE); ...... (1)
mp.prepare(); ........................ (2), (3)
mp.start(); .......................... (4)

In Stagefright, the corresponding handling looks like this:

(1) Assign the file's absolute path to mUri
status_t AwesomePlayer::setDataSource(const char* uri, ...)
{
  return setDataSource_l(uri, ...);
}

status_t AwesomePlayer::setDataSource_l(const char* uri, ...)
{
  mUri = uri;
}

(2) Start mQueue, which acts as the event handler

status_t AwesomePlayer::prepare()
{
  return prepare_l();
}

status_t AwesomePlayer::prepare_l()
{
  prepareAsync_l();

  while (mFlags & PREPARING)
  {
    mPreparedCondition.wait(mLock);
  }
}

status_t AwesomePlayer::prepareAsync_l()
{
  mQueue.start();

  mFlags |= PREPARING;
  mAsyncPrepareEvent = new AwesomeEvent(
                             this,
                             &AwesomePlayer::onPrepareAsyncEvent);
  mQueue.postEvent(mAsyncPrepareEvent);
}

(3) onPrepareAsyncEvent is triggered

 
void AwesomePlayer::onPrepareAsyncEvent()
{
  finishSetDataSource_l();

  initVideoDecoder(); ...... (3.3)
  initAudioDecoder();
}

status_t AwesomePlayer::finishSetDataSource_l()
{
  dataSource = DataSource::CreateFromURI(mUri.string(), ...);
  sp<MediaExtractor> extractor =
                     MediaExtractor::Create(dataSource); ..... (3.1)

  return setDataSource_l(extractor); ......................... (3.2)
}

(3.1) Parse the file specified by mUri and choose the matching extractor according to its header

 
sp<MediaExtractor> MediaExtractor::Create(const sp<DataSource> &source, ...)
{
  source->sniff(&tmp, ...);
  mime = tmp.string();

  if (!strcasecmp(mime, MEDIA_MIMETYPE_CONTAINER_MPEG4))
  {
    return new MPEG4Extractor(source);
  }
  else if (!strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_MPEG))
  {
    return new MP3Extractor(source);
  }
  else if (!strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_AMR_NB))
  {
    return new AMRExtractor(source);
  }
}

(3.2) Use the extractor to split the file into A/V tracks (mVideoTrack/mAudioTrack)

 
status_t AwesomePlayer::setDataSource_l(const sp<MediaExtractor> &extractor)
{
  for (size_t i = 0; i < extractor->countTracks(); ++i)
  {
    sp<MetaData> meta = extractor->getTrackMetaData(i);

    CHECK(meta->findCString(kKeyMIMEType, &mime));

    if (!haveVideo && !strncasecmp(mime, "video/", 6))
    {
      setVideoSource(extractor->getTrack(i));
      haveVideo = true;
    }
    else if (!haveAudio && !strncasecmp(mime, "audio/", 6))
    {
      setAudioSource(extractor->getTrack(i));
      haveAudio = true;
    }
  }
}

void AwesomePlayer::setVideoSource(sp<MediaSource> source)
{
  mVideoTrack = source;
}

(3.3) Choose the video decoder (mVideoSource) according to the codec type recorded in mVideoTrack

 
status_t AwesomePlayer::initVideoDecoder()
{
  mVideoSource = OMXCodec::Create(mClient.interface(),
                                  mVideoTrack->getFormat(),
                                  false,
                                  mVideoTrack);
}

(4)
status_t AwesomePlayer::play()
{
  return play_l();
}

status_t AwesomePlayer::play_l()
{
  postVideoEvent_l();
}

void AwesomePlayer::postVideoEvent_l(int64_t delayUs)
{
  mQueue.postEventWithDelay(mVideoEvent, delayUs);
}

void AwesomePlayer::onVideoEvent()
{
  mVideoSource->read(&mVideoBuffer, &options);
  [Check Timestamp]
  mVideoRenderer->render(mVideoBuffer);

  postVideoEvent_l();
}


(Note 1) Starting from Android 2.3 (Gingerbread), the default multimedia framework is Stagefright.
(Note 2) Stagefright's architecture is still evolving; this series of articles does not cover all of its modules.
(Note 3) Audio playback is handled by AudioPlayer; see "Stagefright (6) - Audio Playback flow".

 
In step (4), mVideoEvent is posted into mQueue; decoding and playback then begin, and the output is handed to mVideoRenderer to draw.


 
The Stagefright framework (2) - Working with OpenMAX
 
 
 
Stagefright's codec functionality is built on the OpenMAX framework, and it actually uses OpenCORE's OMX implementation. Let's look at how Stagefright and OMX work together.

(1) OMX_Init


OMXClient mClient;

AwesomePlayer::AwesomePlayer()
{
  mClient.connect();
}

status_t OMXClient::connect()
{
  mOMX = service->getOMX();
}

sp<IOMX> MediaPlayerService::getOMX()
{
  mOMX = new OMX;
}

OMX::OMX() : mMaster(new OMXMaster) {}

OMXMaster::OMXMaster()
{
  addPlugin(new OMXPVCodecsPlugin);
}

OMXPVCodecsPlugin::OMXPVCodecsPlugin()
{
  OMX_MasterInit();
}

OMX_ERRORTYPE OMX_MasterInit() <-- under OpenCORE
{
  return OMX_Init();
}

(2) OMX_SendCommand


OMXCodec::function_name()
{
  mOMX->sendCommand(mNode, OMX_CommandStateSet, OMX_StateIdle);
}
status_t OMX::sendCommand(node, cmd, param)
{
  return findInstance(node)->sendCommand(cmd, param);
}
status_t OMXNodeInstance::sendCommand(cmd, param)
{
  OMX_SendCommand(mHandle, cmd, param, NULL);
}

 
(3) Other commands on the OMX component

Other commands that act on the OMX component follow the same call path as OMX_SendCommand; see the table below:
OMXCodec       OMX            OMXNodeInstance
-------------  -------------  -------------------
useBuffer      useBuffer      OMX_UseBuffer
getParameter   getParameter   OMX_GetParameter
fillBuffer     fillBuffer     OMX_FillThisBuffer
emptyBuffer    emptyBuffer    OMX_EmptyThisBuffer

 (4) Callback Functions


OMX_CALLBACKTYPE OMXNodeInstance::kCallbacks =
{
  &OnEvent, <--------------- omx_message::EVENT
  &OnEmptyBufferDone, <----- omx_message::EMPTY_BUFFER_DONE
  &OnFillBufferDone <------- omx_message::FILL_BUFFER_DONE
};

 

 
The Stagefright framework (3) - Choosing the Video Decoder
 
In "Stagefright (1) - Video Playback flow" we did not detail how Stagefright chooses a suitable video decoder for the video file's type; let's take a look at that now.

(1) The video decoder is decided in initVideoDecoder, which is called from onPrepareAsyncEvent

OMXCodec::Create() returns the video decoder as mVideoSource.

status_t AwesomePlayer::initVideoDecoder()
{
  mVideoSource = OMXCodec::Create(mClient.interface(),
                                  mVideoTrack->getFormat(),
                                  false,
                                  mVideoTrack);
}

sp<MediaSource> OMXCodec::Create(&omx, &meta, createEncoder, &source, matchComponentName)
{
  meta->findCString(kKeyMIMEType, &mime);

  findMatchingCodecs(mime, ..., &matchingCodecs); ........ (2)

  for (size_t i = 0; i < matchingCodecs.size(); ++i)
  {
    componentName = matchingCodecs[i].string();

    softwareCodec =
        InstantiateSoftwareCodec(componentName, ...); ..... (3)

    if (softwareCodec != NULL) return softwareCodec;
        
    err = omx->allocateNode(componentName, ..., &node); ... (4)

    if (err == OK)
    {
      codec = new OMXCodec(..., componentName, ...); ...... (5)
      return codec;
    }
  }
}

(2) Based on mVideoTrack's MIME type, pick the suitable components out of kDecoderInfo



void OMXCodec::findMatchingCodecs(mime, ..., matchingCodecs)
{
  for (int index = 0;; ++index)
  {
    componentName = GetCodec(
                       kDecoderInfo,
                       sizeof(kDecoderInfo)/sizeof(kDecoderInfo[0]),
                       mime,
                       index);

    matchingCodecs->push(String8(componentName));
  }
}

static const CodecInfo kDecoderInfo[] =
{
  ...
  { MEDIA_MIMETYPE_VIDEO_MPEG4, "OMX.qcom.video.decoder.mpeg4" },
  { MEDIA_MIMETYPE_VIDEO_MPEG4, "OMX.TI.Video.Decoder" },
  { MEDIA_MIMETYPE_VIDEO_MPEG4, "M4vH263Decoder" },
  ...
};

GetCodec picks out every component name in kDecoderInfo that matches the given MIME type and stores them in matchingCodecs.

(3) Following the order of the components in matchingCodecs, we first check whether each one is a software decoder

static sp<MediaSource> InstantiateSoftwareCodec(name, ...)
{
  FactoryInfo kFactoryInfo[] =
  {
    ...
    FACTORY_REF(M4vH263Decoder)
    ...
  };

  for (i = 0; i < sizeof(kFactoryInfo)/sizeof(kFactoryInfo[0]); ++i)
  {
    if (!strcmp(name, kFactoryInfo[i].name))
      return (*kFactoryInfo[i].CreateFunc)(source);
  }
}

All software decoders are listed in kFactoryInfo; the name passed in is used to find the matching decoder.

(4) If the component is not a software decoder, try to allocate the corresponding OMX component


status_t OMX::allocateNode(name, ..., node)
{
  mMaster->makeComponentInstance(
                           name,
                           &OMXNodeInstance::kCallbacks,
                           instance,
                           handle);
}

OMX_ERRORTYPE OMXMaster::makeComponentInstance(name, ...)
{
  plugin->makeComponentInstance(name, ...);
}

OMX_ERRORTYPE OMXPVCodecsPlugin::makeComponentInstance(name, ...)
{
  return OMX_MasterGetHandle(..., name, ...);
}

OMX_ERRORTYPE OMX_MasterGetHandle(...)
{
  return OMX_GetHandle(...);
}

(5) If the component is an OMX decoder, return it; otherwise move on and check the next component


 
 
 
The Stagefright framework (4) - Video Buffer flow
 
 
 
This article describes how Stagefright passes buffers back and forth with the OMX video decoder.

(1) At the very beginning, OMXCodec's read function sends all the undecoded data to the decoder and asks it to send the decoded data back


status_t OMXCodec::read(...)
{
  if (mInitialBufferSubmit)
  {
    mInitialBufferSubmit = false;

    drainInputBuffers(); <----- OMX_EmptyThisBuffer
    fillOutputBuffers(); <----- OMX_FillThisBuffer
  }

  ...
}

void OMXCodec::drainInputBuffers()
{
  Vector<BufferInfo> *buffers = &mPortBuffers[kPortIndexInput];

  for (i = 0; i < buffers->size(); ++i)
  {
    drainInputBuffer(&buffers->editItemAt(i));
  }
}

void OMXCodec::drainInputBuffer(BufferInfo *info)
{
  mOMX->emptyBuffer(...);
}

void OMXCodec::fillOutputBuffers()
{
  Vector<BufferInfo> *buffers = &mPortBuffers[kPortIndexOutput];

  for (i = 0; i < buffers->size(); ++i)
  {
    fillOutputBuffer(&buffers->editItemAt(i));
  }
}

void OMXCodec::fillOutputBuffer(BufferInfo *info)
{
  mOMX->fillBuffer(...);
}

(2) The decoder reads the data from its input port, starts decoding, and calls back EmptyBufferDone to notify OMXCodec


void OMXCodec::on_message(const omx_message &msg)
{
  switch (msg.type)
  {
    case omx_message::EMPTY_BUFFER_DONE:
    {
      IOMX::buffer_id buffer = msg.u.extended_buffer_data.buffer;
      drainInputBuffer(&buffers->editItemAt(i));
    }
  }
}

After receiving EMPTY_BUFFER_DONE, OMXCodec sends the next chunk of undecoded data to the decoder.

(3) The decoder delivers the decoded data to its output port and calls back FillBufferDone to notify OMXCodec


void OMXCodec::on_message(const omx_message &msg)
{
  switch (msg.type)
  {
    case omx_message::FILL_BUFFER_DONE:
    {
      IOMX::buffer_id buffer = msg.u.extended_buffer_data.buffer;
      fillOutputBuffer(info);

      mFilledBuffers.push_back(i);
      mBufferFilled.signal();
    }
  }
}

After receiving FILL_BUFFER_DONE, OMXCodec puts the decoded data into mFilledBuffers, signals mBufferFilled, and asks the decoder to keep sending data.

(4) Later on, read waits for the mBufferFilled signal. Once data has been pushed into mFilledBuffers, read assigns it to the buffer pointer and returns it to AwesomePlayer


status_t OMXCodec::read(MediaBuffer **buffer, ...)
{
  ...

  while (mFilledBuffers.empty())
  {
    mBufferFilled.wait(mLock);
  }

  BufferInfo *info = &mPortBuffers[kPortIndexOutput].editItemAt(index);
  info->mMediaBuffer->add_ref();
  *buffer = info->mMediaBuffer;
}

 
 


The Stagefright framework (5) - Video Rendering
 
 
 
Besides fetching decoded data through OMXCodec::read, AwesomePlayer::onVideoEvent must also hand that data (mVideoBuffer) to the video renderer so it can be drawn on the screen.

(1) Before the data in mVideoBuffer can be drawn, mVideoRenderer must first be created

void AwesomePlayer::onVideoEvent()
{
  ...

  if (mVideoRenderer == NULL)
  {
    initRenderer_l();
  }

  ...
}

void AwesomePlayer::initRenderer_l()
{
  if (!strncmp("OMX.", component, 4))
  {
    mVideoRenderer = new AwesomeRemoteRenderer(
                           mClient.interface()->createRenderer(
                                                  mISurface,
                                                  component,
                                                  ...)); .......... (2)
  }
  else
  {
    mVideoRenderer = new AwesomeLocalRenderer(
                           ...,
                           component,
                           mISurface); ............................ (3)
  }
}

(2) If the video decoder is an OMX component, an AwesomeRemoteRenderer is created as mVideoRenderer

�纳隙蔚某淌酱a(1)�砜矗�AwesomeRemoteRenderer的本�|是由OMX::createRenderer所��建的。createRenderer��先建立一��hardware renderer -- SharedVideoRenderer (libstagefrighthw.so);若失�。��t建立software renderer -- SoftwareRenderer (surface)。


sp<IOMXRenderer> OMX::createRenderer(...)
{
  VideoRenderer *impl = NULL;

  libHandle = dlopen("libstagefrighthw.so", RTLD_NOW);

  if (libHandle)
  {
    CreateRendererFunc func = dlsym(libHandle, ...);

    impl = (*func)(...); <----------------- Hardware Renderer
  }

  if (!impl)
  {
    impl = new SoftwareRenderer(...); <---- Software Renderer
  }
}

(3) If the video decoder is a software component, an AwesomeLocalRenderer is created as mVideoRenderer

AwesomeLocalRenderer's constructor calls its own init function, which does exactly the same thing as OMX::createRenderer.


void AwesomeLocalRenderer::init(...)
{
  mLibHandle = dlopen("libstagefrighthw.so", RTLD_NOW);

  if (mLibHandle)
  {
    CreateRendererFunc func = dlsym(...);

    mTarget = (*func)(...); <---------------- Hardware Renderer
  }

  if (mTarget == NULL)
  {
    mTarget = new SoftwareRenderer(...); <--- Software Renderer
  }
}

 

(4) Once mVideoRenderer has been created, the decoded data can be handed to it


void AwesomePlayer::onVideoEvent()
{
  if (!mVideoBuffer)
  {
    mVideoSource->read(&mVideoBuffer, ...);
  }

  [Check Timestamp]

  if (mVideoRenderer == NULL)
  {
    initRenderer_l();
  }

  mVideoRenderer->render(mVideoBuffer); <----- Render Data
}

 

 

 
The Stagefright framework (6) - Audio Playback flow
 
So far we have focused only on the video side and said nothing at all about audio. This article walks through the audio handling flow.

The audio part of Stagefright is handled by AudioPlayer, which is created in AwesomePlayer::play_l.


(1) When the upper-layer application requests audio/video playback, AudioPlayer is created at the same time and started

status_t AwesomePlayer::play_l()
{
  ...

  mAudioPlayer = new AudioPlayer(mAudioSink, ...);
  mAudioPlayer->start(...);

  ...
}

(2) During startup, AudioPlayer first reads the first chunk of decoded data and opens the audio output


status_t AudioPlayer::start(...)
{
  mSource->read(&mFirstBuffer);

  if (mAudioSink.get() != NULL)
  {
    mAudioSink->open(..., &AudioPlayer::AudioSinkCallback, ...);
    mAudioSink->start();
  }
  else
  {
    mAudioTrack = new AudioTrack(..., &AudioPlayer::AudioCallback, ...);
    mAudioTrack->start();
  }
}

Looking at the code of AudioPlayer::start, AudioPlayer does not appear to pass mFirstBuffer to the audio output.

(3) While opening the audio output, AudioPlayer registers a callback function with it; every time the callback is invoked, AudioPlayer reads decoded data from the audio decoder

size_t AudioPlayer::AudioSinkCallback(audioSink, buffer, size, ...)
{
  return fillBuffer(buffer, size);
}
void AudioPlayer::AudioCallback(..., info)
{
  buffer = info;
  fillBuffer(buffer->raw, buffer->size);
}
size_t AudioPlayer::fillBuffer(data, size)
{
  mSource->read(&mInputBuffer, ...);
  memcpy(data, mInputBuffer->data(), ...);
}

The reading of decoded audio data is thus driven by the callback function; how the callback itself is driven by the audio output is not apparent from this code. On the other hand, the snippets above show that once fillBuffer has copied the data (mInputBuffer) into data, the audio output will come and consume data.


(4) The audio decoder's workflow is the same as the video decoder's; see "Stagefright (4) - Video Buffer flow".

 
 
The Stagefright framework (7) - Audio/Video synchronization
 
Having covered the audio and video processing flows, we now look at the problem of audio/video synchronization. OpenCORE's approach is to set up a master clock against which audio and video each pace their output. In Stagefright, audio output is driven by the callback function, and video synchronizes itself to audio's timestamp. The details follow:

(1) When the callback drives AudioPlayer to read decoded data, AudioPlayer obtains two timestamps -- mPositionTimeMediaUs and mPositionTimeRealUs


size_t AudioPlayer::fillBuffer(data, size)
{
  ...

  mSource->read(&mInputBuffer, ...);

  mInputBuffer->meta_data()->findInt64(kKeyTime, &mPositionTimeMediaUs);
  mPositionTimeRealUs = ((mNumFramesPlayed + size_done / mFrameSize) * 1000000) / mSampleRate;

  ...
}

mPositionTimeMediaUs is the timestamp carried inside the data itself; mPositionTimeRealUs is the actual playback time of that data (derived from the frame count and the sample rate).

(2) Video in Stagefright then uses the difference between these two timestamps obtained from AudioPlayer as its playback reference


void AwesomePlayer::onVideoEvent()
{
  ...

  mVideoSource->read(&mVideoBuffer, ...);
  mVideoBuffer->meta_data()->findInt64(kKeyTime, &timeUs);

  mAudioPlayer->getMediaTimeMapping(&realTimeUs, &mediaTimeUs);
  mTimeSourceDeltaUs = realTimeUs - mediaTimeUs;

  nowUs = ts->getRealTimeUs() - mTimeSourceDeltaUs;
  latenessUs = nowUs - timeUs;

  ...
}

AwesomePlayer obtains realTimeUs (i.e. mPositionTimeRealUs) and mediaTimeUs (i.e. mPositionTimeMediaUs) from AudioPlayer and computes their difference, mTimeSourceDeltaUs.

(3) Finally, the video data is scheduled against that reference

void AwesomePlayer::onVideoEvent()
{
  ...
  if (latenessUs > 40000)
  {
    mVideoBuffer->release();
    mVideoBuffer = NULL;

    postVideoEvent_l();
    return;
  }
  if (latenessUs < -10000)
  {
    postVideoEvent_l(10000);
    return;
  }

  mVideoRenderer->render(mVideoBuffer);

  ...
}

 
