Overview
OMXCodec is the module inside StagefrightPlayer responsible for decoding.
Because it follows the OpenMAX IL interface specification, its structure is somewhat involved, so this article walks through it in the order AwesomePlayer calls into it.
The walkthrough covers four steps:
1 mClient->connect
2 initAudioDecoder & initVideoDecoder
3 The message-passing model
4 The decoding process
First, the class relationships (the class diagram is not reproduced here).
OMXCodec is exposed as a service. AwesomePlayer acts as the client: its mOmx (an IOMX handle) talks to OMX over Binder to get the decoding work done.
The analysis below follows the actual code.
1 mClient->connect
It is called from the AwesomePlayer constructor:
AwesomePlayer::AwesomePlayer()
{
    // ...
    CHECK_EQ(mClient.connect(), (status_t)OK);
    // ...
}
Here is the implementation:
status_t OMXClient::connect() {
    sp<IServiceManager> sm = defaultServiceManager();
    sp<IBinder> binder = sm->getService(String16("media.player"));
    sp<IMediaPlayerService> service = interface_cast<IMediaPlayerService>(binder);
    CHECK(service.get() != NULL);
    mOMX = service->getOMX();
    CHECK(mOMX.get() != NULL);
    if (!mOMX->livesLocally(NULL /* node */, getpid())) {
        ALOGI("Using client-side OMX mux.");
        mOMX = new MuxOMX(mOMX);
    }
    return OK;
}
All of this goes through Binder to MediaPlayerService; the service-side implementation is:
sp<IOMX> MediaPlayerService::getOMX() {
    Mutex::Autolock autoLock(mLock);
    if (mOMX.get() == NULL) {
        mOMX = new OMX;
    }
    return mOMX;
}
It constructs an OMX object (once) and returns it to the caller, where it is stored in mClient's IOMX member mOMX.
What does the OMX constructor do?
OMX::OMX()
    : mMaster(new OMXMaster),
      mNodeCounter(0) {
}
The constructor in turn runs the OMXMaster constructor:
OMXMaster::OMXMaster()
    : mVendorLibHandle(NULL) {
    addVendorPlugin();
    addPlugin(new SoftOMXPlugin);
}
OMXMaster can be regarded as the gateway to the decoders: makeComponentInstance creates a decoder instance, after which decoding can begin.
We will use the software codec plugin as the running example, i.e. the addPlugin(new SoftOMXPlugin) call.
First the SoftOMXPlugin constructor:
SoftOMXPlugin::SoftOMXPlugin() {
}
It is empty.
Next, the addPlugin code:
void OMXMaster::addPlugin(OMXPluginBase *plugin) {
    Mutex::Autolock autoLock(mLock);
    mPlugins.push_back(plugin);
    OMX_U32 index = 0;
    char name[128];
    OMX_ERRORTYPE err;
    while ((err = plugin->enumerateComponents(
                    name, sizeof(name), index++)) == OMX_ErrorNone) {
        String8 name8(name);
        if (mPluginByComponentName.indexOfKey(name8) >= 0) {
            ALOGE("A component of name '%s' already exists, ignoring this one.",
                 name8.string());
            continue;
        }
        mPluginByComponentName.add(name8, plugin);
    }
    if (err != OMX_ErrorNoMore) {
        ALOGE("OMX plugin failed w/ error 0x%08x after registering %d "
             "components", err, mPluginByComponentName.size());
    }
}
The plugin argument here is the SoftOMXPlugin instance constructed above.
The code stores every codec reported by enumerateComponents in the member mPluginByComponentName, of type KeyedVector<String8, OMXPluginBase *>.
Here is enumerateComponents:
static const struct {
    const char *mName;
    const char *mLibNameSuffix;
    const char *mRole;
} kComponents[] = {
    { "OMX.google.aac.decoder", "aacdec", "audio_decoder.aac" },
    { "OMX.google.aac.encoder", "aacenc", "audio_encoder.aac" },
    { "OMX.google.amrnb.decoder", "amrdec", "audio_decoder.amrnb" },
    { "OMX.google.amrnb.encoder", "amrnbenc", "audio_encoder.amrnb" },
    { "OMX.google.amrwb.decoder", "amrdec", "audio_decoder.amrwb" },
    { "OMX.google.amrwb.encoder", "amrwbenc", "audio_encoder.amrwb" },
    { "OMX.google.h264.decoder", "h264dec", "video_decoder.avc" },
    { "OMX.google.h264.encoder", "h264enc", "video_encoder.avc" },
    { "OMX.google.g711.alaw.decoder", "g711dec", "audio_decoder.g711alaw" },
    { "OMX.google.g711.mlaw.decoder", "g711dec", "audio_decoder.g711mlaw" },
    { "OMX.google.h263.decoder", "mpeg4dec", "video_decoder.h263" },
    { "OMX.google.h263.encoder", "mpeg4enc", "video_encoder.h263" },
    { "OMX.google.mpeg4.decoder", "mpeg4dec", "video_decoder.mpeg4" },
    { "OMX.google.mpeg4.encoder", "mpeg4enc", "video_encoder.mpeg4" },
    { "OMX.google.mp3.decoder", "mp3dec", "audio_decoder.mp3" },
    { "OMX.google.vorbis.decoder", "vorbisdec", "audio_decoder.vorbis" },
    { "OMX.google.vpx.decoder", "vpxdec", "video_decoder.vpx" },
    { "OMX.google.raw.decoder", "rawdec", "audio_decoder.raw" },
    { "OMX.google.flac.encoder", "flacenc", "audio_encoder.flac" },
};
OMX_ERRORTYPE SoftOMXPlugin::enumerateComponents(
        OMX_STRING name,
        size_t size,
        OMX_U32 index) {
    if (index >= kNumComponents) {
        return OMX_ErrorNoMore;
    }
    strcpy(name, kComponents[index].mName);
    return OMX_ErrorNone;
}
It merely copies the component name out; the names end up stored in the mPluginByComponentName table.
The actual decoder instance is created later through makeComponentInstance, which we examine in detail below.
That concludes mClient->connect().
Its main job is to have getOMX() construct an OMX instance on the MediaPlayerService side and hand it back to mClient's IOMX member mOMX.
Along the way the OMX constructor runs the OMXMaster constructor, so decoder instances can later be created through makeComponentInstance.
2 initAudioDecoder & initVideoDecoder
After the AwesomePlayer constructor returns and setDataSource has run, prepare is called; its implementation invokes initAudioDecoder and initVideoDecoder.
Since setDataSource already identified the matching codec, initAudioDecoder can now construct the actual decoder. Taking audio as the example:
status_t AwesomePlayer::initAudioDecoder() {
    mAudioSource = OMXCodec::Create(
            mClient.interface(), mAudioTrack->getFormat(),
            false, // createEncoder
            mAudioTrack);
    status_t err = mAudioSource->start();
    return mAudioSource != NULL ? OK : UNKNOWN_ERROR;
}
Only the essential operations are shown. Below we look at what OMXCodec::Create and mAudioSource->start do, in turn.
The code is long, so only the important lines are listed and the rest is omitted.
sp<MediaSource> OMXCodec::Create(...)
{
    findMatchingCodecs(
            mime, createEncoder, matchComponentName, flags, &matchingCodecs);
    sp<OMXCodecObserver> observer = new OMXCodecObserver;
    IOMX::node_id node = 0;
    status_t err = omx->allocateNode(componentName, observer, &node);
    sp<OMXCodec> codec = new OMXCodec(
            omx, node, quirks, flags,
            createEncoder, mime, componentName,
            source, nativeWindow);
    observer->setCodec(codec);
    err = codec->configureCodec(meta);
}
We now examine each step in turn.
2.1 findMatchingCodecs
First, the code:
void OMXCodec::findMatchingCodecs(
        const char *mime,
        bool createEncoder, const char *matchComponentName,
        uint32_t flags,
        Vector<CodecNameAndQuirks> *matchingCodecs) {
    matchingCodecs->clear();
    const MediaCodecList *list = MediaCodecList::getInstance();
    if (list == NULL) {
        return;
    }
    size_t index = 0;
    for (;;) {
        ssize_t matchIndex =
            list->findCodecByType(mime, createEncoder, index);
        if (matchIndex < 0) {
            break;
        }
        index = matchIndex + 1;
        const char *componentName = list->getCodecName(matchIndex);
        // If a specific codec is requested, skip the non-matching ones.
        if (matchComponentName && strcmp(componentName, matchComponentName)) {
            continue;
        }
        // When requesting software-only codecs, only push software codecs
        // When requesting hardware-only codecs, only push hardware codecs
        // When there is request neither for software-only nor for
        // hardware-only codecs, push all codecs
        if (((flags & kSoftwareCodecsOnly) && IsSoftwareCodec(componentName)) ||
            ((flags & kHardwareCodecsOnly) && !IsSoftwareCodec(componentName)) ||
            (!(flags & (kSoftwareCodecsOnly | kHardwareCodecsOnly)))) {
            ssize_t index = matchingCodecs->add();
            CodecNameAndQuirks *entry = &matchingCodecs->editItemAt(index);
            entry->mName = String8(componentName);
            entry->mQuirks = getComponentQuirks(list, matchIndex);
            ALOGV("matching '%s' quirks 0x%08x",
                  entry->mName.string(), entry->mQuirks);
        }
    }
    if (flags & kPreferSoftwareCodecs) {
        matchingCodecs->sort(CompareSoftwareCodecsFirst);
    }
}
We will not go into MediaCodecList here; interested readers can look it up. It essentially parses the supported codecs out of /etc/media_codecs.xml and matches against them.
For example, see the sketch below.
Note that the kComponents array we saw earlier defines the supported software codecs, and /etc/media_codecs.xml lists the corresponding entries; the names must match.
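The original illustration is missing here; on devices of that era a matching media_codecs.xml entry would look roughly like this (reconstructed from a typical file, so treat it as illustrative only):

<MediaCodecs>
    <Decoders>
        <MediaCodec name="OMX.google.mp3.decoder" type="audio/mpeg" />
    </Decoders>
</MediaCodecs>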
For each codec that qualifies, matchingCodecs->add() appends an entry and its fields are filled in, chiefly the name.
In the end all qualifying codecs sit in the matchingCodecs list.
2.2 allocateNode
The important statements here are:
sp<OMXCodecObserver> observer = new OMXCodecObserver;
IOMX::node_id node = 0;
status_t err = omx->allocateNode(componentName, observer, &node);
The observer's role is to carry messages back to the caller.
allocateNode is carried out by the OMX class on the service side (the intermediate Binder plumbing is omitted):
status_t OMX::allocateNode(
        const char *name, const sp<IOMXObserver> &observer, node_id *node) {
    Mutex::Autolock autoLock(mLock);
    *node = 0;
    OMXNodeInstance *instance = new OMXNodeInstance(this, observer);
    OMX_COMPONENTTYPE *handle;
    OMX_ERRORTYPE err = mMaster->makeComponentInstance(
            name, &OMXNodeInstance::kCallbacks,
            instance, &handle);
    if (err != OMX_ErrorNone) {
        ALOGV("FAILED to allocate omx component '%s'", name);
        instance->onGetHandleFailed();
        return UNKNOWN_ERROR;
    }
    *node = makeNodeID(instance);
    mDispatchers.add(*node, new CallbackDispatcher(instance));
    instance->setHandle(*node, handle);
    mLiveNodes.add(observer->asBinder(), instance);
    observer->asBinder()->linkToDeath(this);
    return OK;
}
First an OMXNodeInstance is constructed, wrapping the observer argument; it is the vehicle for message passing.
Then mMaster->makeComponentInstance creates the actual decoder instance,
a node_id is generated,
and the node_id plus the decoder handle are stored in the instance; the instance itself ends up in OMX's mLiveNodes list.
This is what lets OMXCodec talk to the decoder through the OMXNodeInstance; see the communication model below.
The messaging sequence comes later; here we focus on the decoder-side work.
The code above created the decoder instance via mMaster->makeComponentInstance. Let's use Android's stock MP3 decoder as the example.
As shown earlier, the MP3 decoder has a corresponding entry in /etc/media_codecs.xml,
and findMatchingCodecs is handed the string audio/mpeg to match against.
Once matched, the decoder's name is OMX.google.mp3.decoder,
and a lookup in the kComponents table then yields the actual decoder.
The MP3 decoder itself lives in frameworks/av/media/libstagefright/codecs/mp3dec/SoftMP3.cpp.
The method called is mMaster->makeComponentInstance; the code is:
OMX_ERRORTYPE OMXMaster::makeComponentInstance(
        const char *name,
        const OMX_CALLBACKTYPE *callbacks,
        OMX_PTR appData,
        OMX_COMPONENTTYPE **component) {
    Mutex::Autolock autoLock(mLock);
    *component = NULL;
    ssize_t index = mPluginByComponentName.indexOfKey(String8(name));
    if (index < 0) {
        return OMX_ErrorInvalidComponentName;
    }
    OMXPluginBase *plugin = mPluginByComponentName.valueAt(index);
    OMX_ERRORTYPE err =
        plugin->makeComponentInstance(name, callbacks, appData, component);
    if (err != OMX_ErrorNone) {
        return err;
    }
    mPluginByInstance.add(*component, plugin);
    return err;
}
It mostly delegates to the plugin's makeComponentInstance.
The plugin here is the one loaded via addPlugin(new SoftOMXPlugin) in the OMXMaster constructor, so the makeComponentInstance that runs is SoftOMXPlugin's.
Its implementation:
OMX_ERRORTYPE SoftOMXPlugin::makeComponentInstance(
        const char *name,
        const OMX_CALLBACKTYPE *callbacks,
        OMX_PTR appData,
        OMX_COMPONENTTYPE **component) {
    ALOGV("makeComponentInstance '%s'", name);
    for (size_t i = 0; i < kNumComponents; ++i) {
        if (strcmp(name, kComponents[i].mName)) {
            continue;
        }
        AString libName = "libstagefright_soft_";
        libName.append(kComponents[i].mLibNameSuffix);
        libName.append(".so");
        void *libHandle = dlopen(libName.c_str(), RTLD_NOW);
        if (libHandle == NULL) {
            ALOGE("unable to dlopen %s", libName.c_str());
            return OMX_ErrorComponentNotFound;
        }
        typedef SoftOMXComponent *(*CreateSoftOMXComponentFunc)(
                const char *, const OMX_CALLBACKTYPE *,
                OMX_PTR, OMX_COMPONENTTYPE **);
        CreateSoftOMXComponentFunc createSoftOMXComponent =
            (CreateSoftOMXComponentFunc)dlsym(
                    libHandle,
                    "_Z22createSoftOMXComponentPKcPK16OMX_CALLBACKTYPE"
                    "PvPP17OMX_COMPONENTTYPE");
        if (createSoftOMXComponent == NULL) {
            dlclose(libHandle);
            libHandle = NULL;
            return OMX_ErrorComponentNotFound;
        }
        sp<SoftOMXComponent> codec =
            (*createSoftOMXComponent)(name, callbacks, appData, component);
        if (codec == NULL) {
            dlclose(libHandle);
            libHandle = NULL;
            return OMX_ErrorInsufficientResources;
        }
        OMX_ERRORTYPE err = codec->initCheck();
        if (err != OMX_ErrorNone) {
            dlclose(libHandle);
            libHandle = NULL;
            return err;
        }
        codec->incStrong(this);
        codec->setLibHandle(libHandle);
        return OMX_ErrorNone;
    }
    return OMX_ErrorInvalidComponentName;
}
This walks kComponents looking for the matching decoder record:
{ "OMX.google.mp3.decoder", "mp3dec", "audio_decoder.mp3" },
Each codec ships as a shared library named libstagefright_soft_<mLibNameSuffix>.so, here libstagefright_soft_mp3dec.so.
After dlopen loads it, dlsym resolves createSoftOMXComponent (via the mangled name above) and calls it; every software codec must implement this function.
Here is the MP3 implementation:
android::SoftOMXComponent *createSoftOMXComponent(
        const char *name, const OMX_CALLBACKTYPE *callbacks,
        OMX_PTR appData, OMX_COMPONENTTYPE **component) {
    return new android::SoftMP3(name, callbacks, appData, component);
}
It simply constructs a SoftMP3 object and returns it. Note that the decoder handle is not returned directly; instead the constructor chain stores the linkage in the supplied OMX_COMPONENTTYPE **component argument.
The constructor:
SoftMP3::SoftMP3(
        const char *name,
        const OMX_CALLBACKTYPE *callbacks,
        OMX_PTR appData,
        OMX_COMPONENTTYPE **component)
    : SimpleSoftOMXComponent(name, callbacks, appData, component),
      mConfig(new tPVMP3DecoderExternal),
      mDecoderBuf(NULL),
      mAnchorTimeUs(0),
      mNumFramesOutput(0),
      mNumChannels(2),
      mSamplingRate(44100),
      mSignalledError(false),
      mOutputPortSettingsChange(NONE) {
    initPorts();
    initDecoder();
}
This is basic initialization. SoftMP3 inherits from SimpleSoftOMXComponent, so that constructor runs as well:
SimpleSoftOMXComponent::SimpleSoftOMXComponent(
        const char *name,
        const OMX_CALLBACKTYPE *callbacks,
        OMX_PTR appData,
        OMX_COMPONENTTYPE **component)
    : SoftOMXComponent(name, callbacks, appData, component),
      mLooper(new ALooper),
      mHandler(new AHandlerReflector<SimpleSoftOMXComponent>(this)),
      mState(OMX_StateLoaded),
      mTargetState(OMX_StateLoaded) {
    mLooper->setName(name);
    mLooper->registerHandler(mHandler);
    mLooper->start(
            false, // runOnCallingThread
            false, // canCallJava
            ANDROID_PRIORITY_FOREGROUND);
}
This sets up an ALooper to process messages and invokes the parent SoftOMXComponent constructor.
How ALooper works will be covered in a dedicated article later.
SoftOMXComponent::SoftOMXComponent(
        const char *name,
        const OMX_CALLBACKTYPE *callbacks,
        OMX_PTR appData,
        OMX_COMPONENTTYPE **component)
    : mName(name),
      mCallbacks(callbacks),
      mComponent(new OMX_COMPONENTTYPE),
      mLibHandle(NULL) {
    mComponent->nSize = sizeof(*mComponent);
    mComponent->pComponentPrivate = this;
    mComponent->SetParameter = SetParameterWrapper;
    // ...
    mComponent->UseEGLImage = NULL;
    mComponent->ComponentRoleEnum = NULL;
    *component = mComponent;
}
This is where the component is actually constructed and its function table filled in, for example:
mComponent->SetParameter = SetParameterWrapper;
OMX_ERRORTYPE SoftOMXComponent::SetParameterWrapper(
        OMX_HANDLETYPE component,
        OMX_INDEXTYPE index,
        OMX_PTR params) {
    SoftOMXComponent *me =
        (SoftOMXComponent *)
            ((OMX_COMPONENTTYPE *)component)->pComponentPrivate;
    return me->setParameter(index, params);
}
So mComponent's SetParameter entry is SoftOMXComponent::SetParameterWrapper, and since the constructor set
mComponent->pComponentPrivate = this;
what actually runs is this->setParameter, i.e. the subclass's override.
(This is important; make sure you understand it thoroughly. A minimal sketch follows below.)
From this we can see that Android already implements the decoder's message plumbing in the two base classes, SoftOMXComponent and SimpleSoftOMXComponent;
a concrete decoder only needs to inherit from SimpleSoftOMXComponent and implement the corresponding message handling.
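To make that dispatch pattern concrete, here is a minimal, self-contained sketch (illustrative names only; Handle and Component stand in for OMX_COMPONENTTYPE and SoftOMXComponent, they are not the real framework types): a C-style handle carries a function pointer plus an opaque private pointer, and a static trampoline recovers the C++ object and forwards to the virtual method, so the subclass override runs.

#include <cstdio>

// Stand-in for OMX_COMPONENTTYPE: a C-style handle with a function table.
struct Handle {
    void *pComponentPrivate;   // points back at the owning C++ object
    int (*SetParameter)(Handle *h, int index, void *params);
};

struct Component {
    Handle handle;
    Component() {
        handle.pComponentPrivate = this;  // same trick as SoftOMXComponent
        handle.SetParameter = &Component::SetParameterWrapper;
    }
    // Static trampoline: recover "this" from the handle, call the virtual.
    static int SetParameterWrapper(Handle *h, int index, void *params) {
        Component *me = static_cast<Component *>(h->pComponentPrivate);
        return me->setParameter(index, params);
    }
    virtual int setParameter(int index, void *params) {
        printf("base setParameter(%d)\n", index);
        return 0;
    }
    virtual ~Component() {}
};

struct Mp3Component : Component {
    // The subclass override is what actually runs through the wrapper.
    int setParameter(int index, void *params) override {
        printf("Mp3Component setParameter(%d)\n", index);
        return 0;
    }
};

int main() {
    Mp3Component c;
    Handle *h = &c.handle;
    h->SetParameter(h, 42, nullptr);  // prints the subclass message
    return 0;
}

Calling through the plain C function pointer still lands in the subclass, which is exactly how an OMX_SetParameter on the C handle ends up in SoftMP3.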
The SoftMP3 constructor also contains:
initPorts();   // configures the decoder's input and output ports
initDecoder(); // sets up the actual decoder
Note that from SoftMP3's code we can now drive the real decoder (init, decode, and so on); SoftMP3 can be viewed as a wrapper around the actual decoder.
The exact call sequence is covered in the message-handling section later.
That concludes allocateNode: its main work is establishing the link to the decoder (observer, node_id), and locating and initializing the actual decoder.
2.3 The OMXCodec constructor
The next statement executed is:
sp<OMXCodec> codec = new OMXCodec(
        omx, node, quirks, flags,
        createEncoder, mime, componentName,
        source, nativeWindow);
Its implementation:
OMXCodec::OMXCodec(
        const sp<IOMX> &omx, IOMX::node_id node,
        uint32_t quirks, uint32_t flags,
        bool isEncoder,
        const char *mime,
        const char *componentName,
        const sp<MediaSource> &source,
        const sp<ANativeWindow> &nativeWindow)
    : mOMX(omx),
      mOMXLivesLocally(omx->livesLocally(node, getpid())),
      mNode(node),
      mQuirks(quirks),
      mFlags(flags),
      mIsEncoder(isEncoder),
      mIsVideo(!strncasecmp("video/", mime, 6)),
      mMIME(strdup(mime)),
      mComponentName(strdup(componentName)),
      mSource(source),
      mCodecSpecificDataIndex(0),
      mState(LOADED),
      mInitialBufferSubmit(true),
      mSignalledEOS(false),
      mNoMoreOutputData(false),
      mOutputPortSettingsHaveChanged(false),
      mSeekTimeUs(-1),
      mSeekMode(ReadOptions::SEEK_CLOSEST_SYNC),
      mTargetTimeUs(-1),
      mOutputPortSettingsChangedPending(false),
      mSkipCutBuffer(NULL),
      mLeftOverBuffer(NULL),
      mPaused(false),
      mNativeWindow(
              (!strncmp(componentName, "OMX.google.", 11)
                  || !strcmp(componentName, "OMX.Nvidia.mpeg2v.decode"))
                  ? NULL : nativeWindow) {
    mPortStatus[kPortIndexInput] = ENABLED;
    mPortStatus[kPortIndexOutput] = ENABLED;
    setComponentRole();
}
This stores everything prepared so far inside the OMXCodec instance; from here on AwesomePlayer operates directly on the OMXCodec (mAudioSource).
setComponentRole() sets the component's role (encoder or decoder); we return to it when introducing message passing.
Also note that OMXCodec inherits from both MediaSource and MediaBufferObserver,
which is what allows it to serve as the data source for the output module.
2.4 Configuring the codec
observer->setCodec(codec);
err = codec->configureCodec(meta);
These two calls are best revisited after the message mechanism below has been introduced; readers can analyze them on their own.
That covers the Create side.
Next, the mAudioSource->start side:
status_t OMXCodec::start(MetaData *meta) {
    // ...
    mSource->start(params.get());
    return init();
}
Only the important code is shown. mSource->start starts the decoder's data source, i.e. the MediaSource obtained from the extractor via getTrack; it is straightforward, so we skip it.
Here is init, with irrelevant code omitted:
status_t OMXCodec::init() {
    // ...
    err = allocateBuffers();
    return mState == ERROR ? UNKNOWN_ERROR : OK;
}
Its main job is allocating memory through allocateBuffers:
status_t OMXCodec::allocateBuffers() {
    status_t err = allocateBuffersOnPort(kPortIndexInput);
    if (err != OK) {
        return err;
    }
    return allocateBuffersOnPort(kPortIndexOutput);
}
Input and output buffers are allocated separately.
Let's take allocateBuffersOnPort in segments:
status_t OMXCodec::allocateBuffersOnPort(OMX_U32 portIndex) {
    OMX_PARAM_PORTDEFINITIONTYPE def;
    InitOMXParams(&def);
    def.nPortIndex = portIndex;
    err = mOMX->getParameter(
            mNode, OMX_IndexParamPortDefinition, &def, sizeof(def));
    size_t totalSize = def.nBufferCountActual * def.nBufferSize;
    mDealer[portIndex] = new MemoryDealer(totalSize, "OMXCodec");
The function starts by issuing the OMX_IndexParamPortDefinition command to query the decoder's buffer configuration,
then constructs a MemoryDealer sized from the buffer count and buffer size.
How this command travels is described in the message model section below; come back to this part after reading it.
Taking MP3 as the example, the SoftMP3 constructor calls initPorts to initialize an OMX_PARAM_PORTDEFINITIONTYPE per port,
which pins down the buffer layout: how many buffers there are and the capacity of each.
OMX_IndexParamPortDefinition simply queries that information, which tells us how much memory to allocate; see the sketch below.
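For reference, here is an abridged sketch of SoftMP3::initPorts, paraphrased from the AOSP sources (exact fields and values may differ between versions):

void SoftMP3::initPorts() {
    OMX_PARAM_PORTDEFINITIONTYPE def;
    InitOMXParams(&def);

    // Input port: kNumBuffers (4) buffers of 8192 bytes of compressed MP3 each.
    def.nPortIndex = 0;
    def.eDir = OMX_DirInput;
    def.nBufferCountMin = kNumBuffers;
    def.nBufferCountActual = def.nBufferCountMin;
    def.nBufferSize = 8192;
    def.format.audio.cMIMEType = const_cast<char *>(MEDIA_MIMETYPE_AUDIO_MPEG);
    def.format.audio.eEncoding = OMX_AUDIO_CodingMP3;
    addPort(def);

    // Output port: kNumBuffers buffers of kOutputBufferSize bytes of PCM.
    def.nPortIndex = 1;
    def.eDir = OMX_DirOutput;
    def.nBufferSize = kOutputBufferSize;
    def.format.audio.eEncoding = OMX_AUDIO_CodingPCM;
    addPort(def);
}

With these values, the input-port query above returns nBufferCountActual = 4 and nBufferSize = 8192, so the input MemoryDealer is created with 4 * 8192 = 32768 bytes. The allocation loop follows: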
    for (OMX_U32 i = 0; i < def.nBufferCountActual; ++i) {
        sp<IMemory> mem = mDealer[portIndex]->allocate(def.nBufferSize);
        CHECK(mem.get() != NULL);
        BufferInfo info;
        info.mData = NULL;
        info.mSize = def.nBufferSize;
        err = mOMX->useBuffer(mNode, portIndex, mem, &buffer);
        if (mem != NULL) {
            info.mData = mem->pointer();
        }
        info.mBuffer = buffer;
        info.mStatus = OWNED_BY_US;
        info.mMem = mem;
        info.mMediaBuffer = NULL;
        mPortBuffers[portIndex].push(info);
        CODEC_LOGV("allocated buffer %p on %s port", buffer,
                   portIndex == kPortIndexInput ? "input" : "output");
    }
The loop above (secure-buffer and other irrelevant code omitted)
allocates the memory and creates a BufferInfo for each chunk; they all end up in the vector mPortBuffers[portIndex].
With that, initAudioDecoder is done. It accomplished two things: it created the actual decoder and it allocated the buffers.
3 The message-passing model
Once the link to the decoder is in place, the remaining work is passing messages for the decoder to process and handing the results back to the caller.
The sections above left this model vague, so this section lays it out explicitly,
using setComponentRole() from the OMXCodec constructor as the example.
The code:
void OMXCodec::setComponentRole() {
    setComponentRole(mOMX, mNode, mIsEncoder, mMIME);
}

// static
void OMXCodec::setComponentRole(
        const sp<IOMX> &omx, IOMX::node_id node, bool isEncoder,
        const char *mime) {
    struct MimeToRole {
        const char *mime;
        const char *decoderRole;
        const char *encoderRole;
    };
    static const MimeToRole kMimeToRole[] = {
        { MEDIA_MIMETYPE_AUDIO_MPEG,
          "audio_decoder.mp3", "audio_encoder.mp3" },
        { MEDIA_MIMETYPE_AUDIO_MPEG_LAYER_I,
          "audio_decoder.mp1", "audio_encoder.mp1" },
        { MEDIA_MIMETYPE_AUDIO_MPEG_LAYER_II,
          "audio_decoder.mp2", "audio_encoder.mp2" },
        { MEDIA_MIMETYPE_AUDIO_AMR_NB,
          "audio_decoder.amrnb", "audio_encoder.amrnb" },
        { MEDIA_MIMETYPE_AUDIO_AMR_WB,
          "audio_decoder.amrwb", "audio_encoder.amrwb" },
        { MEDIA_MIMETYPE_AUDIO_AAC,
          "audio_decoder.aac", "audio_encoder.aac" },
        { MEDIA_MIMETYPE_AUDIO_VORBIS,
          "audio_decoder.vorbis", "audio_encoder.vorbis" },
        { MEDIA_MIMETYPE_AUDIO_G711_MLAW,
          "audio_decoder.g711mlaw", "audio_encoder.g711mlaw" },
        { MEDIA_MIMETYPE_AUDIO_G711_ALAW,
          "audio_decoder.g711alaw", "audio_encoder.g711alaw" },
        { MEDIA_MIMETYPE_VIDEO_AVC,
          "video_decoder.avc", "video_encoder.avc" },
        { MEDIA_MIMETYPE_VIDEO_MPEG4,
          "video_decoder.mpeg4", "video_encoder.mpeg4" },
        { MEDIA_MIMETYPE_VIDEO_H263,
          "video_decoder.h263", "video_encoder.h263" },
        { MEDIA_MIMETYPE_VIDEO_VPX,
          "video_decoder.vpx", "video_encoder.vpx" },
        { MEDIA_MIMETYPE_AUDIO_RAW,
          "audio_decoder.raw", "audio_encoder.raw" },
        { MEDIA_MIMETYPE_AUDIO_FLAC,
          "audio_decoder.flac", "audio_encoder.flac" },
    };
    static const size_t kNumMimeToRole =
        sizeof(kMimeToRole) / sizeof(kMimeToRole[0]);
    size_t i;
    for (i = 0; i < kNumMimeToRole; ++i) {
        if (!strcasecmp(mime, kMimeToRole[i].mime)) {
            break;
        }
    }
    if (i == kNumMimeToRole) {
        return;
    }
    const char *role =
        isEncoder ? kMimeToRole[i].encoderRole
                  : kMimeToRole[i].decoderRole;
    if (role != NULL) {
        OMX_PARAM_COMPONENTROLETYPE roleParams;
        InitOMXParams(&roleParams);
        strncpy((char *)roleParams.cRole,
                role, OMX_MAX_STRINGNAME_SIZE - 1);
        roleParams.cRole[OMX_MAX_STRINGNAME_SIZE - 1] = '\0';
        status_t err = omx->setParameter(
                node, OMX_IndexParamStandardComponentRole,
                &roleParams, sizeof(roleParams));
        if (err != OK) {
            ALOGW("Failed to set standard component role '%s'.", role);
        }
    }
}
The key steps executed here are:
InitOMXParams(&roleParams);
status_t err = omx->setParameter(
        node, OMX_IndexParamStandardComponentRole,
        &roleParams, sizeof(roleParams));
InitOMXParams initializes the roleParams variable; a sketch of it follows.
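For completeness, InitOMXParams in OMXCodec.cpp is roughly this template helper (paraphrased; the version fields follow the OMX IL headers):

template<class T>
static void InitOMXParams(T *params) {
    params->nSize = sizeof(T);            // every OMX struct carries its size
    params->nVersion.s.nVersionMajor = 1; // and the IL spec version
    params->nVersion.s.nVersionMinor = 0;
    params->nVersion.s.nRevision = 0;
    params->nVersion.s.nStep = 0;
}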
The real work is done by setParameter, which carries the arguments along: OMX_IndexParamStandardComponentRole is the command, roleParams holds the parameters, and node is the bridge to the service.
Concretely it invokes the service-side OMX method (see part 1, mClient->connect, if this is unfamiliar):
status_t OMX::setParameter(
        node_id node, OMX_INDEXTYPE index,
        const void *params, size_t size) {
    return findInstance(node)->setParameter(
            index, params, size);
}
First the node_id is resolved to its OMXNodeInstance (the instance that wraps the observer).
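findInstance is essentially a keyed lookup; from memory it looks roughly like this (paraphrased, not verbatim):

OMXNodeInstance *OMX::findInstance(node_id node) {
    Mutex::Autolock autoLock(mLock);
    // node_id -> OMXNodeInstance table filled in by allocateNode/makeNodeID.
    ssize_t index = mNodeIDToInstance.indexOfKey(node);
    return index < 0 ? NULL : mNodeIDToInstance.valueAt(index);
}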
Continuing into the instance:
status_t OMXNodeInstance::setParameter(
        OMX_INDEXTYPE index, const void *params, size_t size) {
    Mutex::Autolock autoLock(mLock);
    OMX_ERRORTYPE err = OMX_SetParameter(
            mHandle, index, const_cast<void *>(params));
    return StatusFromOMXError(err);
}
Here is OMX_SetParameter:
#define OMX_SetParameter(                           \
        hComponent,                                 \
        nParamIndex,                                \
        pComponentParameterStructure)               \
    ((OMX_COMPONENTTYPE*)hComponent)->SetParameter( \
        hComponent,                                 \
        nParamIndex,                                \
        pComponentParameterStructure)               /* Macro End */
So the work is done through the OMX_COMPONENTTYPE object, i.e. the one initialized by SoftMP3's base-class constructor.
SetParameter has no concrete implementation in SoftOMXComponent; it lives in SimpleSoftOMXComponent:
OMX_ERRORTYPE SimpleSoftOMXComponent::setParameter(
        OMX_INDEXTYPE index, const OMX_PTR params) {
    Mutex::Autolock autoLock(mLock);
    CHECK(isSetParameterAllowed(index, params));
    return internalSetParameter(index, params);
}
OMX_ERRORTYPE SoftMP3::internalSetParameter(
        OMX_INDEXTYPE index, const OMX_PTR params) {
    switch (index) {
        case OMX_IndexParamStandardComponentRole:
        {
            const OMX_PARAM_COMPONENTROLETYPE *roleParams =
                (const OMX_PARAM_COMPONENTROLETYPE *)params;
            if (strncmp((const char *)roleParams->cRole,
                        "audio_decoder.mp3",
                        OMX_MAX_STRINGNAME_SIZE - 1)) {
                return OMX_ErrorUndefined;
            }
            return OMX_ErrorNone;
        }
        default:
            return SimpleSoftOMXComponent::internalSetParameter(index, params);
    }
}
Note that the internalSetParameter invoked is SoftMP3's override shown above, not SimpleSoftOMXComponent's.
The command passed in is OMX_IndexParamStandardComponentRole; after handling it, OMX_ErrorNone is returned.
So, starting from the OMXCodec object and going through the OMXNodeInstance, we obtain the OMX_COMPONENTTYPE handle, and with it the ability to communicate with the actual decoder.
4 The decoding process
Now let's see how OMXCodec is driven to decode a frame of data.
With the OMXCodec instance in place, AwesomePlayer's AudioPlayer, inside fillBuffer,
reads PCM data through mSource->read(&mInputBuffer, &options),
where mSource is mAudioSource.
So let's look at the read function.
The code is in OMXCodec.cpp; we take it in segments.
status_t OMXCodec::read(
        MediaBuffer **buffer, const ReadOptions *options) {
    status_t err = OK;
    *buffer = NULL;
    Mutex::Autolock autoLock(mLock);
    if (mState != EXECUTING && mState != RECONFIGURING) {
        return UNKNOWN_ERROR;
    }
Before we get here, the parameter setup done earlier has gone through several rounds of commands and callbacks that drive the component's state to EXECUTING; roughly, the walk is LOADED -> IDLE -> EXECUTING, sketched below (simplified, not the verbatim framework code):
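// LOADED -> IDLE; the port buffers are allocated while this
// transition is pending (see allocateBuffers above).
mOMX->sendCommand(mNode, OMX_CommandStateSet, OMX_StateIdle);
// IDLE -> EXECUTING; from this point read() can submit and drain buffers.
mOMX->sendCommand(mNode, OMX_CommandStateSet, OMX_StateExecuting);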
Also note that mInitialBufferSubmit defaults to true:
    if (mInitialBufferSubmit) {
        mInitialBufferSubmit = false;
        drainInputBuffers();
        fillOutputBuffers();
    }
drainInputBuffers can be thought of as reading packets of data from the extractor,
and fillOutputBuffers as decoding a packet and depositing the result in an output buffer.
Ignoring the seek-handling code, read() finishes as follows:
    size_t index = *mFilledBuffers.begin();
    mFilledBuffers.erase(mFilledBuffers.begin());
    BufferInfo *info = &mPortBuffers[kPortIndexOutput].editItemAt(index);
    CHECK_EQ((int)info->mStatus, (int)OWNED_BY_US);
    info->mStatus = OWNED_BY_CLIENT;
    info->mMediaBuffer->add_ref();
    if (mSkipCutBuffer != NULL) {
        mSkipCutBuffer->submit(info->mMediaBuffer);
    }
    *buffer = info->mMediaBuffer;
    return OK;
}
Here a BufferInfo is taken from the output list and its MediaBuffer is handed back through the caller's buffer argument. Whenever the decoder finishes a buffer of output, that buffer is pushed onto mFilledBuffers; so every time AudioPlayer reads from OMXCodec, it takes from mFilledBuffers. The only difference between calls is that when mFilledBuffers is empty, read() waits for the decoder to decode and fill data, whereas data already present is taken immediately.
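The wait that was elided above looks roughly like this, paraphrased from OMXCodec::read (a sketch; details vary across versions):

    while (mState != ERROR && !mNoMoreOutputData && mFilledBuffers.empty()) {
        if ((err = waitForBufferFilled_l()) != OK) {
            return err;
        }
    }

This is also why a stalled decoder shows up to the player as read() blocking.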
AudioPlayer's start/fill path then uses the returned MediaBuffer. Note the assignment made above:
info->mStatus = OWNED_BY_CLIENT;
It marks the BufferInfo as owned by the client, which will hand it back on release. One more aside: by flipping mStatus, a single block of memory can be put under the control of different modules; the possible owners are:
enum BufferStatus {
    OWNED_BY_US,
    OWNED_BY_COMPONENT,
    OWNED_BY_NATIVE_WINDOW,
    OWNED_BY_CLIENT,
};
Clearly component refers to the decoder, and client to the outside consumer such as AudioPlayer.
info->mMediaBuffer->add_ref() takes an extra reference, presumably balanced when the buffer is released.
Next we look in detail at how data is read from the extractor and how it is decoded.
4.1 The drainInputBuffers() implementation
void OMXCodec::drainInputBuffers() {
    // Declaration elided in the original excerpt:
    Vector<BufferInfo> *buffers = &mPortBuffers[kPortIndexInput];
    for (size_t i = 0; i < buffers->size(); ++i) {
        BufferInfo *info = &buffers->editItemAt(i);
        if (info->mStatus != OWNED_BY_US) {
            continue;
        }
        if (!drainInputBuffer(info)) {
            break;
        }
        if (mFlags & kOnlySubmitOneInputBufferAtOneTime) {
            break;
        }
    }
}
A note on this: several input buffers may have been allocated, hence the loop. Each buffer is first checked for ownership (OWNED_BY_US). After a buffer has been filled, the code checks
kOnlySubmitOneInputBufferAtOneTime, meaning only one packet may be submitted per call; otherwise the loop fills every buffer.
Now into drainInputBuffer(info), with irrelevant code omitted:
bool OMXCodec::drainInputBuffer(BufferInfo *info) {
    // ...
    status_t err;
    bool signalEOS = false;
    int64_t timestampUs = 0;
    size_t offset = 0;
    int32_t n = 0;
    for (;;) {
        MediaBuffer *srcBuffer;
        err = mSource->read(&srcBuffer);
        size_t remainingBytes = info->mSize - offset;
        // Does the data read from the extractor exceed the remaining space?
        if (srcBuffer->range_length() > remainingBytes) {
            if (offset == 0) {
                srcBuffer->release();
                srcBuffer = NULL;
                setState(ERROR);
                return false;
            }
            mLeftOverBuffer = srcBuffer;
            break;
        }
        memcpy((uint8_t *)info->mData + offset,
               (const uint8_t *)srcBuffer->data()
                   + srcBuffer->range_offset(),
               srcBuffer->range_length());
        offset += srcBuffer->range_length();
        if (releaseBuffer) {
            srcBuffer->release();
            srcBuffer = NULL;
        }
        // Once its data has been copied, the srcBuffer is released.
    }
    err = mOMX->emptyBuffer(
            mNode, info->mBuffer, 0, offset,
            flags, timestampUs);
    info->mStatus = OWNED_BY_COMPONENT;
}
Note that reading the input involves an actual copy rather than sharing one buffer between extractor and decoder.
The reading itself was covered in the earlier extractor article and is straightforward, so we skip it.
Now, what does mOMX->emptyBuffer do once the data is in place?
From part 1 it is easy to see that the actual call chain is
OMX::emptyBuffer -> OMXNodeInstance::emptyBuffer,
and the code shows that what ultimately runs is
((OMX_COMPONENTTYPE*)hComponent)->EmptyThisBuffer()
which for our software codecs is implemented in SimpleSoftOMXComponent.cpp:
OMX_ERRORTYPE SimpleSoftOMXComponent::emptyThisBuffer(
        OMX_BUFFERHEADERTYPE *buffer) {
    sp<AMessage> msg = new AMessage(kWhatEmptyThisBuffer, mHandler->id());
    msg->setPointer("header", buffer);
    msg->post();
    return OMX_ErrorNone;
}
So it simply posts a kWhatEmptyThisBuffer message.
Because the message carries mHandler->id(), the component that posted it is also the one that receives it. The handler:
void SimpleSoftOMXComponent::onMessageReceived(const sp<AMessage> &msg) {
    Mutex::Autolock autoLock(mLock);
    uint32_t msgType = msg->what();
    ALOGV("msgType = %d", msgType);
    switch (msgType) {
        // ...
        case kWhatEmptyThisBuffer:
        case kWhatFillThisBuffer:
        {
            OMX_BUFFERHEADERTYPE *header;
            CHECK(msg->findPointer("header", (void **)&header));
            CHECK(mState == OMX_StateExecuting && mTargetState == mState);
            bool found = false;
            size_t portIndex = (kWhatEmptyThisBuffer == msgType)
                ? header->nInputPortIndex : header->nOutputPortIndex;
            PortInfo *port = &mPorts.editItemAt(portIndex);
            for (size_t j = 0; j < port->mBuffers.size(); ++j) {
                BufferInfo *buffer = &port->mBuffers.editItemAt(j);
                if (buffer->mHeader == header) {
                    CHECK(!buffer->mOwnedByUs);
                    buffer->mOwnedByUs = true;
                    CHECK((msgType == kWhatEmptyThisBuffer
                            && port->mDef.eDir == OMX_DirInput)
                        || (port->mDef.eDir == OMX_DirOutput));
                    port->mQueue.push_back(buffer);
                    onQueueFilled(portIndex);
                    found = true;
                    break;
                }
            }
            CHECK(found);
            break;
        }
        default:
            TRESPASS();
            break;
    }
}
Both cases run the same code path and both funnel into onQueueFilled, which is the real per-codec workhorse; for MP3 it is implemented in SoftMP3.
The explanations are inlined as comments in the code below.
void SoftMP3::onQueueFilled(OMX_U32 portIndex) {
    if (mSignalledError || mOutputPortSettingsChange != NONE) {
        return;
    }
    // Grab the input and output queues.
    List<BufferInfo *> &inQueue = getPortQueue(0);
    List<BufferInfo *> &outQueue = getPortQueue(1);
    while (!inQueue.empty() && !outQueue.empty()) {
        // Take the first buffer from each queue.
        BufferInfo *inInfo = *inQueue.begin();
        OMX_BUFFERHEADERTYPE *inHeader = inInfo->mHeader;
        BufferInfo *outInfo = *outQueue.begin();
        OMX_BUFFERHEADERTYPE *outHeader = outInfo->mHeader;
        // Check for end of stream: if the first buffer carries EOS,
        // there is no more input.
        if (inHeader->nFlags & OMX_BUFFERFLAG_EOS) {
            inQueue.erase(inQueue.begin());
            inInfo->mOwnedByUs = false;
            notifyEmptyBufferDone(inHeader);
            if (!mIsFirst) {
                // pad the end of the stream with 529 samples, since that many samples
                // were trimmed off the beginning when decoding started
                outHeader->nFilledLen =
                    kPVMP3DecoderDelay * mNumChannels * sizeof(int16_t);
                memset(outHeader->pBuffer, 0, outHeader->nFilledLen);
            } else {
                // Since we never discarded frames from the start, we won't have
                // to add any padding at the end either.
                outHeader->nFilledLen = 0;
            }
            outHeader->nFlags = OMX_BUFFERFLAG_EOS;
            outQueue.erase(outQueue.begin());
            outInfo->mOwnedByUs = false;
            notifyFillBufferDone(outHeader);
            return;
        }
        // nOffset == 0 means the start of a packet, so pick up its timestamp
        // (see the extractor article for where it comes from).
        if (inHeader->nOffset == 0) {
            mAnchorTimeUs = inHeader->nTimeStamp;
            mNumFramesOutput = 0;
        }
        mConfig->pInputBuffer =
            inHeader->pBuffer + inHeader->nOffset;
        mConfig->inputBufferCurrentLength = inHeader->nFilledLen;
        mConfig->inputBufferMaxLength = 0;
        mConfig->inputBufferUsedLength = 0;
        mConfig->outputFrameSize = kOutputBufferSize / sizeof(int16_t);
        mConfig->pOutputBuffer =
            reinterpret_cast<int16_t *>(outHeader->pBuffer);
        ERROR_CODE decoderErr;
        // Parameters configured; now invoke the actual decoder.
        if ((decoderErr = pvmp3_framedecoder(mConfig, mDecoderBuf))
                != NO_DECODING_ERROR) {
            // ... error handling ...
            // On a decode failure, output zeros, i.e. a silent frame,
            // and play silence instead.
            memset(outHeader->pBuffer,
                   0,
                   mConfig->outputFrameSize * sizeof(int16_t));
            mConfig->inputBufferUsedLength = inHeader->nFilledLen;
        } else if (mConfig->samplingRate != mSamplingRate
                || mConfig->num_channels != mNumChannels) {
            // The stream parameters (sample rate, channel count) changed,
            // so the output port must be reconfigured.
            mSamplingRate = mConfig->samplingRate;
            mNumChannels = mConfig->num_channels;
            notify(OMX_EventPortSettingsChanged, 1, 0, NULL);
            mOutputPortSettingsChange = AWAITING_DISABLED;
            return;
        }
        if (mIsFirst) {
            mIsFirst = false;
            // The decoder delay is 529 samples, so trim that many samples off
            // the start of the first output buffer. This essentially makes this
            // decoder have zero delay, which the rest of the pipeline assumes.
            outHeader->nOffset =
                kPVMP3DecoderDelay * mNumChannels * sizeof(int16_t);
            outHeader->nFilledLen =
                mConfig->outputFrameSize * sizeof(int16_t) - outHeader->nOffset;
        } else {
            outHeader->nOffset = 0;
            outHeader->nFilledLen = mConfig->outputFrameSize * sizeof(int16_t);
        }
        outHeader->nTimeStamp =
            mAnchorTimeUs
                + (mNumFramesOutput * 1000000ll) / mConfig->samplingRate;
        outHeader->nFlags = 0;
        CHECK_GE(inHeader->nFilledLen, mConfig->inputBufferUsedLength);
        inHeader->nOffset += mConfig->inputBufferUsedLength;
        inHeader->nFilledLen -= mConfig->inputBufferUsedLength;
        mNumFramesOutput += mConfig->outputFrameSize / mNumChannels;
        // If the input buffer has been fully consumed, report it back
        // via notifyEmptyBufferDone.
        if (inHeader->nFilledLen == 0) {
            inInfo->mOwnedByUs = false;
            inQueue.erase(inQueue.begin());
            inInfo = NULL;
            notifyEmptyBufferDone(inHeader);
            inHeader = NULL;
        }
        outInfo->mOwnedByUs = false;
        outQueue.erase(outQueue.begin());
        outInfo = NULL;
        // Hand the decoded data to the outside world via notifyFillBufferDone.
        notifyFillBufferDone(outHeader);
        outHeader = NULL;
    }
}
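A quick worked example for the timestamp arithmetic above (numbers mine, not from the original): an MPEG-1 Layer III frame decodes to 1152 samples per channel, so mNumFramesOutput advances by 1152 per iteration, and at 44100 Hz nTimeStamp advances by 1152 * 1000000 / 44100, roughly 26122 us, exactly one frame's duration.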
Next, how the input buffer gets released and how the data in the output buffer is passed along.
A. Draining the input side
void SoftOMXComponent::notifyEmptyBufferDone(OMX_BUFFERHEADERTYPE *header) {
    (*mCallbacks->EmptyBufferDone)(
            mComponent, mComponent->pApplicationPrivate, header);
}
This tells the outside world that emptyThisBuffer has completed. The callback that runs is in OMXNodeInstance (recall that OMXNodeInstance::kCallbacks was handed to makeComponentInstance; tracing exactly how it gets wired up is left to the reader):
OMX_ERRORTYPE OMXNodeInstance::OnEmptyBufferDone(
        OMX_IN OMX_HANDLETYPE hComponent,
        OMX_IN OMX_PTR pAppData,
        OMX_IN OMX_BUFFERHEADERTYPE* pBuffer) {
    OMXNodeInstance *instance = static_cast<OMXNodeInstance *>(pAppData);
    if (instance->mDying) {
        return OMX_ErrorNone;
    }
    return instance->owner()->OnEmptyBufferDone(instance->nodeID(), pBuffer);
}
The owner of the OMXNodeInstance is OMX, so the code continues as:
OMX_ERRORTYPE OMX::OnEmptyBufferDone(
        node_id node, OMX_IN OMX_BUFFERHEADERTYPE *pBuffer) {
    ALOGV("OnEmptyBufferDone buffer=%p", pBuffer);
    omx_message msg;
    msg.type = omx_message::EMPTY_BUFFER_DONE;
    msg.node = node;
    msg.u.buffer_data.buffer = pBuffer;
    findDispatcher(node)->post(msg);
    return OMX_ErrorNone;
}
where findDispatcher is defined as:
sp<OMX::CallbackDispatcher> OMX::findDispatcher(node_id node) {
    Mutex::Autolock autoLock(mLock);
    ssize_t index = mDispatchers.indexOfKey(node);
    return index < 0 ? NULL : mDispatchers.valueAt(index);
}
The dispatcher was created back in allocateNode via mDispatchers.add(*node, new CallbackDispatcher(instance)).
Looking at its implementation, CallbackDispatcher::post enqueues the message for a worker thread that eventually calls dispatch; a sketch follows.
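Roughly, the post/loop pair looks like this (paraphrased from OMX.cpp; the real code also handles shutdown and message batching):

void OMX::CallbackDispatcher::post(const omx_message &msg) {
    Mutex::Autolock autoLock(mLock);
    mQueue.push_back(msg);
    mQueueChanged.signal();  // wake the dispatcher thread
}

bool OMX::CallbackDispatcher::loop() {
    for (;;) {
        omx_message msg;
        {
            Mutex::Autolock autoLock(mLock);
            while (!mDone && mQueue.empty()) {
                mQueueChanged.wait(mLock);
            }
            if (mDone) {
                break;
            }
            msg = *mQueue.begin();
            mQueue.erase(mQueue.begin());
        }
        dispatch(msg);  // deliver outside the lock
    }
    return false;
}

And dispatch itself: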
void OMX::CallbackDispatcher::dispatch(const omx_message &msg) {
    if (mOwner == NULL) {
        ALOGV("Would have dispatched a message to a node that's already gone.");
        return;
    }
    mOwner->onMessage(msg);
}
The dispatcher's owner is the OMXNodeInstance, so after this detour the message lands back in OMXNodeInstance::onMessage:
void OMXNodeInstance::onMessage(const omx_message &msg) {
    if (msg.type == omx_message::FILL_BUFFER_DONE) {
        OMX_BUFFERHEADERTYPE *buffer =
            static_cast<OMX_BUFFERHEADERTYPE *>(
                    msg.u.extended_buffer_data.buffer);
        BufferMeta *buffer_meta =
            static_cast<BufferMeta *>(buffer->pAppPrivate);
        buffer_meta->CopyFromOMX(buffer);
    }
    mObserver->onMessage(msg);
}
onMessage in turn forwards the message to mObserver, the OMXCodecObserver constructed in OMXCodec::Create, whose onMessage is:
virtual void onMessage(const omx_message &msg) {
    sp<OMXCodec> codec = mTarget.promote();
    if (codec.get() != NULL) {
        Mutex::Autolock autoLock(codec->mLock);
        codec->on_message(msg);
        codec.clear();
    }
}
So it finally arrives back inside OMXCodec:
void OMXCodec::on_message(const omx_message &msg) {
    switch (msg.type) {
        // ...
        case omx_message::EMPTY_BUFFER_DONE:
        {
            IOMX::buffer_id buffer = msg.u.extended_buffer_data.buffer;
            Vector<BufferInfo> *buffers = &mPortBuffers[kPortIndexInput];
            size_t i = 0;
            while (i < buffers->size() && (*buffers)[i].mBuffer != buffer) {
                ++i;
            }
            BufferInfo* info = &buffers->editItemAt(i);
            info->mStatus = OWNED_BY_US;
            // Buffer could not be released until empty buffer done is called.
            if (info->mMediaBuffer != NULL) {
                info->mMediaBuffer->release();
                info->mMediaBuffer = NULL;
            }
            drainInputBuffer(&buffers->editItemAt(i));
            break;
        }
        // ...
    }
}
Two things happen here: the buffer's MediaBuffer is released, and then drainInputBuffer(&buffers->editItemAt(i)) is called to fill it again.
In other words, once playback has started, this is the loop that keeps reading and decoding data; the output side follows in fillOutputBuffers below.
B. Draining the output side: notifyFillBufferDone(outHeader)
void SoftOMXComponent::notifyFillBufferDone(OMX_BUFFERHEADERTYPE *header) {
    (*mCallbacks->FillBufferDone)(
            mComponent, mComponent->pApplicationPrivate, header);
}
OMX_ERRORTYPE OMX::OnFillBufferDone(
        node_id node, OMX_IN OMX_BUFFERHEADERTYPE *pBuffer) {
    ALOGV("OnFillBufferDone buffer=%p", pBuffer);
    omx_message msg;
    msg.type = omx_message::FILL_BUFFER_DONE;
    msg.node = node;
    msg.u.extended_buffer_data.buffer = pBuffer;
    msg.u.extended_buffer_data.range_offset = pBuffer->nOffset;
    msg.u.extended_buffer_data.range_length = pBuffer->nFilledLen;
    msg.u.extended_buffer_data.flags = pBuffer->nFlags;
    msg.u.extended_buffer_data.timestamp = pBuffer->nTimeStamp;
    msg.u.extended_buffer_data.platform_private = pBuffer->pPlatformPrivate;
    msg.u.extended_buffer_data.data_ptr = pBuffer->pBuffer;
    findDispatcher(node)->post(msg);
    return OMX_ErrorNone;
}
The final handling is back in OMXCodec.cpp:
void OMXCodec::on_message(const omx_message &msg) {
    switch (msg.type) {
        // ...
        case omx_message::FILL_BUFFER_DONE:
        {
            // ... locate the matching output BufferInfo, as in the
            // EMPTY_BUFFER_DONE case above ...
            info->mStatus = OWNED_BY_US;
            mFilledBuffers.push_back(i);
            mBufferFilled.signal();
            break;
        }
        // ...
    }
}
mFilledBuffers.push_back(i) together with mBufferFilled.signal() is exactly what wakes the wait loop in OMXCodec::read shown earlier.
4.2 fillOutputBuffers
void OMXCodec::fillOutputBuffers() {
    CHECK_EQ((int)mState, (int)EXECUTING);
    Vector<BufferInfo> *buffers = &mPortBuffers[kPortIndexOutput];
    for (size_t i = 0; i < buffers->size(); ++i) {
        BufferInfo *info = &buffers->editItemAt(i);
        if (info->mStatus == OWNED_BY_US) {
            fillOutputBuffer(&buffers->editItemAt(i));
        }
    }
}
This finds every output BufferInfo we currently own and kicks off output on it:
void OMXCodec::fillOutputBuffer(BufferInfo *info) {
    // ...
    status_t err = mOMX->fillBuffer(mNode, info->mBuffer);
    info->mStatus = OWNED_BY_COMPONENT;
}
The rest parallels the input path; step by step:
status_t OMXNodeInstance::fillBuffer(OMX::buffer_id buffer) {
    Mutex::Autolock autoLock(mLock);
    OMX_BUFFERHEADERTYPE *header = (OMX_BUFFERHEADERTYPE *)buffer;
    header->nFilledLen = 0;
    header->nOffset = 0;
    header->nFlags = 0;
    OMX_ERRORTYPE err = OMX_FillThisBuffer(mHandle, header);
    return StatusFromOMXError(err);
}
After resetting the header fields, the call enters the component, i.e. SimpleSoftOMXComponent (SoftMP3's base class):
OMX_ERRORTYPE SimpleSoftOMXComponent::fillThisBuffer(
        OMX_BUFFERHEADERTYPE *buffer) {
    sp<AMessage> msg = new AMessage(kWhatFillThisBuffer, mHandler->id());
    msg->setPointer("header", buffer);
    msg->post();
    return OMX_ErrorNone;
}
Likewise, the receiving end is in the same file: kWhatFillThisBuffer lands in the very same onMessageReceived switch shown in section 4.1, which marks the buffer as owned by the component, pushes it onto the output port's queue, and calls onQueueFilled(portIndex).
From there the loop keeps turning, with
notifyEmptyBufferDone(inHeader);
notifyFillBufferDone(outHeader);
driving playback forward.
[The End]
Reposting is welcome; please credit the source!