Starting with Android 2.3, the Android MediaPlayer has used the Stagefright framework. This article is based on Android 4.0.1.
The StagefrightPlayer creation function is shown below (in MediaPlayerService.cpp; for the detailed call path see the article "Android Audio 数据流详解"):
static sp<MediaPlayerBase> createPlayer(player_type playerType, void* cookie,
        notify_callback_f notifyFunc) {
    sp<MediaPlayerBase> p;
    switch (playerType) {
        case SONIVOX_PLAYER:
            LOGV(" create MidiFile");
            p = new MidiFile();
            break;
        case STAGEFRIGHT_PLAYER:
            LOGV(" create StagefrightPlayer");
            p = new StagefrightPlayer;
            break;
        case NU_PLAYER:
            LOGV(" create NuPlayer");
            p = new NuPlayerDriver;
            break;
        case TEST_PLAYER:
            LOGV("Create Test Player stub");
            p = new TestPlayerStub();
            break;
        default:
            LOGE("Unknown player type: %d", playerType);
            return NULL;
    }
    if (p != NULL) {
        if (p->initCheck() == NO_ERROR) {
            p->setNotifyCallback(cookie, notifyFunc);
        } else {
            p.clear();
        }
    }
    if (p == NULL) {
        LOGE("Failed to create player object");
    }
    return p;
}
First, let's walk through the framework's own call flow. StagefrightPlayer is implemented on top of AwesomePlayer. The first stage is demuxing: for container formats the system does not already support there is no extractor, so you have to supply one yourself. See the code for details (this article uses local file playback as the example).
1. Demuxer implementation
In AwesomePlayer.cpp, setDataSource_l():

status_t AwesomePlayer::setDataSource_l(const sp<DataSource> &dataSource) {
    sp<MediaExtractor> extractor = MediaExtractor::Create(dataSource);
    ...
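For reference, here is a trimmed, simplified sketch of MediaExtractor::Create based on the 4.0-era MediaExtractor.cpp; only a couple of the extractor branches are shown:

sp<MediaExtractor> MediaExtractor::Create(
        const sp<DataSource> &source, const char *mime) {
    sp<AMessage> meta;

    String8 tmp;
    if (mime == NULL) {
        float confidence;
        // Ask the registered sniffers to identify the container format.
        if (!source->sniff(&tmp, &confidence, &meta)) {
            LOGV("FAILED to autodetect media content.");
            return NULL;
        }
        mime = tmp.string();
    }

    // Instantiate the extractor that matches the detected MIME type.
    if (!strcasecmp(mime, MEDIA_MIMETYPE_CONTAINER_MPEG4)) {
        return new MPEG4Extractor(source);
    } else if (!strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_MPEG)) {
        return new MP3Extractor(source, meta);
    }
    // ... remaining container types elided ...
    return NULL;
}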
This MediaExtractor::Create call is where the demuxer gets created. As the code in MediaExtractor.cpp shows, the data source's sniff() call is what auto-detects the container type. A media format the stock system cannot recognize will never get a valid extractor, so you have to create your own demuxer. Each format's XXXXExtractor.cpp provides a SniffXXXX function, and the DataSource::RegisterDefaultSniffers() call in AwesomePlayer's constructor is what registers all of these sniffers. So the steps are: write your own XXXExtractor class, give it a matching SniffXXXX function, add that function in RegisterDefaultSniffers(), and then implement the demuxer you need. Once the demuxer has split out the audio and video streams, playback moves on to the decoding stage.
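As a rough illustration of that pattern, here is a minimal sketch of the sniffer side of a hypothetical MyExtractor. SniffMyFormat, the MIME string and the magic bytes are made-up names for the example, not anything in AOSP:

// MyExtractor.cpp (hypothetical), following the shape of the stock
// XXXXExtractor sniffers.
#include <string.h>

#include <media/stagefright/DataSource.h>
#include <media/stagefright/foundation/AMessage.h>
#include <utils/String8.h>

namespace android {

// Made-up MIME type for the new container format.
static const char *MEDIA_MIMETYPE_CONTAINER_MYFORMAT = "video/x-myformat";

// Sniffer: read the first few bytes and report a MIME type plus a confidence.
bool SniffMyFormat(
        const sp<DataSource> &source, String8 *mimeType,
        float *confidence, sp<AMessage> *) {
    uint8_t header[4];
    if (source->readAt(0, header, sizeof(header)) < (ssize_t)sizeof(header)) {
        return false;
    }
    if (memcmp(header, "MYFM", sizeof(header))) {  // made-up magic bytes
        return false;
    }

    *mimeType = MEDIA_MIMETYPE_CONTAINER_MYFORMAT;
    *confidence = 0.6f;
    return true;
}

}  // namespace android

Registering it then amounts to adding RegisterSniffer(SniffMyFormat) inside DataSource::RegisterDefaultSniffers() and having MediaExtractor::Create return a new MyExtractor(source) for the new MIME type; the extractor itself implements countTracks()/getTrack()/getTrackMetaData().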
2. AV Decoder
status_t AwesomePlayer::initVideoDecoder() {
    mVideoSource = OMXCodec::Create(
            mClient.interface(), mVideoTrack->getFormat(),
            false,  // createEncoder
            mVideoTrack);
    ...
This is where the decoder gets created. The last argument to Create is the demuxed video stream; its most important interface is read(), which is implemented inside the extractor, and the track itself is what XXXXExtractor's getTrack() returned.
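For orientation, these are the two interfaces involved, abbreviated from the Stagefright headers MediaExtractor.h and MediaSource.h; a custom demuxer implements both (the extractor plus one MediaSource per track):

// Abbreviated: what the demuxer exposes.
class MediaExtractor : public RefBase {
public:
    virtual size_t countTracks() = 0;
    virtual sp<MediaSource> getTrack(size_t index) = 0;
    virtual sp<MetaData> getTrackMetaData(size_t index, uint32_t flags = 0) = 0;
    // ...
};

// Abbreviated: what each track (and later the decoder) exposes.
struct MediaSource : public RefBase {
    virtual status_t start(MetaData *params = NULL) = 0;
    virtual status_t stop() = 0;
    virtual sp<MetaData> getFormat() = 0;
    // Returns one unit of data in *buffer: a demuxed packet when called on a
    // track, a decoded frame when called on the decoder.
    virtual status_t read(
            MediaBuffer **buffer, const ReadOptions *options = NULL) = 0;
    // ...
};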
The decoder logic lives mainly in OMXCodec.cpp. OMXCodec::Create is responsible for finding a matching codec and creating it.
Finding a matching codec:
Vector<String8> matchingCodecs;
findMatchingCodecs(
        mime, createEncoder, matchComponentName, flags, &matchingCodecs);

if (matchingCodecs.isEmpty()) {
    return NULL;
}
Creating the codec:
status_t err = omx->allocateNode(componentName, observer, &node);
if (err == OK) {
    LOGV("Successfully allocated OMX node '%s'", componentName);

    sp<OMXCodec> codec = new OMXCodec(
            omx, node, quirks, flags,
            createEncoder, mime, componentName,
            source, nativeWindow);

    observer->setCodec(codec);

    err = codec->configureCodec(meta);

    if (err == OK) {
        if (!strcmp("OMX.Nvidia.mpeg2v.decode", componentName)) {
            codec->mFlags |= kOnlySubmitOneInputBufferAtOneTime;
        }

        return codec;
    }

    LOGV("Failed to configure codec '%s'", componentName);
}
findMatchingCodecs is the function that looks up the decoder: it searches the kDecoderInfo table for components that can handle the requested MIME type.
void OMXCodec::findMatchingCodecs(
        const char *mime,
        bool createEncoder, const char *matchComponentName,
        uint32_t flags,
        Vector<String8> *matchingCodecs) {
    matchingCodecs->clear();

    for (int index = 0;; ++index) {
        const char *componentName;

        if (createEncoder) {
            componentName = GetCodec(
                    kEncoderInfo,
                    sizeof(kEncoderInfo) / sizeof(kEncoderInfo[0]),
                    mime, index);
        } else {
            componentName = GetCodec(
                    kDecoderInfo,
                    sizeof(kDecoderInfo) / sizeof(kDecoderInfo[0]),
                    mime, index);
        }

        if (!componentName) {
            break;
        }

        // If a specific codec is requested, skip the non-matching ones.
        if (matchComponentName && strcmp(componentName, matchComponentName)) {
            continue;
        }

        // When requesting software-only codecs, only push software codecs
        // When requesting hardware-only codecs, only push hardware codecs
        // When there is request neither for software-only nor for
        // hardware-only codecs, push all codecs
        if (((flags & kSoftwareCodecsOnly) && IsSoftwareCodec(componentName)) ||
            ((flags & kHardwareCodecsOnly) && !IsSoftwareCodec(componentName)) ||
            (!(flags & (kSoftwareCodecsOnly | kHardwareCodecsOnly)))) {
            matchingCodecs->push(String8(componentName));
        }
    }

    if (flags & kPreferSoftwareCodecs) {
        matchingCodecs->sort(CompareSoftwareCodecsFirst);
    }
}
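kDecoderInfo (together with kEncoderInfo) is a static table at the top of OMXCodec.cpp that maps a MIME type to candidate component names in priority order. A shortened, illustrative excerpt follows; the exact entries vary by Android version and vendor. A decoder for a new format is wired in by adding a row here with the component name registered with OMX:

struct CodecInfo {
    const char *mime;
    const char *codec;
};

static const CodecInfo kDecoderInfo[] = {
    { MEDIA_MIMETYPE_AUDIO_MPEG, "OMX.google.mp3.decoder" },
    { MEDIA_MIMETYPE_VIDEO_AVC, "OMX.TI.DUCATI1.VIDEO.DECODER" },
    { MEDIA_MIMETYPE_VIDEO_AVC, "OMX.google.h264.decoder" },
    // Hypothetical addition for a custom format:
    // { "video/x-myformat", "OMX.vendor.myformat.decoder" },
};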
That is roughly the whole flow. You don't need to worry about the display side: the decoder connects to the render path and that part takes care of itself. The decoder's most important external interface is, again, read(), which returns decoded data frame by frame.
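As a rough sketch of that consumer side (the pattern is modelled on AwesomePlayer's video event handling, with error handling and A/V-sync logic stripped out):

// Pull one decoded frame from the codec. mVideoSource is the sp<MediaSource>
// returned by OMXCodec::Create.
MediaBuffer *buffer = NULL;
MediaSource::ReadOptions options;

status_t err = mVideoSource->read(&buffer, &options);

if (err == INFO_FORMAT_CHANGED) {
    // Output geometry/colour format changed: re-query
    // mVideoSource->getFormat(), reconfigure the renderer, then read again.
} else if (err == OK && buffer != NULL) {
    // 'buffer' now holds one decoded frame plus its MetaData (e.g. kKeyTime).
    // Hand it to the renderer, then release it so the codec can recycle it.
    buffer->release();
    buffer = NULL;
}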