Starting with Android 5.0, the media playback framework abandoned AwesomePlayer entirely; local playback now goes through the NuPlayer framework. This chapter covers the NuPlayer-related flows: NuPlayer creation, audio/video parsing, decoder creation, audio track creation, and audio offload playback.
The application-facing entry point for audio/video playback is MediaPlayer, and anyone who has written an app knows the three MediaPlayer steps: setDataSource, prepare, start.
We can use these three steps as the thread for walking through the whole NuPlayer flow:
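For context, the same three steps through the native MediaPlayer wrapper (frameworks/av/media/libmedia) look roughly like the sketch below; playLocalFileSketch is a hypothetical helper and error handling is omitted:

#include <media/mediaplayer.h>
using namespace android;

void playLocalFileSketch(int fd, int64_t length) {
    sp<MediaPlayer> mp = new MediaPlayer();
    mp->setDataSource(fd, 0 /* offset */, length);  // step 1: hand over the media source
    mp->prepare();                                  // step 2: build the pipeline (blocking)
    mp->start();                                    // step 3: kick off playback
}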
status_t MediaPlayer::setDataSource(const sp<IDataSource> &source) {
    ...
    const sp<IMediaPlayerService> service(getMediaPlayerService());
    if (service != 0) {
        sp<IMediaPlayer> player(service->create(this, mAudioSessionId));
        if ((NO_ERROR != doSetRetransmitEndpoint(player)) ||
                (NO_ERROR != player->setDataSource(source))) {
            player.clear();
        }
        err = attachNewPlayer(player);
    }
MediaPlayer::setDataSource first obtains the MediaPlayerService binder proxy and binds the return value of MediaPlayerService::create to player. Tracing MediaPlayerService::create shows that the returned object is MediaPlayerService's inner class Client, so this setDataSource call ends up in Client::setDataSource.
status_t MediaPlayerService::Client::setDataSource(
        const sp<IDataSource> &source) {
    sp<DataSource> dataSource = CreateDataSourceFromIDataSource(source);
    player_type playerType = MediaPlayerFactory::getPlayerType(this, dataSource);
    sp<MediaPlayerBase> p = setDataSource_pre(playerType);
    if (p == NULL) {
        return NO_INIT;
    }
    // now set data source
    return mStatus = setDataSource_post(p, p->setDataSource(dataSource));
}
It is worth looking at getPlayerType here. getPlayerType is a MediaPlayerFactory function whose body expands to the GET_PLAYER_TYPE_IMPL macro:
#define GET_PLAYER_TYPE_IMPL(a...)                        \
    Mutex::Autolock lock_(&sLock);                        \
    player_type ret = STAGEFRIGHT_PLAYER;                 \
    float bestScore = 0.0;                                \
    for (size_t i = 0; i < sFactoryMap.size(); ++i) {     \
        IFactory* v = sFactoryMap.valueAt(i);             \
        float thisScore;                                  \
        CHECK(v != NULL);                                 \
        thisScore = v->scoreFactory(a, bestScore);        \
        if (thisScore > bestScore) {                      \
            ret = sFactoryMap.keyAt(i);                   \
            bestScore = thisScore;                        \
        }                                                 \
    }                                                     \
    if (0.0 == bestScore) {                               \
        ret = getDefaultPlayerType();                     \
    }                                                     \
    return ret;
This simply walks sFactoryMap: the factory with the highest score wins, and its key (a player_type) is returned. Looking for where entries are added to sFactoryMap leads to registerFactory_l, which is called from registerBuiltinFactories when MediaPlayerFactory is initialized; there we find registerFactory_l(factory, NU_PLAYER). So the platform currently uses NuPlayer. If you ever need to plug a custom playback framework into the system, this is where you would associate MediaPlayer with it, as sketched below.
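Purely as an illustration, a vendor factory might look like the following sketch. MyPlayerFactory, MyPlayer, MY_PLAYER and the ".myext" rule are all hypothetical, and the IFactory signatures are abridged from MediaPlayerFactory.h:

class MyPlayerFactory : public MediaPlayerFactory::IFactory {
  public:
    // Score url-based sources; the fd- and DataSource-based overloads
    // can be overridden in the same way.
    virtual float scoreFactory(const sp<IMediaPlayer>& /*client*/,
                               const char* url, float curScore) {
        static const float kOurScore = 0.9;
        if (kOurScore <= curScore)
            return 0.0;                     // someone else already matched better
        size_t len = strlen(url);
        if (len >= 6 && !strcasecmp(url + len - 6, ".myext"))
            return kOurScore;               // claim our private container suffix
        return 0.0;
    }
    virtual sp<MediaPlayerBase> createPlayer(pid_t pid) {
        return new MyPlayer(pid);           // hypothetical MediaPlayerBase subclass
    }
};
// Registered next to the builtin factories, with a new player_type value:
// registerFactory_l(new MyPlayerFactory(), MY_PLAYER);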
Next, look at the setDataSource_pre flow:
sp<MediaPlayerBase> MediaPlayerService::Client::setDataSource_pre(
        player_type playerType) {
    ...
    // create the right type of player
    sp<MediaPlayerBase> p = createPlayer(playerType);
    ...
    // (omitted: registering death listeners for the extractor, omx and codec2 services)
    if (!p->hardwareOutput()) {
        mAudioOutput = new AudioOutput(mAudioSessionId, IPCThreadState::self()->getCallingUid(),
                mPid, mAudioAttributes, mAudioDeviceUpdatedListener);
        static_cast<MediaPlayerInterface*>(p.get())->setAudioSink(mAudioOutput);
    }
    return p;
}
createPlayer creates a NuPlayerDriver based on the NU_PLAYER type returned above. As the name suggests, it acts like NuPlayer's driver, i.e. the interface the NuPlayer framework exposes to MediaPlayer. (When NuPlayerDriver is initialized it also creates NuPlayer itself, a MediaAnalyticsItem for performance analytics, and a MediaClock used for audio timing; see the constructor excerpt after the next snippet.)
virtual sp<MediaPlayerBase> createPlayer(pid_t pid) {
    ALOGV(" create NuPlayer");
    return new NuPlayerDriver(pid);
}
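For reference, the relevant NuPlayerDriver constructor initializers look roughly like this (abridged; the exact member list varies across Android versions):

NuPlayerDriver::NuPlayerDriver(pid_t pid)
    : mState(STATE_IDLE),
      ...
      mLooper(new ALooper),
      mMediaClock(new MediaClock),              // timestamp/clock bookkeeping
      mPlayer(new NuPlayer(pid, mMediaClock)),  // the actual player engine
      ...                                       // mAnalyticsItem is also set up here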
Back in setDataSource_pre, an AudioOutput is newed up. AudioOutput inherits from AudioSink; it may look unfamiliar for now, but we will keep running into it later. The static_cast<MediaPlayerInterface*>(p.get())->setAudioSink(mAudioOutput) call invokes NuPlayerDriver's setAudioSink, which stores the AudioSink object.
Back in Client::setDataSource:
return mStatus = setDataSource_post(p, p->setDataSource(dataSource));
This first calls NuPlayerDriver::setDataSource, then processes the return value in setDataSource_post.
NuPlayerDriver then calls NuPlayer::setDataSourceAsync, which essentially does one thing: it instantiates a GenericSource object and calls GenericSource::setDataSource. GenericSource is another fairly important class; it is the main bridge between NuPlayer and the extractor. Reading GenericSource::setDataSource shows that it mostly just calls resetDataSource to clear some member state, nothing more.
prepare starts from MediaPlayer and its call path is essentially the same as setDataSource, descending step by step into GenericSource, so we can jump straight to GenericSource::onPrepareAsync:
void NuPlayer::GenericSource::onPrepareAsync() {
    ...
    if (mDataSource == NULL) {
        ...
        // streaming case: build the data source from the uri
        if (!mUri.empty()) {
            const char* uri = mUri.c_str();
            String8 contentType;
            if (!strncasecmp("http://", uri, 7) || !strncasecmp("https://", uri, 8)) {
                ...
                httpSource = DataSourceFactory::CreateMediaHTTP(mHTTPService);
                ...
            }
            ...
            sp<DataSource> dataSource = DataSourceFactory::CreateFromURI(
                    mHTTPService, uri, &mUriHeaders, &contentType,
                    static_cast<HTTPBase *>(mHttpSource.get()));
            ...
        } else {
            if (property_get_bool("media.stagefright.extractremote", true) &&
                    !FileSource::requiresDrm(mFd, mOffset, mLength, nullptr /* mime */)) {
                sp<IBinder> binder =
                        defaultServiceManager()->getService(String16("media.extractor"));
                if (binder != nullptr) {
                    ALOGD("FileSource remote");
                    sp<IMediaExtractorService> mediaExService(
                            interface_cast<IMediaExtractorService>(binder));
                    sp<IDataSource> source =
                            mediaExService->makeIDataSource(mFd, mOffset, mLength);
                    ALOGV("IDataSource(FileSource): %p %d %lld %lld",
                            source.get(), mFd, (long long)mOffset, (long long)mLength);
                    if (source.get() != nullptr) {
                        mDataSource = CreateDataSourceFromIDataSource(source);
                        if (mDataSource != nullptr) {
                            // Close the local file descriptor as it is not needed anymore.
                            close(mFd);
                            mFd = -1;
                        }
                    } else {
                        ALOGW("extractor service cannot make data source");
                    }
                } else {
                    ALOGW("extractor service not running");
                }
            }
            if (mDataSource == nullptr) {
                ALOGD("FileSource local");
                mDataSource = new FileSource(mFd, mOffset, mLength);
            }
            // TODO: close should always be done on mFd, see the lines following
            // CreateDataSourceFromIDataSource above,
            // and the FileSource constructor should dup the mFd argument as needed.
            mFd = -1;
        }
        if (mDataSource == NULL) {
            ALOGE("Failed to create data source!");
            mDisconnectLock.unlock();
            notifyPreparedAndCleanup(UNKNOWN_ERROR);
            return;
        }
    }
As shown above, this code mainly assigns mDataSource, based on what setDataSource stored earlier:
1. If a uri was passed in, CreateFromURI does the work: a uri starting with "file://" is a local file and yields a FileSource; a uri starting with "http://" yields a NuCachedSource2; anything else also falls back to FileSource (see the sketch after this list).
2. If no uri was passed in, then as long as the "media.stagefright.extractremote" property is true (its default), the data source is created via CreateDataSourceFromIDataSource through the media.extractor service (I haven't personally seen this data source in use); with the property disabled, a plain FileSource is used.
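A simplified sketch of that scheme dispatch (not the verbatim DataSourceFactory::CreateFromURI; caching config and DRM details are trimmed, and createFromURISketch is a made-up name):

sp<DataSource> createFromURISketch(
        const sp<MediaHTTPService> &httpService, const char *uri) {
    if (!strncasecmp("file://", uri, 7)) {
        return new FileSource(uri + 7);          // strip the scheme, open the path
    } else if (!strncasecmp("http://", uri, 7) || !strncasecmp("https://", uri, 8)) {
        sp<HTTPBase> http = DataSourceFactory::CreateMediaHTTP(httpService);
        if (http == NULL || http->connect(uri) != OK) {
            return NULL;
        }
        return NuCachedSource2::Create(http);    // wrap http in the caching source
    }
    return new FileSource(uri);                  // anything else: treat as a local path
}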
After that, the extractor gets created:
// init extractor from data source
status_t err = initFromDataSource();
status_t NuPlayer::GenericSource::initFromDataSource() {
    extractor = MediaExtractorFactory::Create(dataSource, NULL);
==> MediaExtractorFactory::Create
    sp<IMediaExtractor> ex = mediaExService->makeExtractor(
==> MediaExtractorService::makeExtractor
    sp<IMediaExtractor> extractor = MediaExtractorFactory::CreateFromService(localSource, mime);
==> MediaExtractorFactory::CreateFromService
    creator = sniff(source, &confidence, &meta, &freeMeta, plugin, &creatorVersion);
==> MediaExtractorFactory::sniff
void *MediaExtractorFactory::sniff(
        const sp<DataSource> &source, float *confidence, void **meta,
        FreeMetaFunc *freeMeta, sp<ExtractorPlugin> &plugin, uint32_t *creatorVersion) {
    *confidence = 0.0f;
    *meta = nullptr;
    std::shared_ptr<std::list<sp<ExtractorPlugin>>> plugins;
    {
        Mutex::Autolock autoLock(gPluginMutex);
        if (!gPluginsRegistered) {
            return NULL;
        }
        plugins = gPlugins;
    }
    void *bestCreator = NULL;
    for (auto it = plugins->begin(); it != plugins->end(); ++it) {
        ALOGV("sniffing %s", (*it)->def.extractor_name);
        float newConfidence;
        void *newMeta = nullptr;
        FreeMetaFunc newFreeMeta = nullptr;
        void *curCreator = NULL;
        if ((*it)->def.def_version == EXTRACTORDEF_VERSION_NDK_V1) {
            curCreator = (void*) (*it)->def.u.v2.sniff(
                    source->wrap(), &newConfidence, &newMeta, &newFreeMeta);
        } else if ((*it)->def.def_version == EXTRACTORDEF_VERSION_NDK_V2) {
            curCreator = (void*) (*it)->def.u.v3.sniff(
                    source->wrap(), &newConfidence, &newMeta, &newFreeMeta);
        }
        if (curCreator) {
            if (newConfidence > *confidence) {
                *confidence = newConfidence;
                if (*meta != nullptr && *freeMeta != nullptr) {
                    (*freeMeta)(*meta);
                }
                *meta = newMeta;
                *freeMeta = newFreeMeta;
                plugin = *it;
                bestCreator = curCreator;
                *creatorVersion = (*it)->def.def_version;
            } else {
                if (newMeta != nullptr && newFreeMeta != nullptr) {
                    newFreeMeta(newMeta);
                }
            }
        }
    }
    return bestCreator;
}
The sniff function is the most important step in extractor creation. The code above iterates over plugins. Ever since the media.extractor service was split out of mediaserver, all extractors (MPEG4Extractor, MP3Extractor, and so on) are loaded at boot, and plugins holds everything registered at that time. The loop calls each extractor's sniff, which inspects the head of the data source to decide whether it recognizes the file type and reports a score through the newConfidence out-parameter. Whichever extractor scores highest is the best match and wins. A hypothetical sniff implementation is sketched below.
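To make the scoring concrete, here is a hypothetical sniff entry point modeled on the builtin extractor plugins (the plugin types come from MediaExtractorPluginApi.h; the "MYBX" magic and all names are made up):

// Factory the framework will call if this plugin wins the sniff contest.
static CMediaExtractor *CreateMyBoxExtractor(CDataSource *source, void *meta);

static CreatorFunc SniffMyBox(CDataSource *source, float *confidence,
                              void ** /*meta*/, FreeMetaFunc * /*freeMeta*/) {
    uint8_t header[8];
    // Peek at the first bytes of the stream through the plugin data source.
    if (source->readAt(source->handle, 0, header, sizeof(header))
            < (ssize_t)sizeof(header)) {
        return NULL;
    }
    if (memcmp(header, "MYBX", 4) != 0) {
        return NULL;          // not our container: cast no vote
    }
    *confidence = 0.6f;       // magic-byte match, outbids generic heuristic sniffs
    return CreateMyBoxExtractor;
}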
Once the extractor is created, all tracks are fetched from it and assigned to mAudioTrack and mVideoTrack:
for (size_t i = 0; i < numtracks; ++i) {
    sp<IMediaSource> track = extractor->getTrack(i);
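Inside that loop the track is classified by its mime type (the first audio track and first video track win); simplified from the body of initFromDataSource:

    sp<MetaData> meta = extractor->getTrackMetaData(i);
    const char *mime;
    CHECK(meta->findCString(kKeyMIMEType, &mime));
    if (!strncasecmp(mime, "audio/", 6) && mAudioTrack.mSource == NULL) {
        mAudioTrack.mIndex = i;
        mAudioTrack.mSource = track;
    } else if (!strncasecmp(mime, "video/", 6) && mVideoTrack.mSource == NULL) {
        mVideoTrack.mIndex = i;
        mVideoTrack.mSource = track;
    }
    mSources.push(track);     // every track is kept for later track selection
}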
As usual, skipping the incidental plumbing, we go straight to NuPlayer::onStart:
void NuPlayer::onStart(int64_t startPositionUs, MediaPlayerSeekMode mode) {
    ALOGV("onStart: mCrypto: %p (%d)", mCrypto.get(),
            (mCrypto != NULL ? mCrypto->getStrongCount() : 0));
    if (!mSourceStarted) {
        mSourceStarted = true;
        mSource->start();
    }
    // when resuming playback, seek to the saved position
    if (startPositionUs > 0) {
        performSeek(startPositionUs, mode);
        if (mSource->getFormat(false /* audio */) == NULL) {
            return;
        }
    }
    mOffloadAudio = false;
    mAudioEOS = false;
    mVideoEOS = false;
    mStarted = true;
    mPaused = false;
    uint32_t flags = 0;
    if (mSource->isRealTime()) {
        flags |= Renderer::FLAG_REAL_TIME;
    }
    bool hasAudio = (mSource->getFormat(true /* audio */) != NULL);
    bool hasVideo = (mSource->getFormat(false /* audio */) != NULL);
    if (!hasAudio && !hasVideo) {
        ALOGE("no metadata for either audio or video source");
        mSource->stop();
        mSourceStarted = false;
        notifyListener(MEDIA_ERROR, MEDIA_ERROR_UNKNOWN, ERROR_MALFORMED);
        return;
    }
    ALOGV_IF(!hasAudio, "no metadata for audio source"); // video only stream
    sp<MetaData> audioMeta = mSource->getFormatMeta(true /* audio */);
    // the stream type defaults to music
    audio_stream_type_t streamType = AUDIO_STREAM_MUSIC;
    if (mAudioSink != NULL) {
        streamType = mAudioSink->getAudioStreamType();
    }
    // decide whether this audio can be offloaded
    mOffloadAudio =
        canOffloadStream(audioMeta, hasVideo, mSource->isStreaming(), streamType)
                && (mPlaybackSettings.mSpeed == 1.f && mPlaybackSettings.mPitch == 1.f);
    // Modular DRM: Disabling audio offload if the source is protected
    if (mOffloadAudio && mIsDrmProtected) {
        mOffloadAudio = false;
        ALOGV("onStart: Disabling mOffloadAudio now that the source is protected.");
    }
    // set the offload flag for the renderer
    if (mOffloadAudio) {
        flags |= Renderer::FLAG_OFFLOAD_AUDIO;
    }
    sp<AMessage> notify = new AMessage(kWhatRendererNotify, this);
    ++mRendererGeneration;
    notify->setInt32("generation", mRendererGeneration);
    mRenderer = new Renderer(mAudioSink, mMediaClock, notify, flags);
    .....
    postScanSources();
}
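A side note on the offload check: canOffloadStream (media/libstagefright/Utils.cpp) roughly maps the track meta into an audio_offload_info_t and lets audio policy make the call. A trimmed sketch of its shape (field names from audio.h; version differences glossed over):

    audio_offload_info_t info = AUDIO_INFO_INITIALIZER;
    info.format = audioFormat;        // e.g. AUDIO_FORMAT_MP3, mapped from the track mime
    info.sample_rate = sampleRate;
    info.channel_mask = channelMask;
    info.stream_type = streamType;
    info.bit_rate = avgBitRate;
    info.has_video = hasVideo;
    info.is_streaming = isStreaming;
    info.duration_us = durationUs;    // very short clips are typically rejected
    // the final word comes from audio policy / the HAL:
    return AudioSystem::isOffloadSupported(info);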
The important parts here are the instantiation of the NuPlayerRenderer object and the call to postScanSources. postScanSources posts a kWhatScanSources message, and the core of its handling is the call to instantiateDecoder:
case kWhatScanSources:
    ......
    // a non-NULL surface means there is video: create the video decoder
    if (mSurface != NULL) {
        if (instantiateDecoder(false, &mVideoDecoder) == -EWOULDBLOCK) {
            rescan = true;
        }
    }
    // Don't try to re-open audio sink if there's an existing decoder.
    if (mAudioSink != NULL && mAudioDecoder == NULL) {
        if (instantiateDecoder(true, &mAudioDecoder) == -EWOULDBLOCK) {
            rescan = true;
        }
    }
Let's see what instantiateDecoder does:
if (audio) {
    sp<AMessage> notify = new AMessage(kWhatAudioNotify, this);
    ++mAudioDecoderGeneration;
    notify->setInt32("generation", mAudioDecoderGeneration);
    if (checkAudioModeChange) {
        determineAudioModeChange(format);
    }
    if (mOffloadAudio) {
        mSource->setOffloadAudio(true /* offload */);
        const bool hasVideo = (mSource->getFormat(false /*audio */) != NULL);
        format->setInt32("has-video", hasVideo);
        *decoder = new DecoderPassThrough(notify, mSource, mRenderer);
        ALOGV("instantiateDecoder audio DecoderPassThrough hasVideo: %d", hasVideo);
    } else {
        mSource->setOffloadAudio(false /* offload */);
        *decoder = new Decoder(notify, mSource, mPID, mUID, mRenderer);
        ALOGV("instantiateDecoder audio Decoder");
    }
    mAudioDecoderError = false;
} else {
    sp<AMessage> notify = new AMessage(kWhatVideoNotify, this);
    ++mVideoDecoderGeneration;
    notify->setInt32("generation", mVideoDecoderGeneration);
    *decoder = new Decoder(
            notify, mSource, mPID, mUID, mRenderer, mSurface, mCCDecoder);
    mVideoDecoderError = false;
    // enable FRC if high-quality AV sync is requested, even if not
    // directly queuing to display, as this will even improve textureview
    // playback.
    {
        if (property_get_bool("persist.sys.media.avsync", false)) {
            format->setInt32("auto-frc", 1);
        }
    }
}
From the several new expressions you can see that this function's main job is creating the decoders. In the offload case a DecoderPassThrough is created which, true to its name, passes the data straight through instead of setting up a decoder. As mentioned earlier, offload is the low-power scenario: the stream is decoded directly in hardware, skipping software decoding, whereas the audio decoders created by NuPlayerDecoder are all software decoders. Hence offload gets a DecoderPassThrough.
Continuing in instantiateDecoder:
(*decoder)->configure(format);
This calls the onConfigure function of NuPlayerDecoder or NuPlayerDecoderPassThrough. NuPlayerDecoder::onConfigure mainly creates the omx component through MediaCodec, which we will cover later when discussing codec creation; here we follow NuPlayerDecoderPassThrough::onConfigure to continue the audio flow.
void NuPlayer::DecoderPassThrough::onConfigure(const sp<AMessage> &format) {
    ......
    status_t err = mRenderer->openAudioSink(
            format, true /* offloadOnly */, hasVideo,
            AUDIO_OUTPUT_FLAG_NONE /* flags */, NULL /* isOffloaded */, mSource->isStreaming());

status_t NuPlayer::Renderer::onOpenAudioSink(
    ......
    err = mAudioSink->open(
            sampleRate,
            numChannels,
            (audio_channel_mask_t)channelMask,
            audioFormat,
            0 /* bufferCount - unused */,
            &NuPlayer::Renderer::AudioSinkCallback,
            this,
            (audio_output_flags_t)offloadFlags,
            &offloadInfo);
Here that AudioSink shows up again. Recall that NuPlayer passed it in as a constructor argument when newing the Renderer, and back in setDataSource we saw setAudioSink(mAudioOutput), so the AudioSink inside NuPlayer is exactly the mAudioOutput passed in earlier.
Now we can circle back to AudioOutput's open function:
status_t MediaPlayerService::AudioOutput::open(
    ......
    sp<AudioTrack> t;
    CallbackData *newcbd = NULL;
    // We don't attempt to create a new track if we are recycling an
    // offloaded track. But, if we are recycling a non-offloaded or we
    // are switching where one is offloaded and one isn't then we create
    // the new track in advance so that we can read additional stream info
    if (!(reuse && bothOffloaded)) {
        ALOGV("creating new AudioTrack");
        if (mCallback != NULL) {
            newcbd = new CallbackData(this);
            t = new AudioTrack(
                    mStreamType,
                    sampleRate,
                    format,
                    channelMask,
                    frameCount,
                    flags,
                    CallbackWrapper,
                    newcbd,
                    0, // notification frames
                    mSessionId,
                    AudioTrack::TRANSFER_CALLBACK,
                    offloadInfo,
                    mUid,
                    mPid,
                    mAttributes,
    ......
    mSampleRateHz = sampleRate;
    mFlags = flags;
    mMsecsPerFrame = 1E3f / (mPlaybackRate.mSpeed * sampleRate);
    mFrameSize = t->frameSize();
    mTrack = t;
    return updateTrack();
}
Here an AudioTrack is newed up, and this is where the NuPlayer framework connects to the AudioTrack path: as the framework diagram in 1.1 shows, all audio data flows through AudioTrack down to the audio HAL. A standalone sketch of the same API follows.
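For orientation, a minimal standalone sketch driving the same native AudioTrack API, but in blocking (TRANSFER_SYNC) mode rather than the callback mode used above; playPcmSketch is a hypothetical helper and the constructor arguments are simplified (they vary across Android versions):

#include <media/AudioTrack.h>
using namespace android;

// Push a buffer of 16-bit stereo PCM out through AudioTrack.
void playPcmSketch(const void *pcm, size_t bytes) {
    sp<AudioTrack> track = new AudioTrack(
            AUDIO_STREAM_MUSIC,             // same default stream type as NuPlayer
            44100,                          // sample rate in Hz
            AUDIO_FORMAT_PCM_16_BIT,
            AUDIO_CHANNEL_OUT_STEREO,
            0 /* frameCount: 0 lets AudioTrack pick the minimum */);
    if (track->initCheck() != NO_ERROR) {
        return;                             // track creation failed
    }
    track->start();
    track->write(pcm, bytes);               // blocks until the PCM is queued
    track->stop();
}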
To summarize the audio-side NuPlayer flows:
(1) audio-only, offload:
MediaPlayer->MediaPlayerService->NuPlayer->DecoderPassThrough->Renderer->AudioTrack
(2) audio-only, non-offload:
MediaPlayer->MediaPlayerService->NuPlayer->Decoder->MediaCodec->ACodec->omx(soft audio decoder)->ACodec->MediaCodec->Renderer->AudioTrack