For example, on Android, when a new output device is connected (a 3.5mm headset plugged in, or a Bluetooth headset paired), AudioPolicyManager handles the state change:
status_t AudioPolicyManager::setDeviceConnectionStateInt(audio_devices_t device,
audio_policy_dev_state_t state,
const char *device_address,
const char *device_name)
{
//fetch the descriptor of the newly connected device from mHwModules
sp<DeviceDescriptor> devDesc =
mHwModules.getDeviceDescriptor(device, device_address, device_name);
...
switch (state)
{
// handle output device connection
case AUDIO_POLICY_DEVICE_STATE_AVAILABLE: {
checkOutputsForDevice(devDesc, state, outputs, devDesc->mAddress);
}
...
}
...
}
When a new (external) device is connected, we need to run checkOutputsForDevice for it:
status_t AudioPolicyManager::checkOutputsForDevice(const sp<DeviceDescriptor>& devDesc,
audio_policy_dev_state_t state,
SortedVector<audio_io_handle_t>& outputs,
const String8& address)
{
...
if (state == AUDIO_POLICY_DEVICE_STATE_AVAILABLE) {
...
//first list the already-opened outputs that can be routed to this device,
//then look for output profiles that can be routed to this device,
//and open outputs for matching profiles if needed (no-op if already open).
audio_io_handle_t output = AUDIO_IO_HANDLE_NONE;
status_t status = mpClientInterface->openOutput(profile->getModuleHandle(),
&output,
&config,
&desc->mDevice,
address,
&desc->mLatency,
desc->mFlags);
...
//presumably fills in the profile's dynamic capabilities (supported rates/formats/channel masks queried from the HAL, e.g. for HDMI/A2DP)
updateAudioProfiles(device, output, profile->getAudioProfiles());
else if (((desc->mFlags & AUDIO_OUTPUT_FLAG_DIRECT) == 0) &&
hasPrimaryOutput()) {
//the new output has no AUDIO_OUTPUT_FLAG_DIRECT flag, and
//mPrimaryOutput is set
...
// open a duplicating output thread for the new output and the primary output
duplicatedOutput =
mpClientInterface->openDuplicateOutput(output,
mPrimaryOutput->mIoHandle);
if (duplicatedOutput != AUDIO_IO_HANDLE_NONE) {
//the duplicating output opened successfully
sp<SwAudioOutputDescriptor> dupOutputDesc =
new SwAudioOutputDescriptor(NULL,mpClientInterface);
dupOutputDesc->mOutput1 = mPrimaryOutput;//output 1: the primary output
dupOutputDesc->mOutput2 = desc;//output 2: the newly opened output
dupOutputDesc->mSamplingRate = desc->mSamplingRate;
dupOutputDesc->mFormat = desc->mFormat;
dupOutputDesc->mChannelMask = desc->mChannelMask;
dupOutputDesc->mLatency = desc->mLatency;
//register in mOutputs
addOutput(duplicatedOutput, dupOutputDesc);
}
...
}
}
...
}
First, a look at the mPrimaryOutput mentioned above:
In AudioPolicyManager's constructor, audio_policy.conf (or audio_policy_configuration.xml since Android N) is parsed, building the mHwModules list.
It then iterates over each module's mOutputProfiles and constructs a SwAudioOutputDescriptor (call it outputDesc) for each.
If an outputDesc's profile carries the AUDIO_OUTPUT_FLAG_PRIMARY flag, that outputDesc becomes
mPrimaryOutput!
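The selection logic above can be sketched with simplified stand-in types (OutputProfile and the flag values here are illustrative, not the real AOSP definitions):

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Simplified stand-ins for the AOSP types, for illustration only.
enum OutputFlags : uint32_t {
    AUDIO_OUTPUT_FLAG_NONE    = 0x0,
    AUDIO_OUTPUT_FLAG_PRIMARY = 0x2,
};

struct OutputProfile {
    std::string name;
    uint32_t    flags;
};

// Sketch of how AudioPolicyManager's constructor effectively picks
// mPrimaryOutput: the first output whose profile carries
// AUDIO_OUTPUT_FLAG_PRIMARY wins.
const OutputProfile* pickPrimary(const std::vector<OutputProfile>& profiles) {
    for (const auto& p : profiles) {
        if (p.flags & AUDIO_OUTPUT_FLAG_PRIMARY) {
            return &p;
        }
    }
    return nullptr; // no primary profile configured
}
```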
Next, let's look at what we care about most here: what openDuplicateOutput actually does.
audio_io_handle_t AudioFlinger::openDuplicateOutput(audio_io_handle_t output1,
audio_io_handle_t output2)
{
Mutex::Autolock _l(mLock);
MixerThread *thread1 = checkMixerThread_l(output1);
MixerThread *thread2 = checkMixerThread_l(output2);
...
audio_io_handle_t id = nextUniqueId(AUDIO_UNIQUE_ID_USE_OUTPUT);
DuplicatingThread *thread = new DuplicatingThread(this, thread1, id, mSystemReady);
thread->addOutputTrack(thread2);
mPlaybackThreads.add(id, thread);
// notify client processes of the new output creation
thread->ioConfigChanged(AUDIO_OUTPUT_OPENED);
return id;
}
First, checkMixerThread_l:
PlaybackThread *thread = checkPlaybackThread_l(output);
return thread != NULL && thread->type() != ThreadBase::DIRECT ? (MixerThread *) thread : NULL;
//checkPlaybackThread_l:
return mPlaybackThreads.valueFor(output).get();
Before openDuplicateOutput is called, openOutput has already been called for each output, and openOutput always creates a subclass of PlaybackThread. So checkMixerThread_l passes as long as the thread is not of type ThreadBase::DIRECT (a DirectOutputThread),
and returns the PlaybackThread associated with the output.
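The check can be sketched like this, with a plain std::map standing in for mPlaybackThreads and a made-up ThreadType enum replacing ThreadBase's type field:

```cpp
#include <map>

// Hypothetical stand-ins, not the real AOSP types.
enum class ThreadType { MIXER, DIRECT, DUPLICATING };

struct PlaybackThread {
    ThreadType type;
};

// Sketch of checkMixerThread_l: look the output handle up in the
// playback-thread map and accept it only if it is not a DIRECT thread.
PlaybackThread* checkMixerThread(std::map<int, PlaybackThread>& threads, int output) {
    auto it = threads.find(output);
    if (it == threads.end()) return nullptr;          // no thread for this output
    return it->second.type != ThreadType::DIRECT ? &it->second : nullptr;
}
```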
Now the DuplicatingThread construction:
AudioFlinger::DuplicatingThread::DuplicatingThread(const sp<AudioFlinger>& audioFlinger,
AudioFlinger::MixerThread* mainThread, audio_io_handle_t id, bool systemReady)
: MixerThread(audioFlinger, mainThread->getOutput(), id, mainThread->outDevice(),
systemReady, DUPLICATING),
mWaitTimeMs(UINT_MAX)
{
addOutputTrack(mainThread);
}
It first calls the MixerThread constructor:
MixerThread(audioFlinger, mainThread->getOutput(), id, mainThread->outDevice(),
systemReady, DUPLICATING),
The last argument is DUPLICATING, whose main effect is:
//AudioFlinger::MixerThread::MixerThread
mAudioMixer = new AudioMixer(mNormalFrameCount, mSampleRate);
if (type == DUPLICATING) {
// The Duplicating thread uses the AudioMixer and delivers data to OutputTracks
// (downstream MixerThreads) in DuplicatingThread::threadLoop_write().
// Do not create or use mFastMixer, mOutputSink, mPipeSink, or mNormalSink.
return;
}
It only constructs the AudioMixer object and then returns: as the comment explains, the duplicating thread uses the AudioMixer and delivers data to OutputTracks (the downstream MixerThreads) in DuplicatingThread::threadLoop_write(), so mFastMixer, mOutputSink, mPipeSink and mNormalSink are neither created nor used.
The constructor also calls addOutputTrack:
void AudioFlinger::DuplicatingThread::addOutputTrack(MixerThread *thread)
{
...
sp<OutputTrack> outputTrack = new OutputTrack(thread,
this,
mSampleRate,
mFormat,
mChannelMask,
frameCount,
IPCThreadState::self()->getCallingUid());
...
thread->setStreamVolume(AUDIO_STREAM_PATCH, 1.0f);
mOutputTracks.add(outputTrack);
updateWaitTime_l();
...
}
A quick summary of what openDuplicateOutput does:
1. Fetch the MixerThreads corresponding to output1 and output2.
2. Create a DuplicatingThread object, thread.
3. Call addOutputTrack twice (once in the constructor for thread1, once explicitly for thread2), constructing two OutputTrack objects and adding them to thread's mOutputTracks.
4. Add thread to AudioFlinger's mPlaybackThreads.
5. updateWaitTime_l assigns mWaitTimeMs:
...
(strong->frameCount() * 2 * 1000) / strong->sampleRate();
...
Here strong is the PlaybackThread this OutputTrack writes to; the minimum over all OutputTracks is kept.
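The arithmetic of updateWaitTime_l can be sketched as follows (frameCount/sampleRate pairs stand in for the OutputTracks' sink threads; the function name is illustrative):

```cpp
#include <algorithm>
#include <cstdint>
#include <utility>
#include <vector>

// Sketch of updateWaitTime_l's arithmetic: each OutputTrack's sink thread
// contributes (frameCount * 2 * 1000) / sampleRate milliseconds (two
// buffers' worth of audio), and the smallest value is kept as the wait time.
uint32_t updateWaitTimeMs(const std::vector<std::pair<size_t, uint32_t>>& sinks) {
    uint32_t waitTimeMs = UINT32_MAX;
    for (const auto& sink : sinks) {
        // sink.first = frameCount, sink.second = sampleRate
        uint32_t ms = static_cast<uint32_t>((sink.first * 2 * 1000) / sink.second);
        waitTimeMs = std::min(waitTimeMs, ms);
    }
    return waitTimeMs;
}
```

For example, a 960-frame buffer at 48000 Hz yields 40 ms, so a duplicating thread writing to such a sink will never wait longer than two buffers' worth of audio.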
The OutputTrack declaration:
class OutputTrack : public Track {
And the OutputTrack constructor:
AudioFlinger::PlaybackThread::OutputTrack::OutputTrack(
PlaybackThread *playbackThread,
DuplicatingThread *sourceThread,
uint32_t sampleRate,
audio_format_t format,
audio_channel_mask_t channelMask,
size_t frameCount,
uid_t uid)
: Track(playbackThread, NULL, AUDIO_STREAM_PATCH,
sampleRate, format, channelMask, frameCount,
nullptr /* buffer */, (size_t)0 /* bufferSize */, nullptr /* sharedBuffer */,
AUDIO_SESSION_NONE, uid, AUDIO_OUTPUT_FLAG_NONE,
TYPE_OUTPUT),
mActive(false), mSourceThread(sourceThread)
{
if (mCblk != NULL) {
mOutBuffer.frameCount = 0;
//add this OutputTrack to the thread's mTracks array for unified management
playbackThread->mTracks.add(this);
// client and server live in the same process, so both sides see the buffer at the same virtual address
mClientProxy = new AudioTrackClientProxy(mCblk, mBuffer, mFrameCount, mFrameSize,true /*clientInServer*/);
...
}
...
}
The initializer list calls the base class Track's constructor:
Track(playbackThread, NULL, AUDIO_STREAM_PATCH,
sampleRate, format, channelMask, frameCount,
nullptr /* buffer */, (size_t)0 /* bufferSize */, nullptr /* sharedBuffer */,
AUDIO_SESSION_NONE, uid, AUDIO_OUTPUT_FLAG_NONE,
TYPE_OUTPUT),
Track's constructor in turn calls the TrackBase constructor. Points worth noting:
1. mCblk and mBuffer, which are used later, are initialized and allocated in this process.
2. The streamType passed is AUDIO_STREAM_PATCH, documented as:
For internal audio flinger tracks. Fixed volume
(looking back, addOutputTrack has already used it once, in setStreamVolume)
What it is for is not yet clear to me.
3. sharedBuffer is null.
When data needs to be written downstream, DuplicatingThread overrides threadLoop_write, replacing the base class version (the various Sinks the base class relies on simply don't exist here):
ssize_t AudioFlinger::DuplicatingThread::threadLoop_write()
{
for (size_t i = 0; i < outputTracks.size(); i++) {
outputTracks[i]->write(mSinkBuffer, writeFrames);
}
mStandby = false;
return (ssize_t)mSinkBufferSize;
}
It loops over the outputTracks, calling each one's write function.
The core of this write function is essentially these lines:
bool AudioFlinger::PlaybackThread::OutputTrack::write(void* data, uint32_t frames)
{
...
status_t status = obtainBuffer(&mOutBuffer, waitTimeLeftMs);
memcpy(mOutBuffer.raw, pInBuffer->raw, outFrames * mFrameSize);
...
}
pInBuffer's data comes from the data argument.
mOutBuffer comes from obtainBuffer, i.e. the shared memory calloc'd when the Track was constructed: mBuffer!
So at this point the two Tracks (OutputTracks here) have both buffer space (mBuffer) and data, and each has been
added via playbackThread->mTracks.add().
Tracing playbackThread back, the two correspond to thread1 and thread2 respectively.
Looking back at the first two parameters of the OutputTrack constructor:
//AudioFlinger::PlaybackThread::OutputTrack::OutputTrack
...
PlaybackThread *playbackThread,
DuplicatingThread *sourceThread,
...
The names give it away: the former is the thread that actually does the playback; the latter is the data source.
So is DuplicatingThread's main job really just shoveling data around?
Once the two ordinary MixerThreads have the data, everything should proceed as with any normal thread.
For one thing, audio focus presumably no longer gets in the way (otherwise the two couldn't each play happily). The important point is
that the output device is chosen per application: for example, a music app plays to the headset while a navigation app plays to the speaker, and the two data streams are never mixed together.
Mixing only happens between the different Tracks inside one playbackThread; with two playbackThreads each owning its own Track, there is no mixing, and each stream takes its own path.
Next, the normal PlaybackThread write path:
ssize_t AudioFlinger::PlaybackThread::threadLoop_write() {
...
bytesWritten = mOutput->write((char *)mSinkBuffer + offset, mBytesRemaining);
...
}
With two devices, perhaps adding another mOutput1->write call would do (provided the underlying hardware/BSP supports it and the HAL is adapted accordingly).
Since I haven't run into a real use case yet, I'll stop here for now.
To be continued...