Android Q allows multiple apps to record audio at the same time. An excerpt from the official documentation:
When two apps try to capture audio concurrently, both of them may receive the input signal, or one of them may be silenced.
The documentation describes four main scenarios.
When multiple apps capture audio simultaneously, only one or two of them are "active" (receiving audio); the others are muted (receiving silence). When the active apps change, the audio framework may reconfigure the audio paths accordingly.
Because an active app may be silenced when a higher-priority app becomes active, you can register an AudioManager.AudioRecordingCallback on the AudioRecord or MediaRecorder object to be notified whenever the configuration changes.
You must call AudioRecord.registerAudioRecordingCallback() before starting the capture. The callback is executed only while the app is receiving audio and a change occurs.
The onRecordingConfigChanged() method provides an AudioRecordingConfiguration describing the current audio capture state. Use the following methods to learn about the change:
isClientSilenced()
Returns true if the audio returned to the client is currently being silenced due to the capture policy.
getAudioDevice()
Returns the active audio device.
getEffects()
Returns the active pre-processing effects. Note that the active effects might differ from the ones returned by getClientEffects() if the client is not the highest-priority active app.
getFormat()
Returns the stream attributes. Note that the actual audio data delivered to the client always follows the required format returned by **getClientFormat()**; the framework automatically performs the necessary resampling, channel conversion, and format conversion from the format used at the hardware interface to the format requested by the client.
AudioRecord.getActiveRecordingConfiguration()
Returns the active recording configuration.
You can get a general view of all active recordings on the device by calling AudioManager.getActiveRecordingConfigurations().
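For example, a minimal sketch that logs a snapshot of every active recording (context and TAG are assumed to be in scope; isClientSilenced() requires API level 29):

AudioManager am = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
for (AudioRecordingConfiguration config : am.getActiveRecordingConfigurations()) {
    Log.d(TAG, "session=" + config.getClientAudioSessionId()
            + " source=" + config.getClientAudioSource()
            + " silenced=" + config.isClientSilenced());
}

The callback type itself is declared in AudioManager: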
public static abstract class AudioRecordingCallback {
/**
* Called whenever the device recording configuration has changed.
* @param configs list containing the results of
* {@link AudioManager#getActiveRecordingConfigurations()}.
*/
public void onRecordingConfigChanged(List<AudioRecordingConfiguration> configs) {}
}
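Putting the API together, a registration sketch (assuming API level 29+, the RECORD_AUDIO permission, and a context in scope; the other names are illustrative):

final AudioRecord record = new AudioRecord.Builder()
        .setAudioSource(MediaRecorder.AudioSource.MIC)
        .setAudioFormat(new AudioFormat.Builder()
                .setEncoding(AudioFormat.ENCODING_PCM_16BIT)
                .setSampleRate(48000)
                .setChannelMask(AudioFormat.CHANNEL_IN_MONO)
                .build())
        .build();
// The callback must be registered before startRecording().
record.registerAudioRecordingCallback(context.getMainExecutor(),
        new AudioManager.AudioRecordingCallback() {
            @Override
            public void onRecordingConfigChanged(List<AudioRecordingConfiguration> configs) {
                AudioRecordingConfiguration config = record.getActiveRecordingConfiguration();
                if (config != null && config.isClientSilenced()) {
                    // A higher-priority app took the input; we now receive silence.
                }
            }
        });
record.startRecording();

When the silencing ends, the callback fires again with isClientSilenced() returning false, so the app can resume normal processing.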
AudioRecordingCallback is an abstract class implemented by the app itself, so the next step is to find out who calls onRecordingConfigChanged(). Tracing the code shows that it is invoked when the MSSG_RECORDING_CONFIG_CHANGE message is received; searching for where that message is sent leads to the following code:
private final IRecordingConfigDispatcher mRecCb = new IRecordingConfigDispatcher.Stub() {
@Override
public void dispatchRecordingConfigChange(List<AudioRecordingConfiguration> configs) {
synchronized(mRecordCallbackLock) {
if (mRecordCallbackList != null) {
for (int i=0 ; i < mRecordCallbackList.size() ; i++) {
final AudioRecordingCallbackInfo arci = mRecordCallbackList.get(i);
if (arci.mHandler != null) {
final Message m = arci.mHandler.obtainMessage(
MSSG_RECORDING_CONFIG_CHANGE/*what*/,
new RecordConfigChangeCallbackData(arci.mCb, configs)/*obj*/);
arci.mHandler.sendMessage(m);
}
}
}
}
}
};
The code above shows that we next need to find who calls dispatchRecordingConfigChange(). That leads to the dispatchCallbacks() function in RecordingActivityMonitor.java (from here on only the key code is quoted), then to onRecordingConfigurationChanged(), and finally to the recordingCallbackFromNative() function in AudioSystem.java, which is invoked over JNI through the callback that AudioSystem.cpp registers with setRecordConfigCallback() (the JNI glue is straightforward and is not analyzed further).
First, let's look at AudioSystem::setRecordConfigCallback:
/*static*/ void AudioSystem::setRecordConfigCallback(record_config_callback cb)
{
Mutex::Autolock _l(gLock);
gRecordConfigCallback = cb;
}
The callback is stored in gRecordConfigCallback. Following gRecordConfigCallback leads to AudioSystem::AudioPolicyServiceClient::onRecordingConfigurationUpdate(). Searching for its caller leads to AudioPolicyService::NotificationClient::onRecordingConfigurationUpdate(), and tracing backwards from there: AudioPolicyService::doOnRecordingConfigurationUpdate(), which is reached from AudioPolicyService::onRecordingConfigurationUpdate() via the RECORDING_CONFIGURATION_UPDATE message; then AudioPolicyService::AudioPolicyClient::onRecordingConfigurationUpdate(); then AudioInputDescriptor::updateClientRecordingConfiguration(); then AudioInputDescriptor::setAppState(); then AudioPolicyService::setAppState_l(); and finally AudioPolicyService::updateUidStates_l().
This completes the code flow from the API down to the native service; for convenience, the whole flow is summarized in a UML diagram.
The function that ultimately decides whether an app should be silenced is updateUidStates_l(), which we now analyze in detail.
Note: many of these native calls cross process boundaries via Binder; since that is not the focus of this article, it is not discussed further.
First, the key part of the code:
// By default allow capture if:
// The assistant is not on TOP
// AND is on TOP or latest started
// AND there is no active privacy sensitive capture or call
// OR client has CAPTURE_AUDIO_OUTPUT privileged permission
bool allowCapture = !isAssistantOnTop
&& ((isTopOrLatestActive && !isLatestSensitive) || isLatestSensitive)
&& !(isSensitiveActive && !(isLatestSensitive || current->canCaptureOutput))
&& !(isInCall && !current->canCaptureOutput);
if (isVirtualSource(source)) {
// Allow capture for virtual (remote submix, call audio TX or RX...) sources
allowCapture = true;
} else if (mUidPolicy->isAssistantUid(current->uid)) {
// For assistant allow capture if:
// An accessibility service is on TOP or a RTT call is active
// AND the source is VOICE_RECOGNITION or HOTWORD
// OR is on TOP AND uses VOICE_RECOGNITION
// OR uses HOTWORD
// AND there is no active privacy sensitive capture or call
// OR client has CAPTURE_AUDIO_OUTPUT privileged permission
if (isA11yOnTop || rttCallActive) {
if (source == AUDIO_SOURCE_HOTWORD || source == AUDIO_SOURCE_VOICE_RECOGNITION) {
allowCapture = true;
}
} else {
if (((isAssistantOnTop && source == AUDIO_SOURCE_VOICE_RECOGNITION) ||
source == AUDIO_SOURCE_HOTWORD) &&
(!(isSensitiveActive || isInCall) || current->canCaptureOutput)) {
allowCapture = true;
}
}
} else if (mUidPolicy->isA11yUid(current->uid)) {
// For accessibility service allow capture if:
// Is on TOP
// AND the source is VOICE_RECOGNITION or HOTWORD
// Or
// The assistant is not on TOP
// AND there is no active privacy sensitive capture or call
// OR client has CAPTURE_AUDIO_OUTPUT privileged permission
if (isA11yOnTop) {
if (source == AUDIO_SOURCE_VOICE_RECOGNITION || source == AUDIO_SOURCE_HOTWORD) {
allowCapture = true;
}
} else {
if (!isAssistantOnTop
&& (!(isSensitiveActive || isInCall) || current->canCaptureOutput)) {
allowCapture = true;
}
}
}
setAppState_l(current->uid,
allowCapture ? apmStatFromAmState(mUidPolicy->getUidState(current->uid)) :
APP_STATE_IDLE);
The code above shows that the value passed to setAppState_l() depends on the allowCapture flag and on apmStatFromAmState(). Let's look at allowCapture first.
From these conditions we can tell that a client holding the privileged CAPTURE_AUDIO_OUTPUT permission bypasses the privacy-sensitive-capture and in-call restrictions. For example, for an ordinary app recording during a call, isInCall is true and canCaptureOutput is false, so allowCapture evaluates to false and the app is silenced. If you are building your own system image and want to customize this policy, you can add your own conditions to these if statements (for example, checking current->uid against a vendor allowlist) so that your own app is also allowed to capture audio input.
app_state_t AudioPolicyService::apmStatFromAmState(int amState) {
if (amState == ActivityManager::PROCESS_STATE_UNKNOWN) {
return APP_STATE_IDLE;
} else if (amState <= ActivityManager::PROCESS_STATE_TOP) {
// include persistent services
return APP_STATE_TOP;
}
return APP_STATE_FOREGROUND;
}
The return value of this function depends entirely on its argument, so the key is mUidPolicy->getUidState(current->uid), which is implemented as follows:
int AudioPolicyService::UidPolicy::getUidState(uid_t uid) {
if (isServiceUid(uid)) {
return ActivityManager::PROCESS_STATE_TOP;
}
checkRegistered();
{
Mutex::Autolock _l(mLock);
auto overrideIter = mOverrideUids.find(uid);
if (overrideIter != mOverrideUids.end()) {
if (overrideIter->second.first) {
if (overrideIter->second.second != ActivityManager::PROCESS_STATE_UNKNOWN) {
return overrideIter->second.second;
} else {
auto cacheIter = mCachedUids.find(uid);
if (cacheIter != mCachedUids.end()) {
return cacheIter->second.second;
}
}
}
return ActivityManager::PROCESS_STATE_UNKNOWN;
}
// In the absence of the ActivityManager, assume everything to be active.
if (!mObserverRegistered) {
return ActivityManager::PROCESS_STATE_TOP;
}
auto cacheIter = mCachedUids.find(uid);
if (cacheIter != mCachedUids.end()) {
if (cacheIter->second.first) {
return cacheIter->second.second;
} else {
return ActivityManager::PROCESS_STATE_UNKNOWN;
}
}
}
ActivityManager am;
bool active = am.isUidActive(uid, String16("audioserver"));
int state = ActivityManager::PROCESS_STATE_UNKNOWN;
if (active) {
state = am.getUidProcessState(uid, String16("audioserver"));
}
{
Mutex::Autolock _l(mLock);
mCachedUids.insert(std::pair<uid_t, std::pair<bool, int>>(uid, std::pair<bool, int>(active, state)));
}
return state;
}
First, isServiceUid() checks whether the uid belongs to a system service; if it does, the state is reported as PROCESS_STATE_TOP, which permits capture. Otherwise the code looks up the uid's stored state in the mOverrideUids map (falling back to the mCachedUids cache, or querying ActivityManager directly). The stored values correspond to the enum defined in ActivityManager:
enum {
PROCESS_STATE_UNKNOWN = -1,
PROCESS_STATE_PERSISTENT = 0,
PROCESS_STATE_PERSISTENT_UI = 1,
PROCESS_STATE_TOP = 2,
PROCESS_STATE_FOREGROUND_SERVICE_LOCATION = 3,
PROCESS_STATE_BOUND_TOP = 4,
PROCESS_STATE_FOREGROUND_SERVICE = 5,
PROCESS_STATE_BOUND_FOREGROUND_SERVICE = 6,
PROCESS_STATE_IMPORTANT_FOREGROUND = 7,
PROCESS_STATE_IMPORTANT_BACKGROUND = 8,
PROCESS_STATE_TRANSIENT_BACKGROUND = 9,
PROCESS_STATE_BACKUP = 10,
PROCESS_STATE_SERVICE = 11,
PROCESS_STATE_RECEIVER = 12,
PROCESS_STATE_TOP_SLEEPING = 13,
PROCESS_STATE_HEAVY_WEIGHT = 14,
PROCESS_STATE_HOME = 15,
PROCESS_STATE_LAST_ACTIVITY = 16,
PROCESS_STATE_CACHED_ACTIVITY = 17,
PROCESS_STATE_CACHED_ACTIVITY_CLIENT = 18,
PROCESS_STATE_CACHED_RECENT = 19,
PROCESS_STATE_CACHED_EMPTY = 20,
PROCESS_STATE_NONEXISTENT = 21,
};
The code above essentially determines whether the app is a system service or is in the foreground. That completes the analysis of the shared-audio-input policy itself. But this is not yet connected to audiopolicy: we know that AudioPolicyManager::startInput() is the function that actually starts the input path. So next we compare this function between Android Q and Android O (I don't have the P sources at hand) to understand why multiple apps could not record simultaneously before Q.
status_t AudioPolicyManager::startInput(audio_io_handle_t input,
audio_session_t session,
concurrency_type__mask_t *concurrency)
{
......
audio_source_t activeSource = activeDesc->inputSource(true);
if (audioSession->inputSource() == AUDIO_SOURCE_HOTWORD) {
if (activeSource == AUDIO_SOURCE_HOTWORD) {
if (activeDesc->hasPreemptedSession(session)) {
ALOGW("startInput(%d) failed for HOTWORD: "
"other input %d already started for HOTWORD",
input, activeDesc->mIoHandle);
return INVALID_OPERATION;
}
} else {
ALOGV("startInput(%d) failed for HOTWORD: other input %d already started",
input, activeDesc->mIoHandle);
return INVALID_OPERATION;
}
} else {
if (activeSource != AUDIO_SOURCE_HOTWORD) {
ALOGW("startInput(%d) failed: other input %d already started",
input, activeDesc->mIoHandle);
return INVALID_OPERATION;
}
}
}
......
}
Analyzing the code above, we only need to focus on the condition if (audioSession->inputSource() == AUDIO_SOURCE_HOTWORD), which checks whether the session is a hotword (voice-trigger) capture; we ignore that special case here.
if (activeSource != AUDIO_SOURCE_HOTWORD) {
ALOGW("startInput(%d) failed: other input %d already started",
input, activeDesc->mIoHandle);
return INVALID_OPERATION;
}
This means that when the currently active source is not a hotword capture, any further app that tries to start the input receives INVALID_OPERATION; the input device is never started for it, so the second app simply cannot record. From the app side this difference is observable, as sketched below.
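A rough sketch of what an app sees (assuming an initialized AudioRecord named record and a TAG constant; the behavior notes come from the analysis above, not from an official contract):

record.startRecording();
if (record.getRecordingState() != AudioRecord.RECORDSTATE_RECORDING) {
    // Pre-Q: startInput() returned INVALID_OPERATION because another app
    // already holds the input, so the record never enters the RECORDING state.
    Log.w(TAG, "microphone is busy");
}
// On Android Q the call succeeds even while another app is recording,
// but a silenced client reads all-zero buffers until it becomes active again.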
So how does Android Q handle this? Let's analyze the Android Q version of startInput().
status_t AudioPolicyManager::startInput(audio_port_handle_t portId)
{
ALOGV("%s portId %d", __FUNCTION__, portId);
sp<AudioInputDescriptor> inputDesc = mInputs.getInputForClient(portId);
if (inputDesc == 0) {
ALOGW("%s no input for client %d", __FUNCTION__, portId);
return BAD_VALUE;
}
audio_io_handle_t input = inputDesc->mIoHandle;
sp<RecordClientDescriptor> client = inputDesc->getClient(portId);
if (client->active()) {
ALOGW("%s input %d client %d already started", __FUNCTION__, input, client->portId());
return INVALID_OPERATION;
}
audio_session_t session = client->session();
ALOGV("%s input:%d, session:%d)", __FUNCTION__, input, session);
Vector<sp<AudioInputDescriptor>> activeInputs = mInputs.getActiveInputs();
status_t status = inputDesc->start();
if (status != NO_ERROR) {
return status;
}
// increment activity count before calling getNewInputDevice() below as only active sessions
// are considered for device selection
inputDesc->setClientActive(client, true);
// indicate active capture to sound trigger service if starting capture from a mic on
// primary HW module
sp<DeviceDescriptor> device = getNewInputDevice(inputDesc);
setInputDevice(input, device, true /* force */);
if (inputDesc->activeCount() == 1) {
sp<AudioPolicyMix> policyMix = inputDesc->mPolicyMix.promote();
// if input maps to a dynamic policy with an activity listener, notify of state change
if ((policyMix != NULL)
&& ((policyMix->mCbFlags & AudioMix::kCbFlagNotifyActivity) != 0)) {
mpClientInterface->onDynamicPolicyMixStateUpdate(policyMix->mDeviceAddress,
MIX_STATE_MIXING);
}
DeviceVector primaryInputDevices = availablePrimaryModuleInputDevices();
if (primaryInputDevices.contains(device) &&
mInputs.activeInputsCountOnDevices(primaryInputDevices) == 1) {
SoundTrigger::setCaptureState(true);
}
// automatically enable the remote submix output when input is started if not
// used by a policy mix of type MIX_TYPE_RECORDERS
// For remote submix (a virtual device), we open only one input per capture request.
if (audio_is_remote_submix_device(inputDesc->getDeviceType())) {
String8 address = String8("");
if (policyMix == NULL) {
address = String8("0");
} else if (policyMix->mMixType == MIX_TYPE_PLAYERS) {
address = policyMix->mDeviceAddress;
}
if (address != "") {
setDeviceConnectionStateInt(AUDIO_DEVICE_OUT_REMOTE_SUBMIX,
AUDIO_POLICY_DEVICE_STATE_AVAILABLE,
address, "remote-submix", AUDIO_FORMAT_DEFAULT);
}
}
}
ALOGV("%s input %d source = %d exit", __FUNCTION__, input, client->source());
return NO_ERROR;
}
That is the entire function, and it contains no such restriction. Does that mean Q has no restrictions at all? From the API analysis above we know that Q allows shared audio input only under certain conditions. Since neither this function nor the functions it calls directly impose a restriction, let's look at getInputForDevice() to see whether the restriction lives there:
audio_io_handle_t AudioPolicyManager::getInputForDevice(const sp<DeviceDescriptor> &device,
audio_session_t session,
const audio_attributes_t &attributes,
const audio_config_base_t *config,
audio_input_flags_t flags,
const sp<AudioPolicyMix> &policyMix)
{
......
if (!profile->canOpenNewIo()) {
for (size_t i = 0; i < mInputs.size(); ) {
sp<AudioInputDescriptor> desc = mInputs.valueAt(i);
if (desc->mProfile != profile) {
i++;
continue;
}
// if sound trigger, reuse input if used by other sound trigger on same session
// else
// reuse input if active client app is not in IDLE state
//
RecordClientVector clients = desc->clientsList();
bool doClose = false;
for (const auto& client : clients) {
if (isSoundTrigger != client->isSoundTrigger()) {
continue;
}
if (client->isSoundTrigger()) {
if (session == client->session()) {
return desc->mIoHandle;
}
continue;
}
if (client->active() && client->appState() != APP_STATE_IDLE) {
return desc->mIoHandle;
}
doClose = true;
}
if (doClose) {
closeInput(desc->mIoHandle);
} else {
i++;
}
}
}
sp<AudioInputDescriptor> inputDesc = new AudioInputDescriptor(profile, mpClientInterface);
audio_config_t lConfig = AUDIO_CONFIG_INITIALIZER;
lConfig.sample_rate = profileSamplingRate;
lConfig.channel_mask = profileChannelMask;
lConfig.format = profileFormat;
status_t status = inputDesc->open(&lConfig, device, halInputSource, profileFlags, &input);
......
}
In the code above we find the part we were looking for:
if (client->active() && client->appState() != APP_STATE_IDLE) {
return desc->mIoHandle;
}
This decides whether the client is currently allowed to record. APP_STATE_IDLE is defined as follows:
typedef enum {
APP_STATE_IDLE = 0, /* client is idle: cannot capture */
APP_STATE_FOREGROUND, /* client has a foreground service: can capture */
APP_STATE_TOP, /* client has a visible UI: can capture and select use case */
} app_state_t;
This confirms that the behavior follows the policy we derived from the Android Q API analysis above: a client in APP_STATE_IDLE cannot capture, while clients with a foreground service or a visible UI can. A practical consequence for apps is sketched below.
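Since APP_STATE_FOREGROUND corresponds to "client has a foreground service", an app that wants to keep recording while its UI is not on top should run a foreground service. A minimal sketch (CHANNEL_ID and the icon resource are hypothetical placeholders):

public class CaptureService extends Service {
    @Override
    public int onStartCommand(Intent intent, int flags, int startId) {
        Notification notification = new Notification.Builder(this, CHANNEL_ID)
                .setContentTitle("Recording")
                .setSmallIcon(R.drawable.ic_mic)
                .build();
        // With a foreground service the uid maps to APP_STATE_FOREGROUND,
        // so capture is not silenced when the activity leaves the screen.
        startForeground(1, notification);
        return START_STICKY;
    }

    @Override
    public IBinder onBind(Intent intent) {
        return null;
    }
}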
We know that getInputForDevice() is called by getInputForAttr(), and that getInputForAttr() is in turn called, via AudioSystem, from AudioFlinger's createRecord() function.
The part of createRecord() we care about is the call to AudioSystem::getInputForAttr():
lStatus = AudioSystem::getInputForAttr(&input.attr, &output.inputId,
input.riid,
sessionId,
// FIXME compare to AudioTrack
clientPid,
clientUid,
input.opPackageName,
&input.config,
output.flags, &output.selectedDeviceId, &portId);
if (lStatus != NO_ERROR) {
ALOGE("createRecord() getInputForAttr return error %d", lStatus);
goto Exit;
}
{
Mutex::Autolock _l(mLock);
RecordThread *thread = checkRecordThread_l(output.inputId);
if (thread == NULL) {
ALOGE("createRecord() checkRecordThread_l failed, input handle %d", output.inputId);
lStatus = BAD_VALUE;
goto Exit;
}
ALOGV("createRecord() lSessionId: %d input %d", sessionId, output.inputId);
output.sampleRate = input.config.sample_rate;
output.frameCount = input.frameCount;
output.notificationFrameCount = input.notificationFrameCount;
recordTrack = thread->createRecordTrack_l(client, input.attr, &output.sampleRate,
input.config.format, input.config.channel_mask,
&output.frameCount, sessionId,
&output.notificationFrameCount,
callingPid, clientUid, &output.flags,
input.clientInfo.clientTid,
&lStatus, portId,
input.opPackageName);
LOG_ALWAYS_FATAL_IF((lStatus == NO_ERROR) && (recordTrack == 0));
In the code above, RecordThread *thread = checkRecordThread_l(output.inputId) looks up the record thread by the output.inputId parameter, so we need to see where that parameter comes from. It is filled in by AudioSystem::getInputForAttr(), i.e. by AudioPolicyManager's getInputForAttr(), which, as described above, calls getInputForDevice() to obtain the input. That brings us back to the previous analysis (considering only the shared-input case), where we highlighted this code:
if (client->active() && client->appState() != APP_STATE_IDLE) {
return desc->mIoHandle;
}
So when concurrent recording is allowed, getInputForDevice() returns desc->mIoHandle directly. That value is the input handle, which is then passed to AudioFlinger's checkRecordThread_l() mentioned above.
AudioFlinger::RecordThread *AudioFlinger::checkRecordThread_l(audio_io_handle_t input) const
{
return mRecordThreads.valueFor(input).get();
}
This function simply maps the input handle to its record thread. Back in createRecord(), once the record thread is found, createRecordTrack_l() is called on it to create the record track.
Now let's analyze createRecordTrack_l():
{ // scope for mLock
Mutex::Autolock _l(mLock);
track = new RecordTrack(this, client, attr, sampleRate,
format, channelMask, frameCount,
nullptr /* buffer */, (size_t)0 /* bufferSize */, sessionId, creatorPid, uid,
*flags, TrackBase::TYPE_DEFAULT, opPackageName, portId);
lStatus = track->initCheck();
if (lStatus != NO_ERROR) {
ALOGE("createRecordTrack_l() initCheck failed %d; no control block?", lStatus);
// track must be cleared from the caller as the caller has the AF lock
goto Exit;
}
mTracks.add(track);
This function contains a lot of code, so only the key part is quoted. It creates a new RecordTrack and adds it to the mTracks container for management. A dump of the state shows that with shared audio input a single input thread can serve two input tracks, and that the two tracks can even use different sample rates, so AudioFlinger must be doing resampling somewhere. The dump data:
Fast capture thread: no
Fast track available: no
FastCapture not initialized
2 Tracks of which 2 are active
Active Id Client Session Port Id S Flags Format Chn mask SRate Source Server FrmCnt FrmRdy Sil Latency
yes 5019 29587 43553 7310 A 0x000 00000001 0000000C 48000 1 0026B380 3840 0 s 0.31 t
yes 5020 29695 43561 7312 A 0x000 00000001 00000010 44100 1 001961D9 3584 0 n 0.31 t
0 Effect Chains
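The dump above can be reproduced by starting two capture clients with different formats, e.g. a 48 kHz stereo client and a 44.1 kHz mono client. A sketch (RECORD_AUDIO permission assumed):

AudioRecord makeRecord(int sampleRate, int channelMask) {
    int minBuf = AudioRecord.getMinBufferSize(sampleRate, channelMask,
            AudioFormat.ENCODING_PCM_16BIT);
    return new AudioRecord(MediaRecorder.AudioSource.MIC, sampleRate,
            channelMask, AudioFormat.ENCODING_PCM_16BIT, minBuf * 2);
}

AudioRecord a = makeRecord(48000, AudioFormat.CHANNEL_IN_STEREO); // Chn mask 0x0C
AudioRecord b = makeRecord(44100, AudioFormat.CHANNEL_IN_MONO);   // Chn mask 0x10
a.startRecording();
b.startRecording(); // on Q both tracks can be active on the same input thread

Running dumpsys media.audio_flinger while both are active should show two active tracks on one record thread, as in the dump above.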
So where is the resampling done? We know that AudioFlinger reads data from the HAL via mInput->stream->read() in the threadLoop() function, so a natural guess is that threadLoop() resamples the HAL data before handing it to the tracks. Let's verify that guess.
bool AudioFlinger::RecordThread::threadLoop()
{
for (;;)
{
activeTrack->mSink.frameCount = ~0;
status_t status = activeTrack->getNextBuffer(&activeTrack->mSink);
size_t framesOut = activeTrack->mSink.frameCount;
LOG_ALWAYS_FATAL_IF((status == OK) != (framesOut > 0));
// check available frames and handle overrun conditions
// if the record track isn't draining fast enough.
bool hasOverrun;
size_t framesIn;
activeTrack->mResamplerBufferProvider->sync(&framesIn, &hasOverrun);
if (hasOverrun) {
overrun = OVERRUN_TRUE;
}
if (framesOut == 0 || framesIn == 0) {
break;
}
// Don't allow framesOut to be larger than what is possible with resampling
// from framesIn.
// This isn't strictly necessary but helps limit buffer resizing in
// RecordBufferConverter. TODO: remove when no longer needed.
framesOut = min(framesOut,
destinationFramesPossible(
framesIn, mSampleRate, activeTrack->mSampleRate));
if (activeTrack->isDirect()) {
// No RecordBufferConverter used for direct streams. Pass
// straight from RecordThread buffer to RecordTrack buffer.
AudioBufferProvider::Buffer buffer;
buffer.frameCount = framesOut;
status_t status = activeTrack->mResamplerBufferProvider->getNextBuffer(&buffer);
if (status == OK && buffer.frameCount != 0) {
ALOGV_IF(buffer.frameCount != framesOut,
"%s() read less than expected (%zu vs %zu)",
__func__, buffer.frameCount, framesOut);
framesOut = buffer.frameCount;
memcpy(activeTrack->mSink.raw, buffer.raw, buffer.frameCount * mFrameSize);
activeTrack->mResamplerBufferProvider->releaseBuffer(&buffer);
} else {
framesOut = 0;
ALOGE("%s() cannot fill request, status: %d, frameCount: %zu",
__func__, status, buffer.frameCount);
}
} else {
// process frames from the RecordThread buffer provider to the RecordTrack
// buffer
framesOut = activeTrack->mRecordBufferConverter->convert(
activeTrack->mSink.raw,
activeTrack->mResamplerBufferProvider,
framesOut);
}
Let's walk through this code. getNextBuffer() obtains the next chunk of the active track's sink buffer to fill; the data read from the HAL earlier in the loop has already been stored in the thread's ring buffer, which mResamplerBufferProvider reads from. If the track isDirect(), no conversion is needed and the frames are copied straight from the thread buffer to the track buffer. Otherwise, the else branch calls activeTrack->mRecordBufferConverter->convert(), which resamples (and performs any channel and format conversion) from the thread's ring buffer into the track's own buffer. Note the framesOut clamp just above: with the thread running at mSampleRate = 48000, 3840 input frames can yield at most about 3840 × 44100 / 48000 ≈ 3528 frames for the 44.1 kHz track in our dump. This is how each record track receives data at its own sample rate.
With that, we have covered the Android Q shared-audio-input API, how AudioFlinger serves multiple record tracks, and where the resampling is done.