Android 9 Audio System Notes: AudioRecord

AudioRecord

  • Preface
  • AudioRecord
  • Part 1: AudioRecord creation
    • 8.1 native_setup
    • 8.1.4 set
    • 8.1.4.4 Creating the IAudioRecord object: createRecord_l
    • A.4 Calling audioFlinger->createRecord(input, output, &status) to create the IAudioRecord object
      • B.3 Calling AudioSystem::getInputForAttr to get the input stream handle
      • B.3.1 AudioPolicyService::getInputForAttr
        • C.2 AudioPolicyManager::getInputForAttr
        • D.1 Calling getDeviceAndMixForInputSource
        • D.2 Getting the inputType: *input = getInputForDevice
      • B.5 Creating the RecordThread::RecordTrack: thread->createRecordTrack_l
        • E.2 Creating the RecordTrack: track = new RecordTrack
        • E.4 Requesting audio-app thread priority: sendPrioConfigEvent_l
  • Part 2: Establishing the AudioRecord audio routing
    • Starting from startRecording
      • AudioRecord::start
      • F.5 AudioSystem::startInput
        • H.4 Continuing with the setInputDevice function
        • I.4 Calling mpClientInterface->createAudioPatch to create the audio path
          • mAudioPolicyService->clientCreateAudioPatch
          • mAudioCommandThread->createAudioPatchCommand
          • AudioPolicyService::AudioCommandThread::threadLoop
          • af->createAudioPatch on the AudioFlinger side
          • AudioFlinger::PatchPanel::createAudioPatch
          • J.8 audioflinger->openInput_l: opening the input device
          • J.9 createPatchConnections(newPatch, patch): switching the audio path
  • Part 3: Reading data
  • Part 4: AudioRecord overall architecture
  • Summary

Preface

There is no getting around the recording path, and with no obvious starting point, the best plan is to grit our teeth and walk through AudioRecord once. Anything that touches audioflinger is full of twists and turns. Are you ready?

AudioRecord

AudioRecord is the core of the recording path. Put simply, it comes down to these parts:
1. Creating the AudioRecord: setting up the corresponding threads
2. Establishing the AudioRecord audio routing
3. Reading the data

Part 1: AudioRecord creation

Start from frameworks/base/media/java/android/media/AudioRecord.java:
new AudioRecord();

frameworks/base/media/java/android/media/AudioRecord.java
public AudioRecord(AudioAttributes attributes, AudioFormat format, int bufferSizeInBytes,
        int sessionId) throws IllegalArgumentException {
	1. Mark mRecordingState as stopped;
    2. Get a MainLooper;
    3. Check whether the audio source is REMOTE_SUBMIX (worth digging into if you're curious);
    4. Re-fetch the rate and format parameters; AUDIO_FORMAT_HAS_PROPERTY_X decides where they are fetched from, and since the earlier constructor steps already set those flag bits when storing the parameters, the two values are still the ones we passed in;
    5. Call audioParamCheck to validate the parameters once more;
    6. Get the channel count and channel mask: the mono mask is 0x10, the stereo mask is 0x0c;
    7. Call audioBuffSizeCheck to verify that the minimum buffer size is valid;
    8. Call the native function native_setup; note the parameters passed down: a pointer to this object, the audio source, rate, channel mask, format, minBuffSize, and session[];
    9. Mark mRecordingState as initialized.
        A note on the SessionId:
             A session is a conversation, and each one has a unique id to identify it; the id is ultimately managed inside AudioFlinger.
             One session can be shared by multiple AudioTrack objects and MediaPlayers.
             AudioTracks and MediaPlayers sharing one session share the same AudioEffect.
}

8.1 native_setup

//frameworks/base/core/jni/android_media_AudioRecord.cpp
static jint
android_media_AudioRecord_setup(JNIEnv *env, jobject thiz, jobject weak_this,
        jobject jaa, jint sampleRateInHertz, jint channelMask,
                // Java channel masks map directly to the native definition
        jint audioFormat, jint buffSizeInBytes, jintArray jSession)
{
	8.1.1. Validate the channel mask, then derive the channel count from it;
    8.1.2. The minimum buffer size is (number of frames) x (frame size), where one frame is the bytes occupied by all channels together; from that we get the frame count frameCount (a worked example follows this block);
    8.1.3. A series of JNI steps for the audio source, plus binding the AudioRecord.java pointer into the lpCallbackData callback data,
    so that data can be posted up to the Java layer through callbacks;
    8.1.4. Call AudioRecord's set function (lpRecorder->set); note the flags argument here, of type audio_input_flags_t,
    defined in system/core/include/system/audio.h,
    the audio input flags, set here to AUDIO_INPUT_FLAG_NONE:
typedef enum {
    AUDIO_INPUT_FLAG_NONE       = 0x0,  // no attributes
    AUDIO_INPUT_FLAG_FAST       = 0x1,  // prefer an input that supports "fast tracks"
    AUDIO_INPUT_FLAG_HW_HOTWORD = 0x2,  // prefer an input that captures from hw hotword source
} audio_input_flags_t;
    8.1.5. Save the lpRecorder object and the lpCallbackData callback into the corresponding fields of javaAudioRecordFields
}
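
To make step 8.1.2 concrete, here is a small standalone example of the arithmetic (the concrete numbers are made up for illustration):

//frameCount worked example for step 8.1.2 (illustrative values)
#include <cstddef>
#include <cstdio>

int main() {
    const size_t channelCount    = 2;     // stereo
    const size_t bytesPerSample  = 2;     // 16-bit PCM
    const size_t buffSizeInBytes = 16384; // minBuffSize handed down from Java
    // one frame holds one sample for every channel
    const size_t frameSize  = channelCount * bytesPerSample; // 4 bytes
    const size_t frameCount = buffSizeInBytes / frameSize;   // 4096 frames
    printf("frameSize=%zu bytes, frameCount=%zu frames\n", frameSize, frameCount);
    return 0;
}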

8.1.4 set

//frameworks\av\media\libmedia\AudioRecord.cpp
//9.0 frameworks/av/media/libaudioclient/AudioRecord.cpp
status_t AudioRecord::set(
        audio_source_t inputSource,
        uint32_t sampleRate,
        audio_format_t format,
        audio_channel_mask_t channelMask,
        size_t frameCount,
        callback_t cbf,
        void* user,
        uint32_t notificationFrames,
        bool threadCanCallJava,
        audio_session_t sessionId,
        transfer_type transferType,
        audio_input_flags_t flags,
        uid_t uid,
        pid_t pid,
        const audio_attributes_t* pAttributes,
        audio_port_handle_t selectedDeviceId,
        audio_microphone_direction_t selectedMicDirection,
        float microphoneFieldDimension)
{
	8.1.4.1. From the parameters passed in by JNI: transferType is TRANSFER_DEFAULT, cbf != null, threadCanCallJava = true,
	so mTransfer is set to TRANSFER_SYNC; it decides how data is transferred out of AudioRecord and comes up again later (see the sketch after this block);
    8.1.4.2. Save the parameters: the source mAttributes.source, sample rate mSampleRate, sample format mFormat,
    channel mask mChannelMask, channel count mChannelCount, frame size mFrameSize, requested frame count mReqFrameCount,
    and notification frame count mNotificationFramesReq; mSessionId is updated here,
    and the input flags mFlags are still AUDIO_INPUT_FLAG_NONE;
    8.1.4.3. When the cbf data callback is not null, start a recording thread AudioRecordThread:
     mAudioRecordThread = new AudioRecordThread(*this); (on older releases the next step called openRecord_l(0) to create the IAudioRecord object)
	8.1.4.4. Call createRecord_l(0) to create the IAudioRecord object (9.0);
	8.1.4.5. If that fails, tear the AudioRecordThread back down; otherwise update the parameters.
}
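
For step 8.1.4.1, the TRANSFER_DEFAULT resolution has roughly this shape (paraphrased from AudioRecord::set, not a verbatim copy):

// how set() resolves the transfer mode (paraphrased)
switch (transferType) {
case TRANSFER_DEFAULT:
    if (cbf == NULL || threadCanCallJava) {
        // no callback, or the callback may call back into Java:
        // the app pulls data itself with read()
        transferType = TRANSFER_SYNC;
    } else {
        // data is pushed to the app through the callback
        transferType = TRANSFER_CALLBACK;
    }
    break;
case TRANSFER_CALLBACK:
    if (cbf == NULL) {
        return BAD_VALUE;  // a callback transfer needs a callback
    }
    break;
default:
    break;
}
// our JNI path has cbf != NULL and threadCanCallJava == true,
// which is why mTransfer ends up as TRANSFER_SYNC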

8.1.4.4 Creating the IAudioRecord object: createRecord_l

//frameworks\av\media\libmedia\AudioRecord.cpp
//frameworks/av/media/libaudioclient/AudioRecord.cpp
status_t AudioRecord::createRecord_l(const Modulo<uint32_t> &epoch, const String16& opPackageName)
{
	A.1. Get the IAudioFlinger object; it talks to AudioFlinger over binder, so calling it is effectively calling straight into the AudioFlinger service;
    A.2. Check the input flags to decide whether the AUDIO_INPUT_FLAG_FAST bit must be cleared; not needed here, we stay at AUDIO_INPUT_FLAG_NONE;
    (On older releases, AudioSystem::getInputForAttr was called here to get the input stream handle, and audioFlinger->openRecord created the IAudioRecord object; 9.0 moved both inside createRecord.)
    A.4. Call audioFlinger->createRecord(input, output, &status) to create the IAudioRecord object;
    A.5. Map the IMemory shared memory (obtained via the output object from step A.4) that carries the recording data;
    A.6. Update the AudioRecordClientProxy client-side proxy with those record buffers.
}

A.4 Calling audioFlinger->createRecord(input, output, &status) to create the IAudioRecord object

//frameworks/av/services/audioflinger/AudioFlinger.cpp
sp<media::IAudioRecord> AudioFlinger::createRecord(const CreateRecordInput& input,
                                                   CreateRecordOutput& output,
                                                   status_t *status)
{
	B.1. Validate the parameters;
	B.2. registerPid(clientPid);
	B.3. Call AudioSystem::getInputForAttr to get the input stream handle;
	B.4. RecordThread *thread = checkRecordThread_l(output.inputId): fetch the matching RecordThread, which was already created when the device node was opened;
	B.5. Create the RecordThread::RecordTrack: thread->createRecordTrack_l;
	B.6. If there is an effect chain, add it: thread->addEffectChain_l(chain);
	B.7. Wire up the shared memory: output.cblk = recordTrack->getCblk(), output.buffers = recordTrack->getBuffers() (the CreateRecordInput/Output bundles are sketched below);
	B.8. Create and return a RecordHandle for the client to use: recordHandle = new RecordHandle(recordTrack).
}
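
In 9.0 the long parameter list is bundled into two structs, which is also where output.inputId, output.cblk and output.buffers above come from. A rough, abridged sketch keeping only the fields this walkthrough touches (see IAudioFlinger.h for the real definitions):

// abridged sketch of the 9.0 parameter bundles (not the full definitions)
struct CreateRecordInput {
    audio_attributes_t  attr;    // source etc., used for routing (B.3)
    audio_config_base_t config;  // requested sample rate / format / channel mask
    // ... plus client identity (pid/uid/package) and flags
};
struct CreateRecordOutput {
    audio_io_handle_t inputId;   // handle of the opened input stream (B.3/B.4)
    sp<IMemory>       cblk;      // control block shared with the client (B.7)
    sp<IMemory>       buffers;   // data buffer shared with the client (B.7)
    // ... plus the resolved frame counts, session id, selected device, etc.
};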

B.3 Calling AudioSystem::getInputForAttr to get the input stream handle

This is very different from 5.1: not only is the matching device selected here, the device is also opened here. In one sentence: input audio routing and device management are all handled here.

//frameworks\av\media\libmedia\AudioSystem.cpp
status_t AudioSystem::getInputForAttr(const audio_attributes_t *attr,
                                audio_io_handle_t *input,
                                audio_session_t session,
                                uint32_t samplingRate,
                                audio_format_t format,
                                audio_channel_mask_t channelMask,
                                audio_input_flags_t flags)
{
    const sp<IAudioPolicyService>& aps = AudioSystem::get_audio_policy_service();
    if (aps == 0) return NO_INIT;
    return aps->getInputForAttr(attr, input, session, samplingRate, format, channelMask, flags);
}

B.3.1 AudioPolicyService::getInputForAttr

frameworks\av\services\audiopolicy\AudioPolicyInterfaceImpl.cpp
status_t AudioPolicyService::getInputForAttr(const audio_attributes_t *attr,
                                             audio_io_handle_t *input,
                                             audio_session_t session,
                                             uint32_t samplingRate,
                                             audio_format_t format,
                                             audio_channel_mask_t channelMask,
                                             audio_input_flags_t flags)
{
	C.1. For a source of HOTWORD or FM_TUNER, check (by the calling process) whether the app holds the matching record permission;
    C.2. Call on into AudioPolicyManager (AudioPolicyManager::getInputForAttr) to obtain the input handle and the inputType;
    C.3. Check whether the app has record permission for that inputType;
    C.4. Check whether input effects are needed (audioPolicyEffects); if so, add them with audioPolicyEffects->addInputEffects.
}

C.2 AudioPolicyManager::getInputForAttr

//frameworks\av\services\audiopolicy\AudioPolicyManager.cpp
status_t AudioPolicyManager::getInputForAttr(const audio_attributes_t *attr,
                                             audio_io_handle_t *input,
                                             audio_session_t session,
                                             uint32_t samplingRate,
                                             audio_format_t format,
                                             audio_channel_mask_t channelMask,
                                             audio_input_flags_t flags,
                                             input_type_t *inputType)
{
 	D.1. device = getDeviceAndMixForInputSource(inputSource, &policyMix): get the policyMix device and the matching audio_device_t device type;
	D.2. *input = getInputForDevice(device, address, session, uid, inputSource, ...): get the input handle and the inputType. Inside getInputForDevice, roughly:
	- get the inputType;
	- update channelMask to adapt the channels to the input source;
	- call getInputProfile to compare the requested sample rate / format / mask against the Input Profiles the device supports, returning a matching IOProfile (sketched below). An IOProfile describes the capabilities of an output or input stream; the policy manager uses it to decide whether an output or input fits a given use case, to open/close it accordingly, and to attach/detach audio tracks;
	- if that fails, retry once with AUDIO_INPUT_FLAG_NONE; if it still fails, return bad news;
	- call mpClientInterface->openInput to bring up the input stream;
	- build an AudioInputDescriptor from the IOProfile, bind it to the input stream, and finally update the AudioPortList.
}
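
The getInputProfile step referenced above is essentially a search across every HW module's input profiles. A sketch of the idea (the member and method names here are approximations, not the exact AOSP signatures):

// approximate shape of getInputProfile(): find an IOProfile compatible
// with the requested device and audio config
sp<IOProfile> getInputProfileSketch(audio_devices_t device, const String8& address,
                                    uint32_t samplingRate, audio_format_t format,
                                    audio_channel_mask_t channelMask,
                                    audio_input_flags_t flags)
{
    for (const auto& hwModule : mHwModules) {            // every loaded HAL module
        for (const auto& profile : hwModule->getInputProfiles()) {
            // "compatible" == supports this device/address and can satisfy
            // (or adjust) the requested rate/format/mask under these flags
            if (profile->isCompatibleProfile(device, address, samplingRate,
                                             format, channelMask, flags)) {
                return profile;
            }
        }
    }
    return nullptr;  // caller retries with AUDIO_INPUT_FLAG_NONE, then gives up
}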

D.1 Calling getDeviceAndMixForInputSource

First, step 1 of AudioPolicyManager.cpp::getInputForAttr(): obtain the policyMix device and the matching audio_device_t device type.

audio_devices_t AudioPolicyManager::getDeviceAndMixForInputSource(audio_source_t inputSource,
                                                            AudioMix **policyMix)
{
   This maps the InputSource to the matching policyMix and audio_device_t device type;
   it also gives a sense of how many categories Android sorts audio input devices into.
}

Just as AudioTrack's audio policy locates the matching device type for playback, this is the recording-side policy.
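
Conceptually the mapping has this flavor (illustrative only and heavily simplified; the real function also consults mAvailableInputDevices and any registered policy mixes before settling on a device):

// illustrative sketch of the inputSource -> input device mapping
audio_devices_t pickInputDeviceSketch(audio_source_t source, audio_devices_t available)
{
    switch (source) {
    case AUDIO_SOURCE_VOICE_COMMUNICATION:
        // prefer a call-capable device such as a BT SCO headset
        if (available & AUDIO_DEVICE_IN_BLUETOOTH_SCO_HEADSET)
            return AUDIO_DEVICE_IN_BLUETOOTH_SCO_HEADSET;
        [[fallthrough]];
    case AUDIO_SOURCE_MIC:
    case AUDIO_SOURCE_DEFAULT:
        if (available & AUDIO_DEVICE_IN_WIRED_HEADSET)
            return AUDIO_DEVICE_IN_WIRED_HEADSET;  // a plugged headset mic wins
        return AUDIO_DEVICE_IN_BUILTIN_MIC;
    case AUDIO_SOURCE_CAMCORDER:
        return AUDIO_DEVICE_IN_BACK_MIC;           // typically the back mic
    case AUDIO_SOURCE_FM_TUNER:
        return AUDIO_DEVICE_IN_FM_TUNER;
    default:
        return AUDIO_DEVICE_IN_BUILTIN_MIC;
    }
}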

D.2 Getting the inputType: *input = getInputForDevice


audio_io_handle_t AudioPolicyManager::getInputForDevice(audio_devices_t device,
                                                        String8 address,
                                                        audio_session_t session,
                                                        uid_t uid,
                                                        audio_source_t inputSource,
                                                        const audio_config_base_t *config,
                                                        audio_input_flags_t flags,
                                                        AudioMix *policyMix)
{
//a pile of parameter checks, plus some juggling through the AudioSession objects
sp<AudioInputDescriptor> inputDesc = new AudioInputDescriptor(profile, mpClientInterface);
//the key part: this is what opens the HW device node
status_t status = inputDesc->open(&lConfig, device, address,
            halInputSource, profileFlags, &input);
}

B.5 Creating the RecordThread::RecordTrack: thread->createRecordTrack_l

1. This binds the track to the thread fetched in step B.4. 2. It sets up the shared memory.

//frameworks/av/services/audioflinger/Threads.cpp
sp<AudioFlinger::RecordThread::RecordTrack> AudioFlinger::RecordThread::createRecordTrack_l(
        const sp<AudioFlinger::Client>& client,
        const audio_attributes_t& attr,
        uint32_t *pSampleRate,
        audio_format_t format,
        audio_channel_mask_t channelMask,
        size_t *pFrameCount,
        audio_session_t sessionId,
        size_t *pNotificationFrameCount,
        uid_t uid,
        audio_input_flags_t *flags,
        pid_t tid,
        status_t *status,
        audio_port_handle_t portId)
{
	E.1. Generic checks: whether things are initialized, and so on;
	E.2. Create the RecordTrack: track = new RecordTrack, which on creation is bound to the thread found earlier (step B.4);
	E.3. Check that the RecordTrack is valid and add it via mTracks.add(track) (like a playback thread, the record thread manages its RecordTracks);
	E.4. Request audio-app thread priority for the client: sendPrioConfigEvent_l(callingPid, tid, kPriorityAudioApp, true /*forApp*/);
}

E.2 Creating the RecordTrack: track = new RecordTrack

A RecordTrack pulls together quite a lot, including the shared memory.

//frameworks/av/services/audioflinger/Tracks.cpp
AudioFlinger::RecordThread::RecordTrack::RecordTrack(
            RecordThread *thread,
            const sp<Client>& client,
            const audio_attributes_t& attr,
            uint32_t sampleRate,
            audio_format_t format,
            audio_channel_mask_t channelMask,
            size_t frameCount,
            void *buffer,
            size_t bufferSize,
            audio_session_t sessionId,
            pid_t creatorPid,
            uid_t uid,
            audio_input_flags_t flags,
            track_type type,
            const String16& opPackageName,
            audio_port_handle_t portId)
    :   TrackBase(thread, client, attr, sampleRate, format,
                  channelMask, frameCount, buffer, bufferSize, sessionId,
                  creatorPid, uid, false /*isOut*/,
                  (type == TYPE_DEFAULT) ?
                          ((flags & AUDIO_INPUT_FLAG_FAST) ? ALLOC_PIPE : ALLOC_CBLK) :
                          ((buffer == NULL) ? ALLOC_LOCAL : ALLOC_NONE),
                  type, portId,
                  std::string(AMEDIAMETRICS_KEY_PREFIX_AUDIO_RECORD) + std::to_string(portId)),
        mOverflow(false),
        mFramesToDrop(0),
        mResamplerBufferProvider(NULL), // initialize in case of early constructor exit
        mRecordBufferConverter(NULL),
        mFlags(flags),
        mSilenced(false),
        mOpRecordAudioMonitor(OpRecordAudioMonitor::createIfNeeded(uid, attr, opPackageName))
{
	1. mServerProxy = new AudioRecordServerProxy(mCblk, mBuffer, frameCount,
            mFrameSize, !isExternalTrack());

	2. mResamplerBufferProvider = new ResamplerBufferProvider(this);

}

E.4 Requesting audio-app thread priority: sendPrioConfigEvent_l

//frameworks/av/services/audioflinger/Threads.cpp
// sendPrioConfigEvent_l() must be called with ThreadBase::mLock held
void AudioFlinger::ThreadBase::sendPrioConfigEvent_l(
        pid_t pid, pid_t tid, int32_t prio, bool forApp)
{
    sp<ConfigEvent> configEvent = (ConfigEvent *)new PrioConfigEvent(pid, tid, prio, forApp);
    sendConfigEvent_l(configEvent);
}

status_t AudioFlinger::ThreadBase::sendConfigEvent_l(sp<ConfigEvent>& event)
{
    status_t status = NO_ERROR;

    if (event->mRequiresSystemReady && !mSystemReady) {
        event->mWaitStatus = false;
        mPendingConfigEvents.add(event);
        return status;
    }
    mConfigEvents.add(event);
    ALOGV("sendConfigEvent_l() num events %zu event %d", mConfigEvents.size(), event->mType);
    mWaitWorkCV.signal();
    mLock.unlock();
    {
        Mutex::Autolock _l(event->mLock);
        while (event->mWaitStatus) {
            if (event->mCond.waitRelative(event->mLock, kConfigEventTimeoutNs) != NO_ERROR) {
                event->mStatus = TIMED_OUT;
                event->mWaitStatus = false;
            }
        }
        status = event->mStatus;
    }
    mLock.lock();
    return status;
}

Handling the PrioConfigEvent

frameworks/av/services/audioflinger/Threads.cpp
void AudioFlinger::ThreadBase::processConfigEvents_l()
{
 case CFG_EVENT_PRIO: {
            PrioConfigEventData *data = (PrioConfigEventData *)event->mData.get();
            // FIXME Need to understand why this has to be done asynchronously
            int err = requestPriority(data->mPid, data->mTid, data->mPrio, data->mForApp,
                    true /*asynchronous*/);
            if (err != 0) {
                ALOGW("Policy SCHED_FIFO priority %d is unavailable for pid %d tid %d; error %d",
                      data->mPrio, data->mPid, data->mTid, err);
            }
        } break;

}

Part 2: Establishing the AudioRecord audio routing

Starting from startRecording

frameworks/base/media/java/android/media/AudioRecord.java
public void startRecording()
throws IllegalStateException {
    if (mState != STATE_INITIALIZED) {
        throw new IllegalStateException("startRecording() called on an "
                + "uninitialized AudioRecord.");
    }
     // start recording
    synchronized(mRecordingStateLock) {
        if (native_start(MediaSyncEvent.SYNC_EVENT_NONE, 0) == SUCCESS) {
            handleFullVolumeRec(true);
            mRecordingState = RECORDSTATE_RECORDING;
        }
    }
}

frameworks/base/core/jni/android_media_AudioRecord.cpp
android_media_AudioRecord_start(JNIEnv *env, jobject thiz, jint event, jint triggerSession)
{
    sp<AudioRecord> lpRecorder = getAudioRecord(env, thiz);
    if (lpRecorder == NULL ) {
        jniThrowException(env, "java/lang/IllegalStateException", NULL);
        return (jint) AUDIO_JAVA_ERROR;
    }
 
    return nativeToJavaStatus(
            lpRecorder->start((AudioSystem::sync_event_t)event, triggerSession));
}

AudioRecord::start

//frameworks/av/media/libaudioclient/AudioRecord.cpp
status_t AudioRecord::start(AudioSystem::sync_event_t event, audio_session_t triggerSession)
{
	1. Reset the start position for writing capture data into the record buffer (the buffer layout was covered in the first article);
	2. Mark mRefreshRemaining as true; per the comment it forces a refresh of the remaining frames; its role should stand out later, so no hurry;
	3. Fetch the flags from mCblk->mFlags, which is 0x04 here;
	4. On the first pass we definitely go through mAudioRecord->start();
	5. If start fails, restoreRecord_l is called to rebuild the input stream path (already analyzed in a previous article);
	6. Call the AudioRecordThread's resume function.
}

//frameworks/av/services/audioflinger/Threads.cpp
status_t AudioFlinger::RecordThread::start(RecordThread::RecordTrack* recordTrack,
                                           AudioSystem::sync_event_t event,
                                           audio_session_t triggerSession)
{
	F.1. Check the incoming event; from AudioRecord.java it is always SYNC_EVENT_NONE, so the SyncStartEvent is simply cleared;
    F.2. Check whether this recordTrack is already in mActiveTracks. On a first pass it won't be; if it is (i.e. recording already started for some reason), check for the PAUSING state, update the state to ACTIVE, and return;
	F.3. Set the recordTrack state to STARTING_1 and add it to mActiveTracks (an indexOf now would find it);
	F.4. Check whether recordTrack is an external track; isExternalTrack is defined as:
	bool        isTimedTrack() const { return (mType == TYPE_TIMED); }
	bool        isOutputTrack() const { return (mType == TYPE_OUTPUT); }
	bool        isPatchTrack() const { return (mType == TYPE_PATCH); }
	bool        isExternalTrack() const { return !isOutputTrack() && !isPatchTrack(); }
Recall that we passed mType = TrackBase::TYPE_DEFAULT when creating the RecordTrack, so this recordTrack is an external track;
    F.5. Since it is an external track, call AudioSystem::startInput to start capturing. The sessionId is the one from the previous article. As for mId: in AudioSystem::startInput its type is audio_io_handle_t. In the previous article this io handle was obtained through AudioSystem::getInputForAttr, and checkRecordThread_l(input) then returned the RecordThread for it. RecordThread is declared as class RecordThread : public ThreadBase, and in the ThreadBase constructor (implemented in Threads.cpp) the input handle is assigned to mId. In other words, the arguments to AudioSystem::startInput are exactly the input stream established earlier plus the generated sessionId;
	F.6. If mRsmpInRear is not null, reset mRsmpInFront and the other buffer indices; recording obviously hasn't started yet here, so mRsmpInRear is null;
    F.7. Set the recordTrack state to STARTING_2, then call mWaitWorkCV.broadcast() to wake up the thread. Note (a small spoiler): by this point in AudioSystem::startInput, AudioFlinger::RecordThread is already running, so the broadcast has no real effect on it. Also note that the state here becomes STARTING_2, while it was STARTING_1 when joining mActiveTracks; this two-step dance is interesting, and the sketch below shows how the thread resolves it;
	F.8. Finally, check whether recordTrack actually ended up in mActiveTracks; if not, start failed and stopInput etc. must be called.
}
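
How the thread side then consumes those two states (paraphrased from RecordThread::threadLoop(), not verbatim; this is the answer to the F.7 teaser):

// inside RecordThread::threadLoop(), per active track (paraphrased)
switch (activeTrack->mState) {
case TrackBase::STARTING_1:
    // start() has run F.3 but not F.7 yet: the track is in mActiveTracks
    // but not ready; skip it on this pass and sleep briefly
    sleepUs = 10000;
    continue;
case TrackBase::STARTING_2:
    // start() has finished (F.7): leave standby and promote to ACTIVE
    doBroadcast = true;
    mStandby = false;
    activeTrack->mState = TrackBase::ACTIVE;
    break;
case TrackBase::ACTIVE:
    break;  // normal capture
default:
    // PAUSING / STOPPING etc.: drop the track from mActiveTracks
    break;
}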

F.5 AudioSystem::startInput

frameworks\av\media\libmedia\AudioSystem.cpp
status_t AudioSystem::startInput(audio_io_handle_t input,
                                 audio_session_t session)
{
    const sp<IAudioPolicyService>& aps = AudioSystem::get_audio_policy_service();
    if (aps == 0) return PERMISSION_DENIED;
    return aps->startInput(input, session);
}
//frameworks/av/services/audiopolicy/service/AudioPolicyInterfaceImpl.cpp
status_t AudioPolicyService::startInput(audio_port_handle_t portId, bool *silenced)
{
status = mAudioPolicyManager->startInput(
                    client->input, client->session, *silenced, &concurrency);
}
//frameworks/av/services/audiopolicy/managerdefault/AudioPolicyManager.cpp
status_t AudioPolicyManager::startInput(audio_io_handle_t input,
                                        audio_session_t session,
                                        bool silenced,
                                        concurrency_type__mask_t *concurrency)
{
	H.1. Locate the input in the mInputs collection and fetch its inputDesc;
	H.2. Check whether the input device is a virtual device; if not, check whether an active device already exists. On our first pass it doesn't!
	H.3. First pass, so SoundTrigger::setCaptureState(true) is called; that belongs to voice recognition, so we won't dwell on it;
	H.4. Call setInputDevice(input, device, true /* force */). getNewInputDevice derives the audio_devices_t device from the input; as in the previous article, it was obtained in AudioPolicyManager::getInputForAttr via getDeviceAndMixForInputSource, i.e. AUDIO_DEVICE_IN_BUILTIN_MIC, the built-in mic, and the function also updates inputDesc->mDevice at the end;
	H.5. status_t status = inputDesc->start(); // not sure yet what this is for; possibly a difference from 5.1
	H.6. Handle the remote_submix device case if applicable;
	H.7. Increment inputDesc's mRefCount.
}

H.4 Continuing with the setInputDevice function


status_t AudioPolicyManager::setInputDevice(audio_io_handle_t input,
                                            audio_devices_t device,
                                            bool force,
                                            audio_patch_handle_t *patchHandle)
{
	I.1. We already know that device and inputDesc->mDevice are both AUDIO_DEVICE_IN_BUILTIN_MIC, but force is true;
	I.2. Look up all devices in mAvailableInputDevices matching device; so far we've only added one device to that collection;
	I.3. Now look at struct audio_patch, defined in system/core/include/system/audio.h. The source and sinks of the audio_patch are filled in here. Note that mId (the audio_io_handle_t) is stored into it, and the patch also records the InputSource, sample_rate, channel_mask, format, hw_module and so on; nearly everything goes in (see the sketch after this block);
	I.4. Call mpClientInterface->createAudioPatch to create the audio path;
	I.5. Update the patchDesc attributes;
	I.6. If createAudioPatch returned NO_ERROR, call mpClientInterface->onAudioPatchListUpdate to refresh the AudioPatch list.
}
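
For our built-in-mic case, the patch assembled in I.3 looks roughly like this (the field names come from struct audio_patch / audio_port_config in system/core/include/system/audio.h; the concrete values simply follow this walkthrough):

struct audio_patch patch;
patch.id          = AUDIO_PATCH_HANDLE_NONE;  // assigned later
patch.num_sources = 1;
patch.num_sinks   = 1;
// source: the physical input device chosen in H.4
patch.sources[0].role            = AUDIO_PORT_ROLE_SOURCE;
patch.sources[0].type            = AUDIO_PORT_TYPE_DEVICE;
patch.sources[0].ext.device.type = AUDIO_DEVICE_IN_BUILTIN_MIC;
// sink: the input mix, identified by the audio_io_handle_t (mId) of the
// input stream opened earlier
patch.sinks[0].role           = AUDIO_PORT_ROLE_SINK;
patch.sinks[0].type           = AUDIO_PORT_TYPE_MIX;
patch.sinks[0].ext.mix.handle = input;  // the mId mentioned above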

I.4 Calling mpClientInterface->createAudioPatch to create the audio path

frameworks\av\services\audiopolicy\AudioPolicyClientImpl.cpp
status_t AudioPolicyService::AudioPolicyClient::createAudioPatch(const struct audio_patch *patch,
                                                                  audio_patch_handle_t *handle,
                                                                  int delayMs)
{
    return mAudioPolicyService->clientCreateAudioPatch(patch, handle, delayMs);
}
mAudioPolicyService->clientCreateAudioPatch
frameworks\av\services\audiopolicy\AudioPolicyService.cpp
status_t AudioPolicyService::clientCreateAudioPatch(const struct audio_patch *patch,
                                                audio_patch_handle_t *handle,
                                                int delayMs)
{
    return mAudioCommandThread->createAudioPatchCommand(patch, handle, delayMs);

}
mAudioCommandThread->createAudioPatchCommand
status_t AudioPolicyService::AudioCommandThread::createAudioPatchCommand(
                                                const struct audio_patch *patch,
                                                audio_patch_handle_t *handle,
                                                int delayMs)
{
    status_t status = NO_ERROR;
 
    sp<AudioCommand> command = new AudioCommand();
    command->mCommand = CREATE_AUDIO_PATCH;
    CreateAudioPatchData *data = new CreateAudioPatchData();
    data->mPatch = *patch;
    data->mHandle = *handle;
    command->mParam = data;
    command->mWaitStatus = true;
    ALOGV("AudioCommandThread() adding create patch delay %d", delayMs);
    status = sendCommand(command, delayMs);
    if (status == NO_ERROR) {
        *handle = data->mHandle;
    }
    return status;
}
AudioPolicyService::AudioCommandThread::threadLoop
bool AudioPolicyService::AudioCommandThread::threadLoop(){
 case CREATE_AUDIO_PATCH: {
                    CreateAudioPatchData *data = (CreateAudioPatchData *)command->mParam.get();
                    ALOGV("AudioCommandThread() processing create audio patch");
                    sp<IAudioFlinger> af = AudioSystem::get_audio_flinger();
                    if (af == 0) {
                        command->mStatus = PERMISSION_DENIED;
                    } else {
                        command->mStatus = af->createAudioPatch(&data->mPatch, &data->mHandle);
                    }
                    } break;
}
af->createAudioPatch on the AudioFlinger side

Look straight at the CREATE_AUDIO_PATCH branch here: it calls af->createAudioPatch on the AF side. The same loop also contains the later UPDATE_AUDIOPATCH_LIST branch.

frameworks\av\services\audioflinger\PatchPanel.cpp
status_t AudioFlinger::createAudioPatch(const struct audio_patch *patch,
                                   audio_patch_handle_t *handle)
{
    Mutex::Autolock _l(mLock);
    if (mPatchPanel != 0) {
        return mPatchPanel->createAudioPatch(patch, handle);
    }
    return NO_INIT;
}
AudioFlinger::PatchPanel::createAudioPatch
status_t AudioFlinger::PatchPanel::createAudioPatch(const struct audio_patch *patch,
                                   audio_patch_handle_t *handle)
{
	J.1. In AudioPolicyManager::setInputDevice(), num_sources and num_sinks are both 1;
	J.2. If halHandle were not AUDIO_PATCH_HANDLE_NONE, it would be looked up in mPatches and removed; here halHandle is AUDIO_PATCH_HANDLE_NONE;
	J.3. source.type is AUDIO_PORT_TYPE_DEVICE; fetch the audio_module_handle_t from the patch and the AF-side AudioHwDevice; with the parameters set earlier, none of the checks in the following for loop trigger;
	J.4. Check that the source's audio_module_handle_t matches the sink's; of course it does;
	J.5. Check the version in the HAL code: hardware/aw/audio/tulip/audio_hw.c has adev->hw_device.common.version = AUDIO_DEVICE_API_VERSION_2_0;
	J.6. Call the AF-side checkRecordThread_l, i.e. fetch the RecordThread from mRecordThreads by its audio_io_handle_t;
	J.7. Build an AudioParameter object from the address and put source.type and the source into it;
	(On older releases the next step, 4.3.5.2.4.4.5.8, passed the AudioParameter along via thread->setParameters; 9.0 differs here and goes through createAudioPatch directly.)
	J.8. audioflinger->openInput_l: open the input device;
	J.9. createPatchConnections(newPatch, patch): switch the audio path.
}

J.8 audioflinger->openInput_l: opening the input device

This is where the input device actually gets opened, i.e. the openInput path down into the audio HAL; we won't dig deeper here.
Oddly enough, how does this differ from inputDesc->open (back in step D.2; go and check if you don't believe it)? After all the detours they end up in the same place, so why run it again? Just to make sure the input device really is open?
inHwHal->openInputStream(*input, devices, &halconfig, flags, address.string(), source, &inStream)
This lands in the HAL's open_input_stream.
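
For reference, the HAL entry point this bottoms out in, as declared in hardware/libhardware/include/hardware/audio.h (recent HAL API versions; older ones lack the address/source parameters):

int (*open_input_stream)(struct audio_hw_device *dev,
                         audio_io_handle_t handle,     // the input handle (mId)
                         audio_devices_t devices,      // e.g. AUDIO_DEVICE_IN_BUILTIN_MIC
                         struct audio_config *config,  // rate/format/mask, negotiable
                         struct audio_stream_in **stream_in,
                         audio_input_flags_t flags,
                         const char *address,
                         audio_source_t source);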

J.9 createPatchConnections(newPatch, patch): switching the audio path

Whether this step is strictly necessary is unclear; I haven't yet worked out exactly what it does.

status_t AudioFlinger::PatchPanel::createPatchConnections(Patch *patch,
                                                          const struct audio_patch *audioPatch)
{
 	K.1. status_t status = createAudioPatch(&subPatch, &patch->mRecordPatchHandle): create a patch from the source device into the record thread;
 	K.2. createAudioPatch(&subPatch, &patch->mPlaybackPatchHandle): create a patch from the playback thread output to the sink device;
 	K.3. Tie the playback and record tracks together;
 	K.4. Start capture and playback.
}

In short, this function creates the corresponding tracks, adds them to their respective threads, and starts the capture and playback.
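
Putting K.1 through K.4 together, my reading of the software bridge (the names here are descriptive, not the exact AOSP identifiers):

//   source device --(sub-patch K.1)--> RecordThread ---- PatchRecord --+
//                                                                      | shared pipe
//   sink device  <--(sub-patch K.2)--- PlaybackThread <-- PatchTrack --+
//
// K.3 ties the PatchRecord and the PatchTrack to the same buffer, and
// K.4 starts both sides, so whatever is captured from the source device
// is re-injected into the playback path toward the sink device.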

Part 3: Reading data

read(byte[] audioData, int offsetInBytes, int sizeInBytes)

We start directly from AudioRecord.cpp here; the Java side works the same way, so no need to repeat it.

//frameworks/av/media/libaudioclient/AudioRecord.cpp
ssize_t AudioRecord::read(void* buffer, size_t userSize)
{
	L.1. obtainBuffer(&audioBuffer, &ClientProxy::kForever);
	L.2.memcpy(buffer, audioBuffer.i8, bytesRead);
	L.3.releaseBuffer(&audioBuffer);
}


This part is fairly clear: step one computes how much of the buffer is readable and sets the bookkeeping flags; step two copies the data; step three, after the read, updates the flags so that the layer below can compute the free space and write fresh capture data into it. It is simply the control and read/write of a ring buffer.
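
As a standalone model of that ring buffer (greatly simplified: the real control block lives in shared memory, the front/rear indices are free-running atomics, and blocking uses futexes), the client-side read path is roughly:

#include <algorithm>
#include <cstdint>
#include <cstddef>

struct RingModel {
    static const uint32_t kSize = 4096;  // must be a power of two
    uint8_t  data[kSize];
    uint32_t rear  = 0;  // advanced by the server (AudioFlinger) as it writes
    uint32_t front = 0;  // advanced by the client as it reads

    // free-running indices: the difference is the readable amount,
    // correct even across unsigned wrap-around
    uint32_t availToRead() const { return rear - front; }

    // roughly obtainBuffer() + memcpy + releaseBuffer() from L.1-L.3
    size_t read(uint8_t* dst, size_t want) {
        size_t have = std::min<size_t>(want, availToRead());
        for (size_t i = 0; i < have; i++) {
            dst[i] = data[(front + i) & (kSize - 1)];
        }
        front += have;  // releaseBuffer(): hand the space back to the writer
        return have;
    }
};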

Part 4: AudioRecord overall architecture

After all of the above, we have still only covered part of the call flow, and the overall recording architecture remains fuzzy. So the next step is to get a better grip on the 9.0 recording architecture as a whole. Not sure it will come together, but let's try.

To be continued …

=== Updated 2021-04-27 ===
The basic flow framework is shown in the recording flow diagram below:
[Figure: AudioRecord recording flow diagram]

Summary

Android recording boils down to two main flows:
1. Establish the recording route: based on the attr, the devices the driver supports, the system state and so on, the matching routing device is chosen. This is the key part.
2. The app side starts reading data from AudioRecord: AF reads data from the driver's device node, and the app client keeps pulling it out (implemented with the shared-memory ring buffer).
And with that, this write-up is done; it took far too long to get this update out.
