AndroidQ | AudioFlinger

AudioFlinger manages all of Android's audio devices, input and output alike. In the Android audio system it is the bridge between the layers: above it sit AudioTrack/AudioRecord/AudioSystem, below it the Audio HAL. To the upper layers AudioFlinger exposes the various functional interfaces; for the lower layers it spawns a dedicated thread for each Audio HAL device to manage the audio data. This article analyzes the main parts of the code from the perspective of these responsibilities.

1 AudioFlinger Service Registration

[Figure 1]
frameworks/av/media/audioserver/main_audioserver.cpp

int main(int argc __unused, char **argv)
{
...
        sp<IServiceManager> sm = defaultServiceManager();
        ALOGI("ServiceManager: %p", sm.get());
        AudioFlinger::instantiate();------------------register the AudioFlinger service
...
        ProcessState::self()->startThreadPool();
        IPCThreadState::self()->joinThreadPool();
...
}

Once AudioFlinger is registered, clients can use the functionality declared by IAudioFlinger. The most important capabilities are obtaining track and record objects to play back or capture audio data, and setting or querying volume and other parameters.
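
Since the article does not show the client-side lookup, here is a minimal sketch of it, assuming the standard BinderService registration name "media.audio_flinger" that AudioFlinger uses:

// Client-side sketch (not from the original article): after instantiate(),
// the service can be fetched from the ServiceManager by name.
sp<IBinder> binder =
        defaultServiceManager()->getService(String16("media.audio_flinger"));
sp<IAudioFlinger> audioFlinger = interface_cast<IAudioFlinger>(binder);  // Bp-side proxy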

2 Creating the Playback and Record Threads

2.1 Playback thread

For each output device, AudioFlinger creates a thread that loops forever, pushing audio data down to the HAL. The entry point is openOutput, which calls openOutput_l to actually create the thread.

sp<AudioFlinger::ThreadBase> AudioFlinger::openOutput_l(audio_module_handle_t module,
                                                        audio_io_handle_t *output,
                                                        audio_config_t *config,
                                                        audio_devices_t deviceType,
                                                        const String8& address,
                                                        audio_output_flags_t flags)
{
    AudioHwDevice *outHwDev = findSuitableHwDev_l(module, deviceType);--------1. find the HAL audio device
...
    if (*output == AUDIO_IO_HANDLE_NONE) {
        *output = nextUniqueId(AUDIO_UNIQUE_ID_USE_OUTPUT);-------------------2. allocate a unique id
...
    AudioStreamOut *outputStream = NULL;
    status_t status = outHwDev->openOutputStream(
            &outputStream,----------------------------------------------------3. open the output stream
            *output,
            deviceType,
            flags,
            config,
            address.string());

    mHardwareStatus = AUDIO_HW_IDLE;

    if (status == NO_ERROR) {
...
        } else {
            sp<PlaybackThread> thread;
...
            } else {---------------------------------------------------------4. create the playback thread
                thread = new MixerThread(this, outputStream, *output, mSystemReady);
                ALOGV("openOutput_l() created mixer output: ID %d thread %p",
                      *output, thread.get());
            }
            mPlaybackThreads.add(*output, thread);---------------------------5. add to the mPlaybackThreads vector
...
        }
    }
...
}

The thread starts running the first time it is strongly referenced:

void AudioFlinger::PlaybackThread::onFirstRef()
{
    run(mThreadName, ANDROID_PRIORITY_URGENT_AUDIO);
}
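
This relies on the RefBase mechanism: onFirstRef() fires when the first strong pointer to the object is created, so wrapping the new thread in an sp<> (as mPlaybackThreads.add does above) is enough to start it. A self-contained illustration of the pattern (not AudioFlinger code):

#include <utils/RefBase.h>
#include <utils/StrongPointer.h>

struct Demo : public android::RefBase {
    // Called exactly once, when the first sp<Demo> is taken;
    // PlaybackThread calls run() at this point.
    void onFirstRef() override { /* start working here */ }
};

int main() {
    android::sp<Demo> d = new Demo();  // triggers onFirstRef()
    return 0;
}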

A different thread class is created depending on the device type; the flag-to-thread mapping is shown in the table below:

flag                                  thread
AUDIO_OUTPUT_FLAG_MMAP_NOIRQ          MmapPlaybackThread
AUDIO_OUTPUT_FLAG_COMPRESS_OFFLOAD    OffloadThread
AUDIO_OUTPUT_FLAG_DIRECT              DirectOutputThread
default                               MixerThread
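
In code, the dispatch inside openOutput_l looks roughly like this (condensed sketch; the real code also falls back to DirectOutputThread for formats the mixer cannot handle, and MmapPlaybackThread takes different constructor arguments, elided here):

sp<PlaybackThread> thread;
if (flags & AUDIO_OUTPUT_FLAG_MMAP_NOIRQ) {
    // thread = new MmapPlaybackThread(...);  memory-mapped, no-IRQ output
} else if (flags & AUDIO_OUTPUT_FLAG_COMPRESS_OFFLOAD) {
    thread = new OffloadThread(this, outputStream, *output, mSystemReady);       // compressed offload
} else if (flags & AUDIO_OUTPUT_FLAG_DIRECT) {
    thread = new DirectOutputThread(this, outputStream, *output, mSystemReady);  // direct, non-mixed
} else {
    thread = new MixerThread(this, outputStream, *output, mSystemReady);         // default software mixer
}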

Here are two typical class hierarchies:
[Figure 2: class hierarchy of two typical playback threads]

Once running, the thread keeps looping inside threadLoop, which is where the core of playback happens. The function is long, so let's walk through it in pieces.

  • First, the big loop itself; after the loop exits, resources are cleaned up
bool AudioFlinger::PlaybackThread::threadLoop()
{
...
    // loopCount is used for statistics and diagnostics.
    for (int64_t loopCount = 0; !exitPending(); ++loopCount)
    {
    	...
    }

    threadLoop_exit();

    if (!mStandby) {
        threadLoop_standby();
        mStandby = true;
    }

    releaseWakeLock();
...
}
  • Loop step 1: check whether the thread has active tracks, whether it should enter standby, and so on
        { // scope for mLock
            Mutex::Autolock _l(mLock);------lock held for this scope
...
            if (mSignalPending) {
                // A signal was raised while we were unlocked
                mSignalPending = false;
            } else if (waitingAsyncCallback_l()) {
                if (exitPending()) {
                    break;
                }
                bool released = false;
                if (!keepWakeLock()) {
                    releaseWakeLock_l();
                    released = true;
                }
...
                continue;
            }
            if ((mActiveTracks.isEmpty() && systemTime() > mStandbyTimeNs) ||
                                   isSuspended()) {
                // put audio hardware into standby after short delay
                if (shouldStandby_l()) {

                    threadLoop_standby();
...
                }

                if (mActiveTracks.isEmpty() && mConfigEvents.isEmpty()) {
                    // we're about to wait, flush the binder command buffer
                    IPCThreadState::self()->flushCommands();

                    clearOutputTracks();

                    if (exitPending()) {
                        break;
                    }
...
                    mWaitWorkCV.wait(mLock);
                    ALOGV("%s waking up", myName.string());
                    acquireWakeLock_l();
...
                    continue;
                }
            }
            // mMixerStatusIgnoringFastTracks is also updated internally
            mMixerStatus = prepareTracks_l(&tracksToRemove);
...
            activeTracks.insert(activeTracks.end(), mActiveTracks.begin(), mActiveTracks.end());
        } // mLock scope ends
  • Loop step 2: mix the tracks and determine the length of data to write
        if (mBytesRemaining == 0) {
            mCurrentWriteLength = 0;
            if (mMixerStatus == MIXER_TRACKS_READY) {
                // threadLoop_mix() sets mCurrentWriteLength
                threadLoop_mix();
            } else if ((mMixerStatus != MIXER_DRAIN_TRACK)
                        && (mMixerStatus != MIXER_DRAIN_ALL)) {
				...
                if (mSleepTimeUs == 0) {
                    mCurrentWriteLength = mSinkBufferSize;
					...
                }
            }
            // Either threadLoop_mix() or threadLoop_sleepTime() should have set
            // mMixerBuffer with data if mMixerBufferValid is true and mSleepTimeUs == 0.
            // Merge mMixerBuffer data into mEffectBuffer (if any effects are valid)
            // or mSinkBuffer (if there are no effects).
            //
            // This is done pre-effects computation; if effects change to
            // support higher precision, this needs to move.
            //
            // mMixerBufferValid is only set true by MixerThread::prepareTracks_l().
            // TODO use mSleepTimeUs == 0 as an additional condition.
            if (mMixerBufferValid) {
				...
                memcpy_by_audio_format(buffer, format, mMixerBuffer, mMixerBufferFormat,
                        mNormalFrameCount * (mChannelCount + mHapticChannelCount));
				...
            }

            mBytesRemaining = mCurrentWriteLength;
			...

            // only process effects if we're going to write
            if (mSleepTimeUs == 0 && mType != OFFLOAD) {
				...
            }
        }
  • Loop step 3: write the data down to the HAL
        if (!waitingAsyncCallback()) {
            // mSleepTimeUs == 0 means we must write to audio hardware
            if (mSleepTimeUs == 0) {
				...
                if (mBytesRemaining) {
					...
                    ret = threadLoop_write();----------------------write the data to the HAL
                    const int64_t lastIoEndNs = systemTime();
                    if (ret < 0) {
                        mBytesRemaining = 0;
                    } else if (ret > 0) {
                        mBytesWritten += ret;
                        mBytesRemaining -= ret;
						...
                    }
                } else if ((mMixerStatus == MIXER_DRAIN_TRACK) ||
                        (mMixerStatus == MIXER_DRAIN_ALL)) {
                    threadLoop_drain();
                }
                if (mType == MIXER && !mStandby) {
				...
                }

            } else {
                ATRACE_BEGIN("sleep");
				...
                ATRACE_END();
            }
        }
  • Loop step 4: remove inactive tracks and other cleanup
        threadLoop_removeTracks(tracksToRemove);
        tracksToRemove.clear();
        clearOutputTracks();
        effectChains.clear();
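
Putting the four steps together, the shape of the loop is roughly the following (paraphrased skeleton, not verbatim AOSP):

while (!exitPending()) {
    // 1. under mLock: go to standby / wait on mWaitWorkCV when idle,
    //    then prepareTracks_l() to pick the active tracks
    // 2. threadLoop_mix() (or fill with silence) into the sink buffer
    // 3. threadLoop_write() to push the sink buffer to the HAL,
    //    or sleep if there is nothing to write
    // 4. remove finished tracks and clear effect chains
}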

2.2 Record thread

The record thread is created via openInput -> openInput_l:

sp<AudioFlinger::ThreadBase> AudioFlinger::openInput_l(audio_module_handle_t module,
                                                         audio_io_handle_t *input,
                                                         audio_config_t *config,
                                                         audio_devices_t devices,
                                                         const String8& address,
                                                         audio_source_t source,
                                                         audio_input_flags_t flags,
                                                         audio_devices_t outputDevice,
                                                         const String8& outputDeviceAddress)
{
    AudioHwDevice *inHwDev = findSuitableHwDev_l(module, devices);---------------find the HAL device
...
    if (*input == AUDIO_IO_HANDLE_NONE) {
        *input = nextUniqueId(AUDIO_UNIQUE_ID_USE_INPUT);------------------------allocate a unique id
    } else if (audio_unique_id_get_use(*input) != AUDIO_UNIQUE_ID_USE_INPUT) {
...
    }
...
    status_t status = inHwHal->openInputStream(---------------------------------open the input stream
            *input, devices, &halconfig, flags, address.string(), source,
            outputDevice, outputDeviceAddress, &inStream);
...
    if (status == NO_ERROR && inStream != 0) {
        AudioStreamIn *inputStream = new AudioStreamIn(inHwDev, inStream, flags);
        if ((flags & AUDIO_INPUT_FLAG_MMAP_NOIRQ) != 0) {
...
        } else {
...
            sp<RecordThread> thread = new RecordThread(this, inputStream, *input, mSystemReady);-----create the thread
            mRecordThreads.add(*input, thread);------------------------------add to the mRecordThreads vector
...
        }
    }
...
}

Depending on the device type, there are two kinds of record threads:

flag                           thread
AUDIO_INPUT_FLAG_MMAP_NOIRQ    MmapCaptureThread
default                        RecordThread

The class hierarchy of these two types is shown below:
[Figure 3: class hierarchy of the two record thread types]
The record thread's threadLoop also runs in one big loop, reading audio data from the HAL and dispatching it to each active RecordTrack.
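
A paraphrased skeleton of that loop (not verbatim AOSP):

while (!exitPending()) {
    // sleep / enter standby while mActiveTracks is empty
    // read one buffer from the HAL input stream
    // for each active RecordTrack: convert/resample as needed and
    //   copy the data into that track's shared memory buffer
}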

3 Creating AudioTrack and AudioRecord

3.1 AudioTrack

To play audio, the client creates an AudioTrack object. The call chain is AudioTrack::AudioTrack() -> AudioTrack::set() -> AudioTrack::createTrack_l().
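
For context, a minimal native usage sketch of that chain (the parameter values here are illustrative assumptions, not from the article):

sp<AudioTrack> track = new AudioTrack(
        AUDIO_STREAM_MUSIC,        // stream type (assumed)
        44100,                     // sample rate (assumed)
        AUDIO_FORMAT_PCM_16_BIT,   // format (assumed)
        AUDIO_CHANNEL_OUT_STEREO,  // channel mask (assumed)
        0 /* frameCount: let AudioTrack choose */);
if (track->initCheck() == NO_ERROR) {
    track->start();                // ends up in TrackHandle::start(), see below
    // track->write(buffer, size); // streaming-mode writes
    track->stop();
}

Inside createTrack_l(), the heart of the matter is asking AudioFlinger for an IAudioTrack: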

status_t AudioTrack::createTrack_l()
{
...
    const sp<IAudioFlinger>& audioFlinger = AudioSystem::get_audio_flinger();
...
    sp<IAudioTrack> track = audioFlinger->createTrack(input,
                                                      output,
                                                      &status);
...
	mAudioTrack = track;
...
}

IAudioTrack wraps the binder communication interface. Here the client goes through binder to obtain a handle it can use to talk to the service side; this is worth a closer look:

    virtual sp<IAudioTrack> createTrack(const CreateTrackInput& input,
                                        CreateTrackOutput& output,
                                        status_t *status)
    {
        Parcel data, reply;
        sp<IAudioTrack> track;
        data.writeInterfaceToken(IAudioFlinger::getInterfaceDescriptor());

        if (status == nullptr) {
            return track;
        }

        input.writeToParcel(&data);

        status_t lStatus = remote()->transact(CREATE_TRACK, data, &reply);----round-trip to AudioFlinger
        if (lStatus != NO_ERROR) {
            ALOGE("createTrack transaction error %d", lStatus);
            *status = DEAD_OBJECT;
            return track;
        }
        *status = reply.readInt32();
        if (*status != NO_ERROR) {
            ALOGE("createTrack returned error %d", *status);
            return track;
        }
        track = interface_cast<IAudioTrack>(reply.readStrongBinder());-------convert the reply into a binder handle
        if (track == 0) {
            ALOGE("createTrack returned an NULL IAudioTrack with status OK");
            *status = DEAD_OBJECT;
            return track;
        }
        output.readFromParcel(&reply);
        return track;
    }

Now let's look at the AudioFlinger side:

sp<IAudioTrack> AudioFlinger::createTrack(const CreateTrackInput& input,
                                          CreateTrackOutput& output,
                                          status_t *status)
{
...
    {
...
        track = thread->createTrack_l(client, streamType, localAttr, &output.sampleRate,
                                      input.config.format, input.config.channel_mask,
                                      &output.frameCount, &output.notificationFrameCount,
                                      input.notificationsPerBuffer, input.speed,
                                      input.sharedBuffer, sessionId, &output.flags,
                                      callingPid, input.clientInfo.clientTid, clientUid,
                                      &lStatus, portId);
...
        }
...
    // return handle to client
    trackHandle = new TrackHandle(track);
...
    return trackHandle;
}

Next the track has to be added to the PlaybackThread's vector of active tracks. This happens via start(), which eventually reaches the Bn-side start() implementation; here is what the Bn side inherits:
[Figure 4: inheritance of the Bn-side TrackHandle]

status_t AudioFlinger::TrackHandle::start() {
    return mTrack->start();--------mTrack is a PlaybackThread::Track
}
status_t AudioFlinger::PlaybackThread::Track::start(AudioSystem::sync_event_t event __unused,
                                                    audio_session_t triggerSession __unused)
{
	...
    if (thread != 0) {
		...
        PlaybackThread *playbackThread = (PlaybackThread *)thread.get();
		...
        status = playbackThread->addTrack_l(this);
		...
    } else {
        status = BAD_VALUE;
    }
...
}

addTrack_l on the PlaybackThread then puts the track into the thread:

status_t AudioFlinger::PlaybackThread::addTrack_l(const sp<Track>& track)
{
    status_t status = ALREADY_EXISTS;

    if (mActiveTracks.indexOf(track) < 0) {
		...
        mActiveTracks.add(track);---------officially added to the thread
		...
    }
	...
}
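
After the add, the playback thread still has to be woken up so threadLoop() leaves its mWaitWorkCV.wait(). In AOSP Q this goes through onAddNewTrack_l(), which signals the thread (quoted from memory, so treat it as a sketch):

void AudioFlinger::PlaybackThread::onAddNewTrack_l()
{
    ALOGV("signal playback thread");
    broadcast_l();   // wakes threadLoop() so it starts mixing the new track
}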

3.2 AudioRecord

For AudioRecord, let's look directly at the start implementation:

status_t AudioFlinger::RecordThread::start(RecordThread::RecordTrack* recordTrack,
                                           AudioSystem::sync_event_t event,
                                           audio_session_t triggerSession)
{
	...

    {
		...
        mActiveTracks.add(recordTrack);
		...
    }
}
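
For symmetry, a minimal native capture sketch (illustrative; the package name and parameter values are assumptions, not from the article):

sp<AudioRecord> record = new AudioRecord(
        AUDIO_SOURCE_MIC,                  // input source (assumed)
        44100,                             // sample rate (assumed)
        AUDIO_FORMAT_PCM_16_BIT,           // format (assumed)
        AUDIO_CHANNEL_IN_MONO,             // channel mask (assumed)
        String16("com.example.app"));      // opPackageName (hypothetical)
if (record->initCheck() == NO_ERROR) {
    record->start();                       // ends up in RecordThread::start()
    // record->read(buffer, size);         // pull captured PCM
    record->stop();
}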
