Allwinner audio control flow:
The HAL-layer .so library is built from device/softwinner/common/hardware/audio. In that directory, audio_hw.c implements the standard Android HAL interface upward for AudioFlinger to call, and downward it controls the low-level driver through the standard tinyalsa mixer interfaces, implementing volume control, audio-route switching, and so on. The tinyalsa sources live in external/tinyalsa and build into both command-line executables (tinymix and friends) and the libtinyalsa.so library. The executables let you control the audio hardware directly from a shell, while the library provides the functions that audio_hw.c links against, so the same control path is reached programmatically through audio_hw.c.
Let's start from the interface the upper layers commonly use; that makes the flow easier to follow (if you read only the low level first, you still won't know how it is actually used). For example, the application layer often calls AudioSystem.setParameters("routing=8192"); which selects which route the current audio output goes to. Let's trace how this single call controls the hardware output all the way from the top.
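Before following the call chain, it helps to see what a value like 8192 means. In the legacy audio device enums, each output device is a one-hot bit flag, so several devices can be OR'd into one routing word; 8192 is exactly one bit (1 << 13). A minimal sketch of that bit arithmetic (which device bit 13 names varies by Android release, so no device name is claimed here):

```cpp
#include <cassert>
#include <cstdint>

// Value carried by setParameters("routing=8192"): a device bitmask.
constexpr uint32_t kRouting = 8192;

// A mask names exactly one device iff exactly one bit is set
// (power-of-two test: clearing the lowest bit leaves zero).
constexpr bool isSingleDevice(uint32_t mask) {
    return mask != 0 && (mask & (mask - 1)) == 0;
}
```

Checking `isSingleDevice(kRouting)` confirms 8192 selects a single output device rather than a combination.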
Via AIDL the call reaches setParameters in frameworks/base/media/java/android/media/AudioSystem.java:
public static native int setParameters(String keyValuePairs);
This is a native method, implemented through JNI in core/jni/android_media_AudioSystem.cpp:
79 static int
80 android_media_AudioSystem_setParameters(JNIEnv *env, jobject thiz, jstring keyValuePairs)
81 {
82     const jchar* c_keyValuePairs = env->GetStringCritical(keyValuePairs, 0);
83     String8 c_keyValuePairs8;
84     if (keyValuePairs) {
85         c_keyValuePairs8 = String8(c_keyValuePairs, env->GetStringLength(keyValuePairs));
86         env->ReleaseStringCritical(keyValuePairs, c_keyValuePairs);
87     }
88     int status = check_AudioSystem_Command(AudioSystem::setParameters(0, c_keyValuePairs8));
89     return status;
90 }
Line 88 calls the method in media/libmedia/AudioSystem.cpp:
167 status_t AudioSystem::setParameters(audio_io_handle_t ioHandle, const String8& keyValuePairs) {
168     const sp<IAudioFlinger>& af = AudioSystem::get_audio_flinger();
169     if (af == 0) return PERMISSION_DENIED;
170     return af->setParameters(ioHandle, keyValuePairs);
171 }
This then crosses Binder into AudioFlinger's setParameters() in AudioFlinger.cpp (around line 710); the relevant part:
747     if (ioHandle == 0) {
748         AutoMutex lock(mHardwareLock);
749         mHardwareStatus = AUDIO_SET_PARAMETER;
750         status_t final_result = NO_ERROR;
751         for (size_t i = 0; i < mAudioHwDevs.size(); i++) {
752             audio_hw_device_t *dev = mAudioHwDevs[i];
753             result = dev->set_parameters(dev, keyValuePairs.string());
754             final_result = result ?: final_result;
755         }
Line 753 is where the HAL layer's set_parameters is finally invoked, so we step into device/softwinner/common/hardware/audio, where audio_hw.c wires it up:
adev->hw_device.set_parameters = adev_set_parameters;
----->
From here, str_parms_create_str() will put the key/value pairs into a hash table, and str_parms_get_str() reads them back out, letting the HAL decide which output device is currently selected.
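The parameter string follows the usual "key1=value1;key2=value2" convention. A minimal sketch of what str_parms_create_str() does with it, using std::map in place of the real hash table (the helper name is illustrative, not AOSP code):

```cpp
#include <map>
#include <string>

// Split "key1=value1;key2=value2" into a key/value map, mirroring
// what str_parms_create_str() builds and str_parms_get_str() queries.
std::map<std::string, std::string> parseKeyValuePairs(const std::string& kvPairs) {
    std::map<std::string, std::string> result;
    size_t start = 0;
    while (start < kvPairs.size()) {
        size_t end = kvPairs.find(';', start);
        if (end == std::string::npos) end = kvPairs.size();
        std::string pair = kvPairs.substr(start, end - start);
        size_t eq = pair.find('=');
        if (eq != std::string::npos)
            result[pair.substr(0, eq)] = pair.substr(eq + 1);
        start = end + 1;
    }
    return result;
}
```

Given "routing=8192", the HAL can then look up the "routing" key and switch the mixer controls accordingly.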
The HAL audio library is normally built as two libraries, audio.primary.default.so and audio.primary.exDroid.so, where exDroid is $(TARGET_BOARD_PLATFORM), i.e. the name of your target platform. So which of the two does our Android system actually load? That is decided by hw_get_module_by_class() in hardware/libhardware/hardware.c, which walks the array below and only falls back to default if no variant matches:
45 static const char *variant_keys[] = {
46     "ro.hardware", /* This goes first so that it can pick up a different
47                       file on the emulator. */
48     "ro.product.board",
49     "ro.board.platform",
50     "ro.arch"
51 };
The ro.board.platform property is set to $(TARGET_BOARD_PLATFORM), so the platform-specific library, audio.primary.exDroid.so, is the one that gets loaded.
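The search order above can be sketched as follows, with system properties stubbed out as a map (the real code calls property_get() and also access()-checks that each candidate .so exists on disk, which this sketch omits):

```cpp
#include <map>
#include <string>
#include <vector>

// Sketch of hw_get_module_by_class()'s candidate search for the
// "audio.primary" module: first non-empty variant key wins, then default.
std::string pickAudioModule(const std::map<std::string, std::string>& props) {
    const std::vector<std::string> variantKeys = {
        "ro.hardware", "ro.product.board", "ro.board.platform", "ro.arch"};
    for (const auto& key : variantKeys) {
        auto it = props.find(key);
        if (it != props.end() && !it->second.empty())
            return "audio.primary." + it->second + ".so";  // variant match
    }
    return "audio.primary.default.so";  // nothing matched: fall back
}
```

With ro.board.platform set to exDroid, this yields audio.primary.exDroid.so, matching the behavior described above.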
Now let's look at some commonly used functions in AudioFlinger.cpp. When playback starts, a playback thread is created first by calling:
6753 audio_io_handle_t AudioFlinger::openOutput(audio_module_handle_t module,
6754                                            audio_devices_t *pDevices,
6755                                            uint32_t *pSamplingRate,
6756                                            audio_format_t *pFormat,
6757                                            audio_channel_mask_t *pChannelMask,
6758                                            uint32_t *pLatencyMs,
6759                                            audio_output_flags_t flags)
6760 {
....................................................................................
6785     outHwDev = findSuitableHwDev_l(module, *pDevices);
6786     if (outHwDev == NULL)
6787         return 0;
6788
6789     audio_io_handle_t id = nextUniqueId();
6790
6791     mHardwareStatus = AUDIO_HW_OUTPUT_OPEN;
6792
6793     status = outHwDev->open_output_stream(outHwDev,
6794                                           id,
6795                                           *pDevices,
6796                                           (audio_output_flags_t)flags,
6797                                           &config,
6798                                           &outStream);
6799
6800     mHardwareStatus = AUDIO_HW_IDLE;
..............................................................................................
6808     if (status == NO_ERROR && outStream != NULL) {
6809         AudioStreamOut *output = new AudioStreamOut(outHwDev, outStream);
6810
6811         if ((flags & AUDIO_OUTPUT_FLAG_DIRECT) ||
6812             (config.format != AUDIO_FORMAT_PCM_16_BIT) ||
6813             (config.channel_mask != AUDIO_CHANNEL_OUT_STEREO)) {
6814             thread = new DirectOutputThread(this, output, id, *pDevices);
6815             ALOGV("openOutput() created direct output: ID %d thread %p", id, thread);
6816         } else {
6817             thread = new MixerThread(this, output, id, *pDevices);
6818             ALOGV("openOutput() created mixer output: ID %d thread %p", id, thread);
6819         }
6820         mPlaybackThreads.add(id, thread);
This mainly opens the hardware device and sets some hardware defaults such as volume, then creates either a DirectOutputThread or a MixerThread depending on the flags. Look at their definitions in AudioFlinger.h:
class DirectOutputThread : public PlaybackThread {.................}
and PlaybackThread's own inheritance:
class PlaybackThread : public ThreadBase {...................}
So both are subclasses of PlaybackThread. Then, at line 6820, the new thread is added to mPlaybackThreads, a keyed vector that stores the thread under its id and returns that id to the caller. Later playback calls pass this id (the audio_io_handle_t) back in, and the thread is simply looked up in the vector.
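This id-to-thread bookkeeping can be sketched as a small registry; the class and member names here are illustrative stand-ins for nextUniqueId()/mPlaybackThreads/checkPlaybackThread_l(), not AOSP code:

```cpp
#include <map>
#include <memory>

// Stand-in for a playback thread object.
struct PlaybackThreadStub { int id; };

class PlaybackRegistry {
    std::map<int, std::shared_ptr<PlaybackThreadStub>> mPlaybackThreads;
    int mNextUniqueId = 1;
public:
    // openOutput(): allocate a fresh handle, store the thread under it,
    // and hand the handle back to the caller as the audio_io_handle_t.
    int openOutput() {
        int id = mNextUniqueId++;
        mPlaybackThreads[id] = std::make_shared<PlaybackThreadStub>(PlaybackThreadStub{id});
        return id;
    }
    // checkPlaybackThread_l(): look the thread up again by its handle.
    std::shared_ptr<PlaybackThreadStub> checkPlaybackThread(int id) const {
        auto it = mPlaybackThreads.find(id);
        if (it == mPlaybackThreads.end()) return nullptr;
        return it->second;
    }
};
```

An unknown handle yields null, which is exactly the "unknown output thread" error path we will meet later in createTrack().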
When does this thread actually start running? It is started as soon as it is created, as the following function shows:
1652 void AudioFlinger::PlaybackThread::onFirstRef()
1653 {
1654     run(mName, ANDROID_PRIORITY_URGENT_AUDIO);
1655 }
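Why start the thread in onFirstRef() rather than the constructor? Under RefBase-style reference counting, the object only becomes safely shareable once the first strong reference to it exists, and RefBase invokes onFirstRef() at exactly that moment. A minimal analogue using shared_ptr (illustrative only, not AOSP's RefBase):

```cpp
#include <memory>

struct PlaybackThreadLike {
    bool running = false;
    void onFirstRef() { running = true; }  // stands in for run(mName, ...)
};

// Mimics sp<T> construction: the object is created, the first strong
// reference now exists, and onFirstRef() fires (RefBase does this for you).
std::shared_ptr<PlaybackThreadLike> makeThread() {
    auto t = std::make_shared<PlaybackThreadLike>();
    t->onFirstRef();
    return t;
}
```

So by the time openOutput() returns the handle, the playback thread is already spinning.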
The function above covers playback; recording follows a similar flow, calling openInput instead:
6970 audio_io_handle_t AudioFlinger::openInput(audio_module_handle_t module,
6971                                           audio_devices_t *pDevices,
6972                                           uint32_t *pSamplingRate,
6973                                           audio_format_t *pFormat,
6974                                           uint32_t *pChannelMask)
6975 {
...................................................................
6995     inHwDev = findSuitableHwDev_l(module, *pDevices);
6996     if (inHwDev == NULL)
6997         return 0;
6998
6999     audio_io_handle_t id = nextUniqueId();
7000
7001     status = inHwDev->open_input_stream(inHwDev, id, *pDevices, &config,
7002                                         &inStream);
..................................................................................
7022     if (status == NO_ERROR && inStream != NULL) {
7023         AudioStreamIn *input = new AudioStreamIn(inHwDev, inStream);
7024
7025         // Start record thread
7026         // RecordThread requires both input and output device indication to forward to audio
7027         // pre processing modules
7028         uint32_t device = (*pDevices) | primaryOutputDevice_l();
7029         thread = new RecordThread(this,
7030                                   input,
7031                                   reqSamplingRate,
7032                                   reqChannels,
7033                                   id,
7034                                   device);
7035         mRecordThreads.add(id, thread);
7036         ALOGV("openInput() created record thread: ID %d thread %p", id, thread);
7037         if (pSamplingRate != NULL) *pSamplingRate = reqSamplingRate;
7038         if (pFormat != NULL) *pFormat = config.format;
7039         if (pChannelMask != NULL) *pChannelMask = reqChannels;
7040
7041         input->stream->common.standby(&input->stream->common);
7042
7043         // notify client processes of the new input creation
7044         thread->audioConfigChanged_l(AudioSystem::INPUT_OPENED);
7045         return id;
7046     }
The RecordThread created at line 7029 inherits as follows:
class RecordThread : public ThreadBase, public AudioBufferProvider
Next, to actually play sound, createTrack is called:
438 sp<IAudioTrack> AudioFlinger::createTrack(
439         pid_t pid,
440         audio_stream_type_t streamType,
441         uint32_t sampleRate,
442         audio_format_t format,
443         uint32_t channelMask,
444         int frameCount,
445         IAudioFlinger::track_flags_t flags,
446         const sp<IMemory>& sharedBuffer,
447         audio_io_handle_t output,
448         pid_t tid,
449         int *sessionId,
450         status_t *status)
451 {
466     {
467         Mutex::Autolock _l(mLock);
468         PlaybackThread *thread = checkPlaybackThread_l(output);
469         PlaybackThread *effectThread = NULL;
470         if (thread == NULL) {
471             ALOGE("unknown output thread");
472             lStatus = BAD_VALUE;
473             goto Exit;
474         }
475
476         client = registerPid_l(pid);
502         track = thread->createTrack_l(client, streamType, sampleRate, format,
503                 channelMask, frameCount, sharedBuffer, lSessionId, flags, tid, &lStatus);
504
505         // move effect chain to this output thread if an effect on same session was waiting
506         // for a track to be created
507         if (lStatus == NO_ERROR && effectThread != NULL) {
508             Mutex::Autolock _dl(thread->mLock);
509             Mutex::Autolock _sl(effectThread->mLock);
510             moveEffectChain_l(lSessionId, effectThread, thread, true);
511         }
512
513         // Look for sync events awaiting for a session to be used.
514         for (int i = 0; i < (int)mPendingSyncEvents.size(); i++) {
515             if (mPendingSyncEvents[i]->triggerSession() == lSessionId) {
516                 if (thread->isValidSyncEvent(mPendingSyncEvents[i])) {
517                     if (lStatus == NO_ERROR) {
518                         track->setSyncEvent(mPendingSyncEvents[i]);
519                     } else {
520                         mPendingSyncEvents[i]->cancel();
521                     }
522                     mPendingSyncEvents.removeAt(i);
523                     i--;
524                 }
525             }
526         }
528     if (lStatus == NO_ERROR) {
529         trackHandle = new TrackHandle(track);
530     } else {
531         // remove local strong reference to Client before deleting the Track so that the Client
532         // destructor is called by the TrackBase destructor with mLock held
533         client.clear();
534         track.clear();
535     }
536
537 Exit:
538     if (status != NULL) {
539         *status = lStatus;
540     }
541     return trackHandle;
The function called at line 476:
422 sp<AudioFlinger::Client> AudioFlinger::registerPid_l(pid_t pid)
423 {
424     // If pid is already in the mClients wp<> map, then use that entry
425     // (for which promote() is always != 0), otherwise create a new entry and Client.
426     sp<Client> client = mClients.valueFor(pid).promote();
427     if (client == 0) {
428         client = new Client(this, pid);
429         mClients.add(pid, client);
430     }
431
432     return client;
433 }
On the first call, client is null, so we reach line 428:
5685 AudioFlinger::Client::Client(const sp<AudioFlinger>& audioFlinger, pid_t pid)
5686     :   RefBase(),
5687         mAudioFlinger(audioFlinger),
5688         // FIXME should be a "k" constant not hard-coded, in .h or ro. property, see 4 lines below
5689         mMemoryDealer(new MemoryDealer(1024*1024, "AudioFlinger::Client")),
5690         mPid(pid),
5691         mTimedTrackCount(0)
5692 {
5693     // 1 MB of address space is good for 32 tracks, 8 buffers each, 4 KB/buffer
5694 }
This allocates a chunk of shared memory: 32 tracks × 8 buffers × 4 KB per buffer = 1 MB, matching the comment. Moving on.
The output parameter passed in here is the id that was added to the keyed vector earlier. checkPlaybackThread_l at line 468 retrieves that thread, and then line 502 creates a PlaybackThread::Track. From this you can see that one thread can own multiple tracks, each corresponding to a different audio stream; for example, within the same process we can watch a movie and listen to music at the same time, with two tracks playing out simultaneously. Let's step into that function:
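Two tracks playing through one MixerThread ultimately boils down to summing PCM frames with saturation. A minimal 16-bit mix of two track buffers, illustrative of what the mixer does per sample (not AudioMixer's actual code, which also handles volume, resampling, and channel layout):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Mix two 16-bit PCM buffers sample by sample, saturating at the
// int16_t range; a missing sample from the shorter track counts as 0.
std::vector<int16_t> mixTracks(const std::vector<int16_t>& a,
                               const std::vector<int16_t>& b) {
    std::vector<int16_t> out(std::max(a.size(), b.size()), 0);
    for (size_t i = 0; i < out.size(); ++i) {
        int32_t sum = (i < a.size() ? a[i] : 0) + (i < b.size() ? b[i] : 0);
        out[i] = static_cast<int16_t>(std::clamp<int32_t>(sum, -32768, 32767));
    }
    return out;
}
```

The widening to int32_t before clamping is the key point: adding two full-scale int16_t samples overflows 16 bits, so the sum must be computed in a wider type and then saturated.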
1658 sp<AudioFlinger::PlaybackThread::Track> AudioFlinger::PlaybackThread::createTrack_l(
1659         const sp<AudioFlinger::Client>& client,
1660         audio_stream_type_t streamType,
1661         uint32_t sampleRate,
1662         audio_format_t format,
1663         uint32_t channelMask,
1664         int frameCount,
1665         const sp<IMemory>& sharedBuffer,
1666         int sessionId,
1667         IAudioFlinger::track_flags_t flags,
1668         pid_t tid,
1669         status_t *status)
...................................................................................
1759     lStatus = initCheck();
1760     if (lStatus != NO_ERROR) {
1761         ALOGE("Audio driver not initialized.");
1762         goto Exit;
1763     }
1764
1765     { // scope for mLock
1766         Mutex::Autolock _l(mLock);
1767
1768         // all tracks in same audio session must share the same routing strategy otherwise
1769         // conflicts will happen when tracks are moved from one output to another by audio policy
1770         // manager
1771         uint32_t strategy = AudioSystem::getStrategyForStream(streamType);
1772         for (size_t i = 0; i < mTracks.size(); ++i) {
1773             sp