Android Multimedia: The Audio Effect Chain

Android ships a default audio-effect framework built around effect chains. Each effect can be thought of as a digital filter that processes PCM audio data.

Each effect's processing logic is compiled into its own .so library, placed under system/lib/soundfx.
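
Each of these libraries implements the interface declared in AOSP's hardware/audio_effect.h. Abridged here from memory (worth verifying against your tree), the key piece is a process() callback that filters one PCM buffer per call; the audio_buffer_t union also explains the inBuffer->s16[i] / inBuffer->s32[i] accesses in the snippets below:

typedef struct effect_interface_s **effect_handle_t;

// One buffer of PCM: frameCount counts frames, the union views the samples.
typedef struct audio_buffer_s {
    size_t frameCount;
    union {
        void    *raw;
        int32_t *s32;  // 32-bit samples
        int16_t *s16;  // 16-bit samples
        uint8_t *u8;   // 8-bit samples
    };
} audio_buffer_t;

struct effect_interface_s {
    // filter one buffer of PCM from inBuffer into outBuffer
    int32_t (*process)(effect_handle_t self,
                       audio_buffer_t *inBuffer,
                       audio_buffer_t *outBuffer);
    // command(), get_descriptor(), process_reverse() omitted
};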


DownMixer

This is a commonly used effect: it downmixes multi-channel audio to stereo.

DownMixer has two strategies: DOWNMIX_TYPE_STRIP simply drops every channel other than left and right, while DOWNMIX_TYPE_FOLD folds the remaining channels into the left and right channels. The multi-channel layouts it handles are: CHANNEL_MASK_QUAD_BACK, CHANNEL_MASK_QUAD_SIDE, CHANNEL_MASK_SURROUND, CHANNEL_MASK_5POINT1_BACK, CHANNEL_MASK_5POINT1_SIDE, and CHANNEL_MASK_7POINT1_SIDE_BACK.
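
DOWNMIX_TYPE_STRIP needs no arithmetic at all: only the first two samples of each frame survive. A minimal sketch of my own (assuming a quad frame laid out as FL, FR, RL, RR):

// STRIP for quad input: copy FL/FR, drop RL/RR (illustrative sketch)
while (numFrames--) {
    pDst[0] = pSrc[0]; // FL
    pDst[1] = pSrc[1]; // FR
    pSrc += 4;         // skip RL, RR
    pDst += 2;
}

All the snippets that follow implement the FOLD strategy.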

QUAD stands for quadraphonic: four-channel surround sound. The fold simply averages the front and rear pairs, with the >> 1 guarding against overflow:

// FL + RL
pDst[0] = clamp16((pSrc[0] + pSrc[2]) >> 1);
// FR + RR
pDst[1] = clamp16((pSrc[1] + pSrc[3]) >> 1);
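
clamp16 is not shown in the snippet; it is the standard AOSP saturation helper, using the same sign-bit trick as the clamp32 shown in the Visualizer section below:

static inline int16_t clamp16(int32_t sample)
{
    // if any bit above the 16-bit range differs from the sign bit,
    // the value overflowed: saturate to 0x7FFF or 0x8000
    if ((sample >> 15) ^ (sample >> 31))
        sample = 0x7FFF ^ (sample >> 31);
    return sample;
}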

SURROUND here means four channels: front-left, front-right, front-center, and back-center. The algorithm is:

centerPlusRearContrib = (pSrc[2] * MINUS_3_DB_IN_Q19_12)
                      + (pSrc[3] * MINUS_3_DB_IN_Q19_12);
// FL + centerPlusRearContrib
lt = (pSrc[0] << 12) + centerPlusRearContrib;
// FR + centerPlusRearContrib
rt = (pSrc[1] << 12) + centerPlusRearContrib;
// store in destination
pDst[0] = clamp16(lt >> 13); // differs from when accumulate is true above
pDst[1] = clamp16(rt >> 13); // differs from when accumulate is true above
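
MINUS_3_DB_IN_Q19_12 is the −3 dB attenuation factor in Q19.12 fixed point; in the AOSP source it is defined as:

// -3 dB in Q19.12: 10^(-3/20) * 2^12 ≈ 0.7071 * 4096 ≈ 2896
#define MINUS_3_DB_IN_Q19_12 2896

Multiplying a 16-bit sample by it produces a Q19.12 value, which is why the plain FL/FR samples are shifted left by 12 before the sum. The final >> 13 is >> 12 to drop the fractional bits plus >> 1 to halve the mix, keeping it inside 16-bit range.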

For 5.1-channel audio:

centerPlusLfeContrib = (pSrc[2] * MINUS_3_DB_IN_Q19_12)
                     + (pSrc[3] * MINUS_3_DB_IN_Q19_12);
// FL + centerPlusLfeContrib + RL
lt = (pSrc[0] << 12) + centerPlusLfeContrib + (pSrc[4] << 12);
// FR + centerPlusLfeContrib + RR
rt = (pSrc[1] << 12) + centerPlusLfeContrib + (pSrc[5] << 12);
// store in destination
pDst[0] = clamp16(lt >> 13); // differs from when accumulate is true above
pDst[1] = clamp16(rt >> 13); // differs from when accumulate is true above
pSrc += 6;
pDst += 2;
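
The "accumulate is true" comments point at the other branch of the same function: when the effect must mix into an output buffer that already holds audio, the shifted result is added to pDst before clamping, presumably along these lines (a sketch, not the verbatim AOSP code):

pDst[0] = clamp16(pDst[0] + (lt >> 13));
pDst[1] = clamp16(pDst[1] + (rt >> 13));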


Visualizer

static inline int32_t clamp32(int64_t sample)
{
    // check overflow for both positive and negative values:
    // all bits above the 32-bit range must equal the sign bit
    if ((sample >> 31) ^ (sample >> 63))
        sample = 0x7FFFFFFF ^ (sample >> 63);
    return sample;
}

The function above saturates a 64-bit sum of 32-bit samples back into int32_t range: if any bit above bit 31 differs from the sign bit, the addition overflowed, and the value is replaced with INT32_MAX (positive overflow) or INT32_MIN (negative overflow).

for (int i = 0; i < len; i++) {
#ifdef MTK_HIGH_RESOLUTION_AUDIO_SUPPORT
    int32_t smp = inBuffer->s32[i] >> 16;
#else
    int32_t smp = inBuffer->s16[i];
#endif
    if (smp < 0) smp = -smp - 1; // take care to keep the max negative in range
    int32_t clz = __builtin_clz(smp);
    if (shift > clz) shift = clz;
}

What this loop does is find the loudest sample in the buffer: the smallest leading-zero count across all samples marks the highest set bit of the peak, and that count is kept in shift so the visualizer can normalize the captured waveform for display. "Visualizing" audio means rendering it graphically.
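
Put together as a standalone sketch (my own illustration of the technique, not the AOSP code), peak-normalized capture of 16-bit PCM into an 8-bit display buffer looks like this:

#include <stdint.h>

// Hypothetical sketch: find the peak via leading-zero counts, then
// scale 16-bit PCM down to unsigned 8-bit for a waveform display.
static void capture_normalized(const int16_t *in, uint8_t *out, int len)
{
    int32_t shift = 32;
    for (int i = 0; i < len; i++) {
        int32_t smp = in[i];
        if (smp < 0) smp = -smp - 1;  // keep the max negative in range
        if (smp == 0) continue;       // __builtin_clz(0) is undefined
        int32_t clz = __builtin_clz(smp);
        if (shift > clz) shift = clz; // smallest clz == loudest sample
    }
    // A full-scale sample (0x7FFF) has 17 leading zeros as a 32-bit int,
    // so 25 - shift maps the peak onto the 8-bit range.
    int32_t scale = 25 - shift;
    if (scale < 0) scale = 0;         // near-silence: nothing to normalize
    for (int i = 0; i < len; i++)
        out[i] = (uint8_t)(in[i] >> scale) ^ 0x80; // center at 0x80
}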


Bundle

#define CHECK_ARG(cond) {                     \
    if (!(cond)) {                            \
        ALOGV("Invalid argument: " #cond);    \
        return -EINVAL;                       \
    }                                         \
}
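
Every parameter handler in the Bundle code opens with a row of these guards. A hypothetical usage (illustrative names, not a verbatim AOSP function):

// EffectContext stands in for the effect's real state struct (hypothetical)
static int Effect_setEnabled(EffectContext *pContext, const bool *pEnabled)
{
    CHECK_ARG(pContext != NULL);
    CHECK_ARG(pEnabled != NULL);
    pContext->enabled = *pEnabled;
    return 0;
}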

The name comes from the fact that this library bundles several effects into one: the LVM bundle groups BassBoost, Virtualizer, Equalizer, and Volume together, hence Bundle.


Reverberation

Reverberation is "the persistence of sound after a sound is produced."

The core of this algorithm is the ReverbContext struct; the actual processing happens in LVREV_Process.

The key functions are LVREV_Process and ReverbBlock. Mathematically, the effect is built from high-pass and low-pass filters. For the theory, see:

http://en.wikipedia.org/wiki/Reverberation
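
The LVREV code itself is dense, but the classic building block behind this kind of reverb is a feedback comb filter with a low-pass inside the loop (so high frequencies decay faster), plus all-pass stages for diffusion. A generic Schroeder-style sketch, not the LVREV implementation:

#include <stddef.h>

#define COMB_LEN 1557 // delay in samples, ~35 ms at 44.1 kHz

typedef struct {
    float buf[COMB_LEN]; // delay line, zero-initialize before use
    size_t pos;
    float lp_state;      // one-pole low-pass state in the feedback path
} Comb;

static float comb_process(Comb *c, float in, float feedback, float damp)
{
    float out = c->buf[c->pos];
    // low-pass the delayed signal so the tail loses highs as it decays
    c->lp_state = out * (1.0f - damp) + c->lp_state * damp;
    c->buf[c->pos] = in + c->lp_state * feedback;
    if (++c->pos == COMB_LEN) c->pos = 0;
    return out;
}

A real reverberator runs several such combs in parallel with mutually prime delay lengths and feeds the sum through series all-pass filters; feedback sets the decay time, damp how dark the tail sounds.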

