AudioUnit Audio Effects

Overview

Building on the earlier article on AudioUnit mixing, this article runs the mixed audio through an AudioUnit of type kAudioUnitType_Effect to apply effects.

Reading the audio files and building the AUGraph are similar to the previous article, with a few small differences.

1. Load the two local audio files into memory

Open each local file as an ExtAudioFileRef:

// create the URLs we'll use for source A and B
NSString *sourceA = [[NSBundle mainBundle] pathForResource:@"Track1" ofType:@"mp4"]; // audio-only file
NSString *sourceB = [[NSBundle mainBundle] pathForResource:@"Track2" ofType:@"mp4"];
sourceURL[0] = CFURLCreateWithFileSystemPath(kCFAllocatorDefault, (CFStringRef)sourceA, kCFURLPOSIXPathStyle, false);
sourceURL[1] = CFURLCreateWithFileSystemPath(kCFAllocatorDefault, (CFStringRef)sourceB, kCFURLPOSIXPathStyle, false);
ExtAudioFileRef xafref = 0;
OSStatus result = ExtAudioFileOpenURL(sourceURL[i], &xafref); // open the URL into an ExtAudioFileRef
 

Then read audio data, converted to the client format we request, from the ExtAudioFileRef into mSoundBuffer.data; this buffer is what we copy from each time the render callback pulls audio.
 // load up audio data from the demo files into mSoundBuffer.data used in the render proc
- (void)loadFiles
{
    mUserData.frameNum = 0;
    mUserData.maxNumFrames = 0;
        
    for (int i = 0; i < NUMFILES && i < MAXBUFS; i++)  {
        printf("loadFiles, %d\n", i);
        
        ExtAudioFileRef xafref = 0;
        
        // open one of the two source files
        OSStatus result = ExtAudioFileOpenURL(sourceURL[i], &xafref);
        if (result || 0 == xafref) { printf("ExtAudioFileOpenURL result %ld %08X %4.4s\n", result, (unsigned int)result, (char*)&result); return; }
        
        // get the file data format, this represents the file's actual data format
        // for informational purposes only -- the client format set on ExtAudioFile is what we really want back
        CAStreamBasicDescription fileFormat;
        UInt32 propSize = sizeof(fileFormat);
        result = ExtAudioFileGetProperty(xafref, kExtAudioFileProperty_FileDataFormat, &propSize, &fileFormat);
        if (result) { printf("ExtAudioFileGetProperty kExtAudioFileProperty_FileDataFormat result %ld %08X %4.4s\n", result, (unsigned int)result, (char*)&result); return; }
        
        printf("file %d, native file format\n", i);
        fileFormat.Print();
        
        // set the client format to be what we want back
        // this is the same format audio we're giving to the the mixer input
        result = ExtAudioFileSetProperty(xafref, kExtAudioFileProperty_ClientDataFormat, sizeof(mClientFormat), &mClientFormat);
        if (result) { printf("ExtAudioFileSetProperty kExtAudioFileProperty_ClientDataFormat %ld %08X %4.4s\n", result, (unsigned int)result, (char*)&result); return; }
        
        // get the file's length in sample frames
        UInt64 numFrames = 0;
        propSize = sizeof(numFrames);
        result = ExtAudioFileGetProperty(xafref, kExtAudioFileProperty_FileLengthFrames, &propSize, &numFrames);
        if (result || numFrames == 0) { printf("ExtAudioFileGetProperty kExtAudioFileProperty_FileLengthFrames result %ld %08X %4.4s\n", result, (unsigned int)result, (char*)&result); return; }
        // keep track of the largest number of source frames
        if (numFrames > mUserData.maxNumFrames) mUserData.maxNumFrames = numFrames;
        
        /**
         This format must match what we set on the mixer's input scope:
         AudioUnitSetProperty(mMixer, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, i, &mClientFormat, sizeof(mClientFormat));
         */
        // set up our buffer: record the frame count and format, allocate the data, then read the file's audio into it
        mUserData.soundBuffer[i].numFrames = numFrames;
        mUserData.soundBuffer[i].asbd = mClientFormat;

        UInt32 samples = numFrames * mUserData.soundBuffer[i].asbd.mChannelsPerFrame;
        // allocate the data buffer: frames * channels per frame * sizeof(AudioSampleType)
        mUserData.soundBuffer[i].data = (AudioSampleType *)calloc(samples, sizeof(AudioSampleType));
        
       
        // 使用AudioBufferList读取文件里的数据
        // set up a AudioBufferList to read data into
        AudioBufferList bufList;
        bufList.mNumberBuffers = 1; // one buffer, since the client format is interleaved
        bufList.mBuffers[0].mNumberChannels = mUserData.soundBuffer[i].asbd.mChannelsPerFrame; // channel count of the source audio (2 here)
        bufList.mBuffers[0].mData = mUserData.soundBuffer[i].data;
        bufList.mBuffers[0].mDataByteSize = samples * sizeof(AudioSampleType); // frames * channels per frame * sample size

        // perform a synchronous sequential read of the audio data out of the file into our allocated data buffer
        UInt32 numPackets = numFrames;
        // read numPackets frames from the file into bufList
        result = ExtAudioFileRead(xafref, &numPackets, &bufList);
        if (result) {
            printf("ExtAudioFileRead result %ld %08X %4.4s\n", result, (unsigned int)result, (char*)&result); 
            free(mUserData.soundBuffer[i].data);
            mUserData.soundBuffer[i].data = 0;
            return;
        }
        
        // close the file and dispose the ExtAudioFileRef
        ExtAudioFileDispose(xafref);
    }
}

2. Build the AUGraph and feed the two files' audio into the Mixer AudioUnit

(Diagram: render callbacks → MultiChannelMixer → iPodEQ → Remote I/O)

A few points first. The Mixer unit has multiple elements (buses) on its input scope but only one element on its output scope. This is slightly different from the Remote I/O unit, which has two elements, each with its own input scope and output scope. Building the AUGraph means establishing these connections (in code, via AUGraphConnectNodeInput).

When an effect node is present, the output of element 0 on the Mixer unit's output scope feeds the Effect unit's input, and the Effect unit's output feeds element 0 of the Remote I/O unit's input.

2.1 Build the AUGraph

1. Create the Mixer, iPodEQ effect, and Remote I/O AudioUnit nodes

AUNode outputNode;
    AUNode eqNode;
    AUNode mixerNode;
    
    printf("create client ASBD\n");
    
    // client format audio goes into the mixer
    // set the client format: interleaved, 2 channels
    mClientFormat.SetCanonical(2, true);                        
    mClientFormat.mSampleRate = kGraphSampleRate;
    mClientFormat.Print();
    
    printf("create output ASBD\n");
    
    // output format
    // set the output format: non-interleaved, 2 channels
    mOutputFormat.SetAUCanonical(2, false);                     
    mOutputFormat.mSampleRate = kGraphSampleRate;
    mOutputFormat.Print();
    
    OSStatus result = noErr;
    
    // load up the audio data
    printf("load up audio data\n");
    [self loadFiles];
    
    printf("\nnew AUGraph\n");
    
    // create a new AUGraph
    result = NewAUGraph(&mGraph);
    if (result) { printf("NewAUGraph result %ld %08X %4.4s\n", result, (unsigned int)result, (char*)&result); return; }
    
    // create three Audio Component Descriptons for the AUs we want in the graph using the CAComponentDescription helper class
    
    // output unit
    CAComponentDescription output_desc(kAudioUnitType_Output, kAudioUnitSubType_RemoteIO, kAudioUnitManufacturer_Apple);
    
    // iPodEQ unit
    CAComponentDescription eq_desc(kAudioUnitType_Effect, kAudioUnitSubType_AUiPodEQ, kAudioUnitManufacturer_Apple);
    
    // multichannel mixer unit
    CAComponentDescription mixer_desc(kAudioUnitType_Mixer, kAudioUnitSubType_MultiChannelMixer, kAudioUnitManufacturer_Apple);
    
    printf("add nodes\n");

    // create a node in the graph that is an AudioUnit, using the supplied AudioComponentDescription to find and open that unit
    result = AUGraphAddNode(mGraph, &output_desc, &outputNode);
    if (result) { printf("AUGraphNewNode 1 result %lu %4.4s\n", result, (char*)&result); return; }
    
    result = AUGraphAddNode(mGraph, &eq_desc, &eqNode);
    if (result) { printf("AUGraphNewNode 2 result %lu %4.4s\n", result, (char*)&result); return; }

    result = AUGraphAddNode(mGraph, &mixer_desc, &mixerNode);
    if (result) { printf("AUGraphNewNode 3 result %lu %4.4s\n", result, (char*)&result); return; }
2.2 Connect the units

Here the Mixer unit's element 0 output ends up as the input to element 0 of the Remote I/O unit (routed through the EQ unit when one is present). On the Remote I/O unit, element 0 is the output side: audio rendered into its input scope is passed to its output scope and on to the speaker, while element 1 is the hardware input side (the microphone).


// open the graph; the AudioUnits are opened but not initialized (no resource allocation occurs here)
    result = AUGraphOpen(mGraph);
    if (result) { printf("AUGraphOpen result %ld %08X %4.4s\n", result, (unsigned int)result, (char*)&result); return; }
    
    // grab the audio unit instances from the nodes
    result = AUGraphNodeInfo(mGraph, mixerNode, NULL, &mMixer);
    if (result) { printf("AUGraphNodeInfo result %ld %08X %4.4s\n", result, (unsigned int)result, (char*)&result); return; }
    
    result = AUGraphNodeInfo(mGraph, eqNode, NULL, &mEQ);
    if (result) { printf("AUGraphNodeInfo result %ld %08X %4.4s\n", result, (unsigned int)result, (char*)&result); return; }

2.3 Set the audio file data as the Mixer unit's input
 // set bus count
    UInt32 numbuses = 2;
    
    printf("set input bus count %lu\n", numbuses);
    
    result = AudioUnitSetProperty(mMixer, kAudioUnitProperty_ElementCount, kAudioUnitScope_Input, 0, &numbuses, sizeof(numbuses));
    if (result) { printf("AudioUnitSetProperty result %ld %08X %4.4s\n", result, (unsigned int)result, (char*)&result); return; }

    for (UInt32 i = 0; i < numbuses; ++i) {
        // setup render callback struct
        AURenderCallbackStruct rcbs;
        rcbs.inputProc = &renderInput;
        rcbs.inputProcRefCon = &mUserData;
        
        printf("set AUGraphSetNodeInputCallback\n");
        
        // set a callback for the specified node's specified input
        result = AUGraphSetNodeInputCallback(mGraph, mixerNode, i, &rcbs);
        if (result) { printf("AUGraphSetNodeInputCallback result %ld %08X %4.4s\n", result, (unsigned int)result, (char*)&result); return; }
        
        printf("set input bus %d, client kAudioUnitProperty_StreamFormat\n", (unsigned int)i);
        
        // set element i of the mixer's input scope to mClientFormat -- the same format renderInput supplies
        // set the input stream format, this is the format of the audio for mixer input
        result = AudioUnitSetProperty(mMixer, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, i, &mClientFormat, sizeof(mClientFormat));
        if (result) { printf("AudioUnitSetProperty result %ld %08X %4.4s\n", result, (unsigned int)result, (char*)&result); return; }
    }
2.4 Set the unit formats
 
    // get the eq's factory preset list -- this is a read-only CFArray of AUPreset structures
    // host owns the returned array and should release it when no longer needed
    UInt32 size = sizeof(mEQPresetsArray);
    result = AudioUnitGetProperty(mEQ, kAudioUnitProperty_FactoryPresets, kAudioUnitScope_Global, 0, &mEQPresetsArray, &size);
    if (result) { printf("AudioUnitGetProperty result %ld %08X %4.4s\n", result, (unsigned int)result, (char*)&result); return; }
    
    /* this code can be used if you're interested in dumping out the preset list
    printf("iPodEQ Factory Preset List:\n");
    UInt8 count = CFArrayGetCount(mEQPresetsArray);
    for (int i = 0; i < count; ++i) {
        AUPreset *aPreset = (AUPreset*)CFArrayGetValueAtIndex(mEQPresetsArray, i);
        CFShow(aPreset->presetName);
    }*/
    
    printf("set output kAudioUnitProperty_StreamFormat\n");
    mOutputFormat.Print();
    
    // set the output stream format of the mixer
    result = AudioUnitSetProperty(mMixer, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 0, &mOutputFormat, sizeof(mOutputFormat));
    if (result) { printf("AudioUnitSetProperty result %ld %08X %4.4s\n", result, (unsigned int)result, (char*)&result); return; }
    
    // set the output stream format of the iPodEQ audio unit
    result = AudioUnitSetProperty(mEQ, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 0, &mOutputFormat, sizeof(mOutputFormat));
    if (result) { printf("AudioUnitSetProperty result %ld %08X %4.4s\n", result, (unsigned int)result, (char*)&result); return; }

    printf("set render notification\n");
    
    // add a render notification, this is a callback that the graph will call every time the graph renders
    // the callback will be called once before the graph’s render operation, and once after the render operation is complete
    result = AUGraphAddRenderNotify(mGraph, renderNotification, &mUserData);
    if (result) { printf("AUGraphAddRenderNotify result %ld %08X %4.4s\n", result, (unsigned int)result, (char*)&result); return; }
        
    printf("AUGraphInitialize\n");

3. Start the AUGraph

Once the AUGraph is built, it can be started. From then on, whenever the graph needs data it fires the input callback we registered earlier, and we fill in audio data there.

3.1 Start
OSStatus result = AUGraphInitialize(mGraph);
result = AUGraphStart(mGraph);

Earlier we also registered a render notification on the AUGraph; it is called once before and once after every render pass. We normally act on the post-render call, recording which frame we have rendered up to, so that the fill callback keeps advancing through the source data.

// add a render notification, this is a callback that the graph will call every time the graph renders
    // the callback will be called once before the graph’s render operation, and once after the render operation is complete
    result = AUGraphAddRenderNotify(mGraph, renderNotification, &mUserData);
3.2 Fill data in the pull callback

Because we set the output format to 2-channel interleaved (LRLRLR...), only buffer 0 of ioData needs filling; with interleaved storage, all the samples live in buffer 0.

This pull callback runs every time the graph needs more data. If it is slow, samples are not delivered in time and playback glitches -- the sound cuts in and out like firecrackers going off -- or nothing plays at all.

The looping principle is simple: keep track of how many frames have been pulled, and when the fill reaches the last frame, reset the position to 0 and pull from the beginning again. This implements looped playback.

// the input is 2-channel interleaved, so all data (LRLRLR...) is stored at buffer index 0 of ioData; buffer 1 is unused
// audio render procedure to render our client data format
// 2 ch 'lpcm' 16-bit little-endian signed integer interleaved this is mClientFormat data, see CAStreamBasicDescription SetCanonical()
static OSStatus renderInput(void *inRefCon, AudioUnitRenderActionFlags *ioActionFlags, const AudioTimeStamp *inTimeStamp, UInt32 inBusNumber, UInt32 inNumberFrames, AudioBufferList *ioData)
{
    SourceAudioBufferDataPtr userData = (SourceAudioBufferDataPtr)inRefCon;

    AudioSampleType *in = userData->soundBuffer[inBusNumber].data;
 
    AudioSampleType *out = (AudioSampleType *)ioData->mBuffers[0].mData;

    UInt32 sample = userData->frameNum * userData->soundBuffer[inBusNumber].asbd.mChannelsPerFrame;


    // make sure we don't attempt to render more data than we have available in the source buffers
    // if one buffer is larger than the other, just render silence for that bus until we loop around again
    if ((userData->frameNum + inNumberFrames) > userData->soundBuffer[inBusNumber].numFrames) { // frames rendered so far plus frames requested exceed this source's total frames
        UInt32 offset = (userData->frameNum + inNumberFrames) - userData->soundBuffer[inBusNumber].numFrames;
        if (offset < inNumberFrames) {
            // copy the last bit of source
            SilenceData(ioData);
            memcpy(out, &in[sample], ((inNumberFrames - offset) * userData->soundBuffer[inBusNumber].asbd.mBytesPerFrame));
            return noErr;
        } else {
            // we have no source data
            SilenceData(ioData);
            *ioActionFlags |= kAudioUnitRenderAction_OutputIsSilence;
            return noErr;
        }
    }

    memcpy(out, &in[sample], ioData->mBuffers[0].mDataByteSize);

    printf("render input bus %ld from sample %ld, size: %ld\n", inBusNumber, sample, ioData->mBuffers[0].mDataByteSize);

    return noErr;
}

3.3 Post-render bookkeeping

// the render notification is used to keep track of the frame number position in the source audio
static OSStatus renderNotification(void *inRefCon, AudioUnitRenderActionFlags *ioActionFlags, const AudioTimeStamp *inTimeStamp, UInt32 inBusNumber, UInt32 inNumberFrames, AudioBufferList *ioData)
{
    SourceAudioBufferDataPtr userData = (SourceAudioBufferDataPtr)inRefCon;
    
    if (*ioActionFlags & kAudioUnitRenderAction_PostRender) { // kAudioUnitRenderAction_PostRender means this call comes after the render pass
    
        //printf("post render notification frameNum %ld inNumberFrames %ld\n", userData->frameNum, inNumberFrames);
        
        userData->frameNum += inNumberFrames;
        if (userData->frameNum >= userData->maxNumFrames) {
            userData->frameNum = 0; // wrap around for looped playback
        }
    }
    
    return noErr;
}

4. Change the effect

Changing the effect is also simple: just set the effect unit's preset property. The available presets can be fetched as an array.

Get the preset list

    // get the eq's factory preset list -- this is a read-only CFArray of AUPreset structures
    // host owns the returned array and should release it when no longer needed
    UInt32 size = sizeof(mEQPresetsArray);
    result = AudioUnitGetProperty(mEQ, kAudioUnitProperty_FactoryPresets, kAudioUnitScope_Global, 0, &mEQPresetsArray, &size);
    
// apply a preset
- (void)selectEQPreset:(NSInteger)value
{
    AUPreset *aPreset = (AUPreset*)CFArrayGetValueAtIndex(mEQPresetsArray, value);
    OSStatus result = AudioUnitSetProperty(mEQ, kAudioUnitProperty_PresentPreset, kAudioUnitScope_Global, 0, aPreset, sizeof(AUPreset));
    if (result) { printf("AudioUnitSetProperty result %ld %08X %4.4s\n", result, (unsigned int)result, (char*)&result); return; };
    
    printf("SET EQ PRESET %ld ", (long)value);
    CFShow(aPreset->presetName);
}

Adjusting volume just means setting the corresponding AudioUnit parameter.

Control the input volume of each of the two input buses:

// sets the input volume for a specific bus
- (void)setInputVolume:(UInt32)inputNum value:(AudioUnitParameterValue)value
{
    OSStatus result = AudioUnitSetParameter(mMixer, kMultiChannelMixerParam_Volume, kAudioUnitScope_Input, inputNum, value, 0);
    if (result) { printf("AudioUnitSetParameter kMultiChannelMixerParam_Volume Input result %ld %08lX %4.4s\n", (long)result, (long)result, (char*)&result); return; }
}

Set the output volume of the mixed audio:

// sets the overall mixer output volume
- (void)setOutputVolume:(AudioUnitParameterValue)value
{
    OSStatus result = AudioUnitSetParameter(mMixer, kMultiChannelMixerParam_Volume, kAudioUnitScope_Output, 0, value, 0);
    if (result) { printf("AudioUnitSetParameter kMultiChannelMixerParam_Volume Output result %ld %08lX %4.4s\n", (long)result, (long)result, (char*)&result); return; }
}
5. Demo

The demo is actually Apple's official sample; I've only added some comments to it.

https://developer.apple.com/library/archive/samplecode/iPhoneMixerEQGraphTest/Introduction/Intro.html#//apple_ref/doc/uid/DTS40009555
