Simple Recording and Ear Monitoring with AudioUnit

I recently worked through the official AudioUnit guide and, following the documentation, implemented simple recording and ear monitoring (hearing your own voice in the headphones while recording).
1. First, configure the AVAudioSession. The code is as follows:

    self.graphSampleRate = 44100.0;
    self.ioBufferDuration = 0.005;
    NSError *error = nil;
    AVAudioSession *audioSession = [AVAudioSession sharedInstance];
    // Request the preferred hardware sample rate. (setPreferredHardwareSampleRate:error:
    // is deprecated in newer SDKs; the replacement is setPreferredSampleRate:error:.)
    [audioSession setPreferredHardwareSampleRate:self.graphSampleRate error:&error];
    if (error) {
        NSLog(@"=====error===%@", error);
        exit(-1);
    }
    // Recording plus playback requires the PlayAndRecord category.
    [audioSession setCategory:AVAudioSessionCategoryPlayAndRecord error:&error];
    if (error) {
        NSLog(@"=====error===%@", error);
        exit(-1);
    }
    [audioSession setActive:YES error:&error];
    if (error) {
        NSLog(@"=====error===%@", error);
        exit(-1);
    }
    // After the session becomes active, update your own sample-rate variable with the
    // rate the hardware actually provides. (currentHardwareSampleRate is the older name;
    // newer SDKs call it sampleRate.)
    self.graphSampleRate = [audioSession currentHardwareSampleRate];

    // One more hardware characteristic worth configuring: the audio I/O buffer duration.
    // The default at a 44.1 kHz sample rate is about 23 ms, i.e. a slice of 1,024 samples.
    // If I/O latency matters to your app, you can request a shorter duration, down to
    // roughly 0.005 s (about 256 samples):
    [audioSession setPreferredIOBufferDuration:self.ioBufferDuration error:&error];
    if (error) {
        NSLog(@"=====error===%@", error);
        exit(-1);
    }
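
One thing the snippet above does not cover is microphone permission. The sketch below is my own addition, not part of the original post: without record permission the input element may deliver silence, and from iOS 10 onward the app also needs an NSMicrophoneUsageDescription entry in Info.plist.

    // Not in the original post: ask for record permission before starting the graph.
    [audioSession requestRecordPermission:^(BOOL granted) {
        if (!granted) {
            NSLog(@"Record permission denied; the microphone input may be silent.");
        }
    }];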

2. Create an AudioUnit
2.1 Build an AudioComponentDescription. The main fields to set are componentType and componentSubType. Since we want to record from the microphone and play the result back through the headphones, we configure it as follows:

    // 1. Describe the audio unit we need: a RemoteIO unit (microphone in, speaker/headphones out).
    AudioComponentDescription ioUnitDes;
    ioUnitDes.componentType = kAudioUnitType_Output;
    ioUnitDes.componentSubType = kAudioUnitSubType_RemoteIO;
    ioUnitDes.componentManufacturer = kAudioUnitManufacturer_Apple;
    ioUnitDes.componentFlags = 0;
    ioUnitDes.componentFlagsMask = 0;

Different combinations of componentType and componentSubType serve different purposes; the figure below shows the most common combinations and what they are used for.

(Image: common componentType / componentSubType combinations and their uses)
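
As a side note not used in the rest of this post: if you need echo cancellation, for example when the monitored signal can leak from the speaker back into the microphone, Apple also provides a voice-processing variant of the I/O unit. Only the sub type changes:

    // Hypothetical alternative descriptor (not used below).
    AudioComponentDescription vpioDes;
    vpioDes.componentType = kAudioUnitType_Output;
    vpioDes.componentSubType = kAudioUnitSubType_VoiceProcessingIO; // RemoteIO plus echo cancellation
    vpioDes.componentManufacturer = kAudioUnitManufacturer_Apple;
    vpioDes.componentFlags = 0;
    vpioDes.componentFlagsMask = 0;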

2.2 Create the AudioUnit. There are two ways to instantiate an AudioUnit; here we use the officially recommended AUGraph approach (the direct route is sketched after the CheckStatus helper below):

    // 1. Create the graph.
    OSStatus status;
    status = NewAUGraph(&processingGraph);
    CheckStatus(status, @"Could not create the AUGraph", YES);
    // 2. Add a node described by ioUnitDes.
    AUNode ioNode;
    status = AUGraphAddNode(processingGraph, &ioUnitDes, &ioNode);
    CheckStatus(status, @"Could not add the I/O node", YES);
    // 3. Open the graph; this indirectly instantiates the audio units behind its nodes.
    status = AUGraphOpen(processingGraph);
    CheckStatus(status, @"Could not open the AUGraph", YES);
    // 4. Fetch the AudioUnit instance backing the node.
    status = AUGraphNodeInfo(processingGraph, ioNode, NULL, &_ioUnit);
    CheckStatus(status, @"Could not get the node info", YES);

In the code above, CheckStatus is a small helper that checks whether a Core Audio call succeeded:

    static void CheckStatus(OSStatus status, NSString *message, BOOL fatal)
    {
        if (status != noErr)
        {
            char fourCC[16];
            *(UInt32 *)fourCC = CFSwapInt32HostToBig(status);
            fourCC[4] = '\0';

            if (isprint(fourCC[0]) && isprint(fourCC[1]) && isprint(fourCC[2]) && isprint(fourCC[3]))
                NSLog(@"%@: %s", message, fourCC);
            else
                NSLog(@"%@: %d", message, (int)status);

            if (fatal)
                exit(-1);
        }
    }
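
Section 2.2 mentioned that there are two ways to create the unit. For reference, here is a minimal sketch of the other, non-graph route, which asks the system for the matching component and instantiates it directly (this is not the path the rest of the post takes):

    // Direct instantiation, without an AUGraph.
    AudioComponent ioComponent = AudioComponentFindNext(NULL, &ioUnitDes);
    OSStatus status = AudioComponentInstanceNew(ioComponent, &_ioUnit);
    CheckStatus(status, @"Could not create the RemoteIO unit directly", YES);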

2.3 Set the AudioUnit properties

    // Enable recording on the input element (bus 1) and playback on the output element (bus 0).
    // (On RemoteIO, output is enabled by default and input is disabled by default.)
    UInt32 flag = 1;
    status = AudioUnitSetProperty(_ioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Input, 1, &flag, sizeof(flag));
    CheckStatus(status, @"Could not enable IO on the input element", YES);
    status = AudioUnitSetProperty(_ioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Output, 0, &flag, sizeof(flag));
    CheckStatus(status, @"Could not enable IO on the output element", YES);

In the calls above, element (bus) 1 is the side connected to the recording hardware (the microphone), and element (bus) 0 is the side connected to the output hardware (the speaker or headphones).
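To keep the element numbers straight, here is a small readability sketch; the constant names are my own and do not appear in the original code:

    // The RemoteIO unit has two elements (buses):
    //   microphone -> element 1 input scope  ->  element 1 output scope -> your app
    //   your app   -> element 0 input scope  ->  element 0 output scope -> speaker/headphones
    const AudioUnitElement kInputBus  = 1;   // the recording side
    const AudioUnitElement kOutputBus = 0;   // the playback side
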
2.4 Set the stream format (AudioStreamBasicDescription)

    // The "canonical" AudioUnit sample format: non-interleaved, with AudioUnitSampleType
    // being a 32-bit fixed-point type on iOS.
    size_t bytesPerSample = sizeof(AudioUnitSampleType);
    AudioStreamBasicDescription asbd = {0};
    asbd.mFormatID = kAudioFormatLinearPCM;
    asbd.mFormatFlags = kAudioFormatFlagsAudioUnitCanonical;
    asbd.mBytesPerFrame = bytesPerSample;      // non-interleaved: one channel per buffer
    asbd.mBytesPerPacket = bytesPerSample;
    asbd.mBitsPerChannel = 8 * bytesPerSample;
    asbd.mFramesPerPacket = 1;
    asbd.mChannelsPerFrame = 2;
    asbd.mSampleRate = self.graphSampleRate;

    // Input scope of the output element (bus 0): the format we feed toward the speaker.
    status = AudioUnitSetProperty(_ioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &asbd, sizeof(AudioStreamBasicDescription));
    CheckStatus(status, @"Could not set the stream format on the output element", YES);
    // Output scope of the input element (bus 1): the format the microphone data is delivered in.
    status = AudioUnitSetProperty(_ioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 1, &asbd, sizeof(AudioStreamBasicDescription));
    CheckStatus(status, @"Could not set the stream format on the input element", YES);

The figures below show the recommended stream-format settings for AudioUnits used in different scenarios.

(Images: stream-format settings for the different AudioUnit use cases)

3. Set the callback functions. There are two ways to do this as well: setting them directly on the AudioUnit (which may not be thread-safe), or setting them through the AUGraph.
3.1 Setting the callbacks directly on the AudioUnit

    // Set the render (playback) callback on the input scope of the output element (bus 0).
    AURenderCallbackStruct playCallBack;
    playCallBack.inputProc = playCallBackFuc;
    playCallBack.inputProcRefCon = (__bridge void *)self;

    status = AudioUnitSetProperty(_ioUnit, kAudioUnitProperty_SetRenderCallback, kAudioUnitScope_Input, 0, &playCallBack, sizeof(playCallBack));
    CheckStatus(status, @"Could not set the render callback", YES);

    // Set the recording (input) callback on the input element (bus 1).
    AURenderCallbackStruct recordCallback;
    recordCallback.inputProc = RecordCallbackFuc;
    recordCallback.inputProcRefCon = (__bridge void *)self;
    status = AudioUnitSetProperty(_ioUnit,
                                  kAudioOutputUnitProperty_SetInputCallback,
                                  kAudioUnitScope_Global,
                                  1,
                                  &recordCallback,
                                  sizeof(recordCallback));
    CheckStatus(status, @"Could not set the input callback", YES);

The recording (input) callback looks like this:

    static OSStatus RecordCallbackFuc(void *inRefCon,
                                      AudioUnitRenderActionFlags *ioActionFlags,
                                      const AudioTimeStamp *inTimeStamp,
                                      UInt32 inBusNumber,
                                      UInt32 inNumberFrames,
                                      AudioBufferList * __nullable ioData) {
        ViewController *viewC = (__bridge ViewController *)inRefCon;
        NSLog(@"recording");
        // For an input callback ioData is normally NULL: the captured samples are not handed
        // to you directly. To read them you call AudioUnitRender on the input element (bus 1)
        // with a buffer list you own.
        if (ioData) {
            NSLog(@"size = %u", (unsigned int)ioData->mBuffers[0].mDataByteSize);
            AudioUnitRender(viewC.ioUnit, ioActionFlags, inTimeStamp, 1, inNumberFrames, ioData);
        }
        return noErr;
    }
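
Because ioData is NULL here, the branch above normally does nothing; to actually capture samples you need a buffer list of your own. The helper below is a minimal sketch of that idea (the function name and layout are mine, not the post's), sized for the non-interleaved stereo format configured earlier. In a real app the buffers would be allocated once outside the real-time callback rather than with malloc inside it.

    static OSStatus PullMicSamples(AudioUnit ioUnit,
                                   AudioUnitRenderActionFlags *ioActionFlags,
                                   const AudioTimeStamp *inTimeStamp,
                                   UInt32 inNumberFrames) {
        const UInt32 channels = 2; // matches asbd.mChannelsPerFrame above
        UInt32 bytesPerChannel = inNumberFrames * sizeof(AudioUnitSampleType);
        // Non-interleaved audio needs one AudioBuffer per channel.
        AudioBufferList *bufferList =
            calloc(1, offsetof(AudioBufferList, mBuffers) + channels * sizeof(AudioBuffer));
        bufferList->mNumberBuffers = channels;
        for (UInt32 i = 0; i < channels; i++) {
            bufferList->mBuffers[i].mNumberChannels = 1;
            bufferList->mBuffers[i].mDataByteSize = bytesPerChannel;
            bufferList->mBuffers[i].mData = malloc(bytesPerChannel);
        }
        // Bus 1 is the input element; this fills our buffers with the microphone samples.
        OSStatus status = AudioUnitRender(ioUnit, ioActionFlags, inTimeStamp,
                                          1, inNumberFrames, bufferList);
        // ...hand bufferList->mBuffers[i].mData to a file writer or ring buffer here...
        for (UInt32 i = 0; i < channels; i++) {
            free(bufferList->mBuffers[i].mData);
        }
        free(bufferList);
        return status;
    }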

And the playback (render) callback:

    static OSStatus playCallBackFuc(void *inRefCon,
                                    AudioUnitRenderActionFlags *ioActionFlags,
                                    const AudioTimeStamp *inTimeStamp,
                                    UInt32 inBusNumber,
                                    UInt32 inNumberFrames,
                                    AudioBufferList * __nullable ioData) {
        ViewController *viewC = (__bridge ViewController *)inRefCon;
        NSLog(@"size = %u", (unsigned int)ioData->mBuffers[0].mDataByteSize);
        // Pull the freshly captured samples from the input element (bus 1) straight into the
        // output buffers; this pass-through is what produces the ear-monitoring effect.
        AudioUnitRender(viewC.ioUnit, ioActionFlags, inTimeStamp, 1, inNumberFrames, ioData);
        NSLog(@"playback");
        return noErr;
    }

3.2 Setting the callback through the AUGraph (drawback: only the render/playback callback can be set this way; advantage: it is thread-safe)

    // Set the render callback through the graph.
    AURenderCallbackStruct playCallBack;
    playCallBack.inputProc = playCallBackFuc;
    playCallBack.inputProcRefCon = (__bridge void *)self;

    // This call is thread-safe (but it can only install a render callback).
    status = AUGraphSetNodeInputCallback(processingGraph, ioNode, 0, &playCallBack);
    CheckStatus(status, @"Could not set the node input callback", YES);

4. Initialize the AUGraph

    // Initialize the audio processing graph.
    OSStatus result = AUGraphInitialize(processingGraph);
    CheckStatus(result, @"Could not initialize the AUGraph", YES);

5. Start the AUGraph

    // Start the graph; recording and monitoring begin here.
    OSStatus status = AUGraphStart(processingGraph);
    CheckStatus(status, @"Could not start the AUGraph", YES);
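
For completeness, here is a minimal teardown sketch, assuming the graph was started as above (the original post does not show this step); call it when recording and monitoring should stop:

    Boolean isRunning = false;
    AUGraphIsRunning(processingGraph, &isRunning);
    if (isRunning) {
        AUGraphStop(processingGraph);
    }
    AUGraphUninitialize(processingGraph);
    AUGraphClose(processingGraph);
    DisposeAUGraph(processingGraph);
    [[AVAudioSession sharedInstance] setActive:NO error:nil];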
