Capturing an audio stream and extracting PCM on iOS

A recent project involved capturing audio from the phone's microphone and playing it back. I tried two approaches, one after the other, for getting the microphone audio:

1. The first approach captures the raw audio stream through AVCaptureSession; the captured audio is delivered in a callback:

-(void)audioWithSampleBuffer:(CMSampleBufferRef)sampleBuffer {
}
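
For context, here is a minimal sketch of how such a callback is typically driven by an AVCaptureSession with an AVCaptureAudioDataOutput. This is illustrative and not taken from the demo; the class name, queue label, and method wiring are assumptions.

#import <AVFoundation/AVFoundation.h>

// Illustrative capture setup: the microphone feeds an AVCaptureAudioDataOutput,
// whose delegate hands each CMSampleBufferRef to the callback shown above.
@interface AudioCapture : NSObject <AVCaptureAudioDataOutputSampleBufferDelegate>
@property (nonatomic, strong) AVCaptureSession *session;
@end

@implementation AudioCapture

- (void)startCapture {
    self.session = [[AVCaptureSession alloc] init];

    // Microphone input
    AVCaptureDevice *mic = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:mic error:nil];
    if ([self.session canAddInput:input]) {
        [self.session addInput:input];
    }

    // Raw audio output delivered on a background queue
    AVCaptureAudioDataOutput *output = [[AVCaptureAudioDataOutput alloc] init];
    [output setSampleBufferDelegate:self queue:dispatch_queue_create("audio.capture", NULL)];
    if ([self.session canAddOutput:output]) {
        [self.session addOutput:output];
    }

    [self.session startRunning];
}

// Delegate callback: each sample buffer holds one chunk of captured audio
- (void)captureOutput:(AVCaptureOutput *)output
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    [self audioWithSampleBuffer:sampleBuffer];
}

// Stub for the callback shown above; the PCM extraction below happens here
- (void)audioWithSampleBuffer:(CMSampleBufferRef)sampleBuffer {
}

@end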

The sample buffer arrives as a CMSampleBufferRef, and the PCM data has to be extracted from it before it can be played. The extraction method is as follows:

-(NSData *)convertAudioSamepleBufferToPcmData:(CMSampleBufferRef)sampleBuffer {
    // Size of the PCM payload in the sample buffer
    size_t size = CMSampleBufferGetTotalSampleSize(sampleBuffer);
    // Allocate a zeroed buffer of that size
    int8_t *audio_data = (int8_t *)malloc(size);
    memset(audio_data, 0, size);
    // The CMBlockBuffer holds the actual PCM bytes
    CMBlockBufferRef blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer);
    if (blockBuffer == NULL) {
        free(audio_data);
        return nil;
    }
    // Copy the bytes into the buffer we allocated
    CMBlockBufferCopyDataBytes(blockBuffer, 0, size, audio_data);
    NSData *data = [NSData dataWithBytes:audio_data length:size];
    free(audio_data);
    return data;
}

With this data in hand, it can be played back directly. This part is based on the article: https://blog.csdn.net/baidu_32469997/article/details/70017321
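
Playing (or saving) raw PCM also requires knowing its format. As an aside not covered in the referenced article, the sample rate, channel count, and bit depth can be read from the sample buffer's format description; the helper below is illustrative:

#import <Foundation/Foundation.h>
#import <CoreMedia/CoreMedia.h>

// Log the PCM parameters that go with the extracted bytes; a player needs
// these (sample rate, channels, bit depth) to interpret the data correctly.
static void LogPCMFormat(CMSampleBufferRef sampleBuffer) {
    CMAudioFormatDescriptionRef desc = CMSampleBufferGetFormatDescription(sampleBuffer);
    const AudioStreamBasicDescription *asbd =
        CMAudioFormatDescriptionGetStreamBasicDescription(desc);
    if (asbd != NULL) {
        NSLog(@"sampleRate=%.0f channels=%u bitsPerChannel=%u",
              asbd->mSampleRate,
              (unsigned)asbd->mChannelsPerFrame,
              (unsigned)asbd->mBitsPerChannel);
    }
}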


2. The second approach captures the raw audio stream through an AudioUnit. I won't cover AudioUnit itself in detail here; the following shows how to extract PCM from the raw data an AudioUnit delivers.

First, the AudioUnit has a recording callback:

static OSStatus recordingCallback(void *inRefCon,
                                  AudioUnitRenderActionFlags *ioActionFlags,
                                  const AudioTimeStamp *inTimeStamp,
                                  UInt32 inBusNumber,
                                  UInt32 inNumberFrames,
                                  AudioBufferList *ioData) {
    // Because of the way our audio format (set up below) is chosen:
    // - we only need 1 buffer, since it is mono
    // - samples are 16 bits = 2 bytes
    // - 1 frame contains exactly 1 sample
    AudioBuffer buffer;
    buffer.mNumberChannels = 1;
    buffer.mDataByteSize = inNumberFrames * 2;
    buffer.mData = malloc(inNumberFrames * 2);

    // Put the buffer in an AudioBufferList
    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0] = buffer;

    // Obtain the recorded samples (ystAudio is the audio controller object from the demo)
    OSStatus status = AudioUnitRender([ystAudio audioUnit],
                                      ioActionFlags,
                                      inTimeStamp,
                                      inBusNumber,
                                      inNumberFrames,
                                      &bufferList);
    checkStatus(status);

    // The samples we just read are now sitting in bufferList; process them
    [ystAudio processAudio:&bufferList];

    // Release the malloc'ed data in the buffer we created earlier
    free(bufferList.mBuffers[0].mData);

    return noErr;
}
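
For this callback to fire, it has to be registered on the input bus (bus 1) of the unit with a stream format matching the assumptions in the comments (16-bit mono, one sample per frame). Below is a minimal sketch of that wiring, assuming a RemoteIO (kAudioUnitSubType_RemoteIO) unit has already been created elsewhere; the 44.1 kHz sample rate and the helper name are assumptions, not taken from the demo:

#import <AudioToolbox/AudioToolbox.h>

// Hook recordingCallback up to the input side (bus 1) of an existing RemoteIO unit.
static void SetupInputCallback(AudioUnit audioUnit) {
    // Enable input on bus 1 (the microphone side of RemoteIO)
    UInt32 enable = 1;
    AudioUnitSetProperty(audioUnit,
                         kAudioOutputUnitProperty_EnableIO,
                         kAudioUnitScope_Input,
                         1,
                         &enable,
                         sizeof(enable));

    // 16-bit, mono, packed signed-integer PCM -- matches the "2 bytes per frame" math above
    AudioStreamBasicDescription format = {0};
    format.mSampleRate       = 44100.0; // assumption; use whatever the app actually configures
    format.mFormatID         = kAudioFormatLinearPCM;
    format.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    format.mChannelsPerFrame = 1;
    format.mBitsPerChannel   = 16;
    format.mFramesPerPacket  = 1;
    format.mBytesPerFrame    = 2;
    format.mBytesPerPacket   = 2;
    AudioUnitSetProperty(audioUnit,
                         kAudioUnitProperty_StreamFormat,
                         kAudioUnitScope_Output,
                         1,
                         &format,
                         sizeof(format));

    // Deliver captured frames to recordingCallback
    AURenderCallbackStruct callback;
    callback.inputProc = recordingCallback;
    callback.inputProcRefCon = NULL; // pass a context object here if needed
    AudioUnitSetProperty(audioUnit,
                         kAudioOutputUnitProperty_SetInputCallback,
                         kAudioUnitScope_Global,
                         1,
                         &callback,
                         sizeof(callback));
}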

With the bufferList handed over from the callback, the conversion looks like this:

- (void)processAudio:(AudioBufferList *)bufferList {
    AudioBuffer sourceBuffer = bufferList->mBuffers[0];

    // Resize tempBuffer (an AudioBuffer instance variable) if it's the wrong size
    if (tempBuffer.mDataByteSize != sourceBuffer.mDataByteSize) {
        free(tempBuffer.mData);
        tempBuffer.mDataByteSize = sourceBuffer.mDataByteSize;
        tempBuffer.mData = malloc(sourceBuffer.mDataByteSize);
    }

    // Copy the incoming audio into the temporary buffer
    memcpy(tempBuffer.mData, bufferList->mBuffers[0].mData, bufferList->mBuffers[0].mDataByteSize);

    // data now holds the raw PCM for this callback; play it, save it, or send it on
    NSData *data = [NSData dataWithBytes:sourceBuffer.mData length:bufferList->mBuffers[0].mDataByteSize];
}

The data obtained here is the raw PCM.
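
One simple way to verify the output (not part of the original demo; the file name and helper are hypothetical) is to append each chunk to a raw .pcm file, which can later be imported into an audio editor as 16-bit mono data:

#import <Foundation/Foundation.h>

// Append a chunk of raw PCM to a .pcm file in Documents for offline inspection.
static void AppendPCMChunk(NSData *pcm) {
    NSString *dir = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,
                                                        NSUserDomainMask, YES).firstObject;
    NSString *path = [dir stringByAppendingPathComponent:@"capture.pcm"];

    if (![[NSFileManager defaultManager] fileExistsAtPath:path]) {
        [[NSFileManager defaultManager] createFileAtPath:path contents:nil attributes:nil];
    }

    NSFileHandle *handle = [NSFileHandle fileHandleForWritingAtPath:path];
    [handle seekToEndOfFile];
    [handle writeData:pcm];
    [handle closeFile];
}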

GitHub demo: https://github.com/kuqiqi/KtvKit
