Foreword
Hardware-encoding H.264 with VideoToolbox
Hardware-decoding H.264 with VideoToolbox
This time, while encoding the H.264 video stream, we also record and encode an AAC audio stream.
Introduction
Sounds in nature are extremely complex, with intricate waveforms. The encoding we usually use is Pulse Code Modulation, i.e. PCM. PCM converts a continuously varying analog signal into a digital code through three steps: sampling, quantization, and encoding.
- Sampling: scan the analog signal periodically, turning a signal that is continuous in time into one that is discrete in time;
- Quantization: represent each instantaneous sample with the closest level from a fixed set of levels, usually expressed in binary;
- Encoding: represent each quantized value, now at a fixed level, with a binary code word.
Introduction to PCM: Baidu Baike
It follows that the size of the sampled data = sample rate × sample size × number of channels, in bps.
For a PCM-encoded WAV file with a 44.1 kHz sample rate, 16-bit samples, and two channels, the data rate = 44.1K × 16 × 2 bps = 1411.2 Kbps = 176.4 KB/s.
That is roughly the data rate of compressed video!
This is where AAC, Advanced Audio Coding, comes in.
Advanced Audio Coding (AAC)
AAC (Advanced Audio Coding) appeared in 1997 as an audio coding technology based on MPEG-2. It was developed jointly by Fraunhofer IIS, Dolby Laboratories, AT&T, Sony, and others, with the goal of replacing the MP3 format.
AAC on Wikipedia
See here for the principles of audio compression coding.
AAC audio formats
AAC audio comes in two formats, ADIF and ADTS:
- ADIF: Audio Data Interchange Format. Its defining feature is that the start of the audio data can be located deterministically; decoding cannot begin in the middle of the stream, it must start at an explicitly defined beginning. This format is therefore commonly used for disk files.
- ADTS: Audio Data Transport Stream. Its defining feature is that it is a bitstream with sync words, so decoding can start at any position in the stream. It is similar in character to the MP3 stream format.
Encoding PCM audio into an AAC stream on iOS
- 1. Set up the encoder (codec) and start recording;
- 2. Collect the PCM data and feed it to the encoder;
- 3. When encoding completes, the callback fires and the result is written to a file.
Detailed steps
1. Create and configure the AVCaptureSession
Create the AVCaptureSession, find the audio AVCaptureDevice, create an input from that device and add it to the session, then add the output to the session.
audioFileHandle is an NSFileHandle used to write the encoded AAC audio to a file.
In the demo this code also configures video; the video-related setup has been removed here for brevity.
- (void)startCapture {
    self.mCaptureSession = [[AVCaptureSession alloc] init];

    mCaptureQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    mEncodeQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

    AVCaptureDevice *audioDevice = [[AVCaptureDevice devicesWithMediaType:AVMediaTypeAudio] lastObject];
    self.mCaptureAudioDeviceInput = [[AVCaptureDeviceInput alloc] initWithDevice:audioDevice error:nil];
    if ([self.mCaptureSession canAddInput:self.mCaptureAudioDeviceInput]) {
        [self.mCaptureSession addInput:self.mCaptureAudioDeviceInput];
    }

    self.mCaptureAudioOutput = [[AVCaptureAudioDataOutput alloc] init];
    if ([self.mCaptureSession canAddOutput:self.mCaptureAudioOutput]) {
        [self.mCaptureSession addOutput:self.mCaptureAudioOutput];
    }
    [self.mCaptureAudioOutput setSampleBufferDelegate:self queue:mCaptureQueue];

    NSString *audioFile = [[NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) lastObject] stringByAppendingPathComponent:@"abc.aac"];
    [[NSFileManager defaultManager] removeItemAtPath:audioFile error:nil];
    [[NSFileManager defaultManager] createFileAtPath:audioFile contents:nil attributes:nil];
    audioFileHandle = [NSFileHandle fileHandleForWritingAtPath:audioFile];

    [self.mCaptureSession startRunning];
}
2. Create the converter
AudioStreamBasicDescription describes the output stream. After configuring outAudioStreamBasicDescription, pass it together with an AudioClassDescription (which selects the encoder) to AudioConverterNewSpecific to create the converter.
/**
 *  Configure the encoding parameters
 *
 *  @param sampleBuffer audio sample buffer
 */
- (void)setupEncoderFromSampleBuffer:(CMSampleBufferRef)sampleBuffer {
    AudioStreamBasicDescription inAudioStreamBasicDescription = *CMAudioFormatDescriptionGetStreamBasicDescription((CMAudioFormatDescriptionRef)CMSampleBufferGetFormatDescription(sampleBuffer));

    AudioStreamBasicDescription outAudioStreamBasicDescription = {0}; // Zero-initialize the output stream description. This is important.
    outAudioStreamBasicDescription.mSampleRate = inAudioStreamBasicDescription.mSampleRate; // Frame rate of the audio stream at normal playback. For compressed formats this is the rate after decompression. Must not be 0.
    outAudioStreamBasicDescription.mFormatID = kAudioFormatMPEG4AAC; // The encoding format
    outAudioStreamBasicDescription.mFormatFlags = kMPEG4Object_AAC_LC; // The AAC-LC (Low Complexity) profile; 0 means no flags
    outAudioStreamBasicDescription.mBytesPerPacket = 0; // Bytes of audio data per packet. Set to 0 for variable packet sizes; variable-size formats use AudioStreamPacketDescription to determine each packet's size.
    outAudioStreamBasicDescription.mFramesPerPacket = 1024; // Frames per packet. 1 for uncompressed audio. For fixed-frame-count compressed formats this is a larger constant, e.g. 1024 for AAC. For variable frame counts per packet (e.g. Ogg) set it to 0.
    outAudioStreamBasicDescription.mBytesPerFrame = 0; // Bytes per frame, i.e. from the start of one frame to the start of the next. Set to 0 for compressed formats.
    outAudioStreamBasicDescription.mChannelsPerFrame = 1; // Number of channels
    outAudioStreamBasicDescription.mBitsPerChannel = 0; // Set to 0 for compressed formats
    outAudioStreamBasicDescription.mReserved = 0; // Pads the struct to an 8-byte boundary; must be 0.

    AudioClassDescription *description = [self getAudioClassDescriptionWithType:kAudioFormatMPEG4AAC
                                                               fromManufacturer:kAppleSoftwareAudioCodecManufacturer]; // software encoder

    OSStatus status = AudioConverterNewSpecific(&inAudioStreamBasicDescription, &outAudioStreamBasicDescription, 1, description, &_audioConverter); // Create the converter
    if (status != 0) {
        NSLog(@"setup converter: %d", (int)status);
    }
}
The method that looks up the encoder:
/**
 *  Look up an audio codec
 *
 *  @param type         encoding format
 *  @param manufacturer software or hardware codec
 *
 *  A codec is a device or program that transforms a signal or data stream. The transformation
 *  covers both encoding a signal or stream (typically for transmission, storage, or encryption)
 *  and decoding an encoded stream back into a form suitable for viewing or processing. Codecs
 *  are widely used in applications such as video conferencing and streaming media.
 *
 *  @return the matching codec description
 */
- (AudioClassDescription *)getAudioClassDescriptionWithType:(UInt32)type
                                           fromManufacturer:(UInt32)manufacturer
{
    static AudioClassDescription desc;

    UInt32 encoderSpecifier = type;
    OSStatus st;

    UInt32 size;
    st = AudioFormatGetPropertyInfo(kAudioFormatProperty_Encoders,
                                    sizeof(encoderSpecifier),
                                    &encoderSpecifier,
                                    &size);
    if (st) {
        NSLog(@"error getting audio format property info: %d", (int)(st));
        return nil;
    }

    unsigned int count = size / sizeof(AudioClassDescription);
    AudioClassDescription descriptions[count];
    st = AudioFormatGetProperty(kAudioFormatProperty_Encoders,
                                sizeof(encoderSpecifier),
                                &encoderSpecifier,
                                &size,
                                descriptions);
    if (st) {
        NSLog(@"error getting audio format property: %d", (int)(st));
        return nil;
    }

    for (unsigned int i = 0; i < count; i++) {
        if ((type == descriptions[i].mSubType) &&
            (manufacturer == descriptions[i].mManufacturer)) {
            memcpy(&desc, &(descriptions[i]), sizeof(desc));
            return &desc;
        }
    }

    return nil;
}
3. Get the PCM data and feed it to the encoder
Use CMSampleBufferGetDataBuffer to get the CMBlockBufferRef inside the CMSampleBufferRef, then use CMBlockBufferGetDataPointer to obtain _pcmBufferSize and _pcmBuffer.
Call AudioConverterFillComplexBuffer to submit the data; the input callback is invoked to fill the converter's buffer.
CMBlockBufferRef blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer);
CFRetain(blockBuffer); // keep the block buffer alive while the converter reads it; balance with CFRelease when done
OSStatus status = CMBlockBufferGetDataPointer(blockBuffer, 0, NULL, &_pcmBufferSize, &_pcmBuffer);
NSError *error = nil;
if (status != kCMBlockBufferNoErr) {
    error = [NSError errorWithDomain:NSOSStatusErrorDomain code:status userInfo:nil];
}

memset(_aacBuffer, 0, _aacBufferSize);

AudioBufferList outAudioBufferList = {0};
outAudioBufferList.mNumberBuffers = 1;
outAudioBufferList.mBuffers[0].mNumberChannels = 1;
outAudioBufferList.mBuffers[0].mDataByteSize = (int)_aacBufferSize;
outAudioBufferList.mBuffers[0].mData = _aacBuffer;

AudioStreamPacketDescription *outPacketDescription = NULL;
UInt32 ioOutputDataPacketSize = 1;
// Converts data supplied by an input callback function, supporting non-interleaved and packetized formats.
// Produces a buffer list of output data from an AudioConverter. The supplied input callback function is called whenever necessary.
status = AudioConverterFillComplexBuffer(_audioConverter, inInputDataProc, (__bridge void *)(self), &ioOutputDataPacketSize, &outAudioBufferList, outPacketDescription);
The callback function:
/**
 *  A callback function that supplies audio data to convert. This callback is invoked repeatedly as the converter is ready for new input data.
 */
OSStatus inInputDataProc(AudioConverterRef inAudioConverter, UInt32 *ioNumberDataPackets, AudioBufferList *ioData, AudioStreamPacketDescription **outDataPacketDescription, void *inUserData)
{
    AACEncoder *encoder = (__bridge AACEncoder *)(inUserData);
    UInt32 requestedPackets = *ioNumberDataPackets;

    size_t copiedSamples = [encoder copyPCMSamplesIntoBuffer:ioData];
    if (copiedSamples < requestedPackets) {
        // The PCM buffer isn't full yet; report no data so the converter stops pulling
        *ioNumberDataPackets = 0;
        return -1;
    }
    *ioNumberDataPackets = 1;

    return noErr;
}
/**
 *  Copy the pending PCM data into the converter's buffer
 */
- (size_t)copyPCMSamplesIntoBuffer:(AudioBufferList *)ioData {
    size_t originalBufferSize = _pcmBufferSize;
    if (!originalBufferSize) {
        return 0;
    }
    ioData->mBuffers[0].mData = _pcmBuffer;
    ioData->mBuffers[0].mDataByteSize = (int)_pcmBufferSize;
    _pcmBuffer = NULL;
    _pcmBufferSize = 0;
    return originalBufferSize;
}
4. Get the raw AAC stream, add the ADTS header, and write to file
AudioConverterFillComplexBuffer returns a raw AAC stream; an ADTS header must be prepended to each AAC frame. The header is generated by the adtsDataForPacketLength method, and the resulting data is written to the file behind audioFileHandle.
if (status == 0) {
    NSData *rawAAC = [NSData dataWithBytes:outAudioBufferList.mBuffers[0].mData length:outAudioBufferList.mBuffers[0].mDataByteSize];
    NSData *adtsHeader = [self adtsDataForPacketLength:rawAAC.length];
    NSMutableData *fullData = [NSMutableData dataWithData:adtsHeader];
    [fullData appendData:rawAAC];
    data = fullData;
} else {
    error = [NSError errorWithDomain:NSOSStatusErrorDomain code:status userInfo:nil];
}
if (completionBlock) {
    dispatch_async(_callbackQueue, ^{
        completionBlock(data, error);
    });
}
An ADTS header generator commonly found online:
/**
 *  Add ADTS header at the beginning of each and every AAC packet.
 *  This is needed as MediaCodec encoder generates a packet of raw
 *  AAC data.
 *
 *  Note the ADTS length field must count the header itself.
 *  See: http://wiki.multimedia.cx/index.php?title=ADTS
 *  Also: http://wiki.multimedia.cx/index.php?title=MPEG-4_Audio#Channel_Configurations
 **/
- (NSData *)adtsDataForPacketLength:(NSUInteger)packetLength {
    int adtsLength = 7;
    char *packet = malloc(sizeof(char) * adtsLength);
    // Variables recycled by addADTStoPacket
    int profile = 2;  // AAC LC
    // 39 = MediaCodecInfo.CodecProfileLevel.AACObjectELD
    int freqIdx = 4;  // 44.1 kHz
    int chanCfg = 1;  // MPEG-4 Audio Channel Configuration: 1 = front-center
    NSUInteger fullLength = adtsLength + packetLength;
    // Fill in the ADTS header
    packet[0] = (char)0xFF; // 11111111     = syncword (first 8 bits)
    packet[1] = (char)0xF9; // 1111 1 00 1  = syncword end, MPEG-2 ID, layer, protection absent (no CRC)
    packet[2] = (char)(((profile-1)<<6) + (freqIdx<<2) + (chanCfg>>2));
    packet[3] = (char)(((chanCfg&3)<<6) + (fullLength>>11));
    packet[4] = (char)((fullLength&0x7FF) >> 3);
    packet[5] = (char)(((fullLength&7)<<5) + 0x1F);
    packet[6] = (char)0xFC;
    NSData *data = [NSData dataWithBytesNoCopy:packet length:adtsLength freeWhenDone:YES];
    return data;
}
Summary
The demo exists mainly to get familiar with the AAC format; it records audio from the microphone and encodes it into an AAC stream.
The next post will cover decoding and playing back the AAC stream produced here.
The code is available here.