https://github.com/WarBand/AVFoundationOCDemo
AVFoundation's import and export path uses AVAssetReader and AVAssetWriter. The two classes can be used as a pair or on their own: AVAssetReader reads a media file out as CMSampleBuffers, and AVAssetWriter encodes CMSampleBuffers into audio/video files in a variety of formats.
Using AVAssetReader
An AVAssetReader is initialized from an AVAsset object, like so:
// Create the AVAssetReader from an AVAsset
NSURL *assetUrl = [[NSBundle mainBundle] URLForResource:@"ElephantSeals" withExtension:@"mov"];
AVAsset *asset = [AVAsset assetWithURL:assetUrl];
self.asset = asset;
AVAssetReader *assetReader = [AVAssetReader assetReaderWithAsset:asset error:nil];
self.assetReader = assetReader;
Nonessential error handling is omitted here to keep the flow clear. An AVAsset can point at network media, such as an HLS stream, but although AVAssetReader is initialized from an AVAsset, it does not support network-based assets.
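For completeness, the error handling omitted above might look like this minimal sketch (same `asset` as in the snippet):

NSError *error = nil;
AVAssetReader *reader = [AVAssetReader assetReaderWithAsset:asset error:&error];
if (reader == nil) {
    // Creation fails for sources AVAssetReader cannot handle, including network-based assets.
    NSLog(@"Could not create AVAssetReader: %@", error);
}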
Once AVAssetReader has read the media data, it vends it as CMSampleBuffers through an AVAssetReaderOutput. AVAssetReaderOutput has three main subclasses: AVAssetReaderTrackOutput, AVAssetReaderAudioMixOutput, and AVAssetReaderVideoCompositionOutput. The latter two are used when reading composed, editable sources; for an ordinary AVAsset, AVAssetReaderTrackOutput is all you need.
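For reference, reading a composed source might look roughly like the sketch below; `composition`, `audioMix`, and `videoComposition` are assumed to have been built elsewhere by your editing code:

// Hypothetical composed source: composition, audioMix, and videoComposition are assumptions, not from the demo.
NSArray *audioTracks = [composition tracksWithMediaType:AVMediaTypeAudio];
AVAssetReaderAudioMixOutput *mixOutput = [AVAssetReaderAudioMixOutput assetReaderAudioMixOutputWithAudioTracks:audioTracks audioSettings:nil];
mixOutput.audioMix = audioMix;
NSArray *videoTracks = [composition tracksWithMediaType:AVMediaTypeVideo];
AVAssetReaderVideoCompositionOutput *compositionOutput = [AVAssetReaderVideoCompositionOutput assetReaderVideoCompositionOutputWithVideoTracks:videoTracks videoSettings:nil];
compositionOutput.videoComposition = videoComposition;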
Create the outputs:
// Audio output
AVAssetTrack *audioTrack = [[asset tracksWithMediaType:AVMediaTypeAudio] firstObject];
// Configure the audio output settings here; audioOutputSetting may be nil, which means no processing: samples are delivered in their original format.
NSDictionary *audioOutputSetting = [self configAudioOutput];
AVAssetReaderTrackOutput *audioTrackOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:audioTrack outputSettings:audioOutputSetting];
self.assetReaderAudioOutput = audioTrackOutput;
// Video output
AVAssetTrack *videoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] firstObject];
// Configure the video output settings here; videoOutputSetting may be nil, which means no processing: frames are delivered in their original format.
NSDictionary *videoOutputSetting = [self configVideoOutput];
AVAssetReaderTrackOutput *videoTrackOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:videoTrack outputSettings:videoOutputSetting];
self.assetReaderVideoOutput = videoTrackOutput;
// Attach the outputs to the reader
if ([assetReader canAddOutput:audioTrackOutput]) {
    [assetReader addOutput:audioTrackOutput];
}
if ([assetReader canAddOutput:videoTrackOutput]) {
    [assetReader addOutput:videoTrackOutput];
}
Although an AVAssetReaderTrackOutput's output settings may be nil, that choice can surface later as an append error:

-[AVAssetWriterInput appendSampleBuffer:] Cannot append sample buffer: Input buffer must be in an uncompressed format when outputSettings is not nil

That is, when the writer input has non-nil settings (it will encode), every buffer you append must already be uncompressed, so the reader output must decode it (its outputSettings must be non-nil as well).
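If you are unsure what a reader output actually produces, you can inspect a copied buffer's format description; a minimal sketch (assuming `sampleBuffer` came from copyNextSampleBuffer):

// Check whether a sample buffer holds compressed or uncompressed media.
CMFormatDescriptionRef desc = CMSampleBufferGetFormatDescription(sampleBuffer);
FourCharCode subType = CMFormatDescriptionGetMediaSubType(desc);
// For video, a kCVPixelFormatType_* subtype means uncompressed pixels; a codec type such as 'avc1' means it is still compressed.
NSLog(@"media subtype: %c%c%c%c", (subType >> 24) & 0xFF, (subType >> 16) & 0xFF, (subType >> 8) & 0xFF, subType & 0xFF);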
Using AVAssetWriter
Initialize the AVAssetWriter:
// UTTypeCopyPreferredTagWithClass comes from MobileCoreServices; it maps the file type to its filename extension ("mov").
NSString *extension = CFBridgingRelease(UTTypeCopyPreferredTagWithClass((__bridge CFStringRef)AVFileTypeQuickTimeMovie, kUTTagClassFilenameExtension));
NSString *path = [NSString stringWithFormat:@"%@demo.%@", NSTemporaryDirectory(), extension];
if ([[NSFileManager defaultManager] fileExistsAtPath:path]) {
    [[NSFileManager defaultManager] removeItemAtPath:path error:nil];
}
NSURL *url = [NSURL fileURLWithPath:path];
AVAssetWriter *assetWriter = [AVAssetWriter assetWriterWithURL:url fileType:AVFileTypeQuickTimeMovie error:nil];
self.assetWriter = assetWriter;
AVAssetWriter's job is to turn CMSampleBuffers into a file. CMSampleBuffers are fed to it through AVAssetWriterInput objects.
Initialize the AVAssetWriterInput objects:
// Audio
// Configure the audio input settings here; audioInputSetting may be nil, meaning samples are passed through without re-encoding.
NSDictionary *audioInputSetting = [self configAudioInput];
AVAssetWriterInput *audioTrackInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:audioInputSetting];
self.assetWriterAudioInput = audioTrackInput;
// Video
// Configure the video input settings here; videoInputSetting may be nil, meaning frames are passed through without re-encoding.
NSDictionary *videoInputSetting = [self configVideoInput];
AVAssetWriterInput *videoTrackInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:videoInputSetting];
self.assetWriterVideoInput = videoTrackInput;
if ([assetWriter canAddInput:audioTrackInput]) {
    [assetWriter addInput:audioTrackInput];
}
if ([assetWriter canAddInput:videoTrackInput]) {
    [assetWriter addInput:videoTrackInput];
}
With the setup done, we can start reading and writing. Since this demo exports a QuickTime file to another QuickTime file, all of the settings dictionaries above can simply be nil (full passthrough).
In this mode, the AVAssetWriterInput pulls CMSampleBuffer data itself, via the method below. The block is invoked many times; it is not a single call inside which one loop can drain all the content. If the queued work is not wound down correctly, you will get an error complaining that startSessionAtSourceTime: must be called.
- (void)requestMediaDataWhenReadyOnQueue:(dispatch_queue_t)queue usingBlock:(void (^)(void))block;
Create two serial queues, one for reading audio and one for reading video:
dispatch_queue_t audioQueue = dispatch_queue_create("Audio Queue", DISPATCH_QUEUE_SERIAL);
self.rwAudioSerializationQueue = audioQueue;
dispatch_queue_t videoQueue = dispatch_queue_create("Video Queue", DISPATCH_QUEUE_SERIAL);
self.rwVideoSerializationQueue = videoQueue;
Use a dispatch group to synchronize the work across the two queues:
self.dispatchGroup = dispatch_group_create();
Start:
[self.assetReader startReading];
[self.assetWriter startWriting];
// The session's source start time is up to you; CMTimeMake(2, 1) starts at the 2-second mark.
[self.assetWriter startSessionAtSourceTime:CMTimeMake(2, 1)];
self.audioFinished = NO;
dispatch_group_enter(self.dispatchGroup);
[audioTrackInput requestMediaDataWhenReadyOnQueue:audioQueue usingBlock:^{
    BOOL completedOrFailed = NO;
    while ([self.assetWriterAudioInput isReadyForMoreMediaData] && !completedOrFailed) {
        CMSampleBufferRef sampleBuffer = [self.assetReaderAudioOutput copyNextSampleBuffer];
        if (sampleBuffer != NULL) {
            BOOL success = [self.assetWriterAudioInput appendSampleBuffer:sampleBuffer];
            CFRelease(sampleBuffer); // copyNextSampleBuffer returns a retained buffer; release it or it leaks
            sampleBuffer = NULL;
            completedOrFailed = !success;
        } else {
            completedOrFailed = YES;
        }
    }
    if (completedOrFailed) {
        BOOL oldFinished = self.audioFinished;
        self.audioFinished = YES;
        if (oldFinished == NO) {
            [self.assetWriterAudioInput markAsFinished];
            // Leave the group only the first time we finish, so it cannot be left twice.
            dispatch_group_leave(self.dispatchGroup);
        }
    }
}];
self.videoFinished = NO;
dispatch_group_enter(self.dispatchGroup);
[videoTrackInput requestMediaDataWhenReadyOnQueue:videoQueue usingBlock:^{
    while ([videoTrackInput isReadyForMoreMediaData] && !self.videoFinished) {
        CMSampleBufferRef sampleBuffer = [videoTrackOutput copyNextSampleBuffer];
        if (sampleBuffer != NULL) {
            [videoTrackInput appendSampleBuffer:sampleBuffer];
            CFRelease(sampleBuffer); // release the retained buffer to avoid a leak
            sampleBuffer = NULL;
        } else {
            self.videoFinished = YES;
            [videoTrackInput markAsFinished];
            dispatch_group_leave(self.dispatchGroup);
        }
    }
}];
Finish:
dispatch_group_notify(self.dispatchGroup, dispatch_get_main_queue(), ^{
    // finishWriting is deprecated; prefer the asynchronous completion-handler variant.
    [self.assetWriter finishWritingWithCompletionHandler:^{
        NSLog(@"Export finished with status: %ld", (long)self.assetWriter.status);
    }];
});
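In real code you should also check both objects for failure before finishing; a minimal sketch using the same properties:

// If reading failed, cancel the writer rather than finishing, so a broken file is not kept.
if (self.assetReader.status == AVAssetReaderStatusFailed) {
    NSLog(@"Reading failed: %@", self.assetReader.error);
    [self.assetWriter cancelWriting];
} else {
    [self.assetWriter finishWritingWithCompletionHandler:^{
        if (self.assetWriter.status == AVAssetWriterStatusFailed) {
            NSLog(@"Writing failed: %@", self.assetWriter.error);
        }
    }];
}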
That is the overall flow.
Below are the output and input settings. Note that for decoding, AVAssetReaderTrackOutput can only output uncompressed formats.
Decoding audio
When decoding audio, AVFormatIDKey must be kAudioFormatLinearPCM, and AVSampleRateConverterAudioQualityKey is not supported.
/** Audio decode settings */
- (NSDictionary *)configAudioOutput
{
    NSDictionary *audioOutputSetting = @{
        AVFormatIDKey: @(kAudioFormatLinearPCM)
    };
    return audioOutputSetting;
}
Decoding video
When decoding video, the AVVideoCleanApertureKey, AVVideoPixelAspectRatioKey, and AVVideoScalingModeKey settings are not supported.
/** Video decode settings */
- (NSDictionary *)configVideoOutput
{
    NSDictionary *videoOutputSetting = @{
        (__bridge NSString *)kCVPixelBufferPixelFormatTypeKey: @(kCVPixelFormatType_422YpCbCr8)
    };
    return videoOutputSetting;
}
Encoding audio
AVEncoderAudioQualityKey and AVSampleRateConverterAudioQualityKey are not supported. The settings must include AVFormatIDKey, AVSampleRateKey, and AVNumberOfChannelsKey; if AVNumberOfChannelsKey is greater than 2, AVChannelLayoutKey must be set as well.
/** Audio encode settings */
- (NSDictionary *)configAudioInput
{
    AudioChannelLayout channelLayout = {
        .mChannelLayoutTag = kAudioChannelLayoutTag_Stereo,
        .mChannelBitmap = 0, // ignored when a layout tag is set
        .mNumberChannelDescriptions = 0
    };
    NSData *channelLayoutData = [NSData dataWithBytes:&channelLayout length:offsetof(AudioChannelLayout, mChannelDescriptions)];
    NSDictionary *audioInputSetting = @{
        AVFormatIDKey: @(kAudioFormatMPEG4AAC),
        AVSampleRateKey: @(44100),
        AVNumberOfChannelsKey: @(2),
        AVChannelLayoutKey: channelLayoutData
    };
    return audioInputSetting;
}
Encoding video
AVVideoCodecKey, AVVideoWidthKey, and AVVideoHeightKey are required. On iOS, AVVideoCodecKey supports AVVideoCodecH264 and AVVideoCodecJPEG (AVVideoCodecH264 is not available on the iPhone 3G). For AVVideoScalingModeKey, AVVideoScalingModeFit is not supported.
/** Video encode settings */
- (NSDictionary *)configVideoInput
{
    NSDictionary *videoInputSetting = @{
        AVVideoCodecKey: AVVideoCodecH264,
        AVVideoWidthKey: @(540),
        AVVideoHeightKey: @(360)
    };
    return videoInputSetting;
}
Honestly, the most troublesome part of all this is the decode and encode configuration; the encode settings in particular are easy to get wrong. All of the keys above are declared in AVVideoSettings.h and AVAudioSettings.h. Apple also provides a class that can produce an encode configuration for us: AVOutputSettingsAssistant. It can pick a good set of encoding settings, and it exposes, among other things, an outputFileType property; a sketch of its use follows.
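A minimal sketch (AVOutputSettingsPreset1280x720 is one of the framework's preset constants; `url` is the output URL from earlier):

// Let AVOutputSettingsAssistant pick mutually compatible encode settings for a preset.
AVOutputSettingsAssistant *assistant = [AVOutputSettingsAssistant outputSettingsAssistantWithPreset:AVOutputSettingsPreset1280x720];
NSDictionary *videoSettings = assistant.videoSettings; // pass to the video AVAssetWriterInput
NSDictionary *audioSettings = assistant.audioSettings; // pass to the audio AVAssetWriterInput
NSString *fileType = assistant.outputFileType;         // a matching container type, e.g. a QuickTime movie
AVAssetWriter *writer = [AVAssetWriter assetWriterWithURL:url fileType:fileType error:nil];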