AVFoundation Programming Guide 07: Export

Preface

If you like AVFoundation material, you can follow my column: the 《AVFoundation》 collection.
You can also follow my account.

Main Text

To read and write audiovisual assets, you must use the export APIs provided by the AVFoundation framework. The AVAssetExportSession class provides an interface for simple exporting needs, such as modifying the file format or trimming the length of an asset (see Trimming and Transcoding a Movie). For more in-depth exporting needs, use the AVAssetReader and AVAssetWriter classes.
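
For a sense of the simpler path, the following is a minimal AVAssetExportSession sketch; it assumes someAsset and outputURL are supplied by your code and uses a stock preset rather than any project-specific settings:

AVAssetExportSession *exportSession = [[AVAssetExportSession alloc] initWithAsset:someAsset presetName:AVAssetExportPresetHighestQuality];
exportSession.outputFileType = AVFileTypeQuickTimeMovie;
exportSession.outputURL = outputURL;
[exportSession exportAsynchronouslyWithCompletionHandler:^{
    if (exportSession.status == AVAssetExportSessionStatusFailed)
    {
        // Inspect exportSession.error for the reason the export failed.
    }
}];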

Use an AVAssetReader when you want to perform an operation on the contents of an asset. For example, you might read the audio track of an asset to produce a visual representation of the waveform. To produce an asset from media such as sample buffers or still images, use an AVAssetWriter object.

Note: The asset reader and writer classes are not intended to be used for real-time processing. In fact, an asset reader cannot even be used for reading from a real-time source like an HTTP live stream. However, if you are using an asset writer with a real-time data source, such as an AVCaptureOutput object, set the expectsMediaDataInRealTime property of your asset writer's inputs to YES. Setting this property to YES for a non-real-time data source will result in your files not being interleaved properly.
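
As a one-line sketch of that note, assuming assetWriterInput is an AVAssetWriterInput you created for a capture output:

// Real-time source (for example, AVCaptureVideoDataOutput): set this before writing starts.
assetWriterInput.expectsMediaDataInRealTime = YES;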

Reading an Asset

Each AVAssetReader object can be associated only with a single asset at a time, but this asset may contain multiple tracks. For this reason, you must assign concrete subclasses of the AVAssetReaderOutput class to your asset reader before you begin reading in order to configure how the media data is read. There are three concrete subclasses of the AVAssetReaderOutput base class that you can use for your asset reading needs: AVAssetReaderTrackOutput, AVAssetReaderAudioMixOutput, and AVAssetReaderVideoCompositionOutput.

Creating the Asset Reader

All that you need to initialize an AVAssetReader object is the asset that you want to read:

NSError *outError;
AVAsset *someAsset = <#AVAsset that you want to read#>;
AVAssetReader *assetReader = [AVAssetReader assetReaderWithAsset:someAsset error:&outError];
BOOL success = (assetReader != nil);

Note: Always check that the asset reader returned to you is non-nil to ensure that it has been initialized successfully. Otherwise, the error parameter (outError in the previous example) will contain the relevant error information.

Setting Up the Asset Reader Outputs

After you have created your asset reader, set up at least one output to receive the media data being read. When setting up your outputs, be sure to set the alwaysCopiesSampleData property to NO. In this way, you reap the benefits of performance improvements. In all of the examples within this chapter, this property could and should be set to NO.
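
Once an output exists (the listings below show how to create one), this is a single line:

// Skip the per-sample data copy; safe as long as you don't modify the buffers in place.
trackOutput.alwaysCopiesSampleData = NO;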

If you want to read media data from only one or more tracks and potentially convert that data to a different format, use the AVAssetReaderTrackOutput class, with a single track output object for each AVAssetTrack object that you want to read from your asset. To decompress an audio track to Linear PCM with an asset reader, set up your track output as follows:

AVAsset *localAsset = assetReader.asset;
// Get the audio track to read.
AVAssetTrack *audioTrack = [[localAsset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];
// Decompression settings for Linear PCM
NSDictionary *decompressionAudioSettings = @{ AVFormatIDKey : [NSNumber numberWithUnsignedInt:kAudioFormatLinearPCM] };
// Create the output with the audio track and decompression settings.
AVAssetReaderOutput *trackOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:audioTrack outputSettings:decompressionAudioSettings];
// Add the output to the reader if possible.
if ([assetReader canAddOutput:trackOutput])
    [assetReader addOutput:trackOutput];

Note: To read the media data from a specific asset track in the format in which it was stored, pass nil to the outputSettings parameter.
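
For instance, a passthrough output for the audio track from the previous listing would simply omit the settings, as in this sketch:

// nil output settings vend the samples in their stored (possibly compressed) format.
AVAssetReaderTrackOutput *passthroughTrackOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:audioTrack outputSettings:nil];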

You use the AVAssetReaderAudioMixOutput and AVAssetReaderVideoCompositionOutput classes to read media data that has been mixed or composited together using an AVAudioMix object or an AVVideoComposition object, respectively. Typically, these outputs are used when your asset reader is reading from an AVComposition object.

With a single audio mix output, you can read multiple audio tracks from your asset that have been mixed together using an AVAudioMix object. To specify how the audio tracks are mixed, assign the mix to the AVAssetReaderAudioMixOutput object after initialization. The following code shows how to create an audio mix output with all of the audio tracks from your asset, decompress the audio tracks to Linear PCM, and assign an audio mix object to the output. For details on how to configure an audio mix, see Editing.

AVAudioMix *audioMix = <#An AVAudioMix that specifies how the audio tracks from the AVAsset are mixed#>;
// Assumes that assetReader was initialized with an AVComposition object.
AVComposition *composition = (AVComposition *)assetReader.asset;
// Get the audio tracks to read.
NSArray *audioTracks = [composition tracksWithMediaType:AVMediaTypeAudio];
// Get the decompression settings for Linear PCM.
NSDictionary *decompressionAudioSettings = @{ AVFormatIDKey : [NSNumber numberWithUnsignedInt:kAudioFormatLinearPCM] };
// Create the audio mix output with the audio tracks and decompression settings.
AVAssetReaderOutput *audioMixOutput = [AVAssetReaderAudioMixOutput assetReaderAudioMixOutputWithAudioTracks:audioTracks audioSettings:decompressionAudioSettings];
// Associate the audio mix used to mix the audio tracks being read with the output.
audioMixOutput.audioMix = audioMix;
// Add the output to the reader if possible.
if ([assetReader canAddOutput:audioMixOutput])
    [assetReader addOutput:audioMixOutput];

Note: Passing nil for the audioSettings parameter tells the asset reader to return samples in a convenient uncompressed format. The same is true for the AVAssetReaderVideoCompositionOutput class.

The video composition output behaves in much the same way: you can read multiple video tracks from your asset that have been composited together using an AVVideoComposition object. To read the media data from multiple composited video tracks and decompress it to ARGB, set up your output as follows:

AVVideoComposition *videoComposition = <#An AVVideoComposition that specifies how the video tracks from the AVAsset are composited#>;
// Assumes assetReader was initialized with an AVComposition.
AVComposition *composition = (AVComposition *)assetReader.asset;
// Get the video tracks to read.
NSArray *videoTracks = [composition tracksWithMediaType:AVMediaTypeVideo];
// Decompression settings for ARGB.
NSDictionary *decompressionVideoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32ARGB], (id)kCVPixelBufferIOSurfacePropertiesKey : [NSDictionary dictionary] };
// Create the video composition output with the video tracks and decompression settings.
AVAssetReaderOutput *videoCompositionOutput = [AVAssetReaderVideoCompositionOutput assetReaderVideoCompositionOutputWithVideoTracks:videoTracks videoSettings:decompressionVideoSettings];
// Associate the video composition used to composite the video tracks being read with the output.
videoCompositionOutput.videoComposition = videoComposition;
// Add the output to the reader if possible.
if ([assetReader canAddOutput:videoCompositionOutput])
    [assetReader addOutput:videoCompositionOutput];

Reading the Asset's Media Data

To start reading after setting up all of the outputs you need, call the startReading method on your asset reader. Next, retrieve the media data individually from each output using the copyNextSampleBuffer method. To start up an asset reader with a single output and read all of its media samples, do the following:

// Start the asset reader up.
[self.assetReader startReading];
BOOL done = NO;
while (!done)
{
  // Copy the next sample buffer from the reader output.
  CMSampleBufferRef sampleBuffer = [self.assetReaderOutput copyNextSampleBuffer];
  if (sampleBuffer)
  {
    // Do something with sampleBuffer here.
    CFRelease(sampleBuffer);
    sampleBuffer = NULL;
  }
  else
  {
    // Find out why the asset reader output couldn't copy another sample buffer.
    if (self.assetReader.status == AVAssetReaderStatusFailed)
    {
      NSError *failureError = self.assetReader.error;
      // Handle the error here.
    }
    else
    {
      // The asset reader output has read all of its samples.
      done = YES;
    }
  }
}

Writing an Asset

The AVAssetWriter class writes media data from multiple sources to a single file of a specified file format. You don't need to associate your asset writer object with a specific asset, but you must use a separate asset writer for each output file that you want to create. Because an asset writer can write media data from multiple sources, you must create an AVAssetWriterInput object for each individual track that you want to write to the output file. Each AVAssetWriterInput object expects to receive data in the form of CMSampleBufferRef objects, but if you want to append CVPixelBufferRef objects to your asset writer input, use the AVAssetWriterInputPixelBufferAdaptor class.

Creating the Asset Writer

To create an asset writer, specify the URL for the output file and the desired file type. The following code shows how to initialize an asset writer to create a QuickTime movie:

NSError *outError;
NSURL *outputURL = <#NSURL object representing the URL where you want to save the video#>;
AVAssetWriter *assetWriter = [AVAssetWriter assetWriterWithURL:outputURL
                                                      fileType:AVFileTypeQuickTimeMovie
                                                         error:&outError];
BOOL success = (assetWriter != nil);

Setting Up the Asset Writer Inputs

For your asset writer to be able to write media data, you must set up at least one asset writer input. For example, if your source of media data is already vending media samples as CMSampleBufferRef objects, just use the AVAssetWriterInput class. To set up an asset writer input that compresses audio media data to 128 kbps AAC and connect it to your asset writer, do the following:

// Configure the channel layout as stereo.
AudioChannelLayout stereoChannelLayout = {
    .mChannelLayoutTag = kAudioChannelLayoutTag_Stereo,
    .mChannelBitmap = 0,
    .mNumberChannelDescriptions = 0
};

// Convert the channel layout object to an NSData object.
NSData *channelLayoutAsData = [NSData dataWithBytes:&stereoChannelLayout length:offsetof(AudioChannelLayout, mChannelDescriptions)];

// Get the compression settings for 128 kbps AAC.
NSDictionary *compressionAudioSettings = @{
    AVFormatIDKey         : [NSNumber numberWithUnsignedInt:kAudioFormatMPEG4AAC],
    AVEncoderBitRateKey   : [NSNumber numberWithInteger:128000],
    AVSampleRateKey       : [NSNumber numberWithInteger:44100],
    AVChannelLayoutKey    : channelLayoutAsData,
    AVNumberOfChannelsKey : [NSNumber numberWithUnsignedInteger:2]
};

// Create the asset writer input with the compression settings and specify the media type as audio.
AVAssetWriterInput *assetWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:compressionAudioSettings];
// Add the input to the writer if possible.
if ([assetWriter canAddInput:assetWriterInput])
    [assetWriter addInput:assetWriterInput];

Note: If you want the media data to be written in the format in which it was stored, pass nil in the outputSettings parameter. Pass nil only if the asset writer was initialized with a fileType of AVFileTypeQuickTimeMovie.
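
As a sketch, a passthrough input differs from the previous listing only in its settings:

// Append compressed samples as-is; valid only when writing a QuickTime movie.
AVAssetWriterInput *passthroughWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:nil];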

Your asset writer input can optionally include some metadata or specify a different transform for a particular track using the metadata and transform properties, respectively. For an asset writer input whose data source is a video track, you can maintain the video's original transform in the output file by doing the following:

AVAsset *videoAsset = <#AVAsset with at least one video track#>;
AVAssetTrack *videoAssetTrack = [[videoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
assetWriterInput.transform = videoAssetTrack.preferredTransform;

Note: Set the metadata and transform properties before you begin writing with your asset writer for them to take effect.

When writing media data to the output file, sometimes you may want to allocate pixel buffers. To do so, use the AVAssetWriterInputPixelBufferAdaptor class. For greatest efficiency, instead of adding pixel buffers that were allocated using a separate pool, use the pixel buffer pool provided by the pixel buffer adaptor. The following code creates a pixel buffer adaptor object working in the RGB domain that will use CGImage objects to create its pixel buffers:

NSDictionary *pixelBufferAttributes = @{
    (id)kCVPixelBufferCGImageCompatibilityKey : [NSNumber numberWithBool:YES],
    (id)kCVPixelBufferCGBitmapContextCompatibilityKey : [NSNumber numberWithBool:YES],
    (id)kCVPixelBufferPixelFormatTypeKey : [NSNumber numberWithInt:kCVPixelFormatType_32ARGB]
};
AVAssetWriterInputPixelBufferAdaptor *inputPixelBufferAdaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:self.assetWriterInput sourcePixelBufferAttributes:pixelBufferAttributes];
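
Once writing has started, you can draw a buffer from the adaptor's pool, fill it, and append it. The following sketch assumes cgImage and presentationTime come from your code; note that pixelBufferPool is nil until the asset writer has started writing:

CVPixelBufferRef pixelBuffer = NULL;
CVReturn result = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, inputPixelBufferAdaptor.pixelBufferPool, &pixelBuffer);
if (result == kCVReturnSuccess && pixelBuffer != NULL)
{
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    // Render cgImage into the buffer's base address with a CGBitmapContext here (details omitted).
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    // Append the buffer at the time it should appear in the movie.
    [inputPixelBufferAdaptor appendPixelBuffer:pixelBuffer withPresentationTime:presentationTime];
    CVPixelBufferRelease(pixelBuffer);
}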

Note: All AVAssetWriterInputPixelBufferAdaptor objects must be connected to a single asset writer input. That asset writer input must accept media data of type AVMediaTypeVideo.

Writing Media Data

When you have configured all of the inputs needed for your asset writer, you are ready to begin writing media data. As you did with the asset reader, initiate the writing process with a call to the startWriting method. Next, start a sample-writing session with a call to the startSessionAtSourceTime: method. All writing done by an asset writer has to occur within one of these sessions, and the time range of each session defines the time range of the media data included from within the source. For example, if your source is an asset reader that is supplying media data read from an AVAsset object and you don't want to include media data from the first half of the asset, you would do the following:

CMTime halfAssetDuration = CMTimeMultiplyByFloat64(self.asset.duration, 0.5);
[self.assetWriter startSessionAtSourceTime:halfAssetDuration];
//Implementation continues.

Normally, to conclude a writing session you must call the endSessionAtSourceTime: method. However, if your writing session goes right up to the end of your file, you can conclude it simply by calling the finishWriting method. To start up an asset writer with a single input and write all of its media data, do the following:

// Prepare the asset writer for writing.
[self.assetWriter startWriting];
// Start a sample-writing session.
[self.assetWriter startSessionAtSourceTime:kCMTimeZero];
// Specify the block to execute when the asset writer is ready for media data and the queue to call it on.
[self.assetWriterInput requestMediaDataWhenReadyOnQueue:myInputSerialQueue usingBlock:^{
 while ([self.assetWriterInput isReadyForMoreMediaData])
 {
      // Get the next sample buffer.
      CMSampleBufferRef nextSampleBuffer = [self copyNextSampleBufferToWrite];
      if (nextSampleBuffer)
      {
           // If it exists, append the next sample buffer to the output file.
           [self.assetWriterInput appendSampleBuffer:nextSampleBuffer];
           CFRelease(nextSampleBuffer);
           nextSampleBuffer = nil;
      }
      else
      {
           // Assume that lack of a next sample buffer means the sample buffer source is out of samples and mark the input as finished.
           [self.assetWriterInput markAsFinished];
           break;
      }
 }
}];

The copyNextSampleBufferToWrite method in the code above is simply a stub. The location of this stub is where you would need to insert some logic to return CMSampleBufferRef objects representing the media data that you want to write. One possible source of sample buffers is an asset reader output.
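
A minimal sketch of that stub, assuming the sample buffer source is an asset reader output held in self.assetReaderOutput, might be:

- (CMSampleBufferRef)copyNextSampleBufferToWrite
{
    // Vend the next sample from the asset reader output. Per the Copy rule,
    // the caller owns the returned buffer and must CFRelease it.
    return [self.assetReaderOutput copyNextSampleBuffer];
}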

Reencoding Assets

You can use an asset reader and asset writer object in tandem to convert an asset from one representation to another. Using these objects, you have more control over the conversion than you do with an AVAssetExportSession object. For example, you can choose which of the tracks you want to be represented in the output file, specify your own output format, or modify the asset during the conversion process. The first step in this process is just to set up your asset reader outputs and asset writer inputs as desired. After your asset reader and writer are fully configured, you start up both of them with calls to the startReading and startWriting methods, respectively. The following code snippet shows how to use a single asset writer input to write media data supplied by a single asset reader output:

NSString *serializationQueueDescription = [NSString stringWithFormat:@"%@ serialization queue", self];

// Create a serialization queue for reading and writing.
dispatch_queue_t serializationQueue = dispatch_queue_create([serializationQueueDescription UTF8String], NULL);

// Specify the block to execute when the asset writer is ready for media data and the queue to call it on.
[self.assetWriterInput requestMediaDataWhenReadyOnQueue:serializationQueue usingBlock:^{
 while ([self.assetWriterInput isReadyForMoreMediaData])
 {
      // Get the asset reader output's next sample buffer.
      CMSampleBufferRef sampleBuffer = [self.assetReaderOutput copyNextSampleBuffer];
      if (sampleBuffer != NULL)
      {
           // If it exists, append this sample buffer to the output file.
           BOOL success = [self.assetWriterInput appendSampleBuffer:sampleBuffer];
           CFRelease(sampleBuffer);
           sampleBuffer = NULL;
           // Check for errors that may have occurred when appending the new sample buffer.
           if (!success && self.assetWriter.status == AVAssetWriterStatusFailed)
           {
                NSError *failureError = self.assetWriter.error;
                //Handle the error.
           }
      }
      else
      {
           // If the next sample buffer doesn't exist, find out why the asset reader output couldn't vend another one.
           if (self.assetReader.status == AVAssetReaderStatusFailed)
           {
                NSError *failureError = self.assetReader.error;
                //Handle the error here.
           }
           else
           {
                // The asset reader output must have vended all of its samples. Mark the input as finished.
                [self.assetWriterInput markAsFinished];
                break;
           }
      }
 }
}];

Putting It All Together: Using an Asset Reader and Writer in Tandem to Reencode an Asset

This brief code example illustrates how to use an asset reader and writer to reencode the first video and audio track of an asset into a new file. It shows how to:

  • Use serialization queues to handle the asynchronous nature of reading and writing audiovisual data

  • Initialize an asset reader and configure two asset reader outputs, one for audio and one for video

  • Initialize an asset writer and configure two asset writer inputs, one for audio and one for video

  • Use an asset reader to asynchronously supply media data to an asset writer through two different output/input combinations

  • Use a dispatch group to be notified of completion of the reencoding process

  • Allow a user to cancel the reencoding process once it has begun

Note: To focus on the most relevant code, this example omits several aspects of a complete application. To use AVFoundation, you are expected to have enough experience with Cocoa to be able to infer the missing pieces.

Handling the Initial Setup

Before you create your asset reader and writer and configure their outputs and inputs, you need to handle some initial setup. The first part of this setup involves creating three separate serialization queues to coordinate the reading and writing process:

NSString *serializationQueueDescription = [NSString stringWithFormat:@"%@ serialization queue", self];

// Create the main serialization queue.
self.mainSerializationQueue = dispatch_queue_create([serializationQueueDescription UTF8String], NULL);
NSString *rwAudioSerializationQueueDescription = [NSString stringWithFormat:@"%@ rw audio serialization queue", self];

// Create the serialization queue to use for reading and writing the audio data.
self.rwAudioSerializationQueue = dispatch_queue_create([rwAudioSerializationQueueDescription UTF8String], NULL);
NSString *rwVideoSerializationQueueDescription = [NSString stringWithFormat:@"%@ rw video serialization queue", self];

// Create the serialization queue to use for reading and writing the video data.
self.rwVideoSerializationQueue = dispatch_queue_create([rwVideoSerializationQueueDescription UTF8String], NULL);

The main serialization queue is used to coordinate the starting and stopping of the asset reader and writer (perhaps due to cancellation), and the other two serialization queues are used to serialize the reading and writing by each output/input combination, with a potential cancellation.

Now that you have some serialization queues, load the tracks of your asset and begin the reencoding process.

self.asset = <#AVAsset that you want to reencode#>;
self.cancelled = NO;
self.outputURL = <#NSURL representing desired output URL for file generated by asset writer#>;
// Asynchronously load the tracks of the asset you want to read.
[self.asset loadValuesAsynchronouslyForKeys:@[@"tracks"] completionHandler:^{
 // Once the tracks have finished loading, dispatch the work to the main serialization queue.
 dispatch_async(self.mainSerializationQueue, ^{
      // Due to asynchronous nature, check to see if user has already cancelled.
      if (self.cancelled)
           return;
      BOOL success = YES;
      NSError *localError = nil;
      // Check for success of loading the asset's tracks.
      success = ([self.asset statusOfValueForKey:@"tracks" error:&localError] == AVKeyValueStatusLoaded);
      if (success)
      {
           // If the tracks loaded successfully, make sure that no file exists at the output path for the asset writer.
           NSFileManager *fm = [NSFileManager defaultManager];
           NSString *localOutputPath = [self.outputURL path];
           if ([fm fileExistsAtPath:localOutputPath])
                success = [fm removeItemAtPath:localOutputPath error:&localError];
      }
      if (success)
           success = [self setupAssetReaderAndAssetWriter:&localError];
      if (success)
           success = [self startAssetReaderAndWriter:&localError];
      if (!success)
           [self readingAndWritingDidFinishSuccessfully:success withError:localError];
 });
}];

track加载过程完成时,无论是否成功,其余的工作都将被分派到主序列化队列,以确保所有这些工作都被序列化并具有可能的取消。现在剩下的就是在上一个代码列表的末尾实现取消过程和三个自定义方法。

Initializing the Asset Reader and Writer

The custom setupAssetReaderAndAssetWriter: method initializes the reader and writer and configures two output/input combinations, one for an audio track and one for a video track. In this example, the audio is decompressed to Linear PCM using the asset reader and compressed back to 128 kbps AAC using the asset writer, and the video is decompressed to YUV using the asset reader and compressed to H.264 using the asset writer.

- (BOOL)setupAssetReaderAndAssetWriter:(NSError **)outError
{
 // Create and initialize the asset reader.
 self.assetReader = [[AVAssetReader alloc] initWithAsset:self.asset error:outError];
 BOOL success = (self.assetReader != nil);
 if (success)
 {
      // If the asset reader was successfully initialized, do the same for the asset writer.
      self.assetWriter = [[AVAssetWriter alloc] initWithURL:self.outputURL fileType:AVFileTypeQuickTimeMovie error:outError];
      success = (self.assetWriter != nil);
 }

 if (success)
 {
      // If the reader and writer were successfully initialized, grab the audio and video asset tracks that will be used.
      AVAssetTrack *assetAudioTrack = nil, *assetVideoTrack = nil;
      NSArray *audioTracks = [self.asset tracksWithMediaType:AVMediaTypeAudio];
      if ([audioTracks count] > 0)
           assetAudioTrack = [audioTracks objectAtIndex:0];
      NSArray *videoTracks = [self.asset tracksWithMediaType:AVMediaTypeVideo];
      if ([videoTracks count] > 0)
           assetVideoTrack = [videoTracks objectAtIndex:0];

      if (assetAudioTrack)
      {
           // If there is an audio track to read, set the decompression settings to Linear PCM and create the asset reader output.
           NSDictionary *decompressionAudioSettings = @{ AVFormatIDKey : [NSNumber numberWithUnsignedInt:kAudioFormatLinearPCM] };
           self.assetReaderAudioOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:assetAudioTrack outputSettings:decompressionAudioSettings];
           [self.assetReader addOutput:self.assetReaderAudioOutput];
           // Then, set the compression settings to 128kbps AAC and create the asset writer input.
           AudioChannelLayout stereoChannelLayout = {
                .mChannelLayoutTag = kAudioChannelLayoutTag_Stereo,
                .mChannelBitmap = 0,
                .mNumberChannelDescriptions = 0
           };
           NSData *channelLayoutAsData = [NSData dataWithBytes:&stereoChannelLayout length:offsetof(AudioChannelLayout, mChannelDescriptions)];
           NSDictionary *compressionAudioSettings = @{
                AVFormatIDKey         : [NSNumber numberWithUnsignedInt:kAudioFormatMPEG4AAC],
                AVEncoderBitRateKey   : [NSNumber numberWithInteger:128000],
                AVSampleRateKey       : [NSNumber numberWithInteger:44100],
                AVChannelLayoutKey    : channelLayoutAsData,
                AVNumberOfChannelsKey : [NSNumber numberWithUnsignedInteger:2]
           };
           self.assetWriterAudioInput = [AVAssetWriterInput assetWriterInputWithMediaType:[assetAudioTrack mediaType] outputSettings:compressionAudioSettings];
           [self.assetWriter addInput:self.assetWriterAudioInput];
      }

      if (assetVideoTrack)
      {
           // If there is a video track to read, set the decompression settings for YUV and create the asset reader output.
           NSDictionary *decompressionVideoSettings = @{
                (id)kCVPixelBufferPixelFormatTypeKey     : [NSNumber numberWithUnsignedInt:kCVPixelFormatType_422YpCbCr8],
                (id)kCVPixelBufferIOSurfacePropertiesKey : [NSDictionary dictionary]
           };
           self.assetReaderVideoOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:assetVideoTrack outputSettings:decompressionVideoSettings];
           [self.assetReader addOutput:self.assetReaderVideoOutput];
           CMFormatDescriptionRef formatDescription = NULL;
           // Grab the video format descriptions from the video track and grab the first one if it exists.
           NSArray *videoFormatDescriptions = [assetVideoTrack formatDescriptions];
           if ([videoFormatDescriptions count] > 0)
                 formatDescription = (__bridge CMFormatDescriptionRef)[videoFormatDescriptions objectAtIndex:0];
           CGSize trackDimensions = {
                .width = 0.0,
                .height = 0.0,
           };
           // If the video track had a format description, grab the track dimensions from there. Otherwise, grab them directly from the track itself.
           if (formatDescription)
                trackDimensions = CMVideoFormatDescriptionGetPresentationDimensions(formatDescription, false, false);
           else
                trackDimensions = [assetVideoTrack naturalSize];
           NSDictionary *compressionSettings = nil;
           // If the video track had a format description, attempt to grab the clean aperture settings and pixel aspect ratio used by the video.
           if (formatDescription)
           {
                NSDictionary *cleanAperture = nil;
                NSDictionary *pixelAspectRatio = nil;
                CFDictionaryRef cleanApertureFromCMFormatDescription = CMFormatDescriptionGetExtension(formatDescription, kCMFormatDescriptionExtension_CleanAperture);
                if (cleanApertureFromCMFormatDescription)
                {
                     cleanAperture = @{
                          AVVideoCleanApertureWidthKey            : (id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureWidth),
                          AVVideoCleanApertureHeightKey           : (id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureHeight),
                          AVVideoCleanApertureHorizontalOffsetKey : (id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureHorizontalOffset),
                          AVVideoCleanApertureVerticalOffsetKey   : (id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureVerticalOffset)
                     };
                }
                CFDictionaryRef pixelAspectRatioFromCMFormatDescription = CMFormatDescriptionGetExtension(formatDescription, kCMFormatDescriptionExtension_PixelAspectRatio);
                if (pixelAspectRatioFromCMFormatDescription)
                {
                     pixelAspectRatio = @{
                          AVVideoPixelAspectRatioHorizontalSpacingKey : (id)CFDictionaryGetValue(pixelAspectRatioFromCMFormatDescription, kCMFormatDescriptionKey_PixelAspectRatioHorizontalSpacing),
                          AVVideoPixelAspectRatioVerticalSpacingKey   : (id)CFDictionaryGetValue(pixelAspectRatioFromCMFormatDescription, kCMFormatDescriptionKey_PixelAspectRatioVerticalSpacing)
                     };
                }
                // Add whichever settings we could grab from the format description to the compression settings dictionary.
                if (cleanAperture || pixelAspectRatio)
                {
                     NSMutableDictionary *mutableCompressionSettings = [NSMutableDictionary dictionary];
                     if (cleanAperture)
                          [mutableCompressionSettings setObject:cleanAperture forKey:AVVideoCleanApertureKey];
                     if (pixelAspectRatio)
                          [mutableCompressionSettings setObject:pixelAspectRatio forKey:AVVideoPixelAspectRatioKey];
                     compressionSettings = mutableCompressionSettings;
                }
           }
           // Create the video settings dictionary for H.264.
           NSMutableDictionary *videoSettings = [@{
                AVVideoCodecKey  : AVVideoCodecH264,
                AVVideoWidthKey  : [NSNumber numberWithDouble:trackDimensions.width],
                AVVideoHeightKey : [NSNumber numberWithDouble:trackDimensions.height]
           } mutableCopy];
           // Put the compression settings into the video settings dictionary if we were able to grab them.
           if (compressionSettings)
                [videoSettings setObject:compressionSettings forKey:AVVideoCompressionPropertiesKey];
           // Create the asset writer input and add it to the asset writer.
           self.assetWriterVideoInput = [AVAssetWriterInput assetWriterInputWithMediaType:[assetVideoTrack mediaType] outputSettings:videoSettings];
           [self.assetWriter addInput:self.assetWriterVideoInput];
      }
 }
 return success;
}

Reencoding the Asset

Provided that the asset reader and writer were successfully initialized and configured, the startAssetReaderAndWriter: method described in Handling the Initial Setup is called. This method is where the actual reading and writing of the asset takes place.

- (BOOL)startAssetReaderAndWriter:(NSError **)outError
{
 BOOL success = YES;
 // Attempt to start the asset reader.
 success = [self.assetReader startReading];
 if (!success)
      *outError = [self.assetReader error];
 if (success)
 {
      // If the reader started successfully, attempt to start the asset writer.
      success = [self.assetWriter startWriting];
      if (!success)
           *outError = [self.assetWriter error];
 }

 if (success)
 {
      // If the asset reader and writer both started successfully, create the dispatch group where the reencoding will take place and start a sample-writing session.
      self.dispatchGroup = dispatch_group_create();
      [self.assetWriter startSessionAtSourceTime:kCMTimeZero];
      self.audioFinished = NO;
      self.videoFinished = NO;

      if (self.assetWriterAudioInput)
      {
           // If there is audio to reencode, enter the dispatch group before beginning the work.
           dispatch_group_enter(self.dispatchGroup);
           // Specify the block to execute when the asset writer is ready for audio media data, and specify the queue to call it on.
           [self.assetWriterAudioInput requestMediaDataWhenReadyOnQueue:self.rwAudioSerializationQueue usingBlock:^{
                // Because the block is called asynchronously, check to see whether its task is complete.
                if (self.audioFinished)
                     return;
                BOOL completedOrFailed = NO;
                // If the task isn't complete yet, make sure that the input is actually ready for more media data.
                while ([self.assetWriterAudioInput isReadyForMoreMediaData] && !completedOrFailed)
                {
                     // Get the next audio sample buffer, and append it to the output file.
                     CMSampleBufferRef sampleBuffer = [self.assetReaderAudioOutput copyNextSampleBuffer];
                     if (sampleBuffer != NULL)
                     {
                          BOOL success = [self.assetWriterAudioInput appendSampleBuffer:sampleBuffer];
                          CFRelease(sampleBuffer);
                          sampleBuffer = NULL;
                          completedOrFailed = !success;
                     }
                     else
                     {
                          completedOrFailed = YES;
                     }
                }
                if (completedOrFailed)
                {
                     // Mark the input as finished, but only if we haven't already done so, and then leave the dispatch group (since the audio work has finished).
                     BOOL oldFinished = self.audioFinished;
                     self.audioFinished = YES;
                     if (oldFinished == NO)
                     {
                          [self.assetWriterAudioInput markAsFinished];
                     }
                     dispatch_group_leave(self.dispatchGroup);
                }
           }];
      }

      if (self.assetWriterVideoInput)
      {
           // If we had video to reencode, enter the dispatch group before beginning the work.
           dispatch_group_enter(self.dispatchGroup);
           // Specify the block to execute when the asset writer is ready for video media data, and specify the queue to call it on.
           [self.assetWriterVideoInput requestMediaDataWhenReadyOnQueue:self.rwVideoSerializationQueue usingBlock:^{
                // Because the block is called asynchronously, check to see whether its task is complete.
                if (self.videoFinished)
                     return;
                BOOL completedOrFailed = NO;
                // If the task isn't complete yet, make sure that the input is actually ready for more media data.
                while ([self.assetWriterVideoInput isReadyForMoreMediaData] && !completedOrFailed)
                {
                     // Get the next video sample buffer, and append it to the output file.
                     CMSampleBufferRef sampleBuffer = [self.assetReaderVideoOutput copyNextSampleBuffer];
                     if (sampleBuffer != NULL)
                     {
                          BOOL success = [self.assetWriterVideoInput appendSampleBuffer:sampleBuffer];
                          CFRelease(sampleBuffer);
                          sampleBuffer = NULL;
                          completedOrFailed = !success;
                     }
                     else
                     {
                          completedOrFailed = YES;
                     }
                }
                if (completedOrFailed)
                {
                     // Mark the input as finished, but only if we haven't already done so, and then leave the dispatch group (since the video work has finished).
                     BOOL oldFinished = self.videoFinished;
                     self.videoFinished = YES;
                     if (oldFinished == NO)
                     {
                          [self.assetWriterVideoInput markAsFinished];
                     }
                     dispatch_group_leave(self.dispatchGroup);
                }
           }];
      }
      // Set up the notification that the dispatch group will send when the audio and video work have both finished.
      dispatch_group_notify(self.dispatchGroup, self.mainSerializationQueue, ^{
           BOOL finalSuccess = YES;
           NSError *finalError = nil;
           // Check to see if the work has finished due to cancellation.
           if (self.cancelled)
           {
                // If so, cancel the reader and writer.
                [self.assetReader cancelReading];
                [self.assetWriter cancelWriting];
           }
           else
           {
                // If cancellation didn't occur, first make sure that the asset reader didn't fail.
                if ([self.assetReader status] == AVAssetReaderStatusFailed)
                {
                     finalSuccess = NO;
                     finalError = [self.assetReader error];
                }
                // If the asset reader didn't fail, attempt to stop the asset writer and check for any errors.
                if (finalSuccess)
                {
                     finalSuccess = [self.assetWriter finishWriting];
                     if (!finalSuccess)
                          finalError = [self.assetWriter error];
                }
           }
           // Call the method to handle completion, and pass in the appropriate parameters to indicate whether reencoding was successful.
           [self readingAndWritingDidFinishSuccessfully:finalSuccess withError:finalError];
      });
 }
 // Return success here to indicate whether the asset reader and writer were started successfully.
 return success;
}

During reencoding, the audio and video tracks are asynchronously handled on individual serialization queues to increase the overall performance of the process, but both queues are contained within the same dispatch group. By placing the work for each track within the same dispatch group, the group can send a notification when all of the work is done and the success of the reencoding process can be determined.

Handling Completion

To handle the completion of the reading and writing process, the readingAndWritingDidFinishSuccessfully:withError: method is called, with parameters indicating whether or not the reencoding completed successfully. If the process did not finish successfully, the asset reader and writer are both canceled, and any UI-related tasks are dispatched to the main queue.

- (void)readingAndWritingDidFinishSuccessfully:(BOOL)success withError:(NSError *)error
{
 if (!success)
 {
      // If the reencoding process failed, we need to cancel the asset reader and writer.
      [self.assetReader cancelReading];
      [self.assetWriter cancelWriting];
      dispatch_async(dispatch_get_main_queue(), ^{
           // Handle any UI tasks here related to failure.
      });
 }
 else
 {
      // Reencoding was successful, reset booleans.
      self.cancelled = NO;
      self.videoFinished = NO;
      self.audioFinished = NO;
      dispatch_async(dispatch_get_main_queue(), ^{
           // Handle any UI tasks here related to success.
      });
 }
}

Handling Cancellation

With multiple serialization queues, you can let the users of your app cancel the reencoding process with ease. On the main serialization queue, messages are asynchronously sent to each of the asset reencoding serialization queues to cancel their reading and writing. When these two serialization queues complete their cancellation, the dispatch group sends a notification to the main serialization queue, where the cancelled property is set to YES. You might associate the cancel method in the following code listing with a button on your UI.

- (void)cancel
{
 // Handle cancellation asynchronously, but serialize it with the main queue.
 dispatch_async(self.mainSerializationQueue, ^{
      // If we had audio data to reencode, we need to cancel the audio work.
      if (self.assetWriterAudioInput)
      {
           // Handle cancellation asynchronously again, but this time serialize it with the audio queue.
           dispatch_async(self.rwAudioSerializationQueue, ^{
                // Update the Boolean property indicating the task is complete and mark the input as finished if it hasn't already been marked as such.
                BOOL oldFinished = self.audioFinished;
                self.audioFinished = YES;
                if (oldFinished == NO)
                {
                     [self.assetWriterAudioInput markAsFinished];
                }
                // Leave the dispatch group since the audio work is finished now.
                dispatch_group_leave(self.dispatchGroup);
           });
      }

      if (self.assetWriterVideoInput)
      {
           // Handle cancellation asynchronously again, but this time serialize it with the video queue.
           dispatch_async(self.rwVideoSerializationQueue, ^{
                // Update the Boolean property indicating the task is complete and mark the input as finished if it hasn't already been marked as such.
                BOOL oldFinished = self.videoFinished;
                self.videoFinished = YES;
                if (oldFinished == NO)
                {
                     [self.assetWriterVideoInput markAsFinished];
                }
                // Leave the dispatch group, since the video work is finished now.
                dispatch_group_leave(self.dispatchGroup);
           });
      }
      // Set the cancelled Boolean property to YES to cancel any work on the main queue as well.
      self.cancelled = YES;
 });
}

The Asset Output Settings Assistant

The AVOutputSettingsAssistant class aids in creating output-settings dictionaries for an asset reader or writer. This makes setup much simpler, especially for high frame rate H264 movies that have a number of specific presets. The following example shows how to use the output settings assistant:

AVOutputSettingsAssistant *outputSettingsAssistant = [AVOutputSettingsAssistant outputSettingsAssistantWithPreset:<#some preset#>];
CMFormatDescriptionRef audioFormat = [self getAudioFormat];

if (audioFormat != NULL)
    [outputSettingsAssistant setSourceAudioFormat:(CMAudioFormatDescriptionRef)audioFormat];

CMFormatDescriptionRef videoFormat = [self getVideoFormat];

if (videoFormat != NULL)
    [outputSettingsAssistant setSourceVideoFormat:(CMVideoFormatDescriptionRef)videoFormat];

CMTime assetMinVideoFrameDuration = [self getMinFrameDuration];
CMTime averageFrameDuration = [self getAvgFrameDuration];

[outputSettingsAssistant setSourceVideoAverageFrameDuration:averageFrameDuration];
[outputSettingsAssistant setSourceVideoMinFrameDuration:assetMinVideoFrameDuration];

AVAssetWriter *assetWriter = [AVAssetWriter assetWriterWithURL:<#some URL#> fileType:[outputSettingsAssistant outputFileType] error:NULL];
AVAssetWriterInput *audioInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:[outputSettingsAssistant audioSettings] sourceFormatHint:audioFormat];
AVAssetWriterInput *videoInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:[outputSettingsAssistant videoSettings] sourceFormatHint:videoFormat];
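
If you are unsure which preset to pass when creating the assistant, you can ask the class which presets the current system supports, as in this small sketch:

// Logs identifiers such as AVOutputSettingsPreset1280x720 on supported systems.
for (NSString *preset in [AVOutputSettingsAssistant availableOutputSettingsPresets])
    NSLog(@"Supported preset: %@", preset);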