iOS Video Re-encoding


Mobile video coding comes in two flavors:

  1. Software coding: video is encoded and decoded on the CPU. It is flexible but inefficient; most mobile implementations rely on FFmpeg.
  2. Hardware coding: video is encoded and decoded on the GPU or a dedicated media processor. On iOS, the Video Toolbox framework, public since iOS 8.0, exposes hardware encoding and decoding.
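As a taste of the framework (a minimal sketch, not from the original post; the dimensions, the real-time property, and the empty callback body are placeholder assumptions), creating a hardware H.264 encoder session with Video Toolbox looks roughly like this:

```objc
#import <VideoToolbox/VideoToolbox.h>

// Encoded frames are delivered asynchronously to this callback as CMSampleBuffers.
static void compressionOutputCallback(void *refCon,
                                      void *sourceFrameRefCon,
                                      OSStatus status,
                                      VTEncodeInfoFlags infoFlags,
                                      CMSampleBufferRef sampleBuffer)
{
    // Handle or write out the encoded sample buffer here.
}

VTCompressionSessionRef session = NULL;
OSStatus status = VTCompressionSessionCreate(kCFAllocatorDefault,
                                             1280, 720,               // output dimensions (assumed)
                                             kCMVideoCodecType_H264,
                                             NULL, NULL, NULL,
                                             compressionOutputCallback,
                                             NULL,
                                             &session);
if (status == noErr) {
    // Configure the session before encoding, e.g. enable real-time mode.
    VTSessionSetProperty(session, kVTCompressionPropertyKey_RealTime, kCFBooleanTrue);
    VTCompressionSessionPrepareToEncodeFrames(session);
}
```

Each raw frame is then submitted with VTCompressionSessionEncodeFrame, and the compressed output arrives in the callback.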

1. Video Toolbox Basic Data Structures

The data structures Video Toolbox works with before and after encoding/decoding:

(1) CVPixelBuffer: the image data structure before encoding and after decoding.

(2) CMTime, CMClock and CMTimebase: timestamp-related types. A CMTime stores time as a 64-bit value over a 32-bit timescale.

(3) CMBlockBuffer: the data structure holding the compressed result after encoding.

(4) CMVideoFormatDescription: describes the image storage layout, the codec, and other format details.

(5) CMSampleBuffer: the container that carries video frames both before and after encoding/decoding.
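To illustrate the timestamp types (a small sketch, not part of the original text), a CMTime is simply a rational number: a 64-bit value divided by a 32-bit timescale.

```objc
#import <CoreMedia/CoreMedia.h>

// 900 ticks at a timescale of 600 ticks per second is 1.5 seconds.
CMTime pts = CMTimeMake(900, 600);
Float64 seconds = CMTimeGetSeconds(pts);   // 1.5

// Frame durations use the same representation: 1/30 s for 30 fps video.
CMTime frameDuration = CMTimeMake(1, 30);
```

Keeping times rational avoids the rounding drift that floating-point seconds would accumulate over long tracks.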

2. Using Hardware Decoding and Encoding

  1. Create an AVAssetReader and convert the H.264 stream into pre-decode CMSampleBuffers:

    (1) Extract the SPS and PPS and generate the format description.

    (2) Wrap the raw frame data in a CMBlockBuffer.

    (3) Generate CMTime timing information as needed.

  2. Create an AVAssetWriter and configure its output and compression settings.
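Step (1) above can be sketched with Core Media's parameter-set API (a hedged sketch, not from the original post; `sps`, `spsSize`, `pps`, and `ppsSize` are assumed to have already been parsed out of the H.264 byte stream):

```objc
#import <CoreMedia/CoreMedia.h>

// Build a CMVideoFormatDescription from one SPS and one PPS NAL unit.
const uint8_t *parameterSets[2] = { sps, pps };
size_t parameterSetSizes[2] = { spsSize, ppsSize };
CMVideoFormatDescriptionRef formatDescription = NULL;
OSStatus status = CMVideoFormatDescriptionCreateFromH264ParameterSets(
    kCFAllocatorDefault,
    2,                      // one SPS and one PPS
    parameterSets,
    parameterSetSizes,
    4,                      // NAL unit length-prefix size in bytes (assumed AVCC framing)
    &formatDescription);
```

The resulting format description is what CMSampleBuffers for the decoder are tagged with.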


The code below is a condensed example that uses an asset reader and an asset writer to re-encode the first video and audio track of an asset and write the result to a new file. (In the process you can change the video dimensions, frame rate, bit rate, file format, and so on.)


                             Initial Setup

Before creating and configuring the asset reader and writer, some initial setup is required. First, create three serial dispatch queues for the read/write work.

NSString *serializationQueueDescription = [NSString stringWithFormat:@"%@ serialization queue", self];

// Create the main serialization queue.
self.mainSerializationQueue = dispatch_queue_create([serializationQueueDescription UTF8String], NULL);
NSString *rwAudioSerializationQueueDescription = [NSString stringWithFormat:@"%@ rw audio serialization queue", self];

// Create the serialization queue to use for reading and writing the audio data.
self.rwAudioSerializationQueue = dispatch_queue_create([rwAudioSerializationQueueDescription UTF8String], NULL);
NSString *rwVideoSerializationQueueDescription = [NSString stringWithFormat:@"%@ rw video serialization queue", self];

// Create the serialization queue to use for reading and writing the video data.
self.rwVideoSerializationQueue = dispatch_queue_create([rwVideoSerializationQueueDescription UTF8String], NULL);

The queue mainSerializationQueue is used to start, stop, and cancel the asset reader and writer. The other two queues serialize the reading and writing for the individual outputs/inputs.

Next, load the asset's tracks and kick off the re-encoding.

    self.asset = <#AVAsset that you want to reencode#>;
    self.cancelled = NO;
    self.outputURL = <#NSURL representing desired output URL for file generated by asset writer#>;
    // Asynchronously load the tracks of the asset you want to read.
    [self.asset loadValuesAsynchronouslyForKeys:@[@"tracks"] completionHandler:^{
         // Once the tracks have finished loading, dispatch the work to the main serialization queue.
         dispatch_async(self.mainSerializationQueue, ^{
              // Due to asynchronous nature, check to see if user has already cancelled.
              if (self.cancelled)
                   return;
              BOOL success = YES;
              NSError *localError = nil;
              // Check for success of loading the assets tracks.
              success = ([self.asset statusOfValueForKey:@"tracks" error:&localError] == AVKeyValueStatusLoaded);
              if (success)
              {
                   // If the tracks loaded successfully, make sure that no file exists at the output path for the asset writer.
                   NSFileManager *fm = [NSFileManager defaultManager];
                   NSString *localOutputPath = [self.outputURL path];
                   if ([fm fileExistsAtPath:localOutputPath])
                        success = [fm removeItemAtPath:localOutputPath error:&localError];
              }
              if (success)
                   success = [self setupAssetReaderAndAssetWriter:&localError];
              if (success)
                   success = [self startAssetReaderAndWriter:&localError];
              if (!success)
                   [self readingAndWritingDidFinishSuccessfully:success withError:localError];
         });
    }]; 

All that remains is to handle cancellation and to implement the three custom methods.


                      Initializing the Asset Reader and Writer

The custom method setupAssetReaderAndAssetWriter creates and configures the asset reader and writer. In this example, the audio is decompressed by the asset reader to Linear PCM and then compressed by the asset writer to 128 kbps AAC; the video is decompressed by the asset reader to YUV and then compressed by the asset writer to H.264:

- (BOOL)setupAssetReaderAndAssetWriter:(NSError **)outError
{
        // Create and initialize the asset reader.
     self.assetReader = [[AVAssetReader alloc] initWithAsset:self.asset error:outError];
     BOOL success = (self.assetReader != nil);
     if (success)
     {
          // If the asset reader was successfully initialized, do the same for the asset writer.
          self.assetWriter = [[AVAssetWriter alloc] initWithURL:self.outputURL fileType:AVFileTypeQuickTimeMovie error:outError];
          success = (self.assetWriter != nil);
     }

     if (success)
     {
          // If the reader and writer were successfully initialized, grab the audio and video asset tracks that will be used.
          AVAssetTrack *assetAudioTrack = nil, *assetVideoTrack = nil;
          NSArray *audioTracks = [self.asset tracksWithMediaType:AVMediaTypeAudio];
          if ([audioTracks count] > 0)
               assetAudioTrack = [audioTracks objectAtIndex:0];
          NSArray *videoTracks = [self.asset tracksWithMediaType:AVMediaTypeVideo];
          if ([videoTracks count] > 0)
               assetVideoTrack = [videoTracks objectAtIndex:0];

          if (assetAudioTrack)
          {
               // If there is an audio track to read, set the decompression settings to Linear PCM and create the asset reader output.
               NSDictionary *decompressionAudioSettings = @{ AVFormatIDKey : [NSNumber numberWithUnsignedInt:kAudioFormatLinearPCM] };
               self.assetReaderAudioOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:assetAudioTrack outputSettings:decompressionAudioSettings];
               [self.assetReader addOutput:self.assetReaderAudioOutput];
               // Then, set the compression settings to 128kbps AAC and create the asset writer input.
               AudioChannelLayout stereoChannelLayout = {
                    .mChannelLayoutTag = kAudioChannelLayoutTag_Stereo,
                    .mChannelBitmap = 0,
                    .mNumberChannelDescriptions = 0
               };
               NSData *channelLayoutAsData = [NSData dataWithBytes:&stereoChannelLayout length:offsetof(AudioChannelLayout, mChannelDescriptions)];
               NSDictionary *compressionAudioSettings = @{
                    AVFormatIDKey         : [NSNumber numberWithUnsignedInt:kAudioFormatMPEG4AAC],
                    AVEncoderBitRateKey   : [NSNumber numberWithInteger:128000],
                    AVSampleRateKey       : [NSNumber numberWithInteger:44100],
                    AVChannelLayoutKey    : channelLayoutAsData,
                    AVNumberOfChannelsKey : [NSNumber numberWithUnsignedInteger:2]
               };
               self.assetWriterAudioInput = [AVAssetWriterInput assetWriterInputWithMediaType:[assetAudioTrack mediaType] outputSettings:compressionAudioSettings];
               [self.assetWriter addInput:self.assetWriterAudioInput];
          }

          if (assetVideoTrack)
          {
               // If there is a video track to read, set the decompression settings for YUV and create the asset reader output.
               NSDictionary *decompressionVideoSettings = @{
                    (id)kCVPixelBufferPixelFormatTypeKey     : [NSNumber numberWithUnsignedInt:kCVPixelFormatType_422YpCbCr8],
                    (id)kCVPixelBufferIOSurfacePropertiesKey : [NSDictionary dictionary]
               };
               self.assetReaderVideoOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:assetVideoTrack outputSettings:decompressionVideoSettings];
               [self.assetReader addOutput:self.assetReaderVideoOutput];
               CMFormatDescriptionRef formatDescription = NULL;
               // Grab the video format descriptions from the video track and grab the first one if it exists.
               NSArray *videoFormatDescriptions = [assetVideoTrack formatDescriptions];
               if ([videoFormatDescriptions count] > 0)
                    formatDescription = (__bridge CMFormatDescriptionRef)[videoFormatDescriptions objectAtIndex:0];
               CGSize trackDimensions = {
                    .width = 0.0,
                    .height = 0.0,
               };
               // If the video track had a format description, grab the track dimensions from there. Otherwise, grab them directly from the track itself.
               if (formatDescription)
                    trackDimensions = CMVideoFormatDescriptionGetPresentationDimensions(formatDescription, false, false);
               else
                    trackDimensions = [assetVideoTrack naturalSize];
               NSDictionary *compressionSettings = nil;
               // If the video track had a format description, attempt to grab the clean aperture settings and pixel aspect ratio used by the video.
               if (formatDescription)
               {
                    NSDictionary *cleanAperture = nil;
                    NSDictionary *pixelAspectRatio = nil;
                    CFDictionaryRef cleanApertureFromCMFormatDescription = CMFormatDescriptionGetExtension(formatDescription, kCMFormatDescriptionExtension_CleanAperture);
                    if (cleanApertureFromCMFormatDescription)
                    {
                         cleanAperture = @{
                              AVVideoCleanApertureWidthKey            : (id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureWidth),
                              AVVideoCleanApertureHeightKey           : (id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureHeight),
                              AVVideoCleanApertureHorizontalOffsetKey : (id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureHorizontalOffset),
                              AVVideoCleanApertureVerticalOffsetKey   : (id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureVerticalOffset)
                         };
                    }
                    CFDictionaryRef pixelAspectRatioFromCMFormatDescription = CMFormatDescriptionGetExtension(formatDescription, kCMFormatDescriptionExtension_PixelAspectRatio);
                    if (pixelAspectRatioFromCMFormatDescription)
                    {
                         pixelAspectRatio = @{
                              AVVideoPixelAspectRatioHorizontalSpacingKey : (id)CFDictionaryGetValue(pixelAspectRatioFromCMFormatDescription, kCMFormatDescriptionKey_PixelAspectRatioHorizontalSpacing),
                              AVVideoPixelAspectRatioVerticalSpacingKey   : (id)CFDictionaryGetValue(pixelAspectRatioFromCMFormatDescription, kCMFormatDescriptionKey_PixelAspectRatioVerticalSpacing)
                         };
                    }
                    // Add whichever settings we could grab from the format description to the compression settings dictionary.
                    if (cleanAperture || pixelAspectRatio)
                    {
                         NSMutableDictionary *mutableCompressionSettings = [NSMutableDictionary dictionary];
                         if (cleanAperture)
                              [mutableCompressionSettings setObject:cleanAperture forKey:AVVideoCleanApertureKey];
                         if (pixelAspectRatio)
                              [mutableCompressionSettings setObject:pixelAspectRatio forKey:AVVideoPixelAspectRatioKey];
                              
                          // bps is the desired average bit rate in bits per second, supplied elsewhere by the caller.
                          [mutableCompressionSettings setObject:[NSNumber numberWithFloat:bps] forKey:AVVideoAverageBitRateKey];
                          [mutableCompressionSettings setObject:@(24) forKey:AVVideoExpectedSourceFrameRateKey]; // expected frame rate
                          [mutableCompressionSettings setObject:@(1) forKey:AVVideoMaxKeyFrameIntervalKey];      // keyframe interval (1 = every frame is a keyframe)
                          [mutableCompressionSettings setObject:AVVideoProfileLevelH264Main31 forKey:AVVideoProfileLevelKey];
                          compressionSettings = mutableCompressionSettings;
                     }
               }
               // Create the video settings dictionary for H.264, setting the output dimensions and codec.
               NSMutableDictionary *videoSettings = [NSMutableDictionary dictionaryWithDictionary:@{
                    AVVideoCodecKey  : AVVideoCodecH264,
                    AVVideoWidthKey  : [NSNumber numberWithDouble:trackDimensions.width],
                    AVVideoHeightKey : [NSNumber numberWithDouble:trackDimensions.height]
               }];
               // Put the compression settings into the video settings dictionary if we were able to grab them.
               if (compressionSettings)
                    [videoSettings setObject:compressionSettings forKey:AVVideoCompressionPropertiesKey];
               // Create the asset writer input and add it to the asset writer.
               self.assetWriterVideoInput = [AVAssetWriterInput assetWriterInputWithMediaType:[assetVideoTrack mediaType] outputSettings:videoSettings];
               [self.assetWriter addInput:self.assetWriterVideoInput];
          }
     }
     return success;
}

                            Re-encoding the Asset

The method startAssetReaderAndWriter drives the reading and writing of the asset:

- (BOOL)startAssetReaderAndWriter:(NSError **)outError
{
     BOOL success = YES;
     // Attempt to start the asset reader.
     success = [self.assetReader startReading];
     if (!success)
          *outError = [self.assetReader error];
     if (success)
     {
          // If the reader started successfully, attempt to start the asset writer.
          success = [self.assetWriter startWriting];
          if (!success)
               *outError = [self.assetWriter error];
     }

     if (success)
     {
          // If the asset reader and writer both started successfully, create the dispatch group where the reencoding will take place and start a sample-writing session.
          self.dispatchGroup = dispatch_group_create();
          [self.assetWriter startSessionAtSourceTime:kCMTimeZero];
          self.audioFinished = NO;
          self.videoFinished = NO;

          if (self.assetWriterAudioInput)
          {
               // If there is audio to reencode, enter the dispatch group before beginning the work.
               dispatch_group_enter(self.dispatchGroup);
               // Specify the block to execute when the asset writer is ready for audio media data, and specify the queue to call it on.
               [self.assetWriterAudioInput requestMediaDataWhenReadyOnQueue:self.rwAudioSerializationQueue usingBlock:^{
                    // Because the block is called asynchronously, check to see whether its task is complete.
                    if (self.audioFinished)
                         return;
                    BOOL completedOrFailed = NO;
                    // If the task isn't complete yet, make sure that the input is actually ready for more media data.
                    while ([self.assetWriterAudioInput isReadyForMoreMediaData] && !completedOrFailed)
                    {
                         // Get the next audio sample buffer, and append it to the output file.
                         CMSampleBufferRef sampleBuffer = [self.assetReaderAudioOutput copyNextSampleBuffer];
                         if (sampleBuffer != NULL)
                         {
                              BOOL success = [self.assetWriterAudioInput appendSampleBuffer:sampleBuffer];
                              CFRelease(sampleBuffer);
                              sampleBuffer = NULL;
                              completedOrFailed = !success;
                         }
                         else
                         {
                              completedOrFailed = YES;
                         }
                    }
                    if (completedOrFailed)
                    {
                         // Mark the input as finished, but only if we haven't already done so, and then leave the dispatch group (since the audio work has finished).
                         BOOL oldFinished = self.audioFinished;
                         self.audioFinished = YES;
                         if (oldFinished == NO)
                         {
                              [self.assetWriterAudioInput markAsFinished];
                         }
                         dispatch_group_leave(self.dispatchGroup);
                    }
               }];
          }

          if (self.assetWriterVideoInput)
          {
               // If we had video to reencode, enter the dispatch group before beginning the work.
               dispatch_group_enter(self.dispatchGroup);
               // Specify the block to execute when the asset writer is ready for video media data, and specify the queue to call it on.
               [self.assetWriterVideoInput requestMediaDataWhenReadyOnQueue:self.rwVideoSerializationQueue usingBlock:^{
                    // Because the block is called asynchronously, check to see whether its task is complete.
                    if (self.videoFinished)
                         return;
                    BOOL completedOrFailed = NO;
                    // If the task isn't complete yet, make sure that the input is actually ready for more media data.
                    while ([self.assetWriterVideoInput isReadyForMoreMediaData] && !completedOrFailed)
                    {
                         // Get the next video sample buffer, and append it to the output file.
                         CMSampleBufferRef sampleBuffer = [self.assetReaderVideoOutput copyNextSampleBuffer];
                         if (sampleBuffer != NULL)
                         {
                              BOOL success = [self.assetWriterVideoInput appendSampleBuffer:sampleBuffer];
                              CFRelease(sampleBuffer);
                              sampleBuffer = NULL;
                              completedOrFailed = !success;
                         }
                         else
                         {
                              completedOrFailed = YES;
                         }
                    }
                    if (completedOrFailed)
                    {
                         // Mark the input as finished, but only if we haven't already done so, and then leave the dispatch group (since the video work has finished).
                         BOOL oldFinished = self.videoFinished;
                         self.videoFinished = YES;
                         if (oldFinished == NO)
                         {
                              [self.assetWriterVideoInput markAsFinished];
                         }
                         dispatch_group_leave(self.dispatchGroup);
                    }
               }];
          }
          // Set up the notification that the dispatch group will send when the audio and video work have both finished.
          dispatch_group_notify(self.dispatchGroup, self.mainSerializationQueue, ^{
               BOOL finalSuccess = YES;
               NSError *finalError = nil;
               // Check to see if the work has finished due to cancellation.
               if (self.cancelled)
               {
                    // If so, cancel the reader and writer.
                    [self.assetReader cancelReading];
                    [self.assetWriter cancelWriting];
               }
               else
               {
                    // If cancellation didn't occur, first make sure that the asset reader didn't fail.
                    if ([self.assetReader status] == AVAssetReaderStatusFailed)
                    {
                         finalSuccess = NO;
                         finalError = [self.assetReader error];
                    }
                    // If the asset reader didn't fail, attempt to stop the asset writer and check for any errors.
                    if (finalSuccess)
                    {
                         finalSuccess = [self.assetWriter finishWriting];
                         if (!finalSuccess)
                              finalError = [self.assetWriter error];
                    }
               }
               // Call the method to handle completion, and pass in the appropriate parameters to indicate whether reencoding was successful.
               [self readingAndWritingDidFinishSuccessfully:finalSuccess withError:finalError];
          });
     }
     // Return success here to indicate whether the asset reader and writer were started successfully.
     return success;
}

During re-encoding, audio and video are processed on separate queues to improve performance, but both queues belong to the same dispatchGroup; once the work on both queues has finished, readingAndWritingDidFinishSuccessfully is called.


                           Handling the Result

Handle the outcome of the re-encoding and sync it to the UI:

- (void)readingAndWritingDidFinishSuccessfully:(BOOL)success withError:(NSError *)error
{
     if (!success)
     {
          // If the reencoding process failed, we need to cancel the asset reader and writer.
          [self.assetReader cancelReading];
          [self.assetWriter cancelWriting];
          dispatch_async(dispatch_get_main_queue(), ^{
               // Handle any UI tasks here related to failure.
          });
     }
     else
     {
          // Reencoding was successful, reset booleans.
          self.cancelled = NO;
          self.videoFinished = NO;
          self.audioFinished = NO;
          dispatch_async(dispatch_get_main_queue(), ^{
               // Handle any UI tasks here related to success.
          });
     }
}

Of course, the re-encoding can also be cancelled.


Because multiple serial queues are used, cancelling the re-encoding of the asset is straightforward. The following method can be wired to a "Cancel" button in the UI:

- (void)cancel
{
     // Handle cancellation asynchronously, but serialize it with the main queue.
     dispatch_async(self.mainSerializationQueue, ^{
          // If we had audio data to reencode, we need to cancel the audio work.
          if (self.assetWriterAudioInput)
          {
               // Handle cancellation asynchronously again, but this time serialize it with the audio queue.
               dispatch_async(self.rwAudioSerializationQueue, ^{
                    // Update the Boolean property indicating the task is complete and mark the input as finished if it hasn't already been marked as such.
                    BOOL oldFinished = self.audioFinished;
                    self.audioFinished = YES;
                    if (oldFinished == NO)
                    {
                         [self.assetWriterAudioInput markAsFinished];
                    }
                    // Leave the dispatch group since the audio work is finished now.
                    dispatch_group_leave(self.dispatchGroup);
               });
          }

          if (self.assetWriterVideoInput)
          {
               // Handle cancellation asynchronously again, but this time serialize it with the video queue.
               dispatch_async(self.rwVideoSerializationQueue, ^{
                    // Update the Boolean property indicating the task is complete and mark the input as finished if it hasn't already been marked as such.
                    BOOL oldFinished = self.videoFinished;
                    self.videoFinished = YES;
                    if (oldFinished == NO)
                    {
                         [self.assetWriterVideoInput markAsFinished];
                    }
                    // Leave the dispatch group, since the video work is finished now.
                    dispatch_group_leave(self.dispatchGroup);
               });
          }
          // Set the cancelled Boolean property to YES to cancel any work on the main queue as well.
          self.cancelled = YES;
     });
}


To re-encode a video from the photo library, a few additional steps are needed:

  1. Fetch the video asset

     PHVideoRequestOptions *options = [[PHVideoRequestOptions alloc] init];
     options.version = PHVideoRequestOptionsVersionOriginal;
     options.deliveryMode = PHVideoRequestOptionsDeliveryModeAutomatic;
     options.networkAccessAllowed = YES;
     [[PHImageManager defaultManager] requestAVAssetForVideo:asset options:options resultHandler:^(AVAsset *avasset, AVAudioMix *audioMix, NSDictionary *info) {
         // NSLog(@"Info:\n%@", info);
         AVURLAsset *videoAsset = (AVURLAsset *)avasset;
         NSLog(@"AVAsset URL: %@", videoAsset.URL);
         NSString *videoPath = videoAsset.URL.path;
         NSLog(@"Original video size: %llu", [[NSFileManager defaultManager] attributesOfItemAtPath:videoPath error:nil].fileSize);
         [self startExportVideoWithVideoAsset:videoAsset completion:completion];
     }];
    
  2. Configure the export session's compression preset, output type, and so on

     NSArray *presets = [AVAssetExportSession exportPresetsCompatibleWithAsset:videoAsset];
     if ([presets containsObject:AVAssetExportPreset640x480]) {
         AVAssetExportSession *session = [[AVAssetExportSession alloc]initWithAsset:videoAsset presetName:AVAssetExportPreset640x480];
         
         NSDateFormatter *formater = [[NSDateFormatter alloc] init];
         [formater setDateFormat:@"yyyy-MM-dd-HH:mm:ss"];
         NSString *outputPath = [[CACHES_FOLDER stringByAppendingPathComponent:@"video"] stringByAppendingFormat:@"/%@.mp4",[formater stringFromDate:[NSDate date]]];
         NSLog(@"video outputPath = %@",outputPath);
         session.outputURL = [NSURL fileURLWithPath:outputPath];
         
         // Optimize for network use.
         session.shouldOptimizeForNetworkUse = true;
         
         NSArray *supportedTypeArray = session.supportedFileTypes;
         if ([supportedTypeArray containsObject:AVFileTypeMPEG4]) {
             session.outputFileType = AVFileTypeMPEG4;
         } else if (supportedTypeArray.count == 0) {
             NSLog(@"No supported file types; this video cannot be exported");
             return;
         } else {
             session.outputFileType = [supportedTypeArray objectAtIndex:0];
         }
         
         if (![[NSFileManager defaultManager] fileExistsAtPath:[CACHES_FOLDER stringByAppendingPathComponent:@"video"]]) {
             [[NSFileManager defaultManager] createDirectoryAtPath:[CACHES_FOLDER stringByAppendingPathComponent:@"video"] withIntermediateDirectories:YES attributes:nil error:nil];
         }
         
         AVMutableVideoComposition *videoComposition = [self fixedCompositionWithAsset:videoAsset];
         if (videoComposition.renderSize.width) {
             // Fix the video orientation.
             session.videoComposition = videoComposition;
         }
         
         // Begin to export video to the output path asynchronously.
         [session exportAsynchronouslyWithCompletionHandler:^(void) {
             switch (session.status) {
                 case AVAssetExportSessionStatusUnknown:
                     NSLog(@"AVAssetExportSessionStatusUnknown"); break;
                 case AVAssetExportSessionStatusWaiting:
                     NSLog(@"AVAssetExportSessionStatusWaiting"); break;
                 case AVAssetExportSessionStatusExporting:
                     NSLog(@"AVAssetExportSessionStatusExporting"); break;
                 case AVAssetExportSessionStatusCompleted: {
                     NSLog(@"AVAssetExportSessionStatusCompleted");
                     dispatch_async(dispatch_get_main_queue(), ^{
                         if (completion) {
                             completion(outputPath);
                         }
                         NSLog(@"Exported video size: %llu", [[NSFileManager defaultManager] attributesOfItemAtPath:outputPath error:nil].fileSize);
                     });
                 }  break;
                 case AVAssetExportSessionStatusFailed:
                     NSLog(@"AVAssetExportSessionStatusFailed"); break;
                 default: break;
             }
         }];
     }
    
  3. Build a video composition that corrects the orientation

     - (AVMutableVideoComposition *)fixedCompositionWithAsset:(AVAsset *)videoAsset {
         AVMutableVideoComposition *videoComposition = [AVMutableVideoComposition videoComposition];
         // Determine the video's rotation.
         int degrees = [self degressFromVideoFileWithAsset:videoAsset];
         if (degrees != 0) {
             CGAffineTransform translateToCenter;
             CGAffineTransform mixedTransform = CGAffineTransformIdentity;
             videoComposition.frameDuration = CMTimeMake(1, 30);

             NSArray *tracks = [videoAsset tracksWithMediaType:AVMediaTypeVideo];
             AVAssetTrack *videoTrack = [tracks objectAtIndex:0];

             if (degrees == 90) {
                 // Rotate 90° clockwise.
                 translateToCenter = CGAffineTransformMakeTranslation(videoTrack.naturalSize.height, 0.0);
                 mixedTransform = CGAffineTransformRotate(translateToCenter, M_PI_2);
                 videoComposition.renderSize = CGSizeMake(videoTrack.naturalSize.height, videoTrack.naturalSize.width);
             } else if (degrees == 180) {
                 // Rotate 180° clockwise.
                 translateToCenter = CGAffineTransformMakeTranslation(videoTrack.naturalSize.width, videoTrack.naturalSize.height);
                 mixedTransform = CGAffineTransformRotate(translateToCenter, M_PI);
                 videoComposition.renderSize = CGSizeMake(videoTrack.naturalSize.width, videoTrack.naturalSize.height);
             } else if (degrees == 270) {
                 // Rotate 270° clockwise.
                 translateToCenter = CGAffineTransformMakeTranslation(0.0, videoTrack.naturalSize.width);
                 mixedTransform = CGAffineTransformRotate(translateToCenter, M_PI_2 * 3.0);
                 videoComposition.renderSize = CGSizeMake(videoTrack.naturalSize.height, videoTrack.naturalSize.width);
             }

             AVMutableVideoCompositionInstruction *rotateInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
             rotateInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, [videoAsset duration]);
             AVMutableVideoCompositionLayerInstruction *rotateLayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoTrack];

             [rotateLayerInstruction setTransform:mixedTransform atTime:kCMTimeZero];

             rotateInstruction.layerInstructions = @[rotateLayerInstruction];
             // Attach the orientation information to the composition.
             videoComposition.instructions = @[rotateInstruction];
         }

         return videoComposition;
     }
    
  4. Determine the video's rotation angle

     - (int)degressFromVideoFileWithAsset:(AVAsset *)asset {
         int degrees = 0;
         NSArray *tracks = [asset tracksWithMediaType:AVMediaTypeVideo];
         if ([tracks count] > 0) {
             AVAssetTrack *videoTrack = [tracks objectAtIndex:0];
             CGAffineTransform t = videoTrack.preferredTransform;
             if (t.a == 0 && t.b == 1.0 && t.c == -1.0 && t.d == 0) {
                 // Portrait
                 degrees = 90;
             } else if (t.a == 0 && t.b == -1.0 && t.c == 1.0 && t.d == 0) {
                 // PortraitUpsideDown
                 degrees = 270;
             } else if (t.a == 1.0 && t.b == 0 && t.c == 0 && t.d == 1.0) {
                 // LandscapeRight
                 degrees = 0;
             } else if (t.a == -1.0 && t.b == 0 && t.c == 0 && t.d == -1.0) {
                 // LandscapeLeft
                 degrees = 180;
             }
         }
         return degrees;
     }
    


In practice it turns out that once the exported, compressed video drops below a certain size, re-encoding it again increases rather than decreases the file size, no matter which parameters are changed; this needs further study. The approach above re-encodes an already compressed video. The steps can also be swapped: fetch the AVAsset first, re-encode it, and then compress it. In testing, both orders give the same result: the size can be reduced up to a point, after which no parameter change shrinks it further.

Reference: http://www.devzhang.cn/2016/09/20/Asset%E7%9A%84%E9%87%8D%E7%BC%96%E7%A0%81%E5%8F%8A%E5%AF%BC%E5%87%BA/
