AVFoundation - Reading and Writing Media

1. AVAssetReader is used to read media samples from an AVAsset instance. It is usually configured with one or more AVAssetReaderOutput instances, and audio samples and video frames are retrieved through the copyNextSampleBuffer method. AVAssetReaderOutput is an abstract class, but the framework defines three concrete subclasses for reading decoded media samples from a specific AVAssetTrack, reading a mixed output from multiple audio tracks, or reading a composited output from multiple video tracks. Internally, an asset reader's pipeline continuously prefetches the next available samples on multiple threads, which minimizes latency when samples are requested. Even with this low-latency retrieval, it is not intended for real-time operations such as playback. AVAssetReader works only with the media samples of a single asset; to read samples from multiple file-based assets at the same time, combine them into an AVComposition, a subclass of AVAsset.

AVAsset *asset = // Asynchronously loaded video asset

AVAssetTrack *track = [[asset tracksWithMediaType:AVMediaTypeVideo] firstObject];

self.assetReader = [[AVAssetReader alloc] initWithAsset:asset error:nil];

NSDictionary *readerOutputSettings = @{(id)kCVPixelBufferPixelFormatTypeKey: @(kCVPixelFormatType_32BGRA)};

AVAssetReaderTrackOutput *trackOutput = [[AVAssetReaderTrackOutput alloc] initWithTrack:track outputSettings:readerOutputSettings];    // Reads samples from the asset's video track, decompressing the video frames into BGRA format.

[self.assetReader addOutput:trackOutput];

[self.assetReader startReading];

2. AVAssetWriter is used to encode media and write it to a container file, such as an MPEG-4 file or a QuickTime file. It is configured with one or more AVAssetWriterInput objects, which are used to append the CMSampleBuffer objects that will be written to the container. An AVAssetWriterInput can be configured to handle a specific media type, such as audio or video, and the samples appended to it produce a single AVAssetTrack in the final output. When working with an AVAssetWriterInput configured for video samples, developers often use a dedicated adapter object, AVAssetWriterInputPixelBufferAdaptor, which provides optimal performance when appending video samples wrapped as CVPixelBuffer objects. (The following creates an AVAssetWriter with the destination URL for the new file, and creates an AVAssetWriterInput with the matching media type and output settings to produce a 720p, H.264 video.)

NSURL *outputURL = // destination output URL

self.assetWriter = [[AVAssetWriter alloc] initWithURL:outputURL fileType:AVFileTypeQuickTimeMovie error:nil];

NSDictionary *writeOutputSettings = @{AVVideoCodecKey: AVVideoCodecH264, AVVideoWidthKey: @1280, AVVideoHeightKey: @720, AVVideoCompressionPropertiesKey: @{AVVideoMaxKeyFrameIntervalKey: @1, AVVideoAverageBitRateKey: @10500000, AVVideoProfileLevelKey: AVVideoProfileLevelH264Main31}};

AVAssetWriterInput *writerInput = [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeVideo outputSettings:writeOutputSettings];

[self.assetWriter addInput:writerInput];

[self.assetWriter startWriting];

Note: Compared with AVAssetExportSession, the clear advantage of AVAssetWriter is the much finer-grained control it offers over compression when encoding the output. Developers can specify settings such as the key frame interval, video bit rate, H.264 profile level, pixel aspect ratio, and clean aperture.
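
As a rough illustration of those extra controls, a compression-properties dictionary along the following lines could be supplied as the AVVideoCompressionPropertiesKey value in the output settings above. This is a hedged sketch; the pixel aspect ratio and clean aperture values are illustrative assumptions, not taken from the original settings:

NSDictionary *compressionProperties = @{
    AVVideoAverageBitRateKey: @10500000,
    AVVideoMaxKeyFrameIntervalKey: @1,
    AVVideoProfileLevelKey: AVVideoProfileLevelH264Main31,
    // Square pixels (assumed values for illustration).
    AVVideoPixelAspectRatioKey: @{AVVideoPixelAspectRatioHorizontalSpacingKey: @1,
                                  AVVideoPixelAspectRatioVerticalSpacingKey: @1},
    // Clean aperture covering the full 1280x720 frame (assumed values for illustration).
    AVVideoCleanApertureKey: @{AVVideoCleanApertureWidthKey: @1280,
                               AVVideoCleanApertureHeightKey: @720,
                               AVVideoCleanApertureHorizontalOffsetKey: @0,
                               AVVideoCleanApertureVerticalOffsetKey: @0}
};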

3. Create a new writing session to read samples from the asset and write them to the new destination. a: Call startSessionAtSourceTime: to create a new writing session, passing kCMTimeZero as the start time of the source samples. The block passed to requestMediaDataWhenReadyOnQueue:usingBlock: is invoked repeatedly whenever the input is ready to accept more samples.

dispatch_queue_t dispatchQueue = dispatch_queue_create("com.writequeue", NULL);

[self.assetWriter startSessionAtSourceTime:kCMTimeZero];    // a

[writerInput requestMediaDataWhenReadyOnQueue:dispatchQueue usingBlock:^{

    BOOL complete = NO;

    while ([writerInput isReadyForMoreMediaData] && !complete) {

        CMSampleBufferRef sampleBuffer = [trackOutput copyNextSampleBuffer];

        if (sampleBuffer) {

            BOOL result = [writerInput appendSampleBuffer:sampleBuffer];

            CFRelease(sampleBuffer);

            complete = !result;

       }else {

            [writerInput markAsFinished];

            complete = YES;

        }

    }

    if (complete) {

        [self.assetWriter finishWritingWithCompletionHandler:^{

            if (self.assetWriter.status == AVAssetWriterStatusCompleted) {

                //handle success

            }else{

            }

        }];

    }

}];

4. Creating an audio waveform. Drawing a waveform involves three steps: 1. Reading: read the audio samples to be rendered, which may require decompressing the audio data (PCM is an uncompressed audio sample format). 2. Reducing: far more samples are read than can be rendered on screen, so the reduction step must work over this sample set. This usually means dividing the samples into small bins and, for each bin, finding the maximum sample, the average of all samples, or a min/max pair. 3. Rendering: draw the reduced samples on screen, typically with the Quartz framework. If min/max pairs are used, draw a vertical line for each pair; if averages or maximum values are used, a filled waveform works well.
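
The code later in this section takes the per-bin maximum and fills a path. As a minimal sketch of the min/max-pair variant mentioned above (the pairs array, its NSValue-wrapped CGPoint encoding, and the waveColor property are hypothetical, not part of the later example):

- (void)strokeMinMaxPairs:(NSArray *)pairs inContext:(CGContextRef)context midY:(CGFloat)midY {
    // Each element packs one bin's min sample into x and its max sample into y;
    // draw one vertical line per bin from the max down to the min.
    for (NSUInteger i = 0; i < pairs.count; i++) {
        CGPoint pair = [pairs[i] CGPointValue];
        CGContextMoveToPoint(context, i, midY - pair.y);
        CGContextAddLineToPoint(context, i, midY - pair.x);
    }
    CGContextSetStrokeColorWithColor(context, self.waveColor.CGColor);
    CGContextStrokePath(context);
}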

5. Reading the audio samples. a: First perform a standard asynchronous load of the asset keys so that accessing the asset's tracks property does not block. b: Once the tracks key has loaded, call readAudioSamplesFromAsset: to read samples from the asset's audio track. c: Create a dictionary holding the decompression settings used when reading samples from the track; the samples need to be read in an uncompressed format. d: startReading lets the asset reader begin prefetching sample data. e: Each iteration begins by calling the track output's copyNextSampleBuffer method, which returns the next available buffer containing audio samples. f: The audio samples inside a CMSampleBuffer are held in a CMBlockBuffer; CMSampleBufferGetDataBuffer gives access to that block buffer. Use CMBlockBufferGetDataLength to determine its length, create a 16-bit signed integer array to hold the audio samples, and copy the samples into it, appending them to the NSData being built up. g: Call CMSampleBufferInvalidate to mark the sample buffer as processed and no longer usable, and release the CMSampleBuffer copy to free its contents.

+ (void)loadAudioSamplesFromAsset:(AVAsset *)asset completionBlock:(void (^)(NSData *sampleData))completionBlock {

    NSString *tracks = @"tracks";

    [asset loadValuesAsynchronouslyForKeys:@[tracks] completionHandler:^{ // a

        AVKeyValueStatus status = [asset statusOfValueForKey:tracks error:nil];

        NSData *sampleData = nil;

        if (status == AVKeyValueStatusLoaded) {

            sampleData = [self readAudioSamplesFromAsset:asset]; //b

        }

        dispatch_async(dispatch_get_main_queue(), ^{

            completionBlock(sampleData);    

        });

    }];

}

+ (NSData *)readAudioSamplesFromAsset:(AVAsset *)asset {

    NSError *error = nil;

    AVAssetReader *assetReader = [[AVAssetReader alloc] initWithAsset:asset error:&error];

    if (!assetReader) {

        NSLog(@"Error creating asset reader: %@", [error localizedDescription]);

        return nil;

    }

    AVAssetTrack *track = [[asset tracksWithMediaType:AVMediaTypeAudio] firstObject];

    NSDictionary *outputSettings = @{AVFormatIDKey : @(kAudioFormatLinearPCM), AVLinearPCMIsBigEndianKey: @NO, AVLinearPCMIsFloatKey: @NO, AVLinearPCMBitDepthKey: @(16)};    //c

    AVAssetReaderTrackOutput *trackOutput = [[AVAssetReaderTrackOutput alloc] initWithTrack:track outputSettings:outputSettings];

    [assetReader addOutput:trackOutput];

    [assetReader startReading];    //d

    NSMutableData *sampleData = [NSMutableData data];

    while (assetReader.status == AVAssetReaderStatusReading) {

        CMSampleBufferRef sampleBuffer = [trackOutput copyNextSampleBuffer]; //e

        if (sampleBuffer) {

            CMBlockBufferRef blockBufferRef = CMSampleBufferGetDataBuffer(sampleBuffer);    //f

            size_t length = CMBlockBufferGetDataLength(blockBufferRef);

            SInt16 sampleBytes[length];

            CMBlockBufferCopyDataBytes(blockBufferRef, 0, length, sampleBytes);

            [sampleData appendBytes:sampleBytes length:length];

            CMSampleBufferInvalidate(sampleBuffer);    // g

            CFRelease(sampleBuffer);

        }

    }

    if (assetReader.status == AVAssetReaderStatusCompleted) {

        return sampleData;

    } else {

        return nil;

    }

}

6. Reducing the audio samples. The code above extracts the entire sample set from a given audio asset; even a very small audio file can contain hundreds of thousands of samples, far more than are needed for drawing on screen, so the set must be filtered down. a: Initialize the instance with an NSData containing the audio sample data. b: Filter the data set according to the specified size constraint. (The processing has two steps: 1. Divide the samples into bins and find the maximum sample in each bin. 2. Once all bins have been processed, apply a scale factor, derived from the size constraint passed to filteredSamplesForSize:, to the samples.)

- (id)initWithData:(NSData *)sampleData {        //a

    self = [super init];

    if (self) {

        _sampleData = sampleData;

    }

    return self;

}

- (NSArray *)filteredSamplesForSize:(CGSize)size {        // b

    NSMutableArray *filteredSamples = [[NSMutableArray alloc] init];

    NSUInteger sampleCount = self.sampleData.length / sizeof(SInt16);

    NSUInteger binSize = sampleCount / size.width;

    SInt16 *bytes = (SInt16 *)self.sampleData.bytes;

    SInt16 maxSample = 0;

    for (NSUInteger i = 0; i < sampleCount; i += binSize) {

        SInt16 sampleBin[binSize];

        for (NSUInteger j = 0; j < binSize; j++) {

            sampleBin[j] = CFSwapInt16LittleToHost(bytes[i + j]);

        }

        SInt16 value = [self maxValueInArray:sampleBin ofSize:binSize];

        [filteredSamples addObject:@(value)];

        if (value > maxSample) {

            maxSample = value;

        }

    }

    CGFloat scaleFactor = (size.height / 2) / maxSample;    // scale the bin maxima so the largest reaches half the view height

    for (NSUInteger i = 0; i < filteredSamples.count; i++) {

        filteredSamples[i] = @([filteredSamples[i] integerValue] * scaleFactor);

    }

    return filteredSamples;

}

- (SInt16)maxValueInArray:(SInt16[])values ofSize:(NSUInteger)size {

    SInt16 maxValue = 0;

    for (NSUInteger i = 0; i < size; i++) {

        if (abs(values[i]) > maxValue) {

            maxValue = abs(values[i]);

        }

    }

    return maxValue;

}

7. Rendering the audio samples. a: To draw the lower half of the waveform, apply translate and scale transforms to the upper-half path. This flips the upper-half path downward, filling out the full waveform.

@implementation WaveView

- (void)setAsset:(AVAsset *)asset {

    if (_asset != asset) {

        _asset = asset;

        [SampleDataProvider loadAudioSamplesFromAsset:self.asset completionBlock:^(NSData *sampleData) {    // Load the audio samples

            self.filter = [[SampleDataFilter alloc] initWithData:sampleData];

            [self setNeedsDisplay];

        }];

    }

}

- (void)drawRect:(CGRect)rect {

    CGContextRef context = UIGraphicsGetCurrentContext();

    CGContextScaleCTM(context, THWidthScaling, THHeightScaling);

    CGFloat xOffset = self.bounds.size.width - (self.bounds.size.width * THWidthScaling);

    CGFloat yOffset = self.bounds.size.height - (self.bounds.size.height * THHeightScaling);

    CGContextTranslateCTM(context, xOffset / 2, yOffset / 2);

    NSArray *filteredSamples = [self.filter filteredSamplesForSize:self.bounds.size];

    CGFloat midY = CGRectGetMidY(rect);

    CGMutablePathRef halfPath = CGPathCreateMutable();

    CGPathMoveToPoint(halfPath, NULL, 0, midY);

    for (NSUInteger i = 0; i < filteredSamples.count; i++) {

        float sample = [filteredSamples[i] floatValue];

        CGPathAddLineToPoint(halfPath, NULL, i, midY - sample);

    }

    CGPathAddLineToPoint(halfPath, NULL, filteredSamples.count, midY);

    CGMutablePathRef fullPath = CGPathCreateMutable();

    CGPathAddPath(fullPath, NULL, halfPath);   

    CGAffineTransform transform = CGAffineTransformIdentity;    // a

    transform = CGAffineTransformTranslate(transform, 0, CGRectGetHeight(rect));

    transform = CGAffineTransformScale(transform, 1.0, -1.0);

    CGPathAddPath(fullPath, &transform, halfPath);

    CGContextAddPath(context, fullPath);

    CGContextSetFillColorWithColor(context, self.waveColor.CGColor);

    CGContextDrawPath(context, kCGPathFill);

    CGPathRelease(halfPath);

    CGPathRelease(fullPath);

}

8. An advanced approach to capture recording. Earlier we saw how the CVPixelBuffer objects captured by AVCaptureVideoDataOutput can be rendered as OpenGL ES textures, but doing so gives up the convenience of AVCaptureMovieFileOutput for recording the output. The following uses AVAssetWriter to record the output from the high-level capture outputs.

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {

    if (captureOutput == self.videoDataOutput) {

        CVPixelBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer); // Get the underlying CVPixelBuffer from the sample buffer

        CIImage *sourceImage = [CIImage imageWithCVPixelBuffer:imageBuffer options:nil];    // Create a new CIImage from the CVPixelBuffer.

        [self.imageTarget setImage:sourceImage];

    }

}
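
To actually record, the same delegate callback also needs to hand each audio and video sample buffer to the movie writer configured below. A minimal sketch, assuming a hypothetical movieWriter property that implements the startWriting / processSampleBuffer: / stopWriting methods that follow:

// Inside captureOutput:didOutputSampleBuffer:fromConnection:, before the display path:
[self.movieWriter processSampleBuffer:sampleBuffer];    // forward every sample buffer to the writer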

- (void)startWriting {        // Configure the AVAssetWriter

    dispatch_async(self.dispatchQueue, ^{

        NSError *error = nil;

        NSString *fileType = AVFileTypeQuickTimeMovie;

        self.assetWriter = [AVAssetWriter assetWriterWithURL:[self outputURL] fileType:fileType error:&error];

        if (!self.assetWriter || error) {

            return;        

        }

        self.assetWriterVideoInput = [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeVideo outputSettings:self.videoSettings];

        self.assetWriterVideoInput.expectsMediaDataInRealTime = YES;

        UIDeviceOrientation ori = [UIDevice currentDevice].orientation;

        self.assetWriterVideoInput.transform = THTransformForDeviceOrientation(ori);

        NSDictionary *attributes = @{(id)kCVPixelBufferPixelFormatTypeKey: @(kCVPixelFormatType_32BGRA), (id)kCVPixelBufferWidthKey: self.videoSettings[AVVideoWidthKey], (id)kCVPixelBufferHeightKey: self.videoSettings[AVVideoHeightKey], (id)kCVPixelFormatOpenGLESCompatibility: (id)kCFBooleanTrue};

        self.assetWriterInputPixelBufferAdaptor = [[AVAssetWriterInputPixelBufferAdaptor alloc] initWithAssetWriterInput:self.assetWriterVideoInput sourcePixelBufferAttributes: attributes]; 

        if ([self.assetWriter canAddInput:self.assetWriterVideoInput]) {

            [self.assetWriter addInput:self.assetWriterVideoInput];

        }else {

            return;

        }

        self.assetWriterAudioInput = [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeAudio outputSettings:self.audioSettings];

        self.assetWriterAudioInput.expectsMediaDataInRealTime = YES;

        if ([self.assetWriter canAddInput:self.assetWriterAudioInput]){

            [self.assetWriter addInput:self.assetWriterAudioInput];

        }else{

            return;

        }

        self.isWriting = YES;

        self.firstSample = YES;        // Once this is set to YES, samples can start being appended.

    });

}
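
startWriting references two helpers that are not shown in this section: [self outputURL] and THTransformForDeviceOrientation(). A minimal sketch of what they might look like (assumed implementations, not taken from the original):

// Assumed helper: build a temporary destination URL, removing any leftover file first.
- (NSURL *)outputURL {
    NSString *filePath = [NSTemporaryDirectory() stringByAppendingPathComponent:@"movie.mov"];
    if ([[NSFileManager defaultManager] fileExistsAtPath:filePath]) {
        [[NSFileManager defaultManager] removeItemAtPath:filePath error:nil];
    }
    return [NSURL fileURLWithPath:filePath];
}

// Assumed helper: rotate the written video track so it plays back upright for the
// device orientation in effect when recording started.
static CGAffineTransform THTransformForDeviceOrientation(UIDeviceOrientation orientation) {
    switch (orientation) {
        case UIDeviceOrientationLandscapeRight:
            return CGAffineTransformMakeRotation(M_PI);
        case UIDeviceOrientationPortraitUpsideDown:
            return CGAffineTransformMakeRotation(M_PI_2 * 3);
        case UIDeviceOrientationPortrait:
        case UIDeviceOrientationFaceUp:
        case UIDeviceOrientationFaceDown:
            return CGAffineTransformMakeRotation(M_PI_2);
        default:    // UIDeviceOrientationLandscapeLeft matches the camera's native orientation
            return CGAffineTransformIdentity;
    }
}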

9. Implement the processSampleBuffer: method, where we append the CMSampleBuffer objects obtained from the capture outputs.

- (void)processSampleBuffer:(CMSampleBufferRef)sampleBuffer {

    if (!self.isWriting) {

        return;

    }

    CMFormatDescriptionRef formatDesc = CMSampleBufferGetFormatDescription(sampleBuffer);

    CMMediaType mediaType = CMFormatDescriptionGetMediaType(formatDesc);

    if (mediaType == kCMMediaType_Video) {

        CMTime timestamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);

        if (self.firstSample) {

            if ([self.assetWriter startWriting]) {

                [self.assetWriter startSessionAtSourceTime:timestamp];

            }else{

                NSLog(@"Failed to start writing");

            }

            self.firstSample = NO;

         }

        CVPixelBufferRef outputRenderBuffer = NULL;

        CVPixelBufferPoolRef pixelBufferPool = self.assetWriterInputPixelBufferAdaptor.pixelBufferPool;

        CVReturn err = CVPixelBufferPoolCreatePixelBuffer(NULL, pixelBufferPool, &outputRenderBuffer);

        if(err){

            return;

        }

        CVPixelBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer); // Get the current video sample's CVPixelBuffer, then create a new CIImage from it.

        CIImage *sourceImage = [CIImage imageWithCVPixelBuffer:imageBuffer options:nil];

        [self.activeFilter setValue:sourceImage forKey:kCIInputImageKey];

        CIImage *filteredImage = self.activeFilter.outputImage;

        if (!filteredImage) {

            filteredImage = sourceImage;

        }

        [self.ciContext render:filteredImage toCVPixelBuffer:outputRenderBuffer bounds:filteredImage.extent colorSpace:self.colorSpace]; // Render the filtered CIImage output into the newly created CVPixelBuffer

        if (self.assetWriterVideoInput.readyForMoreMediaData) {    // If the video input's readyForMoreMediaData is YES, append the pixel buffer along with the current sample's presentation time to the AVAssetWriterInputPixelBufferAdaptor. Processing of the current video sample is then complete, so release the buffer.

            if (![self.assetWriterInputPixelBufferAdaptor appendPixelBuffer:outputRenderBuffer withPresentationTime:timestamp]) {

                NSLog(@"Error appending pixel buffer");

            }

        }

        CVPixelBufferRelease(outputRenderBuffer);

    }else if (!self.firstSample && mediaType == kCMMediaType_Audio) {

        if (self.assetWriterAudioInput.isReadyForMoreMediaData) {

            if (![self.assetWriterAudioInput appendSampleBuffer:sampleBuffer]){

                    NSLog("Error appending audio sample buffer.");

            }

        } 

    }

}

The stopWriting method:

- (void)stopWriting {

    self.isWriting = NO;                // With the flag set to NO, processSampleBuffer: stops appending samples.

    dispatch_async(self.dispatchQueue, ^{

        [self.assetWriter finishWritingWithCompletionHandler:^{ // Terminate the writing session and close the file on disk.

            if (self.assetWriter.status == AVAssetWriterStatusCompleted) {

                dispatch_async(dispatch_get_main_queue(), ^{    

                    NSURL *fileURL = [self.assetWriter outputURL];

                    [self.delegate didWriteMovieAtURL:fileURL];

                });

            }else {

                NSLog(@"Failed to write movie");

            }

        }];

    });

}
