AVFoundation-10 Summary

Overview

This AVFoundation series is based mainly on the book 《AVFoundation开发秘籍:实践掌握iOS & OSX应用的视听处理技术》. I wrote these posts to reinforce my own impression of the material, so the content stays fairly basic. The framework itself is not complicated; what matters is how familiar you are with it, and going over it again brought me a lot. While reviewing, I followed the book closely, trying to learn the author's way of thinking, writing style, and the details that are easy to overlook. My biggest takeaway is that AVFoundation contains a large number of classes that are easy to forget. It therefore pays to study them by summarizing, comparing, and associating related classes, and above all by writing plenty of code.

Media Capture

[Figure: 媒体捕捉.png]

Assets and Editing

[Figure: 资源与编辑.png]

Media Playback and Recording

[Figure: Play&Record.png]

Media Reading and Writing

[Figure: Reader&Writer.png]

Encoding and Decoding

[Figure: Encode&Decode.png]

Rendering

[Figure: Render.png]

Conversion

  • CVPixelBuffer to CGImage
/**
 Create a CGImageRef from a BGRA pixel buffer.
 The caller is responsible for releasing the returned image with CGImageRelease().

 @param pixelBuffer - a CVPixelBufferRef in kCVPixelFormatType_32BGRA format
 @return a CGImageRef (+1 retain count), or NULL on failure
 */
+ (CGImageRef)imageFromPixelBuffer:(CVPixelBufferRef)pixelBuffer
{
    // Lock the pixel buffer before touching its memory
    CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
    
    // Get the buffer geometry; use the buffer's own bytes-per-row,
    // which may include padding beyond 4 * width
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(pixelBuffer);
    
    size_t width = CVPixelBufferGetWidth(pixelBuffer);
    size_t height = CVPixelBufferGetHeight(pixelBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
    
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    
    // Little-endian BGRA maps to byte order 32 little + premultiplied first
    CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef newImage = CGBitmapContextCreateImage(newContext);
    CGContextRelease(newContext);
    
    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
    
    return newImage;
}
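A typical call site for this helper is a video-data-output capture delegate, where each frame arrives as a CMSampleBufferRef. The sketch below assumes the helper lives in a class named `ImageConverter` and that `self.previewImageView` exists; both names are illustrative, not from the original post. Since the helper follows the Create rule, the returned CGImageRef must be released.

```objectivec
- (void)captureOutput:(AVCaptureOutput *)output
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // Extract the pixel buffer that backs this video frame
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    // Convert to CGImage; we own the returned image (+1)
    CGImageRef cgImage = [ImageConverter imageFromPixelBuffer:pixelBuffer];
    UIImage *frame = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);

    // Hand the frame to the UI on the main thread
    dispatch_async(dispatch_get_main_queue(), ^{
        self.previewImageView.image = frame;
    });
}
```

Note that `UIImage` retains the underlying CGImage, so releasing `cgImage` right after wrapping it is safe.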
  • CGImage to CVPixelBuffer
/**
 Create a CVPixelBufferRef from a CGImage.
 The caller is responsible for releasing the returned buffer with CVPixelBufferRelease().

 @param image - the image to convert
 @param size - the target pixel buffer size
 @return a CVPixelBufferRef (+1 retain count) in kCVPixelFormatType_32BGRA format
 */
+ (CVPixelBufferRef)convertToCVPixelBufferRefFromImage:(CGImageRef)image withSize:(CGSize)size
{
    NSDictionary *options = @{(id)kCVPixelBufferCGImageCompatibilityKey : @YES,
                              (id)kCVPixelBufferCGBitmapContextCompatibilityKey : @YES};
    
    CVPixelBufferRef pxbuffer = NULL;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, size.width, size.height,
                                          kCVPixelFormatType_32BGRA,
                                          (__bridge CFDictionaryRef)options, &pxbuffer);
    
    NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);
    
    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
    NSParameterAssert(pxdata != NULL);
    
    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    // Use the buffer's own bytes-per-row in case Core Video added row padding
    CGContextRef context = CGBitmapContextCreate(pxdata,
                                                 size.width,
                                                 size.height,
                                                 8,
                                                 CVPixelBufferGetBytesPerRow(pxbuffer),
                                                 rgbColorSpace,
                                                 kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little);
    NSParameterAssert(context);
    
    // Draw the image scaled to fill the whole target buffer
    CGContextDrawImage(context, CGRectMake(0, 0, size.width, size.height), image);
    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);
    
    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);

    return pxbuffer;
}
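Going the other way, a buffer produced by this conversion is typically appended to an AVAssetWriterInputPixelBufferAdaptor when turning still images into a movie. A minimal sketch, assuming the helper lives in a class named `ImageConverter` and that `adaptor` and `frameIndex` are set up elsewhere (all three are illustrative names, not part of the original post):

```objectivec
UIImage *image = [UIImage imageNamed:@"photo"];
CVPixelBufferRef buffer =
    [ImageConverter convertToCVPixelBufferRefFromImage:image.CGImage
                                              withSize:CGSizeMake(1280, 720)];

if (adaptor.assetWriterInput.isReadyForMoreMediaData) {
    // Append the frame at its presentation time on a 30 fps timeline
    CMTime time = CMTimeMake(frameIndex, 30);
    [adaptor appendPixelBuffer:buffer withPresentationTime:time];
}

// The helper returns a +1 buffer (Create rule), so release it when done
CVPixelBufferRelease(buffer);
```

The adaptor copies the pixel data when appending, so releasing the buffer immediately afterwards is safe.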

References

AVFoundation开发秘籍:实践掌握iOS & OSX应用的视听处理技术

Source code: https://github.com/QinminiOS/AVFoundation
