GPUImage -- Video Stream Processing with AVCaptureVideoDataOutputSampleBufferDelegate

If this is your first time reading this series, start here: http://blog.csdn.net/xoxo_x/article/details/52695032

If your app only needs the beautify effect on its own preview, see: http://blog.csdn.net/xoxo_x/article/details/52743107

If you want to learn more ways to use filters, see: http://blog.csdn.net/xoxo_x/article/details/52749033

Next, we'll look at how to render the CMSampleBufferRef data and display the beautify effect.

1. Obtaining the video stream via AVCaptureVideoDataOutputSampleBufferDelegate:

#import "ViewController.h"
#import 
#import "ViewController.h"
#import 

@interface ViewController ()<AVCaptureVideoDataOutputSampleBufferDelegate>

@property (nonatomic, strong) AVCaptureVideoPreviewLayer  *preLayer;
@end

@implementation ViewController

- (void)viewDidLoad
{
    [super viewDidLoad];
    [self setupCaptureSession];
}
//Delegate callback invoked for each captured video frame
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{

}
//Set up the camera and start capturing
- (void)setupCaptureSession
{
    NSError *error = nil;

    // Create the capture session
    AVCaptureSession *session = [[AVCaptureSession alloc] init];

    // You can configure the session to produce lower-resolution video frames,
    // if your processing algorithm can handle them. Here we pick medium quality.
    session.sessionPreset = AVCaptureSessionPresetMedium;

    // Find a suitable AVCaptureDevice

    AVCaptureDevice *device;
    for(AVCaptureDevice *dev in [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo])
    {
    //Use the front camera here.
    //Change AVCaptureDevicePositionFront to AVCaptureDevicePositionBack to get the back camera.
        if([dev position]==AVCaptureDevicePositionFront)
        {
            device=dev;
            break;
        }
    }
    // Initialize an AVCaptureDeviceInput to create an input source that supplies video data to the capture session
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device
                                                                        error:&error];
    if (!input) {
        // Handle the error appropriately.
    }
    [session addInput:input];

    // AVCaptureVideoDataOutput processes uncompressed frames captured from the video. It can hand frames to many other media APIs. You receive frames through the captureOutput:didOutputSampleBuffer:fromConnection: delegate method, and you use setSampleBufferDelegate:queue: to set the sample buffer delegate and the queue on which callbacks are delivered.
    AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
    [session addOutput:output];

     // Configure the output object
    dispatch_queue_t queue = dispatch_queue_create("myQueue", NULL);
    [output setSampleBufferDelegate:self queue:queue];
    //dispatch_release(queue); // not needed under ARC (dispatch objects are ARC-managed since iOS 6)

    // Specify the pixel format for the output.
    // Note: on iOS only the pixel format key is honored here; the frame size comes from the session preset.
    output.videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                            [NSNumber numberWithInt:kCVPixelFormatType_32BGRA], (id)kCVPixelBufferPixelFormatTypeKey,
                            [NSNumber numberWithInt:640], (id)kCVPixelBufferWidthKey,
                            [NSNumber numberWithInt:480], (id)kCVPixelBufferHeightKey,
                            nil];
    //Preview layer
    self.preLayer = [AVCaptureVideoPreviewLayer layerWithSession: session];
    self.preLayer.frame = self.view.frame;
    self.preLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    [self.view.layer addSublayer:self.preLayer];

    // Start capturing
    [session startRunning];

}
@end
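
One practical note: the code above assumes the app already has camera permission. A minimal sketch of checking and requesting access first (my addition, not from the original post; on iOS 10+ you also need an NSCameraUsageDescription entry in Info.plist):

//Sketch: request camera access before calling setupCaptureSession
- (void)requestCameraAccessThenSetup
{
    AVAuthorizationStatus status = [AVCaptureDevice authorizationStatusForMediaType:AVMediaTypeVideo];
    if (status == AVAuthorizationStatusAuthorized) {
        [self setupCaptureSession];
    } else if (status == AVAuthorizationStatusNotDetermined) {
        //Ask the user; the completion handler may run on an arbitrary queue
        [AVCaptureDevice requestAccessForMediaType:AVMediaTypeVideo completionHandler:^(BOOL granted) {
            dispatch_async(dispatch_get_main_queue(), ^{
                if (granted) {
                    [self setupCaptureSession];
                }
            });
        }];
    } else {
        NSLog(@"Camera access denied or restricted");
    }
}

Calling this from viewDidLoad instead of setupCaptureSession directly avoids starting a session that can never deliver frames.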

Result:

[Figure 1: live camera preview rendered by AVCaptureVideoPreviewLayer]

2. So far no filter has been applied. Next, we add one by processing the CMSampleBufferRef data delivered through AVCaptureVideoDataOutputSampleBufferDelegate

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
//Process the CMSampleBufferRef here
}

First, let's process the data to turn the stream into displayable images, as follows:

//Delegate callback invoked for each captured video frame
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // Convert the sample buffer into a UIImage
    UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
    // mData is an NSData object; 0.5 is the JPEG compression quality
    NSData *mData = UIImageJPEGRepresentation(image, 0.5);
    // The image only shows up if we set it on the main thread
    dispatch_async(dispatch_get_main_queue(), ^{
        [self.imagev setImage:[UIImage imageWithData:mData]];
    });
    NSLog(@"output, image: %@", image);
}
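
Incidentally, the JPEG round-trip (UIImageJPEGRepresentation followed by imageWithData:) is only there to demonstrate compression; it costs CPU on every frame. A leaner variant (my sketch, not the post's original code) hands the UIImage straight to the main queue:

//Sketch: same callback without the per-frame JPEG encode/decode
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
    dispatch_async(dispatch_get_main_queue(), ^{
        self.imagev.image = image; //UIKit work stays on the main thread
    });
}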

self.imagev is a UIImageView property declared on the view controller and initialized in viewDidLoad (where we also add it to the view hierarchy, rather than doing that on the capture queue):

- (void)viewDidLoad
{
    [super viewDidLoad];
    self.imagev = [[UIImageView alloc] init];
    self.imagev.frame = CGRectMake(0, 300, 300, 200);
    self.imagev.backgroundColor = [UIColor orangeColor];
    [self.view addSubview:self.imagev];
    [self setupCaptureSession];
}

Note that self.preLayer's frame and self.imagev's frame don't overlap, so for an easier side-by-side comparison we keep the original preview layer in place.
The implementation of imageFromSampleBuffer is as follows:

// Create a UIImage from the sample buffer data
- (UIImage *) imageFromSampleBuffer:(CMSampleBufferRef) sampleBuffer
{
    // Get a CMSampleBuffer's Core Video image buffer for the media data
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    // Lock the base address of the pixel buffer
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    // Get the base address of the pixel buffer
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);

    // Get the number of bytes per row for the pixel buffer
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    // Get the pixel buffer width and height
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    // Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // Create a bitmap graphics context with the sample buffer data
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
                                                 bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    // Create a Quartz image from the pixel data in the bitmap graphics context
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);
    // Unlock the pixel buffer
    CVPixelBufferUnlockBaseAddress(imageBuffer,0);

    // Free up the context and color space
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    // Create an image object from the Quartz image
    //UIImage *image = [UIImage imageWithCGImage:quartzImage];
    UIImage *image = [UIImage imageWithCGImage:quartzImage scale:1.0f orientation:UIImageOrientationRight];

    // Release the Quartz image
    CGImageRelease(quartzImage);

    return (image);
}
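
This conversion only works because the output was configured for kCVPixelFormatType_32BGRA earlier: the kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst flags describe exactly that little-endian BGRA layout. A small defensive helper (my addition) makes the assumption explicit:

//Sketch: verify the pixel format imageFromSampleBuffer: assumes (32BGRA)
static BOOL IsBGRAPixelBuffer(CVImageBufferRef imageBuffer)
{
    return CVPixelBufferGetPixelFormatType(imageBuffer) == kCVPixelFormatType_32BGRA;
}

Call it at the top of imageFromSampleBuffer: and return nil when it fails, rather than drawing garbage.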

The result now looks like this:

[Figure 2: camera frames rendered into the UIImageView alongside the preview layer]

We can clearly see the video showing up in the UIImageView. At this point, we've succeeded.

3. We have successfully obtained the image; can we process it? Next, let's apply a filter, as in the sketch below.
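
The GPUImage version is covered in the next part of the series; as a self-contained illustration of the idea, here is a minimal Core Image sketch (my own, not the post's GPUImage code) that filters each frame before display, with CISepiaTone standing in for a real beautify filter:

#import <CoreImage/CoreImage.h>

//Sketch: filter a captured frame with Core Image before display
- (UIImage *)filteredImageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CIImage *inputImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];

    CIFilter *filter = [CIFilter filterWithName:@"CISepiaTone"];
    [filter setValue:inputImage forKey:kCIInputImageKey];
    [filter setValue:@0.8 forKey:kCIInputIntensityKey];

    //Reuse one CIContext; creating it per frame is expensive
    static CIContext *context = nil;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        context = [CIContext contextWithOptions:nil];
    });

    CGImageRef cgImage = [context createCGImage:filter.outputImage fromRect:inputImage.extent];
    UIImage *image = [UIImage imageWithCGImage:cgImage scale:1.0f orientation:UIImageOrientationRight];
    CGImageRelease(cgImage);
    return image;
}

Swap this in for imageFromSampleBuffer: in the delegate callback and the UIImageView shows the filtered stream.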
