GPUImageStillCamera: Taking Photos with the Camera
In GPUImage, GPUImageStillCamera drives the system camera and applies filters in real time. GPUImageStillCamera inherits from GPUImageVideoCamera and adds the ability to capture still photos.
GPUImageVideoCamera
Initialization method:
- (id)initWithSessionPreset:(NSString *)sessionPreset cameraPosition:(AVCaptureDevicePosition)cameraPosition
sessionPreset sets the capture resolution of the camera session. Its possible values are:
AVCaptureSessionPresetPhoto
AVCaptureSessionPresetHigh
AVCaptureSessionPresetMedium
AVCaptureSessionPresetLow
AVCaptureSessionPreset320x240
AVCaptureSessionPreset352x288
AVCaptureSessionPreset640x480
AVCaptureSessionPreset960x540
AVCaptureSessionPreset1280x720
AVCaptureSessionPreset1920x1080
AVCaptureSessionPreset3840x2160
AVCaptureSessionPresetiFrame960x540
AVCaptureSessionPresetiFrame1280x720
AVCaptureSessionPresetInputPriority
cameraPosition selects the camera device, front or back:
AVCaptureDevicePositionFront
AVCaptureDevicePositionBack
- (void)startCameraCapture;
Starts capturing.
- (void)stopCameraCapture;
Stops capturing.
- (void)rotateCamera;
Switches between the front and back cameras.
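Putting these pieces together, a minimal sketch of creating and driving a camera might look like the following (the preset, position, and variable names here are only illustrative):
GPUImageStillCamera *camera = [[GPUImageStillCamera alloc] initWithSessionPreset:AVCaptureSessionPresetPhoto cameraPosition:AVCaptureDevicePositionBack];
[camera startCameraCapture];   // start streaming frames to any attached targets
// ... later, e.g. from button actions:
[camera rotateCamera];         // switch between the front and back cameras
[camera stopCameraCapture];    // stop the capture session when done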
Rough usage steps:
1. Create the preview view, i.e. the required GPUImageView
2. Create the filter
3. Create the camera, i.e. the GPUImageStillCamera we need
4. addTarget the pieces together and start processing with startCameraCapture
5. Handle the callback data and write the photo to the album
Step 1: create the preview view, the required GPUImageView
GPUImageView *primaryView = [[GPUImageView alloc] initWithFrame:mainScreenFrame];
primaryView.autoresizingMask = UIViewAutoresizingFlexibleWidth | UIViewAutoresizingFlexibleHeight;
Step 2: create the filter; here we use GPUImageSketchFilter (a black-and-white sketch effect: edge detection with the colors inverted)
GPUImageSketchFilter *filter = [[GPUImageSketchFilter alloc] init];
Step 3: create the camera, the GPUImageStillCamera we will use
GPUImageStillCamera* stillCamera = [[GPUImageStillCamera alloc] init];
// Set the camera's output orientation
stillCamera.outputImageOrientation = UIInterfaceOrientationPortrait;
Step 4: addTarget to build the chain and start processing with startCameraCapture
[stillCamera addTarget:filter];
[filter addTarget:primaryView];
[stillCamera startCameraCapture]; // start capturing
Step 5: add a photoCaptureButton; when it is tapped, capture the filtered image and save it to the photo album.
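A minimal sketch of the button's action might look like the following, assuming stillCamera and filter from the earlier steps are kept as instance variables and that the hypothetical photoCaptureButtonTapped: method is wired to the button. It captures the filtered frame as JPEG data with GPUImageStillCamera and writes it to the album via ALAssetsLibrary:
- (void)photoCaptureButtonTapped:(id)sender
{
    // Capture the next frame, processed through the filter chain, as JPEG data
    [stillCamera capturePhotoAsJPEGProcessedUpToFilter:filter withCompletionHandler:^(NSData *processedJPEG, NSError *error) {
        if (error) {
            NSLog(@"Photo capture failed: %@", error);
            return;
        }
        // Write the JPEG data (with its capture metadata) to the saved photos album
        ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
        [library writeImageDataToSavedPhotosAlbum:processedJPEG
                                         metadata:stillCamera.currentCaptureMetadata
                                  completionBlock:^(NSURL *assetURL, NSError *saveError) {
            if (saveError) {
                NSLog(@"Saving to album failed: %@", saveError);
            }
        }];
    }];
}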
GPUImageVideoCamera: Streaming Video from the Camera
GPUImageVideoCamera is a subclass of GPUImageOutput. It provides the image data coming from the camera as source data and is usually the head of the filter (response) chain.
GPUImage uses the AVFoundation framework to capture video.
The AVCaptureSession class manages the flow of data from AV input devices to the specified outputs.
To capture images in real time, you create an AVCaptureSession, add the appropriate input (AVCaptureDeviceInput) and output (for example AVCaptureMovieFileOutput),
then call startRunning to start the flow of data from input to output, and stopRunning to stop it.
Note that startRunning takes some time to complete, so it should not be called on the main (UI) thread, to avoid blocking the UI.
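Purely as an illustration of that raw AVFoundation flow (GPUImage performs the equivalent setup internally), a sketch of such a session might look like this:
AVCaptureSession *session = [[AVCaptureSession alloc] init];
session.sessionPreset = AVCaptureSessionPreset640x480;
// Input: the default video capture device (the back camera)
NSError *error = nil;
AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
if ([session canAddInput:input]) {
    [session addInput:input];
}
// Output: write the captured video to a movie file
AVCaptureMovieFileOutput *output = [[AVCaptureMovieFileOutput alloc] init];
if ([session canAddOutput:output]) {
    [session addOutput:output];
}
// startRunning is slow, so call it off the main thread
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    [session startRunning];
});
With GPUImage all of this session management is hidden; a filtered live preview only needs a few lines: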
// Camera
GPUImageVideoCamera* videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480 cameraPosition:AVCaptureDevicePositionBack];
// Filters
GPUImageSepiaFilter *filter = [[GPUImageSepiaFilter alloc] init];
GPUImageLuminanceRangeFilter *filter1 = [[GPUImageLuminanceRangeFilter alloc]init];
// The view that displays the output
GPUImageView *filterView = (GPUImageView *)self.view;
// Chain them together
[videoCamera addTarget:filter];
[filter addTarget:filter1];
[filter1 addTarget:filterView];
// Start the camera
[videoCamera startCameraCapture];
GPUImageMovieWriter: Recording Filtered Video to a File
Writing video requires the GPUImageMovieWriter class, initialized as follows:
// movieURL is the NSURL the movie will be written to
GPUImageMovieWriter* movieWriter = [[GPUImageMovieWriter alloc] initWithMovieURL:movieURL size:CGSizeMake(480.0, 640.0)];
In fact, GPUImageView and GPUImageMovieWriter play the same role: both are video output classes, except that one renders to the screen while the other writes to a file.
The complete code:
- (void)viewDidLoad
{
    [super viewDidLoad];
    videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480 cameraPosition:AVCaptureDevicePositionBack];
    // Output orientation is portrait
    videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;
    // Filter
    filter = [[GPUImageSepiaFilter alloc] init];
    // The view that displays the output
    GPUImageView *filterView = (GPUImageView *)self.view;
    // Chain them together
    [videoCamera addTarget:filter];
    [filter addTarget:filterView];
    // Start the camera
    [videoCamera startCameraCapture];
    // Set the file URL to write to (remove any leftover file first, otherwise the writer cannot record)
    NSString *pathToMovie = [NSHomeDirectory() stringByAppendingPathComponent:@"Documents/LiveMovied.m4v"];
    [[NSFileManager defaultManager] removeItemAtPath:pathToMovie error:nil];
    movieURL = [NSURL fileURLWithPath:pathToMovie];
    movieWriter = [[GPUImageMovieWriter alloc] initWithMovieURL:movieURL size:CGSizeMake(480.0, 640.0)];
    // Encoding live video
    movieWriter.encodingLiveVideo = YES;
    [filter addTarget:movieWriter];
    // Record audio into the movie writer as well
    videoCamera.audioEncodingTarget = movieWriter;
    // Start recording after a 2-second delay
    [self performSelector:@selector(startWrite) withObject:nil afterDelay:2];
    // Stop recording after 12 seconds
    [self performSelector:@selector(stopWrite) withObject:nil afterDelay:12];
}
- (void)startWrite{
    dispatch_async(dispatch_get_main_queue(), ^{
        [movieWriter startRecording];
    });
}
- (void)stopWrite{
    dispatch_async(dispatch_get_main_queue(), ^{
        videoCamera.audioEncodingTarget = nil;
        [filter removeTarget:movieWriter];
        // Wait until the writer has finished flushing the file before saving it to the album
        [movieWriter finishRecordingWithCompletionHandler:^{
            ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
            if ([library videoAtPathIsCompatibleWithSavedPhotosAlbum:movieURL])
            {
                [library writeVideoAtPathToSavedPhotosAlbum:movieURL completionBlock:^(NSURL *assetURL, NSError *error)
                {
                    dispatch_async(dispatch_get_main_queue(), ^{
                        if (error) {
                            UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Error" message:@"Video Saving Failed"
                                                                           delegate:nil cancelButtonTitle:@"OK" otherButtonTitles:nil];
                            [alert show];
                        } else {
                            UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Video Saved" message:@"Saved To Photo Album"
                                                                           delegate:self cancelButtonTitle:@"OK" otherButtonTitles:nil];
                            [alert show];
                        }
                    });
                }];
            }
        }];
    });
}
What if we want to switch to a different filter while the movie is being written?
Add a delayed call like the following (here, 6 seconds in) together with the method below:
[self performSelector:@selector(changeFilter) withObject:nil afterDelay:6];
- (void)changeFilter{
    // Detach the old chain from both the camera and the old filter
    [videoCamera removeAllTargets];
    [filter removeAllTargets];
    // Build a new chain with a different filter and keep feeding the movie writer
    filter = [[GPUImageSobelEdgeDetectionFilter alloc] init];
    [videoCamera addTarget:filter];
    GPUImageView *filterView = (GPUImageView *)self.view;
    [filter addTarget:filterView];
    [filter addTarget:movieWriter];
}
About the M4V format:
M4V is a standard video file format created by Apple and used by the iPod, the iPhone, and the PlayStation Portable; it is based on the MPEG-4 standard. M4V is commonly used by video-on-demand websites and mobile handheld devices, and is a special type of the MP4 container. Its file extension is usually .mp4 or .m4v, its video is encoded with H.264/AVC, and its audio with AAC.
We can also drop GPUImageBeautifyFilter in place of GPUImage's built-in filters to get a real-time beautify (skin-smoothing) effect.
GPUImageBeautifyFilter is a real-time beauty filter built on top of GPUImage; for details, see the article GPUImage详细解析(三)- 实时美颜滤镜.
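As a rough sketch, assuming the GPUImageBeautifyFilter class from the article above has been added to the project, swapping it into the live-preview chain would look something like this:
// Assumes GPUImageBeautifyFilter.h/.m from the referenced article are in the project
GPUImageVideoCamera *videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480 cameraPosition:AVCaptureDevicePositionFront];
videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;
GPUImageBeautifyFilter *beautifyFilter = [[GPUImageBeautifyFilter alloc] init];
GPUImageView *filterView = (GPUImageView *)self.view;
[videoCamera addTarget:beautifyFilter];
[beautifyFilter addTarget:filterView];
[videoCamera startCameraCapture];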