GPUImage Video Filters

Preface

A quick overview of using GPUImage and the problems I ran into. GPUImage can be downloaded from https://github.com/BradLarson/GPUImage

Importing GPUImage

GPUImage can be brought in three ways: via CocoaPods, by importing the project directly, or as a prebuilt static library (.a).

  • Via CocoaPods
    Just add GPUImage to your Podfile; this is the simplest and most direct option.
  • Importing the GPUImage project
    This is more cumbersome and requires extra project configuration; for details see 如何正确的导入项目.
  • Static library
    The downloaded GPUImage.xcodeproj can generate the .a file directly. Note that the build differs between the simulator and a real device, and between debug and release; when generating the .a, select Generic iOS Device so the static library contains multiple instruction sets.
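For the CocoaPods route, a minimal Podfile entry looks roughly like this (the target name and platform version are illustrative, not from this post):

```ruby
# Podfile — a minimal sketch; 'MyApp' is a hypothetical target name
platform :ios, '8.0'

target 'MyApp' do
  pod 'GPUImage'
end
```

After editing the Podfile, run `pod install` and open the generated .xcworkspace.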

Modifying a GPUImage Source File

GPUImageView reads its size inside an asynchronous block, but that can only be done on the main thread, so the first launch stalls for a moment; this needs to be handled.
Declare a viewBounds property, assign it in initWithFrame:, and then use it in place of the live bounds:

- (void)recalculateViewGeometry;
{
    runSynchronouslyOnVideoProcessingQueue(^{
        CGFloat heightScaling, widthScaling;
        CGSize currentViewSize = self.viewBounds.size;
        
        //    CGFloat imageAspectRatio = inputImageSize.width / inputImageSize.height;
        //    CGFloat viewAspectRatio = currentViewSize.width / currentViewSize.height;
        
        CGRect insetRect = AVMakeRectWithAspectRatioInsideRect(inputImageSize, self.viewBounds);
        
        switch(_fillMode)
        {
            case kGPUImageFillModeStretch:
            {
                widthScaling = 1.0;
                heightScaling = 1.0;
            }; break;
            case kGPUImageFillModePreserveAspectRatio:
            {
                widthScaling = insetRect.size.width / currentViewSize.width;
                heightScaling = insetRect.size.height / currentViewSize.height;
            }; break;
            case kGPUImageFillModePreserveAspectRatioAndFill:
            {
                //            CGFloat widthHolder = insetRect.size.width / currentViewSize.width;
                widthScaling = currentViewSize.height / insetRect.size.height;
                heightScaling = currentViewSize.width / insetRect.size.width;
            }; break;
        }
        
        imageVertices[0] = -widthScaling;
        imageVertices[1] = -heightScaling;
        imageVertices[2] = widthScaling;
        imageVertices[3] = -heightScaling;
        imageVertices[4] = -widthScaling;
        imageVertices[5] = heightScaling;
        imageVertices[6] = widthScaling;
        imageVertices[7] = heightScaling;
    });
}
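The viewBounds property used above can be declared and assigned like this (a sketch of the modification; GPUImageView's real initializer also performs GL setup, represented here by the existing commonInit call):

```objc
// In GPUImageView.m — cache the bounds on the main thread at init time
@interface GPUImageView ()
@property (nonatomic, assign) CGRect viewBounds;
@end

- (id)initWithFrame:(CGRect)frame
{
    if (!(self = [super initWithFrame:frame]))
    {
        return nil;
    }
    // Captured here so recalculateViewGeometry never has to hop back to the main thread
    self.viewBounds = frame;
    [self commonInit]; // existing GPUImageView setup
    return self;
}
```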

Note: if the first frame after recording is black, first check that Other Linker Flags in Build Settings contains -fobjc-arc and -ObjC. If the screen is still black, see https://www.jianshu.com/p/c218651cc461

Starting Video Capture

Capture lets you choose the front or rear camera and configure mirroring. Setting the GPUImageView's fillMode to kGPUImageFillModePreserveAspectRatioAndFill gives a full-screen preview; since the iPhone X is not 16:9, you can add an iPhone X-specific layout.

    self.videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset1280x720 cameraPosition:AVCaptureDevicePositionBack];
    // To set hardware properties on AVCaptureDevice, such as focusMode and exposureMode, the client must first acquire a lock on the device.
    if ([_videoCamera.inputCamera lockForConfiguration:nil]) {
        if ([_videoCamera.inputCamera isFocusModeSupported:AVCaptureFocusModeContinuousAutoFocus]) { // continuous autofocus
            [_videoCamera.inputCamera setFocusMode:AVCaptureFocusModeContinuousAutoFocus];
        }
        if ([_videoCamera.inputCamera isExposureModeSupported:AVCaptureExposureModeContinuousAutoExposure]) { // continuous auto-exposure
            [_videoCamera.inputCamera setExposureMode:AVCaptureExposureModeContinuousAutoExposure];
        }
        if ([_videoCamera.inputCamera isWhiteBalanceModeSupported:AVCaptureWhiteBalanceModeContinuousAutoWhiteBalance]) { // continuous auto white balance
            [_videoCamera.inputCamera setWhiteBalanceMode:AVCaptureWhiteBalanceModeContinuousAutoWhiteBalance];
        }
        [_videoCamera.inputCamera unlockForConfiguration]; // unlock the device, committing the configuration
    }
    _videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait; // capture in portrait orientation
    [_videoCamera addAudioInputsAndOutputs]; // Adding the audio input/output mid-recording briefly stalls capture, so when audio is needed call this up front to keep recording from freezing.
    _videoCamera.horizontallyMirrorFrontFacingCamera = YES; // mirror the front camera
    _videoCamera.horizontallyMirrorRearFacingCamera = NO;   // do not mirror the rear camera
    _videoCamera.frameRate = 30;
    [_videoCamera addTarget:self.beautyFilter];
    self.filterView = [[GPUImageView alloc] initWithFrame:self.view.bounds];
    _filterView.fillMode = kGPUImageFillModePreserveAspectRatioAndFill;
    [self.beautyFilter addTarget:_filterView];
    [self.view addSubview:_filterView];
    [_videoCamera startCameraCapture]; // start the camera

The capture above applies a beautify filter by default; for the implementation see https://www.jianshu.com/p/6bdb4cb50f14

- (GPUImageBeautifyFilter *)beautyFilter {
    if(!_beautyFilter) {
        _beautyFilter = [[GPUImageBeautifyFilter alloc]init];
    }
    return _beautyFilter;
}

Switching Filters

Before switching filters, remove the previous targets and then re-add the chain (imageFilter below is the newly selected filter):

    [self.videoCamera removeAllTargets];
    [self.filterGroup removeAllTargets];
    [self.beautyFilter removeAllTargets];
    if(filterType == CameraVideoFilterNone) { // no extra filter needed
        [self.videoCamera addTarget:self.beautyFilter];
        [self.beautyFilter addTarget:_filterView];
        return ;
    }
    [self.filterGroup setInitialFilters:@[imageFilter]];
    [self.filterGroup setTerminalFilter:imageFilter];
    [self.videoCamera addTarget:self.beautyFilter];
    [self.beautyFilter addTarget:self.filterGroup];
    [self.filterGroup addTarget:_filterView];

- (GPUImageFilterGroup *)filterGroup {
    if(!_filterGroup) {
        _filterGroup = [[GPUImageFilterGroup alloc]init];
    }
    return _filterGroup;
}
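For reference, imageFilter could be produced from the selected type. A sketch follows; the CameraVideoFilterSepia and CameraVideoFilterGrayscale enum cases are hypothetical (only CameraVideoFilterNone appears in this post), while GPUImageSepiaFilter and GPUImageGrayscaleFilter are stock GPUImage filters:

```objc
// Hypothetical mapping from the selected type to a concrete GPUImage filter
- (GPUImageOutput<GPUImageInput> *)imageFilterForType:(NSInteger)filterType {
    switch (filterType) {
        case CameraVideoFilterSepia:     // hypothetical enum case
            return [[GPUImageSepiaFilter alloc] init];
        case CameraVideoFilterGrayscale: // hypothetical enum case
            return [[GPUImageGrayscaleFilter alloc] init];
        default:
            return nil; // CameraVideoFilterNone is handled before this point
    }
}
```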

Recording Video

After choosing a filter you can start recording. When recording finishes the writer must be removed, and the MovieWriter must be re-initialized before recording again.

- (void)videoStartRecording {
    NSString *pathToMovie = [[NSHomeDirectory() stringByAppendingPathComponent:@"tmp"] stringByAppendingPathComponent:@"camera_video.mp4"];
    unlink([pathToMovie UTF8String]); // If a file already exists, AVAssetWriter won't let you record new frames, so delete the old movie
    NSURL *movieURL = [NSURL fileURLWithPath:pathToMovie];
    self.movieWriter = [[GPUImageMovieWriter alloc] initWithMovieURL:movieURL size:CGSizeMake(720.0, 1280.0)];
    _movieWriter.encodingLiveVideo = YES;
    [_movieWriter setCompletionBlock:^{
        // recording finished successfully
    }];
    [_movieWriter setFailureBlock:^(NSError *error) {
        // recording failed
    }];
    if(self.selectedFilterType == CameraVideoFilterNone) {
        [self.beautyFilter addTarget:_movieWriter];
    } else {
        [self.filterGroup addTarget:_movieWriter];
    }
    _videoCamera.audioEncodingTarget = _movieWriter;
    [_movieWriter startRecording];
}

- (void)videoFinishRecording {
    if(self.selectedFilterType == CameraVideoFilterNone) {
        [self.beautyFilter removeTarget:_movieWriter];
    } else {
        [self.filterGroup removeTarget:_movieWriter];
    }
    _videoCamera.audioEncodingTarget = nil;
    [_movieWriter finishRecording];
}

Adding Filters to a Local Video

To add filters to a local video, see the demo in GPUImage-Master; the main thing to watch out for here is video rotation.
Rotation must be handled both when displaying the local video and when writing out the filtered result.
Displaying a local video with a filter applied:

// Rotation angle of the video, derived from the track's preferredTransform
- (NSUInteger)degressFromVideoFileWithURL:(NSURL *)url {
    NSUInteger degress = 0;
    AVAsset *asset = [AVAsset assetWithURL:url];
    NSArray *tracks = [asset tracksWithMediaType:AVMediaTypeVideo];
    if([tracks count] > 0) {
        AVAssetTrack *videoTrack = [tracks objectAtIndex:0];
        CGAffineTransform t = videoTrack.preferredTransform;
        if(t.a == 0 && t.b == 1.0 && t.c == -1.0 && t.d == 0){
            // Portrait
            degress = 90;
        }else if(t.a == 0 && t.b == -1.0 && t.c == 1.0 && t.d == 0){
            // PortraitUpsideDown
            degress = 270;
        }else if(t.a == 1.0 && t.b == 0 && t.c == 0 && t.d == 1.0){
            // LandscapeRight
            degress = 0;
        }else if(t.a == -1.0 && t.b == 0 && t.c == 0 && t.d == -1.0){
            // LandscapeLeft
            degress = 180;
        }
    }
    return degress;
}
- (void)setupVideo {
    NSURL *videoPathUrl = [NSURL fileURLWithPath:self.videoPath]; // path to the local video
    self.movieFile = [[GPUImageMovie alloc] initWithURL:videoPathUrl];
    _movieFile.runBenchmark = YES;
    _movieFile.playAtActualSpeed = YES;
    _movieFile.shouldRepeat = YES;
    [_movieFile addTarget:self.beautyFilter]; // add the filter
    self.filterView = [[GPUImageView alloc] initWithFrame:self.view.bounds];
    _filterView.fillMode = kGPUImageFillModePreserveAspectRatio;
    [self adjustVideoDegressWithUrl:videoPathUrl]; // adjust for the video's rotation
    [self.beautyFilter addTarget:_filterView];
    [self.view addSubview:_filterView];
    [_movieFile startProcessing];
}
// Rotate the view to match the video's recorded orientation
- (void)adjustVideoDegressWithUrl:(NSURL *)url {
    NSUInteger degress = [self degressFromVideoFileWithURL:url];
    switch (degress) {
        case 90:
            [_filterView setInputRotation:kGPUImageRotateRight atIndex:0];
            self.videoDegress = 90;
            break;
        case 180:
            [_filterView setInputRotation:kGPUImageRotate180 atIndex:0];
            self.videoDegress = 180;
            break;
        case 270:
            [_filterView setInputRotation:kGPUImageRotateLeft atIndex:0];
            self.videoDegress = 270;
            break;
        default:
            break;
    }
}

With that, the local video displays correctly; switching filters and saving work much the same as in the capture case. Note that when writing out the filtered video you must also specify the rotation angle:

[kmovieFile enableSynchronizedEncodingUsingMovieWriter:kmovieWriter];
[kmovieWriter startRecordingInOrientation:CGAffineTransformMakeRotation(self.videoDegress/180.0*M_PI)];
[kmovieFile startProcessing];
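The kmovieFile/kmovieWriter pair above can be wired up end to end roughly like this (inputURL, outputURL, the filter variable, and the output size are illustrative; the GPUImage calls are the same ones used in this post and in GPUImage's SimpleVideoFileFilter example):

```objc
// Sketch: re-encode a local video through a filter, preserving rotation
GPUImageMovie *kmovieFile = [[GPUImageMovie alloc] initWithURL:inputURL]; // inputURL: source video
GPUImageMovieWriter *kmovieWriter =
    [[GPUImageMovieWriter alloc] initWithMovieURL:outputURL               // outputURL: destination
                                             size:CGSizeMake(720.0, 1280.0)];
[kmovieFile addTarget:filter];   // filter: the selected GPUImage filter
[filter addTarget:kmovieWriter];
kmovieWriter.shouldPassthroughAudio = YES;            // copy the audio track as-is
kmovieFile.audioEncodingTarget = kmovieWriter;
[kmovieFile enableSynchronizedEncodingUsingMovieWriter:kmovieWriter];
// self.videoDegress was recorded in adjustVideoDegressWithUrl: above
[kmovieWriter startRecordingInOrientation:CGAffineTransformMakeRotation(self.videoDegress / 180.0 * M_PI)];
[kmovieFile startProcessing];
```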

Video Compression

Video can be compressed by reducing either the resolution or the bitrate. For resolution, you can simply specify the output size.
For bitrate compression, https://github.com/rs/SDAVAssetExportSession works well; for a detailed introduction see wheelsMaker's iOS视频压缩笔记.
When using it, be careful about problems caused by the video's rotation:

    // avAsset: the AVAsset to compress; zipPath: the output file path
    SDAVAssetExportSession *encoder = [[SDAVAssetExportSession alloc] initWithAsset:avAsset];
    encoder.outputFileType = AVFileTypeMPEG4;
    encoder.outputURL = [NSURL fileURLWithPath:zipPath];
    CGFloat videoWidth = 720.0;
    CGFloat videoHeight = 1280.0;
    NSArray *tracks = [avAsset tracksWithMediaType:AVMediaTypeVideo];
    AVAssetTrack *videoTrack;
    if([tracks count] > 0) {
        videoTrack = [tracks objectAtIndex:0];
        videoWidth = videoTrack.naturalSize.width;
        videoHeight = videoTrack.naturalSize.height;
    }
    if(videoTrack && [NBCameraVideoTools degressFromVideoFileWithURL:[NSURL fileURLWithPath:originalPath]]%180 != 0) {
        videoWidth = videoTrack.naturalSize.height;
        videoHeight = videoTrack.naturalSize.width;
    }
    encoder.videoSettings = @{AVVideoCodecKey: AVVideoCodecH264,
                              AVVideoWidthKey: @(videoWidth),
                              AVVideoHeightKey: @(videoHeight),
                              AVVideoCompressionPropertiesKey: @{AVVideoAverageBitRateKey: @1000000,
                                                                 AVVideoProfileLevelKey: AVVideoProfileLevelH264High40}};
    encoder.audioSettings = @{AVFormatIDKey: @(kAudioFormatMPEG4AAC),
                              AVNumberOfChannelsKey: @2,
                              AVSampleRateKey: @44100,
                              AVEncoderBitRateKey: @128000};

    [encoder exportAsynchronouslyWithCompletionHandler:^{
        if (encoder.status == AVAssetExportSessionStatusCompleted) {
            NSLog(@"Video export succeeded");
        } else if (encoder.status == AVAssetExportSessionStatusCancelled) {
            NSLog(@"Video export cancelled");
        } else {
            NSLog(@"Video export failed with error: %@ (%ld)", encoder.error.localizedDescription, (long)encoder.error.code);
        }
        dispatch_async(dispatch_get_main_queue(), ^{
            showTipsInCenter([NSString stringWithFormat:@"Finished, status: %ld", (long)encoder.status]);
        });
    }];

Miscellaneous

Other topics such as video watermarking and composition I won't cover one by one; there are plenty of approaches online and they generally work fine. Before using GPUImage, it's best to run its demo first and see how its filters are implemented and used.

Issues Encountered Later

1. A low-probability wild-pointer crash in [GPUImageMovie endProcessing].
It happens because the movie has already been set to nil (the writer is nil'ed after it), and the two live on different threads, so a write can touch a deallocated object.
For a fix, see 数数GPUImage里那些未知的坑.
2. A SEGV_ACCERR crash in [GPUImageContext presentBufferForDisplay].
This shows up after locking the screen and coming back.
The cause is that iOS does not allow OpenGL rendering once the app resigns active, so before going to the background call glFinish(), which immediately submits the commands in the buffer (whether or not it is full) to the graphics hardware, and waits for the hardware to finish executing them before returning.
Note that on iOS 11 this handler may run twice.
The fix:

runSynchronouslyOnVideoProcessingQueue(^{
    glFinish();
});
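Where to call this is not spelled out above; one option is to hook the resign-active notification (the wiring below is an assumption, not from the original post):

```objc
// Assumed wiring: register for the notification, e.g. in viewDidLoad
// [[NSNotificationCenter defaultCenter] addObserver:self
//                                          selector:@selector(appWillResignActive)
//                                              name:UIApplicationWillResignActiveNotification
//                                            object:nil];

- (void)appWillResignActive {
    runSynchronouslyOnVideoProcessingQueue(^{
        glFinish(); // wait until the GPU has drained all queued GL commands
    });
}
```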

See: GPUImage presentBufferForDisplay崩溃问题

References:
iOS开发-美颜相机、短视频(GPUImage的使用)
https://github.com/rs/SDAVAssetExportSession
iOS视频压缩笔记
如何正确的导入项目
iOS设备闪光灯的使用
源码级别对GPUImage进行剖析 以及 尝试
数数GPUImage里那些未知的坑
GPUImage presentBufferForDisplay崩溃问题
