GPUImage video usage

For the basic image-filtering usage, start with the official demo or other tutorials. This post covers: adding a filter to a local video (silent, same as the official demo — note that you must release resources, which I didn't do here), photo and video capture, and previewing a local video with audio (ง•̀ω•́)ง✧ (thanks to a colleague for finding https://blog.csdn.net/u011270282/article/details/50354208, although it doesn't support recording while previewing).

I have also implemented switching filters during preview and face-sticker overlays. I'm lazy, though — I may add those when I have time, or this post may simply stay unfinished.

Demo: this article's demo

If you need to play while recording, you may need to correct the video playback speed:


{
    // Do this outside of the video processing queue to not slow that down while waiting
    CMTime currentSampleTime = CMSampleBufferGetOutputPresentationTimeStamp(sampleBufferRef);
    CMTime differenceFromLastFrame = CMTimeSubtract(currentSampleTime, previousFrameTime);
    NSLog(@"%lld  %d", currentSampleTime.value, currentSampleTime.timescale);

    if (differenceFromLastFrame.value > 0) {
        CFAbsoluteTime currentActualTime = CFAbsoluteTimeGetCurrent();
        CGFloat frameTimeDifference = CMTimeGetSeconds(differenceFromLastFrame);
        CGFloat actualTimeDifference = currentActualTime - previousActualFrameTime;

        if (frameTimeDifference > actualTimeDifference) {
            // Sleep for the remaining frame interval, compensating for usleep() overshoot
            CGFloat difTime = (frameTimeDifference - actualTimeDifference) - delayoOffsetTime;
            if (difTime > 0) {
                double time = 1000000.0 * difTime;
                usleep(time);
            }
            delayoOffsetTime = CFAbsoluteTimeGetCurrent() - currentActualTime - difTime;
            if (delayoOffsetTime < 0) {
                delayoOffsetTime = 0;
            }
            NSLog(@"date:%f  %f  dif:%f  difTime:%f", frameTimeDifference, actualTimeDifference, delayoOffsetTime, difTime);
        }

        previousFrameTime = currentSampleTime;
        previousActualFrameTime = CFAbsoluteTimeGetCurrent();
    }
}

(Screenshot: where to apply the replacement)

(Screenshot: playback speed correction)

How to produce the 64-grid lookup image: take the lookup image out of GPUImage's resources and run it through exactly the same Photoshop workflow you apply to your sample photos — that's it. Only apply color adjustments; do not apply blur or any other spatial effect.

(Image: the 64-grid lookup texture)
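To load the adjusted grid at runtime, GPUImage's built-in GPUImageLookupFilter takes the lookup image as its second texture input. A minimal sketch — the file name "my_lookup.png" is a placeholder for your exported grid, and you must keep a strong reference to the picture source for the lifetime of the filter (GPUImage's own GPUImageAmatorkaFilter does the same internally):

```objective-c
// Sketch: drive GPUImageLookupFilter with a custom 8x8 (64-grid) lookup image.
// "my_lookup.png" is assumed to be the grid exported from Photoshop after
// applying the same color adjustments used on the sample photo.
GPUImagePicture *lookupSource =
    [[GPUImagePicture alloc] initWithImage:[UIImage imageNamed:@"my_lookup.png"]];
GPUImageLookupFilter *lookupFilter = [[GPUImageLookupFilter alloc] init];

// The lookup texture goes into the filter's second texture slot (index 1);
// the camera/movie source feeds slot 0 as usual.
[lookupSource addTarget:lookupFilter atTextureLocation:1];
[lookupSource processImage];

// Then chain as usual, e.g.:
// [videoCamera addTarget:lookupFilter];
// [lookupFilter addTarget:gpuPlayView];
```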

Since this post may not be updated again, here is the key method for switching filters:



//https://blog.csdn.net/u013488791/article/details/69361818
- (void)filterVideo:(NLGPUImageCustomFilter *)terminalFilter {
    if (!gpuMovieFile) {
        playerItem = [[AVPlayerItem alloc] initWithURL:videoUrl];
        player = [AVPlayer playerWithPlayerItem:playerItem];
        gpuMovieFile = [[GPUImageMovie alloc] initWithPlayerItem:playerItem];
        gpuMovieFile.delegate = self;
        gpuMovieFile.runBenchmark = YES;
        gpuMovieFile.playAtActualSpeed = YES;
    }

    AVAsset *asset = [AVAsset assetWithURL:videoUrl];
    NSArray *tracks = [asset tracksWithMediaType:AVMediaTypeVideo];
    if ([tracks count] > 0) {
        AVAssetTrack *videoTrack = [tracks objectAtIndex:0];
        CGAffineTransform t = videoTrack.preferredTransform; // this matrix carries the rotation; just convert it
        CGFloat width = videoTrack.naturalSize.width;
        CGFloat height = videoTrack.naturalSize.height;
        // Scale and translate videos rotated by 90 or 270 degrees
        if (t.a == 0 && fabs(t.b) == 1.0 && fabs(t.c) == 1.0 && t.d == 0) {
            t.b = t.b * width / height;
            t.c = t.c * width / height;
            t.tx = 0;
            t.ty = 0;
            gpuPlayView.transform = t;
        }
    }

    if (CMTimeCompare(pausedTime, player.currentItem.asset.duration) != 0 && isPlayingVideo) {
        pausedTime = player.currentTime;
    } else {
        pausedTime = CMTimeMake(0, 600);
    }

    // Filter selection
    if (![terminalFilter isKindOfClass:[NLGPUImageCustomFilter class]]) {
        // No filter was tapped (the play button was hit directly)
        if ([currentFilter isKindOfClass:[NLGPUImageCustomFilter class]]) {
            // A filter was selected previously; keep using it
            terminalFilter = [self getFilterByFilter:currentFilter];
        } else {
            // No filter has been selected yet
            terminalFilter = [[NLGPUImageNormalFilter alloc] init];
        }
    } else {
        terminalFilter = [self getFilterByFilter:terminalFilter];
    }

    [gpuMovieFile cancelProcessing];
    [gpuMovieFile removeAllTargets];
    [gpuMovieFile addTarget:terminalFilter];
    [terminalFilter addTarget:gpuPlayView]; // gpuPlayView is my GPUImageView
    [gpuMovieFile startProcessing];

    // Seek to the point where the video was paused
    if (CMTimeCompare(pausedTime, player.currentItem.asset.duration) == 0) {
        [player play];
    } else {
        [player seekToTime:pausedTime toleranceBefore:kCMTimeZero toleranceAfter:kCMTimeZero];
        [player play];
    }
}


On multi-layer view recording: I won't go into face tracking in detail here. Apple ships its own face tracking (in AVFoundation — don't use Core Image; I've seen tutorials that drive face-following straight off Core Image on the camera display layer, which is baffling). For commercial use you'll still want a third-party face tracker, as Apple's is a bit slow.
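For reference, Apple's built-in face tracking can be wired up with AVCaptureMetadataOutput on the camera's capture session (GPUImageVideoCamera exposes it as the captureSession property). A rough sketch only — converting face.bounds from normalized metadata coordinates into the preview view's coordinate space is left out:

```objective-c
// Sketch: AVFoundation face detection attached to GPUImageVideoCamera's session.
AVCaptureMetadataOutput *metadataOutput = [[AVCaptureMetadataOutput alloc] init];
[metadataOutput setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];
if ([videoCamera.captureSession canAddOutput:metadataOutput]) {
    [videoCamera.captureSession addOutput:metadataOutput];
    // metadataObjectTypes must be set AFTER the output is added to the session
    metadataOutput.metadataObjectTypes = @[AVMetadataObjectTypeFace];
}

// Delegate callback delivering the tracked faces:
- (void)captureOutput:(AVCaptureOutput *)output
didOutputMetadataObjects:(NSArray *)metadataObjects
       fromConnection:(AVCaptureConnection *)connection {
    for (AVMetadataFaceObject *face in metadataObjects) {
        // face.bounds is normalized (0..1); map it into the preview view's
        // coordinate space before positioning a sticker view over it.
    }
}
```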

For further study, see:

https://www.jianshu.com/p/095107abc7ba
