As of November 13, 2019, the ffmpeg official site lists 233 video filters.
This article, using ffmpeg 4.1 as the reference, introduces the purpose and usage of each video filter. For every filter the goal is a runnable command-line example with reference code.
Mark one or more regions of the video as regions of interest (ROI). The frame data itself is unchanged; the ROI information is only added to the frame metadata, which influences the later encoding process.
The ffmpeg 4.2.1 version supports this filter.
// In theory this should work, but the 4.2.1 build I downloaded still does not support it: ./ffmpeg42 -y -i ./video_8_24.mp4 -filter_complex addroi=x=0:y=0:w=200:h=200:qoffset=1 rec_addroi.mp4
Multiple regions can also be marked as ROI:
./ffmpeg42 -y -i ./video_8_24.mp4 -filter_complex "addroi=x=0:y=0:w=200:h=200:qoffset=1[out1];[out1]addroi=x=200:y=200:w=200:h=200:qoffset=1[out2]" -map "[out2]" rec_addroi.mp4
Extract the alpha component from the input and use it as a grayscale video. This filter is often used together with the alphamerge filter.
First make sure the input video actually has an alpha channel.
Add or replace the contents of the alpha channel.
movie=in_alpha.mkv [alpha]; [in][alpha] alphamerge [out]
Amplify the difference between the current pixel and the pixels at the same position in neighboring frames.
./ffmpeg42 -t 10 -y -i ./video_8_24.mp4 -filter_complex amplify=radius=2:threshold=10:tolerance=5 rec_amplify.mp4
Adaptive temporal averaging denoiser.
Supports timeline editing.
Average blur filter.
Supports timeline editing.
Compute the bounding box for the non-black pixels in the input frame luminance plane.
Spatial smoothing that preserves edges (bilateral filtering).
Supports timeline editing.
Show and measure bit-plane noise.
Supports timeline editing.
Detect intervals where the video is almost completely black; a threshold can be set. This is useful for detecting chapter transitions.
Detect frames that are almost completely black; a threshold can be set.
(1) blend: takes two input video streams; the first input is the top layer and the second the bottom layer, and the display weights of the two layers can be controlled.
./ffmpeg42 -hide_banner -t 10 -y -i ./6.mp4 -i ./1.mp4 -filter_complex "blend=all_expr=if(eq(mod(X\,2)\,mod(Y\,2))\,A\,B)" rec_${name}.mp4
(2) tblend: takes a single input video stream and blends successive frames.
./ffmpeg42 -hide_banner -t 10 -y -i ./6.mp4 -filter_complex "tblend=all_mode=multiply" rec_${name}.mp4
Denoise frames using the Block-Matching 3D (BM3D) algorithm (very slow).
./ffmpeg42 -hide_banner -t 10 -y -i ./6.mp4 -filter_complex "bm3d=sigma=3:block=4:bstep=2:group=1:estim=basic" rec_${name}.mp4
Apply the boxblur algorithm to the input.
Deinterlace the input video.
Remove all colors from the input except a specified one; a similarity range can be set.
YUV colorspace color/chroma keying.
Shift chroma horizontally or vertically.
Plot the distribution of the input video's pixel values on a CIE diagram, emitted as the output video. The white point, CIE diagram system, gamma value, and source gamut type can be set.
Visualize encoding information for some codecs, using extra information carried in the stream.
So far I have only managed to get motion vector display working.
ffplay -flags2 +export_mvs input.mp4 -vf codecview=mv_type=fp
Modify the intensity of the primary color components of the input (red, green, blue).
./ffmpeg42 -hide_banner -t 2 -y -i ./1.mp4 -filter_complex colorbalance=rs=1:rh=1 ${color} rec_${name}.mp4
Adjust video input frames by re-mixing color channels.
RGB colorspace color keying.
Remove all color information except for a certain RGB color.
Adjust video input frames using levels
Convert between color matrices.
Convert the colorspace of the input.
Apply convolution of 3x3, 5x5, 7x7 or horizontal/vertical up to 49 elements
Apply a convolution to the input video.
// sharpen
./ffmpeg42 -hide_banner -t 3 -y -i ./6.mp4 -filter_complex convolution="0 -1 0 -1 5 -1 0 -1 0:0 -1 0 -1 5 -1 0 -1 0:0 -1 0 -1 5 -1 0 -1 0:0 -1 0 -1 5 -1 0 -1 0" ${color} rec_${name}.mp4
// blur
convolution="1 1 1 1 1 1 1 1 1:1 1 1 1 1 1 1 1 1:1 1 1 1 1 1 1 1 1:1 1 1 1 1 1 1 1 1:1/9:1/9:1/9:1/9"
// edge enhance
convolution="0 0 0 -1 1 0 0 0 0:0 0 0 -1 1 0 0 0 0:0 0 0 -1 1 0 0 0 0:0 0 0 -1 1 0 0 0 0:5:1:1:1:0:128:128:128"
// edge detect
convolution="0 1 0 1 -4 1 0 1 0:0 1 0 1 -4 1 0 1 0:0 1 0 1 -4 1 0 1 0:0 1 0 1 -4 1 0 1 0:5:5:5:1:0:128:128:128"
// Laplacian edge detector
convolution="1 1 1 1 -8 1 1 1 1:1 1 1 1 -8 1 1 1 1:1 1 1 1 -8 1 1 1 1:1 1 1 1 -8 1 1 1 1:5:5:5:1:0:128:128:0"
// emboss
convolution="-2 -1 0 -1 1 1 0 1 2:-2 -1 0 -1 1 1 0 1 2:-2 -1 0 -1 1 1 0 1 2:-2 -1 0 -1 1 1 0 1 2"
Apply 2D convolution of video stream in frequency domain using second stream as impulse.
Copy the input unchanged to the output; useful for testing.
Video filtering on GPU using Apple’s CoreImage API on OSX
Cover a rectangular object
Crop the input video: specify the size and position to crop to.
Automatically detect crop parameters and print them in the log. Detection finds the non-black region so that the black borders can be cropped away.
Delay video filtering until a given wallclock timestamp. The filter first passes on preroll amount of frames, then it buffers at most buffer amount of frames and waits for the cue. After reaching the cue it forwards the buffered frames and also any subsequent frames coming in its input.
The filter can be used to synchronize the output of multiple ffmpeg processes for realtime output devices like decklink. By putting the delay in the filtering chain and pre-buffering frames the process can pass on data to output almost immediately after the target wallclock timestamp is reached.
In other words, it can synchronize multiple ffmpeg processes feeding realtime output devices such as a video capture card.
Apply color adjustments using curves
Video data analysis filter.
This filter shows hexadecimal pixel values of part of video.
Shows the pixel values of part of the video in hexadecimal; the output is the pixel values.
Denoise frames using a 2D DCT (frequency-domain filtering). Very slow; unusable in real-time scenarios.
Remove banding and ringing artifacts from the input by replacing banded pixels with an average of referenced pixels.
Remove blocking artifacts from the input.
Periodically drop "duplicate" frames: the similarity of neighboring frames is measured to decide whether to drop them, and a threshold can be set.
Reduce cross-luminance (dot-crawl) and cross-color (rainbows) from video.
Apply deflate effect to the video.
Remove temporal variations between frames (deflicker).
Remove the judder produced by partially telecined content.
Blur a logo by defining a rectangular blur region.
Remove the rain in the input image/video by applying the derain methods based on convolutional neural networks. Supported models:
Attempt to fix small changes in horizontal and/or vertical shift. This filter helps remove camera shake from hand-holding a camera, bumping a tripod, moving on a vehicle, etc.
Apply an exact inverse of the telecine operation. It requires a predefined pattern specified using the pattern option which must be the same as that passed to the telecine filter.
Turn the input video into telecined TV-style content, with top and bottom fields.
Apply dilation effect to the video.
Draw a colored box over a region of the input video.
Draw a grid over the video frame.
Draw text on the video.
Detect edges and draw them; several edge operators are available.
Apply an erosion effect to the input video.
Fade the video in at the beginning and out at the end.
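As a self-contained sketch of the fade-in/fade-out effect (the lavfi testsrc input, the mpeg4 encoder, and the /tmp output path are my own assumptions, not from the original):

```shell
# 5 s synthetic clip: fade in over the first second, fade out over the last
ffmpeg -v error -y -f lavfi -i testsrc=duration=5:size=320x240:rate=25 \
  -vf "fade=t=in:st=0:d=1,fade=t=out:st=4:d=1" \
  -c:v mpeg4 /tmp/fade_demo.mp4
```

fade=t=out needs an explicit start time (st), here duration minus fade length.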
Denoise frames using a 3D FFT (frequency-domain filtering).
Fill borders of the input video, without changing video stream dimensions. Sometimes video can have garbage at the four edges and you may not want to crop video input to keep size multiple of some number.
Find a rectangular object.
Flood an area that has the same pixel component values with other values.
Convert the pixel format of the input video to another specified format.
Convert the video frame rate, duplicating or dropping frames as needed to achieve a constant output frame rate.
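A minimal sketch of constant-frame-rate conversion (the testsrc input and output path are assumptions for demonstration):

```shell
# drop frames to turn a 25 fps test clip into a constant 10 fps
ffmpeg -v error -y -f lavfi -i testsrc=duration=2:size=320x240:rate=25 \
  -vf "fps=10" -c:v mpeg4 /tmp/fps_demo.mp4
# ffprobe can confirm the resulting average frame rate
ffprobe -v error -select_streams v:0 -show_entries stream=avg_frame_rate \
  -of csv=p=0 /tmp/fps_demo.mp4
```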
Pack two video streams into a stereoscopic video; several layouts are supported (side-by-side, top-bottom, ...), which also makes it handy for showing two videos side by side for comparison.
Change the frame rate by interpolating new video output frames from the source frames.
Select one frame every N-th frame.
Detect frozen video.
Gaussian blur filter.
Apply an expression to every pixel; this allows horizontal flips and all sorts of other operations.
geq=p(W-X\,Y)
Fix the banding artifacts that are sometimes introduced into nearly flat regions by truncation to 8-bit color depth. Interpolate the gradients that should go where the bands are, and dither them.
Show various filtergraph stats; visualizes the filters in the graph and how they relate.
A color constancy variation filter which estimates scene illumination via grey edge algorithm and corrects the scene colors accordingly.
Apply a Hald CLUT to a video stream.
Create a color look-up table (Hald CLUT) and apply it to a video.
// create the color look-up table
ffmpeg -f lavfi -i haldclutsrc=8 -vf "hue=H=2*PI*t:s=sin(2*PI*t)+1, curves=cross_process" -t 10 -c:v ffv1 clut.nut
// process a video with the generated look-up table
ffmpeg -f lavfi -i mandelbrot -i clut.nut -filter_complex '[0][1] haldclut' -t 20 mandelclut.mkv
Flip the video horizontally.
This filter applies a global color histogram equalization on a per-frame basis.
Compute and display the histogram of the input pixel values.
This is a high precision/quality 3d denoise filter. It aims to reduce image noise, producing smooth images and making still images really still. It should enhance compressibility.
Download hardware frames to system memory.
Map hardware frames to system memory or to another device.
Upload system memory frames to hardware surfaces.
Upload system memory frames to a CUDA device.
A high-quality magnification filter; scales the filter input up by an integer factor.
Place several videos horizontally next to each other in one output video; the inputs must share the same pixel format and height.
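A self-contained sketch of horizontal stacking (both inputs are lavfi test sources of equal size, an assumption for demonstration):

```shell
# two 320x240 sources side by side -> a 640x240 output
ffmpeg -v error -y \
  -f lavfi -i testsrc=duration=2:size=320x240:rate=25 \
  -f lavfi -i smptebars=duration=2:size=320x240:rate=25 \
  -filter_complex "[0:v][1:v]hstack=inputs=2" -c:v mpeg4 /tmp/hstack_demo.mp4
```

libavfilter inserts format conversions automatically during negotiation, so the two sources do not have to start in the same pixel format on the command line.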
Modify hue and saturation.
Grow first stream into second stream by connecting components. This makes it possible to build more robust edge masks.
Detect video interlacing type.
Deinterleave or interleave fields
Apply inflate effect to the video
Simple interlacing filter from progressive contents. This interleaves upper (or lower) lines from odd frames with lower (or upper) lines from even frames, halving the frame rate and preserving image height.
Deinterlace input video by applying Donald Graft’s adaptive kernel deinterling. Work on interlaced parts of a video to produce progressive frames.
Slowly update darker pixels
Correct radial lens distortion
Apply lens correction via the lensfun library (http://lensfun.sourceforge.net/).
Compute VMAF; PSNR and SSIM can also be computed.
Clamp pixel values to a given range.
loop: loop video frames. This differs from replaying the whole file (replay uses -loop).
Parameters:
loop: number of times to loop; -1 means loop forever; default is 0
size: number of frames in the loop; default is 0
start: frame where the loop starts; default is 0
loop=loop=30:start=60:size=3 // starting at frame 60 of the video, the following 3 frames are looped 30 times
Apply a 1D LUT to an input video
Apply a 3D LUT to an input video
Turn certain luma values into transparency
Compute a look-up table for binding each pixel component input value to an output value, and apply it to the input video.
The lut2 filter takes two input streams and outputs one stream.
Clamp the first input stream with the second input and third input stream.
Merge the second and third input stream into output stream using absolute differences between second input stream and first input stream and absolute difference between third input stream and first input stream. The picked value will be from second input stream if second absolute difference is greater than first one or from third input stream otherwise.
Merge the first input stream with the second input stream using per pixel weights in the third input stream.
Merge the second and third input stream into output stream using absolute differences between second input stream and first input stream and absolute difference between third input stream and first input stream. The picked value will be from second input stream if second absolute difference is less than first one or from third input stream otherwise.
Create mask from input video
Apply motion-compensation deinterlacing.
Define a rectangular neighborhood, compute the median of the pixel values inside it, and replace each pixel with that median.
Merge color channel components from several video streams.
Estimate and export motion vectors using block matching algorithms. Motion vectors are stored in frame side data to be used by other filters.
Apply a midway image equalization effect using two video streams. This filter adjusts a pair of input video streams to have similar histograms, giving the two streams a similar dynamic range; it is most useful for matching the exposures of a pair of stereo cameras. The filter takes two inputs and has one output: the output is the first input adjusted using both inputs' histograms.
Convert the video to specified frame rate using motion interpolation.
Mix several video input streams into one video stream; a weight can be set for each input stream.
Drop frames that do not differ much from earlier frames, in order to lower the frame rate.
Negate (invert) the pixel values of the input video.
Denoise with the non-local means algorithm; this algorithm is rather slow.
Deinterlace video using neural network edge directed interpolation.
Force libavfilter not to use any of the specified pixel formats for the input to the next filter.
noformat=pix_fmts=yuv420p|yuv444p|yuv410p,vflip // force libavfilter to use any format other than yuv420p/yuv444p/yuv410p for the input that is passed on to the vflip filter
noformat=pix_fmts=yuv420p,vflip // if the source is yuv420p, then because yuv420p is forbidden here, the finally encoded video comes out as yuvj420p
Add noise to the input; the pixel components and noise type can be selected (averaged temporal noise, mixed random noise with a (semi-)regular pattern, temporal noise whose pattern changes between frames, Gaussian noise).
Normalize RGB video (also called histogram stretching or contrast stretching).
For each channel of each frame, the filter computes the input range and maps it linearly to the user-specified output range; the output range defaults to the full dynamic range from pure black to pure white. Temporal smoothing can be applied to the input range to reduce flickering caused by small dark or bright objects entering or leaving the frame, much like a camera's auto-exposure.
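A hedged sketch of that mapping (test source, output path, and the 30-frame smoothing window are arbitrary assumptions):

```shell
# stretch each frame's RGB range to full black..white,
# smoothing the measured range over 30 frames to suppress flicker
ffmpeg -v error -y -f lavfi -i testsrc=duration=2:size=320x240:rate=25 \
  -vf "normalize=blackpt=black:whitept=white:smoothing=30" \
  -c:v mpeg4 /tmp/normalize_demo.mp4
```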
Pass the input to the output unchanged.
Optical character recognition (OCR); to use this filter, ffmpeg must be built with --enable-libtesseract.
Transform the video using libopencv.
Supported operations include dilate and smooth.
Display video signals as a 2D oscilloscope drawn into the video. Useful for measuring spatial impulse, step response, chroma delay, etc.
The position and area of the pixels shown on the oscilloscope can be set.
Overlay one stream on top of another. Two inputs, one output: the first input is the main stream, and the second input is overlaid on it.
Apply overcomplete wavelet denoising. The complexity is high and processing is very slow; it can also be used as a blur effect.
Pad the input with borders, placing the source at the given x/y coordinates.
Generate a single palette for the whole video stream.
ffmpeg -i input.mkv -vf palettegen palette.png
Downsample an input video stream using a palette, for example a palette image produced by the palettegen filter.
ffmpeg -i input.mkv -i palette.png -lavfi paletteuse output.gif
Correct perspective of video not recorded perpendicular to the screen.
Delay interlaced video by one field time so that the field order changes.
Reduce flashing/flicker in the video.
Pixel format descriptor test filter, mainly useful for internal testing. The output video should be equal to the input video.
Inspect the pixel values at a given position; useful for checking colors. The smallest supported resolution is 640x480.
./ffmpeg42 -hide_banner -y -i ./1.mp4 -filter_complex "pixscope=x=40/720:y=90/1280:w=80:h=80:wx=1:wy=0" ${color} rec_${name}.mp4
Enable the specified chain of postprocessing subfilters using libpostproc. This library should be automatically selected with a GPL build (--enable-gpl). Subfilters must be separated by ’/’ and can be disabled by prepending a ’-’. Each subfilter and some options have a short and a long name that can be used interchangeably, i.e. dr/dering are the same.
Apply Postprocessing filter 7. It is variant of the spp filter, similar to spp = 6 with 7 point DCT, where only the center sample is used after IDCT.
Apply alpha premultiply effect to input video stream using first plane of second stream as alpha.
Apply prewitt operator to input video stream.
Change the colors of video frames (apply a pseudocolor expression).
./ffmpeg42 -hide_banner -y -i ./1.mp4 -filter_complex pseudocolor="'if(between(val,10,200),20,-1)'" ${color} rec_${name}.mp4
Compute the average, maximum and minimum PSNR between two input videos. The first input is the main stream and is passed through to the output unchanged; the second input is used as the reference. Both videos must have the same resolution and pixel format.
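A self-contained sketch comparing a clip against a degraded copy of itself (the test source, blur strength, and stdout stats destination are assumptions of mine); per-frame stats go to stdout via stats_file=-:

```shell
# first psnr input = degraded (main), second = reference
ffmpeg -v error -f lavfi -i testsrc=duration=1:size=320x240:rate=25 \
  -filter_complex "[0:v]split[ref][deg];[deg]boxblur=2[degb];[degb][ref]psnr=stats_file=-" \
  -f null -
```

Each stats line carries fields such as mse_avg and psnr_avg for one frame.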
Pulldown reversal (inverse telecine) filter, capable of handling mixed hard-telecine, 24000/1001 fps progressive, and 30000/1001 fps progressive content.
Change the video quantization parameters (QP); I have not yet worked out how it takes effect.
Flush video frames from internal cache of frames into a random order. No frame is discarded. Inspired by frei0r nervous filter.
Spatial denoising filter for progressive video.
Suppress a TV station logo, using an image file to determine which pixels comprise the logo. It works by filling in the pixels that comprise the logo with neighboring pixels.
Reverse the video (play it backwards). It is advisable to use the trim filter with it, because all frames must be read into memory; do not use too many frames.
Apply the Roberts cross operator to the input video.
Shift R/G/B/A pixels horizontally and/or vertically.
Rotate the video by a given angle; the output width/height and the interpolation method can be specified.
Apply a shape-adaptive blur.
Show a line containing various information for each input video frame. The input video is not modified.
scale is a very important filter: it resizes the input video.
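A minimal sketch of its most common use, resizing while keeping the aspect ratio (input source and output path are assumptions):

```shell
# width 320, height computed automatically and kept divisible by 2
ffmpeg -v error -y -f lavfi -i testsrc=duration=1:size=640x480:rate=25 \
  -vf "scale=320:-2" -c:v mpeg4 /tmp/scale_demo.mp4
```

-2 asks scale to preserve the aspect ratio while rounding to an even dimension, which most encoders require.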
Resize the input video based on a reference video; useful for inserting logos and adapting image proportions.
Scroll input video horizontally and/or vertically by constant speed.
Adjust cyan, magenta, yellow and black (CMYK) to certain ranges of colors (such as "reds", "yellows", "greens", "cyans", ...). The adjustment range is defined by the "purity" of the color (that is, how saturated it already is).
The separatefields takes a frame-based video input and splits each frame into its components fields, producing a new half height clip with twice the frame rate and twice the frame count.
The setdar filter sets the Display Aspect Ratio for the filter output video.
Force field for the output video frame.
The setparams filter marks interlace and color range for the output frames. It does not change the input frame, but only sets the corresponding property, which affects how the frame is treated by filters/encoders.
Displays the 256 colors palette of each frame. This filter is only relevant for pal8 pixel format frames.
Reorder and/or duplicate and/or drop video frames.
Reorder and/or duplicate video planes.
Evaluate various visual metrics that assist in determining issues associated with the digitization of analog video media.
Calculates the MPEG-7 Video Signature. The filter can handle more than one input. In this case the matching between the inputs can be calculated additionally. The filter always passes through the first input. The signature of each stream can be written into a file.
Blur or sharpen the input video without affecting its outlines.
Apply the Sobel operator to the input video; the planes to process can be specified.
Apply a simple postprocessing filter that compresses and decompresses the image at several (or - in the case of quality level 6 - all) shifts and average the results.
Scale the input by applying one of the super-resolution methods based on convolutional neural networks. Supported models:
Super-resolution using machine learning.
Compute the SSIM of two input videos; the first input is the main stream, the second is the reference. The results can be saved to a file.
Convert between different stereoscopic video formats.
Select video/audio streams.
Scale the source by 2x with the Super2xSaI algorithm, which preserves edges while upscaling.
Swap two rectangular objects in the video: specify two rectangular regions and their picture contents are exchanged.
Swap the U and V planes.
Apply a telecine process to the video.
Apply a threshold effect to the video. It needs four input streams: the first is the stream to process and the second provides the thresholds; where the first stream's value is below the threshold, the third stream is picked, otherwise the fourth.
Select the most representative frame from a given sequence of consecutive video frames.
Merge several wanted video frames into one image.
Perform various types of temporal field interlacing.
Mix successive video frames.
Tone map colors from different dynamic ranges.
Pad the video temporally (add frames at the start or end).
Transpose rows with columns in the input video and optionally flip it.
Trim the input so that the output contains only a portion of the input stream.
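A hedged sketch of trimming (input source and paths assumed); setpts resets timestamps so the cut plays from t=0:

```shell
# keep only seconds 1..3 of the input
ffmpeg -v error -y -f lavfi -i testsrc=duration=5:size=320x240:rate=25 \
  -vf "trim=start=1:end=3,setpts=PTS-STARTPTS" -c:v mpeg4 /tmp/trim_demo.mp4
```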
Apply alpha unpremultiply effect to input video stream using first plane of second stream as alpha.
Sharpen or blur the input video stream.
Apply ultra slow/simple postprocessing filter that compresses and decompresses the image at several (or - in the case of quality level 8 - all) shifts and average the results.
Convert 360° video between different projection formats.
Apply a wavelet-based denoising filter.
Display 2 color component values in the two dimensional graph (which is called a vectorscope).
Analyze video stabilization/deshaking. Perform pass 1 of 2, see vidstabtransform for pass 2.
Video stabilization/deshaking: pass 2 of 2, see vidstabdetect for pass 1.
Flip the video vertically.
Detect whether the frame rate is variable.
Make or reverse a natural vignetting effect.
Obtain the average VMAF motion score of a video. It is one of the component metrics of VMAF.
Stack inputs vertically, merging the videos into one frame; this filter is faster than overlay and pad.
Deinterlace the input video.
Video waveform monitor.
This filter plots the density of color components. By default only luminance is plotted.
The weave takes a field-based video input and join each two sequential fields into single frame, producing a new double height clip with half the frame rate and half the frame count.
Apply the high-quality xBR upscaling filter, meant for magnifying pixel art; it follows edge-detection rules.
Pick the pixel with the median value from multiple input video streams.
Stack video inputs into custom layout.
Deinterlace the input video ("yadif" means "yet another deinterlacing filter").
Deinterlace the input video using the yadif algorithm, but implemented in CUDA so that it can work as part of a GPU accelerated pipeline with nvdec and/or nvenc.
Apply zoom-and-pan effects.
Scale the input video using z.lib (requires an extra build dependency); supports colorspace conversion.
#addroi
#./ffmpeg42 -y -i ./video_8_24.mp4 -filter_complex "addroi=x=0:y=0:w=200:h=200:qoffset=1[out1];[out1]addroi=x=200:y=200:w=200:h=200:qoffset=1[out2]" -map "[out2]" rec_addroi.mp4
color='-colorspace bt709 -color_range tv -color_primaries bt709 -color_trc bt709'
amplify(){
./ffmpeg42 -hide_banner -t 10 -y -i ./video_8_24.mp4 -filter_complex "amplify=radius=2:threshold=10:tolerance=5" rec_amplify.mp4
ffplay -hide_banner
-i rec_amplify.mp4
}
ass(){
./ffmpeg42 -hide_banner -t 10 -y -i ./video_8_24.mp4 -filter_complex ass rec_ass.mp4
ffplay -hide_banner -i rec_ass.mp4
}
#ass
atadenoise(){
name=atadenoise
./ffmpeg42 -hide_banner -t 10 -y -i ./6.mp4 -filter_complex "atadenoise=enable=between(n\,1\,50):0a=0.3:0b=1" rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#atadenoise
avgblur(){
name=avgblur
./ffmpeg42 -hide_banner -t 10 -y -i ./6.mp4 -filter_complex "avgblur=enable=between(n\,1\,50):sizeX=10" rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#avgblur
bbox(){
name=bbox
./ffmpeg42 -hide_banner -t 10 -y -i ./6.mp4 -filter_complex "bbox=enable=between(n\,1\,10):min_val=1" rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#bbox
bilateral(){
name=bilateral
./ffmpeg42 -hide_banner -t 10 -y -i ./6.mp4 -filter_complex "bilateral=enable=between(t\,2\,5)" rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#bilateral
bitplanenoise(){
name=bitplanenoise
./ffmpeg42 -hide_banner -t 10 -y -i ./6.mp4 -filter_complex "bitplanenoise=filter=1" rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#bitplanenoise
blackdetect(){
name=blackdetect
./ffmpeg42 -hide_banner -t 10 -y -i ./6.mp4 -filter_complex "blackdetect" rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#blackdetect
blackframe(){
name=blackframe
./ffmpeg42 -hide_banner -t 10 -y -i ./6.mp4 -filter_complex "blackframe" rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#blackframe
blend(){
name=blend
./ffmpeg42 -hide_banner -t 10 -y -i ./6.mp4 -i ./1.mp4 -filter_complex "blend=all_mode=multiply" rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#blend
bm3d(){
name=bm3d
./ffmpeg42 -hide_banner -t 10 -y -i ./6.mp4 -filter_complex "bm3d=sigma=3:block=4:bstep=2:group=1:estim=basic" rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#bm3d
boxblur(){
name=boxblur
./ffmpeg42 -hide_banner -t 10 -y -i ./6.mp4 -filter_complex "boxblur=luma_radius=2:luma_power=1" rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#boxblur
bwdif(){
name=bwdif
./ffmpeg42 -hide_banner -t 10 -y -i ./6.mp4 -filter_complex "bwdif" ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#bwdif
chromahold(){
name=chromahold
./ffmpeg42 -hide_banner -t 10 -y -i ./6.mp4 -filter_complex "chromahold" ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#chromahold
chromakey(){
name=chromakey
./ffmpeg42 -hide_banner -t 10 -y -i ./6.mp4 -filter_complex "chromakey=color=black:blend=0.01" ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#chromakey
chromashift(){
name=chromashift
./ffmpeg42 -hide_banner -t 10 -y -i ./6.mp4 -filter_complex "chromashift=edge=smear" ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#chromashift
ciescope(){
name=ciescope
./ffmpeg42 -hide_banner -t 10 -y -i ./6.mp4 -filter_complex ciescope=system=rec709:cie=xyy:gamuts=rec709:showwhite=1:gamma=2.2 ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#ciescope
codecview(){
name=codecview
ffplay -hide_banner -flags2 export_mvs -i ./1.mp4 -vf codecview=mv_type=fp:qp=1
}
#codecview
colorbalance(){
name=colorbalance
./ffmpeg42 -hide_banner -t 2 -y -i ./1.mp4 -filter_complex colorbalance=rs=1:rh=1 ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#colorbalance
convolution(){
name=convolution
./ffmpeg42 -hide_banner -t 3 -y -i ./6.mp4 -filter_complex convolution="-2 -1 0 -1 1 1 0 1 2:-2 -1 0 -1 1 1 0 1 2:-2 -1 0 -1 1 1 0 1 2:-2 -1 0 -1 1 1 0 1 2" ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#convolution
convolve(){
name=convolve
./ffmpeg42 -hide_banner -t 3 -y -i ./6.mp4 -filter_complex convolve ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#convolve
crop(){
name=crop
./ffmpeg42 -hide_banner -t 3 -y -i ./1.mp4 -filter_complex crop=w=240:h=240 ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#crop
cropdetect(){
name=cropdetect
./ffmpeg42 -hide_banner -t 3 -y -i ./bt709_2.mp4 -filter_complex cropdetect ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#cropdetect
datascope(){
name=datascope
./ffmpeg42 -hide_banner -t 3 -y -i ./3.mp4 -filter_complex datascope=mode=color2 ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#datascope
dctdnoiz(){
name=dctdnoiz
./ffmpeg42 -hide_banner -t 3 -y -i ./3.mp4 -filter_complex dctdnoiz ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#dctdnoiz
deband(){
name=deband
./ffmpeg42 -hide_banner -t 3 -y -i ./1.mp4 -filter_complex deband ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#deband
decimate(){
name=decimate
./ffmpeg42 -hide_banner -t 3 -y -i ./1.mp4 -filter_complex decimate ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#decimate
dedot(){
name=dedot
./ffmpeg42 -hide_banner -t 3 -y -i ./1.mp4 -filter_complex dedot ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#dedot
deflicker(){
name=deflicker
./ffmpeg42 -hide_banner -t 3 -y -i ./1.mp4 -filter_complex deflicker ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#deflicker
dejudder(){
name=dejudder
./ffmpeg42 -hide_banner -t 3 -y -i ./1.mp4 -filter_complex dejudder ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#dejudder
delogo(){
name=delogo
./ffmpeg42 -hide_banner -t 3 -y -i ./1.mp4 -filter_complex delogo=x=1:y=1:w=100:h=100 ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#delogo
derain(){
name=derain
./ffmpeg42 -hide_banner -t 3 -y -i ./1.mp4 -filter_complex derain ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#derain
deshake(){
name=deshake
./ffmpeg42 -hide_banner -t 3 -y -i ./1.mp4 -filter_complex deshake ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#deshake
despill(){
name=despill
./ffmpeg42 -hide_banner -t 3 -y -i ./1.mp4 -filter_complex despill ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#despill
detelecine(){
name=detelecine
./ffmpeg42 -hide_banner -t 3 -y -i ./1.mp4 -filter_complex detelecine ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#detelecine
drawbox(){
name=drawbox
./ffmpeg42 -hide_banner -t 3 -y -i ./1.mp4 -filter_complex drawbox=x=10:y=10:w=100:h=100:[email protected]:t=fill ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#drawbox
edgedetect(){
name=edgedetect
./ffmpeg42 -hide_banner -t 3 -y -i ./3.mp4 -filter_complex "edgedetect=enable=between(t\,0\,2)" ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#edgedetect
framepack(){
name=framepack
./ffmpeg42 -hide_banner -y -i ./1_smpte240m_no_cp_trc.mp4 -i ./1_smpte240m.mp4 -filter_complex "framepack=frameseq" ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#framepack
framestep(){
name=framestep
./ffmpeg42 -hide_banner -y -i ./1_smpte240m_no_cp_trc.mp4 -filter_complex "framestep=step=10" ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#framestep
frei0r(){
name=frei0r
./ffmpeg42 -hide_banner -y -i ./1_smpte240m_no_cp_trc.mp4 -filter_complex "frei0r" ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#frei0r
hysteresis(){
name=hysteresis
./ffmpeg42 -hide_banner -t 2 -y -i ./1_smpte240m_no_cp_trc.mp4 -t 2 -i ./1_smpte240m_no_cp_trc.mp4 -filter_complex "hysteresis" ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#hysteresis
inflate(){
name=inflate
./ffmpeg42 -hide_banner -t 2 -y -i ./1_smpte240m_no_cp_trc.mp4 -filter_complex "inflate" ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#inflate
loop(){
name=loop
./ffmpeg42 -hide_banner -y -i ./1_smpte240m_no_cp_trc.mp4 -filter_complex "loop=loop=30:start=60:size=3" ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#loop
lut1d(){
name=lut1d
./ffmpeg42 -hide_banner -t 2 -y -i ./1_smpte240m_no_cp_trc.mp4 -filter_complex "lut1d" ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#lut1d
maskfun(){
name=maskfun
./ffmpeg42 -hide_banner -t 2 -y -i ./1_smpte240m_no_cp_trc.mp4 -filter_complex "maskfun=low=20:high=230:planes=1" ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#maskfun
mcdeint(){
name=mcdeint
./ffmpeg42 -hide_banner -t 2 -y -i ./1_smpte240m_no_cp_trc.mp4 -filter_complex "mcdeint" ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#mcdeint
median(){
name=median
./ffmpeg42 -hide_banner -t 2 -y -i ./1_smpte240m_no_cp_trc.mp4 -filter_complex "median=radius=50" ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#median
minterpolate(){
name=minterpolate
./ffmpeg42 -hide_banner -y -i ./1_smpte240m_no_cp_trc.mp4 -filter_complex "minterpolate=fps=60:mi_mode=mci" ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#minterpolate
mix(){
name=mix
./ffmpeg42 -hide_banner -y -t 2 -i ./1_smpte240m_no_cp_trc.mp4 -t 2 -i ./1.mp4 -filter_complex "mix=inputs=2:weights='2 4'" ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#mix
negate(){
name=negate
./ffmpeg42 -hide_banner -y -t 2 -i ./6.mp4 -filter_complex "negate=negate_alpha=1" ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#negate
nlmeans(){
name=nlmeans
./ffmpeg42 -hide_banner -y -t 2 -i ./6.mp4 -filter_complex "nlmeans" ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#nlmeans
noformat(){
name=noformat
./ffmpeg42 -hide_banner -y -t 2 -i ./6.mp4 -filter_complex "noformat=yuv420p" ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#noformat
noise(){
name=noise
./ffmpeg42 -hide_banner -y -i ./6.mp4 -filter_complex "loop=loop=30:start=1:size=1,noise=c0_seed=123457:c0_strength=50:c0f=t" ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#noise
null(){
name=null
./ffmpeg42 -hide_banner -y -i ./6.mp4 -filter_complex "null" ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#null
oscilloscope(){
name=oscilloscope
./ffmpeg42 -hide_banner -y -i ./33_709_pix480.mp4 -filter_complex "oscilloscope" ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#oscilloscope
overlay(){
name=overlay
./ffmpeg42 -hide_banner -y -i ./33_709_pix480.mp4 -i ./1.mp4 -filter_complex "overlay" ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#overlay
owdenoise(){
name=owdenoise
./ffmpeg42 -hide_banner -y -i ./1.mp4 -filter_complex "owdenoise=depth=15:ls=500" ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#owdenoise
pad(){
name=pad
./ffmpeg42 -hide_banner -y -i ./1.mp4 -filter_complex "scale=-2:480,pad=w=1080:h=720:x=30:y=30:color=red" ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#pad
palettegen(){
name=palettegen
./ffmpeg42 -hide_banner -y -i ./1.mp4 -filter_complex "palettegen" rec_${name}.png
ffplay -hide_banner -i rec_${name}.png
}
#palettegen
paletteuse(){
name=paletteuse
./ffmpeg42 -hide_banner -y -i ./6.mp4 -i rec_palettegen.png -filter_complex "paletteuse" ${color} rec_${name}.gif
ffplay -hide_banner -i rec_${name}.gif
}
#paletteuse
perspective(){
name=perspective
./ffmpeg42 -hide_banner -y -i ./6.mp4 -filter_complex "perspective" ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#perspective
phase(){
name=phase
./ffmpeg42 -hide_banner -y -i ./6.mp4 -filter_complex "phase" ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#phase
photosensitivity(){
name=photosensitivity
./ffmpeg42 -hide_banner -y -i ./6.mp4 -filter_complex "photosensitivity" ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#photosensitivity
pixdesctest(){
name=pixdesctest
./ffmpeg42 -hide_banner -y -i ./1.mp4 -filter_complex "format=monow,pixdesctest" ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#pixdesctest
pixscope(){
name=pixscope
./ffmpeg42 -hide_banner -y -i ./1.mp4 -filter_complex "pixscope=x=40/720:y=90/1280:w=80:h=80:wx=1:wy=0" ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#pixscope
prewitt(){
name=prewitt
./ffmpeg42 -hide_banner -y -i ./1.mp4 -filter_complex "prewitt=planes=0xf" ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#prewitt
pseudocolor(){
name=pseudocolor
./ffmpeg42 -hide_banner -y -i ./1.mp4 -filter_complex pseudocolor="'if(between(val,10,200),20,-1)'" ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#pseudocolor
qp(){
name=qp
./ffmpeg42 -hide_banner -y -i ./1.mp4 -filter_complex qp=100 ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#qp
setparams(){
name=setparams
./ffmpeg42 -hide_banner -y -i ./1.mp4 -filter_complex setparams=field_mode=prog:range=tv:color_primaries=bt470m:color_trc=bt470m:colorspace=bt470bg rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#setparams
showpalette(){
name=showpalette
./ffmpeg42 -hide_banner -y -i ./1.mp4 -filter_complex showpalette rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#showpalette
random(){
name=random
./ffmpeg42 -hide_banner -y -i ./1.mp4 -filter_complex random rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#random
removegrain(){
name=removegrain
./ffmpeg42 -hide_banner -y -i ./1.mp4 -filter_complex removegrain rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#removegrain
reverse(){
name=reverse
./ffmpeg42 -hide_banner -t 5 -y -i ./112334.mp4 -filter_complex trim=end=5,reverse rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#reverse
roberts(){
name=roberts
./ffmpeg42 -hide_banner -t 5 -y -i ./112334.mp4 -filter_complex roberts rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#roberts
shuffleplanes(){
name=shuffleplanes
./ffmpeg42 -hide_banner -t 5 -y -i ./112334.mp4 -filter_complex shuffleplanes=1 rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#shuffleplanes
signature(){
name=signature
./ffmpeg42 -hide_banner -t 5 -y -i ./112334.mp4 -filter_complex signature=filename=signature.bin -map 0:v -f null -
}
#signature
smartblur(){
name=smartblur
./ffmpeg42 -hide_banner -t 5 -y -i ./112334.mp4 -filter_complex smartblur=lr=5:ls=-1,smartblur=lr=5:ls=0.2 rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#smartblur
sobel(){
name=sobel
./ffmpeg42 -hide_banner -t 5 -y -i ./112334.mp4 -filter_complex sobel=planes=1 rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#sobel
spp(){
name=spp
./ffmpeg42 -hide_banner -t 5 -y -i ./112334.mp4 -filter_complex spp rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#spp
sr(){
name=sr
./ffmpeg42 -hide_banner -t 5 -y -i ./112334.mp4 -filter_complex sr=dnn_backend=native:scale_factor=2 rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#sr
super2xsai(){
name=super2xsai
./ffmpeg42 -hide_banner -t 5 -y -i ./112334.mp4 -filter_complex super2xsai rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#super2xsai
swaprect(){
name=swaprect
./ffmpeg42 -hide_banner -t 5 -y -i ./1.mp4 -filter_complex swaprect=w=20:h=40:x1=120:y1=240:x2=150:y2=320 rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#swaprect
swapuv(){
name=swapuv
./ffmpeg42 -hide_banner -t 5 -y -i ./1.mp4 -filter_complex swapuv rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#swapuv
telecine(){
name=telecine
./ffmpeg42 -hide_banner -t 5 -y -i ./1.mp4 -filter_complex telecine=first_field=t:pattern=24 rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#telecine
threshold(){
name=threshold
./ffmpeg42 -hide_banner -t 5 -y -i ./1.mp4 -filter_complex threshold rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#threshold
thumbnail(){
name=thumbnail
./ffmpeg42 -hide_banner -t 5 -y -i ./1.mp4 -filter_complex thumbnail=20 rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#thumbnail
tile(){
name=tile
ffmpeg -i ./1.mp4 -vf tile=3x2:nb_frames=5:padding=7:margin=2 -an -vsync 0 keyframes%03d.png
#ffplay -hide_banner -i rec_${name}.mp4
}
#tile
tinterlace(){
name=tinterlace
ffmpeg -y -i ./1.mp4 -filter_complex tinterlace=0 -an rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#tinterlace
tmix(){
name=tmix
ffmpeg -y -i ./1.mp4 -filter_complex tmix=4 -an rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#tmix
tpad(){
name=tpad
ffmpeg -y -i ./1.mp4 -filter_complex tpad=10 -an rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#tpad
vfrdet(){
name=vfrdet
ffmpeg -y -i ./1.mp4 -filter_complex vfrdet -an rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#vfrdet
vignette(){
name=vignette
ffmpeg -y -i ./1.mp4 -filter_complex vignette -an rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#vignette
vmafmotion(){
name=vmafmotion
ffmpeg -y -i ./1.mp4 -filter_complex vmafmotion -an rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#vmafmotion
vstack(){
name=vstack
ffmpeg -y -i ./1.mp4 -i 6.mp4 -filter_complex vstack -an rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#vstack
waveform(){
name=waveform
ffmpeg -y -i ./1.mp4 -filter_complex waveform -an rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#waveform
xbr(){
name=xbr
ffmpeg -y -i ./6.mp4 -filter_complex xbr -an rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#xbr
xmedian(){
name=xmedian
ffmpeg -y -i ./6.mp4 -filter_complex xmedian -an rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#xmedian
zoompan(){
name=zoompan
ffmpeg -y -i ./6.mp4 -filter_complex zoompan -an rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
zoompan