A complete list of ffmpeg video filters (to be organized)

As of November 13, 2019, the official ffmpeg website lists 233 video filters.

 

This article uses ffmpeg 4.1 as the reference version and introduces the purpose and usage of each video filter. For every filter, I try to run a working example on the command line and give reference commands.

  1. addroi

Marks one or more regions of the video as regions of interest (ROI). The frame data itself is not changed; the ROI information is only added to the frame metadata, and it takes effect later, during encoding.

ffmpeg 4.2.1 is supposed to support this filter.

# In theory this should work, but the 4.2.1 build I downloaded still does not support it:
./ffmpeg42 -y -i ./video_8_24.mp4 -filter_complex addroi=x=0:y=0:w=200:h=200:qoffset=1 rec_addroi.mp4

 

 

 

 

Multiple regions can also be marked as ROIs:

./ffmpeg42 -y -i ./video_8_24.mp4 -filter_complex "addroi=x=0:y=0:w=200:h=200:qoffset=1[out1];[out1]addroi=x=200:y=200:w=200:h=200:qoffset=1[out2]" -map "[out2]" rec_addroi.mp4

 

 

  1. alphaextract

Extracts the alpha component from the input and turns it into a grayscale video. This filter is often used together with the alphamerge filter.

First make sure the input video actually has an alpha channel.

  1. alphamerge

Adds or replaces the content of the alpha channel.

movie=in_alpha.mkv [alpha]; [in][alpha] alphamerge [out]

  1. amplify

Amplifies the difference between each pixel and the pixels at the same position in the surrounding frames.

./ffmpeg42 -t 10 -y -i ./video_8_24.mp4 -filter_complex amplify=radius=2:threshold=10:tolerance=5 rec_amplify.mp4

  1. atadenoise

Adaptive temporal averaging denoiser.

Supports timeline editing.

 

  1. avgblur

Average blur filter.

Supports timeline editing.

 

  1. bbox (not yet fully figured out)

Compute the bounding box for the non-black pixels in the input frame luminance plane.

  1. bilateral

Spatial smoothing that preserves edges (bilateral filtering).

Supports timeline editing.

 

  1. bitplanenoise

Shows and measures the bit-plane noise of the pixel planes.

Supports timeline editing.

  1. blackdetect

Detects intervals of the video that are almost completely black; a threshold can be set. This is useful for detecting chapter transitions.

  1. blackframe

Detects frames that are almost completely black; a threshold can be set.
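As a self-contained check of the two black-detection filters above (a sketch assuming a stock ffmpeg build on PATH; the lavfi color source replaces a real clip, so no input file is needed):

```shell
# Feed 2 seconds of pure black and let blackdetect report the interval.
# The result appears in the log as black_start/black_end/black_duration.
ffmpeg -hide_banner -f lavfi -i "color=black:size=320x240:rate=25:duration=2" \
       -vf "blackdetect=d=0.5:pix_th=0.10" -an -f null -
```

The same idea works for blackframe, which logs the percentage of black pixels per frame instead of intervals.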

  1. blend, tblend

(1) blend: takes two video streams; the first input is the top layer and the second input is the bottom layer. The display weights of the two layers can be controlled.

./ffmpeg42 -hide_banner -t 10 -y -i ./6.mp4 -i ./1.mp4 -filter_complex "blend=all_expr=if(eq(mod(X\,2)\,mod(Y\,2))\,A\,B)" rec_${name}.mp4

(2) tblend: takes a single video stream and blends each frame with the frame that follows it.

./ffmpeg42 -hide_banner -t 10 -y -i ./6.mp4 -filter_complex "tblend=all_mode=multiply" rec_${name}.mp4

 

 

 

 

  1. bm3d

 

Removes noise using the block-matching 3D (BM3D) algorithm (very slow).

./ffmpeg42 -hide_banner -t 10 -y -i ./6.mp4 -filter_complex "bm3d=sigma=3:block=4:bstep=2:group=1:estim=basic" rec_${name}.mp4

  1. boxblur

Applies the boxblur algorithm to the input.

  1. bwdif

Deinterlaces the input video.

  1. chromahold

Removes all colors from the input except a specified one; a similarity range can be set.

  1. chromakey

YUV colorspace color/chroma 键值

  1. chromashift

Shifts the chroma planes horizontally and/or vertically.

  1. ciescope

Plots the pixel-value distribution of the input on a CIE diagram and outputs the diagram as a video. The white point, diagram style, gamma value, and source gamut can be configured.

 

  1. codecview

Visualizes encoding information exported by some codecs, using extra side data carried in the stream.

So far I have only managed to get motion-vector visualization working:

ffplay -flags2 +export_mvs input.mp4 -vf codecview=mv_type=fp

 

 

 

 

  1. colorbalance

Modifies the intensity of the primary color components (red, green, blue) of the input.

 

./ffmpeg42 -hide_banner -t 2 -y -i ./1.mp4 -filter_complex colorbalance=rs=1:rh=1 ${color} rec_${name}.mp4

 

 

  1. colorchannelmixer

Adjust video input frames by re-mixing color channels.

  1. colorkey

RGB colorspace color keying.

  1. colorhold

Remove all color information for all RGB colors except a certain one.

  1. colorlevels

Adjust video input frames using levels

  1. colormatrix

Converts between color matrices.

  1. colorspace

Converts the color space of the input.

  1. convolution

Apply convolution of 3x3, 5x5, 7x7 or horizontal/vertical up to 49 elements.

Filters the input video with custom convolution kernels.

# sharpen
./ffmpeg42 -hide_banner -t 3 -y -i ./6.mp4 -filter_complex convolution="0 -1 0 -1 5 -1 0 -1 0:0 -1 0 -1 5 -1 0 -1 0:0 -1 0 -1 5 -1 0 -1 0:0 -1 0 -1 5 -1 0 -1 0" ${color} rec_${name}.mp4
# blur
convolution="1 1 1 1 1 1 1 1 1:1 1 1 1 1 1 1 1 1:1 1 1 1 1 1 1 1 1:1 1 1 1 1 1 1 1 1:1/9:1/9:1/9:1/9"
# edge enhance
convolution="0 0 0 -1 1 0 0 0 0:0 0 0 -1 1 0 0 0 0:0 0 0 -1 1 0 0 0 0:0 0 0 -1 1 0 0 0 0:5:1:1:1:0:128:128:128"
# edge detect
convolution="0 1 0 1 -4 1 0 1 0:0 1 0 1 -4 1 0 1 0:0 1 0 1 -4 1 0 1 0:0 1 0 1 -4 1 0 1 0:5:5:5:1:0:128:128:128"
# Laplacian edge detector
convolution="1 1 1 1 -8 1 1 1 1:1 1 1 1 -8 1 1 1 1:1 1 1 1 -8 1 1 1 1:1 1 1 1 -8 1 1 1 1:5:5:5:1:0:128:128:0"
# emboss
convolution="-2 -1 0 -1 1 1 0 1 2:-2 -1 0 -1 1 1 0 1 2:-2 -1 0 -1 1 1 0 1 2:-2 -1 0 -1 1 1 0 1 2"

  1. convolve

Apply 2D convolution of video stream in frequency domain using second stream as impulse.

  1. copy

Copies the input unchanged to the output. Useful for testing.

  1. coreimage

Video filtering on GPU using Apple’s CoreImage API on OSX

  1. cover_rect

Cover a rectangular object

  1. crop

Crops the input video to a given size at a given position.

  1. cropdetect

Automatically detects crop parameters and prints them in the log. It detects the non-black region of the frame so that the surrounding black borders can be cropped away.
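A minimal sketch of cropdetect (assuming ffmpeg on PATH; testsrc2 plus pad simulates a letterboxed clip, so no input file is needed):

```shell
# Add 30-pixel black bars above and below a 320x180 test pattern,
# then let cropdetect propose the crop= parameters in the log.
ffmpeg -hide_banner -f lavfi -i "testsrc2=size=320x180:rate=25:duration=1" \
       -vf "pad=320:240:0:30,cropdetect=limit=24:round=2" -f null -
```

The log lines end with something like crop=320:180:0:30, which can be pasted straight into a crop filter.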

  1. cue

Delay video filtering until a given wallclock timestamp. The filter first passes on preroll amount of frames, then it buffers at most buffer amount of frames and waits for the cue. After reaching the cue it forwards the buffered frames and also any subsequent frames coming in its input.

The filter can be used to synchronize the output of multiple ffmpeg processes for realtime output devices like decklink. By putting the delay in the filtering chain and pre-buffering frames the process can pass on data to output almost immediately after the target wallclock timestamp is reached.

  1. curves

Apply color adjustments using curves

  1. datascope

Video data analysis filter.

This filter shows hexadecimal pixel values of part of video.

Shows the pixel values of part of the frame in hexadecimal; the output video renders those values.

 

  1. dctdnoiz

Denoises frames using a 2D DCT (frequency-domain filtering). Very slow; not usable in real-time scenarios.

  1. deband

Removes banding and ringing artifacts from the input by replacing the affected pixels with an average of referenced pixels.

  1. deblock

Removes blocking artifacts from the input.

  1. decimate

Periodically drops "duplicate" frames: it measures the similarity between adjacent frames to decide whether to drop them, and thresholds can be set.

  1. dedot

Reduce cross-luminance (dot-crawl) and cross-color (rainbows) from video.

  1. deflate

Apply deflate effect to the video.

  1. deflicker

Removes temporal variations (flicker) between frames.

  1. dejudder

Removes the judder produced by partially telecined broadcast content.

  1. delogo

Blurs away a logo by specifying a rectangular region to be interpolated.

  1. derain

Remove the rain in the input image/video by applying the derain methods based on convolutional neural networks. Supported models:

  1. deshake

Attempt to fix small changes in horizontal and/or vertical shift. This filter helps remove camera shake from hand-holding a camera, bumping a tripod, moving on a vehicle, etc.

Well suited to shake caused by a hand-held camera.

  1. detelecine

Apply an exact inverse of the telecine operation. It requires a predefined pattern specified using the pattern option which must be the same as that passed to the telecine filter.

This converts field-based (telecined) TV content, with top and bottom fields, back into progressive frames.

  1. dilation

Apply dilation effect to the video.

Applies a dilation (expansion) effect to the input video.

  1. drawbox

Draws a colored box over a specified region of the input video.

 

  1. drawgrid

Draws a grid over the video frame.

 

  1. drawtext

Draws text on the video.

  1. edgedetect

Detects edges and draws them; several edge operators are available.

  1. entropy

Measures the graylevel entropy in the histograms of the color channels.

  1. erosion

Applies an erosion effect to the input video.

 

  1. fade

Fades the video in at the start and out at the end.

  1. fftdnoiz

Denoises using a 3D FFT (frequency-domain filtering).

  1. fftfilt

Applies arbitrary expressions to samples in the frequency domain.

  1. fillborders

Fill borders of the input video, without changing video stream dimensions. Sometimes video can have garbage at the four edges and you may not want to crop video input to keep size multiple of some number.

  1. find_rect

Finds a rectangular object in the input.

  1. floodfill

Flood-fills an area whose pixel components match the source values, replacing them with new values.

  1. format

Converts the pixel format of the input video to a specified format.

  1. fps

Converts the frame rate of the video, duplicating or dropping frames to achieve a constant output rate.
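A minimal sketch (ffmpeg on PATH assumed; testsrc2 stands in for a real input):

```shell
# Convert a 25 fps test clip to a constant 10 fps; ffmpeg drops frames as needed.
ffmpeg -hide_banner -f lavfi -i "testsrc2=size=160x120:rate=25:duration=1" \
       -vf "fps=10" -f null -
```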

  1. framepack

Packs two video streams into one stereoscopic video; several layouts are supported (side-by-side, top-bottom, ...). This can also be used to show two videos for comparison.

  1. framerate

Change the frame rate by interpolating new video output frames from the source frames.

  1.  framestep

Select one frame every N-th frame.

Picks one frame out of every n frames for output.

  1. freezedetect

Detect frozen video.

  1. gblur

Gaussian blur filter.

  1. geq

Applies an expression to each pixel; this can implement horizontal flips and many other operations. Example (horizontal flip):

geq=p(W-X\,Y)

  1. gradfun

Fix the banding artifacts that are sometimes introduced into nearly flat regions by truncation to 8-bit color depth. Interpolate the gradients that should go where the bands are, and dither them.

  1. graphmonitor

Show various filtergraph stats. This visualizes the filters in the graph and their connections.

  1. greyedge

A color constancy variation filter which estimates scene illumination via grey edge algorithm and corrects the scene colors accordingly.

  1. haldclut

Apply a Hald CLUT to a video stream.

Creates a color lookup table (Hald CLUT) and applies it to a video:

# create the color lookup table
ffmpeg -f lavfi -i haldclutsrc=8 -vf "hue=H=2*PI*t:s=sin(2*PI*t)+1, curves=cross_process" -t 10 -c:v ffv1 clut.nut
# process a video with the generated lookup table
ffmpeg -f lavfi -i mandelbrot -i clut.nut -filter_complex '[0][1] haldclut' -t 20 mandelclut.mkv

  1. hflip

Flips the video horizontally.

  1. histeq

This filter applies a global color histogram equalization on a per-frame basis.

  1. histogram

Computes the histogram of the pixel-value distribution of the input.

  1. hqdn3d

This is a high precision/quality 3d denoise filter. It aims to reduce image noise, producing smooth images and making still images really still. It should enhance compressibility.

  1. hwdownload

Download hardware frames to system memory.

  1. hwmap

Map hardware frames to system memory or to another device.

  1. hwupload

Upload system memory frames to hardware surfaces.

  1. hwupload_cuda

Upload system memory frames to a CUDA device.

  1. hqx

Scales the input up by an integer factor using a high-quality magnification filter.

  1. hstack

Stacks several videos horizontally into a single output. The inputs must have the same pixel format and the same height.
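A self-contained sketch (ffmpeg on PATH assumed; two lavfi sources replace real clips and share the same height):

```shell
# Place a test pattern and a solid color side by side; both inputs are
# 160x120, which satisfies hstack's same-height requirement.
ffmpeg -hide_banner -f lavfi -i "testsrc2=size=160x120:rate=25:duration=1" \
       -f lavfi -i "color=blue:size=160x120:rate=25:duration=1" \
       -filter_complex "[0:v][1:v]hstack=inputs=2" -f null -
```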

  1. hue

Modifies hue and saturation.

  1. hysteresis

Grow first stream into second stream by connecting components. This makes it possible to build more robust edge masks.

  1. idet

Detect video interlacing type.

  1. il

Deinterleave or interleave fields

  1. inflate

Apply inflate effect to the video

  1. interlace

Simple interlacing filter from progressive contents. This interleaves upper (or lower) lines from odd frames with lower (or upper) lines from even frames, halving the frame rate and preserving image height.

 

  1. kerndeint

Deinterlace input video by applying Donald Graft’s adaptive kernel deinterlacing. Works on interlaced parts of a video to produce progressive frames.

 

 

  1. lagfun

Slowly update darker pixels

  1.  lenscorrection

Correct radial lens distortion

  1.  lensfun

Apply lens correction via the lensfun library (http://lensfun.sourceforge.net/).

  1.  libvmaf

Computes VMAF; can also compute PSNR and SSIM.

  1. limiter

Clamps pixel values to a specified range.

  1. loop

Loops video frames. This differs from replaying the whole file (replay uses the -loop input option).

Parameters:

loop: number of loops; -1 means loop forever; default 0.

size: number of frames in the loop; default 0.

start: first frame of the loop; default 0.

 

loop=loop=30:start=60:size=3  # starting at frame 60 of the video, loop the next 3 frames 30 times

  1. lut1d

Apply a 1D LUT to an input video

  1. lut3d

Apply a 3D LUT to an input video

  1. lumakey

Turn certain luma values into transparency

  1. lut, lutrgb, lutyuv

Compute a look-up table for binding each pixel component input value to an output value, and apply it to the input video.

  1. lut2, tlut2

The lut2 filter takes two input streams and outputs one stream.

 

  1. maskedclamp

Clamp the first input stream with the second input and third input stream.

 

  1. maskedmax

Merge the second and third input stream into output stream using absolute differences between second input stream and first input stream and absolute difference between third input stream and first input stream. The picked value will be from second input stream if second absolute difference is greater than first one or from third input stream otherwise.

 

  1. maskedmerge

Merge the first input stream with the second input stream using per pixel weights in the third input stream.

  1. maskedmin

Merge the second and third input stream into output stream using absolute differences between second input stream and first input stream and absolute difference between third input stream and first input stream. The picked value will be from second input stream if second absolute difference is less than first one or from third input stream otherwise.

 

  1. maskfun

Create mask from input video

  1. mcdeint

Apply motion-compensation deinterlacing.

 

  1. median

For each pixel, takes the median of the pixel values inside a rectangular window and replaces the pixel with that median.

  1. mergeplanes

Merge color channel components from several video streams.

  1. mestimate

Estimate and export motion vectors using block matching algorithms. Motion vectors are stored in frame side data to be used by other filters.

Estimates and exports motion vectors using block matching; the matched vectors are stored in frame side data for other filters to use.

  1. midequalizer

Applies a midway image equalization effect using two video streams. The filter adjusts a pair of input videos to have similar histograms, so that both share the same dynamic range; this is especially useful for matching the exposures of a stereo camera pair. It takes two inputs and produces one output: the first input stream, adjusted using the midway histogram of both inputs. The two inputs must have the same pixel format but may differ in size.

  1. minterpolate

Convert the video to specified frame rate using motion interpolation.

Changes the video frame rate using motion interpolation.

  1. mix

Mix several video input streams into one video stream

Mixes several input video streams into one; a weight can be set for each stream.

 

 

 

  1. mpdecimate

Drops frames that differ little from the previous frames, in order to lower the frame rate.

  1. negate

Negates the input video: inverts the pixel values.

 

  1. nlmeans

Denoises with the non-local means algorithm, which is rather slow.

  1. nnedi

Deinterlace video using neural network edge directed interpolation.

 

  1. noformat

Force libavfilter not to use any of the specified pixel formats for the input to the next filter.

Forces libavfilter not to use any of the pixel formats given to noformat for the input to the next filter.

 

noformat=pix_fmts=yuv420p|yuv444p|yuv410p,vflip  # force any format except yuv420p/yuv444p/yuv410p on the input, then pass it to the vflip filter

noformat=pix_fmts=yuv420p,vflip  # if the source is yuv420p, yuv420p is forbidden here, so the encoded output ends up as yuvj420p

  1. noise

Adds noise to the input. The affected plane and the noise type can be selected (temporally averaged noise, mixed random noise with a (semi)regular pattern, temporal noise whose pattern changes between frames, Gaussian noise).

  1. normalize

Normalizes RGB video (also called histogram stretching or contrast stretching).

For each channel of each frame, the filter computes the input range and maps it linearly to a user-specified output range, which defaults to the full dynamic range from pure black to pure white. Temporal smoothing can be applied to the input range to reduce flickering caused by small dark or bright objects entering or leaving the frame; this behaves much like the auto-exposure of a camera.
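A minimal sketch (ffmpeg on PATH assumed; testsrc2 stands in for a real, low-contrast clip):

```shell
# Stretch the measured input range to the full black..white output range;
# smoothing averages the range over 10 frames to avoid flicker.
ffmpeg -hide_banner -f lavfi -i "testsrc2=size=160x120:rate=25:duration=1" \
       -vf "normalize=blackpt=black:whitept=white:smoothing=10" -f null -
```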

  1. null

Passes the input through to the output unchanged.

  1. ocr

Optical character recognition (OCR). To use this filter, ffmpeg must be configured with --enable-libtesseract.

  1. ocv

Transforms the video using libopencv.

Supports the dilate and smooth operations.

  1. oscilloscope

Renders the video signal as a 2D oscilloscope inside the output video. Useful for measuring spatial impulse and step response, chroma delay, and so on.

The position and size of the scoped region can be configured.

 

 

 

  1. overlay

Overlays one stream on top of another. Two inputs, one output: the first input is the main stream, and the second input is overlaid on top of it.

 

  1. owdenoise

Applies overcomplete wavelet denoising. High complexity and very slow; it can also be used as a blur effect.

  1. pad

Pads the input with borders; the source is placed at the given x,y coordinates.

  1. palettegen

Generates one palette for the whole video stream:

ffmpeg -i input.mkv -vf palettegen palette.png

 

 

  1. paletteuse

Downsamples an input video stream using a palette, e.g. one produced by the palettegen filter:

ffmpeg -i input.mkv -i palette.png -lavfi paletteuse output.gif

 

 

  1. perspective

Correct perspective of video not recorded perpendicular to the screen.

 

  1. phase

Delay interlaced video by one field time so that the field order changes.

  1. photosensitivity

Reduces flashing in the video (photosensitivity protection).

  1. pixdesctest

Pixel format descriptor test filter, mainly useful for internal testing. The output video should be equal to the input video.

  1. pixscope

 

 

Shows the pixel values of a chosen area; useful for checking colors. The minimum supported resolution is 640x480.

 

./ffmpeg42 -hide_banner -y -i ./1.mp4 -filter_complex "pixscope=x=40/720:y=90/1280:w=80:h=80:wx=1:wy=0" ${color} rec_${name}.mp4

 

 

 

 

 

  1. pp

Enable the specified chain of postprocessing subfilters using libpostproc. This library should be automatically selected with a GPL build (--enable-gpl). Subfilters must be separated by ’/’ and can be disabled by prepending a ’-’. Each subfilter and some options have a short and a long name that can be used interchangeably, i.e. dr/dering are the same.

 

  1. pp7

Apply Postprocessing filter 7. It is variant of the spp filter, similar to spp = 6 with 7 point DCT, where only the center sample is used after IDCT.

 

  1. premultiply

Apply alpha premultiply effect to input video stream using first plane of second stream as alpha.

 

  1. prewitt

Apply prewitt operator to input video stream.


 

  1. pseudocolor

Alters the colors of the video frames using a lookup expression.

./ffmpeg42 -hide_banner -y -i ./1.mp4 -filter_complex pseudocolor="'if(between(val,10,200),20,-1)'" ${color} rec_${name}.mp4

  1. psnr

Computes the average, maximum and minimum PSNR between two input videos. The first input is the main stream and is passed through to the output unchanged; the second input is used as the reference. Both videos must have the same resolution and pixel format.
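A self-contained sketch (ffmpeg on PATH assumed; instead of two files, the clip is compared against a blurred copy of itself):

```shell
# Split a test clip, blur one branch, and measure PSNR between the two.
# The summary is logged and per-frame values go to psnr.log via stats_file.
ffmpeg -hide_banner -f lavfi -i "testsrc2=size=160x120:rate=25:duration=1" \
       -filter_complex "[0:v]split[a][b];[b]avgblur=4[ref];[a][ref]psnr=stats_file=psnr.log" \
       -f null -
```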

  1. pullup

Pulldown reversal (inverse telecine) filter, capable of handling mixed hard-telecine, 24000/1001 fps progressive, and 30000/1001 fps progressive content.

 

 

  1. qp

Changes the quantization parameter (QP) of the video. I have not yet worked out how it takes effect.

  1. random

Flush video frames from internal cache of frames into a random order. No frame is discarded. Inspired by frei0r nervous filter.

In short: shuffles the playback order of the frames.

  1. removegrain

Spatial denoiser for progressive video.

 

  1. removelogo

Suppress a TV station logo, using an image file to determine which pixels comprise the logo. It works by filling in the pixels that comprise the logo with neighboring pixels.

 

  1. reverse

Reverses the video (plays it backwards). Use it together with the trim filter: all frames have to be kept in memory, so do not feed it too many frames.

  1. roberts

Applies the roberts cross operator to the input video.

  1. rgbashift

Shift R/G/B/A pixels horizontally and/or vertically.

 

  1. rotate

Rotates the video by a given angle; the output width/height and the interpolation method can be specified.
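A minimal sketch (ffmpeg on PATH assumed; testsrc2 stands in for a real input):

```shell
# Rotate the test pattern by 30 degrees (PI/6 radians), filling the
# exposed corners with black.
ffmpeg -hide_banner -f lavfi -i "testsrc2=size=160x120:rate=25:duration=1" \
       -vf "rotate=PI/6:fillcolor=black" -f null -
```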

 

 

 

  1. sab

Applies a shape-adaptive blur.

  1. showinfo

Show a line containing various information for each input video frame. The input video is not modified.

 

 

Prints information about each input video frame on the command line.

 

 

 

  1. scale

scale resizes the input video; it is one of the most important filters.

  1. scale2ref

Scales the input video based on a reference video; useful for inserting logos and adapting aspect ratios.

  1. scroll

Scroll input video horizontally and/or vertically by constant speed.

 

  1. selectivecolor

Adjust cyan, magenta, yellow and black (CMYK) to certain ranges of colors (such as "reds", "yellows", "greens", "cyans", ...). The adjustment range is defined by the "purity" of the color (that is, how saturated it already is).

 

  1. separatefields

The separatefields takes a frame-based video input and splits each frame into its components fields, producing a new half height clip with twice the frame rate and twice the frame count.

 

  1. setdar, setsar

The setdar filter sets the Display Aspect Ratio for the filter output video.

 

  1. setfield

Force field for the output video frame.

 

  1. setparams

The setparams filter marks interlace and color range for the output frames. It does not change the input frame, but only sets the corresponding property, which affects how the frame is treated by filters/encoders.

 

  1. showpalette

Displays the 256 colors palette of each frame. This filter is only relevant for pal8 pixel format frames.

 

  1. shuffleframes

Reorder and/or duplicate and/or drop video frames.

 

 

  1. shuffleplanes

Reorder and/or duplicate video planes.

 

  1. signalstats

Evaluate various visual metrics that assist in determining issues associated with the digitization of analog video media.

 

 

 

  1. signature

Calculates the MPEG-7 Video Signature. The filter can handle more than one input. In this case the matching between the inputs can be calculated additionally. The filter always passes through the first input. The signature of each stream can be written into a file.

 

 

  1. smartblur

Blurs or sharpens the input video without affecting its contours.

 

  1. sobel

Applies the sobel operator to the input video; the planes to process can be selected.

 

 

 

  1. spp

Apply a simple postprocessing filter that compresses and decompresses the image at several (or - in the case of quality level 6 - all) shifts and average the results.

 

  1. sr

Scale the input by applying one of the super-resolution methods based on convolutional neural networks. Supported models:

Super-resolution based on machine learning (CNN models).

  1. ssim

Computes the SSIM between two input videos. The first input is the main stream, the second is the reference; the results can be written to a file.

 

  1. stereo3d

Converts between different stereoscopic video formats.

  1. streamselect, astreamselect

Selects video or audio streams.

  1. super2xsai

Scales the source by 2x using the Super2xSaI algorithm, which preserves edges while magnifying.

 

  1. swaprect

Swaps two rectangular regions of the video: specify two rectangles, and their contents are exchanged.

 

  1. swapuv

Swaps the U and V planes.

  1. telecine

Applies the telecine process to the video.

 

  1. threshold

Applies a threshold effect to the video. It takes four input streams: the first is the stream to process and the second holds the threshold values; where a value in the first stream is below the threshold, the output pixel is picked from the third stream, otherwise from the fourth.

  1. thumbnail

Selects the most representative frame from sequences of consecutive frames, e.g. to build a thumbnail gallery.
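A minimal sketch (ffmpeg on PATH assumed; testsrc2 stands in for a real input):

```shell
# Keep the most representative frame out of every batch of 25 input frames;
# with 4 seconds at 25 fps this leaves roughly 4 output frames.
ffmpeg -hide_banner -f lavfi -i "testsrc2=size=160x120:rate=25:duration=4" \
       -vf "thumbnail=25" -vsync vfr -f null -
```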

 

 

  1. tile

Combines several chosen video frames into one image (a tiled mosaic).

  1. tinterlace

Perform various types of temporal field interlacing.

Several interlacing modes can be selected.

  1. tmix

Mixes successive video frames together.

  1. tonemap

Tone map colors from different dynamic ranges.

 

  1. tpad

Pads the video temporally (adds frames at the start or the end).

  1. transpose

Transpose rows with columns in the input video and optionally flip it.

 

  1. transpose_npp

Like transpose, but implemented with NVIDIA Performance Primitives (NPP) so it can run inside a CUDA pipeline.

  1. trim

Trims the input so that the output contains only a portion of the input stream.
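A minimal sketch (ffmpeg on PATH assumed; testsrc2 stands in for a real input):

```shell
# Keep only the segment between t=1s and t=2s; setpts shifts the
# timestamps back to zero so the output does not start with a delay.
ffmpeg -hide_banner -f lavfi -i "testsrc2=size=160x120:rate=25:duration=3" \
       -vf "trim=start=1:end=2,setpts=PTS-STARTPTS" -f null -
```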

  1. unpremultiply

Apply alpha unpremultiply effect to input video stream using first plane of second stream as alpha.

 

  1. unsharp

Sharpens or blurs the input video.

  1. uspp

Apply ultra slow/simple postprocessing filter that compresses and decompresses the image at several (or - in the case of quality level 8 - all) shifts and average the results.

 

  1. v360

Converts 360° video between different projection formats.

  1. vaguedenoiser

Applies a wavelet-based denoiser.

  1. vectorscope

Display 2 color component values in the two dimensional graph (which is called a vectorscope).

 

  1. vidstabdetect

Analyze video stabilization/deshaking. Perform pass 1 of 2, see vidstabtransform for pass 2.

 

  1. vidstabtransform

Video stabilization/deshaking: pass 2 of 2, see vidstabdetect for pass 1.

 

  1. vflip

Flips the video vertically.

  1. vfrdet

Detects whether the frame rate is variable.

  1. vignette

Creates or reverses a natural vignetting effect.

  1. vmafmotion

Obtain the average VMAF motion score of a video. It is one of the component metrics of VMAF.

 

  1. vstack

Stacks videos vertically, merging them into one frame. This filter is faster than using overlay and pad.

  1. w3fdif

Deinterlaces the input video (Weston three-field deinterlacing filter).

  1. waveform

Video waveform monitor.

This filter plots color component intensity. By default only luminance is plotted.

  1. weave, doubleweave

The weave filter takes a field-based video input and joins each two sequential fields into a single frame, producing a new double-height clip with half the frame rate and half the frame count.

 

  1. xbr

Applies the high-quality xBR magnification filter, which is designed for scaling up pixel art and follows edge-detection rules.

  1. xmedian

Picks the median pixel value across several input video streams.

  1. xstack

Stack video inputs into custom layout.

 

  1. yadif

Deinterlace the input video ("yadif" means "yet another deinterlacing filter").

 

  1. yadif_cuda

Deinterlace the input video using the yadif algorithm, but implemented in CUDA so that it can work as part of a GPU accelerated pipeline with nvdec and/or nvenc.

 

  1. zoompan

Applies zoom and pan effects.

  1. zscale

Scales the input video using z.lib; requires the extra library at build time; supports colorspace conversion.

 

 

 

 

 

#addroi
#./ffmpeg42  -y -i  ./video_8_24.mp4 -filter_complex "addroi=x=0:y=0:w=200:h=200:qoffset=1[out1];[out1]addroi=x=200:y=200:w=200:h=200:qoffset=1[out2]" -map "[out2]" rec_addroi.mp4
color='-colorspace bt709 -color_range tv -color_primaries bt709 -color_trc bt709'
amplify(){
./ffmpeg42 -hide_banner -t 10 -y -i  ./video_8_24.mp4 -filter_complex "amplify=radius=2:threshold=10:tolerance=5" rec_amplify.mp4
ffplay -hide_banner -i rec_amplify.mp4
}

ass(){
./ffmpeg42 -hide_banner -t 10 -y -i  ./video_8_24.mp4 -filter_complex ass rec_ass.mp4
ffplay -hide_banner -i rec_ass.mp4
}
#ass

atadenoise(){
name=atadenoise
./ffmpeg42 -hide_banner -t 10 -y -i  ./6.mp4 -filter_complex "atadenoise=enable=between(n\,1\,50):0a=0.3:0b=1" rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#atadenoise

avgblur(){
name=avgblur
./ffmpeg42 -hide_banner -t 10 -y -i  ./6.mp4 -filter_complex "avgblur=enable=between(n\,1\,50):sizeX=10" rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
}
#avgblur

bbox(){
name=bbox
./ffmpeg42 -hide_banner -t 10 -y -i  ./6.mp4 -filter_complex "bbox=enable=between(n\,1\,10):min_val=1" rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4  
}
#bbox

bilateral(){
    name=bilateral
./ffmpeg42 -hide_banner -t 10 -y -i  ./6.mp4 -filter_complex "bilateral=enable=between(t\,2\,5)" rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
}
#bilateral

bitplanenoise(){
name=bitplanenoise
./ffmpeg42 -hide_banner -t 10 -y -i  ./6.mp4 -filter_complex "bitplanenoise=filter=1" rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
}
#bitplanenoise

blackdetect(){
name=blackdetect
./ffmpeg42 -hide_banner -t 10 -y -i  ./6.mp4 -filter_complex "blackdetect" rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
}
#blackdetect

blackframe(){
name=blackframe
  ./ffmpeg42 -hide_banner -t 10 -y -i  ./6.mp4 -filter_complex "blackframe" rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
}  
#blackframe

blend(){
name=blend
  ./ffmpeg42 -hide_banner -t 10 -y -i ./6.mp4  -i ./1.mp4  -filter_complex  "blend=all_mode=multiply" rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
} 
#blend

bm3d(){
name=bm3d
  ./ffmpeg42 -hide_banner -t 10 -y -i ./6.mp4    -filter_complex  "bm3d=sigma=3:block=4:bstep=2:group=1:estim=basic" rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
} 
#bm3d

boxblur(){
name=boxblur
  ./ffmpeg42 -hide_banner -t 10 -y -i ./6.mp4    -filter_complex  "boxblur=luma_radius=2:luma_power=1" rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
} 
#boxblur

bwdif(){
name=bwdif
  ./ffmpeg42 -hide_banner -t 10 -y -i ./6.mp4    -filter_complex  "bwdif"  ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
} 
#bwdif

chromahold(){
name=chromahold
  ./ffmpeg42 -hide_banner -t 10 -y -i ./6.mp4    -filter_complex  "chromahold"  ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
} 
#chromahold

chromakey(){
name=chromakey
  ./ffmpeg42 -hide_banner -t 10 -y -i ./6.mp4    -filter_complex  "chromakey=color=black:blend=0.01"  ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
} 
#chromakey

chromashift(){
name=chromashift
  ./ffmpeg42 -hide_banner -t 10 -y -i ./6.mp4    -filter_complex  "chromashift=edge=smear"  ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
} 
#chromashift

ciescope(){
name=ciescope
./ffmpeg42 -hide_banner -t 10 -y -i ./6.mp4   -filter_complex  ciescope=system=rec709:cie=xyy:gamuts=rec709:showwhite=1:gamma=2.2  ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
} 
#ciescope

codecview(){
name=codecview
ffplay -hide_banner   -flags2 export_mvs -i ./1.mp4 -vf codecview=mv_type=fp:qp=1
} 
#codecview

colorbalance(){
name=colorbalance
./ffmpeg42 -hide_banner -t 2 -y -i ./1.mp4 -filter_complex colorbalance=rs=1:rh=1 ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
} 
#colorbalance

convolution(){
name=convolution
./ffmpeg42 -hide_banner -t 3 -y -i ./6.mp4   -filter_complex  convolution="-2 -1 0 -1 1 1 0 1 2:-2 -1 0 -1 1 1 0 1 2:-2 -1 0 -1 1 1 0 1 2:-2 -1 0 -1 1 1 0 1 2" ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
} 
#convolution

convolve(){
name=convolve
./ffmpeg42 -hide_banner -t 3 -y -i ./6.mp4   -filter_complex  convolve ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
} 
#convolve

crop(){
name=crop
./ffmpeg42 -hide_banner -t 3 -y -i ./1.mp4   -filter_complex  crop=w=240:h=240 ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
} 
#crop

cropdetect(){
name=cropdetect
./ffmpeg42 -hide_banner -t 3 -y -i ./bt709_2.mp4   -filter_complex  cropdetect ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
} 
#cropdetect

datascope(){
name=datascope
./ffmpeg42 -hide_banner -t 3 -y -i ./3.mp4   -filter_complex  datascope=mode=color2 ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
} 
#datascope

dctdnoiz(){
name=dctdnoiz
./ffmpeg42 -hide_banner -t 3 -y -i ./3.mp4   -filter_complex  dctdnoiz ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
} 
#dctdnoiz

deband(){
name=deband
./ffmpeg42 -hide_banner -t 3 -y -i ./1.mp4   -filter_complex  deband ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
} 
#deband

decimate(){
name=decimate
./ffmpeg42 -hide_banner -t 3 -y -i ./1.mp4   -filter_complex  decimate ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
} 
#decimate

dedot(){
name=dedot
./ffmpeg42 -hide_banner -t 3 -y -i ./1.mp4   -filter_complex  dedot ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
} 
#dedot

deflicker(){
name=deflicker
./ffmpeg42 -hide_banner -t 3 -y -i ./1.mp4   -filter_complex  deflicker ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
} 
#deflicker

dejudder(){
name=dejudder
./ffmpeg42 -hide_banner -t 3 -y -i ./1.mp4   -filter_complex  dejudder ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
} 
#dejudder

delogo(){
name=delogo
./ffmpeg42 -hide_banner -t 3 -y -i ./1.mp4   -filter_complex  delogo=x=1:y=1:w=100:h=100 ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
} 
#delogo

derain(){
name=derain
./ffmpeg42 -hide_banner -t 3 -y -i ./1.mp4   -filter_complex  derain ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
} 
#derain

deshake(){
name=deshake
./ffmpeg42 -hide_banner -t 3 -y -i ./1.mp4   -filter_complex  deshake ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
} 
#deshake

despill(){
name=despill
./ffmpeg42 -hide_banner -t 3 -y -i ./1.mp4   -filter_complex  despill ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
} 
#despill

detelecine(){
name=detelecine
./ffmpeg42 -hide_banner -t 3 -y -i ./1.mp4   -filter_complex  detelecine ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
} 
#detelecine

drawbox(){
name=drawbox
./ffmpeg42 -hide_banner -t 3 -y -i ./1.mp4   -filter_complex  drawbox=x=10:y=10:w=100:h=100:color=red@0.5:t=fill ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
} 
#drawbox 

edgedetect(){
name=edgedetect
./ffmpeg42 -hide_banner -t 3 -y -i ./3.mp4   -filter_complex  "edgedetect=enable=between(t\,0\,2)"  ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
} 
#edgedetect

framepack(){
name=framepack
./ffmpeg42 -hide_banner  -y -i ./1_smpte240m_no_cp_trc.mp4  -i ./1_smpte240m.mp4  -filter_complex  "framepack=frameseq"  ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
} 
#framepack 

framestep(){
name=framestep
./ffmpeg42 -hide_banner  -y -i ./1_smpte240m_no_cp_trc.mp4   -filter_complex  "framestep=step=10"  ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
} 
#framestep

frei0r(){
name=frei0r
./ffmpeg42 -hide_banner  -y -i ./1_smpte240m_no_cp_trc.mp4   -filter_complex  "frei0r"  ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
} 
#frei0r


hysteresis(){
name=hysteresis
./ffmpeg42 -hide_banner -t 2 -y -i ./1_smpte240m_no_cp_trc.mp4 -t 2 -i ./1_smpte240m_no_cp_trc.mp4  -filter_complex  "hysteresis"  ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
} 
#hysteresis

inflate(){
name=inflate
./ffmpeg42 -hide_banner -t 2 -y -i ./1_smpte240m_no_cp_trc.mp4   -filter_complex  "inflate"  ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
} 
#inflate
 
loop(){
name=loop
./ffmpeg42 -hide_banner  -y -i ./1_smpte240m_no_cp_trc.mp4   -filter_complex  "loop=loop=30:start=60:size=3"  ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
}
#loop

lut1d(){
name=lut1d
./ffmpeg42 -hide_banner -t 2 -y -i ./1_smpte240m_no_cp_trc.mp4   -filter_complex  "lut1d"  ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
}
#lut1d

maskfun(){
name=maskfun
./ffmpeg42 -hide_banner -t 2 -y -i ./1_smpte240m_no_cp_trc.mp4   -filter_complex  "maskfun=low=20:high=230:planes=1"  ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
}
#maskfun

mcdeint(){
name=mcdeint
./ffmpeg42 -hide_banner -t 2 -y -i ./1_smpte240m_no_cp_trc.mp4   -filter_complex  "mcdeint"  ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
}
#mcdeint

median(){
name=median
./ffmpeg42 -hide_banner -t 2 -y -i ./1_smpte240m_no_cp_trc.mp4   -filter_complex  "median=radius=50"  ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
}
#median

minterpolate(){
name=minterpolate
./ffmpeg42 -hide_banner  -y -i ./1_smpte240m_no_cp_trc.mp4   -filter_complex  "minterpolate=fps=60:mi_mode=mci"  ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
}
#minterpolate

mix(){
name=mix
./ffmpeg42 -hide_banner  -y -t 2 -i ./1_smpte240m_no_cp_trc.mp4 -t 2 -i ./1.mp4  -filter_complex  "mix=weights=2 4 "  ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
}
#mix

negate(){
name=negate
./ffmpeg42 -hide_banner  -y  -t 2 -i ./6.mp4  -filter_complex  "negate=negate_alpha=1"  ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
}
#negate


nlmeans(){
name=nlmeans
./ffmpeg42 -hide_banner  -y  -t 2 -i ./6.mp4  -filter_complex  "nlmeans"  ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
}
#nlmeans

noformat(){
name=noformat
./ffmpeg42 -hide_banner  -y  -t 2 -i ./6.mp4  -filter_complex  "noformat=yuv420p"  ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
}
#noformat

noise(){
name=noise
./ffmpeg42 -hide_banner  -y   -i ./6.mp4  -filter_complex  "loop=loop=30:start=1:size=1,noise=c0_seed=123457:c0_strength=50:c0f=t"  ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
}
#noise

null(){
name=null
./ffmpeg42 -hide_banner  -y   -i ./6.mp4  -filter_complex  "null"  ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
}
#null


oscilloscope(){
name=Oscilloscope
./ffmpeg42 -hide_banner  -y   -i ./33_709_pix480.mp4  -filter_complex  "oscilloscope"  ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
}
#oscilloscope

overlay(){
name=overlay
./ffmpeg42 -hide_banner  -y   -i ./33_709_pix480.mp4 -i ./1.mp4 -filter_complex  "overlay"  ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
}
#overlay


owdenoise(){
name=owdenoise
./ffmpeg42 -hide_banner  -y  -i ./1.mp4 -filter_complex  "owdenoise=depth=15:ls=500"  ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
}
#owdenoise

pad(){
name=pad
./ffmpeg42 -hide_banner  -y  -i ./1.mp4 -filter_complex  "scale=-2:480,pad=w=1080:h=720:x=30:y=30:color=red"  ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
}
#pad

palettegen(){
name=palettegen
./ffmpeg42 -hide_banner  -y  -i ./1.mp4 -filter_complex  "palettegen"  ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
}
#palettegen

paletteuse(){
name=paletteuse
./ffmpeg42 -hide_banner  -y  -i ./6.mp4  -i rec_palettegen.png -filter_complex  "paletteuse"  ${color} rec_${name}.gif
ffplay -hide_banner -i rec_${name}.gif
}
#paletteuse
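The two functions above run `palettegen` and `paletteuse` as separate passes through an intermediate PNG. They can also be chained in one command with a `split` graph, which avoids the temporary palette file (a sketch, assuming the same `./6.mp4` input):

```shell
# generate the palette and apply it in a single pass
./ffmpeg42 -hide_banner -y -t 5 -i ./6.mp4 \
  -filter_complex "[0:v]split[a][b];[a]palettegen[pal];[b][pal]paletteuse" rec_palette_onepass.gif
```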

perspective(){
name=perspective
./ffmpeg42 -hide_banner  -y  -i ./6.mp4  -filter_complex  "perspective"  ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#perspective

phase(){
name=phase
./ffmpeg42 -hide_banner  -y  -i ./6.mp4  -filter_complex  "phase"  ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#phase

photosensitivity(){
name=photosensitivity
./ffmpeg42 -hide_banner  -y  -i ./6.mp4   -filter_complex  "photosensitivity"  ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}

#photosensitivity

pixdesctest(){
name=pixdesctest
./ffmpeg42 -hide_banner  -y  -i ./1.mp4   -filter_complex  "format=monow,pixdesctest"  ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#pixdesctest

pixscope(){
name=pixscope
./ffmpeg42 -hide_banner  -y  -i ./1.mp4   -filter_complex  "pixscope=x=40/720:y=90/1280:w=80:h=80:wx=1:wy=0"  ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#pixscope

prewitt(){
name=prewitt
./ffmpeg42 -hide_banner  -y  -i ./1.mp4   -filter_complex  "prewitt=planes=0xf"  ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#prewitt

pseudocolor(){
name=pseudocolor
./ffmpeg42 -hide_banner  -y  -i ./1.mp4   -filter_complex  pseudocolor="'if(between(val,10,200),20,-1)'"  ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#pseudocolor

qp(){
name=qp
./ffmpeg42 -hide_banner  -y  -i ./1.mp4   -filter_complex  qp=100  ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#qp

setparams(){
name=setparams
./ffmpeg42 -hide_banner  -y  -i ./1.mp4   -filter_complex  setparams=field_mode=prog:range=tv:color_primaries=bt470m:color_trc=bt470m:colorspace=bt470bg   rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#setparams

showpalette(){
name=showpalette
./ffmpeg42 -hide_banner  -y  -i ./1.mp4   -filter_complex  showpalette   rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#showpalette

random(){
name=random
./ffmpeg42 -hide_banner  -y  -i ./1.mp4   -filter_complex  random   rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#random

removegrain(){
name=removegrain
./ffmpeg42 -hide_banner  -y  -i ./1.mp4   -filter_complex  removegrain   rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#removegrain

reverse(){
name=reverse
./ffmpeg42 -hide_banner  -t 5 -y  -i ./112334.mp4    -filter_complex  trim=end=5,reverse   rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#reverse

roberts(){
name=roberts
./ffmpeg42 -hide_banner  -t 5 -y  -i ./112334.mp4    -filter_complex  roberts   rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#roberts

shuffleplanes(){
name=shuffleplanes
./ffmpeg42 -hide_banner  -t 5 -y  -i ./112334.mp4    -filter_complex  shuffleplanes=1   rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#shuffleplanes

signature(){
name=signature
./ffmpeg42 -hide_banner  -t 5 -y  -i ./112334.mp4    -filter_complex  signature=filename=signature.bin  -map 0:v -f null -  
}
#signature

smartblur(){
name=smartblur
./ffmpeg42 -hide_banner  -t 5 -y  -i ./112334.mp4    -filter_complex  smartblur=lr=5:ls=-1,smartblur=lr=5:ls=0.2   rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#smartblur

sobel(){
name=sobel
./ffmpeg42 -hide_banner  -t 5 -y  -i ./112334.mp4    -filter_complex  sobel=planes=1   rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#sobel

spp(){
name=spp
./ffmpeg42 -hide_banner  -t 5 -y  -i ./112334.mp4    -filter_complex  spp   rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#spp

sr(){
name=sr
./ffmpeg42 -hide_banner  -t 5 -y  -i ./112334.mp4    -filter_complex  sr=dnn_backend=native:scale_factor=2   rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#sr

super2xsai(){
name=super2xsai
./ffmpeg42 -hide_banner  -t 5 -y  -i ./112334.mp4    -filter_complex  super2xsai   rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#super2xsai

swaprect(){
name=swaprect
./ffmpeg42 -hide_banner  -t 5 -y  -i ./1.mp4    -filter_complex  swaprect=w=20:h=40:x1=120:y1=240:x2=150:y2=320   rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#swaprect

swapuv(){
name=swapuv
./ffmpeg42 -hide_banner  -t 5 -y  -i ./1.mp4    -filter_complex  swapuv   rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#swapuv

telecine(){
name=telecine
./ffmpeg42 -hide_banner  -t 5 -y  -i ./1.mp4    -filter_complex  telecine=first_field=t:pattern=24   rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#telecine

threshold(){
name=threshold
./ffmpeg42 -hide_banner  -t 5 -y  -i ./1.mp4    -filter_complex  threshold   rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#threshold

thumbnail(){
name=thumbnail
./ffmpeg42 -hide_banner  -t 5 -y  -i ./1.mp4    -filter_complex  thumbnail=20   rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#thumbnail

tile(){
name=tile
ffmpeg  -i ./1.mp4  -vf tile=3x2:nb_frames=5:padding=7:margin=2  -an -vsync 0 keyframes%03d.png
#ffplay -hide_banner -i rec_${name}.mp4
}
#tile

tinterlace(){
name=tinterlace
ffmpeg  -y -i ./1.mp4  -filter_complex  tinterlace=0  -an  rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#tinterlace

tmix(){
name=tmix
ffmpeg  -y -i ./1.mp4  -filter_complex  tmix=4  -an  rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#tmix

tpad(){
name=tpad
ffmpeg  -y -i ./1.mp4  -filter_complex  tpad=10  -an  rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#tpad

vfrdet(){
name=vfrdet
ffmpeg  -y -i ./1.mp4  -filter_complex  vfrdet  -an  rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#vfrdet

vignette(){
name=vignette
ffmpeg  -y -i ./1.mp4  -filter_complex  vignette  -an  rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#vignette

vmafmotion(){
name=vmafmotion
ffmpeg  -y -i ./1.mp4  -filter_complex  vmafmotion  -an  rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#vmafmotion

vstack(){
name=vstack
ffmpeg  -y -i ./1.mp4 -i 6.mp4  -filter_complex  vstack  -an  rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#vstack

waveform(){
name=waveform
ffmpeg  -y -i ./1.mp4   -filter_complex  waveform  -an  rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#waveform

xbr(){
name=xbr
ffmpeg  -y -i ./6.mp4   -filter_complex  xbr  -an  rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#xbr

xmedian(){
name=xmedian
ffmpeg  -y -i ./6.mp4   -filter_complex  xmedian  -an  rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#xmedian

zoompan(){
name=zoompan
ffmpeg  -y -i ./6.mp4   -filter_complex  zoompan  -an  rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#zoompan
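With no options, `zoompan` leaves the frame essentially unchanged. A more illustrative invocation (a sketch against the same `./6.mp4` input) uses the filter's documented `zoom`/`iw`/`ih` expression variables to animate a centered zoom:

```shell
# slow zoom-in to 1.5x over 125 output frames, kept centered on the frame
ffmpeg -y -i ./6.mp4 -filter_complex \
  "zoompan=z='min(zoom+0.0015,1.5)':d=125:x='iw/2-(iw/zoom/2)':y='ih/2-(ih/zoom/2)'" -an rec_zoompan_zoom.mp4
```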
