Adding a custom filter to FFmpeg

Preface

FFmpeg's excellence lies in its powerful features and well-designed framework, and the filter system is one example of both. The built-in filters can not only crop video or overlay a logo, but also be combined into filter chains.
Better still, FFmpeg makes it easy to plug in filters of your own, and this extensibility is what makes it so valuable in real applications.

Enough preamble; down to business!
The first part of this article is my translation of, and commentary on, a tutorial from the multimedia wiki, which does not, however, explain how to add a finished filter to FFmpeg so it can be compiled and run.
The second part describes, from my own practice, how to add a filter to FFmpeg and compile and run it (against ffmpeg-0.8.5).
The last part attaches the source code of the example filter (ffmpeg-0.8.5).


Chapter 1:
FFmpeg filter HOWTO

(This chapter is taken from http://wiki.multimedia.cx/index.php?title=FFmpeg_filter_howto)

This page is meant as an introduction of writing filters for libavfilter. This is a work in progress, but should at least point you in the right direction for writing simple filters.

Contents
1. Definition of a filter
      1.1 AVFilter
      1.2 AVFilterPad
2. Picture buffers
      2.1  Reference counting
      2.2 Permissions
3. Filter Links
4. Writing a simple filter
      4.1 Default filter entry points
      4.2 The vf_negate filter
Definition of a filter

AVFilter

All filters are described by an AVFilter structure. This structure gives information needed to initialize the filter, and information on the entry points into the filter code. This structure is declared in libavfilter/avfilter.h


    typedef struct
    {
        char *name; ///< filter name

        int priv_size; ///< size of private data to allocate for the filter

        int (*init)(AVFilterContext *ctx, const char *args, void *opaque);
        void (*uninit)(AVFilterContext *ctx);

        int (*query_formats)(AVFilterContext *ctx);

        const AVFilterPad *inputs;  ///< NULL terminated list of inputs. NULL if none
        const AVFilterPad *outputs; ///< NULL terminated list of outputs. NULL if none
    } AVFilter;

The query_formats function sets the in_formats member of connected output links, and the out_formats member of connected input links, described below under AVFilterLink.

(In other words, query_formats declares which pixel formats, such as YUV420P or YUV422P, the filter accepts on its inputs and produces on its outputs.)

AVFilterPad

Let's take a quick look at the AVFilterPad structure, which is used to describe the inputs and outputs of the filter. This is also defined in libavfilter/avfilter.h:


    typedef struct AVFilterPad
    {
        char *name;
        int type;

        int min_perms;
        int rej_perms;

        void (*start_frame)(AVFilterLink *link, AVFilterPicRef *picref);
        AVFilterPicRef *(*get_video_buffer)(AVFilterLink *link, int perms);
        void (*end_frame)(AVFilterLink *link);
        void (*draw_slice)(AVFilterLink *link, int y, int height);

        int (*request_frame)(AVFilterLink *link);

        int (*config_props)(AVFilterLink *link);
    } AVFilterPad;

The actual definition in the header file has doxygen comments describing each entry point, its purpose, and what type of pads it is relevant for. These fields are relevant for all pads: 

name          Name of the pad. No two inputs should have the same name, and no two outputs should have the same name.
type          Only AV_PAD_VIDEO currently.
config_props  Handles configuration of the link connected to the pad.

Fields only relevant to input pads are:

min_perms         Minimum permissions required on a picture received as input.
rej_perms         Permissions not accepted on pictures received as input.
start_frame       Called when a frame is about to be given as input.
draw_slice        Called when a slice of frame data has been given as input. (This is the main processing function of the filter.)
end_frame         Called when the input frame has been completely sent.
get_video_buffer  Called by the previous filter to request memory for a picture.

Fields only relevant to output pads are:

request_frame     Requests that the filter output a frame.
Picture buffers

Reference counting

All pictures in the filter system are reference counted. This means that there is a picture buffer with memory allocated for the image data, and various filters can own a reference to the buffer. When a reference is no longer needed, its owner frees the reference. When the last reference to a picture buffer is freed, the filter system automatically frees the picture buffer. 

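As a rough illustration, the reference-counting scheme described above can be sketched like this. This is a hypothetical minimal version; the names (PicBuf, picbuf_ref, ...) are illustrative, and the real AVFilterPicRef machinery in libavfilter is more involved:

```c
#include <stdlib.h>

/* A picture buffer with a reference count (illustrative only). */
typedef struct PicBuf {
    unsigned char *data; /* image data */
    int refcount;        /* number of live references */
} PicBuf;

PicBuf *picbuf_alloc(size_t size)
{
    PicBuf *buf = malloc(sizeof(*buf));
    buf->data = calloc(1, size);
    buf->refcount = 1;          /* the allocator owns the first reference */
    return buf;
}

PicBuf *picbuf_ref(PicBuf *buf)
{
    buf->refcount++;            /* another filter now holds a reference */
    return buf;
}

/* Drop one reference; returns 1 if this freed the underlying buffer. */
int picbuf_unref(PicBuf *buf)
{
    if (--buf->refcount == 0) { /* last reference gone: free the pixels */
        free(buf->data);
        free(buf);
        return 1;
    }
    return 0;
}
```

Each filter releases only its own reference; the buffer itself disappears automatically when the count hits zero.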

Permissions

The upshot of multiple filters having references to a single picture is that they will all want some level of access to the image data. It should be obvious that if one filter expects to be able to read the image data without it changing that no other filter should write to the image data. The permissions system handles this. 


In most cases, when a filter prepares to output a frame, it will request a buffer from the filter to which it will be outputting. It specifies the minimum permissions it needs to the buffer, though it may be given a buffer with more permissions than the minimum it requested. 


When it wants to pass this buffer to another filter as output, it creates a new reference to the picture, possibly with a reduced set of permissions. This new reference will be owned by the filter receiving it. 


So, for example, for a filter which drops frames if they are similar to the last frame it output, it would want to keep its own reference to a picture after outputting it, and make sure that no other filter modified the buffer either. It would do this by requesting the permissions AV_PERM_READ|AV_PERM_WRITE|AV_PERM_PRESERVE for itself, and removing the AV_PERM_WRITE permission from any references it gave to other filters.

(I found this paragraph confusing at first. The point is that such a frame-dropping filter requests AV_PERM_READ|AV_PERM_WRITE|AV_PERM_PRESERVE on its own reference, so the frame it keeps cannot change under it, and strips AV_PERM_WRITE from every reference it hands downstream, so no later filter can modify the shared buffer.)

The available permissions are:

AV_PERM_READ      Can read the image data.
AV_PERM_WRITE     Can write to the image data.
AV_PERM_PRESERVE  Can assume that the image data will not be modified by other filters; this means no other filter should hold the AV_PERM_WRITE permission.
AV_PERM_REUSE     The filter may output the same buffer multiple times, but the image data may not be changed between the different outputs.
AV_PERM_REUSE2    The filter may output the same buffer multiple times, and may modify the image data between outputs.
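Since permissions are single bits combined with bitwise OR, granting a reduced set to a downstream filter is just a mask operation. A small sketch; the numeric flag values here are assumptions for illustration (the real constants live in libavfilter/avfilter.h):

```c
/* Assumed flag values, for illustration only. */
enum {
    AV_PERM_READ     = 0x01,
    AV_PERM_WRITE    = 0x02,
    AV_PERM_PRESERVE = 0x04,
};

/* Strip WRITE from a reference handed downstream, as the
 * frame-dropping example above does. */
int downstream_perms(int owner_perms)
{
    return owner_perms & ~AV_PERM_WRITE;
}

/* PRESERVE is only meaningful when no other reference can WRITE. */
int preserve_is_safe(int my_perms, int other_perms)
{
    return (my_perms & AV_PERM_PRESERVE) && !(other_perms & AV_PERM_WRITE);
}
```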
Filter Links

A filter's inputs and outputs are connected to those of another filter through the AVFilterLink structure:

    typedef struct AVFilterLink
    {
        AVFilterContext *src; ///< source filter
        unsigned int srcpad;  ///< index of the output pad on the source filter

        AVFilterContext *dst; ///< dest filter
        unsigned int dstpad;  ///< index of the input pad on the dest filter

        int w;                    ///< agreed upon image width
        int h;                    ///< agreed upon image height
        enum PixelFormat format;  ///< agreed upon image colorspace

        AVFilterFormats *in_formats;  ///< formats supported by source filter
        AVFilterFormats *out_formats; ///< formats supported by destination filter

        AVFilterPicRef *srcpic;

        AVFilterPicRef *cur_pic;
        AVFilterPicRef *outpic;
    } AVFilterLink;

The src and dst members indicate the filters at the source and destination ends of the link, respectively. The srcpad indicates the index of the output pad on the source filter to which the link is connected. Likewise, the dstpad indicates the index of the input pad on the destination filter.

The in_formats member points to a list of formats supported by the source filter, while the out_formats member points to a list of formats supported by the destination filter. The AVFilterFormats structure used to store the lists is reference counted, and in fact tracks its references (see the comments for the AVFilterFormats structure in libavfilter/avfilter.h for more information on how colorspace negotiation works and why this is necessary). The upshot is that if a filter provides pointers to the same list on multiple input/output links, those links will be forced to use the same format as each other.

When two filters are connected, they need to agree upon the dimensions of the image data they'll be working with, and the format that data is in. Once this has been agreed upon, these parameters are stored in the link structure.

The srcpic member is used internally by the filter system, and should not be accessed directly.

The cur_pic member is for the use of the destination filter. When a frame is currently being sent over the link (ie. starting from the call to start_frame() and ending with the call to end_frame()), this contains the reference to the frame which is owned by the destination filter.

The outpic member is described in the following tutorial on writing a simple filter. 

Writing a simple filter

Default filter entry points

Because the majority of filters that will probably be written take exactly one input, produce exactly one output, and output one frame for every frame received as input, the filter system provides a number of default entry points to ease the development of such filters.

request_frame()     Requests a frame from the previous filter in the chain.
query_formats()     Sets the list of supported formats on all input pads such that all links must use the same format, from a default list containing most YUV and RGB/BGR formats.
start_frame()       Requests a buffer to store the output frame in. A reference to this buffer is stored in the outpic member of the link hooked to the filter's output. The next filter's start_frame() callback is called and given a reference to this buffer.
end_frame()         Calls the next filter's end_frame() callback. Frees the reference in the outpic member of the output link, if it was set (i.e. if the default start_frame() is used). Frees the cur_pic reference in the input link.
get_video_buffer()  Returns a buffer with the AV_PERM_READ permission in addition to all the requested permissions.
config_props() on an output pad  Sets the image dimensions for the output link to the same as on the filter's input.
The vf_negate filter

Having looked at the data structures and callback functions involved, let's take a look at an actual filter. The vf_negate filter inverts the colors in a video. It has one input, and one output, and outputs exactly one frame for every input frame. In this way, it's fairly typical, and can take advantage of many of the default callback implementations offered by the filter system.

First, let's take a look at the AVFilter structure at the bottom of the libavfilter/vf_negate.c file:

    AVFilter avfilter_vf_negate =
    {
        .name = "negate",

        .priv_size = sizeof(NegContext),

        .query_formats = query_formats,

        .inputs  = (AVFilterPad[]) {{ .name         = "default",
                                      .type         = AV_PAD_VIDEO,
                                      .draw_slice   = draw_slice,
                                      .config_props = config_props,
                                      .min_perms    = AV_PERM_READ, },
                                    { .name = NULL }},
        .outputs = (AVFilterPad[]) {{ .name = "default",
                                      .type = AV_PAD_VIDEO, },
                                    { .name = NULL }},
    };

Here, you can see that the filter is named "negate," and it needs sizeof(NegContext) bytes of data to store its context. In the list of inputs and outputs, a pad whose name is set to NULL indicates the end of the list, so this filter has exactly one input and one output. If you look closely at the pad definitions, you will see that fairly few callback functions are actually specified. Because of the simplicity of the filter, the defaults can do most of the work for us.
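The NULL-name sentinel means the pad arrays carry their own length, so walking them is a simple loop. A minimal sketch of the counting logic; PadStub here is a stand-in for AVFilterPad, not the real structure:

```c
#include <stddef.h>

/* Stand-in for AVFilterPad: only the name matters for termination. */
typedef struct PadStub {
    const char *name;
} PadStub;

/* Count pads up to the entry whose name is NULL,
 * the way the filter system walks the inputs/outputs arrays. */
int count_pads(const PadStub *pads)
{
    int n = 0;
    while (pads[n].name != NULL)
        n++;
    return n;
}
```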

Let us take a look at the callback function it does define. 

query_formats()
    static int query_formats(AVFilterContext *ctx)
    {
        avfilter_set_common_formats(ctx,
            avfilter_make_format_list(10,
                    PIX_FMT_YUV444P, PIX_FMT_YUV422P, PIX_FMT_YUV420P,
                    PIX_FMT_YUV411P, PIX_FMT_YUV410P,
                    PIX_FMT_YUVJ444P, PIX_FMT_YUVJ422P, PIX_FMT_YUVJ420P,
                    PIX_FMT_YUV440P, PIX_FMT_YUVJ440P));
        return 0;
    }

This calls avfilter_make_format_list(). This function takes as its first parameter the number of formats which will follow as the remaining parameters. The return value is an AVFilterFormats structure containing the given formats. The avfilter_set_common_formats() function which this structure is passed to sets all connected links to use this same list of formats, which causes all the filters to use the same format after negotiation is complete. As you can see, this filter supports a number of planar YUV colorspaces, including JPEG YUV colorspaces (the ones with a 'J' in the names). 
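The count-then-varargs calling convention of avfilter_make_format_list(10, ...) can be sketched as follows. make_format_list here is a hypothetical stand-in that collects the formats into a -1-terminated heap array, with -1 playing the role of PIX_FMT_NONE:

```c
#include <stdarg.h>
#include <stdlib.h>

/* Collect `count` format ids into an array terminated by -1
 * (standing in for PIX_FMT_NONE). Illustrative only. */
int *make_format_list(int count, ...)
{
    int *list = malloc((count + 1) * sizeof(*list));
    va_list ap;

    va_start(ap, count);
    for (int i = 0; i < count; i++)
        list[i] = va_arg(ap, int); /* one format id per variadic argument */
    va_end(ap);

    list[count] = -1;              /* sentinel marking the end of the list */
    return list;
}
```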

config_props() on an input pad

The config_props() on an input pad is responsible for verifying that the properties of the input pad are supported by the filter, and to make any updates to the filter's context which are necessary for the link's properties.

TODO: quick explanation of YUV colorspaces, chroma subsampling, difference in range of YUV and JPEG YUV.

Let's take a look at the way in which this filter stores its context: 

    typedef struct
    {
        int offY, offUV;
        int hsub, vsub;
    } NegContext;

That's right. The priv_size member of the AVFilter structure tells the filter system how many bytes to reserve for this structure. The hsub and vsub members are used for chroma subsampling, and the offY and offUV members are used for handling the difference in range between YUV and JPEG YUV. Let's see how these are set in the input pad's config_props: 

    static int config_props(AVFilterLink *link)
    {
        NegContext *neg = link->dst->priv;

        avcodec_get_chroma_sub_sample(link->format, &neg->hsub, &neg->vsub);

        switch(link->format) {
        case PIX_FMT_YUVJ444P:
        case PIX_FMT_YUVJ422P:
        case PIX_FMT_YUVJ420P:
        case PIX_FMT_YUVJ440P:
            neg->offY =
            neg->offUV = 0;
            break;
        default:
            neg->offY = -4;
            neg->offUV = 1;
        }

        return 0;
    }

This simply calls avcodec_get_chroma_sub_sample() to get the chroma subsampling shift factors, and stores those in the context. It then stores a set of offsets for compensating for different luma/chroma value ranges for JPEG YUV, and a different set of offsets for other YUV colorspaces. It returns zero to indicate success, because there are no possible input cases which this filter cannot handle. 
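The shift factors turn chroma-plane arithmetic into right shifts. For example, YUV420P halves chroma in both directions (hsub = vsub = 1) while YUV422P halves it only horizontally (hsub = 1, vsub = 0), which is what avcodec_get_chroma_sub_sample() reports for those formats. A sketch of how the filter uses them:

```c
/* Compute chroma plane dimensions from the luma dimensions and the
 * subsampling shift factors, as the filter does with neg->hsub/vsub. */
void chroma_dims(int w, int h, int hsub, int vsub, int *cw, int *ch)
{
    *cw = w >> hsub; /* chroma width  */
    *ch = h >> vsub; /* chroma height */
}
```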

draw_slice()

Finally, the function which actually does the processing for the filter, draw_slice(): 

    static void draw_slice(AVFilterLink *link, int y, int h)
    {
        NegContext *neg = link->dst->priv;
        AVFilterPicRef *in  = link->cur_pic;
        AVFilterPicRef *out = link->dst->outputs[0]->outpic;
        uint8_t *inrow, *outrow;
        int i, j, plane;

        /* luma plane */
        inrow  = in-> data[0] + y * in-> linesize[0];
        outrow = out->data[0] + y * out->linesize[0];
        for(i = 0; i < h; i ++) {
            for(j = 0; j < link->w; j ++)
                outrow[j] = 255 - inrow[j] + neg->offY;
            inrow  += in-> linesize[0];
            outrow += out->linesize[0];
        }

        /* chroma planes */
        for(plane = 1; plane < 3; plane ++) {
            inrow  = in-> data[plane] + (y >> neg->vsub) * in-> linesize[plane];
            outrow = out->data[plane] + (y >> neg->vsub) * out->linesize[plane];

            for(i = 0; i < h >> neg->vsub; i ++) {
                for(j = 0; j < link->w >> neg->hsub; j ++)
                    outrow[j] = 255 - inrow[j] + neg->offUV;
                inrow  += in-> linesize[plane];
                outrow += out->linesize[plane];
            }
        }

        avfilter_draw_slice(link->dst->outputs[0], y, h);
    }

The y parameter indicates the top of the current slice, and the h parameter the slice's height. Areas of the image outside this slice should not be assumed to be meaningful (though a method to allow this assumption in order to simplify boundary cases for some filters is coming in the future).

This sets inrow to point to the beginning of the first row of the slice in the input, and outrow similarly for the output. Then, for each row, it loops through all the pixels, subtracting them from 255, and adding the offset which was determined in config_props() to account for different value ranges.

It then does the same thing for the chroma planes. Note how the width and height are shifted right to account for the chroma subsampling.

Once the drawing is completed, the slice is sent to the next filter by calling avfilter_draw_slice().
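The per-row pointer walk matters because linesize may be larger than the visible width (rows are padded for alignment). A toy version of the luma loop above, with `off` standing in for neg->offY:

```c
/* Negate w pixels per row over h rows, stepping by the row strides,
 * mirroring the luma loop of vf_negate. */
void negate_rows(const unsigned char *in, int in_linesize,
                 unsigned char *out, int out_linesize,
                 int w, int h, int off)
{
    for (int i = 0; i < h; i++) {
        for (int j = 0; j < w; j++)
            out[j] = (unsigned char)(255 - in[j] + off);
        in  += in_linesize;  /* advance one padded row in the input  */
        out += out_linesize; /* advance one padded row in the output */
    }
}
```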

Chapter 2:

Adding the filter to FFmpeg, building, and running it

1. Declare the filter's license requirement in configure

    # filters
    ...
    tnegate_filter_deps="gpl"

2. Register the custom filter in libavfilter/allfilters.c

    void avfilter_register_all(void)
    {
        ...

        REGISTER_FILTER (TNEGATE, tnegate, vf);
        ...
    }

3. Add the filter object to libavfilter/Makefile

    OBJS = allfilters.o    \
           avfilter.o      \
           avfiltergraph.o \
           defaults.o      \
           drawutils.o     \
           formats.o       \
           graphparser.o   \

    OBJS-$(CONFIG_AVCODEC) += avcodec.o
    ...
    OBJS-$(CONFIG_TNEGATE_FILTER) += vf_tnegate.o

4. Configure the build

    ./configure \
    --enable-gpl --enable-nonfree --enable-version3 \
    ...
    --enable-avfilter --enable-filter=movie \
    --enable-filter=tnegate

5. Build and run

    # make
    # ./ffmpeg -i input.flv -vf "tnegate" -y output.flv

Chapter 3:

Source of the negate filter

libavfilter/vf_tnegate.c:

    #include "libavutil/eval.h"
    #include "libavutil/opt.h"
    #include "libavutil/pixdesc.h"
    #include "libavcodec/avcodec.h"
    #include "avfilter.h"

    typedef struct
    {
        int hsub, vsub;  // chroma subsampling shift factors
        int offY, offUV; // offsets for the JPEG/normal YUV range difference
    } NegContext;

    static int tnegate_config_props(AVFilterLink *link)
    {
        NegContext *neg = link->dst->priv;

        avcodec_get_chroma_sub_sample(link->format, &neg->hsub, &neg->vsub);
        switch(link->format)
        {
          case PIX_FMT_YUVJ444P:
          case PIX_FMT_YUVJ422P:
          case PIX_FMT_YUVJ420P:
          case PIX_FMT_YUVJ440P:
            neg->offY =
            neg->offUV = 0;
            break;
          default:
            neg->offY  = -4;
            neg->offUV = 1;
        }
        return 0;
    }

    static void tnegate_draw_slice(AVFilterLink *link, int y, int h, int slice_dir)
    {
        NegContext *neg = link->dst->priv;
        AVFilterBufferRef *in  = link->cur_buf;
        AVFilterBufferRef *out = link->dst->outputs[0]->out_buf;
        unsigned char *inrow, *outrow;
        int i, j, plane;

        /* luma plane */
        inrow  = in-> data[0] + y * in-> linesize[0]; // first row of the slice
        outrow = out->data[0] + y * out->linesize[0];

        for(i = 0; i < h; i++)
        {
            for(j = 0; j < link->w; j++)
                outrow[j] = 255 - inrow[j] + neg->offY;
            inrow  += in-> linesize[0];
            outrow += out->linesize[0];
        }

        /* chroma planes */
        for(plane = 1; plane < 3; plane++)
        {
            inrow  = in-> data[plane] + (y >> neg->vsub) * in-> linesize[plane];
            outrow = out->data[plane] + (y >> neg->vsub) * out->linesize[plane];

            for(i = 0; i < (h >> neg->vsub); i++)
            {
                for(j = 0; j < (link->w >> neg->hsub); j++)
                    outrow[j] = 255 - inrow[j] + neg->offUV;

                inrow  += in-> linesize[plane];
                outrow += out->linesize[plane];
            }
        }
        avfilter_draw_slice(link->dst->outputs[0], y, h, slice_dir);
    }

    static int tnegate_query_formats(AVFilterContext *ctx)
    {
        /* planar YUV only: the draw_slice() above assumes three separate planes */
        static const enum PixelFormat pix_fmts[] = {
            PIX_FMT_YUV444P,  PIX_FMT_YUV422P,  PIX_FMT_YUV420P,
            PIX_FMT_YUV411P,  PIX_FMT_YUV410P,  PIX_FMT_YUV440P,
            PIX_FMT_YUVJ444P, PIX_FMT_YUVJ422P, PIX_FMT_YUVJ420P,
            PIX_FMT_YUVJ440P, PIX_FMT_NONE
        };
        avfilter_set_common_pixel_formats(ctx, avfilter_make_format_list(pix_fmts));
        return 0;
    }

    /* the filter's structure */
    AVFilter avfilter_vf_tnegate =
    {
        .name          = "tnegate",             ///< filter name
        .priv_size     = sizeof(NegContext),    ///< size of private data to allocate
        .query_formats = tnegate_query_formats, ///< negotiate the in/output formats

        /* the inputs of the filter */
        .inputs  = (AVFilterPad[]){{ .name         = "default",
                                     .type         = AVMEDIA_TYPE_VIDEO,
                                     .draw_slice   = tnegate_draw_slice,
                                     .config_props = tnegate_config_props,
                                     .min_perms    = AV_PERM_READ, },
                                   { .name = NULL }},
        /* the outputs of the filter */
        .outputs = (AVFilterPad[]){{ .name = "default",
                                     .type = AVMEDIA_TYPE_VIDEO, },
                                   { .name = NULL }},
    };
