Simple Analysis of FFmpeg Source Code: libswscale's sws_getContext()




I plan to write two articles covering libswscale, FFmpeg's library for image processing (scaling and YUV/RGB pixel format conversion). libswscale is a library mainly used for processing picture pixel data: it can convert between pixel formats and stretch or shrink images. For how to use libswscale, see:

"The Simplest FFmpeg libswscale Example (YUV to RGB)"

Only a handful of libswscale functions are commonly used; in typical cases just three:

sws_getContext(): initialize an SwsContext.

sws_scale(): process the image data.

sws_freeContext(): free an SwsContext.

sws_getContext() can also be replaced by sws_getCachedContext().

Although on the surface only a few libswscale functions are commonly used, a whole "world" lies inside the library. As an almost "universal" library for processing picture pixel data, it contains a large amount of code, so I plan to analyze its source in two articles. This article covers the initialization function sws_getContext(); the next one will cover the data-processing function sws_scale().


Function call graph

The call relationships between libswscale's functions, as derived from this analysis, are shown in the figure below.



Libswscale data-processing flow

The flow in which libswscale processes pixel data can be summarized by the figure below.


As the figure shows, libswscale has two main processing paths: unscaled and scaled. The unscaled path handles pixel data that does not need resizing (a relatively special case), while the scaled path handles pixel data that does. Unscaled only converts the pixel format; scaled converts the pixel format and also resizes the image. The scaled path can be divided into the following steps:

  • XXX to YUV converter: first, convert the input pixel data to 8-bit YUV;
  • Horizontal scaler: scale the image horizontally and convert it to 15-bit YUV;
  • Vertical scaler: scale the image vertically;
  • Output converter: convert to the output pixel format.

SwsContext

SwsContext is the structure that runs through the entire use of libswscale. However, when developing against FFmpeg's libraries we cannot see its internals; libswscale\swscale.h exposes only a single line:
struct SwsContext;
Seeing a one-line definition like this, one might guess that the structure is very simple internally. A look at FFmpeg's source shows that this guess is completely wrong: the definition of SwsContext is quite complex. It is located in libswscale\swscale_internal.h, as shown below.
/* This struct should be aligned on at least a 32-byte boundary. */
typedef struct SwsContext {
    /**
     * info on struct for av_log
     */
    const AVClass *av_class;

    /**
     * Note that src, dst, srcStride, dstStride will be copied in the
     * sws_scale() wrapper so they can be freely modified here.
     */
    SwsFunc swscale;
    int srcW;                     ///< Width  of source      luma/alpha planes.
    int srcH;                     ///< Height of source      luma/alpha planes.
    int dstH;                     ///< Height of destination luma/alpha planes.
    int chrSrcW;                  ///< Width  of source      chroma     planes.
    int chrSrcH;                  ///< Height of source      chroma     planes.
    int chrDstW;                  ///< Width  of destination chroma     planes.
    int chrDstH;                  ///< Height of destination chroma     planes.
    int lumXInc, chrXInc;
    int lumYInc, chrYInc;
    enum AVPixelFormat dstFormat; ///< Destination pixel format.
    enum AVPixelFormat srcFormat; ///< Source      pixel format.
    int dstFormatBpp;             ///< Number of bits per pixel of the destination pixel format.
    int srcFormatBpp;             ///< Number of bits per pixel of the source      pixel format.
    int dstBpc, srcBpc;
    int chrSrcHSubSample;         ///< Binary logarithm of horizontal subsampling factor between luma/alpha and chroma planes in source      image.
    int chrSrcVSubSample;         ///< Binary logarithm of vertical   subsampling factor between luma/alpha and chroma planes in source      image.
    int chrDstHSubSample;         ///< Binary logarithm of horizontal subsampling factor between luma/alpha and chroma planes in destination image.
    int chrDstVSubSample;         ///< Binary logarithm of vertical   subsampling factor between luma/alpha and chroma planes in destination image.
    int vChrDrop;                 ///< Binary logarithm of extra vertical subsampling factor in source image chroma planes specified by user.
    int sliceDir;                 ///< Direction that slices are fed to the scaler (1 = top-to-bottom, -1 = bottom-to-top).
    double param[2];              ///< Input parameters for scaling algorithms that need them.

    /* The cascaded_* fields allow spliting a scaler task into multiple
     * sequential steps, this is for example used to limit the maximum
     * downscaling factor that needs to be supported in one scaler.
     */
    struct SwsContext *cascaded_context[2];
    int cascaded_tmpStride[4];
    uint8_t *cascaded_tmp[4];

    uint32_t pal_yuv[256];
    uint32_t pal_rgb[256];

    /**
     * @name Scaled horizontal lines ring buffer.
     * The horizontal scaler keeps just enough scaled lines in a ring buffer
     * so they may be passed to the vertical scaler. The pointers to the
     * allocated buffers for each line are duplicated in sequence in the ring
     * buffer to simplify indexing and avoid wrapping around between lines
     * inside the vertical scaler code. The wrapping is done before the
     * vertical scaler is called.
     */
    //@{
    int16_t **lumPixBuf;          ///< Ring buffer for scaled horizontal luma   plane lines to be fed to the vertical scaler.
    int16_t **chrUPixBuf;         ///< Ring buffer for scaled horizontal chroma plane lines to be fed to the vertical scaler.
    int16_t **chrVPixBuf;         ///< Ring buffer for scaled horizontal chroma plane lines to be fed to the vertical scaler.
    int16_t **alpPixBuf;          ///< Ring buffer for scaled horizontal alpha  plane lines to be fed to the vertical scaler.
    int vLumBufSize;              ///< Number of vertical luma/alpha lines allocated in the ring buffer.
    int vChrBufSize;              ///< Number of vertical chroma     lines allocated in the ring buffer.
    int lastInLumBuf;             ///< Last scaled horizontal luma/alpha line from source in the ring buffer.
    int lastInChrBuf;             ///< Last scaled horizontal chroma     line from source in the ring buffer.
    int lumBufIndex;              ///< Index in ring buffer of the last scaled horizontal luma/alpha line from source.
    int chrBufIndex;              ///< Index in ring buffer of the last scaled horizontal chroma     line from source.
    //@}

    uint8_t *formatConvBuffer;

    /**
     * @name Horizontal and vertical filters.
     * To better understand the following fields, here is a pseudo-code of
     * their usage in filtering a horizontal line:
     * @code
     * for (i = 0; i < width; i++) {
     *     dst[i] = 0;
     *     for (j = 0; j < filterSize; j++)
     *         dst[i] += src[ filterPos[i] + j ] * filter[ filterSize * i + j ];
     *     dst[i] >>= FRAC_BITS; // The actual implementation is fixed-point.
     * }
     * @endcode
     */
    //@{
    int16_t *hLumFilter;          ///< Array of horizontal filter coefficients for luma/alpha planes.
    int16_t *hChrFilter;          ///< Array of horizontal filter coefficients for chroma     planes.
    int16_t *vLumFilter;          ///< Array of vertical   filter coefficients for luma/alpha planes.
    int16_t *vChrFilter;          ///< Array of vertical   filter coefficients for chroma     planes.
    int32_t *hLumFilterPos;       ///< Array of horizontal filter starting positions for each dst[i] for luma/alpha planes.
    int32_t *hChrFilterPos;       ///< Array of horizontal filter starting positions for each dst[i] for chroma     planes.
    int32_t *vLumFilterPos;       ///< Array of vertical   filter starting positions for each dst[i] for luma/alpha planes.
    int32_t *vChrFilterPos;       ///< Array of vertical   filter starting positions for each dst[i] for chroma     planes.
    int hLumFilterSize;           ///< Horizontal filter size for luma/alpha pixels.
    int hChrFilterSize;           ///< Horizontal filter size for chroma     pixels.
    int vLumFilterSize;           ///< Vertical   filter size for luma/alpha pixels.
    int vChrFilterSize;           ///< Vertical   filter size for chroma     pixels.
    //@}

    int lumMmxextFilterCodeSize;  ///< Runtime-generated MMXEXT horizontal fast bilinear scaler code size for luma/alpha planes.
    int chrMmxextFilterCodeSize;  ///< Runtime-generated MMXEXT horizontal fast bilinear scaler code size for chroma planes.
    uint8_t *lumMmxextFilterCode; ///< Runtime-generated MMXEXT horizontal fast bilinear scaler code for luma/alpha planes.
    uint8_t *chrMmxextFilterCode; ///< Runtime-generated MMXEXT horizontal fast bilinear scaler code for chroma planes.

    int canMMXEXTBeUsed;

    int dstY;                     ///< Last destination vertical line output from last slice.
    int flags;                    ///< Flags passed by the user to select scaler algorithm, optimizations, subsampling, etc...
    void *yuvTable;               // pointer to the yuv->rgb table start so it can be freed()
    // alignment ensures the offset can be added in a single
    // instruction on e.g. ARM
    DECLARE_ALIGNED(16, int, table_gV)[256 + 2*YUVRGB_TABLE_HEADROOM];
    uint8_t *table_rV[256 + 2*YUVRGB_TABLE_HEADROOM];
    uint8_t *table_gU[256 + 2*YUVRGB_TABLE_HEADROOM];
    uint8_t *table_bU[256 + 2*YUVRGB_TABLE_HEADROOM];
    DECLARE_ALIGNED(16, int32_t, input_rgb2yuv_table)[16+40*4]; // This table can contain both C and SIMD formatted values, the C values are always at the XY_IDX points
#define RY_IDX 0
#define GY_IDX 1
#define BY_IDX 2
#define RU_IDX 3
#define GU_IDX 4
#define BU_IDX 5
#define RV_IDX 6
#define GV_IDX 7
#define BV_IDX 8
#define RGB2YUV_SHIFT 15

    int *dither_error[4];

    //Colorspace stuff
    int contrast, brightness, saturation;    // for sws_getColorspaceDetails
    int srcColorspaceTable[4];
    int dstColorspaceTable[4];
    int srcRange;                 ///< 0 = MPG YUV range, 1 = JPG YUV range (source      image).
    int dstRange;                 ///< 0 = MPG YUV range, 1 = JPG YUV range (destination image).
    int src0Alpha;
    int dst0Alpha;
    int srcXYZ;
    int dstXYZ;
    int src_h_chr_pos;
    int dst_h_chr_pos;
    int src_v_chr_pos;
    int dst_v_chr_pos;
    int yuv2rgb_y_offset;
    int yuv2rgb_y_coeff;
    int yuv2rgb_v2r_coeff;
    int yuv2rgb_v2g_coeff;
    int yuv2rgb_u2g_coeff;
    int yuv2rgb_u2b_coeff;

#define RED_DITHER            "0*8"
#define GREEN_DITHER          "1*8"
#define BLUE_DITHER           "2*8"
#define Y_COEFF               "3*8"
#define VR_COEFF              "4*8"
#define UB_COEFF              "5*8"
#define VG_COEFF              "6*8"
#define UG_COEFF              "7*8"
#define Y_OFFSET              "8*8"
#define U_OFFSET              "9*8"
#define V_OFFSET              "10*8"
#define LUM_MMX_FILTER_OFFSET "11*8"
#define CHR_MMX_FILTER_OFFSET "11*8+4*4*"AV_STRINGIFY(MAX_FILTER_SIZE)
#define DSTW_OFFSET           "11*8+4*4*"AV_STRINGIFY(MAX_FILTER_SIZE)"*2"
#define ESP_OFFSET            "11*8+4*4*"AV_STRINGIFY(MAX_FILTER_SIZE)"*2+8"
#define VROUNDER_OFFSET       "11*8+4*4*"AV_STRINGIFY(MAX_FILTER_SIZE)"*2+16"
#define U_TEMP                "11*8+4*4*"AV_STRINGIFY(MAX_FILTER_SIZE)"*2+24"
#define V_TEMP                "11*8+4*4*"AV_STRINGIFY(MAX_FILTER_SIZE)"*2+32"
#define Y_TEMP                "11*8+4*4*"AV_STRINGIFY(MAX_FILTER_SIZE)"*2+40"
#define ALP_MMX_FILTER_OFFSET "11*8+4*4*"AV_STRINGIFY(MAX_FILTER_SIZE)"*3+48"
#define UV_OFF_PX             "11*8+4*4*"AV_STRINGIFY(MAX_FILTER_SIZE)"*3+48"
#define UV_OFF_BYTE           "11*8+4*4*"AV_STRINGIFY(MAX_FILTER_SIZE)"*3+56"
#define DITHER16              "11*8+4*4*"AV_STRINGIFY(MAX_FILTER_SIZE)"*3+64"
#define DITHER32              "11*8+4*4*"AV_STRINGIFY(MAX_FILTER_SIZE)"*3+80"
#define DITHER32_INT          (11*8+4*4*MAX_FILTER_SIZE*3+80) // value equal to above, used for checking that the struct hasn't been changed by mistake

    DECLARE_ALIGNED(8, uint64_t, redDither);
    DECLARE_ALIGNED(8, uint64_t, greenDither);
    DECLARE_ALIGNED(8, uint64_t, blueDither);

    DECLARE_ALIGNED(8, uint64_t, yCoeff);
    DECLARE_ALIGNED(8, uint64_t, vrCoeff);
    DECLARE_ALIGNED(8, uint64_t, ubCoeff);
    DECLARE_ALIGNED(8, uint64_t, vgCoeff);
    DECLARE_ALIGNED(8, uint64_t, ugCoeff);
    DECLARE_ALIGNED(8, uint64_t, yOffset);
    DECLARE_ALIGNED(8, uint64_t, uOffset);
    DECLARE_ALIGNED(8, uint64_t, vOffset);
    int32_t lumMmxFilter[4 * MAX_FILTER_SIZE];
    int32_t chrMmxFilter[4 * MAX_FILTER_SIZE];
    int dstW;                     ///< Width  of destination luma/alpha planes.
    DECLARE_ALIGNED(8, uint64_t, esp);
    DECLARE_ALIGNED(8, uint64_t, vRounder);
    DECLARE_ALIGNED(8, uint64_t, u_temp);
    DECLARE_ALIGNED(8, uint64_t, v_temp);
    DECLARE_ALIGNED(8, uint64_t, y_temp);
    int32_t alpMmxFilter[4 * MAX_FILTER_SIZE];
    // alignment of these values is not necessary, but merely here
    // to maintain the same offset across x86-32 and x86-64. Once we
    // use proper offset macros in the asm, they can be removed.
    DECLARE_ALIGNED(8, ptrdiff_t, uv_off);   ///< offset (in pixels) between u and v planes
    DECLARE_ALIGNED(8, ptrdiff_t, uv_offx2); ///< offset (in bytes) between u and v planes
    DECLARE_ALIGNED(8, uint16_t, dither16)[8];
    DECLARE_ALIGNED(8, uint32_t, dither32)[8];

    const uint8_t *chrDither8, *lumDither8;

#if HAVE_ALTIVEC
    vector signed short   CY;
    vector signed short   CRV;
    vector signed short   CBU;
    vector signed short   CGU;
    vector signed short   CGV;
    vector signed short   OY;
    vector unsigned short CSHIFT;
    vector signed short  *vYCoeffsBank, *vCCoeffsBank;
#endif

    int use_mmx_vfilter;

/* pre defined color-spaces gamma */
#define XYZ_GAMMA (2.6f)
#define RGB_GAMMA (2.2f)
    int16_t *xyzgamma;
    int16_t *rgbgamma;
    int16_t *xyzgammainv;
    int16_t *rgbgammainv;
    int16_t xyz2rgb_matrix[3][4];
    int16_t rgb2xyz_matrix[3][4];

    /* function pointers for swscale() */
    yuv2planar1_fn yuv2plane1;
    yuv2planarX_fn yuv2planeX;
    yuv2interleavedX_fn yuv2nv12cX;
    yuv2packed1_fn yuv2packed1;
    yuv2packed2_fn yuv2packed2;
    yuv2packedX_fn yuv2packedX;
    yuv2anyX_fn yuv2anyX;

    /// Unscaled conversion of luma plane to YV12 for horizontal scaler.
    void (*lumToYV12)(uint8_t *dst, const uint8_t *src, const uint8_t *src2, const uint8_t *src3,
                      int width, uint32_t *pal);
    /// Unscaled conversion of alpha plane to YV12 for horizontal scaler.
    void (*alpToYV12)(uint8_t *dst, const uint8_t *src, const uint8_t *src2, const uint8_t *src3,
                      int width, uint32_t *pal);
    /// Unscaled conversion of chroma planes to YV12 for horizontal scaler.
    void (*chrToYV12)(uint8_t *dstU, uint8_t *dstV,
                      const uint8_t *src1, const uint8_t *src2, const uint8_t *src3,
                      int width, uint32_t *pal);

    /**
     * Functions to read planar input, such as planar RGB, and convert
     * internally to Y/UV/A.
     */
    /** @{ */
    void (*readLumPlanar)(uint8_t *dst, const uint8_t *src[4], int width, int32_t *rgb2yuv);
    void (*readChrPlanar)(uint8_t *dstU, uint8_t *dstV, const uint8_t *src[4],
                          int width, int32_t *rgb2yuv);
    void (*readAlpPlanar)(uint8_t *dst, const uint8_t *src[4], int width, int32_t *rgb2yuv);
    /** @} */

    /**
     * Scale one horizontal line of input data using a bilinear filter
     * to produce one line of output data. Compared to SwsContext->hScale(),
     * please take note of the following caveats when using these:
     * - Scaling is done using only 7bit instead of 14bit coefficients.
     * - You can use no more than 5 input pixels to produce 4 output
     *   pixels. Therefore, this filter should not be used for downscaling
     *   by more than ~20% in width (because that equals more than 5/4th
     *   downscaling and thus more than 5 pixels input per 4 pixels output).
     * - In general, bilinear filters create artifacts during downscaling
     *   (even when <20%), because one output pixel will span more than one
     *   input pixel, and thus some pixels will need edges of both neighbor
     *   pixels to interpolate the output pixel. Since you can use at most
     *   two input pixels per output pixel in bilinear scaling, this is
     *   impossible and thus downscaling by any size will create artifacts.
     * To enable this type of scaling, set SWS_FLAG_FAST_BILINEAR
     * in SwsContext->flags.
     */
    /** @{ */
    void (*hyscale_fast)(struct SwsContext *c,
                         int16_t *dst, int dstWidth,
                         const uint8_t *src, int srcW, int xInc);
    void (*hcscale_fast)(struct SwsContext *c,
                         int16_t *dst1, int16_t *dst2, int dstWidth,
                         const uint8_t *src1, const uint8_t *src2,
                         int srcW, int xInc);
    /** @} */

    /**
     * Scale one horizontal line of input data using a filter over the input
     * lines, to produce one (differently sized) line of output data.
     *
     * @param dst        pointer to destination buffer for horizontally scaled
     *                   data. If the number of bits per component of one
     *                   destination pixel (SwsContext->dstBpc) is <= 10, data
     *                   will be 15bpc in 16bits (int16_t) width. Else (i.e.
     *                   SwsContext->dstBpc == 16), data will be 19bpc in
     *                   32bits (int32_t) width.
     * @param dstW       width of destination image
     * @param src        pointer to source data to be scaled. If the number of
     *                   bits per component of a source pixel (SwsContext->srcBpc)
     *                   is 8, this is 8bpc in 8bits (uint8_t) width. Else
     *                   (i.e. SwsContext->dstBpc > 8), this is native depth
     *                   in 16bits (uint16_t) width. In other words, for 9-bit
     *                   YUV input, this is 9bpc, for 10-bit YUV input, this is
     *                   10bpc, and for 16-bit RGB or YUV, this is 16bpc.
     * @param filter     filter coefficients to be used per output pixel for
     *                   scaling. This contains 14bpp filtering coefficients.
     *                   Guaranteed to contain dstW * filterSize entries.
     * @param filterPos  position of the first input pixel to be used for
     *                   each output pixel during scaling. Guaranteed to
     *                   contain dstW entries.
     * @param filterSize the number of input coefficients to be used (and
     *                   thus the number of input pixels to be used) for
     *                   creating a single output pixel. Is aligned to 4
     *                   (and input coefficients thus padded with zeroes)
     *                   to simplify creating SIMD code.
     */
    /** @{ */
    void (*hyScale)(struct SwsContext *c, int16_t *dst, int dstW,
                    const uint8_t *src, const int16_t *filter,
                    const int32_t *filterPos, int filterSize);
    void (*hcScale)(struct SwsContext *c, int16_t *dst, int dstW,
                    const uint8_t *src, const int16_t *filter,
                    const int32_t *filterPos, int filterSize);
    /** @} */

    /// Color range conversion function for luma plane if needed.
    void (*lumConvertRange)(int16_t *dst, int width);
    /// Color range conversion function for chroma planes if needed.
    void (*chrConvertRange)(int16_t *dst1, int16_t *dst2, int width);

    int needs_hcscale; ///< Set if there are chroma planes to be converted.

    SwsDither dither;
} SwsContext;

The definition of this structure is indeed complex; it contains every variable libswscale needs. Analyzing them one by one is impractical, so a few of them are briefly discussed later in this article.

sws_getContext()

sws_getContext() is the function that initializes an SwsContext. Its declaration is in libswscale\swscale.h, as shown below.
/**
 * Allocate and return an SwsContext. You need it to perform
 * scaling/conversion operations using sws_scale().
 *
 * @param srcW the width of the source image
 * @param srcH the height of the source image
 * @param srcFormat the source image format
 * @param dstW the width of the destination image
 * @param dstH the height of the destination image
 * @param dstFormat the destination image format
 * @param flags specify which algorithm and options to use for rescaling
 * @return a pointer to an allocated context, or NULL in case of error
 * @note this function is to be removed after a saner alternative is
 *       written
 */
struct SwsContext *sws_getContext(int srcW, int srcH, enum AVPixelFormat srcFormat,
                                  int dstW, int dstH, enum AVPixelFormat dstFormat,
                                  int flags, SwsFilter *srcFilter,
                                  SwsFilter *dstFilter, const double *param);

The function takes the following parameters:
srcW: width of the source image
srcH: height of the source image
srcFormat: pixel format of the source image
dstW: width of the destination image
dstH: height of the destination image
dstFormat: pixel format of the destination image
flags: selects the algorithm used for scaling
srcFilter, dstFilter: optional filters applied to the source/destination image (may be NULL)
param: optional extra parameters for scaling algorithms that need them (may be NULL)
On success it returns the newly created SwsContext; otherwise it returns NULL.
sws_getContext() is defined in libswscale\utils.c, as shown below.
SwsContext *sws_getContext(int srcW, int srcH, enum AVPixelFormat srcFormat,
                           int dstW, int dstH, enum AVPixelFormat dstFormat,
                           int flags, SwsFilter *srcFilter,
                           SwsFilter *dstFilter, const double *param)
{
    SwsContext *c;

    if (!(c = sws_alloc_context()))
        return NULL;

    c->flags     = flags;
    c->srcW      = srcW;
    c->srcH      = srcH;
    c->dstW      = dstW;
    c->dstH      = dstH;
    c->srcFormat = srcFormat;
    c->dstFormat = dstFormat;

    if (param) {
        c->param[0] = param[0];
        c->param[1] = param[1];
    }

    if (sws_init_context(c, srcFilter, dstFilter) < 0) {
        sws_freeContext(c);
        return NULL;
    }

    return c;
}

As the definition shows, sws_getContext() first calls sws_alloc_context() to allocate memory for an SwsContext; it then copies the source and destination widths, heights, and pixel formats, plus the flags, into the corresponding fields of that SwsContext; finally it calls sws_init_context() to finish the initialization. The next two sections look at sws_alloc_context() and sws_init_context().


sws_alloc_context()

sws_alloc_context() is an FFmpeg API that allocates memory for an SwsContext. Its declaration is shown below.
/**
 * Allocate an empty SwsContext. This must be filled and passed to
 * sws_init_context(). For filling see AVOptions, options.c and
 * sws_setColorspaceDetails().
 */
struct SwsContext *sws_alloc_context(void);

sws_alloc_context() is defined in libswscale\utils.c, as shown below.
SwsContext *sws_alloc_context(void)
{
    SwsContext *c = av_mallocz(sizeof(SwsContext));

    av_assert0(offsetof(SwsContext, redDither) + DITHER32_INT == offsetof(SwsContext, dither32));

    if (c) {
        c->av_class = &sws_context_class;
        av_opt_set_defaults(c);
    }

    return c;
}

As the code shows, sws_alloc_context() first calls av_mallocz() to allocate a zeroed block of memory for the SwsContext structure, then sets the structure's AVClass and fills its fields with default values via av_opt_set_defaults().

sws_init_context()

sws_init_context() is an FFmpeg API used to initialize an SwsContext.
/**
 * Initialize the swscaler context sws_context.
 *
 * @return zero or positive value on success, a negative value on
 * error
 */
int sws_init_context(struct SwsContext *sws_context, SwsFilter *srcFilter, SwsFilter *dstFilter);

The definition of sws_init_context() is very long; it is located in libswscale\utils.c, as shown below.
av_cold int sws_init_context(SwsContext *c, SwsFilter *srcFilter,
                             SwsFilter *dstFilter)
{
    int i, j;
    int usesVFilter, usesHFilter;
    int unscaled;
    SwsFilter dummyFilter = { NULL, NULL, NULL, NULL };
    int srcW              = c->srcW;
    int srcH              = c->srcH;
    int dstW              = c->dstW;
    int dstH              = c->dstH;
    int dst_stride        = FFALIGN(dstW * sizeof(int16_t) + 66, 16);
    int flags, cpu_flags;
    enum AVPixelFormat srcFormat = c->srcFormat;
    enum AVPixelFormat dstFormat = c->dstFormat;
    const AVPixFmtDescriptor *desc_src;
    const AVPixFmtDescriptor *desc_dst;
    int ret = 0;

    // get the CPU capability flags
    cpu_flags = av_get_cpu_flags();
    flags     = c->flags;
    emms_c();
    if (!rgb15to16)
        sws_rgb2rgb_init();

    // if the input and output dimensions match, take the special unscaled path
    unscaled = (srcW == dstW && srcH == dstH);

    // JPEG-range input (Y ranging over 0-255) requires these two fields to be set
    c->srcRange |= handle_jpeg(&c->srcFormat);
    c->dstRange |= handle_jpeg(&c->dstFormat);

    if(srcFormat!=c->srcFormat || dstFormat!=c->dstFormat)
        av_log(c, AV_LOG_WARNING, "deprecated pixel format used, make sure you did set range correctly\n");

    // set up the colorspace
    if (!c->contrast && !c->saturation && !c->dstFormatBpp)
        sws_setColorspaceDetails(c, ff_yuv2rgb_coeffs[SWS_CS_DEFAULT], c->srcRange,
                                 ff_yuv2rgb_coeffs[SWS_CS_DEFAULT],
                                 c->dstRange, 0, 1 << 16, 1 << 16);

    handle_formats(c);
    srcFormat = c->srcFormat;
    dstFormat = c->dstFormat;
    desc_src = av_pix_fmt_desc_get(srcFormat);
    desc_dst = av_pix_fmt_desc_get(dstFormat);

    // endianness conversion?
    if (!(unscaled && sws_isSupportedEndiannessConversion(srcFormat) &&
          av_pix_fmt_swap_endianness(srcFormat) == dstFormat)) {
    // check whether the input format is supported
    if (!sws_isSupportedInput(srcFormat)) {
        av_log(c, AV_LOG_ERROR, "%s is not supported as input pixel format\n",
               av_get_pix_fmt_name(srcFormat));
        return AVERROR(EINVAL);
    }
    // check whether the output format is supported
    if (!sws_isSupportedOutput(dstFormat)) {
        av_log(c, AV_LOG_ERROR, "%s is not supported as output pixel format\n",
               av_get_pix_fmt_name(dstFormat));
        return AVERROR(EINVAL);
    }
    }

    // check the scaling algorithm flags
    i = flags & (SWS_POINT         |
                 SWS_AREA          |
                 SWS_BILINEAR      |
                 SWS_FAST_BILINEAR |
                 SWS_BICUBIC       |
                 SWS_X             |
                 SWS_GAUSS         |
                 SWS_LANCZOS       |
                 SWS_SINC          |
                 SWS_SPLINE        |
                 SWS_BICUBLIN);

    /* provide a default scaler if not set by caller */
    // fall back to the default (bicubic) if none was specified
    if (!i) {
        if (dstW < srcW && dstH < srcH)
            flags |= SWS_BICUBIC;
        else if (dstW > srcW && dstH > srcH)
            flags |= SWS_BICUBIC;
        else
            flags |= SWS_BICUBIC;
        c->flags = flags;
    } else if (i & (i - 1)) {
        av_log(c, AV_LOG_ERROR,
               "Exactly one scaler algorithm must be chosen, got %X\n", i);
        return AVERROR(EINVAL);
    }

    /* sanity check */
    // validate the width/height parameters
    if (srcW < 1 || srcH < 1 || dstW < 1 || dstH < 1) {
        /* FIXME check if these are enough and try to lower them after
         * fixing the relevant parts of the code */
        av_log(c, AV_LOG_ERROR, "%dx%d -> %dx%d is invalid scaling dimension\n",
               srcW, srcH, dstW, dstH);
        return AVERROR(EINVAL);
    }

    if (!dstFilter)
        dstFilter = &dummyFilter;
    if (!srcFilter)
        srcFilter = &dummyFilter;

    c->lumXInc      = (((int64_t)srcW << 16) + (dstW >> 1)) / dstW;
    c->lumYInc      = (((int64_t)srcH << 16) + (dstH >> 1)) / dstH;
    c->dstFormatBpp = av_get_bits_per_pixel(desc_dst);
    c->srcFormatBpp = av_get_bits_per_pixel(desc_src);
    c->vRounder     = 4 * 0x0001000100010001ULL;

    usesVFilter = (srcFilter->lumV && srcFilter->lumV->length > 1) ||
                  (srcFilter->chrV && srcFilter->chrV->length > 1) ||
                  (dstFilter->lumV && dstFilter->lumV->length > 1) ||
                  (dstFilter->chrV && dstFilter->chrV->length > 1);
    usesHFilter = (srcFilter->lumH && srcFilter->lumH->length > 1) ||
                  (srcFilter->chrH && srcFilter->chrH->length > 1) ||
                  (dstFilter->lumH && dstFilter->lumH->length > 1) ||
                  (dstFilter->chrH && dstFilter->chrH->length > 1);

    av_pix_fmt_get_chroma_sub_sample(srcFormat, &c->chrSrcHSubSample, &c->chrSrcVSubSample);
    av_pix_fmt_get_chroma_sub_sample(dstFormat, &c->chrDstHSubSample, &c->chrDstVSubSample);

    if (isAnyRGB(dstFormat) && !(flags&SWS_FULL_CHR_H_INT)) {
        if (dstW&1) {
            av_log(c, AV_LOG_DEBUG, "Forcing full internal H chroma due to odd output size\n");
            flags |= SWS_FULL_CHR_H_INT;
            c->flags = flags;
        }
        if (   c->chrSrcHSubSample == 0
            && c->chrSrcVSubSample == 0
            && c->dither != SWS_DITHER_BAYER //SWS_FULL_CHR_H_INT is currently not supported with SWS_DITHER_BAYER
            && !(c->flags & SWS_FAST_BILINEAR)
        ) {
            av_log(c, AV_LOG_DEBUG, "Forcing full internal H chroma due to input having non subsampled chroma\n");
            flags |= SWS_FULL_CHR_H_INT;
            c->flags = flags;
        }
    }

    if (c->dither == SWS_DITHER_AUTO) {
        if (flags & SWS_ERROR_DIFFUSION)
            c->dither = SWS_DITHER_ED;
    }

    if(dstFormat == AV_PIX_FMT_BGR4_BYTE ||
       dstFormat == AV_PIX_FMT_RGB4_BYTE ||
       dstFormat == AV_PIX_FMT_BGR8 ||
       dstFormat == AV_PIX_FMT_RGB8) {
        if (c->dither == SWS_DITHER_AUTO)
            c->dither = (flags & SWS_FULL_CHR_H_INT) ? SWS_DITHER_ED : SWS_DITHER_BAYER;
        if (!(flags & SWS_FULL_CHR_H_INT)) {
            if (c->dither == SWS_DITHER_ED || c->dither == SWS_DITHER_A_DITHER || c->dither == SWS_DITHER_X_DITHER) {
                av_log(c, AV_LOG_DEBUG,
                    "Desired dithering only supported in full chroma interpolation for destination format '%s'\n",
                    av_get_pix_fmt_name(dstFormat));
                flags   |= SWS_FULL_CHR_H_INT;
                c->flags = flags;
            }
        }
        if (flags & SWS_FULL_CHR_H_INT) {
            if (c->dither == SWS_DITHER_BAYER) {
                av_log(c, AV_LOG_DEBUG,
                    "Ordered dither is not supported in full chroma interpolation for destination format '%s'\n",
                    av_get_pix_fmt_name(dstFormat));
                c->dither = SWS_DITHER_ED;
            }
        }
    }

    if (isPlanarRGB(dstFormat)) {
        if (!(flags & SWS_FULL_CHR_H_INT)) {
            av_log(c, AV_LOG_DEBUG,
                   "%s output is not supported with half chroma resolution, switching to full\n",
                   av_get_pix_fmt_name(dstFormat));
            flags   |= SWS_FULL_CHR_H_INT;
            c->flags = flags;
        }
    }

    /* reuse chroma for 2 pixels RGB/BGR unless user wants full
     * chroma interpolation */
    if (flags & SWS_FULL_CHR_H_INT &&
        isAnyRGB(dstFormat)        &&
        !isPlanarRGB(dstFormat)    &&
        dstFormat != AV_PIX_FMT_RGBA  &&
        dstFormat != AV_PIX_FMT_ARGB  &&
        dstFormat != AV_PIX_FMT_BGRA  &&
        dstFormat != AV_PIX_FMT_ABGR  &&
        dstFormat != AV_PIX_FMT_RGB24 &&
        dstFormat != AV_PIX_FMT_BGR24 &&
        dstFormat != AV_PIX_FMT_BGR4_BYTE &&
        dstFormat != AV_PIX_FMT_RGB4_BYTE &&
        dstFormat != AV_PIX_FMT_BGR8 &&
        dstFormat != AV_PIX_FMT_RGB8
    ) {
        av_log(c, AV_LOG_WARNING,
               "full chroma interpolation for destination format '%s' not yet implemented\n",
               av_get_pix_fmt_name(dstFormat));
        flags   &= ~SWS_FULL_CHR_H_INT;
        c->flags = flags;
    }

    if (isAnyRGB(dstFormat) && !(flags & SWS_FULL_CHR_H_INT))
        c->chrDstHSubSample = 1;

    // drop some chroma lines if the user wants it
    c->vChrDrop          = (flags & SWS_SRC_V_CHR_DROP_MASK) >>
                           SWS_SRC_V_CHR_DROP_SHIFT;
    c->chrSrcVSubSample += c->vChrDrop;

    /* drop every other pixel for chroma calculation unless user
     * wants full chroma */
    if (isAnyRGB(srcFormat) && !(flags & SWS_FULL_CHR_H_INP)   &&
        srcFormat != AV_PIX_FMT_RGB8 && srcFormat != AV_PIX_FMT_BGR8 &&
        srcFormat != AV_PIX_FMT_RGB4 && srcFormat != AV_PIX_FMT_BGR4 &&
        srcFormat != AV_PIX_FMT_RGB4_BYTE && srcFormat != AV_PIX_FMT_BGR4_BYTE &&
        srcFormat != AV_PIX_FMT_GBRP9BE   && srcFormat != AV_PIX_FMT_GBRP9LE  &&
        srcFormat != AV_PIX_FMT_GBRP10BE  && srcFormat != AV_PIX_FMT_GBRP10LE &&
        srcFormat != AV_PIX_FMT_GBRP12BE  && srcFormat != AV_PIX_FMT_GBRP12LE &&
        srcFormat != AV_PIX_FMT_GBRP14BE  && srcFormat != AV_PIX_FMT_GBRP14LE &&
        srcFormat != AV_PIX_FMT_GBRP16BE  && srcFormat != AV_PIX_FMT_GBRP16LE &&
        ((dstW >> c->chrDstHSubSample) <= (srcW >> 1) ||
         (flags & SWS_FAST_BILINEAR)))
        c->chrSrcHSubSample = 1;

    // Note the FF_CEIL_RSHIFT is so that we always round toward +inf.
    c->chrSrcW = FF_CEIL_RSHIFT(srcW, c->chrSrcHSubSample);
    c->chrSrcH = FF_CEIL_RSHIFT(srcH, c->chrSrcVSubSample);
    c->chrDstW = FF_CEIL_RSHIFT(dstW, c->chrDstHSubSample);
    c->chrDstH = FF_CEIL_RSHIFT(dstH, c->chrDstVSubSample);

    FF_ALLOC_OR_GOTO(c, c->formatConvBuffer, FFALIGN(srcW*2+78, 16) * 2, fail);

    c->srcBpc = 1 + desc_src->comp[0].depth_minus1;
    if (c->srcBpc < 8)
        c->srcBpc = 8;
    c->dstBpc = 1 + desc_dst->comp[0].depth_minus1;
    if (c->dstBpc < 8)
        c->dstBpc = 8;
    if (isAnyRGB(srcFormat) || srcFormat == AV_PIX_FMT_PAL8)
        c->srcBpc = 16;
    if (c->dstBpc == 16)
        dst_stride <<= 1;

    if (INLINE_MMXEXT(cpu_flags) && c->srcBpc == 8 && c->dstBpc <= 14) {
        c->canMMXEXTBeUsed = dstW >= srcW && (dstW & 31) == 0 &&
                             c->chrDstW >= c->chrSrcW &&
                             (srcW & 15) == 0;
        if (!c->canMMXEXTBeUsed && dstW >= srcW && c->chrDstW >= c->chrSrcW && (srcW & 15) == 0
            && (flags & SWS_FAST_BILINEAR)) {
            if (flags & SWS_PRINT_INFO)
                av_log(c, AV_LOG_INFO,
                       "output width is not a multiple of 32 -> no MMXEXT scaler\n");
        }
        if (usesHFilter || isNBPS(c->srcFormat) || is16BPS(c->srcFormat) || isAnyRGB(c->srcFormat))
            c->canMMXEXTBeUsed = 0;
    } else
        c->canMMXEXTBeUsed = 0;

    c->chrXInc = (((int64_t)c->chrSrcW << 16) + (c->chrDstW >> 1)) / c->chrDstW;
    c->chrYInc = (((int64_t)c->chrSrcH << 16) + (c->chrDstH >> 1)) / c->chrDstH;

    /* Match pixel 0 of the src to pixel 0 of dst and match pixel n-2 of src
     * to pixel n-2 of dst, but only for the FAST_BILINEAR mode otherwise do
     * correct scaling.
     * n-2 is the last chrominance sample available.
     * This is not perfect, but no one should notice the difference, the more
     * correct variant would be like the vertical one, but that would require
     * some special code for the first and last pixel */
    if (flags & SWS_FAST_BILINEAR) {
        if (c->canMMXEXTBeUsed) {
            c->lumXInc += 20;
            c->chrXInc += 20;
        }
        // we don't use the x86 asm scaler if MMX is available
        else if (INLINE_MMX(cpu_flags) && c->dstBpc <= 14) {
            c->lumXInc = ((int64_t)(srcW       - 2) << 16) / (dstW       - 2) - 20;
            c->chrXInc = ((int64_t)(c->chrSrcW - 2) << 16) / (c->chrDstW - 2) - 20;
        }
    }

    if (isBayer(srcFormat)) {
        if (!unscaled ||
            (dstFormat != AV_PIX_FMT_RGB24 && dstFormat != AV_PIX_FMT_YUV420P)) {
            enum AVPixelFormat tmpFormat = AV_PIX_FMT_RGB24;

            ret = av_image_alloc(c->cascaded_tmp, c->cascaded_tmpStride,
                                srcW, srcH, tmpFormat, 64);
            if (ret < 0)
                return ret;

            c->cascaded_context[0] = sws_getContext(srcW, srcH, srcFormat,
                                                    srcW, srcH, tmpFormat,
                                                    flags, srcFilter, NULL, c->param);
            if (!c->cascaded_context[0])
                return -1;

            c->cascaded_context[1] = sws_getContext(srcW, srcH, tmpFormat,
                                                    dstW, dstH, dstFormat,
                                                    flags, NULL, dstFilter, c->param);
            if (!c->cascaded_context[1])
                return -1;
            return 0;
        }
    }

#define USE_MMAP (HAVE_MMAP && HAVE_MPROTECT && defined MAP_ANONYMOUS)

    /* precalculate horizontal scaler filter coefficients */
    {
#if HAVE_MMXEXT_INLINE
// can't downscale !!!
if (c->canMMXEXTBeUsed && (flags & SWS_FAST_BILINEAR)) {            c->lumMmxextFilterCodeSize = ff_init_hscaler_mmxext(dstW, c->lumXInc, NULL,                                                             NULL, NULL, 8);            c->chrMmxextFilterCodeSize = ff_init_hscaler_mmxext(c->chrDstW, c->chrXInc,                                                             NULL, NULL, NULL, 4);#if USE_MMAP            c->lumMmxextFilterCode = mmap(NULL, c->lumMmxextFilterCodeSize,                                          PROT_READ | PROT_WRITE,                                          MAP_PRIVATE | MAP_ANONYMOUS,                                          -1, 0);            c->chrMmxextFilterCode = mmap(NULL, c->chrMmxextFilterCodeSize,                                          PROT_READ | PROT_WRITE,                                          MAP_PRIVATE | MAP_ANONYMOUS,                                          -1, 0);#elif HAVE_VIRTUALALLOC            c->lumMmxextFilterCode = VirtualAlloc(NULL,                                                  c->lumMmxextFilterCodeSize,                                                  MEM_COMMIT,                                                  PAGE_EXECUTE_READWRITE);            c->chrMmxextFilterCode = VirtualAlloc(NULL,                                                  c->chrMmxextFilterCodeSize,                                                  MEM_COMMIT,                                                  PAGE_EXECUTE_READWRITE);#else            c->lumMmxextFilterCode = av_malloc(c->lumMmxextFilterCodeSize);            c->chrMmxextFilterCode = av_malloc(c->chrMmxextFilterCodeSize);#endif#ifdef MAP_ANONYMOUS            if (c->lumMmxextFilterCode == MAP_FAILED || c->chrMmxextFilterCode == MAP_FAILED)#else            if (!c->lumMmxextFilterCode || !c->chrMmxextFilterCode)#endif            {                av_log(c, AV_LOG_ERROR, "Failed to allocate MMX2FilterCode\n");                return AVERROR(ENOMEM);            }            
FF_ALLOCZ_OR_GOTO(c, c->hLumFilter,    (dstW           / 8 + 8) * sizeof(int16_t), fail);            FF_ALLOCZ_OR_GOTO(c, c->hChrFilter,    (c->chrDstW     / 4 + 8) * sizeof(int16_t), fail);            FF_ALLOCZ_OR_GOTO(c, c->hLumFilterPos, (dstW       / 2 / 8 + 8) * sizeof(int32_t), fail);            FF_ALLOCZ_OR_GOTO(c, c->hChrFilterPos, (c->chrDstW / 2 / 4 + 8) * sizeof(int32_t), fail);            ff_init_hscaler_mmxext(      dstW, c->lumXInc, c->lumMmxextFilterCode,                                c->hLumFilter, (uint32_t*)c->hLumFilterPos, 8);            ff_init_hscaler_mmxext(c->chrDstW, c->chrXInc, c->chrMmxextFilterCode,                                c->hChrFilter, (uint32_t*)c->hChrFilterPos, 4);#if USE_MMAP            if (   mprotect(c->lumMmxextFilterCode, c->lumMmxextFilterCodeSize, PROT_EXEC | PROT_READ) == -1                || mprotect(c->chrMmxextFilterCode, c->chrMmxextFilterCodeSize, PROT_EXEC | PROT_READ) == -1) {                av_log(c, AV_LOG_ERROR, "mprotect failed, cannot use fast bilinear scaler\n");                goto fail;            }#endif        } else#endif /* HAVE_MMXEXT_INLINE */        {            const int filterAlign = X86_MMX(cpu_flags)     ? 4 :                                    PPC_ALTIVEC(cpu_flags) ? 8 : 1;            if ((ret = initFilter(&c->hLumFilter, &c->hLumFilterPos,                           &c->hLumFilterSize, c->lumXInc,                           srcW, dstW, filterAlign, 1 << 14,                           (flags & SWS_BICUBLIN) ? 
(flags | SWS_BICUBIC) : flags,                           cpu_flags, srcFilter->lumH, dstFilter->lumH,                           c->param,                           get_local_pos(c, 0, 0, 0),                           get_local_pos(c, 0, 0, 0))) < 0)                goto fail;            if ((ret = initFilter(&c->hChrFilter, &c->hChrFilterPos,                           &c->hChrFilterSize, c->chrXInc,                           c->chrSrcW, c->chrDstW, filterAlign, 1 << 14,                           (flags & SWS_BICUBLIN) ? (flags | SWS_BILINEAR) : flags,                           cpu_flags, srcFilter->chrH, dstFilter->chrH,                           c->param,                           get_local_pos(c, c->chrSrcHSubSample, c->src_h_chr_pos, 0),                           get_local_pos(c, c->chrDstHSubSample, c->dst_h_chr_pos, 0))) < 0)                goto fail;        }    } // initialize horizontal stuff    /* precalculate vertical scaler filter coefficients */    {        const int filterAlign = X86_MMX(cpu_flags)     ? 2 :                                PPC_ALTIVEC(cpu_flags) ? 8 : 1;        if ((ret = initFilter(&c->vLumFilter, &c->vLumFilterPos, &c->vLumFilterSize,                       c->lumYInc, srcH, dstH, filterAlign, (1 << 12),                       (flags & SWS_BICUBLIN) ? (flags | SWS_BICUBIC) : flags,                       cpu_flags, srcFilter->lumV, dstFilter->lumV,                       c->param,                       get_local_pos(c, 0, 0, 1),                       get_local_pos(c, 0, 0, 1))) < 0)            goto fail;        if ((ret = initFilter(&c->vChrFilter, &c->vChrFilterPos, &c->vChrFilterSize,                       c->chrYInc, c->chrSrcH, c->chrDstH,                       filterAlign, (1 << 12),                       (flags & SWS_BICUBLIN) ? 
(flags | SWS_BILINEAR) : flags,                       cpu_flags, srcFilter->chrV, dstFilter->chrV,                       c->param,                       get_local_pos(c, c->chrSrcVSubSample, c->src_v_chr_pos, 1),                       get_local_pos(c, c->chrDstVSubSample, c->dst_v_chr_pos, 1))) < 0)            goto fail;#if HAVE_ALTIVEC        FF_ALLOC_OR_GOTO(c, c->vYCoeffsBank, sizeof(vector signed short) * c->vLumFilterSize * c->dstH,    fail);        FF_ALLOC_OR_GOTO(c, c->vCCoeffsBank, sizeof(vector signed short) * c->vChrFilterSize * c->chrDstH, fail);        for (i = 0; i < c->vLumFilterSize * c->dstH; i++) {            int j;            short *p = (short *)&c->vYCoeffsBank[i];            for (j = 0; j < 8; j++)                p[j] = c->vLumFilter[i];        }        for (i = 0; i < c->vChrFilterSize * c->chrDstH; i++) {            int j;            short *p = (short *)&c->vCCoeffsBank[i];            for (j = 0; j < 8; j++)                p[j] = c->vChrFilter[i];        }#endif    }    // calculate buffer sizes so that they won't run out while handling these damn slices    c->vLumBufSize = c->vLumFilterSize;    c->vChrBufSize = c->vChrFilterSize;    for (i = 0; i < dstH; i++) {        int chrI      = (int64_t)i * c->chrDstH / dstH;        int nextSlice = FFMAX(c->vLumFilterPos[i] + c->vLumFilterSize - 1,                              ((c->vChrFilterPos[chrI] + c->vChrFilterSize - 1)                               << c->chrSrcVSubSample));        nextSlice >>= c->chrSrcVSubSample;        nextSlice <<= c->chrSrcVSubSample;        if (c->vLumFilterPos[i] + c->vLumBufSize < nextSlice)            c->vLumBufSize = nextSlice - c->vLumFilterPos[i];        if (c->vChrFilterPos[chrI] + c->vChrBufSize <            (nextSlice >> c->chrSrcVSubSample))            c->vChrBufSize = (nextSlice >> c->chrSrcVSubSample) -                             c->vChrFilterPos[chrI];    }    for (i = 0; i < 4; i++)        FF_ALLOCZ_OR_GOTO(c, c->dither_error[i], (c->dstW+2) * sizeof(int), 
fail);    /* Allocate pixbufs (we use dynamic allocation because otherwise we would     * need to allocate several megabytes to handle all possible cases) */    FF_ALLOC_OR_GOTO(c, c->lumPixBuf,  c->vLumBufSize * 3 * sizeof(int16_t *), fail);    FF_ALLOC_OR_GOTO(c, c->chrUPixBuf, c->vChrBufSize * 3 * sizeof(int16_t *), fail);    FF_ALLOC_OR_GOTO(c, c->chrVPixBuf, c->vChrBufSize * 3 * sizeof(int16_t *), fail);    if (CONFIG_SWSCALE_ALPHA && isALPHA(c->srcFormat) && isALPHA(c->dstFormat))        FF_ALLOCZ_OR_GOTO(c, c->alpPixBuf, c->vLumBufSize * 3 * sizeof(int16_t *), fail);    /* Note we need at least one pixel more at the end because of the MMX code     * (just in case someone wants to replace the 4000/8000). */    /* align at 16 bytes for AltiVec */    for (i = 0; i < c->vLumBufSize; i++) {        FF_ALLOCZ_OR_GOTO(c, c->lumPixBuf[i + c->vLumBufSize],                          dst_stride + 16, fail);        c->lumPixBuf[i] = c->lumPixBuf[i + c->vLumBufSize];    }    // 64 / c->scalingBpp is the same as 16 / sizeof(scaling_intermediate)    c->uv_off   = (dst_stride>>1) + 64 / (c->dstBpc &~ 7);    c->uv_offx2 = dst_stride + 16;    for (i = 0; i < c->vChrBufSize; i++) {        FF_ALLOC_OR_GOTO(c, c->chrUPixBuf[i + c->vChrBufSize],                         dst_stride * 2 + 32, fail);        c->chrUPixBuf[i] = c->chrUPixBuf[i + c->vChrBufSize];        c->chrVPixBuf[i] = c->chrVPixBuf[i + c->vChrBufSize]                         = c->chrUPixBuf[i] + (dst_stride >> 1) + 8;    }    if (CONFIG_SWSCALE_ALPHA && c->alpPixBuf)        for (i = 0; i < c->vLumBufSize; i++) {            FF_ALLOCZ_OR_GOTO(c, c->alpPixBuf[i + c->vLumBufSize],                              dst_stride + 16, fail);            c->alpPixBuf[i] = c->alpPixBuf[i + c->vLumBufSize];        }    // try to avoid drawing green stuff between the right end and the stride end    for (i = 0; i < c->vChrBufSize; i++)        if(desc_dst->comp[0].depth_minus1 == 15){            av_assert0(c->dstBpc > 14);            
for(j=0; j2+1; j++)                ((int32_t*)(c->chrUPixBuf[i]))[j] = 1<<18;        } else            for(j=0; j1; j++)                ((int16_t*)(c->chrUPixBuf[i]))[j] = 1<<14;    av_assert0(c->chrDstH <= dstH);    //是否要输出    if (flags & SWS_PRINT_INFO) {        const char *scaler = NULL, *cpucaps;        for (i = 0; i < FF_ARRAY_ELEMS(scale_algorithms); i++) {            if (flags & scale_algorithms[i].flag) {                scaler = scale_algorithms[i].description;                break;            }        }        if (!scaler)            scaler =  "ehh flags invalid?!";        av_log(c, AV_LOG_INFO, "%s scaler, from %s to %s%s ",               scaler,               av_get_pix_fmt_name(srcFormat),#ifdef DITHER1XBPP               dstFormat == AV_PIX_FMT_BGR555   || dstFormat == AV_PIX_FMT_BGR565   ||               dstFormat == AV_PIX_FMT_RGB444BE || dstFormat == AV_PIX_FMT_RGB444LE ||               dstFormat == AV_PIX_FMT_BGR444BE || dstFormat == AV_PIX_FMT_BGR444LE ?                                                             "dithered " : "",#else               "",#endif               av_get_pix_fmt_name(dstFormat));        if (INLINE_MMXEXT(cpu_flags))            cpucaps = "MMXEXT";        else if (INLINE_AMD3DNOW(cpu_flags))            cpucaps = "3DNOW";        else if (INLINE_MMX(cpu_flags))            cpucaps = "MMX";        else if (PPC_ALTIVEC(cpu_flags))            cpucaps = "AltiVec";        else            cpucaps = "C";        av_log(c, AV_LOG_INFO, "using %s\n", cpucaps);        av_log(c, AV_LOG_VERBOSE, "%dx%d -> %dx%d\n", srcW, srcH, dstW, dstH);        av_log(c, AV_LOG_DEBUG,               "lum srcW=%d srcH=%d dstW=%d dstH=%d xInc=%d yInc=%d\n",               c->srcW, c->srcH, c->dstW, c->dstH, c->lumXInc, c->lumYInc);        av_log(c, AV_LOG_DEBUG,               "chr srcW=%d srcH=%d dstW=%d dstH=%d xInc=%d yInc=%d\n",               c->chrSrcW, c->chrSrcH, c->chrDstW, c->chrDstH,               c->chrXInc, c->chrYInc);    }    /* unscaled special 
cases */    //不拉伸的情况    if (unscaled && !usesHFilter && !usesVFilter &&        (c->srcRange == c->dstRange || isAnyRGB(dstFormat))) {     //不许拉伸的情况下,初始化相应的函数        ff_get_unscaled_swscale(c);        if (c->swscale) {            if (flags & SWS_PRINT_INFO)                av_log(c, AV_LOG_INFO,                       "using unscaled %s -> %s special converter\n",                       av_get_pix_fmt_name(srcFormat), av_get_pix_fmt_name(dstFormat));            return 0;        }    }    //关键:设置SwsContext中的swscale()指针    c->swscale = ff_getSwsFunc(c);    return 0;fail: // FIXME replace things by appropriate error codes    if (ret == RETCODE_USE_CASCADE)  {        int tmpW = sqrt(srcW * (int64_t)dstW);        int tmpH = sqrt(srcH * (int64_t)dstH);        enum AVPixelFormat tmpFormat = AV_PIX_FMT_YUV420P;        if (srcW*(int64_t)srcH <= 4LL*dstW*dstH)            return AVERROR(EINVAL);        ret = av_image_alloc(c->cascaded_tmp, c->cascaded_tmpStride,                             tmpW, tmpH, tmpFormat, 64);        if (ret < 0)            return ret;        c->cascaded_context[0] = sws_getContext(srcW, srcH, srcFormat,                                                tmpW, tmpH, tmpFormat,                                                flags, srcFilter, NULL, c->param);        if (!c->cascaded_context[0])            return -1;        c->cascaded_context[1] = sws_getContext(tmpW, tmpH, tmpFormat,                                                dstW, dstH, dstFormat,                                                flags, NULL, dstFilter, c->param);        if (!c->cascaded_context[1])            return -1;        return 0;    }    return -1;}

Besides assigning values to the various fields of SwsContext, sws_init_context() performs the following work, in order:
1.  Initialize the RGB-to-RGB (or YUV-to-YUV) conversion functions via sws_rgb2rgb_init() (note: this does not include functions that convert between RGB and YUV).
2.  Compare the input and output dimensions to decide whether the image needs to be rescaled. If no rescaling is needed, the unscaled variable is set to 1.
3.  Initialize the colorspace via sws_setColorspaceDetails().
4.  Validate some of the input parameters. For example, if no scaling algorithm is specified, SWS_BICUBIC is set by default; if the input or output width/height is less than or equal to 0, an error is returned.
5.  Initialize the filters. Different filters are initialized depending on the scaling algorithm.
6.  If the SWS_PRINT_INFO flag is set, print information about the conversion.
7.  If no rescaling is needed, call ff_get_unscaled_swscale() to assign a specialized pixel-conversion function to the swscale pointer in SwsContext.
8.  If rescaling is needed, call ff_getSwsFunc() to assign the generic swscale() function to the swscale pointer in SwsContext (this is a little confusing, but it really does work this way).

The implementation of each of these steps is described below.


1. Initializing the RGB-to-RGB (or YUV-to-YUV) conversion functions. Note that these do not include functions that convert between RGB and YUV.

sws_rgb2rgb_init()

sws_rgb2rgb_init() is defined in libswscale\rgb2rgb.c, as shown below.
av_cold void sws_rgb2rgb_init(void)
{
    rgb2rgb_init_c();
    if (ARCH_X86)
        rgb2rgb_init_x86();
}

As the sws_rgb2rgb_init() code shows, there are two initialization functions: rgb2rgb_init_c() initializes the C implementations of the RGB-to-RGB (or YUV-to-YUV) conversions, while rgb2rgb_init_x86() initializes the x86 assembly implementations.
PS: One naming convention in libswscale is worth noting: many function names carry a suffix such as "_c", indicating that the function is written in C. Corresponding suffixes such as "_mmx" and "_sse2" mark the other optimized variants.


rgb2rgb_


