Decoding Video with FFmpeg

Contents

1. Introduction

2. FFmpeg decoding API overview

3. FFmpeg decoding example

4. FFmpeg decoding framework design

Link to "Decoding H264/H265 to YUV with FFmpeg (code walkthrough)":

https://edu.csdn.net/learn/38258/606144?spm=1003.2001.3001.4157

1. Introduction

      There are several ways to decode video in audio/video development today, such as GPU (hardware) decoding and CPU (software) decoding. CPU decoding typically uses a pure-software approach, and FFmpeg is the most popular open-source library for it. FFmpeg supports not only video decoding but also video encoding, audio encoding and decoding, image filtering, and remuxing of audio/video containers. FFmpeg on GitHub: GitHub - FFmpeg/FFmpeg: Mirror of https://git.ffmpeg.org/ffmpeg.git. This article focuses on using FFmpeg to decode H264/H265 video and output it in YUV420 format.

2. FFmpeg decoding API overview

1. AVCodec *avcodec_find_decoder(enum AVCodecID id);

     This function looks up a decoder by ID and returns it. The id parameter is the decoder's enum value; AVCodecID defines hundreds of codec IDs. The IDs for H264 and H265 are shown below. (Note: since FFmpeg 5.0 this function returns const AVCodec *.)

  enum AVCodecID {
  ......
  AV_CODEC_ID_H264,  
  ......
  AV_CODEC_ID_HEVC,
#define AV_CODEC_ID_H265 AV_CODEC_ID_HEVC
  ......
};

     The return value is a pointer of type AVCodec. AVCodec is the structure describing a codec's properties, defined as follows.

/**
 * AVCodec.
 */
typedef struct AVCodec {
    /**
     * Name of the codec implementation.
     * The name is globally unique among encoders and among decoders (but an
     * encoder and a decoder can share the same name).
     * This is the primary way to find a codec from the user perspective.
     */
    const char *name;
    /**
     * Descriptive name for the codec, meant to be more human readable than name.
     * You should use the NULL_IF_CONFIG_SMALL() macro to define it.
     */
    const char *long_name;
    enum AVMediaType type;
    enum AVCodecID id;
    /**
     * Codec capabilities.
     * see AV_CODEC_CAP_*
     */
    int capabilities;
    uint8_t max_lowres;                     ///< maximum value for lowres supported by the decoder
    const AVRational *supported_framerates; ///< array of supported framerates, or NULL if any, array is terminated by {0,0}
    const enum AVPixelFormat *pix_fmts;     ///< array of supported pixel formats, or NULL if unknown, array is terminated by -1
    const int *supported_samplerates;       ///< array of supported audio samplerates, or NULL if unknown, array is terminated by 0
    const enum AVSampleFormat *sample_fmts; ///< array of supported sample formats, or NULL if unknown, array is terminated by -1
#if FF_API_OLD_CHANNEL_LAYOUT
    /**
     * @deprecated use ch_layouts instead
     */
    attribute_deprecated
    const uint64_t *channel_layouts;         ///< array of support channel layouts, or NULL if unknown. array is terminated by 0
#endif
    const AVClass *priv_class;              ///< AVClass for the private context
    const AVProfile *profiles;              ///< array of recognized profiles, or NULL if unknown, array is terminated by {FF_PROFILE_UNKNOWN}

    /**
     * Group name of the codec implementation.
     * This is a short symbolic name of the wrapper backing this codec. A
     * wrapper uses some kind of external implementation for the codec, such
     * as an external library, or a codec implementation provided by the OS or
     * the hardware.
     * If this field is NULL, this is a builtin, libavcodec native codec.
     * If non-NULL, this will be the suffix in AVCodec.name in most cases
     * (usually AVCodec.name will be of the form "<codec_name>_<wrapper_name>").
     */
    const char *wrapper_name;

    /**
     * Array of supported channel layouts, terminated with a zeroed layout.
     */
    const AVChannelLayout *ch_layouts;
} AVCodec;

2. AVCodecContext *avcodec_alloc_context3(const AVCodec *codec);

     This function allocates a context for the decoder and returns it. The codec parameter is the decoder, i.e. the return value of avcodec_find_decoder. The return value is a pointer to an AVCodecContext. The AVCodecContext structure is defined in avcodec.h; since it has a large number of members, it is not reproduced here.

3. int avcodec_open2(AVCodecContext *avctx, const AVCodec *codec, AVDictionary **options);

     Opens the decoder. The avctx parameter is the decoder context, i.e. the return value of avcodec_alloc_context3; the codec parameter is the decoder pointer, i.e. the return value of avcodec_find_decoder; the options parameter holds decoder configuration options and is usually NULL, meaning the decoder's default settings are used.

4. int avcodec_send_packet(AVCodecContext *avctx, const AVPacket *avpkt);

    This function sends H264/H265 compressed data to the decoder. The avctx parameter is the decoder context, i.e. the return value of avcodec_alloc_context3. The avpkt parameter is the input data to be decoded, of type AVPacket; the AVPacket structure is defined below. Its main members are the bitstream data pointer uint8_t *data, the bitstream size int size, and the bitstream timestamp int64_t pts.

typedef struct AVPacket {
    /**
     * A reference to the reference-counted buffer where the packet data is
     * stored.
     * May be NULL, then the packet data is not reference-counted.
     */
    AVBufferRef *buf;
    /**
     * Presentation timestamp in AVStream->time_base units; the time at which
     * the decompressed packet will be presented to the user.
     * Can be AV_NOPTS_VALUE if it is not stored in the file.
     * pts MUST be larger or equal to dts as presentation cannot happen before
     * decompression, unless one wants to view hex dumps. Some formats misuse
     * the terms dts and pts/cts to mean something different. Such timestamps
     * must be converted to true pts/dts before they are stored in AVPacket.
     */
    int64_t pts;
    /**
     * Decompression timestamp in AVStream->time_base units; the time at which
     * the packet is decompressed.
     * Can be AV_NOPTS_VALUE if it is not stored in the file.
     */
    int64_t dts;
    uint8_t *data;
    int   size;
    int   stream_index;
    /**
     * A combination of AV_PKT_FLAG values
     */
    int   flags;
    /**
     * Additional packet data that can be provided by the container.
     * Packet can contain several types of side information.
     */
    AVPacketSideData *side_data;
    int side_data_elems;

    /**
     * Duration of this packet in AVStream->time_base units, 0 if unknown.
     * Equals next_pts - this_pts in presentation order.
     */
    int64_t duration;

    int64_t pos;                            ///< byte position in stream, -1 if unknown

    /**
     * for some private data of the user
     */
    void *opaque;

    /**
     * AVBufferRef for free use by the API user. FFmpeg will never check the
     * contents of the buffer ref. FFmpeg calls av_buffer_unref() on it when
     * the packet is unreferenced. av_packet_copy_props() calls create a new
     * reference with av_buffer_ref() for the target packet's opaque_ref field.
     *
     * This is unrelated to the opaque field, although it serves a similar
     * purpose.
     */
    AVBufferRef *opaque_ref;

    /**
     * Time base of the packet's timestamps.
     * In the future, this field may be set on packets output by encoders or
     * demuxers, but its value will be by default ignored on input to decoders
     * or muxers.
     */
    AVRational time_base;
} AVPacket;

5. int avcodec_receive_frame(AVCodecContext *avctx, AVFrame *frame);

      This function receives decoded data (e.g. YUV) from the decoder. The avctx parameter is the decoder context, i.e. the return value of avcodec_alloc_context3. The frame parameter receives the decoded output. AVFrame has many members, so they are not all shown here; the most relevant ones are:

typedef struct AVFrame {
#define AV_NUM_DATA_POINTERS 8
    // resolution of the decoded YUV image
    int width, height;
    // timestamp of the decoded frame
    int64_t pts;
    // pointers to the decoded data planes
    uint8_t *data[AV_NUM_DATA_POINTERS];
    // stride (line size) of each plane of the decoded image
    int linesize[AV_NUM_DATA_POINTERS];

} AVFrame;

6. void avcodec_free_context(AVCodecContext **avctx);

     This function frees the decoder context. The avctx parameter is the address of the decoder context variable, i.e. the return value of avcodec_alloc_context3.

3. FFmpeg decoding example

1. Core code for decoder initialization

int video_denc_init(int vencType)
{
    enum AVCodecID codecId = vencType ? AV_CODEC_ID_H265 : AV_CODEC_ID_H264;
    // Look up the matching decoder (since FFmpeg 5.0 this returns const AVCodec *)
    g_videoDencMng.pAVCodecDecoder = (AVCodec*)avcodec_find_decoder(codecId);
    if (!g_videoDencMng.pAVCodecDecoder){
        printf("can not find H26x (%d) codec\n", codecId);
        return -1;
    }
    // Allocate a codec context for the decoder
    g_videoDencMng.pAVCodecCtxDecoder = avcodec_alloc_context3(g_videoDencMng.pAVCodecDecoder);
    if (g_videoDencMng.pAVCodecCtxDecoder == NULL) {
        printf("Could not alloc video context!\n");
        return -1;
    }

    // Allocate the frames that avcodec_receive_frame() will fill
    g_videoDencMng.pAVFrameDecoder  = av_frame_alloc();
    g_videoDencMng.pFrameYUVDecoder = av_frame_alloc();

    // Open the decoder
    if (avcodec_open2(g_videoDencMng.pAVCodecCtxDecoder, g_videoDencMng.pAVCodecDecoder, NULL) < 0){
        printf("Failed to open h26x decoder\n");
        video_denc_release();
        return -1;
    }

    g_videoDencMng.bInit = 1;
    return 0;
}

2. Core code for decoding one frame of video

int video_denc_decode(unsigned char *inbuf, int inSize, long pts)
{

    if (0 == g_videoDencMng.bInit || !inbuf || inSize <= 0)
    {
        return -1;
    }
    av_frame_unref(g_videoDencMng.pAVFrameDecoder);
    av_frame_unref(g_videoDencMng.pFrameYUVDecoder);

    g_videoDencMng.mAVPacketDecoder.data = inbuf;
    g_videoDencMng.mAVPacketDecoder.size = inSize;
    g_videoDencMng.mAVPacketDecoder.pts  = pts;
    int ret = avcodec_send_packet(g_videoDencMng.pAVCodecCtxDecoder, &g_videoDencMng.mAVPacketDecoder);
    if (ret != 0)
    {
        printf("avcodec_send_packet error(%d)\n", ret);
        return -1;
    }

    ret = avcodec_receive_frame(g_videoDencMng.pAVCodecCtxDecoder, g_videoDencMng.pAVFrameDecoder); // output is YUV420P by default
    if (ret == 0)
    {
        g_videoDencMng.mFrameNum++;
        // YuvFrame: the module's output-frame type (type name assumed; lost in the original)
        std::shared_ptr<YuvFrame> pYuv = std::make_shared<YuvFrame>();
        pYuv->width  = g_videoDencMng.pAVFrameDecoder->width;
        pYuv->height = g_videoDencMng.pAVFrameDecoder->height;
        pYuv->pts    = g_videoDencMng.pAVFrameDecoder->pts;
        pYuv->seq    = g_videoDencMng.mFrameNum;
        pYuv->lineSize[0] = g_videoDencMng.pAVFrameDecoder->linesize[0];
        pYuv->lineSize[1] = g_videoDencMng.pAVFrameDecoder->linesize[1];
        pYuv->lineSize[2] = g_videoDencMng.pAVFrameDecoder->linesize[2];
        pYuv->pData  = new unsigned char [pYuv->width * pYuv->height * 3 / 2];
        // Copy the Y plane row by row, dropping the per-row stride padding
        for(int i = 0; i < pYuv->height; i++)
        {
            memcpy(pYuv->pData+i*pYuv->width, g_videoDencMng.pAVFrameDecoder->data[0]+i*g_videoDencMng.pAVFrameDecoder->linesize[0], pYuv->width);
        }
        // Copy the U and V planes (half resolution in both dimensions)
        for(int i = 0; i < pYuv->height / 2; i++)
        {
            memcpy(pYuv->pData+pYuv->width*pYuv->height+i*pYuv->width / 2, g_videoDencMng.pAVFrameDecoder->data[1]+i*g_videoDencMng.pAVFrameDecoder->linesize[1], pYuv->width / 2);
            memcpy(pYuv->pData+pYuv->width*pYuv->height + pYuv->width / 2 *pYuv->height / 2+i*pYuv->width/2, g_videoDencMng.pAVFrameDecoder->data[2]+i*g_videoDencMng.pAVFrameDecoder->linesize[2], pYuv->width / 2);
        }

        // Store the YUV frame in the output buffer queue (omitted)
        // ......

        return (pYuv->width * pYuv->height * 3 / 2);
    }
    else if (ret == AVERROR(EAGAIN))
    {
        printf("avcodec_receive_frame :EAGAIN %d\n", ret);
        return 0;
    }

    printf("avcodec_receive_frame error %d\n", ret);
    return -1;
}

3. Core code for closing the decoder

static int video_denc_release()
{
    if (g_videoDencMng.pAVCodecCtxDecoder != NULL) {
        avcodec_free_context(&g_videoDencMng.pAVCodecCtxDecoder); // also sets the pointer to NULL
    }

    // Free the decode frames if they were allocated (av_frame_free is NULL-safe)
    av_frame_free(&g_videoDencMng.pAVFrameDecoder);
    av_frame_free(&g_videoDencMng.pFrameYUVDecoder);
    g_videoDencMng.bInit = 0;

    return 0;
}

4. FFmpeg decoding framework design

[Figure 1: FFmpeg video decoding framework diagram]
