How to generate a video thumbnail with ffmpeg

Core idea

Use ffmpeg to decode the first keyframe of the video, convert it to a UIImage, and then save it as a JPEG. If persistence is not required, the UIImage object can be used directly.
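
The whole flow can be wrapped in a single method. Below is a minimal sketch of such a wrapper; decodeFirstKeyFrameFromPath: is a hypothetical helper standing in for the ffmpeg open/decode/convert steps walked through in the sections below.

- (nullable UIImage *)thumbnailForVideoAtPath:(NSString *)videoPath
                                 savingToPath:(nullable NSString *)destPath {
    // Hypothetical helper containing the ffmpeg decode + sws_scale + CGImage steps shown below
    UIImage *img = [self decodeFirstKeyFrameFromPath:videoPath];
    if (img && destPath) {
        // Optional persistence: encode as JPEG and write to disk
        [UIImageJPEGRepresentation(img, 0.8) writeToFile:destPath atomically:YES];
    }
    return img;
}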

Manually integrating ffmpeg

I build ffmpeg with ffmpeg-kit; the build script is

ffmpeg-kit/tools/release/ios.sh

The build artifacts can then be found under

ffmpeg-kit/prebuilt/bundle-apple-cocoapods-ios/ffmpeg-kit-ios-min/

Point the Podfile at the ffmpeg-kit-ios-min.podspec in that directory, or push the artifacts to your own git repo.
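
A Podfile entry referencing the locally built podspec might look like this; the path is relative to your Podfile and purely illustrative:

# Illustrative Podfile entry pointing at the locally built podspec
pod 'ffmpeg-kit-ios-min',
    :podspec => 'ffmpeg-kit/prebuilt/bundle-apple-cocoapods-ios/ffmpeg-kit-ios-min/ffmpeg-kit-ios-min.podspec'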

Implementation

Open the video file with ffmpeg

AVFormatContext *context = avformat_alloc_context();
// Create an AVFormatContext from the file; on any error, jump to the shared
// cleanup code shown in the "Release the resources" section below
int ret;
ret = avformat_open_input(&context, [videoPath UTF8String], NULL, NULL);
if (ret != 0) goto free_res;
// Read stream information (returns >= 0 on success)
ret = avformat_find_stream_info(context, NULL);
if (ret < 0) goto free_res;

Find the video stream

// Find the first video stream
AVStream *videoStream = NULL;
int videoStreamIndex = -1;
for (int i = 0; i < context->nb_streams; ++i) {
  if (context->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_VIDEO) {
    videoStream = context->streams[i];
    videoStreamIndex = i;
    break;
  }
}
if (!videoStream) goto free_res;

The stream index is also saved here so that, when reading packets later, we can check which stream each packet belongs to.
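
As an aside, ffmpeg also offers av_find_best_stream, which performs the loop above and the decoder lookup in a single call; a minimal sketch, assuming an ffmpeg 5.x API where decoder pointers are const:

const AVCodec *videoCodec = NULL;
int videoStreamIndex = av_find_best_stream(context, AVMEDIA_TYPE_VIDEO, -1, -1, &videoCodec, 0);
if (videoStreamIndex < 0) goto free_res;
AVStream *videoStream = context->streams[videoStreamIndex];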

Create the decoder

// Create the video decoder
const AVCodec *videoCodec = avcodec_find_decoder(videoStream->codecpar->codec_id);
if (!videoCodec) goto free_res;
AVCodecContext *videoCodecContext = avcodec_alloc_context3(videoCodec);
// Copy the stream's codec parameters (dimensions, pixel format, extradata, ...) into the context
avcodec_parameters_to_context(videoCodecContext, videoStream->codecpar);
ret = avcodec_open2(videoCodecContext, videoCodec, NULL);
if (ret != 0) goto free_res;

Read the first video I-frame

AVPacket *firstPacket = av_packet_alloc();
AVFrame *rawFrame = av_frame_alloc();
BOOL gotKeyFrame = NO;
while (!gotKeyFrame && av_read_frame(context, firstPacket) == 0) {
    if (firstPacket->stream_index == videoStreamIndex) {
        // Feed the packet to the decoder and try to pull out a decoded frame;
        // avcodec_receive_frame may need several packets before it succeeds
        if (avcodec_send_packet(videoCodecContext, firstPacket) == 0 &&
            avcodec_receive_frame(videoCodecContext, rawFrame) == 0 &&
            rawFrame->pict_type == AV_PICTURE_TYPE_I) {
            gotKeyFrame = YES;
        }
    }
    av_packet_unref(firstPacket); // release the packet's buffers before reusing it
}
if (!gotKeyFrame) goto free_res;

Scale the image and convert the pixel format with sws_scale

int width = rawFrame->width;
int height = rawFrame->height;
int bitsPerComponent = 8;
int bitsPerPixel = bitsPerComponent * 4;
int bytesPerRow = 4 * width;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
NSMutableData *rgbaData = [NSMutableData.alloc initWithLength:width * height * 4];

void *dstAddress = (void *)rgbaData.bytes;
// Convert the decoded frame to packed RGBA with swscale; source and destination
// dimensions are the same, so no scaling actually takes place here
struct SwsContext *swsContext = sws_getContext(width, height, (enum AVPixelFormat)rawFrame->format,
                                               width, height, AV_PIX_FMT_RGBA,
                                               SWS_BILINEAR, NULL, NULL, NULL);
if (!swsContext) goto free_res;
sws_scale(swsContext,
          (const uint8_t *const *)rawFrame->data,
          rawFrame->linesize,
          0,
          height,
          (uint8_t *const *)&dstAddress,
          &bytesPerRow);

This converts the AVFrame's picture into the RGBA pixel format and stores the result in rgbaData.

Convert the RGBA data to a UIImage

// Wrap the RGBA buffer in a CGDataProvider and build a CGImage from it
CFDataRef rgbaDataRef = (__bridge CFDataRef)rgbaData;
CGDataProviderRef provider = CGDataProviderCreateWithCFData(rgbaDataRef);
CGImageRef cgImage = CGImageCreate(width, height, bitsPerComponent, bitsPerPixel, bytesPerRow,
                                   colorSpace, (CGBitmapInfo)kCGImageAlphaLast | kCGBitmapByteOrderDefault,
                                   provider, NULL, YES, kCGRenderingIntentDefault);

UIImage *img = [UIImage.alloc initWithCGImage:cgImage];
CGImageRelease(cgImage);

Convert the UIImage to NSData and save it to disk

NSData *imgData = UIImageJPEGRepresentation(img, 0.8);
[imgData writeToFile:destPath atomically:YES];
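
destPath is not defined in the snippets above; purely as an illustration, one way to build it is to place the thumbnail in the caches directory:

// Illustrative only: the article does not specify where destPath comes from
NSString *cachesDir = NSSearchPathForDirectoriesInDomains(NSCachesDirectory, NSUserDomainMask, YES).firstObject;
NSString *destPath = [cachesDir stringByAppendingPathComponent:@"thumbnail.jpg"];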

Release the resources

free_res:
// Shared cleanup; the error paths above jump here via goto free_res.
// (In practice, initialize these pointers to NULL up front so that an early goto is safe.)
CGDataProviderRelease(provider);
CGColorSpaceRelease(colorSpace);
sws_freeContext(swsContext);

avcodec_free_context(&videoCodecContext);
av_packet_free(&firstPacket);
av_frame_free(&rawFrame);
avformat_close_input(&context); // closes the input opened by avformat_open_input and frees the context
