Before tackling audio/video playback synchronization, there are a few basics I need to explain.
The basic audio/video concepts above are the main factors in playback synchronization, so we must first open the media file and extract its audio and video stream information; only with that information can synchronization be done properly. So how do we obtain it?
AVFormatContext *pFormatCtx = avformat_alloc_context();
if(avformat_open_input(&pFormatCtx, filepath, NULL, NULL) != 0) //open the media file
return -1;
if(avformat_find_stream_info(pFormatCtx, NULL) < 0) //probe the streams' parameters
return -1;
av_dump_format(pFormatCtx, 0, filepath, 0); //print what was found
//The following is the information printed by av_dump_format
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'bootloader.mp4':
Metadata:
major_brand : mp42
minor_version : 0
compatible_brands: mp42mp41
creation_time : 2017-12-29T09:16:47.000000Z
Duration: 00:14:10.67, start: 0.000000, bitrate: 1128 kb/s
Stream #0:0(eng): Video: h264 (Main) (avc1 / 0x31637661), yuv420p(tv, bt709), 1024x768, 808 kb/s, 8 fps, 8 tbr, 16 tbn, 16 tbc (default)
Metadata:
creation_time : 2017-12-29T09:16:47.000000Z
handler_name : Alias Data Handler
encoder : AVC Coding
Stream #0:1(eng): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 317 kb/s (default)
Metadata:
creation_time : 2017-12-29T09:16:47.000000Z
handler_name : Alias Data Handler
We mainly care about the following:
File duration: Duration: 00:14:10.67. This comes from the duration member of AVFormatContext; other fields such as bit_rate and packet_size are available there as well (see the sketch after this list).
Video stream: Video: h264 (Main) (avc1 / 0x31637661), yuv420p(tv, bt709), 1024x768, 808 kb/s, 8 fps, 8 tbr, 16 tbn, 16 tbc (default)
Audio stream: Stream #0:1(eng): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 317 kb/s (default)
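A small sketch of reading these fields directly, continuing from the pFormatCtx opened above (AVFormatContext->duration is expressed in AV_TIME_BASE units, i.e. microseconds):
//duration is in AV_TIME_BASE units (microseconds)
int64_t secs = pFormatCtx->duration / AV_TIME_BASE;
printf("duration: %02ld:%02ld:%02ld, bit_rate: %ld kb/s\n",
secs / 3600, (secs % 3600) / 60, secs % 60, pFormatCtx->bit_rate / 1000);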
Where do these numbers come from?
Once an AVFormatContext has been allocated and populated from the media file, you can search its AVStream **streams member for the audio and video streams and read their parameters from there.
Audio information mainly comes from the struct AVCodecContext embedded in the AVStream. After locating the audio stream in the streams member of AVFormatContext, the audio parameters can be read as follows:
pFormatCtx->streams[AudioIndex]->codec->codec_id (an enum value identifying the codec)
pFormatCtx->streams[AudioIndex]->codec->sample_rate
pFormatCtx->streams[AudioIndex]->codec->frame_size
pFormatCtx->streams[AudioIndex]->codec->channels
pFormatCtx->streams[AudioIndex]->codec->sample_fmt (an enum value identifying the sample format)
Getting video information is similar. After locating the video stream in the streams member of AVFormatContext, the video parameters are:
pFormatCtx->streams[VideoIndex]->codec->codec_id (an enum value identifying the codec)
pFormatCtx->streams[VideoIndex]->codec->width / height
pFormatCtx->streams[VideoIndex]->codec->framerate (an AVRational; this struct represents a fraction, with num the numerator and den the denominator, and it will come up repeatedly below)
A sketch of the lookup follows.
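Here is a minimal lookup sketch; like the full program below, it uses the (since-deprecated) AVStream->codec field:
int VideoIndex = -1, AudioIndex = -1;
for(unsigned int i = 0; i < pFormatCtx->nb_streams; i++)
{
if(pFormatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO && VideoIndex < 0) VideoIndex = i;
if(pFormatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_AUDIO && AudioIndex < 0) AudioIndex = i;
}
if(AudioIndex >= 0)
{ //audio parameters live in the stream's codec context
AVCodecContext *ac = pFormatCtx->streams[AudioIndex]->codec;
printf("audio: codec_id:%d, sample_rate:%d, channels:%d, frame_size:%d\n",
ac->codec_id, ac->sample_rate, ac->channels, ac->frame_size);
}
if(VideoIndex >= 0)
{ //video parameters likewise
AVCodecContext *vc = pFormatCtx->streams[VideoIndex]->codec;
printf("video: codec_id:%d, %dx%d, framerate:%d/%d\n",
vc->codec_id, vc->width, vc->height, vc->framerate.num, vc->framerate.den);
}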
Having extracted the audio and video information through the steps above, we can decode and play. In theory, if audio and video are each played on their own schedule, they should stay synchronized by themselves. Suppose a media file has an AAC audio stream (2 channels, 16-bit samples, 44.1 kHz sample rate) and an H.264 video stream at 25 fps. An AAC frame carries 1024 samples, so audio frames are 1024 / 44100 ≈ 23.2 ms apart, while video frames are 1000 / 25 = 40 ms apart. The ideal playback schedule looks like this (times in ms):
| Timeline | 0 | 23.2 | 40 | 46.4 | 69.6 | 80 | 92.8 | 116 | 120 | … |
|---|---|---|---|---|---|---|---|---|---|---|
| Audio points | 0 | 23.2 | | 46.4 | 69.6 | | 92.8 | 116 | | … |
| Video points | 0 | | 40 | | | 80 | | | 120 | … |
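Both intervals are simple arithmetic on the fields read earlier; a sketch, assuming the audio codec context and the video stream's avg_frame_rate as in the full program below:
//audio: samples per frame / sample rate, e.g. 1024 / 44100 ≈ 23.2 ms
double audio_ms = 1000.0 * pAudioCodecCtx->frame_size / pAudioCodecCtx->sample_rate;
//video: seconds per frame from the frame-rate fraction, e.g. 25 fps → 40 ms
AVRational vfr = pFormatCtx->streams[VideoIndex]->avg_frame_rate;
double video_ms = 1000.0 * vfr.den / vfr.num;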
In theory, playing audio and video each at exactly these points keeps them in sync. In practice, both sides go through three steps (decoding, resampling/scaling, rendering), each with its own variable cost, so precise timing is impossible.
This gives rise to three synchronization strategies: sync video to the audio clock, sync audio to the video clock, or sync both to an external clock.
I actually lean toward the theoretical approach: let audio and video each play on their own without interfering with each other. By the nature of A/V playback, human hearing is the more sensitive sense; even a slight stall in audio is audible. Vision is different: persistence of vision masks small errors.
So, following the theoretical scheme, audio playback gets no extra bookkeeping: feed the hardware decoded data as fast as it asks for it. And because an audio frame's decoding timestamp (DTS) and presentation timestamp (PTS) are always identical, decoding and playing in order is sufficient.
Video is different. H.264 streams contain I, P, and B frames, and in streams with B frames the decode order may differ from the display order. So video must first be decoded in decode order, and each picture then displayed at the right moment (the time its PTS maps to) relative to the audio being played. Timing cannot be exact, but a frame arriving a little early or late is nearly invisible, as long as the error stays below the persistence-of-vision threshold and does not accumulate. This is, in effect, using audio as the master clock and syncing video to it (see the sketch below).
As this analysis shows, synchronization is not done once; it happens continuously, moment by moment, until playback ends.
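As a hedged sketch of that video-to-audio idea (the program below does not do this; it paces video with a fixed-interval timer instead): keep an audio clock that the audio path updates as samples are consumed, and before presenting a video frame, wait out the gap between the frame's PTS and the audio clock. audio_clock_s and present_when_due are hypothetical names, and the PTS-to-seconds conversion is explained next:
double audio_clock_s; //seconds of audio actually played; updated by the audio path

void present_when_due(AVFrame *frame, AVRational time_base)
{
double pts_s = frame->pts * av_q2d(time_base); //this frame's presentation time in seconds
double diff = pts_s - audio_clock_s; //positive: the frame is early
if(diff > 0)
usleep((useconds_t)(diff * 1e6)); //wait until the frame is due
//a late frame is shown immediately (or dropped if very late), so errors do not accumulate
//...hand the frame to the renderer here...
}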
About DTS and PTS:
Both are expressed in the stream's time base, the time_base member of AVStream, which is an AVRational; the actual time is the timestamp multiplied by the unit of time that time_base represents. How do we obtain the audio and video DTS and PTS?
Reading a packet with av_read_frame(pFormatCtx, Packet) fills an AVPacket, which carries each frame's DTS and PTS.
Because audio plays strictly in order, audio DTS and PTS are identical.
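A minimal conversion sketch, using FFmpeg's av_q2d() to turn the AVRational time base into a double:
AVStream *st = pFormatCtx->streams[AudioIndex];
//timestamp ticks × time base = seconds
double dts_s = Packet->dts * av_q2d(st->time_base);
double pts_s = Packet->pts * av_q2d(st->time_base);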
printf("stream audio time_base.num:%d, time_base.den:%d, avg_frame_rate.num:%d, avg_frame_rate.den:%d, duration:%ld\n",
pFormatCtx->streams[AudioIndex]->time_base.num,
pFormatCtx->streams[AudioIndex]->time_base.den,
pFormatCtx->streams[AudioIndex]->avg_frame_rate.num,
pFormatCtx->streams[AudioIndex]->avg_frame_rate.den,
pFormatCtx->streams[AudioIndex]->duration);
//输出:stream audio time_base.num:1, time_base.den:48000, avg_frame_rate.num:0, avg_frame_rate.den:0, duration:40830000
av_read_frame(pFormatCtx, Packet);
avcodec_decode_audio4( pAudioCodecCtx, pAudioFrame,&GotAudioPicture, Packet);
printf("Auduo index:%5d\t pts:%ld\t pts:%ld\t packet size:%d, pFrame->nb_samples:%d\n",
audioCnt, Packet->dts, Packet->pts, Packet->size, pAudioFrame->nb_samples);
//Auduo index: 0 pts:0 pts:0 packet size:847, pFrame->nb_samples:1024
//Auduo index: 1 pts:1024 pts:1024 packet size:846, pFrame->nb_samples:1024
//Auduo index: 2 pts:2048 pts:2048 packet size:846, pFrame->nb_samples:1024
//Auduo index: 3 pts:3072 pts:3072 packet size:847, pFrame->nb_samples:1024
//Auduo index: 4 pts:4096 pts:4096 packet size:846, pFrame->nb_samples:1024
//Auduo index: 5 pts:5120 pts:5120 packet size:846, pFrame->nb_samples:1024
The audio stream's time_base is an AVRational, and the output shows the time unit is 1/48000: DTS × (1/48000) gives the decode timestamp in seconds and PTS × (1/48000) the presentation timestamp. For video, B frames are bi-directionally predicted and depend on frames both before and after themselves, so a stream containing B frames decodes in a different order than it displays, i.e. DTS and PTS differ; in a stream without B frames, DTS and PTS are equal.
printf("stream video time_base.num:%d, time_base.den:%d, avg_frame_rate.num:%d, avg_frame_rate.den:%d, duration:%ld\n",
pFormatCtx->streams[VideoIndex]->time_base.num,
pFormatCtx->streams[VideoIndex]->time_base.den,
pFormatCtx->streams[VideoIndex]->avg_frame_rate.num,
pFormatCtx->streams[VideoIndex]->avg_frame_rate.den,
pFormatCtx->streams[VideoIndex]->duration);
//输出:stream video time_base.num:1, time_base.den:16, avg_frame_rate.num:8, avg_frame_rate.den:1, duration:13610
av_read_frame(pFormatCtx, Packet);
printf("Video index:%5d\t dts:%ld\t, pts:%ld\t packet size:%d\n",
videoCnt, Packet->dts, Packet->pts, Packet->size);
//Video index: 0 dts:-2 , pts:0 packet size:91041
//Video index: 1 dts:0 , pts:8 packet size:191
//Video index: 2 dts:2 , pts:2 packet size:103
//Video index: 3 dts:4 , pts:4 packet size:103
//Video index: 4 dts:6 , pts:6 packet size:103
The video stream's time_base is likewise an AVRational, and the output shows its unit is 1/16: DTS × (1/16) is the decode timestamp and PTS × (1/16) the presentation timestamp. Note how the first two packets carry pts 0 and then pts 8, while pts 2, 4, 6 follow: decode order differs from display order, exactly the B-frame reordering described above. At this point we have gathered all the information synchronized playback needs; how do we actually implement it? Naturally with multiple threads; all of this cannot be done in a single thread.
As the discussion shows, I do not deliberately sync video to audio; each stream plays at its own pace, and in practice that works acceptably. The code follows.
/*
* ffmpeg_sdl2_avpalyer.cpp
*
 * Created on: Apr 4, 2019
 * Author: luke
 * Implements synchronized audio/video playback
*/
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#define __STDC_CONSTANT_MACROS
#ifdef __cplusplus
extern "C"
{
#endif
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libswscale/swscale.h>
#include <libswresample/swresample.h>
#include <libavutil/avutil.h>
#include <libavutil/imgutils.h>
#include <libavutil/opt.h>
#include <libavutil/time.h>
#include <libavutil/samplefmt.h>
#include <libavutil/pixfmt.h>
#include <SDL2/SDL.h>
#include <SDL2/SDL_thread.h>
#include <pthread.h>
#include <semaphore.h>
#ifdef __cplusplus
};
#endif
#define MAX_AUDIO_FRAME_SIZE 192000 // 1 second of 48khz 32bit audio
#define PACKET_ARRAY_SIZE (60)
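//Single-producer/single-consumer packet ring: main() writes packets at wIndex,
//a decoder thread consumes them at rIndex; state == 1 marks a slot whose
//packet has not been consumed yet.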
typedef struct __PacketStruct
{
AVPacket Packet;
int64_t dts;
int64_t pts;
int state;
}PacketStruct;
typedef struct
{
unsigned int rIndex;
unsigned int wIndex;
PacketStruct PacketArray[PACKET_ARRAY_SIZE];
}PacketArrayStruct;
typedef struct __AudioCtrlStruct
{
AVFormatContext *pFormatCtx;
AVStream *pStream;
AVCodec *pCodec;
AVCodecContext *pCodecCtx;
SwrContext *pConvertCtx;
Uint8 *audio_chunk;
Sint32 audio_len;
Uint8 *audio_pos;
int AudioIndex;
int AudioCnt;
uint64_t AudioOutChannelLayout;
int out_nb_samples; //nb_samples: AAC-1024 MP3-1152
AVSampleFormat out_sample_fmt;
int out_sample_rate;
int out_channels;
int out_buffer_size;
unsigned char* pAudioOutBuffer;
sem_t frame_put;
sem_t frame_get;
PacketArrayStruct Audio;
}AudioCtrlStruct;
typedef struct __VideoCtrlStruct
{
AVFormatContext *pFormatCtx;
AVStream *pStream;
AVCodec *pCodec;
AVCodecContext *pCodecCtx;
SwsContext *pConvertCtx;
AVFrame *pVideoFrame, *pFrameYUV;
unsigned char *pVideoOutBuffer;
int VideoIndex;
int VideoCnt;
int RefreshTime;
int screen_w,screen_h;
SDL_Window *screen;
SDL_Renderer* sdlRenderer;
SDL_Texture* sdlTexture;
SDL_Rect sdlRect;
SDL_Thread *video_tid;
sem_t frame_put;
sem_t video_refresh;
PacketArrayStruct Video;
}VideoCtrlStruct;
//Refresh Event
#define SFM_REFRESH_VIDEO_EVENT (SDL_USEREVENT + 1)
#define SFM_REFRESH_AUDIO_EVENT (SDL_USEREVENT + 2)
#define SFM_BREAK_EVENT (SDL_USEREVENT + 3)
int thread_exit = 0;
int thread_pause = 0;
VideoCtrlStruct VideoCtrl;
AudioCtrlStruct AudioCtrl;
//video time_base.num:1, time_base.den:16, avg_frame_rate.num:8, avg_frame_rate.den:1
//audio time_base.num:1, time_base.den:48000, avg_frame_rate.num:0, avg_frame_rate.den:0
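//The ring is full when the next write slot is still occupied,
//and empty when the next read slot holds no packet.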
int IsPacketArrayFull(PacketArrayStruct* p)
{
int i = 0;
i = p->wIndex % PACKET_ARRAY_SIZE;
if(p->PacketArray[i].state != 0) return 1;
return 0;
}
int IsPacketArrayEmpty(PacketArrayStruct* p)
{
int i = 0;
i = p->rIndex % PACKET_ARRAY_SIZE;
if(p->PacketArray[i].state == 0) return 1;
return 0;
}
int SDL_event_thread(void *opaque)
{
SDL_Event event;
while(1)
{
SDL_WaitEvent(&event);
if(event.type == SDL_KEYDOWN)
{
//Pause
if(event.key.keysym.sym == SDLK_SPACE)
{
thread_pause = !thread_pause;
printf("video got pause event!\n");
}
}
else if(event.type == SDL_QUIT)
{
thread_exit = 1;
printf("------------------------------>video got SDL_QUIT event!\n");
break;
}
else if(event.type == SFM_BREAK_EVENT)
{
break;
}
}
printf("---------> SDL_event_thread end !!!! \n");
return 0;
}
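//Pacing thread: releases video_refresh once per frame interval (RefreshTime
//microseconds), so thread_video presents frames at the stream's nominal rate.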
int video_refresh_thread(void *opaque)
{
while (1)
{
if(thread_exit) break;
if(thread_pause)
{
SDL_Delay(40);
continue;
}
usleep(VideoCtrl.RefreshTime);
sem_post(&VideoCtrl.video_refresh);
}
printf("---------> video_refresh_thread end !!!! \n");
return 0;
}
static void *thread_audio(void *arg)
{
AVCodecContext *pAudioCodecCtx;
AVFrame *pAudioFrame;
unsigned char *pAudioOutBuffer;
AVPacket *Packet;
int i, ret, GotAudioPicture;
struct SwrContext *AudioConvertCtx;
AudioCtrlStruct* AudioCtrl = (AudioCtrlStruct*)arg;
pAudioCodecCtx = AudioCtrl->pCodecCtx;
pAudioOutBuffer = AudioCtrl->pAudioOutBuffer;
AudioConvertCtx = AudioCtrl->pConvertCtx;
printf("---------> thread_audio start !!!! \n");
pAudioFrame = av_frame_alloc();
while(1)
{
if(thread_exit) break;
if(thread_pause)
{
usleep(10000);
continue;
}
//sem_wait(&AudioCtrl->frame_put);
if(IsPacketArrayEmpty(&AudioCtrl->Audio))
{
SDL_Delay(1);
printf("---------> thread_audio empty !!!! \n");
continue;
}
i = AudioCtrl->Audio.rIndex;
Packet = &AudioCtrl->Audio.PacketArray[i].Packet;
if(Packet->stream_index == AudioCtrl->AudioIndex)
{
ret = avcodec_decode_audio4( pAudioCodecCtx, pAudioFrame, &GotAudioPicture, Packet);
if ( ret < 0 )
{
printf("Error in decoding audio frame.\n");
return 0;
}
if ( GotAudioPicture > 0 )
{
swr_convert(AudioConvertCtx,&pAudioOutBuffer, MAX_AUDIO_FRAME_SIZE,
(const uint8_t **)pAudioFrame->data , pAudioFrame->nb_samples);
//printf("Auduo index:%5d\t pts:%ld\t packet size:%d, pFrame->nb_samples:%d\n",
// AudioCtrl->AudioCnt, Packet->pts, Packet->size, pAudioFrame->nb_samples);
AudioCtrl->AudioCnt++;
}
while(AudioCtrl->audio_len > 0)//Wait until finish
SDL_Delay(1);
//Set audio buffer (PCM data)
AudioCtrl->audio_chunk = (Uint8 *) pAudioOutBuffer;
AudioCtrl->audio_pos = AudioCtrl->audio_chunk;
AudioCtrl->audio_len = AudioCtrl->out_buffer_size;
//sem_post(&AudioCtrl->frame_get);
av_packet_unref(Packet);
AudioCtrl->Audio.PacketArray[i].state = 0;
i++;
if(i >= PACKET_ARRAY_SIZE) i = 0;
AudioCtrl->Audio.rIndex = i;
}
}
printf("---------> thread_audio end !!!! \n");
return 0;
}
static void *thread_video(void *arg)
{
AVCodecContext *pVideoCodecCtx;
AVFrame *pVideoFrame,*pFrameYUV;
AVPacket *Packet;
int i, ret, GotPicture;
struct SwsContext *VideoConvertCtx;
VideoCtrlStruct* VideoCtrl = (VideoCtrlStruct*)arg;
pVideoCodecCtx = VideoCtrl->pCodecCtx;
VideoConvertCtx = VideoCtrl->pConvertCtx;
pVideoFrame = VideoCtrl->pVideoFrame;
pFrameYUV = VideoCtrl->pFrameYUV;
printf("---------> thread_video start !!!! \n");
while(1)
{
if(thread_exit) break;
//sem_wait(&VideoCtrl->frame_put);
if(IsPacketArrayEmpty(&VideoCtrl->Video))
{
SDL_Delay(1);
continue;
}
i = VideoCtrl->Video.rIndex;
Packet = &VideoCtrl->Video.PacketArray[i].Packet;
if(Packet->stream_index == VideoCtrl->VideoIndex)
{
ret = avcodec_decode_video2(pVideoCodecCtx, pVideoFrame, &GotPicture, Packet);
if(ret < 0)
{
printf("Video Decode Error.\n");
return 0;
}
//printf("Video index:%5d\t dts:%ld\t, pts:%ld\t packet size:%d, GotVideoPicture:%d\n",
// VideoCtrl->VideoCnt, Packet->dts, Packet->pts, Packet->size, GotPicture);
// printf("Video index:%5d\t pFrame->pkt_dts:%ld, pFrame->pkt_pts:%ld, pFrame->pts:%ld, pFrame->pict_type:%d, "
// "pFrame->best_effort_timestamp:%ld, pFrame->pkt_pos:%ld, pVideoFrame->pkt_duration:%ld\n",
// VideoCtrl->VideoCnt, pVideoFrame->pkt_dts, pVideoFrame->pkt_pts, pVideoFrame->pts,
// pVideoFrame->pict_type, pVideoFrame->best_effort_timestamp,
// pVideoFrame->pkt_pos, pVideoFrame->pkt_duration);
VideoCtrl->VideoCnt++;
if(GotPicture)
{
sws_scale(VideoConvertCtx, (const unsigned char* const*)pVideoFrame->data,
pVideoFrame->linesize, 0, pVideoCodecCtx->height, pFrameYUV->data, pFrameYUV->linesize);
sem_wait(&VideoCtrl->video_refresh);
//SDL---------------------------
SDL_UpdateTexture( VideoCtrl->sdlTexture, NULL, pFrameYUV->data[0], pFrameYUV->linesize[0] );
SDL_RenderClear( VideoCtrl->sdlRenderer );
//SDL_RenderCopy( sdlRenderer, sdlTexture, &sdlRect, &sdlRect );
SDL_RenderCopy( VideoCtrl->sdlRenderer, VideoCtrl->sdlTexture, NULL, NULL);
SDL_RenderPresent( VideoCtrl->sdlRenderer );
//SDL End-----------------------
}
av_packet_unref(Packet);
VideoCtrl->Video.PacketArray[i].state = 0;
i++;
if(i >= PACKET_ARRAY_SIZE) i = 0;
VideoCtrl->Video.rIndex = i;
}
}
printf("---------> thread_video end !!!! \n");
return 0;
}
/* The audio function callback takes the following parameters:
* stream: A pointer to the audio buffer to be filled
* len: The length (in bytes) of the audio buffer
*/
void fill_audio(void *udata,Uint8 *stream,int len)
{
AudioCtrlStruct* AudioCtrl = (AudioCtrlStruct*)udata;
//SDL 2.0
SDL_memset(stream, 0, len);
if(AudioCtrl->audio_len == 0) return;
len=(len > AudioCtrl->audio_len ? AudioCtrl->audio_len : len); /* Mix as much data as possible */
SDL_MixAudio(stream, AudioCtrl->audio_pos, len, SDL_MIX_MAXVOLUME);
AudioCtrl->audio_pos += len;
AudioCtrl->audio_len -= len;
}
int main(int argc, char* argv[])
{
AVFormatContext *pFormatCtx;
AVCodecContext *pVideoCodecCtx, *pAudioCodecCtx;
AVCodec *pVideoCodec, *pAudioCodec;
AVPacket *Packet;
unsigned char *pVideoOutBuffer, *pAudioOutBuffer;
int ret;
unsigned int i;
pthread_t audio_tid, video_tid;
uint64_t AudioOutChannelLayout;
int out_nb_samples; //nb_samples: AAC-1024 MP3-1152
AVSampleFormat out_sample_fmt;
int out_sample_rate;
int out_channels;
int out_buffer_size;
struct SwsContext *VideoConvertCtx;
struct SwrContext *AudioConvertCtx;
int VideoIndex, VideoCnt;
int AudioIndex, AudioCnt;
memset(&AudioCtrl, 0, sizeof(AudioCtrlStruct));
memset(&VideoCtrl, 0, sizeof(VideoCtrlStruct));
if(argc < 2)
{
printf("Usage: %s <mediafile>\n", argv[0]);
return -1;
}
char *filepath = argv[1];
sem_init(&VideoCtrl.video_refresh, 0, 0);
sem_init(&VideoCtrl.frame_put, 0, 0);
sem_init(&AudioCtrl.frame_put, 0, 0);
thread_exit = 0;
thread_pause = 0;
av_register_all();
avformat_network_init();
pFormatCtx = avformat_alloc_context();
if(avformat_open_input(&pFormatCtx, filepath, NULL, NULL) !=0 )
{
printf("Couldn't open input stream.\n");
return -1;
}
if(avformat_find_stream_info(pFormatCtx,NULL) < 0)
{
printf("Couldn't find stream information.\n");
return -1;
}
VideoIndex = -1;
AudioIndex = -1;
for(i = 0; i < pFormatCtx->nb_streams; i++)
{
if(pFormatCtx->streams[i]->codec->codec_type==AVMEDIA_TYPE_VIDEO)
{
VideoIndex = i;
//print the video stream's info
printf("video time_base.num:%d, time_base.den:%d, avg_frame_rate.num:%d, avg_frame_rate.den:%d\n",
pFormatCtx->streams[VideoIndex]->time_base.num,
pFormatCtx->streams[VideoIndex]->time_base.den,
pFormatCtx->streams[VideoIndex]->avg_frame_rate.num,
pFormatCtx->streams[VideoIndex]->avg_frame_rate.den);
}
if(pFormatCtx->streams[i]->codec->codec_type==AVMEDIA_TYPE_AUDIO)
{
AudioIndex = i;
//print the audio stream's info
printf("audio time_base.num:%d, time_base.den:%d, avg_frame_rate.num:%d, avg_frame_rate.den:%d\n",
pFormatCtx->streams[AudioIndex]->time_base.num,
pFormatCtx->streams[AudioIndex]->time_base.den,
pFormatCtx->streams[AudioIndex]->avg_frame_rate.num,
pFormatCtx->streams[AudioIndex]->avg_frame_rate.den);
}
}
if(VideoIndex != -1)
{ //set up the video decoding context
pVideoCodecCtx = pFormatCtx->streams[VideoIndex]->codec;
pVideoCodec = avcodec_find_decoder(pVideoCodecCtx->codec_id);
if(pVideoCodec == NULL)
{
printf("Video Codec not found.\n");
return -1;
}
if(avcodec_open2(pVideoCodecCtx, pVideoCodec,NULL) < 0)
{
printf("Could not open video codec.\n");
return -1;
}
// prepare video
VideoCtrl.pVideoFrame = av_frame_alloc();
VideoCtrl.pFrameYUV = av_frame_alloc();
ret = av_image_get_buffer_size(AV_PIX_FMT_YUV420P, pVideoCodecCtx->width, pVideoCodecCtx->height, 1);
pVideoOutBuffer = (unsigned char *)av_malloc(ret);
av_image_fill_arrays(VideoCtrl.pFrameYUV->data, VideoCtrl.pFrameYUV->linesize, pVideoOutBuffer,
AV_PIX_FMT_YUV420P, pVideoCodecCtx->width, pVideoCodecCtx->height, 1);
VideoConvertCtx = sws_getContext(pVideoCodecCtx->width, pVideoCodecCtx->height, pVideoCodecCtx->pix_fmt,
pVideoCodecCtx->width, pVideoCodecCtx->height,
AV_PIX_FMT_YUV420P, SWS_BICUBIC, NULL, NULL, NULL);
VideoCtrl.pFormatCtx = pFormatCtx;
VideoCtrl.pStream = pFormatCtx->streams[VideoIndex];
VideoCtrl.pCodec = pVideoCodec;
VideoCtrl.pCodecCtx = pFormatCtx->streams[VideoIndex]->codec;
VideoCtrl.pConvertCtx = VideoConvertCtx;
VideoCtrl.pVideoOutBuffer = pVideoOutBuffer;
VideoCtrl.VideoIndex = VideoIndex;
if(pFormatCtx->streams[VideoIndex]->avg_frame_rate.num == 0 ||
pFormatCtx->streams[VideoIndex]->avg_frame_rate.den == 0)
{
VideoCtrl.RefreshTime = 40000;
}
else
{ //compute the duration of one video frame; this interval paces the video refresh signal
VideoCtrl.RefreshTime = 1000000 * pFormatCtx->streams[VideoIndex]->avg_frame_rate.den;
VideoCtrl.RefreshTime /= pFormatCtx->streams[VideoIndex]->avg_frame_rate.num;
}
printf("VideoCtrl.RefreshTime:%d\n", VideoCtrl.RefreshTime);
}
else
{
printf("Didn't find a video stream.\n");
}
if(AudioIndex != -1)
{ //set up the audio decoding context
pAudioCodecCtx = pFormatCtx->streams[AudioIndex]->codec;
pAudioCodec = avcodec_find_decoder(pAudioCodecCtx->codec_id);
if(pAudioCodec == NULL)
{
printf("Audio Codec not found.\n");
return -1;
}
if(avcodec_open2(pAudioCodecCtx, pAudioCodec,NULL) < 0)
{
printf("Could not open audio codec.\n");
return -1;
}
// prepare Out Audio Param
AudioOutChannelLayout = AV_CH_LAYOUT_STEREO;
out_nb_samples = pAudioCodecCtx->frame_size; //nb_samples: AAC-1024 MP3-1152
out_sample_fmt = AV_SAMPLE_FMT_S16;
out_sample_rate = pAudioCodecCtx->sample_rate;
// Must use pAudioCodecCtx->sample_rate here; a different value would under- or over-sample the audio and produce audible artifacts during playback
out_channels = av_get_channel_layout_nb_channels(AudioOutChannelLayout);
out_buffer_size = av_samples_get_buffer_size(NULL,out_channels ,out_nb_samples,out_sample_fmt, 1);
//mp3:out_nb_samples:1152, out_channels:2, out_buffer_size:4608, pCodecCtx->channels:2
//aac:out_nb_samples:1024, out_channels:2, out_buffer_size:4096, pCodecCtx->channels:2
printf("out_nb_samples:%d, out_channels:%d, out_buffer_size:%d, pCodecCtx->channels:%d\n",
out_nb_samples, out_channels, out_buffer_size, pAudioCodecCtx->channels);
pAudioOutBuffer = (uint8_t *)av_malloc(MAX_AUDIO_FRAME_SIZE*2);
//FIX:Some Codec's Context Information is missing
int64_t in_channel_layout = av_get_default_channel_layout(pAudioCodecCtx->channels);
//Swr
AudioConvertCtx = swr_alloc();
AudioConvertCtx = swr_alloc_set_opts(AudioConvertCtx, AudioOutChannelLayout,
out_sample_fmt, out_sample_rate,
in_channel_layout, pAudioCodecCtx->sample_fmt ,
pAudioCodecCtx->sample_rate, 0, NULL);
swr_init(AudioConvertCtx);
AudioCtrl.pFormatCtx = pFormatCtx;
AudioCtrl.pStream = pFormatCtx->streams[AudioIndex];
AudioCtrl.pCodec = pAudioCodec;
AudioCtrl.pCodecCtx = pFormatCtx->streams[AudioIndex]->codec;
AudioCtrl.pConvertCtx = AudioConvertCtx;
AudioCtrl.AudioOutChannelLayout = AudioOutChannelLayout;
AudioCtrl.out_nb_samples = out_nb_samples;
AudioCtrl.out_sample_fmt = out_sample_fmt;
AudioCtrl.out_sample_rate = out_sample_rate;
AudioCtrl.out_channels = out_channels;
AudioCtrl.out_buffer_size = out_buffer_size;
AudioCtrl.pAudioOutBuffer = pAudioOutBuffer;
AudioCtrl.AudioIndex = AudioIndex;
}
else
{
printf("Didn't find a audio stream.\n");
}
//Output Info-----------------------------
printf("---------------- File Information ---------------\n");
av_dump_format(pFormatCtx, 0, filepath, 0);
printf("-------------- File Information end -------------\n");
if(SDL_Init(SDL_INIT_VIDEO | SDL_INIT_AUDIO | SDL_INIT_TIMER))
{
printf( "Could not initialize SDL - %s\n", SDL_GetError());
return -1;
}
if(VideoIndex != -1)
{
//SDL 2.0 Support for multiple windows
//SDL_VideoSpec
VideoCtrl.screen_w = pVideoCodecCtx->width;
VideoCtrl.screen_h = pVideoCodecCtx->height;
VideoCtrl.screen = SDL_CreateWindow("Simplest ffmpeg player's Window", SDL_WINDOWPOS_UNDEFINED,
SDL_WINDOWPOS_UNDEFINED, VideoCtrl.screen_w, VideoCtrl.screen_h, SDL_WINDOW_OPENGL);
if(!VideoCtrl.screen)
{
printf("SDL: could not create window - exiting:%s\n",SDL_GetError());
return -1;
}
VideoCtrl.sdlRenderer = SDL_CreateRenderer(VideoCtrl.screen, -1, 0);
//IYUV: Y + U + V (3 planes)
//YV12: Y + V + U (3 planes)
VideoCtrl.sdlTexture = SDL_CreateTexture(VideoCtrl.sdlRenderer, SDL_PIXELFORMAT_IYUV, SDL_TEXTUREACCESS_STREAMING,
pVideoCodecCtx->width, pVideoCodecCtx->height);
VideoCtrl.sdlRect.x = 0;
VideoCtrl.sdlRect.y = 0;
VideoCtrl.sdlRect.w = VideoCtrl.screen_w;
VideoCtrl.sdlRect.h = VideoCtrl.screen_h;
VideoCtrl.video_tid = SDL_CreateThread(video_refresh_thread, NULL, NULL);
ret = pthread_create(&video_tid, NULL, thread_video, &VideoCtrl);
if (ret)
{
printf("create thr_rvs video thread failed, error = %d \n", ret);
return -1;
}
}
if(AudioIndex != -1)
{
//SDL_AudioSpec
SDL_AudioSpec AudioSpec;
AudioSpec.freq = out_sample_rate;
AudioSpec.format = AUDIO_S16SYS;
AudioSpec.channels = out_channels;
AudioSpec.silence = 0;
AudioSpec.samples = out_nb_samples;
AudioSpec.callback = fill_audio;
AudioSpec.userdata = (void*)&AudioCtrl;
if (SDL_OpenAudio(&AudioSpec, NULL) < 0)
{
printf("can't open audio.\n");
return -1;
}
ret = pthread_create(&audio_tid, NULL, thread_audio, &AudioCtrl);
if (ret)
{
printf("create thr_rvs video thread failed, error = %d \n", ret);
return -1;
}
SDL_PauseAudio(0);
}
SDL_Thread *event_tid;
event_tid = SDL_CreateThread(SDL_event_thread, NULL, NULL);
VideoCnt = 0;
AudioCnt = 0;
Packet = (AVPacket *)av_malloc(sizeof(AVPacket));
av_init_packet(Packet);
while(1)
{
if(thread_exit) break;
if(av_read_frame(pFormatCtx, Packet) < 0)
{ //end of file reached: exit automatically and send a quit signal to the SDL event thread
thread_exit = 1;
SDL_Event event;
event.type = SFM_BREAK_EVENT;
SDL_PushEvent(&event);
printf("---------> av_read_frame < 0, thread_exit = 1 !!!\n");
break;
}
if(Packet->stream_index == VideoIndex)
{
if(VideoCtrl.Video.wIndex >= PACKET_ARRAY_SIZE)
{
VideoCtrl.Video.wIndex = 0;
}
while(IsPacketArrayFull(&VideoCtrl.Video))
{
usleep(5000);
//printf("---------> VideoCtrl.Video.PacketArray FULL !!!\n");
}
i = VideoCtrl.Video.wIndex;
VideoCtrl.Video.PacketArray[i].Packet = *Packet;
VideoCtrl.Video.PacketArray[i].dts = Packet->dts;
VideoCtrl.Video.PacketArray[i].pts = Packet->pts;
VideoCtrl.Video.PacketArray[i].state = 1;
VideoCtrl.Video.wIndex++;
//printf("VideoCtrl.frame_put, VideoCnt:%d\n", VideoCnt++);
//sem_post(&VideoCtrl.frame_put);
}
if(Packet->stream_index == AudioIndex)
{
if(AudioCtrl.Audio.wIndex >= PACKET_ARRAY_SIZE)
{
AudioCtrl.Audio.wIndex = 0;
}
while(IsPacketArrayFull(&AudioCtrl.Audio))
{
usleep(5000);
//printf("---------> AudioCtrl.Audio.PacketArray FULL !!!\n");
}
i = AudioCtrl.Audio.wIndex;
AudioCtrl.Audio.PacketArray[i].Packet = *Packet;
AudioCtrl.Audio.PacketArray[i].dts = Packet->dts;
AudioCtrl.Audio.PacketArray[i].pts = Packet->pts;
AudioCtrl.Audio.PacketArray[i].state = 1;
AudioCtrl.Audio.wIndex++;
//printf("AudioCtrl.frame_put, AudioCnt:%d\n", AudioCnt++);
//sem_post(&AudioCtrl.frame_put);
}
}
SDL_WaitThread(event_tid, NULL);
//printf("--------------------------->main exit 0 !!\n");
SDL_WaitThread(VideoCtrl.video_tid, NULL);
//printf("--------------------------->main exit 1 !!\n");
pthread_join(audio_tid, NULL);
//printf("--------------------------->main exit 2 !!\n");
pthread_join(video_tid, NULL);
//printf("--------------------------->main exit 3 !!\n");
SDL_CloseAudio();//Close SDL
//printf("--------------------------->main exit 4 !!\n");
SDL_Quit();
//printf("--------------------------->main exit 5 !!\n");
swr_free(&AudioConvertCtx);
sws_freeContext(VideoConvertCtx);
//printf("--------------------------->main exit 6 !!\n");
av_free(pVideoOutBuffer);
avcodec_close(pVideoCodecCtx);
//printf("--------------------------->main exit 7 !!\n");
av_free(pAudioOutBuffer);
avcodec_close(pAudioCodecCtx);
avformat_close_input(&pFormatCtx);
printf("--------------------------->main exit 8 !!\n");
return 0;
}