Entry point: incoming_video_stream.cc. Rendering runs on a dedicated thread.
On the receive side, incoming video is assembled into frames, decoded, and then pushed into the render queue.
The render thread pops frames from that queue and renders them.
Render thread entry point:
IncomingVideoStream::IncomingVideoStreamThreadFun
{
  IncomingVideoStream::IncomingVideoStreamProcess()
  {
    frame_to_render = render_buffers_->FrameToRender(); // fetch the frame to render; render_buffers_ holds a list of frames waiting to be rendered
    wait_time = render_buffers_->TimeToNextFrameRelease(); // time until the next frame should be released (examined below)
    deliver_buffer_event_->StartTimer(false, wait_time); // arm the timer with the maximum wait for the next frame
    VideoReceiveStream::OnFrame(const VideoFrame& video_frame) // hand the frame to the renderer
    {
      D3dRenderer::OnFrame(const webrtc::VideoFrame& frame) // perform the actual rendering
    }
  }
}
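Putting the loop together end to end (a minimal sketch reconstructed from the call stack above, not verbatim WebRTC code; member names such as external_callback_ are assumptions):

bool IncomingVideoStream::IncomingVideoStreamProcess()
{
  // Block until the timer fires or a new frame is queued.
  if (kEventError != deliver_buffer_event_->Wait(kEventMaxWaitTimeMs))
  {
    // May be empty if the head frame's release time has not arrived yet.
    rtc::Optional<VideoFrame> frame_to_render = render_buffers_->FrameToRender();
    // Re-arm the timer so the thread wakes up exactly when the next frame is due.
    const uint64_t wait_time = render_buffers_->TimeToNextFrameRelease();
    deliver_buffer_event_->StartTimer(false, wait_time);
    if (frame_to_render)
    {
      external_callback_->OnFrame(*frame_to_render); // ends up in VideoReceiveStream::OnFrame
    }
  }
  return true;
}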
That is the whole call stack, but how does it decide when to render? Trace into
frame_to_render = render_buffers_->FrameToRender();
rtc::Optional<VideoFrame> VideoRenderFrames::FrameToRender()
{
  rtc::Optional<VideoFrame> render_frame;
  // Pop every frame that is already due; only the newest one is kept.
  while (!incoming_frames_.empty() && TimeToNextFrameRelease() <= 0)
  {
    render_frame = rtc::Optional<VideoFrame>(incoming_frames_.front());
    incoming_frames_.pop_front();
  }
  return render_frame;
}
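Note the design choice in that while loop: if several frames are already overdue, every overdue frame except the newest is dropped, so a slow renderer skips frames instead of accumulating latency. The same policy as a standalone sketch (hypothetical simplified types, compilable on its own):

#include <cstdint>
#include <list>
#include <optional>

// Stand-in for the render queue: each entry is that frame's release time in ms.
std::optional<int64_t> PopDueFrame(std::list<int64_t>& queue, int64_t now_ms)
{
  std::optional<int64_t> frame;
  // Keep overwriting 'frame' while the head of the queue is already due,
  // so only the most recent due frame survives; older ones are discarded.
  while (!queue.empty() && queue.front() <= now_ms)
  {
    frame = queue.front();
    queue.pop_front();
  }
  return frame;
}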
So a frame is rendered once the queue is non-empty and its render time has arrived.
How is "the render time has arrived" determined?
uint32_t VideoRenderFrames::TimeToNextFrameRelease()
{
  if (incoming_frames_.empty())
  {
    return kEventMaxWaitTimeMs;
  }
  // Release time of the head frame, minus the renderer's own delay, minus now.
  const int64_t time_to_release = incoming_frames_.front().render_time_ms() -
                                  render_delay_ms_ -
                                  rtc::TimeMillis();
  return time_to_release < 0 ? 0u : static_cast<uint32_t>(time_to_release);
}
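A quick worked example with made-up numbers: if the head frame's render_time_ms() is 10000, render_delay_ms_ is 10, and rtc::TimeMillis() currently returns 9950, the function returns 10000 - 10 - 9950 = 40, so the render thread waits up to 40 ms before releasing the frame. If now were already 10000, the result would clamp to 0 and the frame would be released immediately.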
Two questions:
1. How is time_to_release computed? In particular, how is render_time_ms() derived?
2. render_delay_ms_ is set from the config: render_delay_ms_(EnsureValidRenderDelay(render_delay_ms)), and its default in video_receive_stream.h is int render_delay_ms = 10;
Question 2 answers itself, so what remains is: where does render_time_ms come from?
The decode thread entry point is in video_receive_stream.cc:
VideoReceiveStream::DecodeThreadFunction(void* ptr)
{
  VideoReceiveStream::Decode()
  {
    VideoReceiver::Decode(uint16_t maxWaitTimeMs) // parameter is the maximum wait time
    {
      VCMEncodedFrame* frame = _receiver.FrameForDecoding(maxWaitTimeMs, prefer_late_decoding); // fetch the video frame to decode
      {
        // Fetches the frame to decode and computes all the timing info; this is the key step.
        VCMReceiver::FrameForDecoding(uint16_t max_wait_time_ms, bool prefer_late_decoding)
      }
      VideoReceiver::Decode(const VCMEncodedFrame& frame) // decode the frame
      {
        VCMDecodedFrameCallback::Map(uint32_t timestamp, VCMFrameInformation* frameInfo) // record per-frame info: timestamp, render time
        VCMGenericDecoder::Decode(const VCMEncodedFrame& frame, int64_t nowMs) // perform the decode
        { // invokes the actual decoder
        }
        if (!frame.Complete() || frame.MissingFrame())
        {
          request_key_frame = true; // if the frame is incomplete or references a missing frame, request a key frame
        }
      }
    }
  }
}
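For orientation, the outer decode loop is roughly this shape (a sketch under the assumption that the structure matches the call stack above; the 200 ms wait constant is illustrative, not quoted from the source):

void VideoReceiveStream::DecodeThreadFunction(void* ptr)
{
  // Loop until the stream is shut down, decoding one frame per iteration.
  while (static_cast<VideoReceiveStream*>(ptr)->Decode())
  {
  }
}

bool VideoReceiveStream::Decode()
{
  static const int kMaxDecodeWaitTimeMs = 200; // illustrative value
  // Wait up to kMaxDecodeWaitTimeMs for a decodable frame, then decode it.
  video_receiver_.Decode(kMaxDecodeWaitTimeMs);
  return true;
}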
The key is this function in receiver.cc; let's see how it computes the render time.
Pseudocode:
VCMEncodedFrame* VCMReceiver::FrameForDecoding(uint16_t max_wait_time_ms,
                                               bool prefer_late_decoding)
{
  // Take a fully received frame from the decodable queue; we need its frame_timestamp.
  VCMEncodedFrame* found_frame = jitter_buffer_.NextCompleteFrame(max_wait_time_ms);
  if (found_frame)
  {
    frame_timestamp = found_frame->TimeStamp();
  }
  else
  {
    // If no complete frame is ready, fall back to a frame for which at least
    // one packet has already arrived (it may be the next frame to complete).
    if (!jitter_buffer_.NextMaybeIncompleteTimestamp(&frame_timestamp))
      return nullptr;
  }
  // Compute the render time.
  render_time_ms = timing_->RenderTimeMs(frame_timestamp, now_ms);
  {
    VCMTiming::RenderTimeMs
    {
      VCMTiming::RenderTimeMsInternal
      {
        // Derives the render time from the RTP timestamp, using relative
        // timestamps and a smoothing (extrapolation) algorithm.
        TimestampExtrapolator::ExtrapolateLocalTime(uint32_t timestamp90khz)
      }
    }
  }
  return frame;
}
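Before diving into the extrapolator, the shape of VCMTiming::RenderTimeMsInternal is worth sketching (paraphrased from memory of timing.cc, so treat member names and details as approximate): the render time is the extrapolated local capture time of the frame plus the current target delay, floored at the configured minimum playout delay.

int64_t VCMTiming::RenderTimeMsInternal(uint32_t frame_timestamp, int64_t now_ms) const
{
  // Map the 90 kHz RTP timestamp to an estimated local capture time.
  int64_t estimated_complete_time_ms = ts_extrapolator_->ExtrapolateLocalTime(frame_timestamp);
  if (estimated_complete_time_ms == -1)
  {
    estimated_complete_time_ms = now_ms; // extrapolator has no estimate yet
  }
  // Delay the frame by the current target delay (jitter + decode + render),
  // never less than the configured minimum playout delay.
  const int actual_delay = std::max(current_delay_ms_, min_playout_delay_ms_);
  return estimated_complete_time_ms + actual_delay;
}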
The smoothing algorithm behind the render time is TimestampExtrapolator::ExtrapolateLocalTime, in timestamp_extrapolator.cc.
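As a preview, the extrapolator fits a linear model from the unwrapped 90 kHz RTP timestamp to local wall-clock time, and refines the slope and offset recursively (Kalman-filter style) as frames arrive. Conceptually (a deliberately simplified model of the prediction step, not the real filter code; w0 is ticks per ms, nominally 90, and w1 is an offset in ticks):

int64_t ExtrapolateLocalTimeMs(int64_t start_ms, int64_t unwrapped_ts90khz,
                               int64_t first_ts90khz, double w0, double w1)
{
  // Predict: local_time = start + (elapsed_ticks - offset) / ticks_per_ms.
  const double ticks = static_cast<double>(unwrapped_ts90khz - first_ts90khz);
  return start_ms + static_cast<int64_t>((ticks - w1) / w0 + 0.5);
}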