Inserting Image Frames into WebRTC Video Stream Rendering

Rendering a custom image frame inside the WebRTC video rendering pipeline

WebRTC revision 8146

In early 2015, Diveinedu Education (戴维营教育) was building an audio/video calling project on top of the WebRTC framework. On a whim, I wanted to render a watermark image over the local video during a WebRTC video chat, while changing as little code as possible. In the end I never implemented true blending of the image into the video frames; I only tested rendering the data of a hard-coded image file as a frame of its own. This post is a short record of that idle experiment:

Start by converting a JPEG image to raw YUV with the jpeg2yuv command, then turn the YUV binary into a C header file and place it at the path referenced by the include below:

#include "webrtc/modules/video_render/watermark_yuv.h"

Then the only file that needs to change is:
webrtc/modules/video_render/incoming_video_stream.cc
where the video frame rendering function becomes:

int32_t IncomingVideoStream::RenderFrame(const uint32_t stream_id,
                                         I420VideoFrame& video_frame) {
  CriticalSectionScoped csS(&stream_critsect_);
  WEBRTC_TRACE(kTraceStream, kTraceVideoRenderer, module_id_,
               "%s for stream %d, render time: %u", __FUNCTION__, stream_id_,
               video_frame.render_time_ms());

  if (!running_) {
    WEBRTC_TRACE(kTraceStream, kTraceVideoRenderer, module_id_,
                 "%s: Not running", __FUNCTION__);
    return -1;
  }

  // Mirroring is not supported if the frame is backed by a texture.
  if (true == mirror_frames_enabled_ && video_frame.native_handle() == NULL) {
    transformed_video_frame_.CreateEmptyFrame(video_frame.width(),
                                              video_frame.height(),
                                              video_frame.stride(kYPlane),
                                              video_frame.stride(kUPlane),
                                              video_frame.stride(kVPlane));
    if (mirroring_.mirror_x_axis) {
      MirrorI420UpDown(&video_frame, &transformed_video_frame_);
      video_frame.SwapFrame(&transformed_video_frame_);
    }
    if (mirroring_.mirror_y_axis) {
      MirrorI420LeftRight(&video_frame, &transformed_video_frame_);
      video_frame.SwapFrame(&transformed_video_frame_);
    }
  }

  // Rate statistics.
  num_frames_since_last_calculation_++;
  int64_t now_ms = TickTime::MillisecondTimestamp();
  if (now_ms >= last_rate_calculation_time_ms_ + KFrameRatePeriodMs) {
    incoming_rate_ =
        static_cast<uint32_t>(1000 * num_frames_since_last_calculation_ /
                              (now_ms - last_rate_calculation_time_ms_));
    num_frames_since_last_calculation_ = 0;
    last_rate_calculation_time_ms_ = now_ms;
  }

  // Insert frame.
  CriticalSectionScoped csB(&buffer_critsect_);
  if (render_buffers_.AddFrame(&video_frame) == 1)
    deliver_buffer_event_.Set();

#if 0
#else
  // Modification: build a frame from the watermark image data and queue it
  // for rendering. watermark_yuv holds a 480x640 I420 image: the Y plane
  // first, then the U plane, then the V plane.
  const uint32_t renderDelayMs = 50;
  static I420VideoFrame videoFrame0;
  const int width = 480;
  const int height = 640;
  const int half_width = (width + 1) / 2;
  const int stride_y = width;
  const int stride_uv = half_width;
  const uint8_t* buffer_y = watermark_yuv;
  const uint8_t* buffer_u = buffer_y + stride_y * height;
  const uint8_t* buffer_v = buffer_u + stride_uv * ((height + 1) / 2);
  videoFrame0.CreateFrame(width * height, buffer_y,
                          width * height / 2, buffer_u,
                          width * height / 2, buffer_v,
                          width, height, stride_y, stride_uv, stride_uv);
  videoFrame0.set_render_time_ms(TickTime::MillisecondTimestamp() + renderDelayMs);
  videoFrame0.set_ntp_time_ms(video_frame.ntp_time_ms() + renderDelayMs);
  // Show the watermark frame during the first 3 s of every 30 s window.
  if (TickTime::MillisecondTimestamp() % 30000 <= 3000 &&
      render_buffers_.AddFrame(&videoFrame0) >= 1) {
    deliver_buffer_event_.Set();
    // fprintf(stderr, "%s: diveinedu logo\n", __FUNCTION__);
  }
#endif

  return 0;
}

The resulting change to the file, as shown by svn diff:

Index: webrtc/modules/video_render/incoming_video_stream.cc
===================================================================
--- webrtc/modules/video_render/incoming_video_stream.cc	(revision 8146)
+++ webrtc/modules/video_render/incoming_video_stream.cc	(working copy)
@@ -9,6 +9,7 @@
  */
 
 #include "webrtc/modules/video_render/incoming_video_stream.h"
+#include "webrtc/modules/video_render/watermark_yuv.h"
 
 #include <assert.h>
 
@@ -135,6 +136,27 @@
 
   if (render_buffers_.AddFrame(&video_frame) == 1)
     deliver_buffer_event_.Set();
+#if 0
+#else
+  const uint32_t renderDelayMs = 50;
+  static I420VideoFrame videoFrame0;
+  const int width = 480;
+  const int height = 640;
+  const int half_width = (width + 1) / 2;
+  const int stride_y = width;
+  const int stride_uv = half_width;
+  const uint8_t* buffer_y = watermark_yuv;
+  const uint8_t* buffer_u = buffer_y + stride_y * height;
+  const uint8_t* buffer_v = buffer_u + stride_uv * ((height + 1) / 2);
+  videoFrame0.CreateFrame(width*height, buffer_y, width*height/2, buffer_u, width*height/2, buffer_v, width, height, stride_y,
+                          stride_uv, stride_uv);
+  videoFrame0.set_render_time_ms(TickTime::MillisecondTimestamp() + renderDelayMs);
+  videoFrame0.set_ntp_time_ms(video_frame.ntp_time_ms() + renderDelayMs);
+  if (TickTime::MillisecondTimestamp() % 30000 <= 3000 && render_buffers_.AddFrame(&videoFrame0) >= 1) {
+    deliver_buffer_event_.Set();
+// fprintf(stderr,"%s:diveinedu logo\n",__FUNCTION__);
+  }
+#endif
 
   return 0;
 }

Then rebuild the project and regenerate the framework. Once a video call is established, the custom image frame is shown for about 3 s out of every 30 s.
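The 30 s / 3 s cadence comes entirely from the modulo test on the wall-clock timestamp in the patch above. Below is a small sketch of that gate factored into a helper (the function name and default values are my own, not part of the WebRTC tree), in case you want a different period or display duration:

#include <stdint.h>

// Hypothetical helper mirroring the gating used in the patch: returns true
// during the first |show_ms| milliseconds of every |period_ms| window.
static bool ShouldShowWatermark(int64_t now_ms,
                                int64_t period_ms = 30000,
                                int64_t show_ms = 3000) {
  return (now_ms % period_ms) <= show_ms;
}

// Usage at the insertion point in RenderFrame():
//   if (ShouldShowWatermark(TickTime::MillisecondTimestamp()) &&
//       render_buffers_.AddFrame(&videoFrame0) >= 1) {
//     deliver_buffer_event_.Set();
//   }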



By Diveinedu Education (戴维营教育), a Jianshu author
Original article: http://www.jianshu.com/p/c126c4831e8b
Copyright belongs to the author. Please contact the author for permission before reposting, and credit them as a Jianshu author.
