Publishing video over RTMP requires an RTMP server (common commercial ones are FMS and Wowza Media Server; open-source options include CRtmpServer and Red5). Once the raw video is pushed to the server according to the RTMP protocol, the stream is published. To make packaging and publishing easier, I wrapped the process in an RTMPStream class; it currently only supports H.264, and can send either individual H.264 frames or a whole H.264 file. The interface RTMPStream provides is as follows.
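The interface listing itself is not reproduced in this excerpt. Purely as an illustration, a publishing wrapper of this kind might expose methods along these lines (the class and method names below are a hypothetical sketch, not the actual RTMPStream API):

```cpp
#include <string>

// Hypothetical sketch of an RTMP publishing wrapper in the spirit of the
// RTMPStream described above. All names are illustrative only; the stubs
// just track state so the shape of the API is visible.
class RTMPStream {
public:
    // Connect to an RTMP server, e.g. "rtmp://host/live/stream".
    bool Connect(const std::string& url) { m_url = url; return !url.empty(); }
    void Close() { m_url.clear(); }

    // Send one H.264 NAL unit with a timestamp in milliseconds.
    // Stub: only validates the arguments and counts the bytes.
    bool SendH264Frame(const unsigned char* data, int size, unsigned timestampMs) {
        (void)timestampMs;
        if (data == nullptr || size <= 0) return false;
        m_bytesSent += size;
        return true;
    }

    // Read an Annex-B .264 file and push it frame by frame (stubbed).
    bool SendH264File(const std::string& path) { return !path.empty(); }

    long long BytesSent() const { return m_bytesSent; }

private:
    std::string m_url;
    long long m_bytesSent = 0;
};
```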
So far only H.264 has been tested; other video formats remain untested. The player can also open and play local video files directly, but the playback frame rate does not stay in sync with the original video's rate. I have not yet figured out how to handle this; pointers from anyone experienced in this area would be appreciated.
In addition, the open-source library VLC (libvlc) can also be used to build a streaming player. It supports many streaming protocols, such as RTP and RTSP, and someone on CodeProject has already wrapped libvlc in an easier-to-use library, VLCWrapper (http://www.codeproject.com/Articles/38952/VLCWrapper-A-Little-C-wrapper-Around-libvlc). It makes writing a video player quite convenient.
The complete code for CRTSPPlayer follows:
6) OpenH264 is an open-source H.264 encoder and decoder released by Cisco. http://www.oschina.net/p/openh264
7) Multimedia open-source library index:
http://www.oschina.net/project/tag/142/multimedialib
8)live555 change log
http://www.live555.com/liveMedia/public/changelog.txt
The 2014-01-15 build of live555 added H.265 support;
9)live555 rtspperf
RTSPPerf is a tool that uses live555 to test RTSP server performance; it simply receives the video/audio data and throws it away. You can use it as a load-test tool for NVR, IP camera, and DVR RTSP servers.
http://sourceforge.net/p/rtspperf/home/RTSPPerf%20/
10)
[Original] The odm wrapper for Live555
http://m.blog.csdn.net/blog/u011652271/18423295
ODM has recently gained some recognition in the video-surveillance field, and its playback latency is very low. I dug into the code today and found its wrapping of FFMPEG and live555 to be the most elegant I have seen; if you are interested, take a look at the source yourself:
http://sourceforge.net/p/onvifdm/code/HEAD/tree/trunk/odm/odm.player/odm.player.lib/
11) A good RTSP test source (http://m.blog.csdn.net/blog/u011652271/18179935)
12)
http://stackoverflow.com/questions/13863673/how-to-write-a-live555-framedsource-to-allow-me-to-stream-h-264-live
Ok, I finally got some time to spend on this and got it working! I'm sure there are others who will be begging to know how to do it so here it is.
You will need your own FramedSource to take each frame, encode, and prepare it for streaming, I will provide some of the source code for this soon.
Essentially throw your FramedSource into the H264VideoStreamDiscreteFramer, then throw this into the H264RTPSink. Something like this
scheduler = BasicTaskScheduler::createNew();
env = BasicUsageEnvironment::createNew(*scheduler);
framedSource = H264FramedSource::createNew(*env, 0, 0);
h264VideoStreamDiscreteFramer = H264VideoStreamDiscreteFramer::createNew(*env, framedSource);
// initialise the RTP Sink stuff here, look at
// testH264VideoStreamer.cpp to find out how
videoSink->startPlaying(*h264VideoStreamDiscreteFramer, NULL, videoSink);
env->taskScheduler().doEventLoop();
Now in your main render loop, hand over the backbuffer you've saved to system memory to your FramedSource so it can be encoded etc. For more info on how to set up the encoding side, check out this answer: "How does one encode a series of images into H264 using the x264 C API?"
My implementation is still very much in a hacky state and is yet to be optimised at all; my d3d application runs at around 15fps due to the encoding, ouch, so I will have to look into this. But for all intents and purposes this Stack Overflow question is answered, because I was mostly after how to stream it. I hope this helps other people.
As for my FramedSource it looks a little something like this
concurrent_queue<x264_nal_t> m_queue;
SwsContext* convertCtx;
x264_param_t param;
x264_t* encoder;
x264_picture_t pic_in, pic_out;
EventTriggerId H264FramedSource::eventTriggerId = 0;
unsigned H264FramedSource::FrameSize = 0;
unsigned H264FramedSource::referenceCount = 0;
int W = 720;
int H = 960;
H264FramedSource* H264FramedSource::createNew(UsageEnvironment& env,
unsigned preferredFrameSize,
unsigned playTimePerFrame)
{
return new H264FramedSource(env, preferredFrameSize, playTimePerFrame);
}
H264FramedSource::H264FramedSource(UsageEnvironment& env,
unsigned preferredFrameSize,
unsigned playTimePerFrame)
: FramedSource(env),
fPreferredFrameSize(preferredFrameSize),
fPlayTimePerFrame(playTimePerFrame),
fLastPlayTime(0),
fCurIndex(0)
{
if (referenceCount == 0)
{
// Any one-time (per-process) initialisation would go here.
}
++referenceCount;
x264_param_default_preset(&param, "veryfast", "zerolatency");
param.i_threads = 1;
param.i_width = 720;
param.i_height = 960;
param.i_fps_num = 60;
param.i_fps_den = 1;
// Intra refresh:
param.i_keyint_max = 60;
param.b_intra_refresh = 1;
//Rate control:
param.rc.i_rc_method = X264_RC_CRF;
param.rc.f_rf_constant = 25;
param.rc.f_rf_constant_max = 35;
param.i_sps_id = 7;
//For streaming:
param.b_repeat_headers = 1;
param.b_annexb = 1;
x264_param_apply_profile(&param, "baseline");
encoder = x264_encoder_open(&param);
// x264_picture_alloc() re-initialises the struct, so allocate first and
// set the per-frame fields afterwards; the sizes match the 720x960 input.
x264_picture_alloc(&pic_in, X264_CSP_I420, 720, 960);
pic_in.i_type = X264_TYPE_AUTO;
pic_in.i_qpplus1 = 0;
pic_in.img.i_csp = X264_CSP_I420;
pic_in.img.i_plane = 3;
convertCtx = sws_getContext(720, 960, PIX_FMT_RGB24, 720, 960, PIX_FMT_YUV420P, SWS_FAST_BILINEAR, NULL, NULL, NULL);
if (eventTriggerId == 0)
{
eventTriggerId = envir().taskScheduler().createEventTrigger(deliverFrame0); // associate the trigger id with the deliverFrame0 callback
}
}
H264FramedSource::~H264FramedSource()
{
--referenceCount;
if (referenceCount == 0)
{
// Reclaim our 'event trigger'
envir().taskScheduler().deleteEventTrigger(eventTriggerId);
eventTriggerId = 0;
}
}
void H264FramedSource::AddToBuffer(uint8_t* buf, int surfaceSizeInBytes)
{
uint8_t* surfaceData = (new uint8_t[surfaceSizeInBytes]);
memcpy(surfaceData, buf, surfaceSizeInBytes);
int srcstride = W*3;
sws_scale(convertCtx, &surfaceData, &srcstride,0, H, pic_in.img.plane, pic_in.img.i_stride);
x264_nal_t* nals = NULL;
int i_nals = 0;
int frame_size = x264_encoder_encode(encoder, &nals, &i_nals, &pic_in, &pic_out);
if (frame_size >= 0)
{
static bool alreadydone = false;
if(!alreadydone)
{
x264_encoder_headers(encoder, &nals, &i_nals);
alreadydone = true;
}
for(int i = 0; i < i_nals; ++i)
{
m_queue.push(nals[i]);
}
}
delete [] surfaceData;
surfaceData = NULL;
envir().taskScheduler().triggerEvent(eventTriggerId, this);
}
void H264FramedSource::doGetNextFrame()
{
deliverFrame();
}
void H264FramedSource::deliverFrame0(void* clientData)
{
((H264FramedSource*)clientData)->deliverFrame();
}
void H264FramedSource::deliverFrame()
{
x264_nal_t nalToDeliver;
if (fPlayTimePerFrame > 0 && fPreferredFrameSize > 0) {
if (fPresentationTime.tv_sec == 0 && fPresentationTime.tv_usec == 0) {
// This is the first frame, so use the current time:
gettimeofday(&fPresentationTime, NULL);
} else {
// Increment by the play time of the previous data:
unsigned uSeconds = fPresentationTime.tv_usec + fLastPlayTime;
fPresentationTime.tv_sec += uSeconds/1000000;
fPresentationTime.tv_usec = uSeconds%1000000;
}
// Remember the play time of this data:
fLastPlayTime = (fPlayTimePerFrame*fFrameSize)/fPreferredFrameSize;
fDurationInMicroseconds = fLastPlayTime;
} else {
// We don't know a specific play time duration for this data,
// so just record the current time as being the 'presentation time':
gettimeofday(&fPresentationTime, NULL);
}
if(!m_queue.empty())
{
m_queue.wait_and_pop(nalToDeliver);
uint8_t* newFrameDataStart = (uint8_t*)(nalToDeliver.p_payload);
unsigned newFrameSize = nalToDeliver.i_payload;
// Deliver the data here:
if (newFrameSize > fMaxSize) {
fFrameSize = fMaxSize;
fNumTruncatedBytes = newFrameSize - fMaxSize;
}
else {
fFrameSize = newFrameSize;
}
memcpy(fTo, nalToDeliver.p_payload, fFrameSize); // copy only the bytes that fit in fTo
FramedSource::afterGetting(this);
}
}
http://www.cfanz.cn/?c=article&a=read&id=121976
JM H.264 reference software documentation
http://iphome.hhi.de/suehring/tml/doc/ldec/html/
http://iphome.hhi.de/suehring/tml/doc/ldec/html/parset_8c_source.html
How to obtain the resolution of an H.264 video
http://www.cnblogs.com/likwo/p/3531241.html
13)
Streaming-media essentials: how do you get the PTS from H.264 data?
http://70565912.blog.51cto.com/1358202/533736/
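One point worth keeping in mind alongside that post: a bare H.264 elementary stream generally carries no per-frame PTS; the timestamps come from the container (MP4, TS) or the transport layer. Over RTP, H.264 uses a 90 kHz clock (RFC 6184), so turning an RTP timestamp into a presentation time in milliseconds is simple arithmetic, sketched below.

```cpp
#include <cstdint>

// RTP timestamps for H.264 video tick at 90 kHz (RFC 6184).
// Convert the delta between the current packet's timestamp and the
// first packet's timestamp into milliseconds.
int64_t RtpDeltaToMs(uint32_t firstTs, uint32_t curTs) {
    // Unsigned subtraction handles 32-bit wrap-around correctly
    // as long as the true delta stays under roughly 13 hours.
    uint32_t delta = curTs - firstTs;
    return static_cast<int64_t>(delta) * 1000 / 90000;
}
```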
14)
http://bbs.chinavideo.org/viewthread.php?tid=47
15) An analysis of ffplay's audio/video synchronization http://blog.chinaunix.net/uid-20364597-id-3510052.html
16) Foreign IT websites every programmer should know
http://www.vaikan.com/programers-should-know-several-tech-websites/
CodeGuru:http://www.codeguru.com
Devx C++ None:http://www.devx.com/cplus
C-C++ Users Journal Web Site:http://www.devx.com/cplus
SGI STL--Service & Support:http://www.sgi.com/tech/stl
The MFC Professional:http://www.visionx.com/mfcpro/index.htp
Windows Developer Magazine Online:http://www.wdj.com
www.codeproject.com
17)
http://stackoverflow.com/questions/992069/ace-vs-boost-vs-poco
18) ffplay internals: http://bbs.chinavideo.org/viewthread.php?tid=5816
19) http://blog.yikuyiku.com/?tag=avcodec_decode_video2
Fixing frame drops with avcodec_decode_video2 (the usual cure is to keep feeding the decoder empty packets at end-of-stream so it drains its buffered frames);
20)
A Boost-based thread pool library
http://threadpool.sourceforge.net/
21) Thread pools
http://www.cnblogs.com/xugang/archive/2010/04/20/1716042.html
22) ONVIF development: successfully hooking the server side up to an RTSP video stream
http://www.sjsjw.com/kf_code/article/27_28486_6721.asp
23) Notes on UPnP and NAT traversal for IP cameras
http://blog.chinaunix.net/uid-23883288-id-3038120.html