A Collection of Useful URLs for Streaming Media Development

1) H.264 RTSP live streaming via live555: http://blog.csdn.net/firehood_/article/details/16844397

2) Live555: Streaming from a live source

http://comments.gmane.org/gmane.comp.multimedia.live555.devel/5555

3) H.264 video live streaming over RTMP (http://blog.csdn.net/firehood_/article/details/8783589)

An earlier article described live video streaming over RTSP (Real Time Streaming Protocol). One drawback of RTSP is that supporting playback from a web page requires embedding an ActiveX control, and ActiveX controls generally must be signed to work properly; otherwise users have to change their browser settings. Moreover, ActiveX only runs in IE-based browsers, and Chrome and Firefox need an IE-emulation plugin, which badly hurts the user experience. RTMP (Real Time Messaging Protocol) solves this problem nicely. Since RTMP is the streaming protocol for Flash, once video is published over RTMP you only need to embed a web player (such as JW Player) in the page to watch it; there are few platform restrictions, and the stream can also be watched conveniently on mobile phones.

Publishing video over RTMP requires an RTMP server (common commercial ones are FMS and Wowza Media Server; open-source options include CRtmpServer and Red5). The raw video simply has to be sent to the RTMP server according to the RTMP protocol to publish the stream. To make packaging and publishing easier, I wrapped everything in an RTMPStream class, which currently only supports H.264; it can send H.264 data frames or H.264 files directly. The interface RTMPStream provides is sketched below.
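The article lists the concrete RTMPStream interface at this point; it is not reproduced here, so the sketch below is only a plausible reconstruction from the description above. The class name CRTMPStream and all method names are assumptions, not the article's verbatim API.

// Hypothetical shape of the RTMPStream wrapper described above
// (names and signatures assumed):
class CRTMPStream
{
public:
    // connect to the RTMP server, e.g. "rtmp://127.0.0.1/live/stream"
    bool Connect(const char* url);
    void Close();

    // push a complete Annex-B H.264 file
    bool SendH264File(const char* fileName);

    // push a single H.264 frame/NAL unit with its timestamp in milliseconds
    bool SendH264Packet(unsigned char* data, unsigned int size,
                        bool isKeyFrame, unsigned int timestampMs);
};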

4) Encoding H.264 video into an MP4 file (http://blog.csdn.net/firehood_/article/details/8813587)

I recently needed to package H.264 video into MP4 format. After some study: one approach is to use the ffmpeg libraries to decode the H.264 file first and then re-encode it into an MP4 file, but this is inefficient; a 10 MB video can take several seconds. The other approach is to wrap the H.264 packets directly into the MP4 container according to the MP4 file format; since this is pure remuxing, it is very fast. H.264 is easy to wrap into an FLV file, but the MP4 format is comparatively complex and more troublesome to package. Lacking the time to study the MP4 specification, I found an open-source MP4 muxing library on Google Code, MP4v2 (https://code.google.com/p/mp4v2/), which makes it easy to wrap H.264 into an MP4 file. For convenience I wrapped it in an MP4Encoder class, whose interface is sketched below. It currently only supports encoding an H.264 file or data frames into an MP4 file.
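As with RTMPStream, MP4Encoder's actual interface is in the linked article. The sketch below shows the core MP4v2 calls such a wrapper is built on; the MP4v2 functions are real, but the helper names and parameter values are illustrative assumptions. The key detail is that MP4 stores each NAL unit behind a 4-byte length prefix instead of an Annex-B start code.

#include <mp4v2/mp4v2.h>
#include <stdint.h>
#include <string.h>

// Illustrative helpers, not MP4Encoder's real interface.
// sps/pps/nal point at raw NAL units WITHOUT start codes.
static MP4TrackId AddH264Track(MP4FileHandle hFile,
                               const uint8_t* sps, uint16_t spsLen,
                               const uint8_t* pps, uint16_t ppsLen,
                               int width, int height, int fps)
{
    // the profile/compat/level bytes live at sps[1..3]
    MP4TrackId track = MP4AddH264VideoTrack(hFile, 90000, 90000 / fps,
                                            width, height,
                                            sps[1], sps[2], sps[3],
                                            3 /* 4-byte NAL length field */);
    MP4AddH264SequenceParameterSet(hFile, track, sps, spsLen);
    MP4AddH264PictureParameterSet(hFile, track, pps, ppsLen);
    return track;
}

static bool WriteH264Frame(MP4FileHandle hFile, MP4TrackId track,
                           const uint8_t* nal, uint32_t len, bool isKeyFrame)
{
    // swap the Annex-B start code for a 4-byte big-endian length prefix
    uint8_t* sample = new uint8_t[4 + len];
    sample[0] = uint8_t(len >> 24); sample[1] = uint8_t(len >> 16);
    sample[2] = uint8_t(len >> 8);  sample[3] = uint8_t(len);
    memcpy(sample + 4, nal, len);
    bool ok = MP4WriteSample(hFile, track, sample, 4 + len,
                             MP4_INVALID_DURATION, 0, isKeyFrame);
    delete[] sample;
    return ok;
}

Open the file with MP4Create("out.mp4") and finish with MP4Close(hFile); everything else is bookkeeping around these two helpers.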

 

5) Implementing an RTSP streaming player: http://blog.csdn.net/firehood_/article/details/8727914

I recently needed to build an RTSP streaming player, so after some study I wrapped an RTSP playback class, CRTSPPlayer, using ffmpeg as the decoding library. Since the requirements were simple and time was limited, only the basic play, stop, and pause interfaces are implemented so far. A simple RTSP player can be built directly on the CRTSPPlayer class.

So far only H.264 video has been tested; other formats have not. The player can also open and play local video files directly, but playback does not stay in sync with the original video's frame rate. I do not yet know how to handle this; advice from anyone experienced in this area would be appreciated.

In addition, the open-source library VLC can also be used to build a streaming player; it supports many streaming protocols, such as RTP and RTSP. On CodeProject someone has already wrapped libVLC into an easier-to-use library, VLCWrapper (http://www.codeproject.com/Articles/38952/VLCWrapper-A-Little-C-wrapper-Around-libvlc), which makes it easy to build a video player.

The complete CRTSPPlayer code is in the linked article.
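What follows instead is a minimal sketch of the ffmpeg demux/decode loop such a player is built on, assuming a recent ffmpeg (it uses the avcodec_send_packet/avcodec_receive_frame API; a 2013-era class would have used avcodec_decode_video2, see item 19). Error handling and the display path are trimmed.

extern "C" {
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>
}

// Minimal RTSP open/read/decode loop (sketch only)
int PlayRtsp(const char* url)
{
    avformat_network_init();

    AVFormatContext* fmt = NULL;
    if (avformat_open_input(&fmt, url, NULL, NULL) < 0) return -1;
    if (avformat_find_stream_info(fmt, NULL) < 0) return -1;

    int vidIdx = av_find_best_stream(fmt, AVMEDIA_TYPE_VIDEO, -1, -1, NULL, 0);
    if (vidIdx < 0) return -1;

    const AVCodec* dec = avcodec_find_decoder(fmt->streams[vidIdx]->codecpar->codec_id);
    AVCodecContext* ctx = avcodec_alloc_context3(dec);
    avcodec_parameters_to_context(ctx, fmt->streams[vidIdx]->codecpar);
    avcodec_open2(ctx, dec, NULL);

    AVPacket* pkt = av_packet_alloc();
    AVFrame* frame = av_frame_alloc();
    while (av_read_frame(fmt, pkt) >= 0) {
        if (pkt->stream_index == vidIdx && avcodec_send_packet(ctx, pkt) == 0) {
            while (avcodec_receive_frame(ctx, frame) == 0) {
                // hand the decoded frame off for display/pacing here
            }
        }
        av_packet_unref(pkt);
    }

    av_frame_free(&frame);
    av_packet_free(&pkt);
    avcodec_free_context(&ctx);
    avformat_close_input(&fmt);
    return 0;
}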

 

6) OpenH264 is an open-source H.264 encoder and decoder released by Cisco: http://www.oschina.net/p/openh264

 

7) Multimedia open-source project index:

http://www.oschina.net/project/tag/142/multimedialib

 

8) live555 change log

http://www.live555.com/liveMedia/public/changelog.txt

 

The 2014.01.15 release of live555 added H.265 support.

 

9) live555 RTSPPerf

RTSPPerf is a tool that uses live555 to test RTSP server performance. It simply receives the video/audio data and discards it, so it can be used to load-test the RTSP servers of NVRs, IP cameras, and DVRs.

http://sourceforge.net/p/rtspperf/home/RTSPPerf%20/

 

10)

[Original] The ODM wrapper around Live555

http://m.blog.csdn.net/blog/u011652271/18423295

ODM has recently gained some recognition in the video surveillance field, and its playback latency is very low. I studied its code today and found its wrappers around FFMPEG and live555 to be among the best I have seen; if you are interested, take a look at the source:

http://sourceforge.net/p/onvifdm/code/HEAD/tree/trunk/odm/odm.player/odm.player.lib/

 

11) A good RTSP test source: http://m.blog.csdn.net/blog/u011652271/18179935

 

12)

How to write a Live555 FramedSource to allow me to stream H.264 live

http://stackoverflow.com/questions/13863673/how-to-write-a-live555-framedsource-to-allow-me-to-stream-h-264-live

 

Ok, I finally got some time to spend on this and got it working! I'm sure there are others who will be begging to know how to do it so here it is.

You will need your own FramedSource to take each frame, encode it, and prepare it for streaming; I will provide some of the source code for this soon.

Essentially throw your FramedSource into the H264VideoStreamDiscreteFramer, then throw this into the H264VideoRTPSink. Something like this:

scheduler = BasicTaskScheduler::createNew();
env = BasicUsageEnvironment::createNew(*scheduler);
framedSource = H264FramedSource::createNew(*env, 0, 0);
h264VideoStreamDiscreteFramer = H264VideoStreamDiscreteFramer::createNew(*env, framedSource);
// initialise the RTP sink stuff here; look at
// testH264VideoStreamer.cpp to find out how
videoSink->startPlaying(*h264VideoStreamDiscreteFramer, NULL, videoSink);
env->taskScheduler().doEventLoop();

Now in your main render loop, throw over the backbuffer which you've saved to system memory to your FramedSource so it can be encoded etc. For more info on how to set up the encoding side, check out this answer: How does one encode a series of images into H264 using the x264 C API?

My implementation is very much in a hacky state and is yet to be optimised at all; my D3D application runs at around 15 fps due to the encoding, ouch, so I will have to look into this. But for all intents and purposes this StackOverflow question is answered, because I was mostly after how to stream it. I hope this helps other people.

As for my FramedSource, it looks a little something like this:

 

// Headers the original answer omits: x264, swscale, and the live555
// FramedSource machinery (the class declaration lives in the author's
// own header; the name used here is assumed).
extern "C" {
#include <x264.h>
#include <libswscale/swscale.h>
}
#include "H264FramedSource.hh"

// concurrent_queue is a thread-safe queue providing push()/wait_and_pop();
// it is not part of the STL (presumably a condition-variable implementation
// such as Anthony Williams').
concurrent_queue<x264_nal_t> m_queue;
SwsContext* convertCtx;
x264_param_t param;
x264_t* encoder;
x264_picture_t pic_in, pic_out;


EventTriggerId H264FramedSource::eventTriggerId = 0;
unsigned H264FramedSource::FrameSize = 0;
unsigned H264FramedSource::referenceCount = 0;

int W = 720;
int H = 960;

H264FramedSource* H264FramedSource::createNew(UsageEnvironment& env,
                                              unsigned preferredFrameSize,
                                              unsigned playTimePerFrame)
{
        return new H264FramedSource(env, preferredFrameSize, playTimePerFrame);
}

H264FramedSource::H264FramedSource(UsageEnvironment& env,
                                   unsigned preferredFrameSize,
                                   unsigned playTimePerFrame)
    : FramedSource(env),
    fPreferredFrameSize(preferredFrameSize),
    fPlayTimePerFrame(playTimePerFrame),
    fLastPlayTime(0),
    fCurIndex(0)
{
        if (referenceCount == 0)
        {
            // any one-time global initialisation would go here
        }
        ++referenceCount;

        x264_param_default_preset(&param, "veryfast", "zerolatency");
        param.i_threads = 1;
        param.i_width = 720;
        param.i_height = 960;
        param.i_fps_num = 60;
        param.i_fps_den = 1;
        // Intra refresh:
        param.i_keyint_max = 60;
        param.b_intra_refresh = 1;
        //Rate control:
        param.rc.i_rc_method = X264_RC_CRF;
        param.rc.f_rf_constant = 25;
        param.rc.f_rf_constant_max = 35;
        param.i_sps_id = 7;
        //For streaming:
        param.b_repeat_headers = 1;
        param.b_annexb = 1;
        x264_param_apply_profile(&param, "baseline");


        encoder = x264_encoder_open(&param);
        pic_in.i_type            = X264_TYPE_AUTO;
        pic_in.i_qpplus1         = 0;
        pic_in.img.i_csp         = X264_CSP_I420;
        pic_in.img.i_plane       = 3;

        // allocate the I420 picture x264 encodes from, at the encode size
        x264_picture_alloc(&pic_in, X264_CSP_I420, W, H);

        // RGB24 -> I420 conversion context, same 720x960 geometry throughout
        convertCtx = sws_getContext(W, H, PIX_FMT_RGB24,
                                    W, H, PIX_FMT_YUV420P,
                                    SWS_FAST_BILINEAR, NULL, NULL, NULL);


        if (eventTriggerId == 0)
        {
            // associate the event-trigger id with the callback deliverFrame0
            eventTriggerId = envir().taskScheduler().createEventTrigger(deliverFrame0);
        }
}

H264FramedSource::~H264FramedSource()
{
    --referenceCount;
    if (referenceCount == 0)
    {
        // Reclaim our 'event trigger'
        envir().taskScheduler().deleteEventTrigger(eventTriggerId);
        eventTriggerId = 0;
    }
}

void H264FramedSource::AddToBuffer(uint8_t* buf, int surfaceSizeInBytes)
{
    uint8_t* surfaceData = (new uint8_t[surfaceSizeInBytes]);

    memcpy(surfaceData, buf, surfaceSizeInBytes);

    // convert the packed RGB24 surface into the encoder's I420 picture
    int srcstride = W*3;
    sws_scale(convertCtx, &surfaceData, &srcstride, 0, H, pic_in.img.plane, pic_in.img.i_stride);
    x264_nal_t* nals = NULL;
    int i_nals = 0;
    int frame_size = -1;


    frame_size = x264_encoder_encode(encoder, &nals, &i_nals, &pic_in, &pic_out);

    if (frame_size >= 0)
    {
        static bool alreadydone = false;
        if(!alreadydone)
        {
            // emit SPS/PPS/SEI once up front (this overwrites the nals from
            // the encode above, and is redundant with b_repeat_headers = 1)
            x264_encoder_headers(encoder, &nals, &i_nals);
            alreadydone = true;
        }
        for(int i = 0; i < i_nals; ++i)
        {
            // note: p_payload points into a buffer x264 reuses on the next
            // encode call; a robust version would deep-copy it before queueing
            m_queue.push(nals[i]);
        }
    }
    delete [] surfaceData;
    surfaceData = NULL;

    envir().taskScheduler().triggerEvent(eventTriggerId, this);
}

void H264FramedSource::doGetNextFrame()
{
    deliverFrame();
}

void H264FramedSource::deliverFrame0(void* clientData)
{
    ((H264FramedSource*)clientData)->deliverFrame();
}

void H264FramedSource::deliverFrame()
{
    x264_nal_t nalToDeliver;

    if (fPlayTimePerFrame > 0 && fPreferredFrameSize > 0) {
        if (fPresentationTime.tv_sec == 0 && fPresentationTime.tv_usec == 0) {
            // This is the first frame, so use the current time:
            gettimeofday(&fPresentationTime, NULL);
        } else {
            // Increment by the play time of the previous data:
            unsigned uSeconds   = fPresentationTime.tv_usec + fLastPlayTime;
            fPresentationTime.tv_sec += uSeconds/1000000;
            fPresentationTime.tv_usec = uSeconds%1000000;
        }

        // Remember the play time of this data:
        fLastPlayTime = (fPlayTimePerFrame*fFrameSize)/fPreferredFrameSize;
        fDurationInMicroseconds = fLastPlayTime;
    } else {
        // We don't know a specific play time duration for this data,
        // so just record the current time as being the 'presentation time':
        gettimeofday(&fPresentationTime, NULL);
    }

    if(!m_queue.empty())
    {
        m_queue.wait_and_pop(nalToDeliver);

        uint8_t* newFrameDataStart = (uint8_t*)(nalToDeliver.p_payload);
        unsigned newFrameSize = nalToDeliver.i_payload;

        // Deliver the data, truncating if it exceeds the sink's buffer:
        if (newFrameSize > fMaxSize) {
            fFrameSize = fMaxSize;
            fNumTruncatedBytes = newFrameSize - fMaxSize;
        }
        else {
            fFrameSize = newFrameSize;
        }

        memcpy(fTo, newFrameDataStart, fFrameSize);

        FramedSource::afterGetting(this);
    }
}

 

http://www.cfanz.cn/?c=article&a=read&id=121976


 

JM H.264 reference software documentation:

http://iphome.hhi.de/suehring/tml/doc/ldec/html/

http://iphome.hhi.de/suehring/tml/doc/ldec/html/parset_8c_source.html


How to obtain the resolution of an H.264 video

http://www.cnblogs.com/likwo/p/3531241.html
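The standard technique, and presumably what the linked post covers, is to parse the SPS NAL unit: width and height are derived from pic_width_in_mbs_minus1, pic_height_in_map_units_minus1, and the frame-cropping fields. Below is a simplified sketch for baseline-profile 4:2:0 streams; it omits the extra SPS fields of the high profiles and the emulation-prevention-byte removal that a production parser needs.

#include <stdint.h>

// Minimal Exp-Golomb bit reader over an SPS payload (sketch only)
struct BitReader {
    const uint8_t* p; int bit;
    int u1() { int b = (p[bit >> 3] >> (7 - (bit & 7))) & 1; ++bit; return b; }
    uint32_t u(int n) { uint32_t v = 0; while (n--) v = (v << 1) | u1(); return v; }
    uint32_t ue() {                       // unsigned Exp-Golomb
        int zeros = 0;
        while (u1() == 0) ++zeros;
        return (1u << zeros) - 1 + u(zeros);
    }
    int32_t se() {                        // signed Exp-Golomb
        uint32_t k = ue();
        return (k & 1) ? int32_t((k + 1) / 2) : -int32_t(k / 2);
    }
};

// sps points just past the NAL header byte, i.e. at profile_idc
void SpsResolution(const uint8_t* sps, int& width, int& height)
{
    BitReader br = { sps, 0 };
    br.u(8);                              // profile_idc (baseline assumed)
    br.u(8);                              // constraint flags + reserved bits
    br.u(8);                              // level_idc
    br.ue();                              // seq_parameter_set_id
    br.ue();                              // log2_max_frame_num_minus4
    uint32_t pocType = br.ue();           // pic_order_cnt_type
    if (pocType == 0) {
        br.ue();                          // log2_max_pic_order_cnt_lsb_minus4
    } else if (pocType == 1) {
        br.u1(); br.se(); br.se();
        uint32_t n = br.ue();             // num_ref_frames_in_pic_order_cnt_cycle
        while (n--) br.se();
    }
    br.ue();                              // max_num_ref_frames
    br.u1();                              // gaps_in_frame_num_value_allowed_flag
    uint32_t widthMbs     = br.ue() + 1;  // pic_width_in_mbs_minus1
    uint32_t heightUnits  = br.ue() + 1;  // pic_height_in_map_units_minus1
    uint32_t frameMbsOnly = br.u1();      // frame_mbs_only_flag
    if (!frameMbsOnly) br.u1();           // mb_adaptive_frame_field_flag
    br.u1();                              // direct_8x8_inference_flag
    uint32_t cl = 0, cr = 0, ct = 0, cb = 0;
    if (br.u1()) {                        // frame_cropping_flag
        cl = br.ue(); cr = br.ue(); ct = br.ue(); cb = br.ue();
    }
    // crop units for 4:2:0: 2 px horizontally, 2*(2-frame_mbs_only) vertically
    width  = int(widthMbs * 16 - 2 * (cl + cr));
    height = int((2 - frameMbsOnly) * heightUnits * 16 - 2 * (2 - frameMbsOnly) * (ct + cb));
}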


13)

Streaming media essentials: how do you obtain the PTS from H.264 data?

http://70565912.blog.51cto.com/1358202/533736/
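For context (general knowledge, not necessarily the linked post's approach): a raw Annex-B H.264 elementary stream carries no timestamps of its own, so the PTS normally comes from the container or transport layer (RTP, MPEG-TS, FLV), or, for a fixed-frame-rate stream, is synthesized from the frame index against the 90 kHz clock that RTP and TS video use:

#include <stdint.h>

// synthesize a 90 kHz PTS for frame n of a fixed-rate stream
uint32_t PtsForFrame(uint32_t n, uint32_t fpsNum, uint32_t fpsDen)
{
    // 90000 * fpsDen / fpsNum ticks per frame, e.g. 3000 at 30 fps
    return (uint32_t)((uint64_t)n * 90000 * fpsDen / fpsNum);
}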

 

14)

An interview with mediaxyz: rate control in ffmpeg

http://bbs.chinavideo.org/viewthread.php?tid=47

 

15) Analysis of audio/video synchronisation in ffplay: http://blog.chinaunix.net/uid-20364597-id-3510052.html

 

16) Foreign IT websites every programmer should know

http://www.vaikan.com/programers-should-know-several-tech-websites/

 

CodeGuru: http://www.codeguru.com
DevX C++ Zone: http://www.devx.com/cplus
C/C++ Users Journal: http://www.devx.com/cplus
SGI STL (Service & Support): http://www.sgi.com/tech/stl
The MFC Professional: http://www.visionx.com/mfcpro/index.htp
Windows Developer Magazine Online: http://www.wdj.com
CodeProject: http://www.codeproject.com

 

17)

ACE vs Boost vs POCO

http://stackoverflow.com/questions/992069/ace-vs-boost-vs-poco

 

18) http://bbs.chinavideo.org/viewthread.php?tid=5816 (how ffplay works)

 

19) Fixing dropped frames when decoding with avcodec_decode_video2:
http://blog.yikuyiku.com/?tag=avcodec_decode_video2

 

20)

Boost thread pool

http://threadpool.sourceforge.net/

 

21) Thread pools

http://www.cnblogs.com/xugang/archive/2010/04/20/1716042.html

 

22) ONVIF development: successfully connecting the server side to an RTSP video stream

http://www.sjsjw.com/kf_code/article/27_28486_6721.asp

 

23) Notes on UPnP and NAT for IP cameras

http://blog.chinaunix.net/uid-23883288-id-3038120.html
