1) The H.264 stream is received with live555. live555 delivers SPS, PPS, I-frames, and P-frames each as separate packets, so after receiving a buffer we must assemble it into complete frames. live555 itself handles reassembling fragmented I- and P-frames, but before handing data to FFmpeg we must insert the 00 00 00 01 start code in front of each frame, and for an I-frame the SPS and PPS must be passed to FFmpeg together with the I-frame or it cannot decode. So we perform this frame assembly on the live555 buffers.
The key live555 task is getting hold of the buffer. You can refer to the openRTSP and testRTSPClient examples: openRTSP has a FileSink and an H264VideoFileSink, and testRTSPClient has a DummySink. You can modify this sink, assemble frames inside it, and then call FFmpeg to decode:
class DummySink: public MediaSink {
public:
  static DummySink* createNew(UsageEnvironment& env,
                              MediaSubsession& subsession, // identifies the kind of data that's being received
                              char const* streamId = NULL); // identifies the stream itself (optional)

private:
  DummySink(UsageEnvironment& env, MediaSubsession& subsession, char const* streamId);
    // called only by "createNew()"
  virtual ~DummySink();

  static void afterGettingFrame(void* clientData, unsigned frameSize,
                                unsigned numTruncatedBytes,
                                struct timeval presentationTime,
                                unsigned durationInMicroseconds);
  void afterGettingFrame(unsigned frameSize, unsigned numTruncatedBytes,
                         struct timeval presentationTime, unsigned durationInMicroseconds);

private:
  // redefined virtual functions:
  virtual Boolean continuePlaying();

private:
  u_int8_t* fReceiveBuffer;
  MediaSubsession& fSubsession;
  char* fStreamId;
};
2) FFmpeg is used for decoding. There is not much to say about this; my earlier posts contain examples.
We can rewrite afterGettingFrame like this:
void DummySink::afterGettingFrame(unsigned frameSize, unsigned numTruncatedBytes,
                                  struct timeval presentationTime, unsigned /*durationInMicroseconds*/) {
  unsigned char const start_code[4] = {0x00, 0x00, 0x00, 0x01};
  // ...... (frame-assembly code omitted)
  // pH264ReceiveBuff is the buffer holding the assembled frame;
  // this is where it is handed to FFmpeg.
  int imageWidth = 0;
  int imageHeight = 0;
  if (H264Status == H264STATUS_IFRAME || H264Status == H264STATUS_PFRAME)
  {
    // wrapper around the FFmpeg H.264 decode routine
    bool bRet = H264DecodeClass.H264DecodeProcess((unsigned char*)pH264ReceiveBuff, frameSize,
                                                  (unsigned char*)DecodeBuff, imageWidth, imageHeight);
    if (bRet && imageWidth > 0 && imageHeight > 0)
    {
      TRACE("receive a frame, frameSize=%d\n", frameSize);
      // display the image via DirectDraw here
      // ......
    }
  }
  // Then continue, to request the next frame of data:
  continuePlaying();
}
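The H264DecodeProcess wrapper called above might look roughly like this. This is a sketch against FFmpeg's send/receive decoding API, not the author's actual implementation; the class and member names are illustrative, and it assumes the input is one Annex-B frame with start codes already inserted (SPS/PPS included for I-frames).

```cpp
extern "C" {
#include <libavcodec/avcodec.h>
}
#include <cstring>

// Sketch of an FFmpeg H.264 decode wrapper (names are illustrative).
// Output: the decoded YUV420P planes copied contiguously into pOutBuff,
// which the caller must size for the expected resolution.
class CH264Decode {
public:
    bool Init() {
        const AVCodec* codec = avcodec_find_decoder(AV_CODEC_ID_H264);
        if (!codec) return false;
        ctx_ = avcodec_alloc_context3(codec);
        frame_ = av_frame_alloc();
        pkt_ = av_packet_alloc();
        return ctx_ && frame_ && pkt_ && avcodec_open2(ctx_, codec, nullptr) >= 0;
    }
    bool H264DecodeProcess(unsigned char* pInBuff, int inSize,
                           unsigned char* pOutBuff, int& width, int& height) {
        pkt_->data = pInBuff;
        pkt_->size = inSize;
        if (avcodec_send_packet(ctx_, pkt_) < 0) return false;
        if (avcodec_receive_frame(ctx_, frame_) < 0) return false; // needs more input
        width = frame_->width;
        height = frame_->height;
        // Copy the three planes (Y, then U, then V), dropping the line padding.
        unsigned char* dst = pOutBuff;
        for (int plane = 0; plane < 3; ++plane) {
            int w = plane ? width / 2 : width;
            int h = plane ? height / 2 : height;
            for (int y = 0; y < h; ++y, dst += w)
                std::memcpy(dst, frame_->data[plane] + y * frame_->linesize[plane], w);
        }
        return true;
    }
private:
    AVCodecContext* ctx_ = nullptr;
    AVFrame* frame_ = nullptr;
    AVPacket* pkt_ = nullptr;
};
```

Note that a decoder may buffer frames, so avcodec_receive_frame can legitimately fail with "needs more input" on the first few packets; returning false for that case, as here, simply skips display for those packets.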
3) DirectDraw displays the YUV420P data directly. First create two surfaces, a primary surface and an offscreen surface; copy the YUV420P data to the offscreen surface, then Blt it to the primary surface to draw. I have also covered this on my blog; interested readers can refer to my earlier article on DirectDraw YUV video display.
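The two-surface flow can be sketched roughly as below. This is a Windows-only sketch with error handling and resource Release calls trimmed; it assumes the display driver supports a YV12 FOURCC offscreen surface, and in real code the DirectDraw object and surfaces would be created once, not per frame.

```cpp
#include <windows.h>
#include <ddraw.h>
#include <cstring>

// Sketch: draw one decoded YUV420P frame via DirectDraw.
bool DrawYUV420P(IDirectDraw7* dd, HWND hwnd,
                 const unsigned char* yuv, int width, int height) {
    // 1) Primary surface.
    DDSURFACEDESC2 sd = {};
    sd.dwSize = sizeof(sd);
    sd.dwFlags = DDSD_CAPS;
    sd.ddsCaps.dwCaps = DDSCAPS_PRIMARYSURFACE;
    IDirectDrawSurface7 *primary = nullptr, *offscreen = nullptr;
    if (FAILED(dd->CreateSurface(&sd, &primary, nullptr))) return false;

    // 2) Offscreen YV12 surface.
    sd.dwFlags = DDSD_CAPS | DDSD_WIDTH | DDSD_HEIGHT | DDSD_PIXELFORMAT;
    sd.ddsCaps.dwCaps = DDSCAPS_OFFSCREENPLAIN;
    sd.dwWidth = width;
    sd.dwHeight = height;
    sd.ddpfPixelFormat.dwSize = sizeof(DDPIXELFORMAT);
    sd.ddpfPixelFormat.dwFlags = DDPF_FOURCC;
    sd.ddpfPixelFormat.dwFourCC = MAKEFOURCC('Y', 'V', '1', '2');
    if (FAILED(dd->CreateSurface(&sd, &offscreen, nullptr))) return false;

    // 3) Copy planes. YUV420P is Y,U,V but YV12 stores Y,V,U, so swap chroma.
    DDSURFACEDESC2 lock = {};
    lock.dwSize = sizeof(lock);
    offscreen->Lock(nullptr, &lock, DDLOCK_WAIT | DDLOCK_WRITEONLY, nullptr);
    BYTE* dst = (BYTE*)lock.lpSurface;
    const unsigned char* y = yuv;
    const unsigned char* u = y + width * height;
    const unsigned char* v = u + width * height / 4;
    for (int r = 0; r < height; ++r)             // Y plane
        memcpy(dst + r * lock.lPitch, y + r * width, width);
    BYTE* dstV = dst + lock.lPitch * height;     // V plane comes first in YV12
    for (int r = 0; r < height / 2; ++r)
        memcpy(dstV + r * lock.lPitch / 2, v + r * width / 2, width / 2);
    BYTE* dstU = dstV + lock.lPitch * height / 4;
    for (int r = 0; r < height / 2; ++r)
        memcpy(dstU + r * lock.lPitch / 2, u + r * width / 2, width / 2);
    offscreen->Unlock(nullptr);

    // 4) Blt to the window's client area (primary coordinates are screen-based).
    RECT rc;
    GetClientRect(hwnd, &rc);
    POINT tl = {rc.left, rc.top}, br = {rc.right, rc.bottom};
    ClientToScreen(hwnd, &tl); ClientToScreen(hwnd, &br);
    RECT dstRc = {tl.x, tl.y, br.x, br.y};
    return SUCCEEDED(primary->Blt(&dstRc, offscreen, nullptr, DDBLT_WAIT, nullptr));
}
```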
You can follow this approach to implement H.264 reception, decoding, and display yourself.