FramedSource* H264VideoFileServerMediaSubsession::createNewStreamSource(
    unsigned /*clientSessionId*/,
    unsigned& estBitrate)
{
  estBitrate = 500; // kbps, estimate

  // Create the video source:
  ByteStreamFileSource* fileSource
    = ByteStreamFileSource::createNew(envir(), fFileName);
  if (fileSource == NULL) return NULL;
  fFileSize = fileSource->fileSize();

  // Create a framer for the Video Elementary Stream:
  return H264VideoStreamFramer::createNew(envir(), fileSource);
}
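(For context: this method is invoked by the server framework when a client sets up the stream. A minimal sketch of how such a subsession typically gets wired up, adapted from live555's testOnDemandRTSPServer demo; the stream name and file name here are placeholders of my own, and env/rtspServer are assumed to have been created earlier:)

ServerMediaSession* sms
  = ServerMediaSession::createNew(*env, "testStream");
sms->addSubsession(H264VideoFileServerMediaSubsession
                   ::createNew(*env, "test.264", False/*reuseFirstSource*/));
rtspServer->addServerMediaSession(sms);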
First a ByteStreamFileSource is created. Obviously this is a source that reads data from a file byte by byte; there isn't much to say about it, except that the actual opening, reading, and writing of the file does happen inside it. The component that ultimately handles the h264 file, analyzes its format, and parses out frames or NAL units should be this source: H264VideoStreamFramer. So we've found where the file gets opened, but the code that analyzes the file is the more valuable part, which leaves us only one option: look at H264VideoStreamFramer. Its inheritance chain:
H264VideoStreamFramer->MPEGVideoStreamFramer->FramedFilter->FramedSource
A Filter pops up in the middle of the chain. Seeing it, doesn't it remind you of DirectShow's filters? Or Photoshop's filters? Their meanings should be roughly the same: something inserted between a source and a render (sink) to process media data. Understood that way, it's actually closer to the Photoshop concept. To be honest, I suspect my take isn't entirely accurate, but let's run with it; it won't be off by a thousand miles. And once we see it this way, we have reason to expect that several filters may appear chained one after another...
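To make the pattern concrete, here is a minimal sketch of my own (not code from live555) of what a do-nothing pass-through FramedFilter subclass would look like; every filter in this chain follows the same shape — doGetNextFrame() asks the wrapped fInputSource for a frame, and the 'after getting' callback re-delivers it downstream:

#include "FramedFilter.hh"

class PassThroughFilter : public FramedFilter {
public:
  static PassThroughFilter* createNew(UsageEnvironment& env,
                                      FramedSource* inputSource) {
    return new PassThroughFilter(env, inputSource);
  }

private:
  PassThroughFilter(UsageEnvironment& env, FramedSource* inputSource)
    : FramedFilter(env, inputSource) {}

  virtual void doGetNextFrame() {
    // Ask the upstream source to fill our client's buffer directly:
    fInputSource->getNextFrame(fTo, fMaxSize,
                               afterGettingFrame, this,
                               FramedSource::handleClosure, this);
  }

  static void afterGettingFrame(void* clientData, unsigned frameSize,
                                unsigned numTruncatedBytes,
                                struct timeval presentationTime,
                                unsigned durationInMicroseconds) {
    PassThroughFilter* filter = (PassThroughFilter*)clientData;
    // Copy the bookkeeping fields, then deliver to our own downstream reader:
    filter->fFrameSize = frameSize;
    filter->fNumTruncatedBytes = numTruncatedBytes;
    filter->fPresentationTime = presentationTime;
    filter->fDurationInMicroseconds = durationInMicroseconds;
    FramedSource::afterGetting(filter);
  }
};

A real filter would transform the data somewhere along this path; H264FUAFragmenter, which we'll meet below, is exactly such a filter.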
H264VideoStreamFramer inherits from MPEGVideoStreamFramer. MPEGVideoStreamFramer is fairly simple: it mostly hands its work over to MPEGVideoStreamParser (so now a parser shows up too — a new thing, but let's not worry about it yet). Let's look at the key parts.
The constructor:
H264VideoStreamFramer::H264VideoStreamFramer(UsageEnvironment& env,
                                             FramedSource* inputSource,
                                             Boolean createParser,
                                             Boolean includeStartCodeInOutput)
  : MPEGVideoStreamFramer(env, inputSource),
    fIncludeStartCodeInOutput(includeStartCodeInOutput),
    fLastSeenSPS(NULL),
    fLastSeenSPSSize(0),
    fLastSeenPPS(NULL),
    fLastSeenPPSSize(0)
{
  fParser = createParser
    ? new H264VideoStreamParser(this, inputSource, includeStartCodeInOutput)
    : NULL;

  fNextPresentationTime = fPresentationTimeBase;
  fFrameRate = 25.0; // We assume a frame rate of 25 fps,
                     // unless we learn otherwise (from parsing a Sequence Parameter Set NAL unit)
}
Since createParser is always true here, the main thing that happens is the creation of the H264VideoStreamParser object (still setting that parser aside for now).
The other member functions aren't worth much attention; they all revolve around the saved PPS and SPS. So the analysis work has evidently moved into H264VideoStreamParser. A parser is, after all, an analyzer. Its base class is StreamParser, which does quite a bit of work itself, so let's first figure out what StreamParser does and what kind of calling framework it may provide for its subclasses.
...OK, I've read through it, heh. Straight to the findings: StreamParser's main job is to provide access to the data at bit granularity, because bit-level parsing is a very common need when handling media formats. The two functions skipBits(unsigned numBits) and unsigned getBits(unsigned numBits) are obviously bit-oriented operations. The member unsigned char* fBank[2] holds two buffers that are used in rotation. The class keeps a source, and naturally that should be the ByteStreamFileSource instance, not the FramedFilter. Calls such as getBytes() or getBits() will ultimately trigger a file read. After each read from the file, StreamParser::afterGettingBytes1() is called; it does a little housekeeping and then invokes the callback fClientContinueFunc. fClientContinueFunc might point at a function of the Framer, or it might point at one of the RTPSink — because the Framer is perfectly free to hand an RTPSink function to the Parser. Which one it actually points at we can only learn from further analysis.
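To illustrate what bit-granularity access means here, a minimal sketch of my own (not StreamParser's actual implementation, which additionally handles the two-bank rotation and refilling from the source) of MSB-first bit reading over a byte buffer:

#include <cassert>

// Reads bits MSB-first out of a fixed byte buffer, in the spirit of
// StreamParser's getBits()/skipBits():
struct BitReader {
  unsigned char const* fData; // the byte buffer
  unsigned fSizeBits;         // total number of bits available
  unsigned fCurBitIndex;      // next bit to read (bit 0 = MSB of byte 0)

  unsigned getBits(unsigned numBits) { // assumes numBits <= 32
    unsigned result = 0;
    for (unsigned i = 0; i < numBits; ++i) {
      assert(fCurBitIndex < fSizeBits);
      unsigned char curByte = fData[fCurBitIndex / 8];
      unsigned bit = (curByte >> (7 - fCurBitIndex % 8)) & 1;
      result = (result << 1) | bit;
      ++fCurBitIndex;
    }
    return result;
  }

  void skipBits(unsigned numBits) { fCurBitIndex += numBits; }
};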
Now for StreamParser's child: MPEGVideoStreamParser. It first uses the Parser to do the analysis (presumably parsing out one frame?); when the analysis completes, the frame data has already landed in MPEGVideoStreamFramer's buffer fTo. After the frame's duration is computed, FramedSource's afterGetting() is called, which eventually calls into the RTPSink. At this point we can sum up: for all the looking around, essentially only one Parser function is ever used by external code: parse().

MPEGVideoStreamParser::MPEGVideoStreamParser(
    MPEGVideoStreamFramer* usingSource,
    FramedSource* inputSource)
  : StreamParser(inputSource,
                 FramedSource::handleClosure,
                 usingSource,
                 &MPEGVideoStreamFramer::continueReadProcessing,
                 usingSource),
    fUsingSource(usingSource)
{
}
There is a lot of interesting stuff in MPEGVideoStreamParser's constructor. First, what does the parameter usingSource mean? The source that is currently using this Parser? inputSource is clear enough: it's the source the data actually comes from, i.e. the ByteStreamFileSource. And quite plainly, the source that StreamParser stores is the ByteStreamFileSource. From the callbacks passed down to StreamParser, and from their arguments, we can see that they all point at functions of StreamParser's subclasses (why not use virtual functions instead? Ah — the callbacks are all static functions, which cannot be virtual). This tells us that after each read of data, MPEGVideoStreamFramer::continueReadProcessing() is called; it delimits and analyzes a frame, and once that's done it calls the corresponding RTPSink function, where the frame gets packetized and sent.

MPEGVideoStreamParser's fTo is the buffer pointer passed in by the RTPSink. Its saveByte() and save4Bytes() copy data from StreamParser's buffer into fTo, for use by subclasses. saveToNextCode() copies data up to the next synchronization byte string (such as the thing that separates NAL units in h264 — though what's used here isn't quite the same as h264's), also for use by subclasses. The pure virtual function parse() is obviously the place left for subclasses to put their frame-analysis code. registerReadInterest() is used by the caller to tell MPEGVideoStreamParser the address and capacity of the buffer that will receive the frame.

Now we should analyze MPEGVideoStreamFramer, to establish how MPEGVideoStreamFramer and MPEGVideoStreamParser cooperate. Only two functions in MPEGVideoStreamFramer make significant use of the Parser. The first:
void MPEGVideoStreamFramer::doGetNextFrame()
{
  fParser->registerReadInterest(fTo, fMaxSize);
  continueReadProcessing();
}

Very simple: it just tells the Parser which buffer should receive the frame and how big that buffer is, then executes continueReadProcessing(). So let's look at continueReadProcessing():

void MPEGVideoStreamFramer::continueReadProcessing()
{
  unsigned acquiredFrameSize = fParser->parse();
  if (acquiredFrameSize > 0) {
    // We were able to acquire a frame from the input.
    // It has already been copied to the reader's space.
    fFrameSize = acquiredFrameSize;
    fNumTruncatedBytes = fParser->numTruncatedBytes();

    // "fPresentationTime" should have already been computed.

    // Compute "fDurationInMicroseconds" now:
    fDurationInMicroseconds =
        (fFrameRate == 0.0 || ((int)fPictureCount) < 0)
          ? 0
          : (unsigned)((fPictureCount * 1000000) / fFrameRate);
    fPictureCount = 0;

    // Call our own 'after getting' function. Because we're not a 'leaf'
    // source, we can call this directly, without risking infinite recursion.
    afterGetting(this);
  } else {
    // We were unable to parse a complete frame from the input, because:
    // - we had to read more data from the source stream, or
    // - the source stream has ended.
  }
}
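For example, with the default fFrameRate of 25.0 and fPictureCount equal to 1, fDurationInMicroseconds comes out to (1 * 1000000) / 25 = 40000, i.e. 40 ms per frame.

Now switch to the sink side, where H264VideoRTPSink::continuePlaying() splices one more link into the chain: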
Boolean H264VideoRTPSink::continuePlaying()
{
  // First, check whether we have a 'fragmenter' class set up yet.
  // If not, create it now:
  if (fOurFragmenter == NULL) {
    fOurFragmenter = new H264FUAFragmenter(envir(), fSource,
                                           OutPacketBuffer::maxSize,
                                           ourMaxPacketSize() - 12/*RTP hdr size*/);
    fSource = fOurFragmenter;
  }

  // Then call the parent class's implementation:
  return MultiFramedRTPSink::continuePlaying();
}
fSource now points at an H264FUAFragmenter. This class implements RTP packetization of H264 according to RFC 3984, although the implementation here puts at most one NALU into each RTP packet; the aggregation-packet case is not implemented. Its inheritance chain is: H264FUAFragmenter->FramedFilter->FramedSource. Clearly this is a filter, and it wraps the MPEGVideoStreamFramer object. Moreover, it replaces H264VideoStreamFramer as the source that deals with the RTPSink directly; from here on, whenever the RTPSink wants a frame, it gets it from the fragmenter. Let's look at its most important function:
void H264FUAFragmenter::doGetNextFrame() {
  if (fNumValidDataBytes == 1) {
    // We have no NAL unit data currently in the buffer. Read a new one:
    fInputSource->getNextFrame(&fInputBuffer[1], fInputBufferSize - 1,
                               afterGettingFrame, this,
                               FramedSource::handleClosure, this);
  } else {
    // We have NAL unit data in the buffer. There are three cases to consider:
    // 1. There is a new NAL unit in the buffer, and it's small enough to deliver
    //    to the RTP sink (as is).
    // 2. There is a new NAL unit in the buffer, but it's too large to deliver to
    //    the RTP sink in its entirety. Deliver the first fragment of this data,
    //    as a FU-A packet, with one extra preceding header byte.
    // 3. There is a NAL unit in the buffer, and we've already delivered some
    //    fragment(s) of this. Deliver the next fragment of this data,
    //    as a FU-A packet, with two extra preceding header bytes.

    if (fMaxSize < fMaxOutputPacketSize) { // shouldn't happen
      envir() << "H264FUAFragmenter::doGetNextFrame(): fMaxSize ("
              << fMaxSize << ") is smaller than expected\n";
    } else {
      fMaxSize = fMaxOutputPacketSize;
    }

    fLastFragmentCompletedNALUnit = True; // by default
    if (fCurDataOffset == 1) { // case 1 or 2
      if (fNumValidDataBytes - 1 <= fMaxSize) { // case 1
        // Case 1: deliver the whole NAL unit as is:
        memmove(fTo, &fInputBuffer[1], fNumValidDataBytes - 1);
        fFrameSize = fNumValidDataBytes - 1;
        fCurDataOffset = fNumValidDataBytes;
      } else { // case 2
        // Case 2: deliver the first fragment of the NAL unit.
        // We need to send the NAL unit data as FU-A packets. Deliver the first
        // packet now. Note that we add FU indicator and FU header bytes to the front
        // of the packet (reusing the existing NAL header byte for the FU header).
        fInputBuffer[0] = (fInputBuffer[1] & 0xE0) | 28;   // FU indicator
        fInputBuffer[1] = 0x80 | (fInputBuffer[1] & 0x1F); // FU header (with S bit); reuses the NALU header byte
        memmove(fTo, fInputBuffer, fMaxSize);
        fFrameSize = fMaxSize;
        fCurDataOffset += fMaxSize - 1;
        fLastFragmentCompletedNALUnit = False;
      }
    } else { // case 3
      // Case 3: deliver a non-first fragment.
      // We are sending this NAL unit data as FU-A packets. We've already sent the
      // first packet (fragment). Now, send the next fragment. Note that we add
      // FU indicator and FU header bytes to the front. (We reuse these bytes that
      // we already sent for the first fragment, but clear the S bit, and add the E
      // bit if this is the last fragment.)
      fInputBuffer[fCurDataOffset-2] = fInputBuffer[0];        // FU indicator
      fInputBuffer[fCurDataOffset-1] = fInputBuffer[1]&~0x80;  // FU header (no S bit)
      unsigned numBytesToSend = 2 + fNumValidDataBytes - fCurDataOffset;
      if (numBytesToSend > fMaxSize) {
        // We can't send all of the remaining data this time:
        numBytesToSend = fMaxSize;
        fLastFragmentCompletedNALUnit = False;
      } else {
        // This is the last fragment; set the E bit in the FU header:
        fInputBuffer[fCurDataOffset-1] |= 0x40;
        fNumTruncatedBytes = fSaveNumTruncatedBytes;
      }
      memmove(fTo, &fInputBuffer[fCurDataOffset-2], numBytesToSend);
      fFrameSize = numBytesToSend;
      fCurDataOffset += numBytesToSend - 2;
    }

    if (fCurDataOffset >= fNumValidDataBytes) {
      // We're done with this data. Reset the pointers for receiving new data:
      fNumValidDataBytes = fCurDataOffset = 1;
    }

    // Complete delivery to the client:
    FramedSource::afterGetting(this);
  }
}
If the input buffer holds no data, fInputSource->getNextFrame() is called. fInputSource is the H264VideoStreamFramer, whose getNextFrame() ends up calling H264VideoStreamParser's parse(); parse() in turn pulls data from the ByteStreamFileSource and then analyzes it. When parse() finishes, this gets called:
void H264FUAFragmenter::afterGettingFrame1(
    unsigned frameSize,
    unsigned numTruncatedBytes,
    struct timeval presentationTime,
    unsigned durationInMicroseconds)
{
  fNumValidDataBytes += frameSize;
  fSaveNumTruncatedBytes = numTruncatedBytes;
  fPresentationTime = presentationTime;
  fDurationInMicroseconds = durationInMicroseconds;

  // Deliver data to the client:
  doGetNextFrame();
}
which then calls back into H264FUAFragmenter::doGetNextFrame(). This time the input buffer does contain data, so H264FUAFragmenter proceeds with its analysis and processing. And what exactly does H264FUAFragmenter do to the data?
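Precisely the FU indicator/FU header rewriting we saw above. As a concrete illustration (a sketch of my own, not live555 code), here is the byte math for a hypothetical IDR slice whose NAL header byte is 0x65:

#include <cstdio>

int main() {
  unsigned char nalHdr = 0x65;                        // F=0, NRI=3, type=5 (IDR slice)
  unsigned char fuIndicator = (nalHdr & 0xE0) | 28;   // 0x7C: keep F/NRI, type=28 (FU-A)
  unsigned char fuHdrFirst  = 0x80 | (nalHdr & 0x1F); // 0x85: S bit set + original NAL type
  unsigned char fuHdrMiddle = fuHdrFirst & ~0x80;     // 0x05: neither S nor E bit
  unsigned char fuHdrLast   = fuHdrMiddle | 0x40;     // 0x45: E bit set on the last fragment
  printf("FU indicator %02X, FU headers %02X/%02X/%02X\n",
         fuIndicator, fuHdrFirst, fuHdrMiddle, fuHdrLast);
  return 0;
}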