10 H264 RTP Transmission in Detail (2)
The previous chapter never located the code that opens and analyzes the file, because it turns out to be hidden fairly deep, and H264 has several Sources, one linked to the next. So in this chapter we will analyze the file handling together with the H264 Sources.
Where to start? Let's start from the source! Why from there? Because I feel like starting there, all right?
FramedSource* H264VideoFileServerMediaSubsession::createNewStreamSource(
    unsigned /*clientSessionId*/, unsigned& estBitrate)
{
  estBitrate = 500; // kbps, estimate

  // Create the video source:
  ByteStreamFileSource* fileSource
      = ByteStreamFileSource::createNew(envir(), fFileName);
  if (fileSource == NULL) return NULL;
  fFileSize = fileSource->fileSize();

  // Create a framer for the Video Elementary Stream:
  return H264VideoStreamFramer::createNew(envir(), fileSource);
}

It first creates a ByteStreamFileSource. This is obviously a source that reads data from a file byte by byte, and there is not much to say about it; but the actual opening, reading, and writing of the file does live inside it. The source that ultimately handles the h264 file, analyzes its format, and parses out frames or NAL units should be H264VideoStreamFramer. So we have found where the file is opened, but the code that analyzes the file is what is really valuable, and for that we have to look at H264VideoStreamFramer.
H264VideoStreamFramer::H264VideoStreamFramer(UsageEnvironment& env,
    FramedSource* inputSource, Boolean createParser,
    Boolean includeStartCodeInOutput)
  : MPEGVideoStreamFramer(env, inputSource),
    fIncludeStartCodeInOutput(includeStartCodeInOutput),
    fLastSeenSPS(NULL), fLastSeenSPSSize(0),
    fLastSeenPPS(NULL), fLastSeenPPSSize(0)
{
  fParser = createParser
      ? new H264VideoStreamParser(this, inputSource, includeStartCodeInOutput)
      : NULL;
  fNextPresentationTime = fPresentationTimeBase;
  fFrameRate = 25.0; // We assume a frame rate of 25 fps,
      // unless we learn otherwise (from parsing a Sequence Parameter Set NAL unit)
}

Since createParser is always true here, the main thing this constructor does is create an H264VideoStreamParser object (let us set the parser itself aside for now).
Next, let us analyze MPEGVideoStreamParser, a subclass of StreamParser.
MPEGVideoStreamParser::MPEGVideoStreamParser(
    MPEGVideoStreamFramer* usingSource, FramedSource* inputSource)
  : StreamParser(inputSource, FramedSource::handleClosure, usingSource,
                 &MPEGVideoStreamFramer::continueReadProcessing, usingSource),
    fUsingSource(usingSource)
{
}
There are quite a few interesting things in MPEGVideoStreamParser's constructor.
First, what does the parameter usingSource mean? It is the source that is currently using this parser. inputSource is clear: it is the source the data actually comes from, namely the ByteStreamFileSource; accordingly, the source stored inside StreamParser is the ByteStreamFileSource. From the callback functions passed to StreamParser, together with their client-data arguments, you can see that these callbacks all point at functions of the source that uses the parser, not of StreamParser itself. (Why not use virtual functions instead? Because the callbacks are all static functions, and a static function cannot be virtual.) This means that after each read of data, MPEGVideoStreamFramer::continueReadProcessing() is called; inside it the frame is delimited and analyzed, and when that is done the corresponding RTPSink function is called, and the RTPSink packetizes and sends the frame (remember? If not, go back and reread the earlier chapters).
MPEGVideoStreamParser's fTo is the buffer pointer passed in from the RTPSink. saveByte() and save4Bytes() copy data from StreamParser's buffer into fTo; they are meant to be used by subclasses. saveToNextCode() copies data until a synchronization byte string is reached (such as the start codes that separate NAL units in h264, although the codes handled here are not quite the same as h264's); it too is for subclasses. The pure virtual function parse() is clearly the place left for subclasses to write their frame-analysis code. registerReadInterest() is called by the parser's user to tell MPEGVideoStreamParser the address and capacity of the buffer that is to receive a frame.
Now we should analyze MPEGVideoStreamFramer, to make clear how MPEGVideoStreamFramer and MPEGVideoStreamParser cooperate.
Only two important functions in MPEGVideoStreamFramer use the parser. The first is:
void MPEGVideoStreamFramer::doGetNextFrame() {
  fParser->registerReadInterest(fTo, fMaxSize);
  continueReadProcessing();
}
Very simple: it just tells the parser the buffer that will hold the frame and that buffer's size, then executes continueReadProcessing(). So let us look at continueReadProcessing():
void MPEGVideoStreamFramer::continueReadProcessing() {
  unsigned acquiredFrameSize = fParser->parse();
  if (acquiredFrameSize > 0) {
    // We were able to acquire a frame from the input.
    // It has already been copied to the reader's space.
    fFrameSize = acquiredFrameSize;
    fNumTruncatedBytes = fParser->numTruncatedBytes();

    // "fPresentationTime" should have already been computed.

    // Compute "fDurationInMicroseconds" now:
    fDurationInMicroseconds
        = (fFrameRate == 0.0 || ((int) fPictureCount) < 0) ? 0
        : (unsigned) ((fPictureCount * 1000000) / fFrameRate);
    fPictureCount = 0;

    // Call our own 'after getting' function.  Because we're not a 'leaf'
    // source, we can call this directly, without risking infinite recursion.
    afterGetting(this);
  } else {
    // We were unable to parse a complete frame from the input, because:
    // - we had to read more data from the source stream, or
    // - the source stream has ended.
  }
}

It first uses the parser to do the analysis (presumably parsing out one frame?). When the analysis is done, the frame data is already in MPEGVideoStreamFramer's buffer fTo. After computing the frame's duration, it calls FramedSource's afterGetting(), which ultimately calls into the RTPSink's functions.
There is one more source in this chain: H264FUAFragmenter, which H264VideoRTPSink creates when playing starts:

Boolean H264VideoRTPSink::continuePlaying() {
  // First, check whether we have a 'fragmenter' class set up yet.
  // If not, create it now:
  if (fOurFragmenter == NULL) {
    fOurFragmenter = new H264FUAFragmenter(envir(), fSource,
        OutPacketBuffer::maxSize,
        ourMaxPacketSize() - 12/*RTP hdr size*/);
    fSource = fOurFragmenter;
  }

  // Then call the parent class's implementation:
  return MultiFramedRTPSink::continuePlaying();
}

Moreover, it replaces H264VideoStreamFramer as the source that deals directly with the RTPSink. From this point on, whenever the RTPSink wants a frame, it gets it from the fragmenter. Let us look at its most important function:
void H264FUAFragmenter::doGetNextFrame() {
  if (fNumValidDataBytes == 1) {
    // We have no NAL unit data currently in the buffer.  Read a new one:
    fInputSource->getNextFrame(&fInputBuffer[1], fInputBufferSize - 1,
                               afterGettingFrame, this,
                               FramedSource::handleClosure, this);
  } else {
    // We have NAL unit data in the buffer.  There are three cases to consider:
    // 1. There is a new NAL unit in the buffer, and it's small enough to deliver
    //    to the RTP sink (as is).
    // 2. There is a new NAL unit in the buffer, but it's too large to deliver to
    //    the RTP sink in its entirety.  Deliver the first fragment of this data,
    //    as a FU-A packet, with one extra preceding header byte.
    // 3. There is a NAL unit in the buffer, and we've already delivered some
    //    fragment(s) of this.  Deliver the next fragment of this data,
    //    as a FU-A packet, with two extra preceding header bytes.

    if (fMaxSize < fMaxOutputPacketSize) { // shouldn't happen
      envir() << "H264FUAFragmenter::doGetNextFrame(): fMaxSize ("
              << fMaxSize << ") is smaller than expected\n";
    } else {
      fMaxSize = fMaxOutputPacketSize;
    }

    fLastFragmentCompletedNALUnit = True; // by default
    if (fCurDataOffset == 1) { // case 1 or 2
      if (fNumValidDataBytes - 1 <= fMaxSize) { // case 1
        memmove(fTo, &fInputBuffer[1], fNumValidDataBytes - 1);
        fFrameSize = fNumValidDataBytes - 1;
        fCurDataOffset = fNumValidDataBytes;
      } else { // case 2
        // We need to send the NAL unit data as FU-A packets.  Deliver the first
        // packet now.  Note that we add FU indicator and FU header bytes to the front
        // of the packet (reusing the existing NAL header byte for the FU header).
        fInputBuffer[0] = (fInputBuffer[1] & 0xE0) | 28; // FU indicator
        fInputBuffer[1] = 0x80 | (fInputBuffer[1] & 0x1F); // FU header (with S bit)
        memmove(fTo, fInputBuffer, fMaxSize);
        fFrameSize = fMaxSize;
        fCurDataOffset += fMaxSize - 1;
        fLastFragmentCompletedNALUnit = False;
      }
    } else { // case 3
      // We are sending this NAL unit data as FU-A packets.  We've already sent the
      // first packet (fragment).  Now, send the next fragment.  Note that we add
      // FU indicator and FU header bytes to the front.  (We reuse these bytes that
      // we already sent for the first fragment, but clear the S bit, and add the E
      // bit if this is the last fragment.)
      fInputBuffer[fCurDataOffset - 2] = fInputBuffer[0]; // FU indicator
      fInputBuffer[fCurDataOffset - 1] = fInputBuffer[1] & ~0x80; // FU header (no S bit)
      unsigned numBytesToSend = 2 + fNumValidDataBytes - fCurDataOffset;
      if (numBytesToSend > fMaxSize) {
        // We can't send all of the remaining data this time:
        numBytesToSend = fMaxSize;
        fLastFragmentCompletedNALUnit = False;
      } else {
        // This is the last fragment:
        fInputBuffer[fCurDataOffset - 1] |= 0x40; // set the E bit in the FU header
        fNumTruncatedBytes = fSaveNumTruncatedBytes;
      }
      memmove(fTo, &fInputBuffer[fCurDataOffset - 2], numBytesToSend);
      fFrameSize = numBytesToSend;
      fCurDataOffset += numBytesToSend - 2;
    }

    if (fCurDataOffset >= fNumValidDataBytes) {
      // We're done with this data.  Reset the pointers for receiving new data:
      fNumValidDataBytes = fCurDataOffset = 1;
    }

    // Complete delivery to the client:
    FramedSource::afterGetting(this);
  }
}
If the input buffer holds no data, fInputSource->getNextFrame() is called. fInputSource is the H264VideoStreamFramer, whose getNextFrame() ends up calling H264VideoStreamParser's parse(); parse() in turn pulls data from the ByteStreamFileSource and analyzes it. After parse() completes, the following gets called:
void H264FUAFragmenter::afterGettingFrame1(
    unsigned frameSize, unsigned numTruncatedBytes,
    struct timeval presentationTime, unsigned durationInMicroseconds)
{
  fNumValidDataBytes += frameSize;
  fSaveNumTruncatedBytes = numTruncatedBytes;
  fPresentationTime = presentationTime;
  fDurationInMicroseconds = durationInMicroseconds;

  // Deliver data to the client:
  doGetNextFrame();
}

which then calls back into H264FUAFragmenter::doGetNextFrame(). This time the input buffer does hold data, so H264FUAFragmenter goes ahead with its analysis and processing. And what exactly does H264FUAFragmenter do to the data?