live555 Source Code Analysis ---- Handling of mpg Files

    Most of the file formats live555 supports are single-stream; the only mixed audio/video container types it supports are *.mpg, *.mkv, and *.webm. My actual goal is to extend the set of supported formats (to avi, for example), so let's analyze how mpg files are handled.


    A fair number of classes are involved in mpg file handling, and the relationships among them are complicated. The RTP packetization process (carried out in the RTPSink) is broadly similar for all media types (the details differ, and those are handled in the media-specific ***RTPSink subclasses), so the analysis here focuses on how data is fetched from the source. One subsession corresponds to one stream, and when a session contains multiple subsessions, each subsession must be controlled individually. We can see that when the "PLAY" command is handled, startStream is called once for every subsession in the session:


void RTSPServer::RTSPClientSession
  ::handleCmd_PLAY(ServerMediaSubsession* subsession, char const* cseq,
		   char const* fullRequestStr) {
  ...
  // Media data transmission finally begins here
  // Now, start streaming:
  for (i = 0; i < fNumStreamStates; ++i) {
    if (subsession == NULL /* means: aggregated operation */
	|| subsession == fStreamStates[i].subsession) {
      unsigned short rtpSeqNum = 0;
      unsigned rtpTimestamp = 0;

      // Start data transmission on each subsession, i.e., playback begins
      fStreamStates[i].subsession->startStream(fOurSessionId,
					       fStreamStates[i].streamToken,
					       (TaskFunc*)noteClientLiveness, this,
					       rtpSeqNum, rtpTimestamp,
					       handleAlternativeRequestByte, this);
      ...
    }
  }
  ...
}


    
    The handling of the PLAY command was analyzed in an earlier article: subsession->startStream starts playback on each stream, and fetching the source data happens in MultiFramedRTPSink::packFrame().


void MultiFramedRTPSink::packFrame() {
  if (fOutBuf->haveOverflowData()) {
    ...
  } else {
    ...
    //
    // Fetch the next frame from the source
    //
    fSource->getNextFrame(fOutBuf->curPtr(), fOutBuf->totalBytesAvailable(),
			  afterGettingFrame, this, ourHandleClosure, this);
  }
}


    For mpeg, fSource is an instance of MPEG1or2VideoStreamFramer or MPEG1or2AudioStreamFramer (according to the live555 source there is also an AAC audio variant, skipped here for brevity). Their inheritance chains are:
    MPEG1or2VideoStreamFramer->MPEGVideoStreamFramer->FramedFilter->FramedSource->MediaSource
    MPEG1or2AudioStreamFramer->FramedSource->MediaSource


    Let's analyze the mpeg video path first.
    The MPEG1or2VideoStreamFramer class itself is simple; its main job is to create the corresponding parser, an MPEG1or2VideoStreamParser instance. MPEG1or2VideoStreamFramer is clearly a filter (it derives from FramedFilter, not directly from FramedSource), and tracing its creation process shows that its input source is an MPEG1or2DemuxedElementaryStream instance. For single-stream files the filter usually wraps a ByteStreamFileSource instance instead, and as we will see later, the component that ultimately reads the file is still a ByteStreamFileSource. The parsing itself is not analyzed here; we only care about how audio and video data are extracted from the file, so we go straight to the MPEG1or2DemuxedElementaryStream class that the filter wraps. Inside the parser, MPEG1or2DemuxedElementaryStream's getNextFrame function is called.
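
    To make the wiring concrete, here is a minimal sketch of how these objects get chained together. This is roughly what MPEG1or2FileServerDemux arranges internally when serving a .mpg file, not a verbatim copy of it; "test.mpg" is a placeholder file name and error handling is omitted:

// A minimal sketch of assembling the mpg video source chain:
// ByteStreamFileSource -> MPEG1or2Demux -> MPEG1or2DemuxedElementaryStream
// -> MPEG1or2VideoStreamFramer.
#include "liveMedia.hh"
#include "BasicUsageEnvironment.hh"

FramedSource* createMpgVideoSource(UsageEnvironment& env) {
  // The component that actually reads the file:
  ByteStreamFileSource* fileSource
    = ByteStreamFileSource::createNew(env, "test.mpg"); // placeholder name
  if (fileSource == NULL) return NULL;

  // The demultiplexer, shared by all elementary streams of this file:
  MPEG1or2Demux* demux = MPEG1or2Demux::createNew(env, fileSource);

  // One elementary stream (here: the video stream) extracted from the demux:
  MPEG1or2DemuxedElementaryStream* videoES = demux->newVideoStream();

  // The filter that parses the elementary stream into video frames:
  return MPEG1or2VideoStreamFramer::createNew(env, videoES);
}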
    

    getNextFrame is a non-virtual function defined in FramedSource; its implementation follows:

void FramedSource::getNextFrame(unsigned char* to, unsigned maxSize,
				afterGettingFunc* afterGettingFunc,
				void* afterGettingClientData,
				onCloseFunc* onCloseFunc,
				void* onCloseClientData) {
  // Make sure we're not already being read:
  if (fIsCurrentlyAwaitingData) {
    envir() << "FramedSource[" << this << "]::getNextFrame(): attempting to read more than once at the same time!\n";
    envir().internalError();
  }


  fTo = to;
  fMaxSize = maxSize;
  fNumTruncatedBytes = 0; // by default; could be changed by doGetNextFrame()
  fDurationInMicroseconds = 0; // by default; could be changed by doGetNextFrame()
  fAfterGettingFunc = afterGettingFunc;
  fAfterGettingClientData = afterGettingClientData;
  fOnCloseFunc = onCloseFunc;
  fOnCloseClientData = onCloseClientData;
  fIsCurrentlyAwaitingData = True;


  doGetNextFrame();     // fetch the next frame
}



    FramedSource::getNextFrame merely initializes some member variables; the rest of the work is handed off to doGetNextFrame, a pure virtual function of FramedSource that is reimplemented in the subclass MPEG1or2DemuxedElementaryStream.
    
void MPEG1or2DemuxedElementaryStream::doGetNextFrame() {
  fOurSourceDemux.getNextFrame(fOurStreamIdTag, fTo, fMaxSize,
			       afterGettingFrame, this,
			       handleClosure, this);
}



    fOurSourceDemux is defined as a reference to an MPEG1or2Demux object (which inherits directly from Medium). Why a reference rather than a pointer? Consider the classes we have met so far: MPEG1or2DemuxedServerMediaSubsession, MPEG1or2VideoRTPSink, MPEG1or2VideoStreamFramer, MPEG1or2VideoStreamParser. One instance of each is created per stream in the file (only the video-related classes are listed here), yet all of those streams come from a single file, so in the end there can be only one implementation of the file-reading process. Each stream in the file gets its own MPEG1or2DemuxedElementaryStream instance, but all of them correspond to the same MPEG1or2Demux instance. Using a reference rather than a pointer emphasizes that fOurSourceDemux does not belong to any particular MPEG1or2DemuxedElementaryStream instance.
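
    As a simplified illustration of this ownership pattern (not the actual live555 declarations, which live in MPEG1or2DemuxedElementaryStream.hh):

// Simplified illustration: many elementary streams share one demux.
class MPEG1or2Demux;                       // one instance per file

class MPEG1or2DemuxedElementaryStream {    // one instance per stream
public:
  MPEG1or2DemuxedElementaryStream(MPEG1or2Demux& demux, unsigned char streamIdTag)
    : fOurSourceDemux(demux), fOurStreamIdTag(streamIdTag) {}
private:
  MPEG1or2Demux& fOurSourceDemux; // a reference: shared, never owned or reseated
  unsigned char fOurStreamIdTag;  // identifies this stream within the demux
};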


    Before the MPEG1or2Demux instance is created, a ByteStreamFileSource is created. Now look at the definition of MPEG1or2Demux::getNextFrame:

void MPEG1or2Demux::getNextFrame(u_int8_t streamIdTag,
				 unsigned char* to, unsigned maxSize,
				 FramedSource::afterGettingFunc* afterGettingFunc,
				 void* afterGettingClientData,
				 FramedSource::onCloseFunc* onCloseFunc,
				 void* onCloseClientData) {
  // First, check whether we have saved data for this stream id:
  //
  // Check whether the cache already holds data for stream streamIdTag
  //
  if (useSavedData(streamIdTag, to, maxSize,
		   afterGettingFunc, afterGettingClientData)) {
    return;
  }
    
  // Note: the callback functions are registered here
  // Then save the parameters of the specified stream id:
  registerReadInterest(streamIdTag, to, maxSize,
		       afterGettingFunc, afterGettingClientData,
		       onCloseFunc, onCloseClientData);


  // Next, if we're the only currently pending read, continue looking for data:
  if (fNumPendingReads == 1 || fHaveUndeliveredData) {
    fHaveUndeliveredData = 0;
    continueReadProcessing();       // continue reading data
  } // otherwise the continued read processing has already been taken care of
}


    
    The code above first calls useSavedData to check whether data for the requested stream already exists in the cache; if not, it has to be read from the file. We can now make a conjecture: the cache is necessary because the different streams in the file are interleaved, while reads generally proceed sequentially. Suppose we want to read a video frame, but what sits at the current file position happens to be an audio frame; that audio frame must then be saved into the cache, and we keep reading until video data turns up. Is that what actually happens?
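
    If the conjecture is right, every parsed packet should go through a decision of roughly the following shape. This is a hypothetical sketch; none of these helper names exist in live555, and the real logic turns out to live in MPEGProgramStreamParser::parsePESPacket, shown later:

// Hypothetical per-packet decision we expect the demux to make.
// Helper stubs are declared only to make the shape clear.
bool readerIsWaitingNow(unsigned char streamIdTag);
bool readerExistsButIdle(unsigned char streamIdTag);
bool streamMayBeReadLater(unsigned char streamIdTag);
void deliverDirectly(unsigned char streamIdTag);
void rewindAndPunt();
void appendToSavedData(unsigned char streamIdTag);
void discardPacket();

void onPacketParsed(unsigned char streamIdTag) {
  if (readerIsWaitingNow(streamIdTag)) {
    deliverDirectly(streamIdTag);      // a reader is blocked on this stream
  } else if (readerExistsButIdle(streamIdTag)) {
    rewindAndPunt();                   // deliver later, when the reader asks
  } else if (streamMayBeReadLater(streamIdTag)) {
    appendToSavedData(streamIdTag);    // cache it in the stream's descriptor
  } else {
    discardPacket();                   // nobody is interested in this stream
  }
}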
    
    First, the implementation of MPEG1or2Demux::useSavedData:
Boolean MPEG1or2Demux::useSavedData(u_int8_t streamIdTag,
				    unsigned char* to, unsigned maxSize,
				    FramedSource::afterGettingFunc* afterGettingFunc,
				    void* afterGettingClientData) {
  struct OutputDescriptor& out = fOutput[streamIdTag];      // fOutput is the array of per-stream caches
  // In the common case the cache is empty, so return immediately
  if (out.savedDataHead == NULL) return False; // common case


  unsigned totNumBytesCopied = 0;


  //
  // Copy cached data (up to maxSize bytes) out of the OutputDescriptor
  //
  while (maxSize > 0 && out.savedDataHead != NULL) {
    OutputDescriptor::SavedData& savedData = *(out.savedDataHead);
    unsigned char* from = &savedData.data[savedData.numBytesUsed];
    unsigned numBytesToCopy = savedData.dataSize - savedData.numBytesUsed;
    if (numBytesToCopy > maxSize) numBytesToCopy = maxSize; 
    memmove(to, from, numBytesToCopy);
    to += numBytesToCopy;
    maxSize -= numBytesToCopy;
    out.savedDataTotalSize -= numBytesToCopy;
    totNumBytesCopied += numBytesToCopy;
    savedData.numBytesUsed += numBytesToCopy;
    if (savedData.numBytesUsed == savedData.dataSize) {
      out.savedDataHead = savedData.next;
      if (out.savedDataHead == NULL) out.savedDataTail = NULL;
      savedData.next = NULL;
      delete &savedData;
    }
  }


  out.isCurrentlyActive = True;
  if (afterGettingFunc != NULL) {
    struct timeval presentationTime;
    presentationTime.tv_sec = 0; presentationTime.tv_usec = 0; // should fix #####
    (*afterGettingFunc)(afterGettingClientData, totNumBytesCopied,  
			0 /* numTruncatedBytes */, presentationTime,    
			0 /* durationInMicroseconds ?????#####*/);
  }
  return True;
}


    Let's analyze the code above. The cache is defined as an array of OutputDescriptor; its declaration reads
      OutputDescriptor_t fOutput[256];
    A full 256 elements! But that is precisely what allows the stream index streamIdTag (an 8-bit value, hence always below 256) to index directly into the array. Each stream corresponds to one OutputDescriptor instance, and the actual data is kept in a linked list of OutputDescriptor::SavedData nodes.
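
    For reference, here is a trimmed-down sketch of these two structures, showing only the fields that appear in the code quoted in this article (the real definitions in MPEG1or2Demux.hh differ in detail):

// Trimmed-down sketch of the per-stream cache structures.
#include "FramedSource.hh" // for FramedSource::afterGettingFunc, Boolean

struct SavedData {            // one node of cached, not-yet-delivered data
  SavedData* next;
  unsigned char* data;        // the cached bytes
  unsigned dataSize;          // number of bytes in 'data'
  unsigned numBytesUsed;      // bytes already handed to the reader
};

struct OutputDescriptor {     // one per possible streamIdTag (256 in total)
  // parameters of the pending read, set by registerReadInterest():
  unsigned char* to; unsigned maxSize;
  FramedSource::afterGettingFunc* fAfterGettingFunc;
  void* afterGettingClientData;
  // results of the most recently acquired frame:
  unsigned frameSize; struct timeval presentationTime;
  // the cache itself: a singly linked list of SavedData nodes
  SavedData* savedDataHead; SavedData* savedDataTail;
  unsigned savedDataTotalSize;
  // status flags driving the three-way decision in parsePESPacket():
  Boolean isPotentiallyReadable;   // someone may read this stream later
  Boolean isCurrentlyActive;       // someone has been reading this stream
  Boolean isCurrentlyAwaitingData; // a reader is blocked waiting right now
};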


    Now back to MPEG1or2Demux::getNextFrame. We need to see how the different media streams are separated out, so look at continueReadProcessing:
void MPEG1or2Demux::continueReadProcessing() {
  while (fNumPendingReads > 0) {
    unsigned char acquiredStreamIdTag = fParser->parse();   // parse the file

    if (acquiredStreamIdTag != 0) { // 0 means the desired data was not acquired
      // We acquired a frame from the input source
      // We were able to acquire a frame from the input.
      struct OutputDescriptor& newOut = fOutput[acquiredStreamIdTag];
      newOut.isCurrentlyAwaitingData = False;   // we can now read the next frame;
          // parse() uses this flag to tell whether a stream is being waited on
      // indicates that we can be read again
        // (This needs to be set before the 'after getting' call below,
        //  in case it tries to read another frame)

      //
      // Invoke the 'after getting' callback
      //
      // Call our own 'after getting' function.  Because we're not a 'leaf'
      // source, we can call this directly, without risking infinite recursion.
      if (newOut.fAfterGettingFunc != NULL) {
	(*newOut.fAfterGettingFunc)(newOut.afterGettingClientData,
				    newOut.frameSize, 0 /* numTruncatedBytes */,
				    newOut.presentationTime,
				    0 /* durationInMicroseconds ?????#####*/);
	--fNumPendingReads;
      }
    } else {
      // We were unable to parse a complete frame from the input, because:
      // - we had to read more data from the source stream, or
      // - we found a frame for a stream that was being read, but whose
      //   reader is not ready to get the frame right now, or
      // - the source stream has ended.
      break;
    }
  }
}


    The implementation of MPEG1or2Demux::continueReadProcessing should look familiar! Indeed, it resembles MPEGVideoStreamFramer::continueReadProcessing(). Both functions invoke a parser, but note that the former's parser is an MPEGProgramStreamParser instance, which parses the container file with the main goal of separating out the audio and video stream data (that is, the demux step), while the latter's is an MPEG1or2VideoStreamParser, which further analyzes one particular stream. MPEGProgramStreamParser's data source is a ByteStreamFileSource instance, passed down as a parameter through the MPEG1or2Demux constructor. If the desired data could not be acquired, fParser->parse() returns 0.


    Here is the implementation of MPEGProgramStreamParser::parse:
unsigned char MPEGProgramStreamParser::parse() {
  unsigned char acquiredStreamTagId = 0;


  try {
    do {
      switch (fCurrentParseState) {
      case PARSING_PACK_HEADER: {
	parsePackHeader();      // parse the pack header
	break;
      }
      case PARSING_SYSTEM_HEADER: {
	parseSystemHeader();    // parse the system header
	break;
      }
      case PARSING_PES_PACKET: {
	acquiredStreamTagId = parsePESPacket(); // parse stream (PES packet) data
	break;
      }
      }
    } while(acquiredStreamTagId == 0);


    return acquiredStreamTagId;
  } catch (int /*e*/) {
#ifdef DEBUG
    fprintf(stderr, "MPEGProgramStreamParser::parse() EXCEPTION (This is normal behavior - *not* an error)\n");
    fflush(stderr);
#endif
    return 0;  // the parsing got interrupted
  }
}



    The details of the mpeg file format are beside the point here, so we skip them and look only at the function that extracts a packet of stream data, MPEGProgramStreamParser::parsePESPacket:

unsigned char MPEGProgramStreamParser::parsePESPacket() {
...
    //
    // Check whether the using source is waiting for this stream type;
    // if so, deliver the data to it
    //
    // Check whether our using source is interested in this stream type.
    // If so, deliver the frame to him:
    MPEG1or2Demux::OutputDescriptor_t& out = fUsingDemux->fOutput[stream_id];
    if (out.isCurrentlyAwaitingData) {
      unsigned numBytesToCopy;
      if (PES_packet_length > out.maxSize) {
	numBytesToCopy = out.maxSize;
      } else {
	numBytesToCopy = PES_packet_length;
      }

      getBytes(out.to, numBytesToCopy); // copy the data
      out.frameSize = numBytesToCopy;

      // set out.presentationTime later #####
      acquiredStreamIdTag = stream_id;
      PES_packet_length -= numBytesToCopy;
    } else if (out.isCurrentlyActive) {
      //
      // This stream is needed, just not right now; the data can only be
      // handed over once its reader asks for it
      //
      // Someone has been reading this stream, but isn't right now.
      // We can't deliver this frame until he asks for it, so punt for now.
      // The next time he asks for a frame, he'll get it.
      restoreSavedParserState(); // so we read from the beginning next time
      fUsingDemux->fHaveUndeliveredData = True;
      throw READER_NOT_READY;   // the exception is caught in parse() above
    } else if (out.isPotentiallyReadable &&
	       out.savedDataTotalSize + PES_packet_length < 1000000 /*limit*/) {
      //
      // This stream will also be needed, it just isn't being read at the
      // moment, so save its data into the cache (its OutputDescriptor)
      //
      // Someone is interested in this stream, but hasn't begun reading it yet.
      // Save this data, so that the reader will get it when he later asks for it.
      unsigned char* buf = new unsigned char[PES_packet_length];
      getBytes(buf, PES_packet_length);
      MPEG1or2Demux::OutputDescriptor::SavedData* savedData
	= new MPEG1or2Demux::OutputDescriptor::SavedData(buf, PES_packet_length);   // create a new SavedData node
      if (out.savedDataHead == NULL) {
	out.savedDataHead = out.savedDataTail = savedData;
      } else {
	out.savedDataTail->next = savedData;
	out.savedDataTail = savedData;
      }
      out.savedDataTotalSize += PES_packet_length;
      PES_packet_length = 0;
    }
    skipBytes(PES_packet_length);
  }

  // Check for another PES Packet next:
  setParseState(PARSING_PES_PACKET);

  return acquiredStreamIdTag;
}


    MPEGProgramStreamParser::parsePESPacket reads data from the stream; if the data read does not belong to the stream currently being requested, it is saved into the cache, that is, into the corresponding OutputDescriptor instance. Note the local variable acquiredStreamIdTag: it is initialized to 0 and assigned only when data for the currently requested stream is read, so the function returns 0 whenever the desired data was not acquired.
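
    To tie the pieces together, here is a minimal standalone sketch that drives the same read chain (framer -> demuxed elementary stream -> demux -> ByteStreamFileSource) outside an RTSP server. It is a sketch under assumptions: live555 headers installed, a placeholder file name "test.mpg", a single fixed buffer, and no teardown or error recovery:

// Drive the chain analyzed above directly, without an RTSP server.
#include "liveMedia.hh"
#include "BasicUsageEnvironment.hh"

static unsigned char gBuf[100000];
static FramedSource* gVideoSource = NULL;

static void afterGettingFrame(void* /*clientData*/, unsigned frameSize,
			      unsigned /*numTruncatedBytes*/,
			      struct timeval presentationTime,
			      unsigned /*durationInMicroseconds*/) {
  fprintf(stderr, "got a frame: %u bytes, pts %ld.%06ld\n", frameSize,
	  (long)presentationTime.tv_sec, (long)presentationTime.tv_usec);
  // Request the next frame; this re-enters the whole demux/parse machinery:
  gVideoSource->getNextFrame(gBuf, sizeof gBuf, afterGettingFrame, NULL,
			     NULL, NULL);
}

int main() {
  TaskScheduler* scheduler = BasicTaskScheduler::createNew();
  UsageEnvironment* env = BasicUsageEnvironment::createNew(*scheduler);

  ByteStreamFileSource* file = ByteStreamFileSource::createNew(*env, "test.mpg");
  if (file == NULL) { *env << "cannot open file\n"; return 1; }
  MPEG1or2Demux* demux = MPEG1or2Demux::createNew(*env, file);
  gVideoSource = MPEG1or2VideoStreamFramer::createNew(*env, demux->newVideoStream());

  // Kick off the first read; all later reads are chained in the callback:
  gVideoSource->getNextFrame(gBuf, sizeof gBuf, afterGettingFrame, NULL,
			     NULL, NULL);
  env->taskScheduler().doEventLoop(); // does not return
  return 0;
}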
