Analysis of live555's calls to increaseSendBufferTo

Setting a breakpoint in groupsock\GroupsockHelper.cpp shows that when a single RTSP-over-TCP client connects, increaseSendBufferTo is called six times in total. The socket number and buffer size printed for each call are:

increaseBufferTo socket=824 size=51200
increaseBufferTo socket=972 size=51200
increaseBufferTo socket=976 size=51200
increaseBufferTo socket=988 size=51200
increaseBufferTo socket=988 size=1125000
increaseBufferTo socket=992 size=51200

The socket that RTPInterface::sendDataOverTCP ultimately uses is number 972, i.e. the one from the second increaseSendBufferTo call. This means the TCP send buffer is only 51200 bytes. Below we examine where each of these six increaseSendBufferTo calls happens.

The first call happens in RTSPServer::createNew, which in turn calls GenericMediaServer::setUpOurSocket:

    ourSocket = setupStreamSocket(env, ourPort);
    if (ourSocket < 0) break;
    
    // Make sure we have a big send buffer:
    if (!increaseSendBufferTo(env, ourSocket, 50*1024)) break;

    // Allow multiple simultaneous connections:
    if (listen(ourSocket, LISTEN_BACKLOG_SIZE) < 0) {
      env.setResultErrMsg("listen() failed: ");
      break;
    }

As can be seen, this operates on the listening socket, so it has little practical effect.

The second call happens in GenericMediaServer::incomingConnectionHandlerOnSocket:

  struct sockaddr_in clientAddr;
  SOCKLEN_T clientAddrLen = sizeof clientAddr;
  int clientSocket = accept(serverSocket, (struct sockaddr*)&clientAddr, &clientAddrLen);
  if (clientSocket < 0) {
    int err = envir().getErrno();
    if (err != EWOULDBLOCK) {
      envir().setResultErrMsg("accept() failed: ");
    }
    return;
  }
  ignoreSigPipeOnSocket(clientSocket); // so that clients on the same host that are killed don't also kill us
  makeSocketNonBlocking(clientSocket);
  increaseSendBufferTo(envir(), clientSocket, 50*1024);

This configures the incoming TCP socket, i.e. the TCP channel that will later carry both RTSP commands and, for RTP-over-TCP, the RTP data itself.

The third call happens in RTSPServer::RTSPClientConnection::handleCmd_DESCRIBE. This function calls ServerMediaSession::generateSDPDescription, which ultimately constructs an RTPInterface object:

    RTPInterface::RTPInterface(Medium* owner, Groupsock* gs)
      : fOwner(owner), fGS(gs),
        fTCPStreams(NULL),
        fNextTCPReadSize(0), fNextTCPReadStreamSocketNum(-1),
        fNextTCPReadStreamChannelId(0xFF), fReadHandlerProc(NULL),
        fAuxReadHandlerFunc(NULL), fAuxReadHandlerClientData(NULL) {
      // Make the socket non-blocking, even though it will be read from only asynchronously, when packets arrive.
      // The reason for this is that, in some OSs, reads on a blocking socket can (allegedly) sometimes block,
      // even if the socket was previously reported (e.g., by "select()") as having data available.
      // (This can supposedly happen if the UDP checksum fails, for example.)
      makeSocketNonBlocking(fGS->socketNum());
      increaseSendBufferTo(envir(), fGS->socketNum(), 50*1024);
    }

What gets configured here is the Groupsock belonging to the RTPInterface, i.e. the (multicast-capable) UDP socket.

The fourth and fifth calls happen in RTSPServer::RTSPClientSession::handleCmd_SETUP, which calls OnDemandServerMediaSubsession::getStreamParameters:

    rtpSink = createNewRTPSink(rtpGroupsock, rtpPayloadType, mediaSource);
    if (rtpSink != NULL && rtpSink->estimatedBitrate() > 0) streamBitrate = rtpSink->estimatedBitrate();

    // Turn off the destinations for each groupsock.  They'll get set later
    // (unless TCP is used instead):
    if (rtpGroupsock != NULL) rtpGroupsock->removeAllDestinations();
    if (rtcpGroupsock != NULL) rtcpGroupsock->removeAllDestinations();

    if (rtpGroupsock != NULL) {
      // Try to use a big send buffer for RTP -  at least 0.1 second of
      // specified bandwidth and at least 50 KB
      unsigned rtpBufSize = streamBitrate * 25 / 2; // 1 kbps * 0.1 s = 12.5 bytes
      if (rtpBufSize < 50 * 1024) rtpBufSize = 50 * 1024;
      increaseSendBufferTo(envir(), rtpGroupsock->socketNum(), rtpBufSize);
    }

createNewRTPSink first constructs an RTPInterface, whose constructor sets the send buffer size once (the fourth call). The code above then sets the buffer on rtpGroupsock a second time, sized from the stream's bitrate (the fifth call). Clearly, both of these increaseSendBufferTo calls target the UDP groupsock, not the TCP channel.

The sixth call happens in RTSPServer::RTSPClientSession::handleCmd_PLAY. This function calls OnDemandServerMediaSubsession::startStream, which eventually creates an RTCPInstance object. RTCPInstance has an RTPInterface fRTCPInterface member, so constructing it creates yet another RTPInterface and thereby triggers one more increaseSendBufferTo call.

In summary, the RTSP TCP channel's send buffer is set only once, to 50*1024 bytes. That value is clearly too small, and insufficient for sending high-bitrate video.
