[Live-devel] Detecting network failure

Roberts, Randy Randy.Roberts at flir.com
Mon Mar 22 09:26:52 PDT 2010


 

Previously I was watching RTPOverTCP_OK to determine when there was a
failure in TCP streaming, but that's since been removed.  How can I
determine when streaming has ceased because the network has failed (my
internet connection goes down, for example)?

In particular, I need to do this with a DarwinInjector instance.
Previously I'd check the status of RTPOverTCP_OK every 500 msec; what
options do I now have to determine if streaming is failing?

Thanks,

Jeremy

 

Hi Jeremy,

 

When this happens, do you, by chance, notice the CPU utilization go close
to 100% on your target?

 

I've seen this phenomenon when the client side closes an RTP-over-TCP
connection in my RTP/RTSP/TCP/HTTP environment...VLC sends an RTCP BYE,
and then closes the socket (it didn't send an RTSP TEARDOWN). Can anyone
confirm that behavior? Is it correct/expected? (I'm not sure why it
wouldn't send the TEARDOWN too.)

 

Anyway, as an experiment, I changed the sendRTPOverTCP() helper function
to return the socket status from send(), and in sendPacket(), upon a
non-zero return from sendRTPOverTCP(), I call removeStreamSocket().

 

In RTPInterface::sendPacket(...):

  // Also, send over each of our TCP sockets:
  for (tcpStreamRecord* streams = fTCPStreams; streams != NULL;) {
    // Grab the next pointer first: removeStreamSocket() may delete the
    // current record out from under us.
    tcpStreamRecord* next = streams->fNext;
    if (sendRTPOverTCP(packet, packetSize, streams->fStreamSocketNum,
                       streams->fStreamChannelId)) {
#ifdef DEBUG_RTP
      fprintf(stderr, "RTPInterface::sendRTPOverTCP() failed..."
              "remoteStreamSocket %d, channelID %d\n",
              streams->fStreamSocketNum, streams->fStreamChannelId);
#endif
      removeStreamSocket(streams->fStreamSocketNum,
                         streams->fStreamChannelId);
    }
    streams = next;
  }
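
For reference, the corresponding change to sendRTPOverTCP() might look
something like the sketch below. It's only a sketch, assuming the stock
RTSP interleaved framing ('$', channel id, 2-byte big-endian payload
size, per RFC 2326 section 10.12); the 0-on-success / -1-on-failure
return convention is my own choice, not something from the library:

  static int sendRTPOverTCP(u_int8_t* packet, unsigned packetSize,
                            int socketNum, unsigned char streamChannelId) {
    // Standard RTSP interleaved framing: '$', channel id, 16-bit size:
    u_int8_t framingHeader[4];
    framingHeader[0] = '$';
    framingHeader[1] = streamChannelId;
    framingHeader[2] = (u_int8_t)((packetSize >> 8) & 0xFF);
    framingHeader[3] = (u_int8_t)(packetSize & 0xFF);

    // Propagate any send() failure, so that sendPacket() can tear down
    // the stream socket:
    if (send(socketNum, (char const*)framingHeader, 4, 0) != 4) return -1;
    if (send(socketNum, (char const*)packet, packetSize, 0) != (int)packetSize) return -1;
    return 0;
  }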

 

 

Eventually, the server side will time out the connection when the
"liveness" check fails, and all context, as far as I can tell, remains
intact...
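
If you want dead sessions reaped sooner, that liveness timeout is
configurable when the server is created. A minimal sketch, assuming the
four-argument RTSPServer::createNew() with its reclamationTestSeconds
parameter (default 65), and an arbitrary port of 8554:

  #include "liveMedia.hh"
  #include "BasicUsageEnvironment.hh"

  int main() {
    TaskScheduler* scheduler = BasicTaskScheduler::createNew();
    UsageEnvironment* env = BasicUsageEnvironment::createNew(*scheduler);

    // Reclaim a client session after 30 seconds without RTSP commands
    // or incoming RTCP "liveness" reports (the default is 65 seconds):
    RTSPServer* rtspServer = RTSPServer::createNew(*env, 8554, NULL, 30);
    if (rtspServer == NULL) {
      *env << "Failed to create RTSP server: " << env->getResultMsg() << "\n";
      return 1;
    }

    env->taskScheduler().doEventLoop(); // does not return
    return 0;
  }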

 

This change keeps tcpReadHandler() from calling select() (via
readSocket()->blockUntilReadable()) with a non-NULL timeout of
tv_sec = 0, which returns 0 immediately on every pass through the event
loop. It also keeps sendPacket() from writing to a broken pipe for every
packet; I'm guessing the latter is the cause of the CPU utilization
spike.
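
To make the "broken pipe" point concrete: once the peer has closed the
TCP connection, every send() fails immediately, so checking its return
value is the cheap way to notice. A standalone POSIX sketch (the names
are made up for illustration; MSG_NOSIGNAL is Linux-specific, elsewhere
you'd ignore SIGPIPE instead):

  #include <sys/socket.h>
  #include <cerrno>
  #include <cstdio>

  // Returns false once the peer has closed the connection. Without a
  // check like this, each packet sent to the dead socket is a wasted
  // syscall (plus a zero-timeout select() per event-loop pass), which
  // is consistent with the CPU spike described above.
  bool socketStillWritable(int sock, char const* buf, size_t len) {
    ssize_t n = send(sock, buf, len, MSG_NOSIGNAL); // Linux: suppress SIGPIPE
    if (n < 0 && (errno == EPIPE || errno == ECONNRESET)) {
      fprintf(stderr, "peer closed the connection (errno %d)\n", errno);
      return false;
    }
    return true;
  }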

 

Without this change, I see the CPU utilization spike until the liveness
check removes the session/socket, etc.

 

I haven't found any negative side effects from this change...and
performance seems more stable...

 

Randy

 
