[Live-devel] Observing data loss in Linux user space with testRTSPClient

Marathe, Yogesh yogesh_marathe at ti.com
Fri Feb 1 00:06:09 PST 2013


Hi,

I added some simple logic and a few counters to DummySink in testRTSPClient to calculate the received bitrate in afterGettingFrame(), and printed it at 30-second intervals, giving the received bitrate per stream. When I opened 4 connections to IP cameras (each streaming at 8 Mbps CBR), the application received 30-32 Mbps as expected. However, when I opened 8 streams from different IP cameras (effectively 64 Mbps coming in), testRTSPClient could not receive more than 25 Mbps collectively.

At the same time, 'ifconfig' shows that 64 Mbps is arriving: if I run 'ifconfig' twice, 30 seconds apart, and compute the bitrate from the difference in 'RX bytes', it comes out to approximately 64 Mbps. I take that to mean the driver is not dropping significant amounts of data. I also observed that the CPU load stayed below 50%, so the CPU is not overloaded.
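For reference, here is a minimal sketch of the kind of accounting I added. The members fBytesReceived and fLastReportTime are illustrative additions to DummySink, not part of the stock testRTSPClient:

    // Illustrative additions to DummySink::afterGettingFrame() in
    // testRTSPClient.cpp. fBytesReceived (u_int64_t) and fLastReportTime
    // (struct timeval) are hypothetical members added to DummySink.
    // gettimeofday() comes from <sys/time.h>.
    void DummySink::afterGettingFrame(unsigned frameSize, unsigned numTruncatedBytes,
                                      struct timeval presentationTime,
                                      unsigned /*durationInMicroseconds*/) {
      fBytesReceived += frameSize; // count every payload byte delivered to us

      struct timeval now;
      gettimeofday(&now, NULL);
      double elapsed = (now.tv_sec - fLastReportTime.tv_sec)
                     + (now.tv_usec - fLastReportTime.tv_usec) / 1e6;
      if (elapsed >= 30.0) { // report once every ~30 seconds
        double mbps = (fBytesReceived * 8.0) / elapsed / 1e6;
        envir() << "Stream \"" << fStreamId << "\": " << mbps << " Mbps\n";
        fBytesReceived = 0;
        fLastReportTime = now;
      }

      continuePlaying(); // request the next frame, as the stock code does
    }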

Why is the same bitrate not observed in user space? Where is the data being dropped? Is the application simply not consuming data fast enough (i.e., is select() not being called often enough)?

Before doing this experiment I made sure of the following:

1. Changed DUMMY_SINK_RECEIVE_BUFFER_SIZE to 10000000.

2. Set unsigned RTSPClient::responseBufferSize = 10000000;

3. Tuned the system's net.core.rmem_max and related sysctl parameters. I am also calling setReceiveBufferTo() to increase the socket receive buffer to 0xDA000 (~892 KB); a sketch of this change follows the list.
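In case it matters, the socket-buffer change looks roughly like this. The placement in continueAfterSETUP() and the NULL check are my own choices; setReceiveBufferTo() is declared in GroupsockHelper.hh:

    // Sketch of change (3), as it might sit in continueAfterSETUP() in
    // testRTSPClient.cpp, after the SETUP request succeeds.
    #include "GroupsockHelper.hh"

    if (scs.subsession->rtpSource() != NULL) {
      int socketNum = scs.subsession->rtpSource()->RTPgs()->socketNum();
      // 0xDA000 = 892928 bytes; the kernel caps SO_RCVBUF at
      // net.core.rmem_max, which is why I raised that sysctl as well.
      setReceiveBufferTo(env, socketNum, 0xDA000);
    }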

Please let me know if you can foresee where the bottleneck could be. I'm running Linux on the receiving side.

Regards,
Yogesh