[Live-devel] Live555 MPEG2 Transport Stream packet loss

ilya77 at gmail.com
Thu Apr 26 05:33:31 PDT 2007


Hi All,

First of all, let me express my gratitude for the elaborate and thoughtful
replies from Ross, Xochitl and Morgan, and apologise in advance for the long
text below; however, we are quite desperate...

We've done some homework in order to answer the questions that were previously
asked, and here are the answers, which will hopefully clear things up:

We've compared two packet-size settings, the default (1448) and a custom
size (1328 = 188*7 + 12). In both cases Ethereal showed 43% packet loss
(indicated by gaps in the RTP sequence numbers).
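
For reference, one way to set a custom size like this in LIVE555 is via
MultiFramedRTPSink's setPacketSizes(). This is only a hedged sketch, and I don't
remember offhand whether the value passed should include the 12-byte RTP header,
so it is worth double-checking against MultiFramedRTPSink.cpp:

    #include "liveMedia.hh"

    // Sketch: keep each RTP payload a whole number of 188-byte TS packets.
    // MultiFramedRTPSink (the base class of SimpleRTPSink and friends)
    // exposes setPacketSizes(preferredPacketSize, maxPacketSize).
    void useTsAlignedPacketSize(MultiFramedRTPSink* sink) {
      unsigned const tsPacket = 188, tsPerRtp = 7, rtpHeader = 12;
      unsigned const pktSize = tsPerRtp * tsPacket + rtpHeader;  // 7*188 + 12 = 1328
      sink->setPacketSizes(pktSize, pktSize);                    // preferred == max
    }
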
During the test we constantly monitored our NAT box, and the bandwidth usage
did not exceed 50% of the downstream capacity (nor, of course, of the upstream
capacity).

We have also examined the RTCP receiver report sent by the VLC player, which
revealed the following data:
a. Packets lost: 6361
b. Fraction lost: 48 / 256
c. Interarrival jitter: 301
I am not enough of an expert to interpret the interarrival jitter or fraction
lost counters, but "packets lost" speaks for itself.
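
For context, here is roughly how I read those two opaque fields, following
RFC 3550; a small sketch using the values VLC reported, and assuming the
standard 90 kHz RTP clock used for MPEG-2 transport streams:

    #include <cstdio>

    // "Fraction lost" is an 8-bit fixed-point fraction of packets lost since
    // the previous report; "interarrival jitter" is in RTP timestamp units.
    int main() {
      unsigned fractionLost = 48;                     // as reported by VLC
      unsigned jitterTsUnits = 301;                   // as reported by VLC
      double lostRecently = fractionLost / 256.0;     // ~0.19, i.e. ~19% in the last interval
      double jitterMs = jitterTsUnits / 90.0;         // ~3.3 ms at a 90 kHz clock
      std::printf("fraction lost ~%.1f%%, jitter ~%.1f ms\n",
                  100.0 * lostRecently, jitterMs);
      return 0;
    }

If that reading is right, the jitter itself looks harmless; it is the loss
figures that are the real problem.
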
We've also examined the TTL value of the packets which DID arrive, and it
stood at 126 on ALL of them.

The RTP stream was sent over TCP; later on we will try to take the NAT out of
the equation and test with UDP transport.
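
(For anyone unfamiliar with what that choice means on the wire: at the RTSP
level the only difference is the Transport header in the SETUP request. The URL
and port numbers below are made up purely for illustration.)

    SETUP rtsp://server.example.com/stream/track1 RTSP/1.0
    CSeq: 3
    Transport: RTP/AVP/TCP;unicast;interleaved=0-1

    SETUP rtsp://server.example.com/stream/track1 RTSP/1.0
    CSeq: 3
    Transport: RTP/AVP;unicast;client_port=50000-50001
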
I also think it's a good idea to describe the network topology used during the
test, to make things a bit clearer:

Win2003 Server -> 1 Gbit switch -> Internet -> ATM -> Check Point Safe@Office 500 (7.0.33.x) -> LAN
The Internet connection from the Check Point box goes over PPPoE through the
ATM line.

The ATM line contract is 4 Mbit/sec downstream and 2 Mbit/sec upstream (we
conducted the test with a transport stream encoded at approximately 2
Mbit/sec), and the Check Point box performs traffic shaping, effectively
limiting the incoming bandwidth to around 3.8 Mbit/sec.

From past experience with multicast (although not 100% relevant here, since
we're using unicast streaming), PPPoE adds some overhead to packets going out
over the line, effectively reducing the MTU of outgoing packets, and it also
causes IP-level fragmentation of incoming packets (since the incoming MTU is
reduced as well). This is the reason we started experimenting with different
packet sizes.
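
To put some rough numbers on that, here is a back-of-the-envelope sketch. All
header sizes are assumptions for illustration (LLC/SNAP or bridged-Ethernet
encapsulation on the ATM side, and the RTP-over-TCP framing, are ignored):

    #include <cstdio>

    // Assumed headers: RTP 12 (already inside the "packet size" figures above),
    // UDP 8, IPv4 20, PPPoE+PPP 8, AAL5 trailer 8, 48 payload bytes per 53-byte cell.
    int main() {
      const unsigned rtpHdr = 12, udp = 8, ip = 20, pppoe = 8, aal5 = 8;
      const unsigned pppoeMtu = 1500 - pppoe;                  // 1492 bytes of IP per PPPoE frame
      const double   tsBitrate = 2.0e6;                        // encoded stream, bits/sec
      const unsigned rtpSizes[] = { 1448, 1328 };              // the two sizes we tested

      for (unsigned rtp : rtpSizes) {
        unsigned ipDgram  = rtp + udp + ip;                    // datagram handed to IP
        unsigned sdu      = ipDgram + pppoe + aal5;            // what AAL5 has to carry
        unsigned cells    = (sdu + 47) / 48;                   // rounded up to whole cells
        double   pktRate  = tsBitrate / ((rtp - rtpHdr) * 8.0);// RTP packets per second
        double   wireMbit = pktRate * cells * 53 * 8 / 1e6;    // ATM-level bitrate
        std::printf("RTP %u: IP %u (%s 1492 MTU), %u cells/pkt, ~%.2f Mbit/s on the line\n",
                    rtp, ipDgram, ipDgram > pppoeMtu ? "exceeds" : "fits", cells, wireMbit);
      }
      return 0;
    }

If those assumptions hold, neither packet size should fragment against a
1492-byte PPPoE MTU, and the roughly 2.3-2.4 Mbit/sec on the wire is well
inside the shaped 3.8 Mbit/sec, which is exactly why the 43% loss puzzles us.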

As you can see, there are quite a few devices and pieces of software along the
way which could potentially affect the quality of service. Then again, from our
extensive experience with Real Media and Windows Media (on the same network
infrastructure), those potential infrastructure problems can be (and are)
successfully dealt with in commercial software for the sake of the end-user
experience. We are trying to do the same with Live555, so far with no positive
results (and no apparent reason for the failure).

Any help, ideas and thoughts would be greatly appreciated...

Many thanks,
Ilya

On 4/26/07, Morgan Tørvolt <morgan.torvolt at gmail.com> wrote:
>
> > Also, your email says you are using ATM (Asynchronous Transfer Mode?).  I
> > checked this on Wikipedia, and noticed that the data must be divided into
> > very small cells for delivery.  It seems like ATM is a protocol for
> > time-sensitive data, like VoIP.  RTP is also designed to meet special
> > timing needs of the data.  Could there be a problem combining these two
> > time-sensitive protocols?  "If a circuit is exceeding its traffic contract,
> > the network can either drop the cells or mark the Cell Loss Priority (CLP)
> > bit (to identify a cell as discardable further down the line)."  The RTP
> > delivery can be somewhat bursty, so maybe you need to check your ATM
> > contract and make sure you have the correct one (VBR maybe?).
>
> ATM is just a transport layer, like Ethernet, SDH and others. ATM is
> typically used in, for example, ADSL connections. That the ATM cell size is
> only 53 bytes (or something like that) is not obvious, or even visible, to
> the end hardware. It sees only an ordinary Ethernet link.
> ATM is not particularly well suited to anything, really. It is like a duck:
> it can swim, fly and make sound, but does none of them extremely well. For
> extremely time-sensitive data one uses SDH, which even keeps the original
> input clock through the links. For time-insensitive data one uses Ethernet.
> ATM is very good at prioritizing traffic though, and it sounds like the
> Ethernet packets are being given a low priority, possibly because they are
> UDP. I have experienced this before, and the configuration of an Ethernet
> card in one of the nodes was to blame. It did not keep its config due to a
> hardware failure. In your case this should be configurable in the ATM nodes,
> or in the management software.
>
> As a "most likely" option, I would have to opt for bandwidth problem.
> I have spent too many hours saying that it is not so on ATM networks
> before. Check that your VCs and VPs are set up correctly, and try
> transmitting TCP and UDP data from other sources than live. VLC stream
> output would be a good place to start I believe.
>
> I agree with Ross here. You say it works well without the ATM, but not
> with it. The reason for the problems then seems rather obvious. As there are
> literally thousands of users of this software, I do believe that the way it
> handles the Ethernet part works according to the standard (unless your OS,
> or some configuration therein, is to blame). One should not make custom
> fixes to allow for broken hardware or configuration. In this case "your
> network" seems like a fair description, as most of the other networks seem
> to work just fine.
>
> -Morgan-
> _______________________________________________
> live-devel mailing list
> live-devel at lists.live555.com
> http://lists.live555.com/mailman/listinfo/live-devel
>

