[Live-devel] OutPacketBuffer::maxSize and TCP vs UDP

Google Admin bill at 2048-bit.com
Wed Sep 12 08:28:09 PDT 2018


 > The best way to handle data loss is to stream over UDP, but also
configure your encoder so that ‘key frames’ are sent as a series of
‘slices’, rather than as a single ‘frame’ (or NAL unit, in H.264
terminology).  That way, the loss of a packet will cause only a single
‘slice’ to be lost; the rest of the frame will be received and render OK.
And you’ll avoid the large latency of streaming over TCP.

Thanks for the explanation. I have been using UDP without issue for a while
now on lower-definition cameras. Now that I've started using HD and UHD
cameras, I'm finding that TCP is the only thing that seems to make it work
correctly (or at all). I've noticed that a number of the largest consumer
video manufacturers (Flir, Digital Watchdog, Hikvision, etc.) all seem to
use RTSP over TCP by default. Is that more or less designed to combat the
unknown network circumstances of varying consumers' networks, or would there
be another reason?


On Wed, Sep 12, 2018 at 9:08 AM, Ross Finlayson <finlayson at live555.com>
wrote:

> > I'm now assuming that the mere fact that TCP is a reliable transport
> protocol is what's ensuring that I get all of the necessary packets to
> reassemble the image frame.
>
> This is a common (though understandable) misconception.  TCP is not some
> ‘magic fairy dust’ that will always prevent data loss.  If your
> transmitter’s (i.e., server’s) data rate exceeds the capacity of your
> network, then you *will* get data loss, *even if* you are streaming over
> TCP.  If you are streaming over TCP (and your transmitter’s data rate
> exceeds the capacity of your network) then the data loss will happen when
> the transmitter’s TCP buffer (in its OS) eventually fills up; the server’s
> TCP ‘send()’ operation will then fail.
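
For illustration, here is a minimal sketch with plain BSD sockets (not live555
code) of what that failure looks like: with a non-blocking TCP socket, send()
starts reporting EAGAIN/EWOULDBLOCK once the kernel's send buffer has filled
because the network cannot drain it fast enough.

    // Minimal sketch (C++/POSIX), assuming a non-blocking TCP socket.
    #include <cerrno>
    #include <cstdio>
    #include <sys/socket.h>

    bool sendEncodedData(int sock, const char* data, size_t len) {
      ssize_t n = send(sock, data, len, 0);
      if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
        // The OS send buffer is full: the sender must now delay or drop data,
        // so TCP's "reliability" does not rescue an over-subscribed network.
        fprintf(stderr, "TCP send buffer full; data delayed or dropped\n");
        return false;
      }
      return n == (ssize_t)len;  // a short write also means the buffer is filling up
    }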
>
> As I have explained numerous times (though people like to ignore this,
> because the truth is inconvenient to them), streaming over TCP is less
> data-efficient than streaming over UDP, and should really be used only if
> you are streaming over a network that has a firewall (somewhere between the
> server and client) that blocks UDP packets.
>
> The best way to handle data loss is to stream over UDP, but also configure
> your encoder so that ‘key frames’ are sent as a series of ‘slices’, rather
> than as a single ‘frame’ (or NAL unit, in H.264 terminology).  That way,
> the loss of a packet will cause only a single ‘slice’ to be lost; the rest
> of the frame will be received and render OK.  And you’ll avoid the large
> latency of streaming over TCP.
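
For what it's worth, a sketch of that encoder-side setting, assuming an
x264-based encoder (the parameter names below are from the x264 API; other
encoders expose an equivalent "slices" option):

    #include <x264.h>

    void configureSlices(x264_param_t* param) {
      x264_param_default_preset(param, "veryfast", "zerolatency");
      param->i_slice_count = 4;           // emit each frame as 4 independent slice NAL units
      // param->i_slice_max_size = 1400;  // alternative: cap each slice below the ~1500-byte MTU
    }

With either setting, a lost RTP packet corrupts at most one slice of a key
frame rather than the whole frame.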
>
>
> > Lastly, I'm trying to understand how the OutPacketBuffer::maxSize value
> affects things, specifically. I noticed there was a comment that there is no
> point in going larger than 65536 bytes, but I've seen a number of posts
> saying that for giant 4K frames you should set that value much higher. If my
> MTU is 1500, how does setting it higher affect the transport of my frames?
>
> First, a UDP packet cannot be larger than 65536 bytes (16 bits in
> length).  That is the absolute maximum packet size for a UDP (including
> RTP) packet.
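
(For reference, that limit comes from the 16-bit "length" field in the UDP
header: 2^16 - 1 = 65,535 bytes, and that figure includes the 8-byte UDP
header itself, so the usable payload is slightly smaller still.)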
>
> Second, if your network’s MTU is 1500 bytes, then although you *can* send
> UDP (including RTP) packets larger than this (up to the maximum of 65536),
> you *shouldn’t* - because if you do, fragmentation/reassembly will take
> place at the IP level (inside the sender and receiver’s OS), and you won’t
> have any control over it.  The loss of a single IP fragment will cause the
> loss of the entire UDP packet.
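
To put numbers on that: a 60,000-byte UDP datagram sent across a 1500-byte-MTU
network is split into roughly ceil(60008 / 1480) = 41 IP fragments (1480 bytes
of payload per fragment after the 20-byte IPv4 header). If even 1% of packets
are being dropped independently, the chance that all 41 fragments arrive is
about 0.99^41 ≈ 66%, i.e. roughly a third of such datagrams are lost outright.
Raising OutPacketBuffer::maxSize, as I understand it, does not make the RTP
packets on the wire any bigger; it enlarges the application-side buffer that
must hold one complete encoded frame (or NAL unit) before live555 packetizes
it, so that very large 4K key frames are not truncated. A minimal sketch of
how it is typically set (the value here is just an example, not a
recommendation):

    #include "liveMedia.hh"

    int main(int argc, char** argv) {
      // Set before any RTPSink (and hence any server media session) is created,
      // so a whole encoded frame/NAL unit fits in the buffer without truncation.
      OutPacketBuffer::maxSize = 600000;  // example value sized for large 4K key frames
      // ... create the TaskScheduler, UsageEnvironment, RTSPServer, etc. as usual ...
      return 0;
    }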
>
>
> Ross Finlayson
> Live Networks, Inc.
> http://www.live555.com/