[Live-devel] UDP framing in RTP

Michael Braitmaier braitmaier at hlrs.de
Wed Sep 2 08:25:54 PDT 2009


Hello to everyone!

I have been struggling for a while now with the question of what happens 
when a source delivers a complete frame larger than the maximum that fits 
into a UDP packet. Intuitively I would say it gets split up into multiple 
packets and merged again on the receiver side.
From my understanding of liveMedia I thought that this framing of a video 
frame is done by liveMedia itself. I was encouraged in this assumption 
when I saw that OutPacketBuffer has a mechanism for handling overflow 
data, which made me assume that this is how large frames are handled.

From one of my last messages here, I learned that making the fTo buffer 
in a source large enough to hold my largest discrete frames is the way to 
avoid frame truncation and dropping of partial frame data. I currently do 
this by calling "setPacketSizes(preferredSize, maxPacketSize)" just after 
instantiating the MultiFramedRTPSink.
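Concretely, what I do looks roughly like this (a fragment, not a complete 
program; "videoSink" and the sizes are just placeholders from my own code):

```cpp
// videoSink is my MultiFramedRTPSink subclass, just created.
// Enlarging maxPacketSize enlarges the OutPacketBuffer, so that fTo
// can hold my largest frames -- but this is what leads to the UDP
// problem I describe next.
videoSink->setPacketSizes(preferredSize, maxPacketSize);
```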

But with this increase I also seem to increase the size of the buffer in 
OutPacketBuffer that is handed directly down to the UDP socket, which in 
unfortunate situations causes a "message too big" error.

So I am now confused whether I have to take action in my derived source 
class, or whether this should be handled by liveMedia natively.

If it is handled, then I am unsure how to pass a rather large frame 
(e.g. 140K) from my source to a MultiFramedRTPSink, since I cannot 
increase the size of the fTo buffer in my source (which directly 
determines the size of the buffer in OutPacketBuffer, and hence the size 
of a UDP message) without causing "message too big" errors on the UDP 
socket.

I guess I am missing something obvious, but I can't seem to find it.

Any help would be greatly appreciated.

Michael

