[Live-devel] Frame granularity
Matthew Hodgson
matthew at mxtelecom.com
Sun May 21 18:00:24 PDT 2006
Hi all,
I'm trying to write a custom MultiFramedRTPSource and Sink (and Framer?)
for a video payload format, but I'm slightly confused about what
granularity of frames I should be aiming for. Should the Source yield
frames that each represent a discrete video picture, or instead whatever
sub-picture fragment each received packet carries (e.g. picture, GOB,
slice or macroblocks)?
If Source frames represent discrete pictures combined from several packets
(as H263plusVideoSource appears to), each with a set of per-packet
specialHeaders, should those specialHeaders be used in the Sink (when
doing a simple relay) to construct the output packets? How would one tell
the extended MultiFramedRTPSink to only fragment the frame on the
boundaries implied by that frame's specialHeaders, rather than MTU-based
fragmentation?
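To make the distinction concrete, here is a minimal standalone sketch (not the live555 API - the function name and the boundary-list representation are my own assumptions) contrasting the two fragmentation policies: splitting a reassembled picture on the boundaries recorded from the original packets versus splitting it wherever an MTU payload limit forces a cut.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

// Hypothetical sketch, not live555 code: fragment a reassembled picture
// either on the packet boundaries recorded during reassembly, or on an
// MTU payload limit when no boundaries are available.
// Returns (offset, length) pairs, one per output packet.
std::vector<std::pair<std::size_t, std::size_t>>
fragmentFrame(std::size_t frameSize,
              const std::vector<std::size_t>& boundaries,  // offsets of original packet starts
              std::size_t mtuPayload) {
    std::vector<std::pair<std::size_t, std::size_t>> out;
    if (!boundaries.empty()) {
        // Boundary-based: reproduce the original packetisation exactly.
        for (std::size_t i = 0; i < boundaries.size(); ++i) {
            std::size_t start = boundaries[i];
            std::size_t end = (i + 1 < boundaries.size()) ? boundaries[i + 1]
                                                          : frameSize;
            out.emplace_back(start, end - start);
        }
    } else {
        // MTU-based: cut wherever the payload limit is reached.
        for (std::size_t off = 0; off < frameSize; off += mtuPayload)
            out.emplace_back(off, std::min(mtuPayload, frameSize - off));
    }
    return out;
}
```

The question above is essentially which of these two branches the extended MultiFramedRTPSink should be taking, and how the per-packet specialHeaders would feed the boundary list.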
Alternatively, is the intention for a Framer to perform this operation -
fragmenting the discrete picture frames into per-packet frames suitable
for the Sink (assuming the Sink then never performs any fragmentation to
avoid exceeding the MTU)? If so, is the Framer allowed to sniff
specialHeaders from the Source?
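If the Framer is indeed meant to do this, the behaviour I have in mind looks something like the following standalone sketch (again not the live555 FramedFilter API - the class name and interface are hypothetical): a filter that holds one reassembled picture plus the packet boundaries sniffed from the upstream Source, and hands out exactly one per-packet fragment per call, so the downstream Sink never has to re-fragment.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical sketch, not the live555 FramedFilter API: a framer that
// splits a discrete picture back into its original per-packet fragments,
// one fragment per getNextFrame() call.
class BoundaryFramer {
public:
    void deliverPicture(std::vector<unsigned char> picture,
                        std::vector<std::size_t> boundaries) {
        fPicture = std::move(picture);
        fBoundaries = std::move(boundaries);
        fNext = 0;
    }

    // Returns the next fragment, or an empty vector once the picture is done.
    std::vector<unsigned char> getNextFrame() {
        if (fNext >= fBoundaries.size()) return {};
        std::size_t start = fBoundaries[fNext];
        std::size_t end = (fNext + 1 < fBoundaries.size())
                              ? fBoundaries[fNext + 1]
                              : fPicture.size();
        ++fNext;
        return {fPicture.begin() + start, fPicture.begin() + end};
    }

private:
    std::vector<unsigned char> fPicture;
    std::vector<std::size_t> fBoundaries;
    std::size_t fNext = 0;
};
```

The open question is whether such a filter is allowed to read the Source's per-packet specialHeaders to build its boundary list, or whether that information is considered private to the Source.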
I'm trying to minimise latency when doing a simple relay by not waiting
to combine incoming packets into discrete picture frames in the Sink. At
the same time, I want to use the RTPSource/Sink framework rather than
working at a blunt UDP level, so that more interesting Filters can be
introduced into the chain when necessary. Is this possible?
many thanks in advance :)
M.
--
______________________________________________________________
Matthew Hodgson matthew at mxtelecom.com Tel: +44 845 6667778
Systems Analyst, MX Telecom Ltd.