[Live-devel] Frame granularity

Ross Finlayson finlayson at live555.com
Sun May 21 22:29:50 PDT 2006


> > Out of curiosity - which codec is this for?  Is this payload format
> > already described by a RFC (or IETF Internet Draft)?
>
>This is 1996 vintage baseline H.263 - as described in RFC 2190.

FYI, RFC 2190 is currently being reclassified as 'Historic'.  That 
payload format should not be used any more.  See 
<http://www.ietf.org/internet-drafts/draft-ietf-avt-rfc2190-to-historic-06.txt>. 
Because of this, I won't be giving you much more help here after this 
message - sorry.

> > The latter.  The reason for this is that these 'sub-pictures'
> > ("slices" for MPEG video; "NAL units" for H.264) are useful to a
> > decoder by themselves.  E.g., a MPEG video decoder can display the
> > 'slices' that it has received, even if some of the other slices in
> > the frame were not received.
>
>Right.  I was slightly confused by the H263plusVideoSource, which
>reassembles inbound packets until it sees a marker bit, and then yields
>the result as a 'frame'.

Yes - for video, the use of the term 'frame' in this code can be a 
bit misleading.  A 'frame' of data received from a "RTPSource", or 
written to a "RTPSink", might not be a complete video frame.

>How is one meant to keep track of whether such a sub-picture frame
>represents the end of a picture - or any other information that might be
>useful for the destination Sink?

If you're reading from a "RTPSource", then the RTP "M" bit should 
give you that information.  Call "RTPSource::curPacketMarkerBit()" 
after receiving the 'frame'.
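
For illustration, here is a rough sketch of checking the marker bit 
in the 'after getting' callback.  (The names "ourSource" and 
"ourBuffer" are placeholders of mine, not part of the library.)

    #include "liveMedia.hh"

    // Hypothetical names, not part of the library: "ourSource" is the
    // RTPSource being read from; "ourBuffer" receives each (sub)frame.
    RTPSource* ourSource;
    unsigned char ourBuffer[100000];

    void afterGettingFrame(void* clientData, unsigned frameSize,
                           unsigned /*numTruncatedBytes*/,
                           struct timeval /*presentationTime*/,
                           unsigned /*durationInMicroseconds*/) {
      // The marker ("M") bit of the RTP packet that completed this
      // 'frame' tells us whether it ended a picture (for payload
      // formats that use the marker bit that way):
      Boolean endOfPicture = ourSource->curPacketMarkerBit();

      // ... do something with the "frameSize" bytes in "ourBuffer" and
      // with "endOfPicture", then ask for the next (sub)frame:
      ourSource->getNextFrame(ourBuffer, sizeof ourBuffer,
                              afterGettingFrame, clientData, NULL, NULL);
    }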

> > The 'special headers' are RTP-specific.  Therefore, they should be
> > added by the "RTPSink" (before the packet is sent), and should be
> > stripped by the "RTPSource" (before the incoming packet data is
> > delivered to the reader).  The various virtual functions in
> > "MultiFramedRTPSink/Source" can do this for you.
>
>Okay - but should one try to use the data stripped by the Source in the
>Sink, if fSource->fSpecialHeaderBytes[] is available?

You shouldn't need to.  If you're writing a RTPSource->RTPSink relay, 
then just pass the stripped data (a 'subpicture frame', to use your 
terminology) to the RTPSink, and the special headers should get recomputed.
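
For illustration, a rough sketch of such a relay (the helper 
"startRelay" and the "afterPlaying" handler are placeholder names of 
mine; the source and sink are assumed to have been created elsewhere):

    #include "liveMedia.hh"

    void afterPlaying(void* /*clientData*/) {
      // called when the receiving source says it's done
    }

    // "rtpSource" is the receiving RTPSource; "rtpSink" is the
    // outgoing RTPSink.
    void startRelay(RTPSource* rtpSource, RTPSink* rtpSink) {
      // Each (sub)frame that the source delivers - with its inbound
      // special header already stripped off - is handed to the sink,
      // which computes and prepends a fresh special header before
      // packetizing it:
      rtpSink->startPlaying(*rtpSource, afterPlaying, NULL);
    }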

>Or should the Sink always recalculate the
>special headers from scratch by parsing the raw bytestream?

A "RTPSink" does not parse a raw bytestream.  A "RTPSink" is passed a 
discrete (sub)frame, one at a time.


	Ross Finlayson
	Live Networks, Inc. (LIVE555.COM)
	<http://www.live555.com/>


