[Live-devel] Confused about how to generate fmtp line for H.264 source for SDP

Ross Finlayson finlayson at live555.com
Wed Aug 3 06:14:13 PDT 2011


On Aug 2, 2011, at 6:45 PM, Matt Schuckmann wrote:

> I'm working on upgrading our use of the Live555 RTSP server code to the latest version of the library; our old version was at least a couple of years old.

Good heavens; there have been *many* improvements and bug fixes since then!


> In the new code it appears that the default behavior is to obtain the SPS, PPS, etc. from the H.264 fragmented

Yes.  Now, the SPS and PPS NAL units are assumed to be in the input NAL unit stream (and are extracted from there).

That means that if we're streaming an H.264 stream 'on demand' (e.g., from a unicast RTSP server), we have to do a little trick (hack) to get this information for use in the stream's SDP description, before we start delivering to the first client.  Basically, we have to 'stream' the input source to a dummy sink until we see the data that we need.

The place to do this is in your subclass of "ServerMediaSubsession" for H.264 video.  Specifically, you reimplement the "getAuxSDPLine()" virtual function.
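As a rough sketch of that 'dummy sink' trick (the class name "MyH264ServerMediaSubsession" and the "fCheckTask" member are just placeholders of mine; the pattern itself follows what "H264VideoFileServerMediaSubsession.cpp" does, so treat that file as the authoritative version):

#include "liveMedia.hh"
#include "strDup.hh"

class MyH264ServerMediaSubsession: public OnDemandServerMediaSubsession {
public:
  void checkForAuxSDPLine1(); // called from the static callbacks below
  void afterPlayingDummy1();

protected:
  MyH264ServerMediaSubsession(UsageEnvironment& env, Boolean reuseFirstSource)
    : OnDemandServerMediaSubsession(env, reuseFirstSource),
      fAuxSDPLine(NULL), fDoneFlag(0), fDummyRTPSink(NULL), fCheckTask(NULL) {}
  virtual ~MyH264ServerMediaSubsession() { delete[] fAuxSDPLine; }

  virtual char const* getAuxSDPLine(RTPSink* rtpSink, FramedSource* inputSource);
  virtual FramedSource* createNewStreamSource(unsigned clientSessionId,
                                              unsigned& estBitrate);
  virtual RTPSink* createNewRTPSink(Groupsock* rtpGroupsock,
                                    unsigned char rtpPayloadTypeIfDynamic,
                                    FramedSource* inputSource);

private:
  void setDoneFlag() { fDoneFlag = ~0; }

  char* fAuxSDPLine;
  char fDoneFlag;          // 'watch variable' for the nested event loop
  RTPSink* fDummyRTPSink;  // the sink we 'play' into just to extract SPS/PPS
  TaskToken fCheckTask;
};

static void afterPlayingDummy(void* clientData) {
  ((MyH264ServerMediaSubsession*)clientData)->afterPlayingDummy1();
}

void MyH264ServerMediaSubsession::afterPlayingDummy1() {
  // The dummy 'streaming' ended; stop checking, and let "getAuxSDPLine()" return:
  envir().taskScheduler().unscheduleDelayedTask(fCheckTask);
  setDoneFlag();
}

static void checkForAuxSDPLine(void* clientData) {
  ((MyH264ServerMediaSubsession*)clientData)->checkForAuxSDPLine1();
}

void MyH264ServerMediaSubsession::checkForAuxSDPLine1() {
  char const* dasl;
  if (fAuxSDPLine != NULL) {
    setDoneFlag(); // we already have it
  } else if (fDummyRTPSink != NULL && (dasl = fDummyRTPSink->auxSDPLine()) != NULL) {
    // The sink has now seen the SPS and PPS NAL units, so its "auxSDPLine()" is valid:
    fAuxSDPLine = strDup(dasl);
    fDummyRTPSink = NULL;
    setDoneFlag();
  } else {
    // Not ready yet; check again after a short delay (100 ms):
    fCheckTask = envir().taskScheduler().scheduleDelayedTask(100000,
                     checkForAuxSDPLine, this);
  }
}

char const* MyH264ServerMediaSubsession::getAuxSDPLine(RTPSink* rtpSink,
                                                       FramedSource* inputSource) {
  if (fAuxSDPLine != NULL) return fAuxSDPLine; // already computed for an earlier client

  if (fDummyRTPSink == NULL) {
    // 'Stream' the input into the sink, just so that it sees SPS/PPS and can
    // then produce the "a=fmtp:" SDP line:
    fDummyRTPSink = rtpSink;
    fDummyRTPSink->startPlaying(*inputSource, afterPlayingDummy, this);
    checkForAuxSDPLine(this);
  }

  envir().taskScheduler().doEventLoop(&fDoneFlag); // returns once fDoneFlag is set

  return fAuxSDPLine;
}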

For a model of how to do this, see our implementation of "H264VideoFileServerMediaSubsession".  You will presumably do something similar, except with your own subclass.  (Of course, as always, you will also implement the "createNewStreamSource()" and "createNewRTPSink()" virtual functions.)
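To round out the sketch above, those two functions might look roughly like the following for a live (non-file) source; "MyH264LiveSource" is a placeholder for whatever "FramedSource" subclass wraps your own encoder or capture device:

FramedSource* MyH264ServerMediaSubsession
::createNewStreamSource(unsigned /*clientSessionId*/, unsigned& estBitrate) {
  estBitrate = 500; // kbps; just an estimate

  // "MyH264LiveSource" is a placeholder for your own encoder/capture source:
  FramedSource* liveSource = MyH264LiveSource::createNew(envir());
  if (liveSource == NULL) return NULL;

  // Wrap it in a framer.  If the source delivers a H.264 byte stream (with
  // start codes), use "H264VideoStreamFramer"; if it delivers one NAL unit at
  // a time (without start codes), use "H264VideoStreamDiscreteFramer" instead:
  return H264VideoStreamFramer::createNew(envir(), liveSource);
}

RTPSink* MyH264ServerMediaSubsession
::createNewRTPSink(Groupsock* rtpGroupsock,
                   unsigned char rtpPayloadTypeIfDynamic,
                   FramedSource* /*inputSource*/) {
  return H264VideoRTPSink::createNew(envir(), rtpGroupsock, rtpPayloadTypeIfDynamic);
}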


> I'm not sure if I should override the auxSDPLine() in my class derived from H264VideoRTPSink

No, you should not need to change (or reimplement) that code.


Ross Finlayson
Live Networks, Inc.
http://www.live555.com/
