[Live-devel] H.264 PPS/SPS for the second client

Dmitry Bely d.bely at recognize.ru
Mon May 17 08:53:49 PDT 2021


On Mon, May 17, 2021 at 6:18 PM Ross Finlayson <finlayson at live555.com> wrote:
>
> A receiver of a H.264 RTSP/RTP stream cannot rely on receiving the SPS and PPS NAL units inside the stream.  As you’ve discovered, if your server sets “reuseFirstSource” to True, then only the first-connected receiver will get the SPS and PPS NAL units (if they appeared at the start of the media source) - but even this isn’t reliable, because the RTP packets are datagrams.
>
> Instead, the RTSP/RTP receiver should get the SPS and PPS NAL units from the stream’s SDP description (which is returned as the result of the RTSP “DESCRIBE” command).  I.e., your receiver should (after it’s handled the result of “DESCRIBE”) call
>         MediaSubsession::fmtp_spropparametersets()
> on the stream’s “MediaSubsession” object (for the H.264 video ‘subsession’).  This will give you an ASCII string that encodes the SPS and PPS NAL units.
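[For reference: per RFC 6184, the sprop-parameter-sets attribute value is a comma-separated list of Base64-encoded NAL units (typically the SPS and PPS). Decoding it outside of live555 is straightforward; a minimal Python sketch, with an illustrative sprop value, might look like:]

```python
import base64

def parse_sprop_parameter_sets(sprop: str):
    """Decode a comma-separated Base64 sprop-parameter-sets string
    (RFC 6184) into a list of raw NAL unit byte strings."""
    return [base64.b64decode(s) for s in sprop.split(",")]

# Illustrative sprop-parameter-sets value (SPS,PPS pair):
sprop = "Z0IAKeNQFAe2AtwEBAaQeJEV,aM48gA=="
for nal in parse_sprop_parameter_sets(sprop):
    nal_type = nal[0] & 0x1F  # low 5 bits of the first byte = nal_unit_type
    print(nal_type)           # 7 = SPS, 8 = PPS
```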

Well, the receiver is ffmpeg-based and is out of my control (it's
launched by external video storage software). It also looks like the
missing SPS/PPS is not the only problem: in my experience ffmpeg
doesn't handle a run of leading non-IDR frames well. Would it be
possible to discard all NALs for a given RTP stream on the server side
until the SPS/PPS arrive?
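[The gating being asked about — dropping NAL units until the parameter sets have been seen — can be sketched generically. This is not live555 API code, just the filtering idea, assuming a sequence of raw NAL unit byte strings:]

```python
def gate_until_sps(nals):
    """Yield NAL units only once an SPS (nal_unit_type 7) has appeared,
    discarding any leading NALs that precede it."""
    seen_sps = False
    for nal in nals:
        if (nal[0] & 0x1F) == 7:  # SPS marks the start of a decodable run
            seen_sps = True
        if seen_sps:
            yield nal
```

In a live555 server this logic would live in a FramedFilter wrapped around the media source, so every newly connected sink starts at a parameter set rather than mid-GOP.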

- Dmitry Bely
