[Live-devel] H.264 PPS/SPS for the second client

Dmitry Bely d.bely at recognize.ru
Mon May 17 09:36:07 PDT 2021


On Mon, May 17, 2021 at 7:20 PM Ross Finlayson <finlayson at live555.com> wrote:
>
>
>
> > On May 17, 2021, at 9:53 AM, Dmitry Bely <d.bely at recognize.ru> wrote:
> >
> > On Mon, May 17, 2021 at 6:18 PM Ross Finlayson <finlayson at live555.com> wrote:
> >>
> >> A receiver of a H.264 RTSP/RTP stream cannot rely on receiving the SPS and PPS NAL units inside the stream.  As you’ve discovered, if your server sets “reuseFirstSource” to True, then only the first-connected receiver will get the SPS and PPS NAL units (if they appeared at the start of the media source) - but even this isn’t reliable, because the RTP packets are datagrams.
> >>
> >> Instead, the RTSP/RTP receiver should get the SPS and PPS NAL units from the stream’s SDP description (which is returned as the result of the RTSP “DESCRIBE” command).  I.e., your receiver should (after it’s handled the result of “DESCRIBE”) call
> >>        MediaSubsession::fmtp_spropparametersets()
> >> on the stream’s “MediaSubsession” object (for the H.264 video ‘subsession’).  This will give you an ASCII string that encodes the SPS and PPS NAL units.
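
For illustration, that lookup could go roughly like this (a sketch;
the function name is a placeholder, and the handoff to the decoder is
left as a comment):

    #include "liveMedia.hh"

    // Extract the SPS/PPS NAL units that the server advertised in the
    // SDP's "sprop-parameter-sets" and hand them to the decoder.
    void feedSpsPpsToDecoder(MediaSubsession& subsession) {
      unsigned numRecords = 0;
      SPropRecord* records = parseSPropParameterSets(
          subsession.fmtp_spropparametersets(), numRecords);
      for (unsigned i = 0; i < numRecords; ++i) {
        // records[i].sPropBytes holds one NAL unit (the SPS or the PPS),
        // records[i].sPropLength bytes long; prepend a 0x00000001 start
        // code if the decoder expects Annex B framing.
      }
      delete[] records;
    }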
> >
> > Well, the receiver is ffmpeg-based and is out of my control.
>
> Your (ffmpeg-based) RTSP receiver probably already uses the SPS/PPS information that’s encoded in the SDP description.  If it doesn’t, then it’s broken (because, as I noted before, receivers cannot rely on the SPS and PPS NAL units appearing reliably in the media stream).
>
> ffmpeg’s implementation of RTSP isn’t very good.  Instead, you could use our RTSP client implementation, and just use ffmpeg for the H.264 video decoding.  (This is what most people do.)

Unfortunately, I can't. The other side (https://shinobi.video) uses
ffmpeg internally...
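
For reference, the split Ross suggests (live555 for the RTSP/RTP
side, ffmpeg/libavcodec only for the H.264 decoding) starts out
roughly like this - a sketch along the lines of live555's
testRTSPClient example, with the URL as a placeholder and the
DESCRIBE handler stubbed:

    #include "liveMedia.hh"
    #include "BasicUsageEnvironment.hh"

    // A real client would build a MediaSession from resultString (the
    // SDP), SETUP each subsession, PLAY, and pass the received NAL
    // units to a libavcodec decoder - see testRTSPClient for the full
    // continuation chain.
    void continueAfterDESCRIBE(RTSPClient* rtspClient, int resultCode,
                               char* resultString) {
      delete[] resultString;
    }

    int main() {
      TaskScheduler* scheduler = BasicTaskScheduler::createNew();
      UsageEnvironment* env = BasicUsageEnvironment::createNew(*scheduler);
      RTSPClient* client = RTSPClient::createNew(
          *env, "rtsp://172.21.100.156/channel0" /*placeholder URL*/,
          1 /*verbosity*/, "sketch");
      client->sendDescribeCommand(continueAfterDESCRIBE);
      env->taskScheduler().doEventLoop(); // runs forever
      return 0;
    }
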
BTW, does this SDP generated by live555 look right:

[rtsp @ 0000029d2b618d80] SDP:
v=0
o=- 1621266030222402 1 IN IP4 172.21.100.156
s=H264 Stream
i=channel0
t=0 0
a=tool:LIVE555 Streaming Media v2020.10.16
a=type:broadcast
a=control:*
a=range:npt=0-
a=x-qt-text-nam:H264 Stream
a=x-qt-text-inf:channel0
m=video 0 RTP/AVP 96
c=IN IP4 0.0.0.0
b=AS:5000
a=rtpmap:96 H264/90000
a=control:track1

Failed to parse interval end specification ''

Looks like ffmpeg doesn't like the open-ended 'npt=0-' (the range has
no end time, hence the empty interval-end specification)?

> >  Is it absolutely impossible to
> > discard all NALs for the given RTP stream on the server side until
> > SPS/PPS arrive?
>
> If your server is using our “H264VideoFileServerMediaSubsession” class, then it’s already doing this.  However, the problem is that - for many H.264 video sources - the PPS and SPS NAL units appear only at the very start of the stream, and never thereafter.  That’s why - if you set “reuseFirstSource” to True - the second and subsequent receivers will not get the SPS/PPS NAL units in the media stream.  But, as I’ve already noted, this is not something that receivers should be relying on anyway.

No, it's based on H264LiveServerMediaSubsession - I'm trying to stream
live video from a camera.
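
In that case, one way to get the SPS/PPS into the SDP (note that the
SDP above has no "a=fmtp:96 ...;sprop-parameter-sets=..." line at
all) is to pass them to the RTP sink when it's created. A sketch,
assuming the camera's SPS/PPS NAL units are known up front; the class
name and the "cameraSource" argument are placeholders:

    #include "liveMedia.hh"

    class CameraH264Subsession : public OnDemandServerMediaSubsession {
    public:
      // "cameraSource" is whatever FramedSource delivers H.264 NAL
      // units (one per frame) from the camera.
      CameraH264Subsession(UsageEnvironment& env, FramedSource* cameraSource,
                           u_int8_t const* sps, unsigned spsSize,
                           u_int8_t const* pps, unsigned ppsSize)
        : OnDemandServerMediaSubsession(env, True /*reuseFirstSource*/),
          fCameraSource(cameraSource),
          fSPS(sps), fSPSSize(spsSize), fPPS(pps), fPPSSize(ppsSize) {}

    protected:
      virtual FramedSource* createNewStreamSource(unsigned /*clientSessionId*/,
                                                  unsigned& estBitrate) {
        estBitrate = 5000; // kbps, matching the b=AS:5000 line above
        // With reuseFirstSource == True this is called only once, so
        // all clients share the single camera feed.
        return H264VideoStreamDiscreteFramer::createNew(envir(), fCameraSource);
      }

      virtual RTPSink* createNewRTPSink(Groupsock* rtpGroupsock,
                                        unsigned char rtpPayloadTypeIfDynamic,
                                        FramedSource* /*inputSource*/) {
        // Passing the SPS/PPS here makes the sink's auxSDPLine() emit
        // "a=fmtp:96 ...;sprop-parameter-sets=...", so the second and
        // later clients get them from DESCRIBE, without ever needing
        // to see them in-band.
        return H264VideoRTPSink::createNew(envir(), rtpGroupsock,
                                           rtpPayloadTypeIfDynamic,
                                           fSPS, fSPSSize, fPPS, fPPSSize);
      }

    private:
      FramedSource* fCameraSource;
      u_int8_t const* fSPS; unsigned fSPSSize;
      u_int8_t const* fPPS; unsigned fPPSSize;
    };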

- Dmitry Bely


