[Live-devel] H.264 PPS/SPS for the second client

Dmitry Bely d.bely at recognize.ru
Tue May 18 06:55:43 PDT 2021


On Tue, May 18, 2021 at 12:54 AM Ross Finlayson <finlayson at live555.com> wrote:
>
> >>>>> Is it absolutely impossible to
> >>>>> discard all NALs for the given RTP stream on the server side until
> >>>>> SPS/PPS arrive?
> >>>>
> >>>> If your server is using our “H264VideoFileServerMediaSubsession” class, then it’s already doing this.  However, the problem is that - for many H.264 video sources - the PPS and SPS NAL units appear only at the very start of the stream, and never thereafter.  That’s why - if you set “reuseFirstSource” to True - the second and subsequent receivers will not get the SPS/PPS NAL units in the media stream.  But, as I’ve already noted, this is not something that receivers should be relying on anyway.
> >>>
> >>> No, it's based on H264LiveServerMediaSubsession
> >>
> >> I don’t know what “H264LiveServerMediaSubsession” is.  We have no class with that name.
> >
> > Sorry, copy/paste error. Actually H264LiveServerMediaSubsession is my
> > subclass of OnDemandServerMediaSubsession with createNewRTPSink()
> > creating H264VideoRTPSink instance.
>
> If you know - in advance - the H.264 PPS and SPS NAL units, then your “createNewRTPSink()” implementation should pass them as parameters to the call to “H264VideoRTPSink::createNew()”.  If you do this, then the proper "a=fmtp:” line containing "sprop-parameter-sets” will automatically appear in your SDP description.

Yes, that helped. I created an H.264 encoder beforehand, encoded one
frame, extracted & saved the resulting SPS/PPS NALs, and then used
them while creating H264VideoRTPSink. Now all my problems with ffmpeg
seem to be gone. Thanks a lot for all your efforts!
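
For anyone finding this thread later, the "extract the SPS/PPS NALs from one encoded frame" step can be sketched roughly as below. This is not live555 code and `extractSpsPps` is a hypothetical helper name; it assumes the encoder emits Annex-B byte streams (NAL units separated by 00 00 01 / 00 00 00 01 start codes), which is what most H.264 encoders produce. The extracted byte vectors are then what you would pass to the `H264VideoRTPSink::createNew()` overload that takes SPS/PPS pointers and sizes inside your `createNewRTPSink()` implementation.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical helper: scan an Annex-B H.264 buffer and pull out the
// SPS (NAL type 7) and PPS (NAL type 8) units, without start codes.
struct SpsPps {
    std::vector<uint8_t> sps;
    std::vector<uint8_t> pps;
};

static SpsPps extractSpsPps(const uint8_t* buf, size_t len) {
    SpsPps out;
    // Collect [start, end) payload ranges of each NAL unit.  Scanning
    // for the 3-byte start code 00 00 01 also catches 4-byte start
    // codes, whose leading zero is stripped from the previous range.
    std::vector<std::pair<size_t, size_t>> ranges;
    size_t i = 0;
    size_t nalStart = SIZE_MAX;  // SIZE_MAX = no NAL open yet
    while (i + 2 < len) {
        if (buf[i] == 0 && buf[i + 1] == 0 && buf[i + 2] == 1) {
            if (nalStart != SIZE_MAX) {
                size_t end = i;
                // A 4-byte start code leaves one extra zero byte at the
                // tail of the previous NAL; a valid NAL never ends in 0.
                if (end > nalStart && buf[end - 1] == 0) --end;
                ranges.emplace_back(nalStart, end);
            }
            nalStart = i + 3;
            i += 3;
        } else {
            ++i;
        }
    }
    if (nalStart != SIZE_MAX) ranges.emplace_back(nalStart, len);

    for (auto const& r : ranges) {
        if (r.second <= r.first) continue;
        uint8_t nalType = buf[r.first] & 0x1F;  // low 5 bits of NAL header
        if (nalType == 7) {
            out.sps.assign(buf + r.first, buf + r.second);
        } else if (nalType == 8) {
            out.pps.assign(buf + r.first, buf + r.second);
        }
    }
    return out;
}
```

With the two vectors in hand, the subsession's `createNewRTPSink()` can call the `H264VideoRTPSink::createNew()` overload taking `(sps, spsSize, pps, ppsSize)`, and live555 then generates the `a=fmtp:` line with `sprop-parameter-sets` automatically, as Ross describes above.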

- Dmitry Bely
