[Live-devel] H.264 PPS/SPS for the second client
Ross Finlayson
finlayson at live555.com
Mon May 17 09:58:11 PDT 2021
>> ffmpeg’s implementation of RTSP isn’t very good. Instead, you could use our RTSP client implementation, and just use ffmpeg for the H.264 video decoding. (This is what most people do.)
>
> I can't unfortunately. Another side (https://shinobi.video) uses
> ffmpeg internally...
Feel free to file bug reports with that project.
> BTW, is everything good with this SDP generated by live555:
Almost, but not quite. There's no "a=fmtp:" line that contains "sprop-parameter-sets". In other words, the server has not found any H.264 SPS and PPS NAL units at all!
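For reference, a minimal sketch of what a complete "a=fmtp:" line looks like and how a receiver can recover the SPS/PPS from it. The fmtp line and its sprop value below are illustrative samples, not output from any particular server:

```python
import base64

# A sample SDP "a=fmtp:" line carrying sprop-parameter-sets (illustrative
# values; a real server fills these in from the actual stream).
fmtp = ("a=fmtp:96 packetization-mode=1;profile-level-id=42000A;"
        "sprop-parameter-sets=Z0IACpZTBYmI,aMljiA==")

def sprop_to_nals(fmtp_line):
    """Extract and base64-decode the SPS/PPS NAL units from an fmtp line."""
    params = fmtp_line.split(None, 1)[1]          # drop the "a=fmtp:96" part
    for param in params.split(";"):
        name, _, value = param.partition("=")
        if name.strip() == "sprop-parameter-sets":
            # The value is a comma-separated list of base64-encoded NAL units.
            return [base64.b64decode(s) for s in value.split(",")]
    return []

for nal in sprop_to_nals(fmtp):
    # nal_unit_type is the low 5 bits of the first byte: 7 = SPS, 8 = PPS.
    print(nal[0] & 0x1F)
```

A client that parses the parameter sets out of the SDP this way does not depend on SPS/PPS ever appearing in-band in the RTP stream.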
> Looks like ffmpeg doesn't like 'npt=0-'?
Feel free to file a bug report with “ffmpeg”.
>>> Is it absolutely impossible to
>>> discard all NALs for the given RTP stream on the server side until
>>> SPS/PPS arrive?
>>
>> If your server is using our “H264VideoFileServerMediaSubsession” class, then it’s already doing this. However, the problem is that - for many H.264 video sources - the PPS and SPS NAL units appear only at the very start of the stream, and never thereafter. That’s why - if you set “reuseFirstSource” to True - the second and subsequent receivers will not get the SPS/PPS NAL units in the media stream. But, as I’ve already noted, this is not something that receivers should be relying on anyway.
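A minimal sketch (not live555's actual code) of the filtering described above: split an Annex-B byte stream into NAL units and discard everything before the first SPS. It also shows why this fails for a second client when the source emits SPS/PPS only once at startup:

```python
def split_annex_b(stream: bytes):
    """Split an Annex-B H.264 byte stream into raw NAL units.
    (Naive: a real parser must also handle emulation-prevention bytes.)"""
    # Normalize 4-byte start codes to 3-byte, then split on the start code.
    normalized = stream.replace(b"\x00\x00\x00\x01", b"\x00\x00\x01")
    return [c for c in normalized.split(b"\x00\x00\x01") if c]

def drop_until_sps(nals):
    """Discard NAL units until the first SPS (nal_unit_type 7) is seen --
    roughly the 'wait for parameters' behavior described above."""
    out, seen_sps = [], False
    for nal in nals:
        if nal[0] & 0x1F == 7:
            seen_sps = True
        if seen_sps:
            out.append(nal)
    return out

# Toy stream: a slice, then SPS, PPS, IDR slice (payload bytes are dummies).
stream = (b"\x00\x00\x01\x41\xaa"          # non-IDR slice (type 1)
          b"\x00\x00\x00\x01\x67\x42"      # SPS (type 7)
          b"\x00\x00\x01\x68\xce"          # PPS (type 8)
          b"\x00\x00\x01\x65\x88")         # IDR slice (type 5)

kept = drop_until_sps(split_annex_b(stream))
print([nal[0] & 0x1F for nal in kept])     # the leading slice is dropped
```

If the SPS/PPS occur only at the very start, a client that connects after that point (with "reuseFirstSource" set to True) waits forever in a filter like this, which is why the parameter sets belong in the SDP instead.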
>
> No, it's based on H264LiveServerMediaSubsession
I don’t know what “H264LiveServerMediaSubsession” is. We have no class with that name.
Ross Finlayson
Live Networks, Inc.
http://www.live555.com/