[Live-devel] [Mirasys] Live555 RTSP server questions
Ross Finlayson
finlayson at live555.com
Wed Jan 12 02:07:19 PST 2022
> On Jan 12, 2022, at 10:29 PM, Victor Vitkovskiy <victor.vitkovskiy at mirasys.com> wrote:
>
> Hello Ross,
>
> Thank you for your answers.
>
> Still, I have some open questions:
>>> You don’t need to be concerned at all with the internals of the LIVE555 code to do what you want here.
> This doesn't give me any information about how to do this :).
> If I don't need to subclass from RTSPServer, then how can I detect when a new client connects / disconnects?
You don’t need to do this. Our RTSP server code does this (detect/manage the connection/disconnection of clients) for you. All you need to do is write a subclass of “FramedSource” that delivers a frame of data each time it’s asked (via “doGetNextFrame()”), and write your own subclass of “OnDemandServerMediaSubsession” (implementing the virtual functions “createNewStreamSource()” and “createNewRTPSink()”). That’s all. You don’t need to concern yourself with the RTSP protocol, or the connection/disconnection of RTSP clients, or RTP, or RTCP. Our code does all of that for you.
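For example, here is a minimal sketch of such a “FramedSource” subclass. (The “getEncodedFrame()” function is a hypothetical stand-in for however your encoder hands you an encoded NAL unit; it is not part of LIVE555.)

#include "FramedSource.hh"
#include <sys/time.h>

// Hypothetical encoder hook: copies one encoded H.264 NAL unit into "buf"
// (at most "maxSize" bytes) and returns the NAL unit's full size:
extern unsigned getEncodedFrame(unsigned char* buf, unsigned maxSize);

class H264FramedSource: public FramedSource {
public:
  static H264FramedSource* createNew(UsageEnvironment& env) {
    return new H264FramedSource(env);
  }

protected:
  H264FramedSource(UsageEnvironment& env): FramedSource(env) {}

private:
  virtual void doGetNextFrame() {
    // Copy one NAL unit into the buffer that the downstream object provided:
    unsigned frameSize = getEncodedFrame(fTo, fMaxSize);
    if (frameSize > fMaxSize) {
      fFrameSize = fMaxSize;
      fNumTruncatedBytes = frameSize - fMaxSize;
    } else {
      fFrameSize = frameSize;
      fNumTruncatedBytes = 0;
    }
    gettimeofday(&fPresentationTime, NULL);

    // Signal that the data is ready. (If your encoder produces frames
    // asynchronously, you would instead arrange, via the event loop, for
    // "afterGetting()" to be called once a frame actually becomes available.)
    FramedSource::afterGetting(this);
  }
};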
> It is clear to me how to create my H264FramedSource, but it is not clear how to use it at the higher levels.
> The testOnDemandRTSPServer example uses this code:
> ServerMediaSession* sms = ServerMediaSession::createNew(*env, streamName, streamName, descriptionString);
> sms->addSubsession(H264VideoFileServerMediaSubsession::createNew(*env, inputFileName, reuseFirstSource));
> rtspServer->addServerMediaSession(sms);
> So I need to subclass from H264VideoFileServerMediaSubsession and override those two virtual functions: createNewStreamSource and createNewRTPSink, is this correct?
> Or do I need to subclass from OnDemandServerMediaSubsession
It’s probably best for you to define a subclass of “OnDemandServerMediaSubsession”, and implement the “createNewStreamSource()” and “createNewRTPSink()” virtual functions. That’s all.
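A sketch of such a subclass might look like this (assuming the “H264FramedSource” class sketched above; the “createNewRTPSink()” implementation is shown in the example further below):

#include "OnDemandServerMediaSubsession.hh"
#include "H264VideoStreamDiscreteFramer.hh"

class MyH264VideoServerMediaSubsession: public OnDemandServerMediaSubsession {
public:
  static MyH264VideoServerMediaSubsession* createNew(UsageEnvironment& env,
                                                     Boolean reuseFirstSource) {
    return new MyH264VideoServerMediaSubsession(env, reuseFirstSource);
  }

protected:
  MyH264VideoServerMediaSubsession(UsageEnvironment& env, Boolean reuseFirstSource)
    : OnDemandServerMediaSubsession(env, reuseFirstSource) {}

  virtual FramedSource* createNewStreamSource(unsigned /*clientSessionId*/,
                                              unsigned& estBitrate) {
    estBitrate = 500; // kbps; a rough estimate, used for RTCP

    // Because the source delivers discrete NAL units (one per
    // "doGetNextFrame()" call), wrap it in a "H264VideoStreamDiscreteFramer"
    // before it reaches the RTP sink:
    return H264VideoStreamDiscreteFramer::createNew(envir(),
                                                    H264FramedSource::createNew(envir()));
  }

  virtual RTPSink* createNewRTPSink(Groupsock* rtpGroupsock,
                                    unsigned char rtpPayloadTypeIfDynamic,
                                    FramedSource* inputSource); // see below
};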
> and do the same thing (like reading SDP information from the H.264 stream)?
You shouldn't need to concern yourself with this. Again, that’s our job. However, for streaming H.264, it’s best if you tell your “H264VideoRTPSink” object (that you would create in your implementation of the “createNewRTPSink()” virtual function) about your H.264 stream’s SPS and PPS NAL units. See “liveMedia/include/H264VideoRTPSink.hh”. I.e., you should create your “H264VideoRTPSink” object using one of the forms of “createNew()” that take SPS and PPS NAL unit parameters. E.g.
RTPSink* MyH264VideoServerMediaSubsession::createNewRTPSink(Groupsock* rtpGroupsock,
                                                            unsigned char rtpPayloadTypeIfDynamic,
                                                            FramedSource* /*inputSource*/) {
  return H264VideoRTPSink::createNew(envir(), rtpGroupsock, rtpPayloadTypeIfDynamic,
                                     SPS_NAL_unit, size_of_SPS_NAL_unit,
                                     PPS_NAL_unit, size_of_PPS_NAL_unit);
}
where “SPS_NAL_unit” and “PPS_NAL_unit” are binary data that you would get from your encoder. If you don’t know the SPS and PPS NAL units, then you could instead subclass from “H264VideoFileServerMediaSubsession”, and rely upon that code to automatically read your input source to figure out the SPS and PPS NAL units (assuming that they’re present in the stream), but that’s messier.
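Putting it together, you would then register your subsession with the RTSP server in the same way as the “testOnDemandRTSPServer” code that you quoted (the stream name and description string are placeholders, as before). Note that for a live input you would typically pass True for “reuseFirstSource”, so that all concurrent clients share a single instance of your input source:

ServerMediaSession* sms =
  ServerMediaSession::createNew(*env, streamName, streamName, descriptionString);
sms->addSubsession(MyH264VideoServerMediaSubsession::createNew(*env, True /*reuseFirstSource*/));
rtspServer->addServerMediaSession(sms);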
Ross Finlayson
Live Networks, Inc.
http://www.live555.com/