[Live-devel] Building Mac RTSP Client Application
Eric Blanpied
eric at sparkalley.com
Fri Nov 7 14:02:51 PST 2014
Hello,
We're building a Mac-only application to capture and view synchronized videos, with all media handling done via AVFoundation classes. Except for RTSP, of course! To get things prototyped, our app currently saves streams to .mov files by running the openRTSP program as a subprocess. This temporary solution has gotten our app as a whole up and running, and has validated the live555 library as a stream-storage solution. Accordingly, we are now working on a proper implementation.
Basic summary: We are looking to use the live555 library to save H.264+AAC streams in the .mov format, along with timing information useful for synchronization (storage format TBD). Capture will start and stop based on user action in the GUI. There will be no real-time viewing of the incoming streams.
At present I have a working test application that wraps the testRTSPClient code with a bit of Objective-C and runs it on a second thread, calling the library from a shared lib within the application bundle. This delivers the same debug messages as the command-line version, so it seems like a decent reference platform to keep experimenting with. My questions now are about the wider application architecture, and while some are not directly about the live555 library itself, I'm sure many of them have recommended practices for answers, and some folks may have direct experience to share. In any case, I'd be grateful for help getting the initial project going with an appropriate structure.
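For concreteness, here is roughly what that test wrapper boils down to (a sketch only; in the real app the thread handling is done from Objective-C, and the watch-variable and function names here are just placeholders of mine):

#include "liveMedia.hh"
#include "BasicUsageEnvironment.hh"
#include <thread>

static volatile char gEventLoopWatch = 0;   // non-zero makes doEventLoop() return

static void runLive555Loop() {
  // Same setup as testRTSPClient's main():
  TaskScheduler* scheduler = BasicTaskScheduler::createNew();
  UsageEnvironment* env = BasicUsageEnvironment::createNew(*scheduler);

  // openURL(*env, "progName", "rtsp://...");  // per-stream setup, as in the example

  env->taskScheduler().doEventLoop(&gEventLoopWatch);  // blocks on this thread

  env->reclaim();
  delete scheduler;
}

int main() {
  std::thread live555Thread(runLive555Loop);  // the "second thread" described above
  // ... the rest of the application runs on the main thread ...
  gEventLoopWatch = 1;                        // eventually: ask the loop to exit
  live555Thread.join();
  return 0;
}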
Generally, I envision a singleton "RTSPStreamService" class (Objective-C++) that wraps and runs the UsageEnvironment (etc.) on its own thread, and an "RTSPStreamClient" class (also Objective-C++) which the app would instantiate per stream and pass to the RTSPStreamService to connect to and start storing a stream. That reference would also be used to tell the RTSPStreamService to stop capture and finish the file storage. Ideally the interfaces between the larger app and the live555-handling code would be concentrated in a very small number of classes. It sounds good when described in the abstract, but I'm unsure about a number of things.
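In rough, purely illustrative header form, the shape I have in mind is something like the following; every name and signature here is mine, nothing comes from live555:

#include <string>

class RTSPStreamClient;                  // one instance per stream

// Singleton that owns the live555 thread and event loop.
class RTSPStreamService {
public:
  static RTSPStreamService& shared();

  // Set up the live555 objects for this stream inside the event loop
  // and start recording.
  void startCapture(RTSPStreamClient& client);

  // Ask the event loop to TEARDOWN the stream and finish writing its file.
  void stopCapture(RTSPStreamClient& client);
};

// Per-stream handle the app holds on to; the Objective-C++ layer would
// add the AVAssetWriter-facing pieces here.
class RTSPStreamClient {
public:
  explicit RTSPStreamClient(const std::string& rtspURL) : fURL(rtspURL) {}
  const std::string& url() const { return fURL; }
private:
  std::string fURL;
};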
It sounds like the C++ work will involve subclassing the following classes:
- UsageEnvironment, clearly.
- TaskScheduler?
From what I see in the example and the library code, it's not clear to me why BasicTaskScheduler wouldn't be appropriate to use. I guess some info on when TaskScheduler needs subclassing would be helpful.
- RTSPClient
Following the example code, since we want to handle multiple streams. I'm not sure if this is the RTSPStreamClient class mentioned above, or if there is some kind of reference stored to link the two.
- MediaSink (or FileSink?)
The example makes it clear that instead of DummySink we'll be creating our own; the plan is to implement an AVAssetWriter-based solution. Other parts of our app already use AVAssetWriter, and the work in this case will involve properly describing the NAL units (and audio samples) to pass the data to those classes. Such details are clearly outside this list's scope, but my question is whether it would be more appropriate to subclass MediaSink or FileSink. It looks as though much of what makes up FileSink has to do with actual file handling, while we'll probably want something much thinner, using afterGettingFrame to pass the data to a mainly Objective-C class that handles the AVAssetWriter work. To me that makes it sound like subclassing MediaSink is the way to go (I've sketched roughly what I mean right after this list).
- Other. Anything obviously missing?
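To make the MediaSink idea concrete, here is the kind of "thin" sink I have in mind, following the DummySink pattern from testRTSPClient. The class name, callback type, and buffer size are placeholders of mine; the AVAssetWriter hand-off would live behind the callback on the Objective-C++ side:

#include "liveMedia.hh"
#include <functional>

// Hypothetical "thin" sink: no file handling at all, just hands each complete
// frame (NAL unit or audio frame) to a callback supplied by the Obj-C++ layer.
class ForwardingSink : public MediaSink {
public:
  typedef std::function<void(unsigned char* data, unsigned size,
                             struct timeval presentationTime)> FrameHandler;

  static ForwardingSink* createNew(UsageEnvironment& env, FrameHandler handler) {
    return new ForwardingSink(env, handler);
  }

protected:
  ForwardingSink(UsageEnvironment& env, FrameHandler handler)
    : MediaSink(env), fHandler(handler),
      fReceiveBuffer(new unsigned char[kBufferSize]) {}
  virtual ~ForwardingSink() { delete[] fReceiveBuffer; }

private:
  // Static trampoline with the signature getNextFrame() expects.
  static void afterGettingFrame(void* clientData, unsigned frameSize,
                                unsigned /*numTruncatedBytes*/,
                                struct timeval presentationTime,
                                unsigned /*durationInMicroseconds*/) {
    ForwardingSink* sink = (ForwardingSink*)clientData;
    sink->fHandler(sink->fReceiveBuffer, frameSize, presentationTime);
    sink->continuePlaying();               // immediately request the next frame
  }

  virtual Boolean continuePlaying() {
    if (fSource == NULL) return False;     // fSource is managed by MediaSink
    fSource->getNextFrame(fReceiveBuffer, kBufferSize,
                          afterGettingFrame, this,
                          onSourceClosure, this);
    return True;
  }

  static const unsigned kBufferSize = 200000;  // arbitrary placeholder
  FrameHandler fHandler;
  unsigned char* fReceiveBuffer;
};

As I understand it, something like this would just slot in where DummySink::createNew() is called in testRTSPClient's continueAfterSETUP().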
I'm unsure how we'd signal from outside the event loop that it's time to stop a particular stream and close up its file. Would that best be done via the "event trigger" mechanism the FAQ mentions? Is there more info, or example code, on that?
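If the event-trigger route is the right one, is the following roughly the idea? (Names are mine; I'm only relying on createEventTrigger()/triggerEvent() and the doEventLoop() watch variable being the documented calls.)

#include "BasicUsageEnvironment.hh"
#include <thread>
#include <chrono>

static volatile char gStopLoop = 0;   // doEventLoop() watch variable

// Runs inside the live555 event loop when the trigger fires; clientData is
// whatever pointer was passed to triggerEvent() (e.g. a per-stream object).
static void stopStreamHandler(void* /*clientData*/) {
  // Real code would send TEARDOWN and close the sink/session here, since we
  // are now safely on the live555 thread.
  gStopLoop = 1;                       // for this demo, just end the loop
}

int main() {
  TaskScheduler* scheduler = BasicTaskScheduler::createNew();
  UsageEnvironment* env = BasicUsageEnvironment::createNew(*scheduler);

  EventTriggerId stopTrigger = scheduler->createEventTrigger(stopStreamHandler);

  // Stand-in for the GUI thread: triggerEvent() is the TaskScheduler call
  // documented as safe to make from another thread.
  std::thread gui([&] {
    std::this_thread::sleep_for(std::chrono::seconds(2));
    scheduler->triggerEvent(stopTrigger, NULL);
  });

  env->taskScheduler().doEventLoop(&gStopLoop);   // returns once the handler runs

  gui.join();
  env->reclaim();
  delete scheduler;
  return 0;
}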
No doubt the above is full of naive assumptions and incorrect understanding, but one has to start somewhere!
thanks in advance,
-Eric