I see that for H.264 streams, openRTSP defaults to H264VideoFileSink, which derives from FileSink, which in turn derives from MediaSink.<br><br>I don't want to write the video out to a file; I want the video exposed as a live stream to the rest of my application. It seems like I need to write my own "sink," but I'm not sure which class would be best to inherit from (MediaSink? Or all the way down to Medium?).<br>
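For what it's worth, here's roughly what I have in mind — a sketch modeled on the DummySink class in live555's testRTSPClient.cpp, but handing each frame to an application callback instead of discarding it. The names AppSink and FrameCallback are my own inventions, not part of live555, and the fixed buffer size is arbitrary:<br>
<pre>
#include "liveMedia.hh"

// Hypothetical application-supplied callback type (not part of live555):
typedef void (*FrameCallback)(void* clientData,
                              unsigned char* frame, unsigned frameSize,
                              struct timeval presentationTime);

class AppSink : public MediaSink {
public:
  static AppSink* createNew(UsageEnvironment& env,
                            FrameCallback cb, void* clientData) {
    return new AppSink(env, cb, clientData);
  }

private:
  AppSink(UsageEnvironment& env, FrameCallback cb, void* clientData)
    : MediaSink(env), fCallback(cb), fClientData(clientData) {
    fReceiveBuffer = new unsigned char[100000]; // arbitrary size
  }
  virtual ~AppSink() { delete[] fReceiveBuffer; }

  // Static callback, invoked each time a complete frame has arrived:
  static void afterGettingFrame(void* clientData, unsigned frameSize,
                                unsigned numTruncatedBytes,
                                struct timeval presentationTime,
                                unsigned durationInMicroseconds) {
    AppSink* sink = (AppSink*)clientData;
    sink->fCallback(sink->fClientData, sink->fReceiveBuffer,
                    frameSize, presentationTime);
    sink->continuePlaying(); // request the next frame
  }

  // Redefined virtual function: ask our source for the next frame:
  virtual Boolean continuePlaying() {
    if (fSource == NULL) return False;
    fSource->getNextFrame(fReceiveBuffer, 100000,
                          afterGettingFrame, this,
                          onSourceClosure, this);
    return True;
  }

  unsigned char* fReceiveBuffer;
  FrameCallback fCallback;
  void* fClientData;
};
</pre>
Is subclassing MediaSink like this the right approach? (I realize that for H.264 I'd presumably also need to handle start codes and the SPS/PPS NAL units myself, since H264VideoFileSink normally takes care of that when writing to a file.)<br>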
<br>My other question is more general. I see that a single RTSP server can host multiple sessions, and each session can be composed of multiple subsessions. So I'm wondering what the best (easiest?) way to structure my media streams would be. I'm going to have several H.264 streams, audio, MJPEG, and possibly MPEG-4, and I'm wondering whether each should get its own session, or whether I should combine audio and video into the same session. Will I have A/V sync issues if each stream is in its own session?<br>
<br>Thanks in advance.<br>