[Live-devel] Question about a FAQ answer

Jeremy Noring kidjan at gmail.com
Tue Sep 22 14:18:36 PDT 2009


On Wed, Sep 16, 2009 at 11:48 PM, Ross Finlayson <finlayson at live555.com> wrote:
>> The Live555 FAQ has this question and answer:
>>
>> Q: For many of the "test*Streamer" test programs, the built-in RTSP
>> server is optional (and disabled by default). For
>> "testAMRAudioStreamer", "testMPEG4VideoStreamer" and
>> "testWAVAudioStreamer", however, the built-in RTSP server is
>> mandatory. Why?
>>
>> A: For those media types (AMR audio, MPEG-4 video, and PCM audio,
>> respectively), the stream includes some codec-specific parameters that
>> are communicated to clients out-of-band, in an SDP description. Because
>> these parameters - and thus the SDP description - can vary from stream
>> to stream, the only effective way to communicate this SDP description
>> to clients is using the standard RTSP protocol. Therefore, the RTSP
>> server is a mandatory part of these test programs.
>>
>> My question is: does this same principle apply to H.264 as well?
>
> Yes.
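
(For anyone else reading: in the H.264 case the out-of-band parameters are
the SPS/PPS NAL units, base64-encoded in the "sprop-parameter-sets" fmtp
attribute.  A sketch of what the relevant SDP lines look like - the
parameter-set strings and profile-level-id below are illustrative
placeholders, not real values from any stream:

    m=video 0 RTP/AVP 96
    a=rtpmap:96 H264/90000
    a=fmtp:96 packetization-mode=1;profile-level-id=42001E;sprop-parameter-sets=Z0IAHukBQHpA,aM4BqA==

Since those values differ per stream, a client can't play the stream
without first fetching this description, hence the mandatory RTSP server.)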

Thanks.

My application consists of an embedded processor running Live555.  To
allow people to view video remotely, we need to push video to a
central server where it is then re-distributed; this is to circumvent
NAT/firewalls/etc.  For this model, the embedded server has to push
video to some central server.  Is there any way to do this with
Live555 without making some rather radical modifications?  And if yes,
any suggestions as to a good starting place in the code base to make
those modifications?

(On a side note, I did tear apart QuickTime Broadcaster with
Wireshark, and it seems they're using some weird combination of RTSP
ANNOUNCE/RECORD and TCP interleaving to "push" video to a server over
a single port.  Is this a common capability of RTSP implementations?
It seems like it isn't.)
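
Roughly, the exchange I saw looked like the following (paraphrased from
the capture; the URL and header values here are illustrative, and the
exact headers QuickTime Broadcaster sends may differ):

    ANNOUNCE rtsp://server.example.com/live.sdp RTSP/1.0
    CSeq: 1
    Content-Type: application/sdp
    Content-Length: <length of SDP body>

    <SDP describing the stream being pushed>

    SETUP rtsp://server.example.com/live.sdp/track1 RTSP/1.0
    CSeq: 2
    Transport: RTP/AVP/TCP;unicast;interleaved=0-1;mode=record

    RECORD rtsp://server.example.com/live.sdp RTSP/1.0
    CSeq: 3
    Range: npt=0-

after which the RTP/RTCP packets flow client-to-server over the same TCP
connection, framed with the '$'-prefixed interleaved-channel headers, so
everything fits through one outbound port.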

As always, Ross, your advice is greatly appreciated.
