[Live-devel] Add an audio sub-session makes the video stop
Fabrice Triboix
ftriboix at falcon-one.com
Sun Dec 18 13:13:07 PST 2011
Dear Ross,
> Because your codec is MP3, you definitely *do not* need to reimplement the "getAuxSDPLine()" virtual function. I.e., if you have such a reimplementation, then you should remove it.
Many thanks for these details; that's enlightening!
> Our RTP output code - in this case, the "MPEG1or2AudioRTPSink" class - automatically takes care of packing an appropriate number of MP3 frames (along with required headers) into each outgoing RTP packet. If you have a "FramedSource" subclass that feeds into this, then all you need to do is feed it individual MP3 frames.
>
> (If your input source is a MP3 file, then you can just use a "MP3FileSource"; you don't need to write your own "FramedSource" subclass. If, however, your input source is a live source - e.g., from a MP3 encoder - then you will need to write your own "FramedSource" subclass that delivers one MP3 frame at a time.)
I am currently using a file, but eventually it will be a live source.
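For when I switch to a live source, I expect my subclass to look roughly
like the sketch below. This is only my understanding of the pattern:
"LiveMP3Source" and "readOneMP3FrameFromEncoder()" are placeholder names,
and the 44.1 kHz sample rate is an assumption.

    #include "FramedSource.hh"
    #include <sys/time.h>

    class LiveMP3Source: public FramedSource {
    public:
      static LiveMP3Source* createNew(UsageEnvironment& env) {
        return new LiveMP3Source(env);
      }

    protected:
      LiveMP3Source(UsageEnvironment& env): FramedSource(env) {}

    private:
      virtual void doGetNextFrame() {
        // Copy exactly one MP3 frame into the buffer provided downstream:
        fFrameSize = readOneMP3FrameFromEncoder(fTo, fMaxSize); // placeholder

        // A live source stamps each frame with wall-clock time:
        gettimeofday(&fPresentationTime, NULL);

        // One MPEG-1 Layer III frame holds 1152 samples; at 44.1 kHz that
        // is 1152 * 1000000 / 44100, roughly 26122 microseconds:
        fDurationInMicroseconds = 1152 * 1000000 / 44100;

        // Hand the frame to the downstream "MPEG1or2AudioRTPSink":
        FramedSource::afterGetting(this);
      }

      // Hypothetical encoder hook; returns the size of the copied frame:
      unsigned readOneMP3FrameFromEncoder(unsigned char* to, unsigned maxSize);
    };

(I call "FramedSource::afterGetting()" directly here because the frame is
assumed to be available synchronously.)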
I got the audio streaming, which is good, but it stutters. I looked at
the packet format with Wireshark but found no issues.
However, I found that RTP packets are apparently sent too quickly by my
class. With MP3AudioFileServerMediaSubsession, the audio is fine and the
RTP packets go out at a normal rate. If I record with openRTSP for a few
seconds using my class, the resulting audio file is about 3x larger than
the video file (with your class it is about 10x smaller).
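My current guess is that the send pacing comes from the timing my source
reports: if "fDurationInMicroseconds" is left at zero, I assume the sink
sends each packet as soon as the previous one is done. As a quick check,
I plan to add something like this to my "doGetNextFrame()" (a sketch):

    if (fDurationInMicroseconds == 0) {
      envir() << "warning: frame duration is 0; packets will go out back-to-back\n";
    }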
Do you have any idea what is happening here?
Many thanks for your support!
Fabrice