[Live-devel] Audio drift with live source
finlayson at live555.com
Wed Jan 13 15:10:49 PST 2010
>I am using MP2 audio encoding for which the compressed framesize is
>supposed to be 576 bytes (sampling rate is 32 KHz, single channel).
>However, occasionally fMaxSize in deliverFrame() is less than 576.
>It is 240 or so. When that happens, I write only 240 bytes to fTo,
>and assign fFrameSize to 240 instead of 576. It seems that these
>truncated frames are simply not being handled properly by RTPSink.
>In fact, they are not getting transmitted by RTPSink at all, and in
>addition they are causing a timestamp drift which builds up over time.
>Is my handling of the case where fMaxSize is < 576 correct? If not,
>what should be the right way?
Truncated data is just that - truncated (i.e., lost). If this
happens (because of a too-small buffer), then it's an error. It's
not something that our software tries to 'correct'.
So, your solution is to figure out why your downstream object's
buffer is getting too small, and then to fix that.
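If the downstream object does turn out to be an RTP sink, one setting worth checking is the static "OutPacketBuffer::maxSize", which caps the buffer that sinks allocate and must be set *before* the sink is created. A sketch of where that assignment would go (the surrounding setup code is assumed, not taken from your message):

```cpp
// Increase the output packet buffer *before* creating any RTPSink, so
// that the buffer space offered to the upstream source (fMaxSize) can
// accommodate full encoded frames:
OutPacketBuffer::maxSize = 100000;  // the default may be smaller than you need

RTPSink* audioSink = MPEG1or2AudioRTPSink::createNew(*env, rtpGroupsock);
```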
You haven't said what your downstream object is. Is it a
"MPEG1or2AudioRTPSink"? Is it a "MPEG1or2AudioStreamFramer"? Or is
it some other class that you wrote yourself?
Also, because you're encoding and streaming MPEG audio, I suggest
that you take a look at the "wis-streamer" code
<http://www.live555.com/wis-streamer/>, which also does this (plus
lots of other things). You may find it especially useful to look
at the code for "MPEGAudioEncoder" (in particular, its implementation).
Live Networks, Inc.