[Live-devel] Audio timestamping issue

Ross Finlayson finlayson at live555.com
Mon Nov 4 20:44:55 PST 2013


> I implemented a device source with Ross's help and made it work. Now I am getting nice video, but my audio is choppy. So my question is: what is the right way of timestamping audio and video? I read that if your source is live, then there is no need to set fFrameDurationInMicroSeconds

That's correct.  Because you are streaming from a live source, you shouldn't set "fDurationInMicroseconds" (sic).
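
For concreteness, here is a minimal sketch of the relevant part of a "DeviceSource"-style "FramedSource" subclass (the class name and the 'copy the frame' step are placeholders; the key point is simply that "fDurationInMicroseconds" is left alone for a live source):

void MyDeviceSource::deliverFrame() {
  if (!isCurrentlyAwaitingData()) return; // we're not ready for the data yet

  // ... copy the newly encoded frame into "fTo", and set "fFrameSize" ...

  // Live source: leave "fDurationInMicroseconds" at its default of 0.
  // The server will then send each frame as soon as it becomes available,
  // rather than pacing its output by a fixed per-frame duration (which
  // makes sense only for pre-recorded sources, such as files).
  //fDurationInMicroseconds = ...; // deliberately NOT set

  FramedSource::afterGetting(this);
}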


> In my case I am using FFmpeg to encode a live stream into MPEG-4 and AC-3, and then streaming it using live555. Should I be sending each frame with gettimeofday() as its presentation time? My understanding is that we need to calculate the delta between frames and set the timestamps accordingly, because it is possible that the encoder gives 2 consecutive frames in a short interval and then takes longer for the 3rd.

The presentation times - for both audio and video frames - should correspond to the times at which the frames were *generated*.  This ensures that when the receiver gets the incoming audio and video frames, it can feed them to its decoder(s) at the appropriate times, so that audio and video remain properly synchronized.

So, if your audio frames come in 'bunches', then you shouldn't just call "gettimeofday()" each time.  Instead, think about the times that the audio (and video) frames were really *generated*, and set appropriate presentation times for each.
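
For example (a sketch only; the sample rate and each frame's sample count are assumptions that you would replace with your encoder's actual values), you could anchor a wall-clock time once, at the start of capture, and then derive each audio frame's presentation time from the number of samples captured before it:

#include <sys/time.h>

static struct timeval basePTS;                 // wall-clock time of the first captured sample
static bool basePTSisSet = false;
static unsigned long long numSamplesSoFar = 0; // samples delivered before this frame
static unsigned const kSampleRate = 48000;     // assumption: 48 kHz capture

void setAudioPresentationTime(struct timeval& fPresentationTime,
                              unsigned samplesInThisFrame) {
  if (!basePTSisSet) {
    gettimeofday(&basePTS, NULL); // anchor once, at capture start
    basePTSisSet = true;
  }

  // Offset (in microseconds) of this frame's first sample from the anchor:
  unsigned long long uSeconds = (numSamplesSoFar*1000000ULL)/kSampleRate;
  fPresentationTime.tv_sec = basePTS.tv_sec + uSeconds/1000000;
  fPresentationTime.tv_usec = basePTS.tv_usec + uSeconds%1000000;
  if (fPresentationTime.tv_usec >= 1000000) { // normalize the microseconds carry
    fPresentationTime.tv_usec -= 1000000;
    ++fPresentationTime.tv_sec;
  }

  numSamplesSoFar += samplesInThisFrame;
}

This way, two frames that arrive from the encoder in quick succession still get presentation times that are the correct frame duration apart, and the receiver can play them out smoothly.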


Ross Finlayson
Live Networks, Inc.
http://www.live555.com/
