[Live-devel] Adding secondary audio track to existing H264 RTP stream

Ross Finlayson finlayson at live555.com
Wed Mar 28 13:29:02 PDT 2018


> How do I read this file at native speed (one packet every 21.3 ms) using ADTSFileSource?
> Using scheduleDelayedTask every 21.3 ms to fetch each new packet on time turned out to be a bad option - that approach was wrong and used a lot of CPU.

Marcin,

I’m not totally sure I understand what you’re trying to do - but if you are trying to read and stream from a pre-recorded file (with any type of media; not just ADTS), then you should never be calling “scheduleDelayedTask()” yourself.  Instead, your input source should just be setting “fDurationInMicroseconds” when it delivers each frame - and then our downstream software (“MultiFramedRTPSink”) that actually does the packetizing and transmission of the RTP audio data will automatically use those delays to figure out when to transmit each packet (and then continue reading from the input file).  Thus your only task is to make sure that “fDurationInMicroseconds” is set properly for each input frame.
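
To illustrate the arithmetic (a sketch, not code from the library): an AAC frame carries 1024 samples, so its duration - and hence “fDurationInMicroseconds” - depends only on the sampling frequency.  At 48000 Hz it works out to 21333 microseconds, i.e. the ~21.3 ms packet interval mentioned above.

// Hypothetical helper, shown only to make the arithmetic concrete:
unsigned aacFrameDurationMicroseconds(unsigned samplingFrequency) {
  return (1024u * 1000000u) / samplingFrequency; // e.g. 21333 at 48000 Hz
}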

If you look at the code for “ADTSAudioFileSource.cpp”, you’ll see that it already sets “fDurationInMicroseconds” for each audio frame that it delivers.  If you are using that code ‘as is’, then you should not need to do anything else.  If, however, you are writing your own class (i.e., a subclass of “FramedSource” or “FramedFileSource”) to implement your input audio source, then you will need to set “fDurationInMicroseconds” (and “fPresentationTime”) for each frame.
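
For illustration, here is a minimal sketch of such a subclass.  (The class and member names - “MyADTSSource”, “fFid”, “fSamplingFrequency” - are hypothetical, and the actual reading of a frame from the file is elided; the point is simply where “fPresentationTime” and “fDurationInMicroseconds” get set.)

#include "FramedSource.hh"
#include "GroupsockHelper.hh" // for gettimeofday() on all platforms
#include <cstdio>

class MyADTSSource: public FramedSource {
public:
  MyADTSSource(UsageEnvironment& env, FILE* fid, unsigned samplingFrequency)
    : FramedSource(env), fFid(fid), fSamplingFrequency(samplingFrequency) {
    fPresentationTime.tv_sec = 0; fPresentationTime.tv_usec = 0;
  }

private:
  virtual void doGetNextFrame() {
    // ... read one ADTS frame from "fFid" into "fTo" here, setting
    //     "fFrameSize" (and "fNumTruncatedBytes" if it exceeds "fMaxSize") ...

    // Presentation time: wall-clock time for the first frame, then
    // advanced by each frame's duration thereafter:
    if (fPresentationTime.tv_sec == 0 && fPresentationTime.tv_usec == 0) {
      gettimeofday(&fPresentationTime, NULL);
    } else {
      unsigned uSeconds = fPresentationTime.tv_usec + fDurationInMicroseconds;
      fPresentationTime.tv_sec += uSeconds/1000000;
      fPresentationTime.tv_usec = uSeconds%1000000;
    }

    // Each AAC frame contains 1024 samples, so:
    fDurationInMicroseconds = (1024u*1000000u)/fSamplingFrequency;

    // Complete delivery of this frame.  (Doing so via the task scheduler
    // avoids possible deep recursion into "doGetNextFrame()".)
    nextTask() = envir().taskScheduler().scheduleDelayedTask(0,
        (TaskFunc*)FramedSource::afterGetting, this);
  }

  FILE* fFid;
  unsigned fSamplingFrequency;
};

Note that this sketch never calls “scheduleDelayedTask()” with a non-zero delay itself; the pacing comes entirely from “fDurationInMicroseconds”, which “MultiFramedRTPSink” uses as described above.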

I hope this helps.


Ross Finlayson
Live Networks, Inc.
http://www.live555.com/



