[Live-devel] RTP packet size

Sid Price sidprice at softtools.com
Sat Aug 25 15:21:33 PDT 2012


Thank you, excellent insight.

 

I apologize for not using the correct terminology; I'm still learning here.

 

The requirement is being set by the third party. Their system is used in
broadcasting and is a proprietary, closed system with its own RTP
implementation, so for these trials I need to follow their specification.

 

>>> Now, you seem to be saying that you want a smaller RTP packet, one that
contains only 4 ms of audio - i.e., 192 samples.  But why? 

 

Yes, I understand this; it is a requirement of the client device we need to
work with. I also realize, and told them, that it would increase the overhead.
Since this piece of equipment (the client) is in production, we need to
comply with what it needs, at least for the trial. I am sure that changes
may be possible later once the project gets a green light.

 

Sid.

 

From: live-devel-bounces at ns.live555.com
[mailto:live-devel-bounces at ns.live555.com] On Behalf Of Ross Finlayson
Sent: Saturday, August 25, 2012 3:32 PM
To: LIVE555 Streaming Media - development & use
Subject: Re: [Live-devel] RTP packet size

 

I need to do some integration testing with a proprietary piece of hardware
as part of the continuing proof of concept for the project I am working on.
The hardware I need to stream audio to (from an uncompressed WAV file for
the test) requires a sample rate of 48 kHz, and we have sources prepared for
that. It also requires that the audio frames being streamed are not
fragmented, so I need to send a 4 ms frame of audio data rather than the
8 ms that the server appears to send right now. Could someone point me to the
setting or parameter in the library that would enable me to set the frame
size to achieve this, please? I have searched through the code, but so far I
have not been able to identify where this is controlled.

 

There's a bit of confusion here, I think.  First, audio from a WAV file
consists of 'samples', not 'frames'.  Each sample is usually only 16 bits
(i.e., 2 bytes), I think.  So WAV (really PCM) audio samples are nowhere near
large enough to get fragmented over outgoing RTP packets.

 

OTOH, the LIVE555 code works with 'frames' - delivering one frame at a time.
For the code to run efficiently, frames need to be much larger than 2 bytes,
so, for streaming PCM audio, we group samples into much larger 'frames'.  We
also want these 'frames' to be small enough to fit within an outgoing RTP
packet.

 

The code for computing this 'preferred frame size' is at lines 201-204 of
"WAVAudioFileSource.cpp".  In your case - 48 kHz audio, 2 channels, 16
bits-per-sample (I think) - this will give you a preferred frame size of
1400 bytes: i.e., 350 samples.  For a 48 kHz sample rate, this means that
each outgoing RTP packet will contain about 7 ms of audio.
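
To illustrate, here is a rough standalone sketch of that calculation - not
the library's actual code; the ~1400-byte payload target is just an
assumption that matches the numbers quoted above:

#include <cstdio>

int main() {
  // Assumed parameters, matching the case discussed above:
  unsigned samplingFrequency = 48000; // Hz
  unsigned numChannels       = 2;
  unsigned bitsPerSample     = 16;

  // Bytes per (multi-channel) sample: 2 channels * 16 bits = 4 bytes
  unsigned bytesPerSample = (numChannels * bitsPerSample) / 8;

  // Assumed payload target, small enough to fit in one outgoing RTP packet:
  unsigned maxPayloadBytes = 1400;

  unsigned samplesPerFrame    = maxPayloadBytes / bytesPerSample;   // 350 samples
  unsigned preferredFrameSize = samplesPerFrame * bytesPerSample;   // 1400 bytes

  double msPerPacket = 1000.0 * samplesPerFrame / samplingFrequency; // ~7.3 ms

  printf("preferred frame size: %u bytes (%u samples, ~%.2f ms of audio)\n",
         preferredFrameSize, samplesPerFrame, msPerPacket);
  return 0;
}

Running it prints 1400 bytes, 350 samples, ~7.29 ms - the figures above.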

 

Now, you seem to be saying that you want a smaller RTP packet, one that
contains only 4 ms of audio - i.e., 192 samples.  But why?  Having a smaller
RTP packet (roughly half the size) will lead to increased overhead (because
of the need for almost twice as many Ethernet packets, each with its own RTP
header).  So it's probably not something that you really want.
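
As a rough back-of-the-envelope illustration (the 40-byte figure assumes
typical IPv4 + UDP + RTP headers, with no header extensions):

#include <cstdio>

int main() {
  const double sampleRate  = 48000.0;      // Hz
  const double headerBytes = 20 + 8 + 12;  // assumed IPv4 + UDP + RTP header sizes

  // ~7.3 ms packets (350 samples) vs 4 ms packets (192 samples):
  const unsigned samplesPerPacket[] = {350, 192};
  for (unsigned s : samplesPerPacket) {
    double packetsPerSecond = sampleRate / s;
    double headerOverhead   = packetsPerSecond * headerBytes;  // header bytes/second
    printf("%3u samples/packet: %6.1f packets/s, ~%5.0f header bytes/s\n",
           s, packetsPerSecond, headerOverhead);
  }
  return 0;
}

That works out to roughly 250 packets per second instead of about 137 -
close to twice as many, each carrying its own set of headers.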

 

(Note also that RTP packets do not get seen by receivers - only by the
lower-level LIVE555 reception code.  An audio receiver still sees only a
sequence of audio samples, regardless of the underlying RTP packet size that
was used to transmit them.)

 

So, I don't think that you have any real need to change anything.

 

Ross Finlayson
Live Networks, Inc.
http://www.live555.com/ 

 
