[Live-devel] Problems with Streaming of AMR

Martin marthi at graphics.cs.uni-sb.de
Tue Oct 31 07:57:41 PST 2006


Hi,

Ross Finlayson wrote:
>>Is there something special I have to consider when using AMR?
> 
> 
> Yes, when delivering data to an "AMRAudioRTPSink":
> 1/ The input source must be an "AMRAudioSource" (usually a subclass 
> of this), and
> 2/ Each input frame must include the 1-byte AMR header (at the 
> beginning).  If I recall correctly, the FFMPEG AMR encoding software 
> doesn't include this header, by default.
> 
> In any case, I suggest looking at the "wis-streamer" code: 
> <http://www.live555.com/wis-streamer/>, because it includes a 
> software AMR audio encoder (using the FFMPEG code).

Thanks a lot for your help!
Now that I'm using a subclass of AMRAudioSource, the method
doGetNextFrame() is being called. Unfortunately, I get an error message
from ffmpeg on the receiver side:
[amr_nb @ 0xb45fc008]amr frame too short (12, should be 32)
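
Simplified, my doGetNextFrame() currently looks roughly like this
(encodeOneFrame() is just a placeholder for my actual encoder call):

void MyAMRSource::doGetNextFrame() {
  // placeholder: writes one encoded AMR frame into fTo and returns its length
  fFrameSize = encodeOneFrame(fTo, fMaxSize);
  fNumTruncatedBytes = 0;
  gettimeofday(&fPresentationTime, NULL);
  fDurationInMicroseconds = 20000; // one AMR frame covers 20 ms
  FramedSource::afterGetting(this);
}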

So, as you suggested, I looked into the wis-streamer code and added the
part that sets the AMR header:

enum Mode ourAMRMode = MR122; // the only mode that we support
fLastFrameHeader = toc_byte[ourAMRMode]; // 1-byte frame header (ToC) for this mode
fNumTruncatedBytes = 0;
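
If I understand the storage format correctly, an MR122 frame should then
consist of the 1-byte header followed by 31 bytes of speech data, i.e. 32
bytes in total, which is what ffmpeg expects. Roughly:

// my reading of the octet-aligned header (RFC 3267): F=0, FT=7 (MR122), Q=1
u_int8_t header = (0 << 7) | (7 << 3) | (1 << 2);  // == 0x3C == toc_byte[MR122]
unsigned frameSize = 1 /*header*/ + 31 /*speech bytes, 244 bits padded*/;  // == 32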

But on the receiver side the frame is still too short:

[amr_nb @ 0xb45fc008]amr frame too short (31, should be 32)

Maybe I'm still missing something?

Another thing that could be the problem is the following: as the
transmitted AMR frames are not interleaved (because interleaving is not
supported), I use the RawAMRRTPSource rtp_source instead of the
AMRAudioSource:

RTPSource* rtp_source = 0;
AMRAudioSource* audio_source =
    AMRAudioRTPSource::createNew(env, rtp_sock, rtp_source, 97);
// audio_source is not used any further; rtp_source is used instead.

Is it correct that I can use rtp_source here? What is the difference
between the two? If I were to use audio_source instead, what would the
purpose of rtp_source be in this example?
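
To make the question concrete, this is the wiring I had in mind. My
(possibly wrong) assumption is that audio_source is a filter sitting on
top of rtp_source, and that rtp_source itself is only needed where an
RTPSource* is required (my_sink, afterPlaying, rtcp_sock, CNAME and the
bandwidth value are placeholders from my own setup):

// read the AMR frames from audio_source ...
my_sink->startPlaying(*audio_source, afterPlaying, NULL);

// ... and pass rtp_source only where an RTPSource* is expected, e.g. for RTCP:
RTCPInstance::createNew(env, rtcp_sock, estimatedSessionBandwidth,
                        CNAME, NULL /*no RTPSink on the receiver*/, rtp_source);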

Thanks in advance for your help!

Martin

