<div>Hi,</div><div><br></div><div>I have subclassed MediaSink in an attempt to provide an adapter class between live555 and the audio APIs on iOS (iPhone). The RTSP session is set up using the openRTSP test program (except my custom subclass of MediaSink is used in place of FileSink).</div>
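For context, here is roughly the shape of my sink (class and method names like AudioQueueSink and deliverToAudioAPI are my own placeholders; the structure follows live555's FileSink pattern):

```cpp
#include "MediaSink.hh"   // live555

// Sketch only - AudioQueueSink is my adapter between live555 and the
// iOS audio APIs; the actual delivery call is elided.
class AudioQueueSink : public MediaSink {
public:
  static AudioQueueSink* createNew(UsageEnvironment& env,
                                   unsigned bufferSize = 4096) {
    return new AudioQueueSink(env, bufferSize);
  }

protected:
  AudioQueueSink(UsageEnvironment& env, unsigned bufferSize)
    : MediaSink(env), fBufferSize(bufferSize) {
    fBuffer = new unsigned char[bufferSize];
  }
  virtual ~AudioQueueSink() { delete[] fBuffer; }

  // Called by the event loop to (re)start frame delivery:
  virtual Boolean continuePlaying() {
    if (fSource == NULL) return False;
    fSource->getNextFrame(fBuffer, fBufferSize,
                          afterGettingFrame, this,
                          onSourceClosure, this);
    return True;
  }

  static void afterGettingFrame(void* clientData, unsigned frameSize,
                                unsigned /*numTruncatedBytes*/,
                                struct timeval presentationTime,
                                unsigned /*durationInMicroseconds*/) {
    AudioQueueSink* sink = (AudioQueueSink*)clientData;
    // Hand the frame to the platform audio code here (hypothetical):
    // sink->deliverToAudioAPI(sink->fBuffer, frameSize, presentationTime);
    sink->continuePlaying();   // request the next frame
  }

  unsigned char* fBuffer;
  unsigned fBufferSize;
};
```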
<div><br></div><div>The stream being received is an AAC-hbr stream.</div><div><br></div><div><div>I am getting *some* audio to decode/render, but it seems as though the data is being consumed faster than it is being supplied, causing audio rendering to stop (the problem is quite likely downstream of live555, but I'm trying to rule things out).</div>
</div><div><br></div><div>I just want to double-check that each frame received from the MPEG4GenericRTPSource is actually a depacketized, "raw" AAC frame - i.e. stripped of all RTP and AU headers etc., and ready to be sent on for decoding. </div>
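In case it helps clarify what I'm doing downstream: since the frames arrive as raw access units, I'm currently prefixing each one with a 7-byte ADTS header before handing it to the decoder, along these lines (the helper name is mine; the field values assume MPEG-4 AAC without CRC):

```cpp
#include <cstdint>

// Hypothetical helper (name is mine): build a 7-byte ADTS header so a raw
// AAC access unit can be fed to decoders that expect ADTS framing.
// profile:  audio object type (2 = AAC-LC)
// sfIndex:  sampling-frequency index (4 = 44100 Hz)
// frameLen: payload size in bytes (header excluded)
void makeADTSHeader(uint8_t hdr[7], unsigned profile, unsigned sfIndex,
                    unsigned channels, unsigned frameLen) {
  unsigned fullLen = frameLen + 7;           // frame length includes header
  hdr[0] = 0xFF;                             // syncword (high 8 bits)
  hdr[1] = 0xF1;                             // syncword, MPEG-4, layer 0, no CRC
  hdr[2] = ((profile - 1) << 6) | (sfIndex << 2) | (channels >> 2);
  hdr[3] = ((channels & 0x3) << 6) | (fullLen >> 11);
  hdr[4] = (fullLen >> 3) & 0xFF;
  hdr[5] = ((fullLen & 0x7) << 5) | 0x1F;    // buffer fullness = 0x7FF
  hdr[6] = 0xFC;                             // fullness low bits, 1 raw block
}
```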
<div><br></div><div>Also, the server (in this case Darwin 5.5.5) is sending out multi-frame RTP packets (about 6 or 7 frames per RTP packet) - my sink is being notified of new data, but with the same presentation timestamp for 6 or 7 frames in a row (it then increases). I'm guessing that the MPEG4GenericRTPSource is setting the pts based on the RTP timestamp, rather than calculating it for each frame within the packet, and that's why I'm seeing this behaviour. Is this correct? The pts is ignored by my code anyway - again, I'm just trying to understand things.</div>
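For my own understanding: if I ever did need per-frame timestamps, I assume they could be derived by offsetting the packet's pts by 1024 samples per AAC frame, something like this (helper name is mine; assumes AAC-LC's 1024-sample frames):

```cpp
#include <sys/time.h>

// Hypothetical helper (name is mine): derive the i-th frame's presentation
// time from the packet-level pts, assuming 1024 samples per AAC frame.
struct timeval framePTS(struct timeval packetPTS, unsigned frameIndex,
                        unsigned sampleRate) {
  const unsigned samplesPerFrame = 1024;     // AAC-LC frame size
  long long offsetUs =
      (long long)frameIndex * samplesPerFrame * 1000000LL / sampleRate;
  packetPTS.tv_sec  += offsetUs / 1000000;
  packetPTS.tv_usec += offsetUs % 1000000;
  if (packetPTS.tv_usec >= 1000000) {        // carry microseconds into seconds
    packetPTS.tv_sec  += 1;
    packetPTS.tv_usec -= 1000000;
  }
  return packetPTS;
}
```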
<div><br></div><div>Regards,</div><div>Jon Burgess</div><div><br></div>