[Live-devel] Timestamp gap in RTCP Report for MPEG1or2VideoStreamFramer

Ross Finlayson finlayson at live555.com
Tue Feb 5 10:32:11 PST 2008


You need to stop thinking about RTP timestamps.  Instead, think about 
presentation times.

At the RTP sender end, you provide - for each frame of media data - a 
presentation time.

At the RTP receiver end, you get - for each frame of media data - a 
presentation time.  This presentation time is used for decoding (and 
a/v sync).
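(As a concrete illustration: in live555, a "FramedSource" subclass sets its "fPresentationTime" member before delivering each frame.  The helper below is a hypothetical sketch, not live555 code, showing how successive per-frame presentation times might be computed from a frame duration.)

```cpp
#include <sys/time.h>

// Hypothetical helper (illustrative, not part of live555): given the
// previous frame's presentation time and the frame duration in
// microseconds, compute the next frame's presentation time, carrying
// microsecond overflow into the seconds field.
struct timeval nextPresentationTime(struct timeval prev, unsigned durationUs) {
  prev.tv_usec += durationUs;
  prev.tv_sec += prev.tv_usec / 1000000;
  prev.tv_usec %= 1000000;
  return prev;
}
```

For a 25 fps stream, "durationUs" would be 40000, and each frame's presentation time advances by exactly that amount.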

If you also implement RTCP, then (after the initial RTCP "SR" is 
received) the presentation times that you get at the RTP receiver end 
will be *exactly the same* as the presentation times that you 
provided at the RTP sender end.  You can verify this for yourself.

In other words, you can - and should - think of the RTP/RTCP 
implementation, at the sender and receiver ends, as being a 'black 
box' that takes presentation times as input at one end, and produces 
*the same* presentation times as output at the other end.  This 
'black box' happens to work by using RTP timestamps and RTCP reports. 
But you shouldn't concern yourself with this.  This code works. 
Trust me.
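(For the curious, the 'black box' works roughly like this - the sketch below is illustrative arithmetic, not live555's actual code.  The sender converts each presentation time into a 90 kHz RTP timestamp; once the receiver has seen an RTCP "SR", which reports the wall-clock time corresponding to one RTP timestamp, it can convert every later RTP timestamp back into the original presentation time.)

```cpp
#include <cstdint>

const unsigned kClockRate = 90000; // 90 kHz RTP timestamp clock for video

// Sender side (illustrative): presentation time -> RTP timestamp.
// 'timestampBase' is the sender's random initial timestamp offset.
uint32_t toRtpTimestamp(long sec, long usec, uint32_t timestampBase) {
  return timestampBase
       + (uint32_t)(sec * (int64_t)kClockRate)
       + (uint32_t)((usec * (int64_t)kClockRate) / 1000000);
}

// Receiver side (illustrative): an RTCP SR reported that RTP timestamp
// 'srRtp' corresponds to wall-clock time 'srSec'/'srUsec'.  Any later
// RTP timestamp can then be mapped back to a presentation time.
void toPresentationTime(uint32_t rtp, uint32_t srRtp, long srSec, long srUsec,
                        long& outSec, long& outUsec) {
  int32_t diffTicks = (int32_t)(rtp - srRtp); // elapsed 90 kHz ticks since the SR
  int64_t usecTotal = (int64_t)srSec * 1000000 + srUsec
                    + (diffTicks * (int64_t)1000000) / kClockRate;
  outSec  = (long)(usecTotal / 1000000);
  outUsec = (long)(usecTotal % 1000000);
}
```

Round-tripping a presentation time through both conversions returns the same value, which is exactly the property described above: the receiver recovers the sender's presentation times once the SR mapping is known.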

Thus, the thing that you need to worry about is presentation times. 
Look at the *presentation times* that are coming out of 
"MPEG1or2VideoStreamFramer".  Are they correct?  Since your stream 
is I-frame only, it's not inconceivable that there is a problem 
with the way that "MPEG1or2VideoStreamFramer" generates presentation 
times for your streams.  You should be able to check this easily for 
yourself.  But don't concern yourself with RTP timestamps; that's a 
total red herring.
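(One way to check: collect the presentation times coming out of the framer and verify that consecutive frames are spaced by the expected frame duration.  A minimal, self-contained checker - the function name and tolerance parameter are my own, not live555's:)

```cpp
#include <sys/time.h>
#include <cstdlib>

// Hypothetical sanity check: returns true if every pair of consecutive
// presentation times differs by 'expectedDeltaUs' microseconds, to
// within 'toleranceUs'.  A large unexpected gap (like the one reported)
// would make this return false.
bool presentationTimesLookSane(const struct timeval* pts, int n,
                               long expectedDeltaUs, long toleranceUs) {
  for (int i = 1; i < n; ++i) {
    long deltaUs = (pts[i].tv_sec - pts[i-1].tv_sec) * 1000000
                 + (pts[i].tv_usec - pts[i-1].tv_usec);
    if (labs(deltaUs - expectedDeltaUs) > toleranceUs) return false;
  }
  return true;
}
```

For a 25 fps stream you'd call this with expectedDeltaUs = 40000; if it fails, the problem is in the presentation times the framer produces, not in RTP.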

Note also that if your input data consists of discrete MPEG video 
frames, rather than a byte stream, then you could instead use 
"MPEG1or2VideoStreamDiscreteFramer", which is simpler and more 
efficient than "MPEG1or2VideoStreamFramer".
-- 

Ross Finlayson
Live Networks, Inc.
http://www.live555.com/

