Hi Ross, <br><br>Thanks for your explanations. I am somewhat unsure about the proper setup of fPresentationTime and fDurationInMicroseconds.<br>My H.264 video stream comes in the order of pps, sps, I, P, P, P... (29 times) each second.<br>
Is the following code correct? Thank you.<br><br>::continueReadProcessing() {<br><br> ......<br> if (acquiredFrameSize > 0) {<br> // We were able to acquire a frame from the input.<br> // It has already been copied to the reader's space.<br>
fFrameSize = acquiredFrameSize;<br> fNumTruncatedBytes = 0; <br><br> seq++; <br> <br> // Compute "fPresentationTime" <br> if ( the frame != pps or sps ) // regular video frames<br>
fPresentationTime.tv_usec += (long) 33*1000;<br>
else // pps or sps , presentation time does not change<br> ;<br> <br> while (fPresentationTime.tv_usec >= 1000000) {<br> fPresentationTime.tv_usec -= 1000000;<br> ++fPresentationTime.tv_sec;<br>
}<br> <br> if ( the frame != pps or sps ) // regular video frames<br>
fDurationInMicroseconds = (long) 33*1000;<br>
else // pps or sps , duration is 0<br>
fDurationInMicroseconds = 0; <br><br> // Call our own 'after getting' function. Because we're not a 'leaf'<br> // source, we can call this directly, without risking infinite recursion.<br>
afterGetting(this);<br> }<br> .....................<br><br>}<br><br>>I need to understand the value of fPresentationTime<br>
>fDurationInMicroseconds. Those two are used in almost all<br>
>videoframers.<br>
>I have an IP camera, which sends out video frames in the order of<br>
>pps, sps, I, P, P, P...(29 times) in a second. So should the<br>
>fPresentationTime/fDurationInMicroseconds be set as 1000/32<br>
>(including both pps, sps and video frames) ms or 1000/30 (only the<br>
>video frames) ms?<br>
<br>
Note that "fPresentationTime" and "fDurationInMicroseconds" are<br>
separate variables, and both should be set (although, if you know<br>
that your framer will always be reading from a live source (rather<br>
than a file), you can probably omit setting<br>
"fDurationInMicroseconds").<br>
<br>
(Note: Because you mention "PPS" and "SPS", I assume that you're<br>
referring specifically to H.264 video.)<br>
<br>
"fDurationInMicroseconds" should be set to 1000000/framerate for the<br>
NAL unit that ends a video frame (Note: This will be the NAL unit for<br>
which your reimplemented "currentNALUnitEndsAccessUnit()" virtual<br>
function will return True), and should be set to 0 for all other NAL<br>
units.<br>
<br>
Similarly, all NAL units that make up a single video frame (including<br>
any PPS and SPS NAL units) should be given the same<br>
"fPresentationTime" value (i.e., the presentation time of the video<br>
frame).<br>
<br>
>It looks like the framer puts several video frames in a single RTP packet.<br>
<br>
No, it's the "H264VideoRTPSink" class (i.e., our implementation of<br>
the RTP payload format for H.264) that takes care of packing NAL<br>
units into RTP packets. You don't need to know or care about this.<br>
Just feed the "H264VideoRTPSink" one NAL unit at a time.<br>
--<br>
<br>
Ross Finlayson<br>
Live Networks, Inc.<br>
<a href="http://www.live555.com/" target="_blank">http://www.live555.com/</a></div>