On 7/31/05, Ross Finlayson <finlayson@live.com> wrote:
> At 08:43 PM 7/30/2005, you wrote:
> >Hello, all
> >I've read the FAQ about synchronization. It says that the parameter
> >presentationTime passed to afterGettingFrame() can be used to synchronize,
>
> Yes, but note that this FAQ applies to *receiving* RTP streams (i.e.,
> over a network). I.e., it applies to subclasses of "RTPSource".
>
> > but I can't figure out how to use the parameter. I can't even
> > understand the exact meaning of the parameter. Is it exactly the same
> > as MediaSource::fPresentationTime? In other words, if in my
> > MultiFramedMediaSource::doGetNextFrame() implementation, I set
> > fPresentationTime={1000,1000} for one particular frame, the
> > receiver should get a presentationTime={1000,1000} for this frame
> > when calling afterGettingFrame, shouldn't it?
>
> It's not clear from your description exactly what you are using your
> "MultiFramedMediaSource" for. But, if you are using it as a source
> for an outgoing RTP stream (i.e., feeding it into an "RTPSink"), then
> your presentation times should be aligned with real, 'wallclock'
> time. I.e., the first time you generate "fPresentationTime" in your
> source object, you should do so using the value obtained by calling
> "gettimeofday()". Plus, of course, you must have an "RTCPInstance" -
> at both the sender and receiver - for synchronization to work.

Yes, I use a FramedSource subclass as the source for a SimpleRTPSink. My implementation is:

class BufferedFramedSource: public FramedSource {
public:
    BufferedFramedSource(UsageEnvironment& env,
                         const std::string& type);
    virtual ~BufferedFramedSource();
private:
    void doGetNextFrame();

    std::string category;
};


BufferedFramedSource::BufferedFramedSource(UsageEnvironment& env,
                                           const std::string& type)
    : FramedSource(env), category(type)
{
    gettimeofday(&fPresentationTime, NULL);
}

BufferedFramedSource::~BufferedFramedSource()
{
}

void BufferedFramedSource::doGetNextFrame()
{
    unsigned frameSize = 1024;
    if (fMaxSize < frameSize) {
        fNumTruncatedBytes = frameSize - fMaxSize;
        fFrameSize = fMaxSize;
    } else {
        fNumTruncatedBytes = 0;
        fFrameSize = frameSize;
    }
    fPresentationTime.tv_sec += 10;

    // fill the buffer with a description of fPresentationTime
    std::stringstream ss(std::stringstream::out);
    ss << category
       << " presentation time : " << fPresentationTime.tv_sec << "s, "
       << fPresentationTime.tv_usec << " microseconds";
    memcpy(fTo, ss.str().c_str(), fFrameSize);

    fDurationInMicroseconds = 1000;

    nextTask() = envir().taskScheduler().scheduleDelayedTask(0,
                     (TaskFunc*)afterGetting, this);
}

On the client side, I use SimpleRTPSource to receive the stream, and try to print the presentation time associated with each frame.

class BufferedSink: public MediaSink {
public:
    BufferedSink(UsageEnvironment& env,
                 ofstream& file,
                 unsigned maxSize = 1024);
    ~BufferedSink();
protected:
    static void afterGettingFrame(void* clientData,
                                  unsigned frameSize,
                                  unsigned numTruncatedBytes,
                                  struct timeval presentationTime,
                                  unsigned durationInMicroseconds);

    virtual Boolean continuePlaying();
private:
    virtual void afterGettingFrame1(unsigned frameSize,
                                    struct timeval presentationTime,
                                    unsigned durationInMicroseconds);
    unsigned char* fBuffer;
    unsigned fBufferSize;
    ofstream& ofile;
};


BufferedSink::BufferedSink(UsageEnvironment& env,
                           ofstream& file,
                           unsigned maxSize)
    : MediaSink(env), fBufferSize(maxSize),
      ofile(file)
{
    fBuffer = new unsigned char[maxSize];
}

BufferedSink::~BufferedSink()
{
    delete[] fBuffer;
}

void
BufferedSink::afterGettingFrame(void* clientData,
                                unsigned frameSize,
                                unsigned numTruncatedBytes,
                                struct timeval presentationTime,
                                unsigned durationInMicroseconds)
{
    BufferedSink* sink = (BufferedSink*)clientData;
    sink->afterGettingFrame1(frameSize,
                             presentationTime,
                             durationInMicroseconds);
}

void BufferedSink::afterGettingFrame1(unsigned frameSize,
                                      struct timeval presentationTime,
                                      unsigned durationInMicroseconds)
{
    ofile << "presentationTime: " << presentationTime.tv_sec << "s,"
          << presentationTime.tv_usec << "micros, "
          << "durationInMicroseconds: " << durationInMicroseconds
          << ", size: " << frameSize << " bytes" << endl;
    ofile << "Content of the buffer is:\n" << fBuffer << endl << endl;
    if (presentationTime.tv_sec == 0
        && presentationTime.tv_usec == 0) {
        // Treat an all-zero presentation time the same way as if the
        // input source had closed:
        std::cout << "End of the stream...";
        onSourceClosure(this);

        stopPlaying();
        return;
    }
    // Then try getting the next frame:
    continuePlaying();
}

Boolean
BufferedSink::continuePlaying()
{
    if (fSource == NULL) return False;

    fSource->getNextFrame(fBuffer, fBufferSize,
                          afterGettingFrame, this,
                          onSourceClosure, this);

    return True;
}

I also created an RTCPInstance for the stream. What I got from the output of the client is:

presentationTime: 1122790619s,599583micros, durationInMicroseconds: 0, size: 1024 bytes
Content of the buffer is:
video presentation time : 1123141214s, 536290microseconds

presentationTime: 1122790629s,599583micros, durationInMicroseconds: 0, size: 1024 bytes
Content of the buffer is:
video presentation time : 1123141224s, 536290microseconds

Apparently, the received presentationTime is not the same as the presentationTime set at the sender. What's wrong with my code?

> >Another attribute of MediaSource I don't understand is
> >fDurationInMicroseconds. In the above example, I set it to 10000 in
> >MultiFramedMediaSource::doGetNextFrame, but I always get 0 for
> >durationInMicroseconds in afterGettingFrame(), why?
>
> "duration in microseconds" - unlike "presentation time" - is not a
> parameter that gets passed within RTP (i.e., from sender to
> receiver). Instead, it is a parameter that is used only internally
> within a chain of "Media" objects. In particular, for data sources
> that feed into a "MultiFramedRTPSink" (subclass), "duration in
> microseconds" tells the "MultiFramedRTPSink" how long to delay
> between sending each outgoing RTP packet.
That is to say, the durationInMicroseconds reported by an RTPSource is useless?

> To understand how the "LIVE.COM Streaming Media" code works, I
> suggest that you start by examining the code for the existing demo
> applications ("testProgs"), before trying to modify this to develop
> your own code.

I have read testMP3Streamer and testMP3Receiver, but synchronization is not involved in them.
Thanks!

--
Best regards

Shixin Zeng