[Live-devel] Please help: I have difficulties implementing the right classes
Fabrice Triboix
fabricet at ovation.co.uk
Fri Sep 5 10:02:37 PDT 2014
Hi Ross,
Thanks a lot for these details; that makes things a lot clearer.
One last question: Let's assume fDurationInMicroseconds is 0; if the transmitting object immediately requests the next frame and doGetNextFrame() returns immediately because no frame is available, isn't there a risk that the application will use 100% of the CPU? How does the transmitting object avoid using 100% of the CPU?
Best regards,
Fabrice
________________________________
From: live-devel [live-devel-bounces at ns.live555.com] on behalf of Ross Finlayson [finlayson at live555.com]
Sent: 05 September 2014 16:38
To: LIVE555 Streaming Media - development & use
Subject: Re: [Live-devel] Please help: I have difficulties implementing the right classes
On Sep 5, 2014, at 1:14 AM, Fabrice Triboix <fabricet at ovation.co.uk> wrote:
You're thinking about this the wrong way. "doGetNextFrame()" gets called automatically (by the downstream, 'transmitting' object) whenever it needs a new NAL unit to transmit. So you should just deliver the next NAL unit (just one!) whenever "doGetNextFrame()" is called. If your encoder can generate more than one NAL unit at a time, then you'll need to enqueue them in some way.
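For illustration only, here's a rough sketch of what such a source might look like; the "NalUnit" struct and the "fNalQueue" member are just placeholders for whatever your encoder hands you, not part of the library:

  #include "FramedSource.hh"
  #include <queue>
  #include <cstring>
  #include <sys/time.h>

  struct NalUnit { unsigned char* data; unsigned size; }; // placeholder type

  class MyH264Source: public FramedSource {
  public:
    MyH264Source(UsageEnvironment& env): FramedSource(env) {}

  protected:
    virtual void doGetNextFrame() {
      if (fNalQueue.empty()) {
        // Nothing to deliver yet; arrange to be re-triggered when the encoder
        // produces a NAL unit (e.g. via an event trigger), then just return.
        return;
      }

      NalUnit nal = fNalQueue.front(); fNalQueue.pop();

      // Deliver exactly one NAL unit into the buffer provided by the sink:
      if (nal.size > fMaxSize) {
        fNumTruncatedBytes = nal.size - fMaxSize;
        fFrameSize = fMaxSize;
      } else {
        fNumTruncatedBytes = 0;
        fFrameSize = nal.size;
      }
      memmove(fTo, nal.data, fFrameSize);

      gettimeofday(&fPresentationTime, NULL); // live source: use 'now'
      // fDurationInMicroseconds is left at its default of 0 for a live source

      // Tell the downstream (transmitting) object that the data is ready:
      FramedSource::afterGetting(this);
    }

  private:
    std::queue<NalUnit> fNalQueue; // placeholder queue filled by the encoder
  };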
[Fabrice] I would be interested in understanding a bit more here. Is live555 a pull model?
Yes.
How does the transmitting object know when to send the next frame? Who/what decides to call doGetNextFrame(), and when?
The transmitting object (a "MultiFramedRTPSink" subclass) uses the frame duration parameter ("fDurationInMicroseconds") to figure out how long to wait - after transmitting a RTP packet - before requesting another 'frame' from the upstream object. (I put 'frame' in quotes here, because - for H.264 streaming - the piece of data being delivered is actually a H.264 NAL unit.) If "fDurationInMicroseconds" is 0 (its default value), then the transmitting object will request another 'frame' immediately after transmitting a RTP packet. If data is being delivered from a live encoder - as in your case - then that's OK, because the encoder won't actually deliver data until it becomes available.
That's why you don't need to set "fDurationInMicroseconds" if your data comes from a live source.
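For contrast, here's how the duration might typically be set inside "doGetNextFrame()" if the data instead came from pre-recorded material with a known, fixed frame rate. (This is just a fragment; "fIsLiveSource" and "fFrameRate" are hypothetical members of your own source class, not library fields:)

  // Inside your doGetNextFrame(), before calling FramedSource::afterGetting(this):
  if (fIsLiveSource) {
    // Live encoder: leave the default of 0, so the sink requests the next
    // NAL unit right away; pacing comes from the encoder's own output rate.
    fDurationInMicroseconds = 0;
  } else {
    // Pre-recorded data at a fixed frame rate: tell the sink how long to wait
    // after sending an RTP packet before requesting the next 'frame'.
    fDurationInMicroseconds = (unsigned)(1000000.0 / fFrameRate);
  }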
Ross Finlayson
Live Networks, Inc.
http://www.live555.com/