[Live-devel] Subclassing OnDemandServerMediaSubsession for Live Video Sources to stream h264

Bhawesh Kumar Choudhary bhawesh at vizexperts.com
Fri Jan 10 04:38:44 PST 2014


Thanks, the fix does work; I am now able to stream my live source. But the
video quality I am getting on the other side is quite glitchy. Basically I
am streaming encoded data from FFmpeg's output packets (AVPacket), but I
suspect that, since FFmpeg can put more than one NAL unit into a single
AVPacket, data may be getting lost in LIVE555 while streaming, and that this
is why playback on the client side produces corrupted images. I have
increased OutPacketBuffer::maxSize to 160000, but that doesn't seem to fix
the problem.
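
I am setting it roughly like this, before the RTPSink is created:

    // LIVE555's static limit on the size of a single outgoing frame
    // (declared in "MediaSink.hh").  It must be set before the RTPSink
    // is created; 160000 is the value I tried.
    OutPacketBuffer::maxSize = 160000;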

Does LIVE555 parse the NAL units that are packed inside FFmpeg's AVPacket,
or do I have to copy a single NAL unit at a time into my device source?
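
To illustrate what I mean by copying a single NAL unit at a time - a rough
sketch, assuming Annex-B start codes (00 00 01 or 00 00 00 01) in the
AVPacket; parseNALUnits() is just a name I made up:

    #include <cstddef>
    #include <cstdint>
    #include <utility>
    #include <vector>

    // Split an Annex-B buffer into (pointer, size) pairs, one per NAL
    // unit, with the start codes stripped - the form that a discrete
    // H.264 framer expects to receive one at a time.
    std::vector<std::pair<const uint8_t*, size_t> >
    parseNALUnits(const uint8_t* data, size_t size) {
      std::vector<std::pair<const uint8_t*, size_t> > nals;
      size_t cur = SIZE_MAX; // start of the current NAL payload
      for (size_t i = 0; i + 3 <= size; ++i) {
        if (data[i] == 0 && data[i+1] == 0 && data[i+2] == 1) {
          // The previous NAL ends here; trim the extra zero byte of a
          // four-byte start code.
          size_t end = (i > 0 && data[i-1] == 0) ? i - 1 : i;
          if (cur != SIZE_MAX) nals.push_back(std::make_pair(data + cur, end - cur));
          cur = i + 3; // payload begins just after the start code
          i += 2;      // skip the remainder of the start code
        }
      }
      if (cur != SIZE_MAX) nals.push_back(std::make_pair(data + cur, size - cur));
      return nals;
    }

Each doGetNextFrame()/deliverFrame() cycle would then hand LIVE555 exactly
one of these NAL units.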

 

Thanks

Bhawesh Kumar

VizExperts India Pvt. Ltd.

 

From: live-devel-bounces at ns.live555.com
[mailto:live-devel-bounces at ns.live555.com] On Behalf Of Ross Finlayson
Sent: 09 January 2014 01:59
To: LIVE555 Streaming Media - development & use
Subject: Re: [Live-devel] Subclassing OnDemandServerMediaSubsession for Live
Video Sources to stream h264

 

 

I am using the LIVE555 library to create an RTSP stream from a live device
source. I read the FAQ and the mailing list and subclassed all the required
classes:

1. FramedSource (using the "DeviceSource.cpp" model)

2. OnDemandServerMediaSubsession (using the
"H264VideoFileServerMediaSubsession" model)

But there are a few questions I have not been able to figure out:

1. In each call to doGetNextFrame() in my device source I assume that no
frame data is available yet, and I return. Instead, whenever I receive data
I trigger an event that schedules delivery of the next frame from my device
source. Is a status check (waiting for an event vs. running) required for
the LIVE555 event loop to schedule the new-data task in the device source's
signalNewFrameData() function?

 

The LIVE555 event loop (which you entered when you ran "doEventLoop()")
automatically figures out when "TaskScheduler::triggerEvent()" has been
called (from another thread), and calls the appropriate handler function
(which you registered when you called "createEventTrigger()"; i.e., the
"deliverFrame0()" function in the "DeviceSource" example code).


2. In my OnDemandServerMediaSubsession subclass I have implemented both
required functions, createNewStreamSource() and createNewRTPSink(). Since I
set reuseFirstSource to True in my OnDemandServerMediaSubsession subclass, I
keep a reference to my device source in a data member.

 

That's your problem.  Setting "reuseFirstSource" to True simply means that
only one instance of the source class will be created *at a time*,
regardless of the number of concurrent RTSP clients that have requested the
stream.  It does *not* mean that only one instance of the source class will
be created *ever*.  In fact, as you noticed, the
"OnDemandServerMediaSubsession" code creates an initial instance of the
source class (it uses this to generate the SDP description in response to
the first RTSP "DESCRIBE").  It then closes this object.  Later, when the
first RTSP client does an RTSP "SETUP", another instance of the source class
will be created.  (That instance will not get closed until the last
concurrent client does a "TEARDOWN".)

 

So, your code should allow for the possibility of more than one instance of
your data source class being instantiated (and later closed) - but
sequentially, not concurrently.  DO NOT modify the supplied
"OnDemandServerMediaSubsession" source code.

 

Ross Finlayson
Live Networks, Inc.
http://www.live555.com/ 

 
