[Live-devel] using live555 with in memory data

Ross Finlayson finlayson at live555.com
Tue Oct 8 12:11:09 PDT 2013


> I wish to use live555 to deliver in-memory NAL units created with x264.  In my application, I spawn the live555 event loop off into a separate thread from the one responsible for doing the encoding.  I am configuring my encoder to output NAL units that will fit inside the standard MTU.

You don't have to do that.  The LIVE555 code automatically takes care of fragmenting large NAL units into appropriate-sized RTP packets.

Nonetheless, small NAL units are still a good idea, because they reduce the effect of network packet loss.  It's always a good idea, for example, to break large 'I-frame' NAL units into multiple slices; they just don't need to be as small as a standard MTU (1500 bytes), because our software will take care of fragmenting them if they happen to be larger.
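For example, if you're using x264's C API, you can ask the encoder itself to cap slice sizes.  This is just an illustrative sketch - "i_slice_max_size" is x264's own parameter, but the preset/tune values and the 1400-byte figure are my own choices:

#include <x264.h>

x264_param_t param;
x264_param_default_preset(&param, "veryfast", "zerolatency");
// Ask x264 to split large frames into slices of at most ~1400 bytes
// (comfortably under a 1500-byte Ethernet MTU, leaving room for the
// RTP/UDP/IP headers).  This reduces the impact of packet loss, even
// though LIVE555 would fragment larger NAL units anyway.
param.i_slice_max_size = 1400;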


> Given what I've read so far, I should subclass ServerMediaSubsession and implement createNewStreamSource

Yes.


> making use of the ByteStreamMemorySource class in some way.

The "ByteStreamMemoryBufferSource" (sic) class is used to implement a memory buffer that acts like a file - i.e., with bytes that are read from it sequentially, until it reaches its end.  You could, in theory, use this to implement your H.264 streaming, feeding it into a "H264VideoStreamFramer" (i.e., in your "createNewStreamSource()" implementation).  However, the fact that the buffer is a fixed size is a problem.  If you haven't created all of your NAL units in advance, then that's a problem.  You could instead just use an OS pipe as input, and read it using a regular "ByteStreamFileSource".  However, because the size of the NAL units (created by you) are known in advance, it would be more efficient to have your own memory buffer - that contains just a single NAL unit at a time - and feed it into a "H264VideoStreamDiscreteFramer" (*not* a "H264VideoStreamFramer".  (Note that each NAL unit that you feed into a "H264VideoStreamDiscreteFramer" must *not* begin with a 0x000001 'start code'.)


> Given that I'll be changing the buffer to be read from, does this mean that I will have to create a new instance of the custom ServerMediaSubsession every time there is a new buffer to read from?

Absolutely not! 


> Or instead, will my custom ServerMediaSubsession need to handle the setup and teardown of ByteStreamMemorySource objects?

That's why I suggest not using "ByteStreamMemoryBufferSource" (see above).


> Or else, how is it anticipated that an in-memory location be used to pass data to the live555 event loop when the data is sourced from a different thread?  Would it be easier to simply memmove the data to be read into the read buffer instead of changing the read buffer's location?

I suggest that you look at the "DeviceSource" code ("liveMedia/DeviceSource.cpp"), and use that as a model for how to implement your "FramedSource" subclass (an instance of which you'll create in your implementation of "createNewStreamSource()").  Note in particular the code (the function "signalNewFrameData()") at the end of the file.  That is something that you could call from your non-LIVE555 thread to signal the availability of a new NAL unit.
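Here's a minimal sketch of such a "FramedSource" subclass, adapted from the "DeviceSource" pattern.  The class name "MyNALSource" and the single-slot buffer are my own placeholders - a real application would use a thread-safe queue between the encoder thread and the event loop:

#include "FramedSource.hh"
#include "GroupsockHelper.hh" // for gettimeofday()
#include <string.h>

class MyNALSource: public FramedSource {
public:
  static MyNALSource* createNew(UsageEnvironment& env) {
    return new MyNALSource(env);
  }

  // Called from the encoder thread: stash the new NAL unit (which must
  // NOT begin with a 0x000001 start code), then wake the event loop.
  // "triggerEvent()" is the one LIVE555 call that is safe to make from
  // another thread; everything else must run in the event-loop thread.
  void signalNewFrameData(u_int8_t const* nal, unsigned nalSize) {
    fNalData = nal; fNalSize = nalSize; // assumes 'nal' stays valid until delivered
    envir().taskScheduler().triggerEvent(fEventTriggerId, this);
  }

protected:
  MyNALSource(UsageEnvironment& env)
    : FramedSource(env), fNalData(NULL), fNalSize(0) {
    fEventTriggerId = envir().taskScheduler().createEventTrigger(deliverFrame0);
  }

private:
  virtual void doGetNextFrame() {
    // Downstream (the discrete framer) wants data.  If a NAL unit is
    // already waiting, deliver it now; otherwise delivery happens when
    // signalNewFrameData() triggers deliverFrame0().
    if (fNalData != NULL) deliverFrame();
  }

  static void deliverFrame0(void* clientData) {
    ((MyNALSource*)clientData)->deliverFrame();
  }

  void deliverFrame() {
    if (!isCurrentlyAwaitingData()) return; // downstream isn't ready yet

    if (fNalSize > fMaxSize) { // shouldn't happen with MTU-sized NAL units
      fFrameSize = fMaxSize;
      fNumTruncatedBytes = fNalSize - fMaxSize;
    } else {
      fFrameSize = fNalSize;
      fNumTruncatedBytes = 0;
    }
    gettimeofday(&fPresentationTime, NULL);
    memmove(fTo, fNalData, fFrameSize);
    fNalData = NULL; fNalSize = 0;

    FramedSource::afterGetting(this); // hand the NAL unit downstream
  }

  u_int8_t const* fNalData;
  unsigned fNalSize;
  EventTriggerId fEventTriggerId;
};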


> Also, I notice that the createNewStreamSource call returns a FramedSource object, which H264FUAFragmenter seems to implement, and whose methods suggest that it is intended to feed NAL units to some upper layer.  Given also that I am already creating NAL units small enough to be packed inside an individual RTP packet, and the H264FUAFragmenter class seems to have code to handle NAL units of varying sizes, is it the correct class to use when implementing createNewStreamSource as mentioned above?

Not directly.  "H264FUAFragmenter" is an internal class; "H264VideoRTPSink" creates and uses it automatically whenever a NAL unit needs to be fragmented.  Instead, the object created by your implementation of the "createNewRTPSink()" virtual function should be a "H264VideoRTPSink".  As I noted above, it automatically takes care of fragmenting large NAL units, if needed.
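For reference, a minimal "createNewRTPSink()" implementation for H.264 might look like this (assuming a subsession class named "MyH264MediaSubsession" - see the sketch below):

#include "H264VideoRTPSink.hh"

RTPSink* MyH264MediaSubsession::createNewRTPSink(
    Groupsock* rtpGroupsock,
    unsigned char rtpPayloadTypeIfDynamic,
    FramedSource* /*inputSource*/) {
  // H264VideoRTPSink packs small NAL units into RTP packets directly,
  // and uses FU-A fragmentation internally for NAL units that don't fit:
  return H264VideoRTPSink::createNew(envir(), rtpGroupsock,
                                     rtpPayloadTypeIfDynamic);
}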


>   I've read through some examples, but what I'm supposed to make happen inside createNewStreamSource hasn't clicked with me yet.

I suggest looking at the "H264VideoFileServerMediaSubsession" code as a model for how to implement your "ServerMediaSubsession" subclass.  The big difference will be the implementation of "createNewStreamSource()".  Your implementation should create an instance of your own "FramedSource" subclass (instead of "ByteStreamFileSource"), and feed it into a "H264VideoStreamDiscreteFramer" (instead of a "H264VideoStreamFramer").
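Putting the pieces together, here's a hedged sketch of such a subsession, deriving directly from "OnDemandServerMediaSubsession" (as "H264VideoFileServerMediaSubsession" ultimately does).  The class name and bitrate estimate are placeholders, and "MyNALSource" is the hypothetical source class sketched earlier:

#include "OnDemandServerMediaSubsession.hh"
#include "H264VideoStreamDiscreteFramer.hh"

class MyH264MediaSubsession: public OnDemandServerMediaSubsession {
public:
  static MyH264MediaSubsession* createNew(UsageEnvironment& env) {
    return new MyH264MediaSubsession(env);
  }

protected:
  MyH264MediaSubsession(UsageEnvironment& env)
    // reuseFirstSource = True: all clients share the one live encoder feed
    : OnDemandServerMediaSubsession(env, True /*reuseFirstSource*/) {}

  virtual FramedSource* createNewStreamSource(unsigned /*clientSessionId*/,
                                              unsigned& estBitrate) {
    estBitrate = 500; // kbps; a rough placeholder estimate
    // Wrap the live NAL source in a *discrete* framer, because it
    // delivers one complete NAL unit (without a start code) at a time:
    MyNALSource* nalSource = MyNALSource::createNew(envir());
    return H264VideoStreamDiscreteFramer::createNew(envir(), nalSource);
  }

  virtual RTPSink* createNewRTPSink(Groupsock* rtpGroupsock,
                                    unsigned char rtpPayloadTypeIfDynamic,
                                    FramedSource* inputSource);
  // (implemented as in the "createNewRTPSink()" sketch above)
};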

Ross Finlayson
Live Networks, Inc.
http://www.live555.com/
