[Live-devel] using live555 with in memory data
Joshua Kordani
jkordani at lsa2.com
Tue Oct 8 15:33:49 PDT 2013
Ross,
Thank you for your detailed response! I have responded inline.
On 10/8/13 3:11 PM, Ross Finlayson wrote:
> You don't have to do that. The LIVE555 code automatically takes care
> of fragmenting large NAL units into appropriate-sized RTP packets.
>
> Nonetheless, small NAL units are still a good idea, because they
> reduce the effect of network packet loss. It's always a good idea,
> for example, to break large 'I-frame' NAL units into multiple slices;
> they just don't need to be as small as a standard MTU (1500 bytes),
> because our software will take care of fragmenting them if they happen
> to be larger.
>
>
I was originally seeing warnings from your framework about the oversized NAL
units I was passing in (or at least, large NAL units that each encompassed an
entire frame), and since the software will be used in packet-lossy
environments, I figured I'd try to keep their size to a minimum. Being new to
both of these domains makes it hard to ask the right questions, so let me try
this: are there large NAL units that your software can split such that losing
one fragment does not cost the whole frame, as opposed to other large NAL
units for which the opposite is true? So far I've just naively reduced the
size of all NAL units because I didn't know any better.
> The "ByteStreamMemoryBufferSource" (sic) class is used to implement a
> memory buffer that acts like a file - i.e., with bytes that are read
> from it sequentially, until it reaches its end. You could, in theory,
> use this to implement your H.264 streaming, feeding it into a
> "H264VideoStreamFramer" (i.e., in your "createNewStreamSource()"
> implementation). However, the fact that the buffer is a fixed size is
> a problem. If you haven't created all of your NAL units in advance,
> then that's a problem. You could instead just use an OS pipe as
> input, and read it using a regular "ByteStreamFileSource". However,
> because the size of the NAL units (created by you) are known in
> advance, it would be more efficient to have your own memory buffer -
> that contains just a single NAL unit at a time - and feed it into a
> "H264VideoStreamDiscreteFramer" (*not* a "H264VideoStreamFramer").
> (Note that each NAL unit that you feed into
> a "H264VideoStreamDiscreteFramer" must *not* begin with a 0x000001
> 'start code'.)
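For concreteness, here is how I understand the start-code handling you
describe; `skipStartCode` is just a name I'm making up for my own code, not
anything in the LIVE555 API:

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical helper (my own, not LIVE555): given an encoder output buffer
// that may begin with an Annex B start code (0x000001 or 0x00000001), return
// the offset of the first byte of the NAL unit itself, so that only the NAL
// payload is handed to the H264VideoStreamDiscreteFramer.
static size_t skipStartCode(uint8_t const* data, size_t size) {
  if (size >= 4 && data[0] == 0 && data[1] == 0 && data[2] == 0 &&
      data[3] == 1) {
    return 4;  // 4-byte start code
  }
  if (size >= 3 && data[0] == 0 && data[1] == 0 && data[2] == 1) {
    return 3;  // 3-byte start code
  }
  return 0;  // no start code present; deliver the buffer as-is
}
```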
So myFramedSource will be responsible for passing individual NAL units up to
the DiscreteFramer via deliverFrame. Currently, though, a single encode call
produces quite a few NAL units that I need to send over. Either I need to
call triggerEvent() successively after loading each NAL unit into the input
memory location (which sounds like the wrong thing to do), or... I can see in
DeviceSource how deliverFrame returns without writing anything when there is
nothing to ship out, but I don't see where we get back into deliverFrame when
there is more data to write. Does my encoder thread wait until all NAL units
have been consumed and then continue? Or does it leave the NAL units
somewhere where live555 can consume them at its leisure?
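To make the question concrete, here is roughly the arrangement I have in
mind (entirely my own sketch, with made-up names, not LIVE555 code): the
encoder thread pushes every NAL unit from one encode call into a queue, and
the delivery side pops exactly one NAL unit per deliverFrame call. My
understanding is that in a DeviceSource subclass each delivery ends with
FramedSource::afterGetting(this), after which the downstream object
typically asks for the next frame, re-entering doGetNextFrame() - and that
re-entry would be what drains the queue, so only one triggerEvent() per
encode call would be needed, not one per NAL unit:

```cpp
#include <cstdint>
#include <deque>
#include <mutex>
#include <vector>

// Hypothetical NAL queue shared between the encoder thread (producer) and
// the live555 event-loop thread (consumer). Names are mine, not LIVE555's.
struct NalQueue {
  std::mutex m;
  std::deque<std::vector<uint8_t>> q;

  // Encoder thread: enqueue all NAL units from one encode call, then (in
  // the real program) signal live555 once with triggerEvent().
  void pushAll(std::vector<std::vector<uint8_t>> const& nals) {
    std::lock_guard<std::mutex> lk(m);
    for (auto const& n : nals) q.push_back(n);
  }

  // Delivery side: pop one NAL unit per deliverFrame call. Returns false
  // when the queue is empty, in which case deliverFrame would just return
  // without delivering anything.
  bool popOne(std::vector<uint8_t>& out) {
    std::lock_guard<std::mutex> lk(m);
    if (q.empty()) return false;
    out = std::move(q.front());
    q.pop_front();
    return true;
  }
};
```

Is that the intended pattern, or should the encoder thread block until the
queue is drained?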
Also, I'm not expecting the NAL units I pass into live555 to represent full
frames. Does this change the suitability of the DiscreteFramer for this task?
Again, thank you very much for your responses, and I appreciate your time.
--
Joshua Kordani
LSA Autonomy