[Live-devel] H.264 streaming -- Not receiving all packets

Patrick White patbob at imoveinc.com
Fri Mar 20 08:56:57 PDT 2009


I just fixed this issue in our code last night -- the I-frame from the H.264 
encoder was getting truncated because it was too big to fit into the stock 
OutPacketBuffer (only ~60000 bytes).

The short answer is that the entire outgoing packet must fit into a buffer -- 
there's no way for the LIVE555 code to output the correct packets if it 
doesn't.  Matt's idea of having the codec produce smaller NALs has a good 
chance of working... but I just upped the buffers for now:

On the server end, before you instantiate RTSPServer, set the global variable 
OutPacketBuffer::maxSize to your desired output buffer size.  The output 
buffer(s) will be automatically allocated.  On the client side you have to 
call getNextFrame() with a big enough buffer -- that call happens from code 
we've written, so I don't know what you might have to do to make it work, or 
even if you need to do anything.
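
To make that concrete, here's a minimal sketch of the server side (the 600000
value, the 8554 port and all variable names are just examples, not
recommendations):

    #include "liveMedia.hh"
    #include "BasicUsageEnvironment.hh"

    int main() {
      TaskScheduler* scheduler = BasicTaskScheduler::createNew();
      UsageEnvironment* env = BasicUsageEnvironment::createNew(*scheduler);

      // Must be set before the server (and its sinks) is created, so the
      // output packet buffers get allocated at the larger size:
      OutPacketBuffer::maxSize = 600000;

      RTSPServer* rtspServer = RTSPServer::createNew(*env, 8554);
      // ... add your ServerMediaSessions here, then:
      env->taskScheduler().doEventLoop();
      return 0;
    }

On the client side the analogous thing is making sure the buffer you hand to
getNextFrame() is at least as large, e.g. in your sink class (fReceiveBuffer
here is assumed to be your own 600000-byte array):

    fSource->getNextFrame(fReceiveBuffer, 600000,
                          afterGettingFrame, this,
                          onSourceClosure, this);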

I upped both buffers because I'm in a hurry -- perhaps increasing only the 
outgoing buffer would have been sufficient.

Long answer:  Over in H264FUAFragmenter::doGetNextFrame(), case 2 sends the 
initial FU-A packet after it gets a new bufferful, then case 3 sends the 
balance of that buffer via FU-B packets.  As near as I can tell, there's no 
way to jump back into case 3 with another bufferful... ergo, the entire frame 
must fit into a single bufferful to be sent out as the proper sequence of 
FU-A/FU-B packets.  At least, that's the state of the code as of a month or 
two ago when we last updated.  Ross, the code's a little convoluted down 
there -- is my interpretation correct, did I misunderstand the logic flow, or 
have you made recent changes down there to fix it?
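
(For anyone less familiar with the packetization: below is a generic sketch of
FU-A fragmentation per RFC 3984. This is not the LIVE555 code, just an
illustration of why the whole NAL unit has to be in hand before it can be
split into fragments with correct start/end bits:)

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    // Split one H.264 NAL unit into FU-A payloads no larger than maxPayload.
    std::vector<std::vector<uint8_t>> fragmentNAL(const uint8_t* nal,
                                                  size_t nalSize,
                                                  size_t maxPayload) {
      std::vector<std::vector<uint8_t>> packets;
      if (nalSize <= maxPayload) {            // fits: send as a single NAL packet
        packets.emplace_back(nal, nal + nalSize);
        return packets;
      }
      uint8_t fuIndicator = (nal[0] & 0xE0) | 28;  // keep F/NRI bits, type 28 = FU-A
      uint8_t nalType = nal[0] & 0x1F;             // original NAL unit type
      size_t pos = 1;                              // original NAL header is consumed
      while (pos < nalSize) {
        size_t chunk = std::min(maxPayload - 2, nalSize - pos);
        uint8_t fuHeader = nalType;
        if (pos == 1)               fuHeader |= 0x80;  // S (start) bit
        if (pos + chunk == nalSize) fuHeader |= 0x40;  // E (end) bit
        std::vector<uint8_t> pkt;
        pkt.push_back(fuIndicator);
        pkt.push_back(fuHeader);
        pkt.insert(pkt.end(), nal + pos, nal + pos + chunk);
        packets.push_back(std::move(pkt));
        pos += chunk;
      }
      return packets;
    }

The point being: you can't set the E (end) bit on the last fragment until you
know where the NAL unit ends, so a frame that doesn't fit in the buffer can't
be fragmented into a correct FU sequence.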

Regardless, upping the buffer sizes fixed the issue for us.

Hope that helps.
patbob

On Thu Mar 19 12:56:08 2009, Georges Côté wrote:
>..
>I based my code on the H.264 tutorial.
>
>I get corruption once in a while. The H/W encoder is configured to 
>generate one IDR and 14 forward frames, no backward frames (I, P and B 
>in MPEG-2 terminology). I'm not sure of the H.264 terminology.
>What I see is that the reference frames are quite large (> 150 KB) while 
>the other frames are around 15 KB.
>
>Most of the time, the client is called with the right size. Once in a 
>while, I will be missing part of an IDR or even the whole reference 
>frame. If I use Wireshark on the client side, I see that I'm receiving 
>the "missing" packets. I haven't dug into the code to investigate yet!
>
>On the server side, when the frame is larger than the destination 
>buffer, I copy as much as I can. The remaining data will be copied when 
>doGetNextFrame is called again.
>
>Incomplete parts have the right presentation time but I set the duration 
>to 0.
>The last part has the same presentation time but I set the duration 
>according to the right frame rate.
>..
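
(As an aside, the partial-delivery pattern described above -- copy what fits,
deliver the rest on the next doGetNextFrame() call -- looks roughly like the
sketch below in a FramedSource subclass. The class name and the
fPendingData/fPendingSize/fBytesDelivered/fFramePTS/FRAME_RATE names are
hypothetical; fTo, fMaxSize, fFrameSize, fNumTruncatedBytes, fPresentationTime
and fDurationInMicroseconds are the standard FramedSource members:)

    #include <cstring>
    #include "FramedSource.hh"

    void MyH264Source::doGetNextFrame() {
      // Hand out as much of the current encoded frame as fits in the sink's buffer.
      unsigned remaining = fPendingSize - fBytesDelivered;
      unsigned toCopy = remaining > fMaxSize ? fMaxSize : remaining;
      memcpy(fTo, fPendingData + fBytesDelivered, toCopy);
      fFrameSize = toCopy;
      fNumTruncatedBytes = 0;
      fBytesDelivered += toCopy;

      fPresentationTime = fFramePTS;          // every part keeps the frame's PTS
      if (fBytesDelivered < fPendingSize) {
        fDurationInMicroseconds = 0;          // more of this frame still to come
      } else {
        fDurationInMicroseconds = 1000000 / FRAME_RATE;  // last part: real duration
        // ...fetch the next encoded frame into fPendingData, reset fBytesDelivered...
      }
      FramedSource::afterGetting(this);       // deliver this chunk to the sink
    }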

On Friday 20 March 2009 6:48 am, Georges Côté wrote:
>  Thank you Matt and Ross.
>
>  My code is already calling increaseReceiveBufferTo(2000000) for the video
> and 100000 for the audio. I added a call to increaseSendBufferTo(20000000)
> on the server side but it didn't make a difference.
>
>  My preliminary investigation tells me that I'm receiving all the packets
> (I added TRACEs in MultiFramedRTPSource::networkReadHandler). I will look
> into MultiFramedRTPSource::doGetNextFrame1.
>
>  See below.
>
>  Matt Schuckmann wrote:
> Chances are your socket receive buffers in the OS are too small.
>  Try increasing them with calls to setReceiveBufferTo() or
> increaseReceiveBufferTo(); I think you can find examples of this in the
> openRTSP example, and I think there are some references to this in the FAQ.
> Check out the FAQ because there may be some registry/config settings (in the
> case of Windows) you need to change to allow bigger buffers.
>
>  There are companion methods for the send buffers on the server side, but I
> don't think you're having a problem with that if Wireshark shows all the data
> is making it to the client.
>
>
>  On the server side, when the frame is larger than the destination buffer,
> I copy as much as I can. The remaining data will be copied when
> doGetNextFrame is called again.
>
> I'd be interested to know if this works because I got the impression that
> it doesn't. It seems to be working. I modified unsigned
> OutPacketBuffer::maxSize = 600000 instead of 60000. But it works with the
> smaller value.
>
>
>
>  There is a similar buffer on the client side that is used to pass data to
> the afterGettingFrame() method of your videoSink; if the data is too big
> for that buffer then the numTruncatedBytes parameter is set to the number of
> bytes that are lost, and as far as I can tell that data is gone. I don't
> think I've come across a case where this has occurred, but in theory it
> could happen; I'm still not sure how you'd increase this buffer size.
>
>  Another thing you might try is to have your H.264 encoder slice up the
> frames into multiple slices; I think this will push your NAL packet sizes
> down, which should reduce the buffer size requirements. I haven't tried this
> either, but I've been meaning to, just to see what happens.
>
> I currently can't configure it to encode multiple slices which is a pain
> since my S/W decoder is multi-threaded.
>
>  Regards,
>
>  Georges
>
>
>  Matt S.
>  iMove Inc.
>  mschuck at imoveinc.com
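
For reference, the socket-buffer calls discussed in the quoted thread look
roughly like this (openRTSP does the equivalent; env, subsession and
rtpGroupsock here are placeholders for your own objects):

    #include "GroupsockHelper.hh"   // declares increaseReceiveBufferTo() et al.

    // Client side: grow the OS-level UDP receive buffer on the RTP socket so
    // bursts of large I-frame packets aren't dropped between reads:
    increaseReceiveBufferTo(*env,
                            subsession->rtpSource()->RTPgs()->socketNum(),
                            2000000);

    // Server-side counterpart for the send buffer:
    increaseSendBufferTo(*env, rtpGroupsock->socketNum(), 2000000);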

