[Live-devel] RTSP Server Packet Loss
Jeff Shanab
jshanab at jfs-tech.com
Mon Nov 30 17:46:06 PST 2015
I concur.
Here are some data points.
I have been streaming 4K cameras recently; one of them is actually
4000x3000 resolution.
A Sony is the usual 3840x2160, streaming at 30 fps. It puts out 5 slices
per frame, for each frame type (I, P, and B). Slices also let us
parallelize the decoding, so the stream can be decoded reasonably. Its
keyframes are indeed 250K+.
A 12 MP (4000x3000) Samsung also streams at 30 fps, but with no slices.
Its keyframes are large; I am not near my measurements so I cannot say
exactly, but I would not be surprised if they hit 160K.
A Panasonic does not have slices either, but it has a powerful processor
on board.
The Sony has larger frames than the Samsung, yet it is by far smoother and
easier to stream: it uses fewer resources and is better at maintaining the
framerate. Its timestamps are rock solid, not varying by more than a
millisecond, and it does not drop frames either.
All three of these, as well as some panoramic cameras that use slices and
stitch the images into a ridiculously wide view, have been successfully
streamed at full framerate on a single thread in debug mode or on low-end
dedicated PCs.
TCP has overhead, and combined with any network latency I think it taxes
the camera's internal buffer and the camera drops framerate. I was
surprised by this; it is really sensitive to latency. I am guessing that
the camera must hold onto packets until they are acked, and if there is
any delay the queue is forced to hold more.
Measure your network latency, and try UDP transport if you think you are
hitting a wall.
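If you are using the live555 client library directly, the UDP-vs-TCP choice
is the streamUsingTCP argument to RTSPClient::sendSetupCommand() (with
openRTSP, the -t option requests the same RTP-over-TCP interleaving; UDP is
the default). Below is a minimal sketch of where that choice lives, loosely
following the testRTSPClient pattern; the URL is a placeholder and sinks and
error handling are omitted, so treat it as an illustration rather than a
complete client.

// Minimal sketch (my illustration, not a complete client): where the
// UDP-vs-TCP transport choice lives in a live555-based RTSP client.
// "rtsp://camera.example.com/stream" is a placeholder URL.
#include "liveMedia.hh"
#include "BasicUsageEnvironment.hh"

#define STREAM_OVER_TCP False  // change to True to interleave RTP over the RTSP/TCP connection

static MediaSession* session = NULL;
static MediaSubsessionIterator* iter = NULL;

static void continueAfterPLAY(RTSPClient*, int, char* resultString) {
  delete[] resultString;
  // A real client would now create sinks for each subsession and consume data;
  // that part is omitted here to keep the sketch short.
}

static void setupNextSubsession(RTSPClient* rtspClient);

static void continueAfterSETUP(RTSPClient* rtspClient, int /*resultCode*/, char* resultString) {
  delete[] resultString;
  setupNextSubsession(rtspClient);  // SETUP the next subsession, or PLAY when done
}

static void setupNextSubsession(RTSPClient* rtspClient) {
  MediaSubsession* subsession = iter->next();
  if (subsession != NULL) {
    if (!subsession->initiate()) { setupNextSubsession(rtspClient); return; }
    // The 4th parameter ("streamUsingTCP") selects the transport:
    //   False -> RTP over UDP (the default, and usually gentler on the camera)
    //   True  -> RTP interleaved over the RTSP TCP connection
    rtspClient->sendSetupCommand(*subsession, continueAfterSETUP,
                                 False /*streamOutgoing*/, STREAM_OVER_TCP);
    return;
  }
  rtspClient->sendPlayCommand(*session, continueAfterPLAY);
}

static void continueAfterDESCRIBE(RTSPClient* rtspClient, int resultCode, char* resultString) {
  if (resultCode != 0) { delete[] resultString; return; }
  session = MediaSession::createNew(rtspClient->envir(), resultString);  // resultString is the SDP
  delete[] resultString;
  if (session == NULL) return;
  iter = new MediaSubsessionIterator(*session);
  setupNextSubsession(rtspClient);
}

int main() {
  TaskScheduler* scheduler = BasicTaskScheduler::createNew();
  UsageEnvironment* env = BasicUsageEnvironment::createNew(*scheduler);
  RTSPClient* client = RTSPClient::createNew(*env, "rtsp://camera.example.com/stream",
                                             1 /*verbosity*/, "transport-sketch");
  client->sendDescribeCommand(continueAfterDESCRIBE);
  env->taskScheduler().doEventLoop();  // runs forever in this sketch
  return 0;
}
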
On Mon, Nov 30, 2015 at 7:06 PM, Ross Finlayson <finlayson at live555.com>
wrote:
>
> > I have a 40 Mb/s 1920x1080 H264 source video file running at 30 frames
> per second. So each frame is roughly 160 kB being bursted out every 33ms.
>
> I doubt that *each* frame is 160 kBytes - presumably only the ‘key
> frames’. But in any case, with key frames this large, I recommend that you
> reconfigure your encoder so that each key frame is encoded as a sequence of
> ’slice’ NAL units, rather than as a single NAL unit.
>
> People often have trouble streaming H.264 video with extremely large
> I-frames, if each I-frame gets encoded as a single NAL-unit. The problem
> with this is that these NAL units get sent as a (very long) sequence of RTP
> packets - and if even one of these RTP packets gets lost, then the whole
> NAL (I-frame in this case) will get discarded by the receiver; see
>
> http://lists.live555.com/pipermail/live-devel/2011-December/014190.html
> http://lists.live555.com/pipermail/live-devel/2012-August/015615.html
> http://lists.live555.com/pipermail/live-devel/2013-May/016994.html
> http://lists.live555.com/pipermail/live-devel/2014-June/018426.html
> http://lists.live555.com/pipermail/live-devel/2014-June/018432.html
> http://lists.live555.com/pipermail/live-devel/2014-June/018433.html
> http://lists.live555.com/pipermail/live-devel/2014-June/018434.html
> http://lists.live555.com/pipermail/live-devel/2015-March/019135.html
> http://lists.live555.com/pipermail/live-devel/2015-April/019228.html
>
> For streaming, it’s better to encode large I-frames as a sequence of
> ‘slice’ NAL units.
>
> Ross Finlayson
> Live Networks, Inc.
> http://www.live555.com/
>
>
> _______________________________________________
> live-devel mailing list
> live-devel at lists.live555.com
> http://lists.live555.com/mailman/listinfo/live-devel
>
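To put rough numbers on the point Ross makes above about one lost RTP packet
discarding a whole I-frame: the little back-of-the-envelope program below
(my own illustration, assuming ~1400-byte RTP payloads and a 250 kB keyframe
like the ones mentioned earlier) compares how much of the frame a receiver
can expect to keep when it is one big NAL unit versus five slices.

// Back-of-the-envelope calculation, not live555 code.  Assumptions: ~1400-byte
// RTP payloads, a 250 kB I-frame, and that losing any packet of a NAL unit
// discards that whole NAL unit at the receiver.
#include <cmath>
#include <cstdio>
#include <initializer_list>

int main() {
  const double frameBytes  = 250000.0;  // "250K+" keyframes from the discussion
  const double payloadSize = 1400.0;    // assumed RTP payload bytes per packet
  const int    sliceCount  = 5;         // e.g. the Sony camera's 5 slices per frame

  const double pktsPerFrame = std::ceil(frameBytes / payloadSize);
  const double pktsPerSlice = std::ceil(frameBytes / sliceCount / payloadSize);

  std::printf("%-10s  %-22s  %s\n", "pkt loss", "whole frame survives",
              "expected fraction kept (5 slices)");
  for (double loss : {0.0001, 0.001, 0.01}) {
    // One big NAL unit: the frame is usable only if *every* packet arrives.
    const double wholeFrame = std::pow(1.0 - loss, pktsPerFrame);
    // Sliced: a lost packet costs only its slice, so the expected decodable
    // fraction of the frame is the per-slice survival probability.
    const double perSlice = std::pow(1.0 - loss, pktsPerSlice);
    std::printf("%-10g  %-22.3f  %.3f\n", loss, wholeFrame, perSlice);
  }
  return 0;
}

At 1% packet loss, for example, the single-NAL frame arrives intact only
about one time in six, while with five slices the receiver can still expect
to decode roughly 70% of each frame. If you control the encoder, slicing is
usually just a parameter (x264's i_slice_count, or ffmpeg's -slices option,
for instance), though the exact knob depends on your encoder.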