[Live-devel] Urgent help needed!

Morgan Tørvolt morgan.torvolt at gmail.com
Mon Feb 5 07:18:47 PST 2007


> Perhaps, although this would add a little latency, and require
> buffering (and add a lot of complexity) at the server.
> Architecturally, with memory being so cheap these days, it seems much
> better to require that *clients* have sufficient buffer memory to
> handle not just variable-bitrate streams, but also network jitter
> (which buffering at the server could never eliminate).

That would also make trick-play functionality laggy, which would annoy
the hell out of me, for one. "No, NO! Stop rewinding!"

> He seemed to be complaining, however, about the *average* output
> bitrate from "MPEG2TransportStreamFramer" being too low.  If that's
> the case, then I don't understand how that could be occurring.  If
> the PCR values in the Transport Stream are correct, then the average
> output bitrate from "MPEG2TransportStreamFramer" should also be
> correct.

Yes, you are now using the server clock to adjust the timing, so all in
all the data rate should be dead stable over time. But not always.
Take this scenario: a GOP size of 12, a static picture source,
variable-bitrate encoding, and the PCR sent 4 times per second (not
totally uncommon, I believe). Usually PCRs come at a regular time
interval, not a regular packet interval. The still frame is very well
compressed in the B and P frames, but the I frame is still the full
frame, and contains a lot of data. Let's say you get this packet count
between PCRs during a GOP:

1000, 10, 10, 10 ....... , 10

Let's disregard the startup phase to make this simpler, and assume
that we are in the middle of a stream.

Just before the start of a new GOP, durationInMicroseconds will be
around 250ms/10 = 25ms. The TIME_ADJUSTMENT_FACTOR will pull that down
somewhat, but since NEW_DURATION_WEIGHT is 0.5 it will not be much
less. When you get to the GOP start, that one PCR interval will take
something like 1000 * 20ms = 20 seconds to send. The following
interval gives a much shorter per-packet duration, around 0.5ms, but
that only lasts for 10 packets. After the first 5ms the estimate
climbs to around 12ms, and then closes in on 25ms again, being pulled
down all the while by the 0.8 factor, but it never catches up. For
every second of the film you get a 20-second delay.
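
To make the feedback loop easier to follow, here is a toy C++
simulation of it as I understand it (not the actual
MPEG2TransportStreamFramer code; the update rule and starting values
are my own assumptions, so the exact numbers will not match my rough
figures above, but the shape is the same):

#include <cstdio>

int main() {
  const double NEW_DURATION_WEIGHT    = 0.5;
  const double TIME_ADJUSTMENT_FACTOR = 0.8;
  const double pcrIntervalMs = 250.0;  // PCR sent 4 times per second

  // Packets between successive PCRs: one I-frame burst, then the
  // well-compressed B/P frames, as in the scenario above.
  const int counts[] = {1000, 10, 10, 10, 10, 10, 10, 10};

  double estimateMs = 20.0;  // ~25ms, already pulled down by the 0.8 factor
  double outputMs = 0.0, pcrMs = 0.0;

  for (int c : counts) {
    outputMs += c * estimateMs;  // time we actually spend sending
    pcrMs    += pcrIntervalMs;   // time the PCRs say it should take

    // Weighted average of the old estimate and the measured duration:
    double measuredMs = pcrIntervalMs / c;
    estimateMs = NEW_DURATION_WEIGHT * measuredMs
               + (1.0 - NEW_DURATION_WEIGHT) * estimateMs;
    if (outputMs > pcrMs) estimateMs *= TIME_ADJUSTMENT_FACTOR;  // catch up

    printf("PCR clock %6.0fms | output clock %8.0fms | estimate %6.2fms\n",
           pcrMs, outputMs, estimateMs);
  }
  return 0;
}

The output clock jumps ~20 seconds ahead of the PCR clock on the
burst, and since a new I-frame burst arrives with every GOP, the
deficit just keeps piling up.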

Yes, this was extreme, but I assure you that some encoders do this,
and with a still frame like the one from a surveillance camera or
webcam, it will happen. Even some TV stations (still-frame soft-porn
commercials, telephone-number thingies and such, as well as quite a
few religious channels) have this distribution of bandwidth. There is
nothing like testing your system on these services, because everything
that can be configured is wrong and every standard is disregarded. Of
course no one believes me when I say it is for testing =)

In most cases, though, your implementation will work nicely, catching
up after some time. I have seen buffer underruns in movies with
variable bitrate. I especially remember one movie that had problems
every time a silent night scene cut to a big explosion. We got more
than a second of delay, which is much more than the STB can handle
(IIRC even mplayer complained at this stage), and then the catching up
caused a buffer overrun later. On fixed-bitrate streams it works
flawlessly.

I guess you see the problem. Decreasing NEW_DURATION_WEIGHT to
something like 0.1 would give a smoother estimate, allowing the
catching-up part to overpower the PCR calculations, but then you would
never really converge on the correct bitrate. Making the
TIME_ADJUSTMENT_FACTOR dynamic instead of fixed could help a lot.
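
Something along these lines, perhaps (just a sketch; lagMs and the
constants are numbers I pulled out of thin air):

// Replace the fixed 0.8 with a factor that depends on how far the
// output clock has fallen behind the PCR clock.  Small hiccups are
// barely touched; a multi-second deficit gets worked off aggressively.
double adjustmentFactor(double lagMs) {
  const double maxLagMs  = 2000.0;  // lag at which we hit the floor
  const double floorFact = 0.5;     // never send faster than 2x nominal
  if (lagMs <= 0.0) return 1.0;     // on time or ahead: leave it alone
  double f = 1.0 - (lagMs / maxLagMs) * (1.0 - floorFact);
  return f < floorFact ? floorFact : f;
}

That way a stream that is on time keeps its stable bitrate, and the
factor only bites when there is actually something to catch up on.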

PCR is also important for another thing, namely the internal clock in
the STB. Satellite STBs do not handle PCR jitter very well, since the
internal clock of the STB is adjusted according to when each PCR
arrives, and this is very strict; there are not many milliseconds to
spare. This makes the encoder clock the reference clock, just as it
should be. On an IP network one cannot be that dependent on PCR, but
even there the internal clock is adjusted according to when the PCRs
arrive. If one solves it like TransportStreamFramer does, some
standards-compliant STBs will eventually have a problem with the very
large PCR jitter it produces. That is why you should abandon
TransportStreamFramer for live sources: the receive->transmit process
will introduce some jitter of its own, but not nearly as much as
TransportStreamFramer does.

A quick fix for some of the worst buffer-underrun problems could be to
keep an average time between PCRs, and if the time between two of them
exceeds avgTime*2, just flush out data as fast as possible until the
next PCR arrives. This could go seriously wrong on borked TS streams,
though, maybe even worse than the current framer, but many times it
could save the day too. The best way is of course to buffer data up to
the next PCR. That would also mean less random access on disk and
maybe better throughput, since one would read linearly more often, but
with many clients the memory demand would escalate, and it could cause
packet delays while the buffer is being filled. One could make this
more intelligent of course, and thread the file-reading part, but then
it gets complicated.
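
As a sketch of the avgTime*2 idea (the names are made up; this is not
anything that exists in liveMedia):

struct PcrGapDetector {
  double avgIntervalMs = 250.0;  // seed guess; the average converges fast
  bool   flushing      = false;

  // Call with the time we actually took to send the packets between
  // the previous PCR and this one.
  void onPcr(double intervalMs) {
    flushing = intervalMs > 2.0 * avgIntervalMs;  // burst detected?
    // Only let normal intervals update the baseline, so the burst
    // itself cannot skew it:
    if (!flushing)
      avgIntervalMs = 0.9 * avgIntervalMs + 0.1 * intervalMs;
  }

  // While this is true, send queued packets back-to-back (duration 0)
  // until the next PCR shows up and onPcr() is called again.
  bool shouldFlush() const { return flushing; }
};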

> "MAX_PLAYOUT_BUFFER_DURATION".  (If you do this, though, you're
> definitely on your own...)

But it is a lot of fun, running around unplugging network cables as a
quick and dirty disaster solution =)

-Morgan-

