[Live-devel] Urgent help needed!

Ross Finlayson finlayson at live555.com
Wed Feb 7 00:32:04 PST 2007


>> I don't buy this, because the streaming itself causes the data to be
>> read linearly.
>
>Yes, if you have only one client, but having 30 clients, none of them
>at the same point in the stream, or maybe even on different streams,
>causes random access on the disk.

OK, I guess I was assuming that the OS's file system implementation
is smart enough to detect sequential reads within a file (even if
they're interleaved with sequential reads within other files), and
to optimize this by doing read-ahead - for each file - into its file
system cache.  I'm not sure how smart OSs are about this in
practice, but if the OS kernel *could* do this automatically, that
would certainly be better than having the application implement its
own read-ahead in user space.  In any case, file systems typically
read from the disk in large block sizes (certainly much larger than
188 bytes), so you get some 'read-ahead' for free there.
(OTOH, we had a major VOD customer that had no choice but to use
Windows, and they ended up doing their own read-ahead/buffering of
Transport Stream file data in user space, replacing
"ByteStreamFileSource" and "MPEG2TransportStreamFramer".  But
perhaps other OSs aren't as brain-damaged as Windows about this...)
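(For illustration only - the class below is made up, not that
customer's actual code - the basic idea of user-space read-ahead is
just to refill one large in-memory block with a single big
sequential read, and then hand out 188-byte Transport Stream packets
from it:

#include <cstdio>
#include <cstring>
#include <vector>

class ReadAheadTSFile {
public:
  ReadAheadTSFile(char const* fileName, size_t readAheadBytes)
    : fFile(std::fopen(fileName, "rb")), fBuffer(readAheadBytes),
      fPos(0), fFilled(0) {}
  ~ReadAheadTSFile() { if (fFile != NULL) std::fclose(fFile); }

  // Copy up to "maxBytes" bytes into "to"; returns the number copied.
  size_t readPackets(unsigned char* to, size_t maxBytes) {
    if (fFile == NULL) return 0;
    size_t copied = 0;
    while (copied < maxBytes) {
      if (fPos == fFilled) { // buffer empty: one large sequential read
        fFilled = std::fread(&fBuffer[0], 1, fBuffer.size(), fFile);
        fPos = 0;
        if (fFilled == 0) break; // EOF (or read error)
      }
      size_t chunk = fFilled - fPos;
      if (chunk > maxBytes - copied) chunk = maxBytes - copied;
      std::memcpy(to + copied, &fBuffer[fPos], chunk);
      fPos += chunk;
      copied += chunk;
    }
    return copied;
  }

private:
  std::FILE* fFile;
  std::vector<unsigned char> fBuffer; // e.g., 512*1024 bytes of read-ahead
  size_t fPos, fFilled;
};

A real replacement for "ByteStreamFileSource" would of course also
have to deliver this data via the usual asynchronous "FramedSource"
interface; the sketch above shows only the buffering.)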

>> I think that a better (but related) solution would be to use the
>> index file.
[...]
>This is what we are doing.

Are you using our index file format, or a different one that you created?

>You will need to index _every_ PCR
>though. An average between I frames is not sufficient.

Yes, our index file format records every PCR - not just those that 
belong to I-frames.
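
(For illustration only - this is not the code behind our index-file
generator - the PCR itself lives in a Transport Stream packet's
adaptation field, and pulling it out looks roughly like this:

#include <stdint.h>

// Extract the PCR - if present - from one 188-byte Transport Stream
// packet.  Returns true and sets "pcr" (in 27 MHz units) iff the
// packet's adaptation field carries a PCR.
bool extractPCR(unsigned char const packet[188], uint64_t& pcr) {
  if (packet[0] != 0x47) return false;            // bad sync byte
  unsigned adaptationFieldControl = (packet[3] & 0x30) >> 4;
  if ((adaptationFieldControl & 0x2) == 0) return false; // no AF
  unsigned adaptationFieldLength = packet[4];
  if (adaptationFieldLength < 7) return false;    // too short for a PCR
  if ((packet[5] & 0x10) == 0) return false;      // PCR_flag not set

  uint64_t pcrBase = ((uint64_t)packet[6] << 25)
                   | ((uint64_t)packet[7] << 17)
                   | ((uint64_t)packet[8] << 9)
                   | ((uint64_t)packet[9] << 1)
                   | ((uint64_t)packet[10] >> 7); // 33 bits, 90 kHz
  unsigned pcrExt = ((packet[10] & 0x01) << 8) | packet[11]; // 9 bits
  pcr = pcrBase * 300 + pcrExt;                   // full 27 MHz PCR
  return true;
}

Each 27 MHz PCR value, paired with the byte offset of the packet
that carried it, is the raw material that any such index needs.)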

>By the way, the PCR is not supposed to be a guideline. The PCR
>timestamp is the exact time the packet should reach the client.

More precisely, the client's *decoder*.  (See below.)

>  You really have no slack there.

That's true, but in an Internet environment (as opposed to, say, a 
satellite network, where jitter can presumably be tightly controlled) 
you can't avoid nontrivial network jitter.  The *only* way to 
overcome this is to add buffering at the client, between the network 
interface and the decoder.  If a client is too 'cheap' to have enough 
buffering to overcome typical Internet network jitter, then it's not 
going to work (well) on the Internet, no matter how precisely the 
Transport Packets are transmitted from the source.
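
(Back-of-the-envelope only, with made-up numbers: the amount of
client-side buffering needed is at least the stream's bitrate times
the worst-case network jitter.

#include <cstdio>

int main() {
  // Made-up example numbers, not measurements:
  double const bitrateBitsPerSec = 8.0e6; // an ~8 Mb/s Transport Stream
  double const worstCaseJitterSec = 0.2;  // 200 ms of Internet jitter
  double const bufferBytes = bitrateBitsPerSec * worstCaseJitterSec / 8.0;
  std::printf("Client needs >= %.0f bytes (~%.0f kB) of buffering\n",
              bufferBytes, bufferBytes / 1000.0);
  return 0; // prints 200000 bytes (~200 kB) for these numbers
}

For those example numbers, that's about 200 kBytes - hardly an
extravagant amount of memory for a client.)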

>As for the problem with live sources, I already said how you could
>fix that.  Live sources do of course not give data too fast or too
>slow; as it is a live process being done in real time, it sends the
>data at exactly the correct time (sometimes in big chunks that you
>have to even out over time, but still, the timing is right).  There
>is no reason to re-time the packets in a framer; they are fine just
>the way they are.  Just drop the framer, and add support for setting
>the RTP timestamp to server time, and you're done.  Whatever you
>have in store, it will never beat the PCR accuracy from DVB-S, DVB-T
>and DVB-C sources anyway.  Just pass it on as soon as you receive
>it, introducing as little jitter as possible.  I have provided you
>with a patch before to make this possible through using timestamps
>instead of durationInMicroseconds if one wished, but it was never
>added to live555.  As before, I suggest that it gets added.  If you
>do not have the patch anymore, I think I could dig it up again.  It
>has been some time since I last had a look at it.

Yes, that would be useful.  I can imagine providing a 
"MPEG2TransportStreamDiscreteFramer" (similar to the existing 
"MPEG1or2VideoStreamDiscreteFramer" and 
"MPEG4VideoStreamDiscreteFramer" classes) that could be used when the 
input source is discrete Transport Stream packets, arriving at 
precise times.  (It would set "fPresentationTime", but not 
"fDurationInMicroseconds", and thus be very simple.)

>Don't get me wrong here.  Live555 is an excellent product, the best
>there is imo, but it does have its flaws.  The
>"MPEG2TransportStreamFramer" works; it is just not very good at what
>it does.

Ultimately, I can envision there being three Transport Stream 'framer' classes:
1/ "MPEG2TransportStreamDiscreteFramer", as described above, for live 
streams that deliver discrete Transport Stream packets at precise 
times.
2/ "MPEG2TransportStreamFramerFromIndexFile", for use with 
prerecorded Transport Stream files that have a corresponding index 
file.
3/ The current "MPEG2TransportStreamFramer", as a fallback for everything else.
-- 

Ross Finlayson
Live Networks, Inc.
http://www.live555.com/

