[Live-devel] Two questions

Ross Finlayson finlayson at live.com
Tue Nov 25 09:26:35 PST 2003


>I am trying to use live to send JPEG video. I have derived a class from 
>JPEGVideoSource but don't understand why the type(), qFactor(), etc. are 
>needed.

Those values are needed because they go in the special header that's used 
for JPEG/RTP packets (as defined in RFC 2435, and implemented in 
"JPEGVideoRTPSink").  The idea is that, rather than sending an entire JPEG 
header along with each JPEG frame, this special header is sent (thereby 
saving a lot of space).  The special header is, in effect, a 'summary' of 
the original JPEG header, and contains sufficient information for a 
JPEG/RTP receiver to reconstruct the original JPEG header.
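
For reference, the fixed part of that special header is only eight bytes.  
Here is a rough C++ sketch of its layout (illustration only; the struct name 
is made up - "JPEGVideoRTPSink" builds this header for you, from the values 
that your source class returns):

#include <cstdint>

// Sketch of the 8-byte main JPEG header defined in RFC 2435:
struct RFC2435MainJPEGHeader {
  uint8_t typeSpecific;      // usually 0
  uint8_t fragmentOffset[3]; // byte offset of this fragment within the frame
  uint8_t type;              // e.g. 0 => 4:2:2, 1 => 4:2:0 sampling
  uint8_t q;                 // quantization factor; values >= 128 mean that
                             //   quantization tables are also sent in-band
  uint8_t width;             // frame width, in units of 8 pixels
  uint8_t height;            // frame height, in units of 8 pixels
};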

>Isn't this information in the JPEG header?

Yes, all of these values - except "qFactor" - can be extracted from the 
JPEG header.  To see how, look at the implementation of 
"JPEGVideoRTPSource" (not "Sink").  This tells you how a JPEG/RTP receiver 
reconstructs a JPEG header from the information in the special RTP header - 
and thus, working in reverse, how that same information can be extracted 
from a JPEG header in the first place.
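
For example, the frame width and height are carried in the JPEG "SOF0" 
marker segment.  A rough sketch of pulling them out of a frame - 
illustration only; the function name is made up, and real code should of 
course validate its input more carefully:

// Hypothetical helper: scan a baseline JPEG frame for the SOF0 marker
// (0xFF 0xC0) and read the image height and width from it.
bool getJPEGDimensions(unsigned char const* data, unsigned size,
                       unsigned& width, unsigned& height) {
  unsigned i = 2; // skip the initial SOI marker (0xFF 0xD8)
  while (i + 8 < size) {
    if (data[i] != 0xFF) return false;     // expected a marker here
    unsigned char marker = data[i+1];
    if (marker == 0xFF) { ++i; continue; } // fill byte; keep scanning
    if (marker == 0xC0) {                  // SOF0: baseline frame header
      // Segment layout: length(2), precision(1), height(2), width(2), ...
      height = (data[i+5] << 8) | data[i+6];
      width  = (data[i+7] << 8) | data[i+8];
      return true;
    }
    if (marker == 0xD9 || marker == 0xDA) return false; // EOI or SOS: give up
    // Other marker segments carry a 2-byte length (which includes itself)
    unsigned segLen = (data[i+2] << 8) | data[i+3];
    i += 2 + segLen;
  }
  return false;
}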

>  For example, I know what the qFactor is, but it's user-adjustable from the 
> encoder and may vary.

Your derived class will therefore need to get this value from the encoder.

>I also know the width and height, but those could be variable (they currently 
>aren't, though). I don't know the type, however.

As noted above, you can get these values from the JPEG header.
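
Putting this together, a derived class might look something like the 
following sketch.  (The class name and member variables here are made up, 
and you should check "JPEGVideoSource.hh" in your copy of the library for 
the exact virtual-function signatures; treat this as a starting point rather 
than a finished implementation.)

#include "JPEGVideoSource.hh"

class MyJPEGCameraSource: public JPEGVideoSource {
public:
  static MyJPEGCameraSource* createNew(UsageEnvironment& env) {
    return new MyJPEGCameraSource(env);
  }

  // Values that "JPEGVideoRTPSink" uses to build the special RTP header:
  virtual u_int8_t type()    { return 1; }               // e.g. 1 => 4:2:0
  virtual u_int8_t qFactor() { return fCurrentQFactor; } // ask the encoder
  virtual u_int8_t width()   { return fWidth / 8; }      // in 8-pixel units
  virtual u_int8_t height()  { return fHeight / 8; }     // in 8-pixel units

protected:
  MyJPEGCameraSource(UsageEnvironment& env)
    : JPEGVideoSource(env), fCurrentQFactor(75), fWidth(640), fHeight(480) {}

  // Inherited from "FramedSource": deliver the next JPEG frame here.
  virtual void doGetNextFrame() {
    // ... copy at most fMaxSize bytes of the next frame into fTo, set
    //     fFrameSize and fPresentationTime, then call
    //     FramedSource::afterGetting(this) ...
  }

private:
  u_int8_t fCurrentQFactor; // updated whenever the encoder's setting changes
  unsigned fWidth, fHeight;
};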

>Also, I am running live on an embedded system that has an RTOS with only 
>1msec timer resolution. Is having microsecond resolution very important?

No, it's not.  As long as your application doesn't need <1ms granularity, 
you should be OK.


	Ross Finlayson
	LIVE.COM
	<http://www.live.com/>


