[Live-devel] FramedFilter Performance & Questions

Ross Finlayson finlayson at live555.com
Sat Jun 23 20:28:12 PDT 2007


>I've implemented a FramedFilter subclass that transcodes video from 
>MPEG-2 to MPEG-4 using the libavcodec library.
>
>My live555 "chain" therefore looks like:
>
>MPEG1or2VideoRTPSource -> Transcoder (my class) -> 
>MPEG4VideoStreamDiscreteFramer -> MPEG4ESVideoRTPSink

If your "Transcoder" filter delivers discrete MPEG-4 frames, and sets 
"fPresentationTime" and "fDurationInMicroseconds" properly, then you 
don't need to insert a "MPEG4VideoStreamDiscreteFramer".
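
For illustration only - an untested sketch, with hypothetical class and 
member names, and with the actual libavcodec transcoding step elided - 
the skeleton of such a filter (one that delivers a complete MPEG-4 
frame per request, and sets both timing fields) could look something 
like this:

#include "FramedFilter.hh"

class TranscoderFilter: public FramedFilter {
public:
  TranscoderFilter(UsageEnvironment& env, FramedSource* inputSource)
    : FramedFilter(env, inputSource) {}

private:
  virtual void doGetNextFrame() {
    // Ask the upstream source for the next chunk of MPEG-2 input,
    // reading it into our own buffer rather than directly into "fTo":
    fInputSource->getNextFrame(fInputBuffer, sizeof fInputBuffer,
                               afterGettingFrame, this,
                               FramedSource::handleClosure, this);
  }

  static void afterGettingFrame(void* clientData, unsigned frameSize,
                                unsigned /*numTruncatedBytes*/,
                                struct timeval presentationTime,
                                unsigned durationInMicroseconds) {
    ((TranscoderFilter*)clientData)->afterGettingFrame1(
        frameSize, presentationTime, durationInMicroseconds);
  }

  void afterGettingFrame1(unsigned frameSize,
                          struct timeval presentationTime,
                          unsigned durationInMicroseconds) {
    // ... decode the "frameSize" bytes now in "fInputBuffer", re-encode
    //     them as MPEG-4, write the result into "fTo" (at most "fMaxSize"
    //     bytes), and set "fFrameSize" accordingly ... (elided)
    fNumTruncatedBytes = 0; // assuming the encoded frame fits in "fMaxSize"

    // Set these two fields correctly, so that the downstream sink knows
    // the frame's timestamp, and how long it may wait before asking for
    // more data:
    fPresentationTime = presentationTime;
    fDurationInMicroseconds = durationInMicroseconds;

    // Tell the downstream object that a new frame is available:
    FramedSource::afterGetting(this);
  }

  unsigned char fInputBuffer[100000];
};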

>From the timevals that I insert, I calculate the decoding, encoding, 
>and total frame-processing time.
>While decoding takes between 3ms and 10ms per frame and encoding 
>takes between 8ms and 15ms per frame, the total processing time is 
>sometimes greater than 50ms, exceeding the sum of the decoding and 
>encoding times by 15ms or more.
>
>This seems like a lot just for fetching new data from the 
>source... where could this come from?

I don't know; you'll need to instrument this for yourself to figure 
it out.  Make sure, though, that you are setting 
"fDurationInMicroseconds" correctly at each point in the chain, 
because that field determines how long "MPEG4ESVideoRTPSink" can wait 
after sending each packet before requesting new data.
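
For example, one straightforward way to instrument the "fetch" step 
(the names here are just illustrative) is to record "gettimeofday()" 
immediately before you request new data from your input source, and 
again at the top of your after-getting callback:

#include <sys/time.h>
#include <stdio.h>

static struct timeval fetchStart;

// Call this immediately before calling "getNextFrame()" on the source:
static void noteFetchStart() {
  gettimeofday(&fetchStart, NULL);
}

// Call this at the top of the after-getting callback, once data has arrived:
static void noteFetchEnd() {
  struct timeval now;
  gettimeofday(&now, NULL);
  long elapsedUs = (now.tv_sec - fetchStart.tv_sec)*1000000L
                   + (now.tv_usec - fetchStart.tv_usec);
  fprintf(stderr, "fetch from upstream source took %ld us\n", elapsedUs);
}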

>Another question: Can I use a MPEG1or2VideoStreamFramer after the 
>MPEG1or2VideoRTPSource, so that I can pass complete frames into the 
>decoder instead of chunks?

No.  ("MPEG1or2VideoStreamFramer" takes an unstructured byte stream as input.)

You will need to write your own filter - inserted after the 
"MPEG1or2VideoRTPSource" object - to assemble incoming slices into a 
complete frame, prior to delivering it to your decoder.  (Better yet, 
fix your decoder so that it can handle individual slices - that way, 
you will handle data loss much better.)
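
To give a rough idea of the assembly approach, such a filter might 
look something like the following - an untested sketch, with 
hypothetical names, and with buffer-size and truncation handling 
simplified.  It accumulates each chunk that the 
"MPEG1or2VideoRTPSource" delivers, and passes the whole buffer 
downstream once the packet whose RTP marker bit is set (i.e., the 
last packet of the frame) has arrived:

#include "FramedFilter.hh"
#include "RTPSource.hh"
#include <string.h>

class SliceAssembler: public FramedFilter {
public:
  SliceAssembler(UsageEnvironment& env, RTPSource* inputSource)
    : FramedFilter(env, inputSource), fRTPSource(inputSource),
      fBytesInBuffer(0) {}

private:
  virtual void doGetNextFrame() { fetchNextSlice(); }

  void fetchNextSlice() {
    // Read the next slice into the space remaining in our frame buffer:
    fInputSource->getNextFrame(fBuffer + fBytesInBuffer,
                               sizeof fBuffer - fBytesInBuffer,
                               afterGettingSlice, this,
                               FramedSource::handleClosure, this);
  }

  static void afterGettingSlice(void* clientData, unsigned frameSize,
                                unsigned /*numTruncatedBytes*/,
                                struct timeval presentationTime,
                                unsigned durationInMicroseconds) {
    SliceAssembler* self = (SliceAssembler*)clientData;
    self->fBytesInBuffer += frameSize;
    self->fPresentationTime = presentationTime;
    self->fDurationInMicroseconds = durationInMicroseconds;

    if (self->fRTPSource->curPacketMarkerBit()) {
      // This packet completed the frame; deliver the assembled frame:
      unsigned n = self->fBytesInBuffer;
      if (n > self->fMaxSize) n = self->fMaxSize; // (truncation handling elided)
      memcpy(self->fTo, self->fBuffer, n);
      self->fFrameSize = n;
      self->fBytesInBuffer = 0;
      FramedSource::afterGetting(self);
    } else {
      // Not yet a complete frame; keep reading slices:
      self->fetchNextSlice();
    }
  }

  RTPSource* fRTPSource;
  unsigned char fBuffer[300000];
  unsigned fBytesInBuffer;
};
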
-- 

Ross Finlayson
Live Networks, Inc.
http://www.live555.com/

