[Live-devel] What is a frame in the live-lib ?
Ross Finlayson
finlayson at live.com
Mon Jun 6 18:50:42 PDT 2005
> > So, is there one PES source for audio, and another PES source for
> > video?
>
>Yes, our DVB parser currently delivers two PES streams: one for audio and
>one for video. It could deliver ES instead, but then we suspect the two
>would be out of sync ;-)
No, not necessarily, because the timestamps in the video Elementary Stream
will be used instead of the timestamps in the PES header.
I suggest that you deliver each of your PES streams as an unstructured byte
source
- i.e., deliver as much data as the downstream reader requests (fMaxSize),
but no more than the amount of data remaining in the last-read PES
packet. *Do not* include the PES header in the data that you deliver.
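For illustration, such a delivery source might look something like the
following sketch. (This is just a hypothetical skeleton - "PESPayloadSource"
and its member variables are made-up names; only the "FramedSource"
interface that it uses is real.)

#include "FramedSource.hh"
#include <string.h>

// Hypothetical sketch: a FramedSource that delivers raw PES payload bytes
// (with the PES headers already stripped off by your DVB parser).
class PESPayloadSource: public FramedSource {
public:
  PESPayloadSource(UsageEnvironment& env)
    : FramedSource(env), fPayload(NULL), fPayloadSize(0) {}

private:
  virtual void doGetNextFrame() {
    // (A real implementation would wait here for the next PES packet
    // whenever fPayloadSize is 0.)
    // Deliver as much as the reader asked for (fMaxSize), but no more
    // than what remains of the last-read PES packet's payload:
    unsigned numToDeliver = fPayloadSize < fMaxSize ? fPayloadSize : fMaxSize;
    memmove(fTo, fPayload, numToDeliver);
    fFrameSize = numToDeliver;
    fPayload += numToDeliver;
    fPayloadSize -= numToDeliver;

    // Tell the downstream reader that the data is ready:
    FramedSource::afterGetting(this);
  }

  unsigned char* fPayload; // -> remaining payload of the last-read PES packet
  unsigned fPayloadSize;   // number of payload bytes remaining
};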
Then, feed the data from the video stream into a
"MPEG1or2VideoStreamFramer", and feed the data from the audio stream into a
"MPEG1or2AudioStreamFramer". (In the latter case, you may want to set the
"syncWithInputSource" parameter to True.)
>We want to stream via RTP (2 streams) to another liveMedia app we're
>writing: a proxy, which will transcode the video part on-the-fly and
>transmit the streams to the client (2 streams; client: VLC). (I assume
>that we have to implement the proxy as a FramedFilter and then do some
>magic in order to keep both streams in sync, since the video is delayed
>by the transcoding process, right?)
Yes, but the 'magic' required is probably just to preserve, unchanged, the
"presentationTime" parameters that come from the
"MPEG1or2VideoStreamFramer". I.e., no real magic at all?
I.e., it sounde like you want something like:
video-PES-data -> MPEG1or2VideoStreamFramer -> your-transcoder ->
<something>RTPSink
and
audio-PES-data -> MPEG1or2AudioStreamFramer -> MPEG1or2AudioRTPSink
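Here's a rough skeleton of such a transcoding filter - just a sketch, with
the actual transcoding step left as a comment. ("TranscoderFilter" is a
made-up name, but the "FramedFilter"/"FramedSource" interfaces it uses are
real.)

#include "FramedFilter.hh"

class TranscoderFilter: public FramedFilter {
public:
  TranscoderFilter(UsageEnvironment& env, FramedSource* inputSource)
    : FramedFilter(env, inputSource) {}

private:
  virtual void doGetNextFrame() {
    // Ask the upstream framer for its next 'frame':
    fInputSource->getNextFrame(fTo, fMaxSize,
                               afterGettingFrame, this,
                               FramedSource::handleClosure, this);
  }

  static void afterGettingFrame(void* clientData, unsigned frameSize,
                                unsigned numTruncatedBytes,
                                struct timeval presentationTime,
                                unsigned durationInMicroseconds) {
    TranscoderFilter* filter = (TranscoderFilter*)clientData;

    // ... transcode the data (now in filter->fTo) here, and set
    // fFrameSize to the size of the transcoded result ...
    filter->fFrameSize = frameSize;
    filter->fNumTruncatedBytes = numTruncatedBytes;

    // The 'magic': pass the framer's presentation time on, unchanged:
    filter->fPresentationTime = presentationTime;
    filter->fDurationInMicroseconds = durationInMicroseconds;

    FramedSource::afterGetting(filter);
  }
};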
Note that the 'frames' that "MPEG1or2VideoStreamFramer" delivers are not
actually complete MPEG video frames (perhaps that is what you were
asking?). Instead, they are Video Sequence Headers, GOP Headers, Picture
Headers, and Slices. Your transcoder will need to take that into account
(by checking the first 4 bytes of each 'frame' that it receives).
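For example (where "frameData" is a placeholder for a pointer to a
delivered 'frame'; the start-code values come from the MPEG-1/2 video
specifications):

// Classify a delivered 'frame' by its 4-byte start code:
void classifyFrame(unsigned char const* frameData) {
  if (frameData[0] == 0 && frameData[1] == 0 && frameData[2] == 1) {
    unsigned char code = frameData[3];
    if (code == 0xB3) {
      // Video Sequence Header
    } else if (code == 0xB8) {
      // GOP Header
    } else if (code == 0x00) {
      // Picture Header
    } else if (code >= 0x01 && code <= 0xAF) {
      // Slice
    }
  }
}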
I suggest that you start by making sure you can successfully stream the
original, untranscoded MPEG video data, i.e., using
video-PES-data -> MPEG1or2VideoStreamFramer -> MPEG1or2VideoRTPSink
before you start developing a transcoder.
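A minimal test of that untranscoded pipeline might look like the following
(reusing "env" and "videoFramer" from above; the multicast address, port
number, and TTL are made-up values):

#include "liveMedia.hh"
#include "GroupsockHelper.hh"

// Stream the framed (but untranscoded) video over RTP:
struct in_addr destAddr;
destAddr.s_addr = our_inet_addr("239.255.42.42"); // made-up address
Groupsock rtpGroupsock(*env, destAddr, Port(8888), 255 /*TTL*/);

MPEG1or2VideoRTPSink* videoSink =
  MPEG1or2VideoRTPSink::createNew(*env, &rtpGroupsock);
videoSink->startPlaying(*videoFramer, NULL, NULL);

env->taskScheduler().doEventLoop();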
Ross Finlayson
LIVE.COM
<http://www.live.com/>