[Live-devel] Synchronizing video and audio using OnDemandServerMediaSubsessions
Diego Barberio
diego.barberio at redmondsoftware.com
Thu Oct 2 13:35:23 PDT 2008
Hi Ross,
I have been analyzing the feedback you sent me; however, I'm still stuck on
the same issue. Here is a brief summary of what I'm doing:
In addition to what I described in my first e-mail, I have an
H263plusVideoRTPSink instance which is constructed in
CH263plusVideoDXServerMediaSubsession. Additionally, for the audio
subsession I have written a CAlawSimpleRTPSink class which extends
SimpleRTPSink. I did this because I couldn't find any sink for A-law audio
capable of sending the rtpmap line in the SDP description, and I need that
field to be included there. Therefore, I only redefined the rtpmapLine()
method. The code for the AlawSimpleRTPSink class is attached.
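In outline, the subclass is along these lines (a simplified sketch, not the
full attached file; the PCMA constants are what I'm assuming for G.711 A-law,
and it relies on rtpmapLine() being declared virtual in this version of
RTPSink):

#include "SimpleRTPSink.hh"
#include <stdio.h>
#include <string.h>

class CAlawSimpleRTPSink: public SimpleRTPSink
{
public:
    static CAlawSimpleRTPSink* createNew(UsageEnvironment& env, Groupsock* rtpGS)
    {
        return new CAlawSimpleRTPSink(env, rtpGS);
    }

protected:
    CAlawSimpleRTPSink(UsageEnvironment& env, Groupsock* rtpGS)
        // Static payload type 8 = PCMA (G.711 A-law), 8000 Hz, mono
        : SimpleRTPSink(env, rtpGS, 8, 8000, "audio", "PCMA", 1) {}

    // The base class omits the a=rtpmap line for static payload types,
    // so build one explicitly (e.g. "a=rtpmap:8 PCMA/8000"):
    virtual char const* rtpmapLine() const
    {
        char buf[100];
        sprintf(buf, "a=rtpmap:%u %s/%u\r\n",
                (unsigned)rtpPayloadType(), rtpPayloadFormatName(),
                rtpTimestampFrequency());
        char* result = new char[strlen(buf) + 1];
        strcpy(result, buf);
        return result; // the caller delete[]s it, as with RTPSink::rtpmapLine()
    }
};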
Additionally, I have read another e-mail thread (subject "Timestamp gap in
RTCP Report for MPEG1or2VideoStreamFramer") describing a case that seems
similar to mine. There, you said that the code which generates the RTP and
RTCP presentation times is correct, and that it is up to the application to
set the presentation times correctly (you also mentioned this in the mail
you sent me). Here is the code snippet where I set the presentation time for
CH263plusVideoDXFrameSource, after setting fDurationInMicroseconds
(which is always 40 ms) in the doGetNextFrame() method. The same code applies
to the A-law audio frames, except that fDurationInMicroseconds is about
500 ms. I tried shortening the 500 ms duration to 40 ms, but the delay did
not change:
void CH263plusVideoDXFrameSource::setPresentationTime()
{
    // Check whether this is the first frame we send
    if (fPresentationTime.tv_sec == 0 && fPresentationTime.tv_usec == 0)
    {
        // This is the first frame, so use the current wall-clock time
        gettimeofday(&fPresentationTime, NULL);
    }
    else
    {
        // For the following frames, use the previous presentation time
        // plus the frame duration
        fPresentationTime.tv_usec += fDurationInMicroseconds;
        if (fPresentationTime.tv_usec >= 1000000)
        {
            fPresentationTime.tv_sec++;
            fPresentationTime.tv_usec -= 1000000;
        }
    }
}
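For context, doGetNextFrame() in this source ends up doing roughly the
following (a simplified sketch; the encoder-specific code that fills fTo and
fFrameSize is omitted):

void CH263plusVideoDXFrameSource::doGetNextFrame()
{
    // ... copy the next encoded H.263+ frame into fTo and set fFrameSize ...

    fDurationInMicroseconds = 40000; // 40 ms per video frame
    setPresentationTime();           // the function shown above

    // Hand the completed frame to the downstream RTPSink:
    FramedSource::afterGetting(this);
}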
To sum up, my questions are the following:
- Am I doing something wrong, or missing anything else, in the definition of
CAlawSimpleRTPSink? Should I use another class or redefine any other method?
- Is my code for setting the presentation time correct? If not, what should
I change?
Thanks,
Diego
-----Original Message-----
From: live-devel-bounces at ns.live555.com
[mailto:live-devel-bounces at ns.live555.com] On Behalf Of Ross Finlayson
Sent: Friday, September 5, 2008 06:27 PM
To: LIVE555 Streaming Media - development & use
Subject: Re: [Live-devel] Synchronizing video and audio using
OnDemandServerMediaSubsessions
To get proper audio/video synchronization, you must create a
"RTCPInstance" for each "RTPSink". However, the
"OnDemandServerMediaSubsession" class does this automatically, so,
because you're subclassing this, you don't need to do anything
special to implement RTCP - you already have it.
However, the second important thing that you need is that the
presentation times that you give to each frame (that feeds into each
"RTPSink") *must* be accurate. It is those presentation times that
get delivered to the receiver, and used (by the receiver) to do
audio/video synchronization.
Delaying the transmission (or not) does not affect this at all; it
doesn't matter if video packets get sent slightly ahead of audio
packets (or vice versa). What's important is the *presentation
times* that you give each frame. If those are correct, then you will
get audio/video synchronization at the receiver.
This is assuming, of course, that your *receiver* implements standard
RTCP-based synchronization correctly. (If your receiver uses our
library, then it will.) But if your receiver is not standards
compliant and doesn't implement this, then audio/video
synchronization will never work.
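As a rough illustration of the receiver side - a minimal sketch, assuming a
"MediaSubsession*" obtained from the usual RTSP client setup - the library
lets you check whether this RTCP synchronization has taken effect yet:

#include "liveMedia.hh"

// Returns True once the subsession's presentation times have been
// synchronized by an incoming RTCP sender report; before that, they are
// only the receiver's locally-generated times.
Boolean subsessionIsSynchronized(MediaSubsession* subsession)
{
    RTPSource* src = subsession->rtpSource();
    return src != NULL && src->hasBeenSynchronizedUsingRTCP();
}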
--
Ross Finlayson
Live Networks, Inc.
http://www.live555.com/
_______________________________________________
live-devel mailing list
live-devel at lists.live555.com
http://lists.live555.com/mailman/listinfo/live-devel
Hi all,
I'm new to the live555 library.
I have a ServerMediaSession with two subsessions (one for H.263 video and the
other for G.711 A-law audio). Both subsessions extend
OnDemandServerMediaSubsession; the one I use for the video is called
CH263plusVideoDXServerMediaSubsession and the other is called
CALawAudioDXServerMediaSubsession. I also have two FramedSources, one for
each subsession: one called CH263plusVideoDXFrameSource and the other
CAlawAudioDXFrameSource.
The streaming of both media works perfectly, but the audio is delayed by
about 1.5 seconds relative to the video. To solve this I tried to delay the
video subsession by adding 1500 milliseconds to the fPresentationTime
attribute in the CH263plusVideoDXServerMediaSubsession::doGetNextFrame method,
but no change was perceived. So I started googling this problem, until I
reached the FAQ entry "Why do most RTP sessions use separate streams for
audio and video? How can a receiving client synchronize these streams?".
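What I tried was essentially this, inside the video source's
presentation-time code (simplified):

// Attempted workaround: bias the video presentation times by +1.5 seconds
fPresentationTime.tv_sec  += 1;
fPresentationTime.tv_usec += 500000;
if (fPresentationTime.tv_usec >= 1000000)
{
    fPresentationTime.tv_sec++;
    fPresentationTime.tv_usec -= 1000000;
}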
The problem is that I don't know where I should create the instance of the
RTCPInstance class, and there's no variable or field where I can store it. I
looked in the OnDemandServerMediaSubsession and FramedSource classes.
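From the FAQ and the test programs, the call that creates it seems to look
roughly like this (a sketch only; the wrapper function, the groupsock and the
bandwidth figure are placeholders taken from the demos):

#include "liveMedia.hh"
#include <unistd.h> // gethostname()

// Sketch based on the testProgs: pair an RTCPInstance with an RTPSink.
// "rtcpGroupsockAudio" and "audioRTPSink" stand for objects created
// elsewhere in the application.
RTCPInstance* createAudioRTCP(UsageEnvironment& env,
                              Groupsock& rtcpGroupsockAudio,
                              RTPSink* audioRTPSink)
{
    unsigned const estimatedSessionBandwidth = 500; // kbps, for RTCP scheduling
    unsigned char CNAME[101];
    gethostname((char*)CNAME, 100);
    CNAME[100] = '\0';
    return RTCPInstance::createNew(env, &rtcpGroupsockAudio,
                                   estimatedSessionBandwidth, CNAME,
                                   audioRTPSink,
                                   NULL /* no RTPSource; we're only sending */);
}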
Is there any way to delay the video streaming without using RTCPInstance? If
not, where should I create it and where should I store it?
If you need anything else, please ask for it.
Diego
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: ALawSimpleRTPSink.cpp
URL: <http://lists.live555.com/pipermail/live-devel/attachments/20081002/f39c5cf3/attachment.ksh>
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: ALawSimpleRTPSink.h
URL: <http://lists.live555.com/pipermail/live-devel/attachments/20081002/f39c5cf3/attachment.h>