[Live-devel] A question regarding the timing of doGetNextFrame
Ben Rush
ben at ben-rush.net
Sat Feb 6 13:41:55 PST 2016
Ross,
Thanks for the swift response. As it happens, I am using a discrete framer
for just this reason (in hopes it'd make synchronization easier). Here is
the implementation of my createNewStreamSource:
FramedSource* H264LiveServerMediaSession::createNewStreamSource(unsigned clientSessionID,
                                                                unsigned& estBitRate)
{
    estBitRate = 90000; // in kbps, per live555's convention for this parameter

    SimpleFramedSource* source = SimpleFramedSource::createNew(envir());
    return H264VideoStreamDiscreteFramer::createNew(envir(), source);
}
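For completeness, the matching createNewRTPSink override is the usual one-liner
(sketched here from memory rather than pasted, so treat it as approximate):

RTPSink* H264LiveServerMediaSession::createNewRTPSink(Groupsock* rtpGroupsock,
    unsigned char rtpPayloadTypeIfDynamic, FramedSource* /*inputSource*/)
{
    // Standard H.264 RTP sink; live555 assigns the dynamic payload type.
    return H264VideoRTPSink::createNew(envir(), rtpGroupsock, rtpPayloadTypeIfDynamic);
}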
To be clear, I'm not doubting your intuition on this (you wrote the darn thing,
so you obviously know it better than I do), but I still don't understand why
the SERVER itself is calling my AudioInputDevice less frequently once the
video is enabled. I expected it to poll the audio and video sources at the
same rate as when each is enabled individually.
Here is doGetNextFrame on my video source (cout statements are there for
debugging this issue):
void SimpleFramedSource::doGetNextFrame()
{
    std::cout << "-";

    DWORD currentTickCount = ::GetTickCount();
    _lastTickCount = currentTickCount;

    if (this->_nalQueue.empty())
    {
        // Get a frame of data, encode it, and enqueue the resulting NAL units.
        this->GetFrameAndEncodeToNALUnitsAndEnqueue();

        // Record the time of day; this becomes the frame's presentation time.
        ::gettimeofday(&_time, NULL);

        // Take the NAL units and push them to live555.
        this->DeliverNALUnitsToLive555FromQueue(true);
    }
    else
    {
        // There's already data queued, so just deliver it.
        this->DeliverNALUnitsToLive555FromQueue(false);
    }
}
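Since the question is about timing, here is roughly what
DeliverNALUnitsToLive555FromQueue does. This is a simplified sketch (it assumes
a small NALUnit struct with data/size members, and the real code has more
bookkeeping):

void SimpleFramedSource::DeliverNALUnitsToLive555FromQueue(bool freshPresentationTime)
{
    // The flag just records whether _time was refreshed on this call;
    // it isn't needed in this simplified version.
    (void)freshPresentationTime;

    // Pop the next NAL unit. Note it must NOT include the 0x00000001 start
    // code, because H264VideoStreamDiscreteFramer expects bare NAL units.
    NALUnit nal = this->_nalQueue.front();
    this->_nalQueue.pop();

    // Copy into live555's buffer, truncating if it's too small.
    if (nal.size > fMaxSize) {
        fFrameSize = fMaxSize;
        fNumTruncatedBytes = nal.size - fMaxSize;
    } else {
        fFrameSize = nal.size;
        fNumTruncatedBytes = 0;
    }
    memcpy(fTo, nal.data, fFrameSize);

    // Every NAL unit of a given video frame carries the same wall-clock
    // presentation time captured by gettimeofday() at encode time.
    fPresentationTime = _time;
    fDurationInMicroseconds = 0; // live source: have the server ask again ASAP

    // Tell live555 this delivery is complete.
    FramedSource::afterGetting(this);
}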
And here it is on my audio source:
void WindowsAudioInputDevice_common::doGetNextFrame() {
    std::cout << "<";

    if (!fHaveStarted) {
        // Before reading the first audio data, flush any existing data:
        while (readHead != NULL) releaseHeadBuffer();
        fHaveStarted = True;
    }
    fTotalPollingDelay = 0;
    audioReadyPoller1();

    std::cout << ">";
}
By the way, off topic (and I don't know if you care to know), but I had to
fix something in your waveInCallback method (in WindowsAudioInputDevice_common):
the callback's DWORD parameters need to be changed to DWORD_PTR to support
64-bit Windows.
static void CALLBACK waveInCallback(HWAVEIN /*hwi*/, UINT uMsg,
                                    DWORD_PTR /*dwInstance*/, DWORD_PTR dwParam1,
                                    DWORD_PTR /*dwParam2*/) {
    switch (uMsg) {
        case WIM_DATA: {
            WAVEHDR* hdr = (WAVEHDR*)dwParam1;
            WindowsAudioInputDevice_common::waveInProc(hdr);
            break;
        }
    }
}
Without that change, the call stack was getting corrupted and dwParam1 pointed
to garbage.
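The registration side doesn't change, since waveInOpen already takes DWORD_PTR
arguments; for reference, it looks something like this (a sketch assuming an
HWAVEIN hWaveIn and a filled-in WAVEFORMATEX waveFormat, not the shipped code
verbatim):

// dwCallback and dwInstance are DWORD_PTR on Win64, which is why the
// callback's own parameters must be pointer-sized as well.
MMRESULT res = waveInOpen(&hWaveIn, WAVE_MAPPER, &waveFormat,
                          (DWORD_PTR)waveInCallback, (DWORD_PTR)0,
                          CALLBACK_FUNCTION);
if (res != MMSYSERR_NOERROR) {
    // handle the open failure
}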
On Sat, Feb 6, 2016 at 3:30 PM Ross Finlayson <finlayson at live555.com> wrote:
> > I have seen from reading the lists that care must be taken to ensure the
> > timing is correct between the two streams
>
> Yes. Problems like this are usually caused by not setting proper
> “fPresentationTime” values in your (video and audio)
> “OnDemandServerMediaSubsession” subclasses (when you deliver each frame).
>
> You should also read
>
> http://lists.live555.com/pipermail/live-devel/2016-January/019856.html
>
> If your H.264 video source is coming from a byte stream (i.e., you’re
> using “H264VideoStreamFramer” rather than “H264VideoStreamDiscreteFramer”),
> then you can’t expect to get good audio/video synchronization, because the
> H.264 video stream parsing code can’t give you accurate presentation times
> (it can only give you accurate presentation times relative to the rest of
> the video stream). Instead, you’ll need to use a
> “H264VideoStreamDiscreteFramer”, and set accurate presentation times when
> you deliver H.264 NAL units to it.
>
>
> Ross Finlayson
> Live Networks, Inc.
> http://www.live555.com/
>
>
> _______________________________________________
> live-devel mailing list
> live-devel at lists.live555.com
> http://lists.live555.com/mailman/listinfo/live-devel
>