[Live-devel] compilation issue / vcpkg / windows
g.jaegy
g.jaegy at imagine3d.fr
Mon Feb 14 05:44:26 PST 2022
It is not easy to figure out what's wrong.
I'm mostly using the same initialization code as testH265VideoStreamer.cpp (this code is in the main() function).
I'm using a different implementation of play(), however. I've created a custom class inheriting from FramedSource that is responsible for delivering the data. This class is pretty simple at the moment: it implements "deliverFrame()", which simply (suboptimally, for now) pulls "NALU + presentation time" entries from a list and copies the data to "fTo" (setting "fFrameSize" and "fPresentationTime" in addition to doing the copy). It then calls FramedSource::afterGetting(this).
FYI, the "NALU + presentation time" entries are created on another thread (everything made thread-safe, suboptimally, using a mutex; keeping it simple for now). This thread has a main loop that processes frames produced by NVEnc, decomposing each frame into a list of NALUs tagged with the source frame's presentation time (one NVEnc frame == multiple NALUs with the same time). It is also responsible for calling triggerEvent() on each cycle when the NALU list is not empty (which leads to deliverFrame() being called).
In my class, "deliverFrame()" sends a single NALU per call, meaning a frame is sent through multiple "deliverFrame()" calls. (I initially tried sending multiple NALUs per deliverFrame() call, one by one, with a call to FramedSource::afterGetting(this) after copying each NALU, but I hit a few asserts(), so I assumed this wasn't the right way of doing it.)
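To make the handoff concrete, here is a minimal, self-contained sketch of the producer/consumer pattern described above (hypothetical names, plain standard-library types, no live555 dependencies): the encoder thread pushes all NALUs of a frame with the frame's presentation time, and the delivery side pops exactly one NALU per call.

```cpp
#include <cstdint>
#include <deque>
#include <mutex>
#include <utility>
#include <vector>

// Hypothetical stand-in for the mutex-protected "NALU + presentation time" list.
struct NaluQueue {
  std::mutex mtx;
  std::deque<std::pair<std::vector<uint8_t>, double>> entries;

  // Producer side (encoder thread): push every NALU of a frame, tagged with
  // the frame's presentation time. One encoded frame -> several queue entries.
  void pushFrame(const std::vector<std::vector<uint8_t>>& nalus, double pts) {
    std::lock_guard<std::mutex> lock(mtx);
    for (const auto& n : nalus) entries.emplace_back(n, pts);
    // In the real code, this is where triggerEvent() would be signalled.
  }

  // Consumer side (deliverFrame): pop exactly one NALU per call.
  // Returns false when nothing is pending.
  bool popOne(std::vector<uint8_t>& nalu, double& pts) {
    std::lock_guard<std::mutex> lock(mtx);
    if (entries.empty()) return false;
    nalu = std::move(entries.front().first);
    pts = entries.front().second;
    entries.pop_front();
    return true;
  }
};
```

This mirrors the one-NALU-per-deliverFrame() rule: each pop feeds one afterGetting(), and the downstream object asks again for the next one.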
Also, I've used an "H265VideoStreamDiscreteFramer" instead of the "H265VideoStreamFramer" used in the example, which I think is correct.
This all works until a second client connects. At that point, the first client stops playing.
I must be doing something wrong, but I don't know what.
Would you have any advice or hints that would help me find out what is wrong? Thanks a lot!
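For reference, the delivery rule I'm trying to follow ("copy at most fMaxSize bytes, report the overflow in fNumTruncatedBytes") can be reduced to a pure function that is easy to check in isolation. This is a stand-alone sketch with hypothetical names, not live555's actual types:

```cpp
#include <algorithm>
#include <cstddef>

// Hypothetical mirror of the fFrameSize / fNumTruncatedBytes contract:
// deliver at most maxSize bytes; report how many bytes were dropped.
struct Delivery {
  std::size_t frameSize;         // bytes actually copied (<= maxSize)
  std::size_t numTruncatedBytes; // bytes omitted because the NALU was too big
};

Delivery computeDelivery(std::size_t naluSize, std::size_t maxSize) {
  Delivery d{};
  d.frameSize = std::min(naluSize, maxSize);
  d.numTruncatedBytes = naluSize > maxSize ? naluSize - maxSize : 0;
  return d;
}
```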
-----Original Message-----
From: live-devel <live-devel-bounces at us.live555.com> On Behalf Of Ross Finlayson
Sent: Friday, February 4, 2022 10:57 PM
To: LIVE555 Streaming Media - development & use <live-devel at us.live555.com>
Subject: Re: [Live-devel] compilation issue / vcpkg / windows
> On Feb 5, 2022, at 10:43 AM, g.jaegy <g.jaegy at imagine3d.fr> wrote:
>
> Thanks for your answer.
>
> I was actually running VLC from two different PCs on my LAN.
>
> The fact that multiple clients connect should be fully transparent to me, right ? fPresentationTime should continue increasing normally, shouldn't it ?
Yes. When streaming multicast (using a "PassiveServerMediaSubsession"), your "FramedSource" subclass should be oblivious to multicast clients joining/leaving.
Once again, you should test this on the original, unmodified “testH265VideoStreamer” application, before looking into your own code.
Ross Finlayson
Live Networks, Inc.
http://www.live555.com/
_______________________________________________
live-devel mailing list
live-devel at lists.live555.com
http://lists.live555.com/mailman/listinfo/live-devel
-------------- next part --------------
/**********
This library is free software; you can redistribute it and/or modify it under
the terms of the GNU Lesser General Public License as published by the
Free Software Foundation; either version 3 of the License, or (at your
option) any later version. (See <http://www.gnu.org/copyleft/lesser.html>.)
This library is distributed in the hope that it will be useful, but WITHOUT
ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for
more details.
You should have received a copy of the GNU Lesser General Public License
along with this library; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
**********/
// "liveMedia"
// Copyright (c) 1996-2022 Live Networks, Inc. All rights reserved.
// A template for a MediaSource encapsulating an audio/video input device
//
// NOTE: Sections of this code labeled "%%% TO BE WRITTEN %%%" are incomplete, and need to be written by the programmer
// (depending on the features of the particular device).
// Implementation
#include "imVideoCapturePrec.h"
#include "imVideoCaptureManager.h"
#include "imDeviceSource.hh"
#include <GroupsockHelper.hh> // for "gettimeofday()"
#include "ImagineNvEncoder.h"
imDeviceSource*
imDeviceSource::createNew(UsageEnvironment& env, imDeviceParameters params) {
  return new imDeviceSource(env, params);
}
//EventTriggerId imDeviceSource::s_nEventTriggerId = 0;
unsigned imDeviceSource::s_nReferenceCount = 0;
imDeviceSource::imDeviceSource(UsageEnvironment& env, imDeviceParameters params)
  : FramedSource(env), m_oParams(params) {
  if (s_nReferenceCount == 0) {
    // Any global initialization of the device would be done here:
    //%%% TO BE WRITTEN %%%
  }
  ++s_nReferenceCount;

  // Any instance-specific initialization of the device would be done here:
  //%%% TO BE WRITTEN %%%

  // We arrange here for our "deliverFrame" member function to be called
  // whenever the next frame of data becomes available from the device.
  //
  // If the device can be accessed as a readable socket, then one easy way to do this is using a call to
  //     envir().taskScheduler().turnOnBackgroundReadHandling( ... )
  // (See examples of this call in the "liveMedia" directory.)
  //
  // If, however, the device *cannot* be accessed as a readable socket, then instead we can implement it using 'event triggers':
  // Create an 'event trigger' for this device (if it hasn't already been done):
  //if (s_nEventTriggerId == 0) {
  m_nEventTriggerId = envir().taskScheduler().createEventTrigger(deliverFrame0);
  //}
}
imDeviceSource::~imDeviceSource() {
  // Any instance-specific 'destruction' (i.e., resetting) of the device would be done here:
  //%%% TO BE WRITTEN %%%
  --s_nReferenceCount;
  //if (s_nReferenceCount == 0) {
  // Any global 'destruction' (i.e., resetting) of the device would be done here:
  //%%% TO BE WRITTEN %%%

  // Reclaim our 'event trigger'
  envir().taskScheduler().deleteEventTrigger(m_nEventTriggerId);
  m_nEventTriggerId = 0;
  //}
}
void imDeviceSource::doGetNextFrame() {
  // This function is called (by our 'downstream' object) when it asks for new data.

  // Note: If, for some reason, the source device stops being readable (e.g., it gets closed), then you do the following:
  if (0 /* the source stops being readable */ /*%%% TO BE WRITTEN %%%*/) {
    handleClosure();
    return;
  }

  // If a new frame of data is immediately available to be delivered, then do this now:
  if (0 /* a new frame of data is immediately available to be delivered */ /*%%% TO BE WRITTEN %%%*/) {
    deliverFrame();
  }

  // No new data is immediately available to be delivered. We don't do anything more here.
  // Instead, our event trigger must be called (e.g., from a separate thread) when new data becomes available.
}
void imDeviceSource::deliverFrame0(void* clientData) {
  ((imDeviceSource*)clientData)->deliverFrame();
}
void imDeviceSource::deliverFrame() {
  // This function is called when new frame data is available from the device.
  // We deliver this data by copying it to the 'downstream' object, using the following parameters (class members):
  // 'in' parameters (these should *not* be modified by this function):
  //     fTo: The frame data is copied to this address.
  //         (Note that the variable "fTo" is *not* modified. Instead,
  //          the frame data is copied to the address pointed to by "fTo".)
  //     fMaxSize: This is the maximum number of bytes that can be copied.
  //         (If the actual frame is larger than this, then it should
  //          be truncated, and "fNumTruncatedBytes" set accordingly.)
  // 'out' parameters (these are modified by this function):
  //     fFrameSize: Should be set to the delivered frame size (<= fMaxSize).
  //     fNumTruncatedBytes: Should be set iff the delivered frame would have been
  //         bigger than "fMaxSize", in which case it's set to the number of bytes
  //         that have been omitted.
  //     fPresentationTime: Should be set to the frame's presentation time
  //         (seconds, microseconds). This time must be aligned with 'wall-clock time' - i.e., the time that you would get
  //         by calling "gettimeofday()".
  //     fDurationInMicroseconds: Should be set to the frame's duration, if known.
  //         If, however, the device is a 'live source' (e.g., encoded from a camera or microphone), then we probably don't need
  //         to set this variable, because - in this case - data will never arrive 'early'.

  if (!isCurrentlyAwaitingData()) return; // we're not ready for the data yet

  // NOCOMMIT avoid copy
  imEASTLVector<imU8> arrNALU;
  imDouble dNALUTime = 0.0; // initialized in case the list is empty

  m_oParams.m_pPerWindowData->m_oMutexListNALUsWithTime.lock();
  if (!m_oParams.m_pPerWindowData->m_lstNALUsWithTime.empty()) {
    arrNALU = m_oParams.m_pPerWindowData->m_lstNALUsWithTime.front().first;
    dNALUTime = m_oParams.m_pPerWindowData->m_lstNALUsWithTime.front().second;
    m_oParams.m_pPerWindowData->m_lstNALUsWithTime.pop_front();
  }
  m_oParams.m_pPerWindowData->m_oMutexListNALUsWithTime.unlock();

  if (!arrNALU.empty()) {
    imU8* newFrameDataStart = &arrNALU[0];
    imU32 newFrameSize = (imU32)arrNALU.size();

    // Deliver the data here:
    if (newFrameSize > fMaxSize) {
      fFrameSize = fMaxSize;
      fNumTruncatedBytes = newFrameSize - fMaxSize;
    } else {
      fFrameSize = newFrameSize;
    }

    //gettimeofday(&fPresentationTime, NULL); // If you have a more accurate time - e.g., from an encoder - then use that instead.
    fPresentationTime.tv_sec = (imU32)imFloor(dNALUTime);
    fPresentationTime.tv_usec = (imU32)((dNALUTime - (imDouble)fPresentationTime.tv_sec) * 1000000.0);

    // If the device is *not* a 'live source' (e.g., it comes instead from a file or buffer), then set "fDurationInMicroseconds" here.
    memcpy(fTo, newFrameDataStart, fFrameSize);

    // After delivering the data, inform the reader that it is now available:
    FramedSource::afterGetting(this);
  }
}
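// The double-seconds to (tv_sec, tv_usec) split used in deliverFrame() is easy
// to get subtly wrong. Below is a stand-alone version of the same arithmetic
// (plain C++ types, hypothetical names, no imagine3d helpers) that can be
// checked in isolation:

```cpp
#include <cmath>

// Hypothetical stand-alone equivalent of the fPresentationTime assignment:
// split a presentation time in (fractional) seconds into whole seconds and
// microseconds, as a struct timeval would store them.
struct TimevalParts {
  long sec;
  long usec;
};

TimevalParts toTimevalParts(double seconds) {
  TimevalParts t;
  t.sec = static_cast<long>(std::floor(seconds));
  t.usec = static_cast<long>((seconds - static_cast<double>(t.sec)) * 1000000.0);
  return t;
}
```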
//void imDeviceSource::RetrieveNextInputFrame()
//{
// m_oParams.m_pPerWindowData->m_oMutexNextInputBufferIDAndFrameTimes.lock();
// imAssert(m_oParams.m_pPerWindowData->m_iNextInputBufferID == -1);
// const NvEncInputFrame* pEncInput = m_oParams.m_pPerWindowData->m_pNvEncoder->GetNextInputFrame();
// CUarray pCUarray = (CUarray)pEncInput->inputPtr;
// m_oParams.m_pPerWindowData->m_iNextInputBufferID = m_oParams.m_pPerWindowData->m_pNvEncoder->GetBufferIDFromCUArray(pCUarray);
// m_oParams.m_pPerWindowData->m_oMutexNextInputBufferIDAndFrameTimes.unlock();
//}
-------------- next part --------------
A non-text attachment was scrubbed...
Name: imDeviceSource.hh
Type: application/octet-stream
Size: 2455 bytes
Desc: imDeviceSource.hh
URL: <http://lists.live555.com/pipermail/live-devel/attachments/20220214/75687cba/attachment.obj>