[Live-devel] Subject: MJPEG streaming server and packet fragmentation
Czesław Makarski
cmakarski at jpembedded.eu
Tue May 2 02:59:12 PDT 2017
Thank you for your answer. My comments are below:
I'm fully aware of the limitations of MJPEG; however, I'm using it on a
small Gigabit local network, so the limitations are not so grave.
And yes, I'm using the latest version of VLC.
> Another thing that I’d do is try using our command-line RTSP client “openRTSP” (see <http://www.live555.com/openRTSP/>) rather than VLC as the receiving client. If you run “openRTSP” with the “-m” option, it should output a series of files, one for each video frame. If you rename each of these files to have a “.jpg” filename suffix, then you should be able to view each file (using normal image viewer software).
I used the openRTSP program and it does not produce any files (and I'm
terminating it with kill -HUP, just as instructed). I've debugged it
with gdb (after compiling with -O0 and -g), and here is what I found:
* FramedSource::getNextFrame() (on the client side, i.e. the one that
reads the data out of the socket) is called only once.
* The variable MediaSubsession::fReceiveRawJPEGFrames on the client side
is set to 'false' (its default value) - should it be like this?
* RTSPClient::incomingDataHandler1() is called several (3-4) times,
reading bytes from the socket (presumably reading the whole network
packet), and is then never called again.
During all this time, the server keeps calling doGetNextFrame() and
producing the network packets with the data.
I suspect that the issue may lie in the socket configuration of my
server. The classes are implemented like this:
--------------------8<-----------------------------------
MyCamera_Subsession::MyCamera_Subsession(UsageEnvironment& env,
                                         Boolean reuseFirstSource)
  : OnDemandServerMediaSubsession(env, reuseFirstSource)
{
}

MyCamera_Source::MyCamera_Source(UsageEnvironment& env)
  : JPEGVideoSource(env)
{
}

MyCamera_Subsession* MyCamera_Subsession::createNew(UsageEnvironment& env,
                                                    Boolean reuseFirstSource)
{
  return new MyCamera_Subsession(env, reuseFirstSource);
}

FramedSource* MyCamera_Subsession::createNewStreamSource(unsigned clientSessionId,
                                                         unsigned& estBitrate)
{
  MyCamera_Source* src = MyCamera_Source::createNew(envir());
  src->frameClient = &frameClient; // unused field
  estBitrate = 90000;
  return src;
}

RTPSink* MyCamera_Subsession::createNewRTPSink(Groupsock* rtpGroupsock,
                                               unsigned char rtpPayloadTypeIfDynamic,
                                               FramedSource* inputSource)
{
  return JPEGVideoRTPSink::createNew(envir(), rtpGroupsock);
}

MyCamera_Source* MyCamera_Source::createNew(UsageEnvironment& env)
{
  return new MyCamera_Source(env);
}
--------------------8<-----------------------------------
Also, just in case, I'm providing the call stack of the client (openRTSP):
--------------------8<-----------------------------------
#0 FramedSource::getNextFrame (this=0x80e56a0, to=0x80e6610
"\260Gⷠ\001\016\b", maxSize=100000,
afterGettingFunc=0x804eb20 <FileSink::afterGettingFrame(void*,
unsigned int, unsigned int, timeval, unsigned int)>,
afterGettingClientData=0x80e65b0,
onCloseFunc=0x804e700 <MediaSink::onSourceClosure(void*)>,
onCloseClientData=0x80e65b0) at FramedSource.cpp:61
#1 0x0804ebfb in FileSink::continuePlaying (this=0x80e65b0) at
FileSink.cpp:79
#2 0x0804c19b in createOutputFiles (periodicFilenameSuffix=0x8099e80
"") at playCommon.cpp:950
#3 0x0804c795 in setupStreams () at playCommon.cpp:999
#4 0x0805c635 in RTSPClient::handleResponseBytes (this=0x80db130,
newBytesRead=209) at RTSPClient.cpp:1878
#5 0x0805d08e in RTSPClient::incomingDataHandler1 (this=0x80db130) at
RTSPClient.cpp:1563
#6 0x08097eef in BasicTaskScheduler::SingleStep (this=0x80daa10,
maxDelayTime=0) at BasicTaskScheduler.cpp:171
#7 0x080993bb in BasicTaskScheduler0::doEventLoop (this=0x80daa10,
watchVariable=0x0) at BasicTaskScheduler0.cpp:80
#8 0x08049b68 in main (argc=<optimized out>, argv=<optimized out>) at
playCommon.cpp:629
--------------------8<-----------------------------------
>
>
> One more thing. Instead of doing this:
>
>> memcpy(fTo, SPACE_JPG + offset, SPACE_JPG_len - offset);
>>
>> fFrameSize = SPACE_JPG_len - offset;
>
> You should check to make sure that you have enough space available to do the copy (although, in your case, it appears that you do). Instead, to be safer, do this:
>
> fFrameSize = SPACE_JPG_len - offset;
> if (fFrameSize > fMaxSize) {
> fNumTruncatedBytes = fFrameSize - fMaxSize;
> fFrameSize = fMaxSize;
> }
> memcpy(fTo, SPACE_JPG + offset, fFrameSize);
>
Yes, thank you for the remark and for your response.
--
Best Regards,
Czesław Makarski
JPEmbedded