[Live-devel] Subject: MJPEG streaming server and packet fragmentation

Czesław Makarski cmakarski at jpembedded.eu
Thu Apr 27 09:02:30 PDT 2017


Hi all,

I'm developing an MJPEG over RTSP (RFC 2435) streaming server using the 
live555 library.

For now, I'm trying to stream a single JPEG image that is stored in 
memory (a char* array SPACE_JPG with length SPACE_JPG_len).

Following the guidelines, I've created a subclass of JPEGVideoSource and 
also a subclass of OnDemandServerMediaSubsession (a sketch of the latter 
follows the code below). My source subclass' doGetNextFrame() looks as 
follows:

--------------------8<--------------------
void MyCamera_Source::doGetNextFrame()
{
     size_t size;

     int offset = 0x16B;

     memcpy(fTo, SPACE_JPG + offset, SPACE_JPG_len - offset);

     fFrameSize = SPACE_JPG_len - offset;

     // Set the 'presentation time':
     gettimeofday(&fPresentationTime, NULL);

     fDurationInMicroseconds = 1000000U/25;

     printf("doGetNextFrame() invocation!\n");

     nextTask() = envir().taskScheduler().scheduleDelayedTask(0, 
(TaskFunc*)FramedSource::afterGetting, this);
     return;
}
--------------------8<--------------------
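
For completeness, my OnDemandServerMediaSubsession subclass follows what 
I understand to be the usual pattern: createNewStreamSource() returns an 
instance of my source class, and createNewRTPSink() returns a 
JPEGVideoRTPSink (which performs the RFC 2435 packetization). The sketch 
below is simplified (the class name, the createNew() factory and the 
bitrate estimate are placeholders, not my exact code):

--------------------8<--------------------
#include "OnDemandServerMediaSubsession.hh"
#include "JPEGVideoRTPSink.hh"

class MyCamera_Subsession: public OnDemandServerMediaSubsession {
public:
  static MyCamera_Subsession* createNew(UsageEnvironment& env, Boolean reuseFirstSource) {
    return new MyCamera_Subsession(env, reuseFirstSource);
  }

protected:
  MyCamera_Subsession(UsageEnvironment& env, Boolean reuseFirstSource)
    : OnDemandServerMediaSubsession(env, reuseFirstSource) {}

  // Create an instance of my JPEGVideoSource subclass for each client session
  // (MyCamera_Source is the class shown above; its createNew() is assumed here):
  virtual FramedSource* createNewStreamSource(unsigned /*clientSessionId*/,
                                              unsigned& estBitrate) {
    estBitrate = 8000; // kbps; rough placeholder estimate
    return MyCamera_Source::createNew(envir());
  }

  // The RFC 2435 fragmentation into RTP packets is done by JPEGVideoRTPSink:
  virtual RTPSink* createNewRTPSink(Groupsock* rtpGroupsock,
                                    unsigned char /*rtpPayloadTypeIfDynamic*/,
                                    FramedSource* /*inputSource*/) {
    return JPEGVideoRTPSink::createNew(envir(), rtpGroupsock);
  }
};
--------------------8<--------------------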

The virtual functions type(), qFactor(), width() and height() are also 
overridden and return the required values (all according to the RFC). The 
quantization tables are defined as well and are memcpy'd into a separate 
buffer by my overridden quantizationTables() function.
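
For reference, here is a simplified sketch of those overrides (the 
concrete values are just examples for a test image, and fQTables is a 
placeholder name for the buffer that my quantizationTables() fills):

--------------------8<--------------------
// Example values only; the real code derives them from the JPEG headers.
u_int8_t MyCamera_Source::type()    { return 1; }    // 1 = 4:2:0 chroma subsampling (RFC 2435)
u_int8_t MyCamera_Source::qFactor() { return 255; }  // Q >= 128: quantization tables sent in-band
u_int8_t MyCamera_Source::width()   { return 640/8; }  // frame width in units of 8 pixels
u_int8_t MyCamera_Source::height()  { return 480/8; }  // frame height in units of 8 pixels

// Return the quantization tables copied out of the JPEG's DQT segments:
u_int8_t const* MyCamera_Source::quantizationTables(u_int8_t& precision, u_int16_t& length) {
  precision = 0;   // 8-bit table entries
  length = 128;    // two 64-byte tables (luma + chroma)
  return fQTables; // member buffer filled by memcpy from the DQT data
}
--------------------8<--------------------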

I'm testing the server with several images, and it appears that when an 
image is small enough (fFrameSize) to fit into a single RTP/JPEG network 
packet (so there is only one packet to send), the MJPEG client (VLC) 
displays it correctly. However, when the image is large enough to span 
several packets, the output is distorted (see the link):

http://imgur.com/a/ApaBy

In this particular case I was trying to stream a JPEG with approximately 
40 kB of payload. If I understand correctly, it looks as if only the 
first network packet carrying the JPEG frame is being taken into account 
and the rest is discarded, which would explain why the image is distorted 
this way.
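
(For reference, the DeviceSource.cpp model code that ships with live555 
guards each delivery against fMaxSize and reports any overflow via 
fNumTruncatedBytes; my doGetNextFrame() above skips that check and always 
copies the whole frame, in case that is relevant. That pattern looks 
roughly like this:)

--------------------8<--------------------
// Delivery pattern from live555's DeviceSource.cpp model code:
// never copy more than fMaxSize bytes, and report the rest as truncated.
unsigned newFrameSize = SPACE_JPG_len - offset;
if (newFrameSize > fMaxSize) {
  fFrameSize = fMaxSize;
  fNumTruncatedBytes = newFrameSize - fMaxSize;
} else {
  fFrameSize = newFrameSize;
}
memcpy(fTo, SPACE_JPG + offset, fFrameSize);
--------------------8<--------------------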

The question is: what am I doing wrong? Analyzing the network traffic 
with Wireshark clearly shows that the Fragmentation Offset of the JPEG 
frames is being incremented correctly.

Thank you in advance for your suggestions.

Best Regards,
Czesław Makarski


