[Live-devel] Am I doing this right?

Ross Finlayson finlayson at live555.com
Thu Apr 6 13:01:13 PDT 2017


> FramedSource* OCVFileServerMediaSubsession::createNewStreamSource(unsigned /*clientSessionId*/, unsigned& estBitrate) {
>     estBitrate = 90000; // kbps, estimate
> 
>     // Create the video source:
>     {
>         std::ifstream in(fFileName, std::ifstream::ate | std::ifstream::binary);
>         fFileSize = in.tellg();
>     }
> 
>     OCVFileSource* fileSource = OCVFileSource::createNew(envir(), fFileName);
>     if (fileSource == NULL) return NULL;
> 
>     // Create a framer for the Video Elementary Stream:
>     return H264VideoStreamFramer::createNew(envir(), fileSource);
> }

This is correct, *provided that* your “OCVFileSource” delivers a stream of H.264 NAL units, each prepended with a 4-byte ‘start code’ (0x00 0x00 0x00 0x01).  (That’s because the downstream ‘framer’ object - “H264VideoStreamFramer” - expects a stream of NAL units in this format.)
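For example, prepending the start code to each NAL unit before delivery might look like this (a sketch only; "appendNALUnit" and its parameters are illustrative names, not part of live555):

```cpp
#include <cstring>

// Copy one NAL unit into an output buffer, prepending the 4-byte
// Annex B 'start code' (0x00 0x00 0x00 0x01) that
// "H264VideoStreamFramer" expects.  Returns the number of bytes
// written, or 0 if the NAL unit (plus start code) doesn't fit.
static unsigned appendNALUnit(unsigned char* out, unsigned outMaxSize,
                              unsigned char const* nal, unsigned nalSize) {
    static unsigned char const startCode[4] = { 0x00, 0x00, 0x00, 0x01 };
    if (4 + nalSize > outMaxSize) return 0; // caller must handle this case
    memcpy(out, startCode, sizeof startCode);
    memcpy(out + 4, nal, nalSize);
    return 4 + nalSize;
}
```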


> Then within OCVFileSource::doGetNextFrame() I use our own special file reader to get a frame from the OCV file and I use lib x264 to encode it into a group of NAL units (much like I do with our live camera solution).

Again, each NAL unit in this ‘group of NAL units’ needs to be prepended with a (0x00 0x00 0x00 0x01) ‘start code’.

> I then set "fTo" with the encoded data

No, you don’t ‘set’ “fTo” to point to the data; you *copy* the data to the address pointed to by “fTo”.  You should also check the size of the data against “fMaxSize” (and then set “fFrameSize” and “fNumTruncatedBytes” as appropriate).

I.e., you do something like this:
	if (myDataSize > fMaxSize) {
		fFrameSize = fMaxSize;
		fNumTruncatedBytes = myDataSize - fMaxSize;
	} else {
		fFrameSize = myDataSize;
	}
	memmove(fTo, myDataPointer, fFrameSize);

Note that, in your case, you don’t need to set “fPresentationTime” or “fDurationInMicroseconds”; those will be computed by the downstream “H264VideoStreamFramer” object instead.
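The truncation logic above can be sketched as a standalone function (the "f…" names match the "FramedSource" member variables; everything else here is illustrative):

```cpp
#include <cstring>
#include <utility>

// Copy up to fMaxSize bytes of encoded data to fTo, and return the
// (fFrameSize, fNumTruncatedBytes) pair that doGetNextFrame() should set.
static std::pair<unsigned, unsigned>
deliverData(unsigned char* fTo, unsigned fMaxSize,
            unsigned char const* myDataPointer, unsigned myDataSize) {
    unsigned fFrameSize;
    unsigned fNumTruncatedBytes = 0;
    if (myDataSize > fMaxSize) {
        fFrameSize = fMaxSize;                      // deliver what fits...
        fNumTruncatedBytes = myDataSize - fMaxSize; // ...and report the rest
    } else {
        fFrameSize = myDataSize;
    }
    memmove(fTo, myDataPointer, fFrameSize);
    return { fFrameSize, fNumTruncatedBytes };
}
```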


> and call 
> 
>  nextTask() = envir().taskScheduler().scheduleDelayedTask(0,
>         (TaskFunc*)FramedSource::afterGetting, this);
> 
> to reschedule another call to doGetNextFrame(). 

This will work.  However, in your case you could replace this with:
	FramedSource::afterGetting(this);
which is more efficient.  (You can do this because you’re streaming to a network, rather than to a file.  The downstream ‘RTPSink’ object will - after transmitting an RTP packet - return to the event loop, so you won’t get infinite recursion.)


Ross Finlayson
Live Networks, Inc.
http://www.live555.com/



