Hey all,

I know some parts of this have been covered in (many) other posts, and I've spent some time reading through them, so apologies if I've missed the crucial post, but I believe this one is unique.

I have a live camera source providing individual frames, which I'm encoding with the x264 library. I've set up an RTSP server using a custom "OnDemandServerMediaSubsession" and a custom FramedSource (using DeviceSource as a template). To give an idea of how it all chains together, I've included "createNewStreamSource" and "createNewRTPSink" in the footer; AFAIK they follow the guidance in the FAQ.
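
For context, the subsession is plugged into an otherwise standard live555 server loop - roughly like this (a sketch only; the FramedServerMediaSubsession::createNew() call is a placeholder for my real constructor, and the stream name and port are arbitrary):

#include <liveMedia.hh>
#include <BasicUsageEnvironment.hh>

int main()
{
    TaskScheduler* scheduler = BasicTaskScheduler::createNew();
    UsageEnvironment* env = BasicUsageEnvironment::createNew(*scheduler);

    RTSPServer* rtspServer = RTSPServer::createNew(*env, 8554, NULL);

    ServerMediaSession* sms = ServerMediaSession::createNew(*env, "camera", "camera", "live camera stream");
    sms->addSubsession(FramedServerMediaSubsession::createNew(*env)); // placeholder createNew()
    rtspServer->addServerMediaSession(sms);

    env->taskScheduler().doEventLoop(); // does not return
    return 0;
}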

I can watch this stream in VLC over the RTSP server, using either the discrete or the normal framer class, when the x264 library provides Annex-B NAL units (with the 00000001 start code) AND I do not try to separate the NALs into individual calls to "doGetNextFrame()" - each frame is delivered as a single concatenated block of data that may contain several NALs. However, there are problems with both the discrete and the standard framer:

- If I use the H264VideoStreamFramer class, the fMaxSize variable counts down until a frame is truncated, and the truncation is visible in VLC as a broken frame. This problem is similar to http://lists.live555.com/pipermail/live-devel/2010-July/012357.html, where the advice is to use a discrete framer instead.

- If I use the H264VideoStreamDiscreteFramer class, I get the warnings about a start code being present; looking at the code, this means that saveCopyOfSPS and saveCopyOfPPS are never called. It does play in VLC, but I'm concerned about the implications of never calling those functions. If I remove the start code (and just provide the remaining data block), VLC won't display anything, and its messages say "waiting for SPS/PPS"; this is true whether or not I split the NALs into individual "doGetNextFrame()" calls (see the sketch at the very end of this mail for the kind of delivery logic I mean), although in that case live555 seems happy and doesn't output any warnings.

- I've seen hints at writing your own framer class, but it's unclear why, and what I would need to achieve by doing so.

Thanks in advance, and I do appreciate all help,
James

--

FramedSource* FramedServerMediaSubsession::createNewStreamSource(unsigned clientSessionId, unsigned& estBitrate)
{
    estBitrate = 500;

    // Create the video source:
    DeviceParameters p;
    RTPFrameLoader* frameSource = RTPFrameLoader::createNew( envir(), p );

    // encoder outputs to RTPFrameLoader
    encoder = new H264Encoder ( frameSource, 640, 480, 3 );

    Camera *cam = CameraFactory::getInstance()->getCamera( CameraFactory::FAKE );

    // the encoder listens in for raw camera frames
    cam->registerFrameListener ( encoder );

    encoder->go();

    // Create a framer for the video elementary stream:
    return H264VideoStreamDiscreteFramer::createNew ( envir(), frameSource );
}

RTPSink* FramedServerMediaSubsession::createNewRTPSink ( Groupsock* rtpGroupsock, unsigned char rtpPayloadTypeIfDynamic, FramedSource* inputSource )
{
    return H264VideoRTPSink::createNew(envir(), rtpGroupsock, rtpPayloadTypeIfDynamic);
}
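
And, to make the second bullet concrete, this is roughly the per-NAL delivery I have in mind for the discrete framer (it also shows the fMaxSize/fNumTruncatedBytes handling from the first bullet). It's a sketch only, loosely based on the DeviceSource template, not my actual code; getNextNal() is a placeholder for however the next NAL unit (with its 00000001 start code already stripped) is pulled out of the encoder's output block:

void RTPFrameLoader::deliverFrame()
{
    if (!isCurrentlyAwaitingData()) return; // the framer isn't ready for data yet

    // Placeholder: fetch the next NAL unit from the encoder's output block,
    // with the 4-byte 00000001 start code already skipped:
    unsigned nalSize = 0;
    u_int8_t* nalData = getNextNal(nalSize);
    if (nalData == NULL) return; // nothing buffered yet

    // Deliver the NAL, truncating if it won't fit in the downstream buffer
    // (this is where fMaxSize comes in):
    if (nalSize > fMaxSize) {
        fFrameSize = fMaxSize;
        fNumTruncatedBytes = nalSize - fMaxSize;
    } else {
        fFrameSize = nalSize;
        fNumTruncatedBytes = 0;
    }
    gettimeofday(&fPresentationTime, NULL); // or the camera's capture time
    memmove(fTo, nalData, fFrameSize);

    // Tell the downstream framer that the data is now available:
    FramedSource::afterGetting(this);
}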