<html><head><meta http-equiv="Content-Type" content="text/html; charset=GB2312"></head><body style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space;"><div><blockquote type="cite"><div style="font-size: 14px; font-style: normal; font-variant: normal; font-weight: normal; letter-spacing: normal; orphans: auto; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; widows: auto; word-spacing: 0px; -webkit-text-stroke-width: 0px;"><div><font face="宋体"><span style="line-height: 1.5;">And I use ByteStreamFileSource.cpp and </span></font><span style="font-family: 宋体; line-height: 1.5; background-color: rgba(0, 0, 0, 0); font-size: 10.5pt;">ADTSAudioFileSource.cpp to get the frame data.</span></div><div style="font-family: 宋体; line-height: 1.5;"><span style="background-color: rgba(0, 0, 0, 0); font-size: 10.5pt; line-height: 1.5;"><br></span></div><div style="font-family: 宋体; line-height: 1.5;"><span style="background-color: rgba(0, 0, 0, 0); font-size: 10.5pt; line-height: 1.5;">For h264/aac sync, I use </span><span style="background-color: rgba(0, 0, 0, 0); font-size: 10.5pt; line-height: 1.5;">testProgs/testOnDemandRTSPServer.cpp to do:</span></div><div style="font-family: 宋体; line-height: 1.5;"><span style="background-color: rgba(0, 0, 0, 0); font-size: 10.5pt; line-height: 1.5;"><br></span></div><div style="font-family: 宋体; line-height: 1.5;"><span style="background-color: rgba(0, 0, 0, 0); font-size: 10.5pt; line-height: 1.5;">ServerMediaSession* sms </span></div><div style="font-family: 宋体; line-height: 1.5;"><span style="background-color: rgba(0, 0, 0, 0);">= ServerMediaSession::createNew(*env, streamName, streamName,<span class="Apple-converted-space"> </span><br>descriptionString);</span></div><div style="font-family: 宋体; line-height: 1.5;"><span style="background-color: rgba(0, 0, 0, 0); font-size: 10.5pt; line-height: 1.5;">sms->addSubsession(H264VideoFileServerMediaSubsession </span></div><div 
style="font-family: 宋体; line-height: 1.5;"><span style="background-color: rgba(0, 0, 0, 0);">::createNew(*env, inputFileName, reuseFirstSource));</span></div><div style="font-family: 宋体; line-height: 1.5;"><span style="background-color: rgba(0, 0, 0, 0); font-size: 10.5pt; line-height: 1.5;">sms->addSubsession(ADTSAudioFileServerMediaSubsession </span></div><div style="font-family: 宋体; line-height: 1.5;"><span style="background-color: rgba(0, 0, 0, 0);">::createNew(*env, inputFileName3, reuseFirstSource));</span></div></div></blockquote><div><br></div>Using a byte stream as input works well when you are streaming just a single medium (audio or video). However, if you are streaming both audio and video, and want them properly synchronized, then you *cannot* use byte streams as input (because, as you discovered, you don't get precise presentation times for each frame).</div><div><br></div><div>Instead - if you are streaming both audio and video - then each input source must deliver *discrete* frames (i.e., one frame at a time), with each frame being given a presentation time ("fPresentationTime") when it is encoded.</div><div><br></div><div>Specifically: You will need to define new subclass(es) of "FramedSource" for your audio and video inputs. You will also need to define new subclasses of "OnDemandServerMediaSubsession" for your audio and video streams. In particular:</div><div>- For audio, your subclass will redefine the "createNewStreamSource()" virtual function to create an instance of your new audio source class (that delivers one AAC frame at a time).</div><div>- For video, your subclass will redefine the "createNewStreamSource()" virtual function to create an instance of your new video source class (that delivers one H.264 NAL unit at a time - with each H.264 NAL unit *not* having an initial 0x00 0x00 0x00 0x01 'start code'). It should then feed this into an "H264VideoStreamDiscreteFramer" (*not* an "H264VideoStreamFramer"). 
Your implementation of the "createNewRTPSink()" virtual function may be the same as in "H264VideoFileServerMediaSubsession", but you may prefer instead to use one of the alternative forms of "H264VideoRTPSink::createNew()" that takes SPS and PPS NAL units as parameters. (If you do that, then you won't need to insert SPS and PPS NAL units into your input stream.)</div><br><div apple-content-edited="true">
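The steps above, for the video side, might be sketched roughly as follows. This is an illustrative skeleton only, not working code: the names "MyH264Source", "MyH264Subsession", "readNALUnit()", "nalPresentationTime()", and the fSPS/fPPS members are hypothetical placeholders for your own encoder-specific code; only the live555 classes and virtual functions are real.

```cpp
#include "liveMedia.hh"

// A hypothetical discrete-frame source: delivers one H.264 NAL unit at a time.
class MyH264Source : public FramedSource {
public:
  static MyH264Source* createNew(UsageEnvironment& env) {
    return new MyH264Source(env);
  }
protected:
  MyH264Source(UsageEnvironment& env) : FramedSource(env) {}

  virtual void doGetNextFrame() {
    // Copy ONE discrete NAL unit - without a 0x00 0x00 0x00 0x01 start
    // code - into "fTo" (at most "fMaxSize" bytes):
    fFrameSize = readNALUnit(fTo, fMaxSize);       // hypothetical helper
    // Take the presentation time from the encoder, so that the audio and
    // video subsessions share the same clock:
    fPresentationTime = nalPresentationTime();     // hypothetical helper
    FramedSource::afterGetting(this); // deliver the frame downstream
  }
private:
  unsigned readNALUnit(unsigned char* to, unsigned maxSize); // not shown
  struct timeval nalPresentationTime();                      // not shown
};

// A hypothetical subsession that wires the source and sink together:
class MyH264Subsession : public OnDemandServerMediaSubsession {
public:
  static MyH264Subsession* createNew(UsageEnvironment& env,
                                     Boolean reuseFirstSource) {
    return new MyH264Subsession(env, reuseFirstSource);
  }
protected:
  MyH264Subsession(UsageEnvironment& env, Boolean reuseFirstSource)
    : OnDemandServerMediaSubsession(env, reuseFirstSource) {}

  virtual FramedSource* createNewStreamSource(unsigned /*clientSessionId*/,
                                              unsigned& estBitrate) {
    estBitrate = 500; // kbps; an estimate
    // Feed the discrete-frame source into a *discrete* framer:
    return H264VideoStreamDiscreteFramer::createNew(
        envir(), MyH264Source::createNew(envir()));
  }

  virtual RTPSink* createNewRTPSink(Groupsock* rtpGroupsock,
                                    unsigned char rtpPayloadTypeIfDynamic,
                                    FramedSource* /*inputSource*/) {
    // This form of "H264VideoRTPSink::createNew()" takes the SPS and PPS
    // NAL units directly, so they need not appear in the input stream:
    return H264VideoRTPSink::createNew(envir(), rtpGroupsock,
                                       rtpPayloadTypeIfDynamic,
                                       fSPS, fSPSSize, fPPS, fPPSSize);
  }
private:
  u_int8_t* fSPS; unsigned fSPSSize; // obtained from your encoder (not shown)
  u_int8_t* fPPS; unsigned fPPSSize;
};
```

One caveat on such a source: "doGetNextFrame()" must not block. If no NAL unit is available yet when it is called, your implementation should instead arrange (e.g., via the event loop) for the frame to be delivered - and "afterGetting()" called - later, once the encoder produces one.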
<span class="Apple-style-span" style="border-collapse: separate; color: rgb(0, 0, 0); font-family: Helvetica; font-style: normal; font-variant: normal; font-weight: normal; letter-spacing: normal; line-height: normal; orphans: 2; text-align: -webkit-auto; text-indent: 0px; text-transform: none; white-space: normal; widows: 2; word-spacing: 0px; -webkit-border-horizontal-spacing: 0px; -webkit-border-vertical-spacing: 0px; -webkit-text-decorations-in-effect: none; -webkit-text-size-adjust: auto; -webkit-text-stroke-width: 0px; "><span class="Apple-style-span" style="border-collapse: separate; color: rgb(0, 0, 0); font-family: Helvetica; font-style: normal; font-variant: normal; font-weight: normal; letter-spacing: normal; line-height: normal; orphans: 2; text-align: -webkit-auto; text-indent: 0px; text-transform: none; white-space: normal; widows: 2; word-spacing: 0px; -webkit-border-horizontal-spacing: 0px; -webkit-border-vertical-spacing: 0px; -webkit-text-decorations-in-effect: none; -webkit-text-size-adjust: auto; -webkit-text-stroke-width: 0px; ">Ross Finlayson<br>Live Networks, Inc.<br><a href="http://www.live555.com/">http://www.live555.com/</a></span></span>
</div>
<br></body></html>