[Live-devel] MPEGTS FramedSource Pt2
Anthony Clark
aclark at bbn.com
Tue Jun 9 13:05:41 PDT 2015
Ross,
I've been seriously poking at my (mostly) working MPEGTS FramedSource
and OnDemandServerMediaSubsession derivatives for my MPEGTS RTSP server.
I posted here
(http://lists.live555.com/pipermail/live-devel/2015-June/019441.html) a
few days ago from my gmail address (sorry about that). I quickly
realized I was confusing a "frame" in the MPEGTS sense with a "video
frame"... once I fed `fTo` packetized MPEGTS data, the server worked as
expected - that is, Wireshark captures all RTP packets without ANY LOSS
(even over 802.11).
You mentioned previously that fDurationInMicroseconds is important for
the client to request data at the correct rate. I haven't figured out
exactly whether this should be relative to a video frame or to an
MPEGTS "frame" (i.e. some integral number of 188-byte MPEGTS packets). I
put some print statements in MPEG2TransportStreamFramer so that openRTSP
prints out `fDurationInMicroseconds`. This yields a fairly wide range,
from 1500 to 5000 us, with 3200 being the average. I've made
`fDurationInMicroseconds` a constant 3200 for each 1316-byte (or smaller)
MPEGTS payload in my FramedSource, and I observe that the client
(openRTSP) falls behind at an almost-linear rate. I've also tried 1500,
and openRTSP drops RTP packets. I think the answer comes down to knowing
what fDurationInMicroseconds is relative to.
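For reference, here's a rough sketch of how I could derive the duration
from the size of each delivered chunk instead of hardcoding it, assuming
the overall TS mux bitrate is known (kMuxBitrateBps below is a made-up
constant for the sketch, not anything live555 provides):

// Rough sketch (not live555 API): derive the duration of each delivered
// chunk from its size, assuming the overall TS mux bitrate is known.
// kMuxBitrateBps is an assumed constant for illustration only.
static const double kMuxBitrateBps = 3000000.0; // e.g. a ~3 Mbit/s mux

unsigned durationForChunk(unsigned chunkSizeBytes) {
    // microseconds = bits / (bits per second) * 1e6
    return (unsigned)((chunkSizeBytes * 8.0 / kMuxBitrateBps) * 1000000.0);
}

For a full 1316-byte chunk at 3 Mbit/s this gives ~3509 us, which is at
least in the same ballpark as the 3200 us average I measured from
MPEG2TransportStreamFramer.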
My FramedSource does the following (a rough sketch follows the list):
1) mux a single H.264 NAL unit into some number of 1316-byte MPEGTS
chunks - the muxer emits 1316 bytes per chunk until the last one, which
is always a multiple of 188.
2) call gettimeofday() to set fPresentationTime for each 1316-byte (or
smaller) chunk.
3) hardcode fDurationInMicroseconds to one of the constants mentioned
above.
4) copy a single 1316-byte (or smaller) chunk to `fTo`.
5) call FramedSource::afterGetting(this).
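To be concrete, here's a rough sketch of the doGetNextFrame() behind
steps 1-5, assuming a hypothetical myMuxNextChunk() helper that wraps my
muxer (error handling and the wait-for-more-input path are omitted):

#include "FramedSource.hh"
#include <sys/time.h>
#include <string.h>

// Hypothetical helper: fills 'buf' with the next run of muxed 188-byte TS
// packets (up to 1316 bytes) and returns how many bytes were written.
extern unsigned myMuxNextChunk(unsigned char* buf, unsigned maxSize);

class MyTsSource : public FramedSource {
public:
    MyTsSource(UsageEnvironment& env) : FramedSource(env) {}

protected:
    virtual void doGetNextFrame() {
        // 1) Pull the next chunk of muxed TS packets (<= 1316 bytes, N*188).
        unsigned chunkSize = myMuxNextChunk(fBuffer, sizeof fBuffer);

        // 2) Stamp the chunk with the wall-clock time.
        gettimeofday(&fPresentationTime, NULL);

        // 3) Duration of this chunk; the hardcoded constant I'm testing.
        fDurationInMicroseconds = 3200;

        // 4) Copy into the downstream buffer, truncating if it doesn't fit.
        fFrameSize = chunkSize;
        if (fFrameSize > fMaxSize) {
            fNumTruncatedBytes = fFrameSize - fMaxSize;
            fFrameSize = fMaxSize;
        } else {
            fNumTruncatedBytes = 0;
        }
        memcpy(fTo, fBuffer, fFrameSize);

        // 5) Tell the downstream object that this delivery is complete.
        FramedSource::afterGetting(this);
    }

private:
    unsigned char fBuffer[1316];
};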
Any advice is much appreciated!
-ac