[Live-devel] Re-streaming RTP as RAW-UDP multicast transported as MPEGTS
Shyam Kaundinya
shyam.kaundinya at digitalforcetech.com
Tue Aug 28 07:28:14 PDT 2018
Re#1. Yes. I use the proxy to support additional clients, which are regular RTSP players.
Re#2.
a> In trying to implement the FAQ recommendation of calling fmtp_spropvps()/fmtp_spropsps()/fmtp_sproppps() and passing the resulting strings to parseSPropParameterSets(), I followed the code in H265VideoRTPSink::auxSDPLine() and createNew(), removing the parts that look for a fragmenter. When building the "a=fmtp:%d" line, the sample code uses rtpPayloadType(). It is not clear to me where to get this value, since my subsession's source is a FramedSource*. The RTP header format suggests this is not a fixed value but case-dependent, which I take to mean it needs to be extracted from the incoming stream. Any help or sample code would be much appreciated.
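My current (untested) guess is that, on the client side, this value might simply be available from the subsession itself, since it is parsed out of the stream's SDP description, i.e.:

  // Untested guess on my part: take the payload type from the subsession
  // rather than from an RTPSink (which I don't have in this setup):
  unsigned char payloadType = scs.subsession->rtpPayloadFormat();

Is that the right value to use here?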
b> Do I need all the code in the auxSDPLine() function below? It does appear to build a proper "a=fmtp:" string. But is sending out just the concatenated, base64-encoded vps+sps+pps string sufficient (dropping all the profile/tier material)? That is not clear to me from the FAQ article.
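In other words, would something like this (my own guess, untested) be enough in place of the full function below?

  // The minimal alternative I have in mind: just the three base64 strings,
  // comma-separated, with none of the profile/tier/level parameters:
  char* sprop_vps = base64Encode((char*)vps, vpsSize);
  char* sprop_sps = base64Encode((char*)sps, spsSize);
  char* sprop_pps = base64Encode((char*)pps, ppsSize);
  unsigned len = strlen(sprop_vps) + strlen(sprop_sps) + strlen(sprop_pps)
    + 3; // 2 commas + NUL
  char* minimalPropStr = new char[len];
  sprintf(minimalPropStr, "%s,%s,%s", sprop_vps, sprop_sps, sprop_pps);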
void preparePropSets(MediaSubsession& scs_subsession)
{
  // Assumes these are file-scope (or member) variables, filled in here and
  // used later by auxSDPLine():
  //   u_int8_t *vps, *sps, *pps; unsigned vpsSize, spsSize, ppsSize;
  char const* sPropVPSStr = scs_subsession.fmtp_spropvps();
  char const* sPropSPSStr = scs_subsession.fmtp_spropsps();
  char const* sPropPPSStr = scs_subsession.fmtp_sproppps();

  // Parse each 'sProp' string, extracting and then classifying the NAL unit(s) from each one.
  // We're 'liberal in what we accept'; it's OK if the strings don't contain the NAL unit type
  // implied by their names (or if one or more of the strings encode multiple NAL units).
  SPropRecord* sPropRecords[3];
  unsigned numSPropRecords[3];
  sPropRecords[0] = parseSPropParameterSets(sPropVPSStr, numSPropRecords[0]);
  sPropRecords[1] = parseSPropParameterSets(sPropSPSStr, numSPropRecords[1]);
  sPropRecords[2] = parseSPropParameterSets(sPropPPSStr, numSPropRecords[2]);

  for (unsigned j = 0; j < 3; ++j) {
    SPropRecord* records = sPropRecords[j];
    unsigned numRecords = numSPropRecords[j];
    for (unsigned i = 0; i < numRecords; ++i) {
      if (records[i].sPropLength == 0) continue; // bad data
      u_int8_t nal_unit_type = ((records[i].sPropBytes[0])&0x7E)>>1;
      if (nal_unit_type == 32/*VPS*/) {
        vps = records[i].sPropBytes;
        vpsSize = records[i].sPropLength;
      } else if (nal_unit_type == 33/*SPS*/) {
        sps = records[i].sPropBytes;
        spsSize = records[i].sPropLength;
      } else if (nal_unit_type == 34/*PPS*/) {
        pps = records[i].sPropBytes;
        ppsSize = records[i].sPropLength;
      }
    }
  }
  // Note: vps/sps/pps point into the SPropRecord arrays, so the records are
  // deliberately not deleted here.
}
char const* auxSDPLine(FramedSource* framerSource)
{
  // Generate a new "a=fmtp:" line each time, using the VPS, SPS and PPS
  // saved by preparePropSets(). (The framerSource parameter is currently
  // unused, since the fragmenter-related code was removed.)
  u_int8_t* vpsWEB = new u_int8_t[vpsSize]; // "WEB" means "Without Emulation Bytes"
  unsigned vpsWEBSize = removeH264or5EmulationBytes(vpsWEB, vpsSize, vps, vpsSize);
  if (vpsWEBSize < 6/*'profile_tier_level' offset*/ + 12/*num 'profile_tier_level' bytes*/) {
    // Bad VPS size => assume our source isn't ready
    delete[] vpsWEB;
    return NULL;
  }
  u_int8_t const* profileTierLevelHeaderBytes = &vpsWEB[6];
  unsigned profileSpace = profileTierLevelHeaderBytes[0]>>6; // general_profile_space
  unsigned profileId = profileTierLevelHeaderBytes[0]&0x1F; // general_profile_idc
  unsigned tierFlag = (profileTierLevelHeaderBytes[0]>>5)&0x1; // general_tier_flag
  unsigned levelId = profileTierLevelHeaderBytes[11]; // general_level_idc
  u_int8_t const* interop_constraints = &profileTierLevelHeaderBytes[5];
  char interopConstraintsStr[100];
  sprintf(interopConstraintsStr, "%02X%02X%02X%02X%02X%02X",
          interop_constraints[0], interop_constraints[1], interop_constraints[2],
          interop_constraints[3], interop_constraints[4], interop_constraints[5]);
  delete[] vpsWEB;

  char* sprop_vps = base64Encode((char*)vps, vpsSize);
  char* sprop_sps = base64Encode((char*)sps, spsSize);
  char* sprop_pps = base64Encode((char*)pps, ppsSize);

  char const* fmtpFmt =
    "a=fmtp:%d profile-space=%u"
    ";profile-id=%u"
    ";tier-flag=%u"
    ";level-id=%u"
    ";interop-constraints=%s"
    ";sprop-vps=%s"
    ";sprop-sps=%s"
    ";sprop-pps=%s\r\n";
  unsigned fmtpFmtSize = strlen(fmtpFmt)
    + 3 /* max num chars: rtpPayloadType */ + 20 /* max num chars: profile_space */
    + 20 /* max num chars: profile_id */
    + 20 /* max num chars: tier_flag */
    + 20 /* max num chars: level_id */
    + strlen(interopConstraintsStr)
    + strlen(sprop_vps)
    + strlen(sprop_sps)
    + strlen(sprop_pps);
  char* fmtp = new char[fmtpFmtSize];
  sprintf(fmtp, fmtpFmt,
          rtpPayloadType(), // <- the value I asked about in (a) above
          profileSpace,
          profileId,
          tierFlag,
          levelId,
          interopConstraintsStr,
          sprop_vps,
          sprop_sps,
          sprop_pps);
  delete[] sprop_vps;
  delete[] sprop_sps;
  delete[] sprop_pps;
  return fmtp;
}
======
c> Do I need to send out the prop-sets (VPS+SPS+PPS) before every incoming frame? Since I am sinking to UDP multicast, there is no concept of "a client establishing a connection", so I won't be able to tell when a client starts reading. It seems to me I need to send them either periodically or before every frame. If periodic, what is a good interval at which to keep resending them?
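If periodic is the way to go, I imagine a self-rescheduling task along these lines (untested; the 2-second interval is just a placeholder I made up):

  void resendPropSets(void* clientData) {
    UsageEnvironment* env = (UsageEnvironment*)clientData;

    // ... inject the concatenated VPS+SPS+PPS into the outgoing stream here ...

    // Re-arm ourselves:
    env->taskScheduler().scheduleDelayedTask(2000000/*usec*/,
                                             (TaskFunc*)resendPropSets, clientData);
  }

  // Kicked off once, after streaming starts:
  // env->taskScheduler().scheduleDelayedTask(0, (TaskFunc*)resendPropSets, env);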
d> In order to sink the prop-sets, I am using memmove() to copy them into the fTo variable of FramedSource. But fTo is protected, so I am guessing I need either a public accessor function, used as follows:
FramedSource.hh:
[...]
public:
unsigned char* getfTo() { return fTo;}
[...]
My testRTSPClient:
[...]
videoESRawUDP = (FramedSource*)scs.subsession->readSource();
preparePropSets(*scs.subsession);
char const* fmtp = auxSDPLine(videoESRawUDP);
unsigned char* to = videoESRawUDP->getfTo();
memmove(to, fmtp, strlen(fmtp)); // length of the string actually being copied
// After delivering the data, inform the reader that it is now available:
FramedSource::afterGetting(videoESRawUDP);
[...]
e> As an alternative to <d>, I am guessing I could subclass BasicUDPSource, adding a public member function to expose its fTo field; then sink the FramedSource from the subsession into it, and sink the subclassed object into the BasicUDPSink for multicasting. This seems to be the cleaner approach. Is it a valid one? A rough sketch of what I have in mind follows.
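Here is an untested sketch; I've written it as a FramedFilter subclass sitting between the subsession's source and the BasicUDPSink, rather than exposing fTo directly. The class name and all details are my own guesses:

  #include "FramedFilter.hh"
  #include "GroupsockHelper.hh" // for gettimeofday()
  #include <string.h>

  class PropSetInjector: public FramedFilter {
  public:
    // 'propSets' = concatenated VPS+SPS+PPS (presumably each prefixed with a
    // 0x00000001 start code, per the FAQ; not shown here):
    PropSetInjector(UsageEnvironment& env, FramedSource* inputSource,
                    u_int8_t const* propSets, unsigned propSetsSize)
      : FramedFilter(env, inputSource),
        fPropSets(propSets), fPropSetsSize(propSetsSize), fPropSetsSent(False) {}

  private:
    virtual void doGetNextFrame() {
      if (!fPropSetsSent) {
        // First delivery: hand the parameter sets to the sink:
        fFrameSize = fPropSetsSize > fMaxSize ? fMaxSize : fPropSetsSize;
        fNumTruncatedBytes = fPropSetsSize - fFrameSize;
        memmove(fTo, fPropSets, fFrameSize);
        gettimeofday(&fPresentationTime, NULL);
        fDurationInMicroseconds = 0;
        fPropSetsSent = True;
        afterGetting(this);
      } else {
        // Thereafter, pass frames through from the subsession's source:
        fInputSource->getNextFrame(fTo, fMaxSize,
                                   afterGettingFrame, this,
                                   handleClosure, this);
      }
    }

    static void afterGettingFrame(void* clientData, unsigned frameSize,
                                  unsigned numTruncatedBytes,
                                  struct timeval presentationTime,
                                  unsigned durationInMicroseconds) {
      PropSetInjector* injector = (PropSetInjector*)clientData;
      injector->fFrameSize = frameSize;
      injector->fNumTruncatedBytes = numTruncatedBytes;
      injector->fPresentationTime = presentationTime;
      injector->fDurationInMicroseconds = durationInMicroseconds;
      afterGetting(injector);
    }

    u_int8_t const* fPropSets;
    unsigned fPropSetsSize;
    Boolean fPropSetsSent;
  };

Usage would then be (again my guess; propBytes/propBytesSize/udpSink/afterPlaying are placeholders):

  FramedSource* withProps
    = new PropSetInjector(env, scs.subsession->readSource(), propBytes, propBytesSize);
  udpSink->startPlaying(*withProps, afterPlaying, NULL);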
Re#3:
Just to clarify: the customer requirements don't explicitly specify streaming over MPEG-TS, only that the stream be "playable" as RAW-UDP (not RTP). I was under the impression that I would need a container around the RAW-UDP data for that to be possible, and hence chose MPEG-TS. Sorry to have mischaracterized this in my earlier post. Based on your responses so far, it appears that if I send out the prop-sets properly, I may not need the MPEG-TS wrapper for a VLC client to be able to play the stream. Is my understanding correct?
Re: licensing terms. Yes: any changes to the LIVE555 code in the deployed end product will be kept in subclasses. For example, I subclass MediaSession to create a new object that requests RAW-UDP from an RTSP server via the "Transport:" parameter modifications recommended by your FAQ.
=====
Date: Sun, 26 Aug 2018 18:19:41 -0700
From: Ross Finlayson <finlayson at live555.com>
To: LIVE555 Streaming Media - development & use
<live-devel at ns.live555.com>
Subject: Re: [Live-devel] Re-streaming RTP as RAW-UDP multicast
transported as MPEGTS
Message-ID: <2B4F8F35-6927-4D8F-9148-3A486F9E1846 at live555.com>
Content-Type: text/plain; charset=utf-8
> Questions:
> 1. Am I pursuing the right strategy to accomplish my final objective - namely, playing an MPEG-TS stream over multicast UDP, the video source being the proxy server?
Perhaps. An alternative approach, of course, would be for your RTSP client application to read directly from the source video stream (i.e., without using a proxy server at all). But presumably you have some reason for wanting to use a proxy server (e.g., to support additional (regular) RTSP video player clients as well?).
> 2. If yes, what is the best way to verify that the RAW-UDP data I receive in my RTSP client are indeed H.265 frames?
If the source stream is, indeed, H.265, then the data that you receive in your RTSP client *will* be H.265 NAL units.
However, for your receiving video player (e.g., VLC) to be able to understand/play the stream, you probably need to prepend the stream with three special H.265 NAL units: The SPS, PPS, and VPS NAL units. See the last two paragraphs of this FAQ entry:
http://live555.com/liveMedia/faq.html#testRTSPClient-how-to-decode-data
> 3. Also, what is the best way to verify that the MPEGTS framing is being sent to the multicast group?
I suggest that - before streaming the H.265/Transport Stream data over multicast - you first write it to a file (i.e., using "FileSink" instead of "BasicUDPSink"). Then you can try playing the file (locally) using VLC. If (and only if) that works OK, you can then try streaming it.
And finally, a reminder (to everyone) that if you are using the "LIVE555 Streaming Media" software in a product, you are free to do so, as long as you comply with the conditions of the GNU LGPL v3 license; see:
http://live555.com/liveMedia/faq.html#copyright-and-license
Ross Finlayson
Live Networks, Inc.
http://www.live555.com/