[Live-devel] OPUS encoded stream via SimpleRTPSink
Ross Finlayson
finlayson at live555.com
Fri Feb 3 14:35:05 PST 2017
> I'm trying to stream an OPUS encoded audio signal, but I could not find
> out yet how to implement this.
>
> I'm using an H264 encoder and a class derived from FramedSource - that
> one works perfectly and I can see the stream in VLC. So I wrote my own
> OPUS encoder along the same lines with "frames" of about 20ms each, but
> how do I set up an appropriate RTPAudioSink?
>
> m_audioSink = MPEG1or2AudioRTPSink::createNew(*m_env, m_rtpGroupsockAud);
>
> works perfectly if I use an MP3FileSource as in the demos, but does not
> work here, so I tried to go with SimpleRTPSink - however, what
> parameters do I need to set such that the packets really "leave" and are
> correctly interpreted by a receiving application like VLC?
You’re right that “SimpleRTPSink” is the appropriate ‘sink’ class to use here. (You can do this because the RTP payload format for OPUS audio - defined in RFC 7587 - is relatively straightforward.)
Note that - from RFC 7587, section 4.2 - an RTP packet contains exactly one ‘OPUS packet’, which must represent no more than 120 ms of audio. (An ‘OPUS packet’ is defined in RFC 6716, section 3.) Therefore, your input source (to the “SimpleRTPSink”) must deliver exactly one ‘OPUS packet’ at a time - i.e., as a ‘frame’ in our software.
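For illustration only, an input source that delivers one ‘OPUS packet’ per frame might look roughly like the sketch below. (The “MyOpusEncoder” type and its “getNextOpusPacket()” function are placeholders for whatever your own encoder code provides; only the “FramedSource” interface is the real live555 one.)

    #include "FramedSource.hh"
    #include <sys/time.h>

    class OpusFrameSource: public FramedSource {
    public:
      static OpusFrameSource* createNew(UsageEnvironment& env, MyOpusEncoder& encoder) {
        return new OpusFrameSource(env, encoder);
      }

    protected:
      OpusFrameSource(UsageEnvironment& env, MyOpusEncoder& encoder)
        : FramedSource(env), fEncoder(encoder) {}

    private:
      virtual void doGetNextFrame() {
        // Copy exactly one OPUS packet (here, ~20ms of audio) into the sink's buffer.
        // "getNextOpusPacket()" is hypothetical: it writes at most "fMaxSize" bytes
        // into "fTo" and returns the packet's full size:
        unsigned packetSize = fEncoder.getNextOpusPacket(fTo, fMaxSize);
        if (packetSize > fMaxSize) {
          fFrameSize = fMaxSize;
          fNumTruncatedBytes = packetSize - fMaxSize;
        } else {
          fFrameSize = packetSize;
          fNumTruncatedBytes = 0;
        }

        // Set the presentation time and duration of this frame:
        gettimeofday(&fPresentationTime, NULL);
        fDurationInMicroseconds = 20000; // 20ms per OPUS packet in this example

        // Hand the frame to the downstream "SimpleRTPSink":
        FramedSource::afterGetting(this);
      }

      MyOpusEncoder& fEncoder;
    };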
Your call to “SimpleRTPSink::createNew()” (e.g., from your “createNewRTPSink()” function, if you’re doing this from an RTSP server) would be:
SimpleRTPSink::createNew(env, rtpGS, rtpPayloadTypeIfDynamic, 48000, "audio", "OPUS", 2, False);
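If you’re doing this from an RTSP server then - purely as a sketch, with “OpusServerMediaSubsession” being a hypothetical name for your “OnDemandServerMediaSubsession” subclass - that call would sit inside “createNewRTPSink()” like this:

    RTPSink* OpusServerMediaSubsession
    ::createNewRTPSink(Groupsock* rtpGroupsock,
                       unsigned char rtpPayloadTypeIfDynamic,
                       FramedSource* /*inputSource*/) {
      // Per RFC 7587, the RTP timestamp clock rate is always 48000 Hz, and the
      // channel count signaled in the SDP is always 2, regardless of how the
      // audio was actually encoded:
      return SimpleRTPSink::createNew(envir(), rtpGroupsock,
                                      rtpPayloadTypeIfDynamic,
                                      48000,   // RTP timestamp frequency
                                      "audio", // SDP media type
                                      "OPUS",  // RTP payload format name
                                      2,       // number of channels (in the SDP)
                                      False);  // don't pack multiple frames per RTP packet
    }

The final “False” parameter (“allowMultipleFramesPerPacket”) matters here: it ensures that each outgoing RTP packet carries just one ‘frame’ - i.e., one ‘OPUS packet’ - as the RFC requires.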
Ross Finlayson
Live Networks, Inc.
http://www.live555.com/