[Live-devel] my performance benchmark of livemedia library, not satisfactory
liu yang
yangliu.seu at gmail.com
Wed Mar 11 19:52:15 PDT 2009
Thanks for your replies, guys.
However, starting multiple processes doesn't suit my case, because I
use livemedia as an independent component of my application, which
runs as a single instance.
My application is concerned with performance, high capacity, and
real-time packet delivery. It seems livemedia was not designed for
such a purpose.
BTW, I plan to make some changes so that livemedia is thread-safe and
multi-threaded. Which parts of the code do you think I need to change?
As far as I can tell so far, the following needs to be modified:
1. The static member variable "tokenCounter" needs to be changed to an
instance member variable, so that each TaskScheduler instance (the main
entry point of the polling thread) is isolated from the others.
2. IMHO, the current delta-timing mechanism for triggering timers is not
very efficient, because every call to "synchronize" traverses the whole
queue. Why not keep it simple and maintain a sorted timeval-to-eventProc
map? The scheduler would just compare the current absolute time with the
expected timeval and, once it has expired, fire the proc (a rough sketch
follows below).
3. Is it possible to run delayed-task polling and triggering in a
separate thread? I mean a dedicated thread that maintains the timer queue
and fires the events (also sketched below). As we know, most of
livemedia's logic processing is timer-triggered (a "DelayTask" in
livemedia's jargon).
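
To make points 2 and 3 a bit more concrete, below is a rough C++ sketch
of what I have in mind, assuming POSIX (gettimeofday, pthreads). All the
names here (SortedTimerQueue, fireExpired, timerThreadMain, ...) are made
up by me for illustration; this is not livemedia's actual
DelayQueue/TaskScheduler code, and not a drop-in patch.

// Rough sketch for point 2: a queue keyed by absolute expiry time
// (std::multimap keeps it sorted). All names are made up for illustration.
#include <sys/time.h>
#include <map>
#include <utility>

typedef void EventProc(void* clientData);

struct TimevalLess {
  bool operator()(timeval const& a, timeval const& b) const {
    return a.tv_sec < b.tv_sec
        || (a.tv_sec == b.tv_sec && a.tv_usec < b.tv_usec);
  }
};

class SortedTimerQueue {
public:
  // Register "proc" to fire at the absolute time "when".
  void schedule(timeval const& when, EventProc* proc, void* clientData) {
    fQueue.insert(std::make_pair(when, std::make_pair(proc, clientData)));
  }

  // Fire every entry whose expiry time is <= now.  Because the map is
  // ordered by absolute time, we stop at the first unexpired entry
  // instead of traversing (and re-adjusting) the whole queue.
  void fireExpired() {
    timeval now;
    gettimeofday(&now, NULL);
    while (!fQueue.empty() && !TimevalLess()(now, fQueue.begin()->first)) {
      EventProc* proc = fQueue.begin()->second.first;
      void* data = fQueue.begin()->second.second;
      fQueue.erase(fQueue.begin());
      (*proc)(data);  // entry has expired; fire it
    }
  }

private:
  typedef std::multimap<timeval, std::pair<EventProc*, void*>, TimevalLess> Queue;
  Queue fQueue;
};

And an equally rough sketch of the dedicated timer thread from point 3.
I know this only scratches the surface: every structure that the fired
procs touch would also need locking (or message passing), which is the
hard part of making the library multi-threaded.

// Rough sketch for point 3: a dedicated thread that polls the timer
// queue and fires expired events, leaving the main thread free for I/O.
// Again, names are made up and synchronization is deliberately minimal.
#include <pthread.h>
#include <unistd.h>

static SortedTimerQueue gTimerQueue;
static pthread_mutex_t gTimerLock = PTHREAD_MUTEX_INITIALIZER;

static void* timerThreadMain(void*) {
  for (;;) {
    pthread_mutex_lock(&gTimerLock);
    // Note: a real version should release the lock before invoking each
    // proc, otherwise a proc that calls schedule() on the same queue
    // would deadlock on this non-recursive mutex.
    gTimerQueue.fireExpired();
    pthread_mutex_unlock(&gTimerLock);
    usleep(1000);  // 1 ms polling granularity, just for illustration
  }
  return NULL;
}

// Started once at application startup, e.g.:
//   pthread_t tid;
//   pthread_create(&tid, NULL, timerThreadMain, NULL);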
Your thoughts and comments are welcome. I think most of you are also
concerned about livemedia's performance. I hope we can make it better
together.
Thanks
Kandy
On Thu, Mar 12, 2009 at 8:12 AM, <live-devel-request at ns.live555.com> wrote:
> Send live-devel mailing list submissions to
> live-devel at lists.live555.com
>
> To subscribe or unsubscribe via the World Wide Web, visit
> http://lists.live555.com/mailman/listinfo/live-devel
> or, via email, send a message with subject or body 'help' to
> live-devel-request at lists.live555.com
>
> You can reach the person managing the list at
> live-devel-owner at lists.live555.com
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of live-devel digest..."
>
>
> Today's Topics:
>
> 1. What's the DirectedNetInterface used for? (dai jun)
> 2. Re: Live555 RTSP Client never sees RTCP BYE message from
> Live555 Server (Matt Schuckmann)
> 3. Re: setupDatagramSocket - SO_REUSEADDR problems (Guido Marelli)
> 4. Re: What's the DirectedNetInterface used for? (Ross Finlayson)
> 5. [PATCH] RTSPClient::recordMediaSession? (Martin Storsjö)
> 6. Re: [PATCH] RTSPClient::recordMediaSession? (Ross Finlayson)
> 7. Re: my performance benchmark of livemedia library, not
> satisfactory (Brad Bitterman)
> 8. Re: frame rate supported (Patrick White)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Wed, 11 Mar 2009 23:45:45 +0800
> From: dai jun <daijun88 at gmail.com>
> Subject: [Live-devel] What's the DirectedNetInterface used for?
> To: live-devel at ns.live555.com
> Message-ID:
> <c3cb57e80903110845v43149f49j2e1fdd786a76b116 at mail.gmail.com>
> Content-Type: text/plain; charset="iso-8859-1"
>
> I'm reading the groupSock source code these days. I found that
> DirectedNetInterface is a pure abstract class.
> In groupSock::outputToAllMemberExcept(...), its member functions are called,
> but I cannot find any implementation in the project.
> I wonder what this class is used for?
> I guess it could be extended and used for stream forwarding, or for
> application-layer multicast?
>
> Daly
>
> ------------------------------
>
> Message: 2
> Date: Wed, 11 Mar 2009 10:02:06 -0700
> From: Matt Schuckmann <matt at schuckmannacres.com>
> Subject: Re: [Live-devel] Live555 RTSP Client never sees RTCP BYE
> message from Live555 Server
> To: LIVE555 Streaming Media - development & use
> <live-devel at ns.live555.com>
> Message-ID: <49B7EE8E.2010805 at schuckmannacres.com>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>
> I understand that it's hard to test bugs on modified code. I'd submit my
> modifications to the project, but you've already told me that you won't
> accept some of them (I understand your reasons), and I'm not ready to
> submit the others. I'm only trying to help you out by reporting what I
> see; I haven't changed any of the code in the areas I'm referring to here.
>
> I'm pretty sure that this would happen with unmodified code: just see if
> your BYE handler is called in OpenRTSP after sending a teardown message
> while the source is still playing. Or, better yet, stick a network
> sniffer on a test setup and you'll see that the server never sends the
> BYE message after receiving a teardown from the client.
>
> Also note the call sequence when the server is tearing things down:
> the server deletes the clientSession, which causes reclaimStreamStates()
> to be called, which in turn calls OnDemandServerMediaSubsession::deleteStream().
> The first thing OnDemandServerMediaSubsession::deleteStream() does is
> call StreamState::endPlaying() on the streamState, with the destination
> addresses to stop playing to.
> endPlaying() removes the destinations from the groupSock associated with
> the RTCPInstance object.
> OnDemandServerMediaSubsession::deleteStream() then goes through and
> decrements the refcounts for all the streamStates, and deletes every
> streamState with a reference count of 0 (which, for a unicast session, is
> all of them). The destructor for the streamState objects calls
> StreamState::reclaim(), which calls Medium::close() on the RTCPInstance,
> which deletes it. The destructor for RTCPInstance calls
> SendBYE(), which on the surface appears to work but in fact doesn't do
> anything at all, because all the destinations in the groupsock have
> already been removed, so the BYE message never gets sent to the client.
> Make sense?
>
> Ross Finlayson wrote:
>> In general, it's hard to respond to alleged bug reports on modified
>> code. The best bug reports are those that apply to the original,
>> unmodified code, so we can (hopefully) reproduce the problem (if any)
>> ourselves.
>>
>>
>>> PS. You should also note that the BYE handler code in OpenRTSP causes
>>> all the streams to be deleted, and the RTCPInstance objects with them;
>>> the problem is that the RTCPInstance object is in the process of
>>> handling a packet.
>>
>> Remember that this is all single-threaded code. If an "RTCPInstance"
>> object is deleted, then it's not also 'in the process' of doing
>> anything else. The only way it could also be involved in 'handling a
>> packet' would be if this packet handling happened later, as a result
>> of an 'incoming packet' event in the event loop. But that should
>> never happen, because - as a result of deleting the "RTCPInstance"
>> object - "TaskScheduler::turnOffBackgroundReadHandling()" gets called.
>
> I'm very aware that it's single-threaded, and I've made sure to work
> within that context. The fact that it's a single-threaded library doesn't
> preclude one from calling delete from within one of that object's
> methods, or worse yet, while such a method is on the call stack, which is
> the case here.
>
> If you look at the BYE case in RTCPInstance::incomingReportHandler1(),
> you see that the byeHandler is called on or about line 522. (I've changed
> all the fprintf's to calls to envir() << to improve debugging in a
> non-command-line app, so my line numbers might not match yours; that's
> the only change I've made to this code.)
> If you follow the call sequence for OpenRTSP, you'll see that this
> becomes a call to subsessionByeHandler() in PlayCommon.cpp. If all the
> subsessions have been stopped, subsessionByeHandler() then calls
> sessionAfterPlaying() (also in PlayCommon.cpp). The
> sessionAfterPlaying() function then calls Shutdown(), which calls
> closeMediaSinks() and tearDownStreams(), and closes the session before
> calling exit(). I think it's closeMediaSinks() or tearDownStreams() that
> causes all the streams, and their respective RTCPInstances, to be
> deleted. Now, if you removed the call to exit(), you'd see that
> eventually the call stack unwinds all the way back up to
> RTCPInstance::incomingReportHandler1(), where, oops, the object
> associated with this call has been deleted, so if any of the code that
> runs after the call to the byeHandler tries to use any of that object's
> state, you'll be in trouble. That's what I mean by "the RTCPInstance is
> deleted while it's in the process of handling the BYE message". I
> generally consider it bad form to call delete on an object while one of
> its methods is on the call stack. In this particular case, I don't think
> that RTCPInstance::incomingReportHandler1() should be changed; it can't
> know that a client is going to want to delete it, and in fact it should
> assume that it won't. More likely, OpenRTSP should be changed to clean
> things up outside of the call sequence originating from its byeHandler.
> In my case, I just scheduled a delayed task with the taskScheduler to do
> the cleanup.
>
> I discovered this when:
> 1. I got the server to send BYE messages by adding a call to SendBYE as
> I described before.
> 2. I changed all the fprintf's in RTCPInstance to envir() << and my app
> would crash when RTCPInstance::incomingReportHandler1() would try to log
> something after the call to the byeHandler.
>
> I hope this all makes sense and it helps you and other users of this
> very useful library out.
>
> Matt S.
>
>
> ------------------------------
>
> Message: 3
> Date: Wed, 11 Mar 2009 15:33:01 -0200
> From: Guido Marelli <guido.marelli at intraway.com>
> Subject: Re: [Live-devel] setupDatagramSocket - SO_REUSEADDR problems
> To: LIVE555 Streaming Media - development & use
> <live-devel at ns.live555.com>
> Message-ID: <49B7F5CD.8060501 at intraway.com>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>
> Hi,
> I think that SO_REUSEPORT will keep introducing problems, because we
> need to force a port when creating the socket for the RTCP channel (RTP
> port + 1).
>
> I believe that the best approach is to forget about SO_REUSEPORT, so we
> can be sure that every socket requested from the OS will work just fine.
>
>
> Regards!
>
>
> Ross Finlayson wrote:
>>> It seems that the code in MediaSubsession::initiate will cause the
>>> effect I'm reporting when the OS offers the same odd port number for
>>> both the video and the audio stream.
>>
>> Yes, you're right. This bug got introduced in version 2008.12.20 when
>> I changed the port number selection code in response to another bug
>> that some people were seeing. (Before, the code was always letting
>> the OS choose the port number, and this was sometimes causing a loop
>> whereby the same (odd) port number would get chosen over and over again.)
>>
>> From what I can tell, the problem occurs only if we end up making the
>> code - rather than the OS - choose a port number. (So, SO_REUSEPORT
>> is not the problem, because even if this were not set, we'd end up
>> getting an error when we tried to create the socket with the same port
>> number the second time.)
>>
>> It seems that I need to change the code again so that it always lets
>> the OS choose the port number, but be smarter about doing so, so we
>> don't end up in an infinite loop. Stay tuned...
>>
>
> --
> Guido Marelli
> Intraway Corp.
>
> Oficina AR: +54 (11) 4393-2091
> Oficina CO: +57 (1) 750-4929
> Oficina US: +1 (516) 620-3890
> Fax: +54 (11) 5258-2631
> MSN: guido.marelli at intraway.com
>
> Visit our website at http://www.intraway.com
>
>
>
>
> ------------------------------
>
> Message: 4
> Date: Wed, 11 Mar 2009 08:57:50 -0700
> From: Ross Finlayson <finlayson at live555.com>
> Subject: Re: [Live-devel] What's the DirectedNetInterface used for?
> To: LIVE555 Streaming Media - development & use
> <live-devel at ns.live555.com>
> Message-ID: <f06240800c5dd8f652455@[66.80.62.44]>
> Content-Type: text/plain; charset="us-ascii" ; format="flowed"
>
>>I'm reading the groupSock source code these days.
>>I found DirectedNetInterface is a pure abstract class,
>>In groupSock::outputToAllMemberExcept(...), the member functions are
>>called, but I cannot find any implementation in the project.
>>I wonder what's this class used for?
>>I guess it can be extended and used for stream forwarding? or
>>application layer multicast?
>
> Yes, we use this for our implementation of UMTP
> <http://www.live555.com/umtp.txt>
>
> Otherwise, this code ends up not being used.
> --
>
> Ross Finlayson
> Live Networks, Inc.
> http://www.live555.com/
>
>
> ------------------------------
>
> Message: 5
> Date: Wed, 11 Mar 2009 21:36:14 +0200 (EET)
> From: Martin Storsjö <martin at martin.st>
> Subject: [Live-devel] [PATCH] RTSPClient::recordMediaSession?
> To: live-devel at ns.live555.com
> Message-ID: <Pine.LNX.4.64.0903112130290.14405 at localhost.localdomain>
> Content-Type: text/plain; charset="iso-8859-1"; Format="flowed"
>
> Hi,
>
> I noticed that the RTSPClient class lacks a method for sending RECORD
> requests for a whole media session; there's only a method for sending
> RECORD requests for individual subsessions. Is this an intentional
> omission, or has it just not been needed yet?
>
> I implemented such a method, see the attached patch. This seems to work
> fine for me.
>
> Regards,
> // Martin Storsjö
> -------------- next part --------------
> A non-text attachment was scrubbed...
> Name: live555-record-media-session.diff
> Type: text/x-diff
> Size: 2587 bytes
> Desc:
> URL: <http://lists.live555.com/pipermail/live-devel/attachments/20090311/5262b754/attachment-0001.bin>
>
> ------------------------------
>
> Message: 6
> Date: Wed, 11 Mar 2009 12:45:20 -0700
> From: Ross Finlayson <finlayson at live555.com>
> Subject: Re: [Live-devel] [PATCH] RTSPClient::recordMediaSession?
> To: LIVE555 Streaming Media - development & use
> <live-devel at ns.live555.com>
> Message-ID: <f06240801c5ddc507b31f@[66.80.62.44]>
> Content-Type: text/plain; charset="us-ascii" ; format="flowed"
>
>>I noticed that the RTSPClient class lacks a method for sending
>>RECORD requests for a whole media session; there's only a method for
>>sending RECORD requests for individual subsessions. Is this an
>>intentional omission, or has it just not been needed yet?
>
> The latter.
>
>
>>I implemented such a method, see the attached patch. This seems to
>>work fine for me.
>
> I will likely add this when I update the "RTSPClient" code,
> --
>
> Ross Finlayson
> Live Networks, Inc.
> http://www.live555.com/
>
>
> ------------------------------
>
> Message: 7
> Date: Wed, 11 Mar 2009 16:09:15 -0400
> From: Brad Bitterman <bitter at vtilt.com>
> Subject: Re: [Live-devel] my performance benchmark of livemedia
> library, not satisfactory
> To: LIVE555 Streaming Media - development & use
> <live-devel at ns.live555.com>
> Message-ID: <5A4BFC47-6A00-4FA4-8B6C-36DB7FE7E204 at vtilt.com>
> Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes
>
> I found that, under Linux, a single-threaded process such as one using
> live555 only runs on one core of a multi-core CPU. My suggestion is to
> run multiple processes if possible. This will let Linux distribute the
> processes to the different cores.
>
> - Brad
>
> On Mar 11, 2009, at 6:31 AM, Marco Amadori wrote:
>
>> On Wednesday 11 March 2009, 10:28:25, liu yang wrote:
>>
>>> I plan to develop an application which may support 500+ or even 1000+
>>> RTP sessions simultaneously. So could anybody tell me whether livemedia
>>> could support such a load?
>>> BTW, I did some tests based on the testWAV sample program. The result
>>> is not satisfactory, frankly speaking.
>>
>> On a bigger machine (dual Xeon) with 6 RAID-5 15K SAS disks, streaming
>> 4 Mbit/s MPEG-2 TS, I found that I could not stream more than 95 streams
>> without artifacts on screen.
>>
>> My bold analysis (on an early 2008 release of livemedia) was that the
>> problem wasn't I/O-bound but CPU-bound (95%+). The network (400 Mbps)
>> wasn't problematic either, since we had tried both a single gigabit
>> interface and a bonding of 4 interfaces, seeing very little load on both
>> the server and the router side.
>>
>> But I only did a quick analysis, so I could be entirely wrong or misled.
>>
>>> The FAQ told me livemedia is a single-threaded framework, in which all
>>> logic is processed sequentially in a single thread.
>>
>> This could be a problem (a known one) in our case, since the multiple
>> Xeon cores and CPUs were not used. Launching another instance of the
>> live555MediaServer helper added another 95 streams to our tests, so this
>> could be a hint about where to look for optimizations: do some profiling
>> and, if some computational effort is really needed, parallelize the code
>> as far as possible in order to use multiple cores/CPUs.
>>
>>> So do you have any insightful thoughts on where we can optimize, to
>>> make livemedia a high-performance RTP streaming stack that can stand
>>> up to heavy load?
>>
>> This is of real interest to me too.
>>
>> --
>> ESC:wq
>>
>> _______________________________________________
>> live-devel mailing list
>> live-devel at lists.live555.com
>> http://lists.live555.com/mailman/listinfo/live-devel
>
>
>
> ------------------------------
>
> Message: 8
> Date: Wed, 11 Mar 2009 17:10:03 -0700
> From: Patrick White <patbob at imoveinc.com>
> Subject: Re: [Live-devel] frame rate supported
> To: LIVE555 Streaming Media - development & use
> <live-devel at ns.live555.com>
> Message-ID: <200903111710.03977.patbob at imoveinc.com>
> Content-Type: text/plain; charset="iso-8859-1"
>
>
> Could you be running into sleep-quantization issues? 60 fps is down in
> sleep-quantization territory. The buffering on the receiver should take
> care of things, but maybe it isn't, or the timestamps are not getting
> generated correctly?
>
> patbob
>
>
> On Tuesday 10 March 2009 4:57 pm, Gbzbz Gbzbz wrote:
>> I thought fDuration is related to the frame rate? We have a hardware
>> encoder and we save the contents to a file before we stream them out.
>> When I play the file locally with VLC, it looks like 720p60, but at the
>> remote VLC RTSP client side it is 720p30 (or less); visually it is not
>> as smooth.
>>
>> I am not familiar with live555 (or C++ in general), so I am not sure if
>> the schedulTask or fDuration has anything to do with the above. Maybe
>> not?!
>>
>> --- On Tue, 3/10/09, Ross Finlayson <finlayson at live555.com> wrote:
>> > From: Ross Finlayson <finlayson at live555.com>
>> > Subject: Re: [Live-devel] frame rate supported
>> > To: "LIVE555 Streaming Media - development & use"
>> > <live-devel at ns.live555.com> Date: Tuesday, March 10, 2009, 9:19 PM
>> >
>> > > Does current live555 support 60 fps? Say 720p60? Any parameters in
>> > > this area?
>> >
>> > Frame rate and dimension parameters are carried within the
>> > video data itself (and are therefore specific to the video
>> > codec, and have nothing to do with RTSP or RTP).
>> > --
>> > Ross Finlayson
>> > Live Networks, Inc.
>> > http://www.live555.com/
>> > _______________________________________________
>> > live-devel mailing list
>> > live-devel at lists.live555.com
>> > http://lists.live555.com/mailman/listinfo/live-devel
>>
>> _______________________________________________
>> live-devel mailing list
>> live-devel at lists.live555.com
>> http://lists.live555.com/mailman/listinfo/live-devel
>
>
> ------------------------------
>
> _______________________________________________
> live-devel mailing list
> live-devel at lists.live555.com
> http://lists.live555.com/mailman/listinfo/live-devel
>
>
> End of live-devel Digest, Vol 65, Issue 9
> *****************************************
>