From nick.ogden at usa.g4s.com Mon Sep 1 01:23:34 2014 From: nick.ogden at usa.g4s.com (Ogden, Nick) Date: Mon, 1 Sep 2014 09:23:34 +0100 Subject: [Live-devel] Shared Library Support Message-ID: Greetings. We are currently exploring the possibility of using Live555 in a product consisting of a proprietary code base that must run on both Windows and Linux. In order to comply with the LGPL without having to release our entire codebase, we must build Live555 as shared libraries. Currently, however, since there are no storage-class definition macros in the classes, this is not possible on the Windows platform. I have found the forked Live456 project on GitHub, which adds support for shared libraries but has had no changes for the last 2 years. If I were to port this support back to the Live555 codebase, what would the chances be of this work being accepted upstream, and what would the acceptance criteria be? Kind regards. -- Nick Ogden G4S Technology Tel: +44 (0) 1684 857299 nick.ogden at usa.g4s.com www.g4stechnology.com Challenge House, International Drive, Tewkesbury, Gloucestershire, GL20 8UQ, UK
-------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Mon Sep 1 02:03:34 2014 From: finlayson at live555.com (Ross Finlayson) Date: Mon, 1 Sep 2014 02:03:34 -0700 Subject: [Live-devel] Shared Library Support In-Reply-To: References: Message-ID: > I have found the forked Live456 project on GitHub that adds support for shared libraries, but which has had no changes for the last 2 years. If I were to port this support back to the Live555 codebase, what would the chances be of this work being accepted upstream and what would the acceptance criteria be? It is very unlikely that we'd accept a patch that includes significant changes to large numbers of files, especially changes that are specific to Windows. However, other people on this mailing list have presumably developed LIVE555-based applications for Windows that use shared libraries - without making significant changes to the supplied LIVE555 source code - so perhaps they would be interested in sharing how they did this. Most companies that have used our code have been satisfied with the LGPL. However, if it turns out that the LGPL won't work for you, then relicensing (a snapshot of) the "LIVE555 Streaming Media" code under some other license (e.g., a simple BSD or MIT-style license) is something that we will consider doing - but not lightly (nor cheaply!), and for serious customers only. (Should this be something that you're interested in, please have your management contact me, via separate email.) Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed...
URL: From grom86 at mail.ru Mon Sep 1 06:12:45 2014 From: grom86 at mail.ru (minus) Date: Mon, 01 Sep 2014 17:12:45 +0400 Subject: [Live-devel] How to get timestamp difference between frames in milliseconds or microseconds? Message-ID: <1409577165.512950374@f133.i.mail.ru> I'm analyzing RTSP stream data and need to have the time difference in milliseconds between each video frame. The stream server sends timestamp values like the following: 1271120994 1271124594 1271128194 ... ... ... but the difference between frames does not appear to be in milliseconds or microseconds: 1271124594 - 1271120994 = 3600 How do I calculate the timestamp difference in milliseconds or microseconds? -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Tue Sep 2 00:02:52 2014 From: finlayson at live555.com (Ross Finlayson) Date: Tue, 2 Sep 2014 00:02:52 -0700 Subject: [Live-devel] How to get timestamp difference between frames in milliseconds or microseconds? In-Reply-To: <1409577165.512950374@f133.i.mail.ru> References: <1409577165.512950374@f133.i.mail.ru> Message-ID: <3E63B159-FB33-4A6E-87B8-593C1B761D09@live555.com> You shouldn't be looking at RTP timestamps at all. They are only used internally by the RTP/RTCP protocol, to convert from/to "presentation times". Instead, you should be looking only at "presentation times". (See, for example, the implementation of "DummySink::afterGettingFrame()" in the "testRTSPClient" demo application.) Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Tue Sep 2 04:01:10 2014 From: finlayson at live555.com (Ross Finlayson) Date: Tue, 2 Sep 2014 04:01:10 -0700 Subject: [Live-devel] RTCPInstance error: Hit limit when reading incoming packet over TCP.
Increase \"maxRTCPPacketSize\" In-Reply-To: References: <1408617661.3863.16.camel@IV247-DnyaneshG> Message-ID: > Sorry for the late reply. I tested the same scenario with the latest > version of live555 ( live.2014.08.26.tar.gz, no modifications made ) > and got the same results. > Regarding the back-end server I am using: it is a third-party cloud camera > running its own RTSP server (Ambarella RTSP Server), so I do > not know its make and model. Unfortunately, the problem is your 'back-end' server. It is incorrectly packaging RTP (and RTCP) packets over the TCP connection (to your proxy server). Based on the symptoms that you are seeing, it is very likely that the back-end server has been implemented using an earlier, buggy version of the "LIVE555 Streaming Media" software. If this is the case, then - as one of the consequences of the GNU LGPL - they *must* upgrade their server's software, upon request, to use the latest version of the LIVE555 library. Please tell me how I can contact the provider of your back-end server, so I can remind them of their legal obligation. If the back-end server will not upgrade their software, then you have only two choices: 1/ Use a different back-end server, or 2/ Stream from the back-end server using UDP, not TCP. I.e., do *not* give the "LIVE555 Proxy Server" the "-t" option. Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed...
URL: From longnv at elcom.com.vn Wed Sep 3 03:32:42 2014 From: longnv at elcom.com.vn (Nguyen Van Long) Date: Wed, 3 Sep 2014 17:32:42 +0700 Subject: [Live-devel] Problem when multi client connect to server using OndemandServerMediaSubsession Message-ID: <000c01cfc762$6543d360$2fcb7a20$@com.vn> Dear Live555 team, I recently developed an application that uses the live555 library and performs simple tasks: get video data from a camera, decode it, draw something on each video frame, re-encode the frame, and stream it. My application is a client (to the camera) and also a server when streaming video data. To get video data from the camera, I modified testRTSPClient in testProgs. The received data is decoded using ffmpeg, and some text and shapes are drawn using Qt. After that, I use ffmpeg again to encode the video frame (codec is MJPEG-4) and put the output into a queue, which is streamed later. To stream video from the queue, I wrote a class based on DeviceSource; its doGetNextFrame function reads an MPEG-4 packet from the queue and calls FramedSource::afterGetting(this) when data is available. I also wrote a class called Mpeg4LiveServerMediaSubsession, subclassed from OnDemandServerMediaSubsession, that re-implements three virtual functions (getAuxSDPLine, createNewStreamSource, createNewRTPSink). createNewStreamSource returns MPEG4VideoStreamDiscreteFramer::createNew(), with the input source parameter being my DeviceSource-based class described above. I use VLC as the client to connect to my server and play the video stream. Everything works quite well while fewer than 4 clients are connected. When the 4th client connects, the video for all clients slows down and the image quality becomes so bad that I cannot see the video content clearly. I don't think the problem is my network, because I use a LAN with good capacity, and the problem still happens even when the server and client (VLC) are on the same computer.
I have some more information here: My processor: Core i3, 3.36 GHz; Memory (RAM): 4GB. When 4 clients connect to the server, the program uses 30% of memory and 49% of CPU. When 1 client (VLC) connects to the server, the VLC tool shows a content bitrate of about 6000 - 7000 kb/s. It drops to 4000 - 5000 kb/s with 2 clients connected, 2000 - 2500 kb/s with 3 clients, and 600 - 1500 kb/s with 4 clients. Do you have any ideas about my problem, and any suggestions to improve video quality when many clients connect to the server? Thanks! Regards, Nguyen Van Long (Mr) ESoft - Software Development ---------------------------------------------------------------------------- - ELCOM CORP Add: Elcom Building, Duy Tan Street, Cau Giay District, Ha Noi Mobile: (+84) 936 369 326 | Skype: Pfiev.long | Web: www.elcom.com.vn -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Wed Sep 3 04:25:27 2014 From: finlayson at live555.com (Ross Finlayson) Date: Wed, 3 Sep 2014 04:25:27 -0700 Subject: [Live-devel] Problem when multi client connect to server using OndemandServerMediaSubsession In-Reply-To: <000c01cfc762$6543d360$2fcb7a20$@com.vn> References: <000c01cfc762$6543d360$2fcb7a20$@com.vn> Message-ID: <69A00F7A-1AAB-4004-9404-5E308D8B297F@live555.com> > I also wrote a class called Mpeg4LiveServerMediaSubsession, subclassed from OnDemandServerMediaSubsession When your class "Mpeg4LiveServerMediaSubsession"'s constructor calls the "OnDemandServerMediaSubsession" constructor, is the "reuseFirstSource" parameter "True"? (This ensures that only one input source object is created at a time, regardless of how many RTSP clients connect to your server.) > I use VLC as the client to connect to my server and play the video stream. Everything works quite well when fewer than 4 clients are connected.
When the 4th client connects to the server, the video of all clients gets slower and the image is very bad; I cannot see the video content clearly … Are these 4 clients (running VLC) on 4 *separate* computers? The reason I ask this is that - a few months ago - someone else reported similar symptoms to what you're reporting. In that case, though, the problem was that they were running more than one copy of VLC on the same client computer, and it turned out that the problem was the client computer's CPU overhead (from running more than one copy of VLC), not the server computer (running your LIVE555-based code). Assuming that the 4 clients (running VLC) are on 4 separate computers, then does anything change when you run "openRTSP" rather than VLC as your client (again, all on separate client computers)? Also, are the clients requesting RTP-over-UDP streaming from your server, or RTP-over-TCP streaming? If any of them are requesting RTP-over-TCP streaming, then does anything change if they request RTP-over-UDP streaming instead? Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From longnv at elcom.com.vn Thu Sep 4 02:47:26 2014 From: longnv at elcom.com.vn (Nguyen Van Long) Date: Thu, 4 Sep 2014 16:47:26 +0700 Subject: [Live-devel] Problem when multi client connect to server using OndemandServerMediaSubsession In-Reply-To: <000c01cfc762$6543d360$2fcb7a20$@com.vn> References: <000c01cfc762$6543d360$2fcb7a20$@com.vn> Message-ID: <000f01cfc825$4035da50$c0a18ef0$@com.vn> Dear Ross, I had seen the answer at http://lists.live555.com/pipermail/live-devel/2014-September/thread.html although I didn't receive the email from the mailing list (maybe our mail server was down or something caused the email to be lost) and I would like to thank you for your quick answer.
As you suggested, I ran VLC as a client on 4 separate computers (I think the hardware in each computer is adequate to run just one VLC). The "reuseFirstSource" parameter is of course set to True, but nothing changed: the video again gets slower, with a bad image, when the 4th or 5th client connects to the server (all clients request RTP-over-UDP streaming from the server). I also used "openRTSP" to get data from my server and write it into files. The same thing happens when the 4th or 5th client connects (with fewer than 4 connections, everything is ok). I would be glad if you could give me some more suggestions about this problem. Thanks! -------------- next part -------------- An HTML attachment was scrubbed...
URL: From finlayson at live555.com Thu Sep 4 06:04:25 2014 From: finlayson at live555.com (Ross Finlayson) Date: Thu, 4 Sep 2014 06:04:25 -0700 Subject: [Live-devel] Problem when multi client connect to server using OndemandServerMediaSubsession In-Reply-To: <000f01cfc825$4035da50$c0a18ef0$@com.vn> References: <000c01cfc762$6543d360$2fcb7a20$@com.vn> <000f01cfc825$4035da50$c0a18ef0$@com.vn> Message-ID: <2EE9462E-5A48-47C2-9F31-C0B1BB1A8315@live555.com> > I had seen the answer at http://lists.live555.com/pipermail/live-devel/2014-September/thread.html although I didn't received email from mailing list (may be our mail server's down or something cause to lost email ...) You're on the mailing list, so you should be receiving mailing list messages. Perhaps your company's spam filter is incorrectly rejecting this mailing list's messages?? > As your suggestions, I use VLC as client running in 4 separated computer (I think hardware in each computer is ok to run just one vlc), The "reuseFirstSource" is of course set to true but nothing changed, video again getting slower with bad image when the 4th or 5th client connect to server (All clients use RTP-over-UDP to request to server) > I also use "openRTSP" to get data from my server and write into files. The same thing happens when the 4th or 5th client connect to server (when the connection less than 4, everything is ok). I suspect that you are approaching the capacity of your network (or at least the capacity of your server's OS to transmit packets on your network). The big problem here is your choice of codec: JPEG. This is a very inefficient codec for streaming. If you haven't already done so, see http://www.live555.com/liveMedia/faq.html#jpeg-streaming I suggest using H.264 instead of MJPEG. Also, if your clients are all on the same LAN, you should stream via IP multicast rather than IP unicast. (Then, your server's network traffic will be independent of the number of clients.)
To do this, you would use a "PassiveServerMediaSubsession" rather than an "OnDemandServerMediaSubsession" (and note the various "test*Streamer" demo applications, which stream via multicast, using a "PassiveServerMediaSubsession"). Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From fabricet at ovation.co.uk Thu Sep 4 08:41:52 2014 From: fabricet at ovation.co.uk (Fabrice Triboix) Date: Thu, 4 Sep 2014 15:41:52 +0000 Subject: [Live-devel] Please help: I have difficulties implementing the right classes Message-ID: <2C1DDB155C0446419AB4EB1D5AFBD193532800DA@OVATIONSBS2011.ovation.local> Hello, I am trying to build an RTSP server using liveMedia to stream H.264 in real time from a live encoder. In essence, as soon as I try to connect with VLC, liveMedia abort()s with the following message: FramedSource[0xbe838]::getNextFrame(): attempting to read more than once at the same time! I am using live555-latest.tar.gz, downloaded from the live555 website on 27/08/2014.
Here is the stack trace:

(gdb) bt
#0 FramedSource::getNextFrame (this=0xbe828, to=0xbeb78 "\361s\177\211?\257\320??G\372BZ%@u\265\033}\"?(\262k\034?\362n\215", maxSize=201248, afterGettingFunc=0x129bc , afterGettingClientData=0xbe9d0, onCloseFunc=0x12100 , onCloseClientData=0xbe9d0) at FramedSource.cpp:65
#1 0x000122d8 in MultiFramedRTPSink::packFrame (this=0xbe9d0) at MultiFramedRTPSink.cpp:216
#2 0x00012b68 in MultiFramedRTPSink::buildAndSendPacket (this=0xbe9d0, isFirstPacket=0 '\000') at MultiFramedRTPSink.cpp:191
#3 0x00012b98 in MultiFramedRTPSink::sendNext (firstArg=0xbe9d0) at MultiFramedRTPSink.cpp:414
#4 0x0007d404 in AlarmHandler::handleTimeout (this=0xefe28) at BasicTaskScheduler0.cpp:34
#5 0x00079e58 in DelayQueue::handleAlarm (this=0xb059c) at DelayQueue.cpp:187
#6 0x00078f24 in BasicTaskScheduler::SingleStep (this=0xb0598, maxDelayTime=0) at BasicTaskScheduler.cpp:212
#7 0x0007be9c in BasicTaskScheduler0::doEventLoop (this=0xb0598, watchVariable=0xb55d8 "") at BasicTaskScheduler0.cpp:80
#8 0x0000bcd0 in CVideoSubsession::getAuxSDPLine (this=0xb5518, rtpSink=0xbe9d0, inputSource=0xbe828) at videosubsession.cpp:79
#9 0x00036abc in OnDemandServerMediaSubsession::setSDPLinesFromRTPSink (this=0xb5518, rtpSink=0xbe9d0, inputSource=0xbe828, estBitrate=1074845076) at OnDemandServerMediaSubsession.cpp:390
#10 0x000379bc in OnDemandServerMediaSubsession::sdpLines (this=0xb5518) at OnDemandServerMediaSubsession.cpp:76
#11 0x0003502c in ServerMediaSession::generateSDPDescription (this=0xb5400) at ServerMediaSession.cpp:236
#12 0x0001df2c in RTSPServer::RTSPClientConnection::handleCmd_DESCRIBE (this=0xb9860, urlPreSuffix=0xbe905998 "", urlSuffix=0xbe9058d0 "test", fullRequestStr=0xb9884 "DESCRIBE rtsp://192.168.0.8/test RTSP/1.0\r\nCSeq: 3\r\nUser-Agent: LibVLC/2.1.5 (LIVE555 Streaming Media v2014.05.27)\r\nAccept: application/sdp\r\n\r\n") at RTSPServer.cpp:524
#13 0x0001aef0 in RTSPServer::RTSPClientConnection::handleRequestBytes (this=0xb9860, newBytesRead=143) at RTSPServer.cpp:988
#14 0x0001bb54 in RTSPServer::RTSPClientConnection::incomingRequestHandler1 (this=0xb9860) at RTSPServer.cpp:788
#15 0x0001bb8c in RTSPServer::RTSPClientConnection::incomingRequestHandler (instance=0xb9860) at RTSPServer.cpp:781
#16 0x00078cf8 in BasicTaskScheduler::SingleStep (this=0xb0598, maxDelayTime=0) at BasicTaskScheduler.cpp:171
#17 0x0007be9c in BasicTaskScheduler0::doEventLoop (this=0xb0598, watchVariable=0x0) at BasicTaskScheduler0.cpp:80
#18 0x0000b7f0 in main () at rtsp.cpp:118
(gdb)

I derived the FramedSource and OnDemandServerMediaSubsession classes to implement my code, getting inspiration from both DeviceSource.cpp and wis-streamer. I can provide more information and/or source code, just let me know what would be useful. Thank you so much for your help! Fabrice -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Thu Sep 4 09:21:20 2014 From: finlayson at live555.com (Ross Finlayson) Date: Thu, 4 Sep 2014 09:21:20 -0700 Subject: [Live-devel] Please help: I have difficulties implementing the right classes In-Reply-To: <2C1DDB155C0446419AB4EB1D5AFBD193532800DA@OVATIONSBS2011.ovation.local> References: <2C1DDB155C0446419AB4EB1D5AFBD193532800DA@OVATIONSBS2011.ovation.local> Message-ID: > I am trying to build an RTSP server using liveMedia to stream H.264 in real time from a live encoder. In essence, as soon as I try to connect with VLC, liveMedia abort()s with the following message: > FramedSource[0xbe838]::getNextFrame(): attempting to read more than once at the same time! This error means that a "FramedSource" object (presumably of your subclass that you wrote to deliver live H.264 data from your encoder) is getting a call to "getNextFrame()" while it is already handling a previous call to "getNextFrame()".
I.e., it seems that your "FramedSource" subclass's implementation of "doGetNextFrame()" is incorrect; it apparently is not calling FramedSource::afterGetting(this); after it completes delivery to the downstream (i.e., calling) object. (Also, you should use "testRTSPClient" and/or "openRTSP" as RTSP clients for testing, before using VLC. VLC is more complex, and is not our software.) Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From fabricet at ovation.co.uk Thu Sep 4 09:44:30 2014 From: fabricet at ovation.co.uk (Fabrice Triboix) Date: Thu, 4 Sep 2014 16:44:30 +0000 Subject: [Live-devel] Please help: I have difficulties implementing the right classes In-Reply-To: References: <2C1DDB155C0446419AB4EB1D5AFBD193532800DA@OVATIONSBS2011.ovation.local>, Message-ID: <2C1DDB155C0446419AB4EB1D5AFBD193532800F9@OVATIONSBS2011.ovation.local> Hi Ross, Thanks for your comment. Here is the implementation of doGetNextFrame() in my class (which is indeed derived from FramedSource):

void CAvSource::doGetNextFrame()
{
    // Forward all frames in the input AV buffer
    while (getTopMetadata() != NULL) {
        deliverFrame();
    }
}

void CAvSource::deliverFrame()
{
    av_data_t frame;
    int32_t ret = av_get_variable_frame(mAvBuffer, &frame, POLL_NOWAIT);
    if (ret <= 0) {
        log_error("Failed to get frame even if AV buffer told us some are "
                  "available for av_buffer %p", mAvBuffer);
        return;
    }
    unsigned size = frame.num_bytes;
    if (size > fMaxSize) {
        fFrameSize = fMaxSize;
        fNumTruncatedBytes = size - fMaxSize;
    } else {
        fFrameSize = size;
        fNumTruncatedBytes = 0;
    }
    memmove(fTo, frame.virt_addr, fFrameSize);
    av_metadata_t* meta = frame.metadata;
    fPresentationTime.tv_sec = meta->time_usec_then / 1000000LL;
    fPresentationTime.tv_usec = meta->time_usec_then % 1000000LL;
    fDurationInMicroseconds = meta->time_usec_now - meta->time_usec_then;

    // We don't need the AV frame any more, so we can release it now.
    av_finished_with_variable_frame(mAvBuffer, 1);

    // Inform the reader that a new frame is available.
    FramedSource::afterGetting(this);
}

Could it be that calling deliverFrame() multiple times might be wrong? Thanks a lot for your help! Fabrice -------------- next part -------------- An HTML attachment was scrubbed...
URL: From finlayson at live555.com Thu Sep 4 10:35:07 2014 From: finlayson at live555.com (Ross Finlayson) Date: Thu, 4 Sep 2014 10:35:07 -0700 Subject: [Live-devel] Please help: I have difficulties implementing the right classes In-Reply-To: <2C1DDB155C0446419AB4EB1D5AFBD193532800F9@OVATIONSBS2011.ovation.local> References: <2C1DDB155C0446419AB4EB1D5AFBD193532800DA@OVATIONSBS2011.ovation.local>, <2C1DDB155C0446419AB4EB1D5AFBD193532800F9@OVATIONSBS2011.ovation.local> Message-ID: > Could it be that calling deliverFrame() multiple times might be wrong? Yes, that's wrong. Your "doGetNextFrame()" function should deliver one, and only one, H.264 NAL unit (note, not a H.264 'frame') each time it's called. Note that if - at the time that "doGetNextFrame()" is called - no H.264 NAL unit is currently available, your "doGetNextFrame()" implementation should return immediately, and your "deliverFrame()" function must not get called again until later, when a new H.264 NAL unit becomes available. I suggest reviewing the "DeviceSource" demo code, and note how it uses an 'event trigger' (signaled from a separate thread) to do this. Also, because you're delivering discrete H.264 NAL units (i.e., one at a time), your "OnDemandServerMediaSubclass::createNewStreamSource()" implementation should be feeding your input source object into a "H264VideoStreamDiscreteFramer", not a "H264VideoStreamFramer". (Note that, in this case, the H.264 NAL units from your input source must *not* begin with a 'start code' (0x00 0x00 0x00 0x01).) Finally, because you're delivering from a live source (rather than a prerecorded source like a file), you don't need to set "fDurationInMicroseconds". Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From longnv at elcom.com.vn Thu Sep 4 18:30:36 2014 From: longnv at elcom.com.vn (Nguyen Van Long) Date: Fri, 5 Sep 2014 08:30:36 +0700 Subject: [Live-devel] Problem when multi client connect to server using OndemandServerMediaSubsession In-Reply-To: <000f01cfc825$4035da50$c0a18ef0$@com.vn> References: <000c01cfc762$6543d360$2fcb7a20$@com.vn> <000f01cfc825$4035da50$c0a18ef0$@com.vn> Message-ID: <000201cfc8a8$fef7d730$fce78590$@com.vn> Dear Ross, I would like to give many thanks for your suggestions. It seems you are right in my case. I will try to use another codec (H.264) as well as change from streaming unicast to multicast. Thank again for your help. From: Nguyen Van Long [mailto:longnv at elcom.com.vn] Sent: Thursday, September 04, 2014 4:47 PM To: live-devel at lists.live555.com Cc: longnv at elcom.com.vn Subject: RE: Problem when multi client connect to server using OndemandServerMediaSubsession Dear Ross, I had seen the answer at http://lists.live555.com/pipermail/live-devel/2014-September/thread.html although I didn't received email from mailing list (may be our mail server's down or something cause to lost email .) and I would like to thank you for your quick answer. As your suggestions, I use VLC as client running in 4 separated computer (I think hardware in each computer is ok to run just one vlc), The "reuseFirstSource" is of course set to true but nothing changed, video again getting slower with bad image when the 4th or 5th client connect to server (All clients use RTP-over-UDP to request to server) I also use "openRTSP" to get data from my server and write into files. The same thing happens when the 4th or 5th client connect to server (when the connection less than 4, everything is ok). It's glad if you could give me some more suggestions about this problem. Thanks! 
From: Nguyen Van Long [mailto:longnv at elcom.com.vn] Sent: Wednesday, September 03, 2014 5:33 PM To: live-devel at lists.live555.com Cc: longnv at elcom.com.vn Subject: Problem when multi client connect to server using OndemandServerMediaSubsession Dear Live555 team, I recently develop an application that uses live555 library and operates simple tasks such as: get video data from camera, decode video data, draw something on video frame, encode video frame and streaming. My application is a client (to camera) and also a server when streaming video data. To get video data from camera, I modified the testRTSPClient in testProgs. Data received then decoded using ffmpeg and drawn some text, shape using Qt. After that, I use ffmpeg again to encode video frame (codec is MJPEG-4) and put the output into a queue which will be streamed later. To stream video from a queue, I write a class based on DeviceSource, the function doGetNextFrame always read mpeg-4 package from queue and calls FrameSource::afterGetting(this) when data available. I also write a class called Mpeg4LiveServerMediaSubsession which subbed class from OnDemandServerMediaSubsession and re-implement three virtual functions (getAuxSDPLine, createNewFrameSource, createNewRTPSink). The createNewFrameSource actually return the MPEG4VideoStreamDiscreteFramer::createNew() with input source parameter is my class based on DeviceSource described above. I use VLC as client to connect to my server and play video stream. Everything seems ok and my application works quite fine when there are less than 4 clients connect to server. When the 4th client connect to server, the video of all clients is getting slower, image is very bad, I cannot see video content clearly . I don't think the problem is with my network because I use LAN with a good capable and even my server and client (VLC) in the same computer, this problem still happens. 
I have some more information here: My processor: Core i3, 3.36 GHz; Memory (RAM): 4GB. When 4 clients connect to the server, the program uses 30% of memory and 49% of CPU. When 1 client (VLC) connects to the server, the VLC statistics show a content bitrate of about 6000 - 7000 kb/s. It drops to 4000 - 5000 kb/s when 2 clients connect, 2000 - 2500 kb/s when 3 clients connect, and 600 - 1500 kb/s when 4 clients connect. Do you have any ideas about my problem, or any suggestions to improve video quality when many clients connect to the server? Thanks! Regards, Nguyen Van Long (Mr) ESoft - Software Development ---------------------------------------------------------------------------- - ELCOM CORP Add: Elcom Building, Duy Tan Street, Cau Giay District, Ha Noi Mobile: (+84) 936 369 326 | Skype: Pfiev.long | Web: www.elcom.com.vn -------------- next part -------------- An HTML attachment was scrubbed... URL: From fabricet at ovation.co.uk Fri Sep 5 00:05:30 2014 From: fabricet at ovation.co.uk (Fabrice Triboix) Date: Fri, 5 Sep 2014 07:05:30 +0000 Subject: [Live-devel] Please help: I have difficulties implementing the right classes In-Reply-To: References: <2C1DDB155C0446419AB4EB1D5AFBD193532800DA@OVATIONSBS2011.ovation.local>, <2C1DDB155C0446419AB4EB1D5AFBD193532800F9@OVATIONSBS2011.ovation.local>, Message-ID: <2C1DDB155C0446419AB4EB1D5AFBD19353280144@OVATIONSBS2011.ovation.local> Hi Ross, Thanks a lot for your answers. I have some comments/additional questions, inline in green below. Many thanks for your help! Fabrice ________________________________ From: live-devel [live-devel-bounces at ns.live555.com] on behalf of Ross Finlayson [finlayson at live555.com] Sent: 04 September 2014 18:35 To: LIVE555 Streaming Media - development & use Subject: Re: [Live-devel] Please help: I have difficulties implementing the right classes Could it be that calling deliverFrame() multiple times might be wrong? Yes, that's wrong.
Your "doGetNextFrame()" function should deliver one, and only one, H.264 NAL unit (note, not a H.264 'frame') each time it's called. [Fabrice] All right, that's good to know. From time to time, the encoder sends an SPS and a PPS NALU before the data NALU (all in one "frame"). So when that's the case, I need to somehow have the doGetNextFrame() method being called a 2nd and a 3rd time... What's the best way to do this? Note that if - at the time that "doGetNextFrame()" is called - no H.264 NAL unit is currently available, your "doGetNextFrame()" implementation should return immediately, and your "deliverFrame()" function must not get called again until later, when a new H.264 NAL unit becomes available. [Fabrice] All right. This is what I am doing now; my deliverFrame() method is called only from doGetNextFrame(). I suggest reviewing the "DeviceSource" demo code, and note how it uses an 'event trigger' (signaled from a separate thread) to do this. [Fabrice] Yes, I carefully went through it. However, I am using a scheduleDelayedTask() with a "period" of 10ms because I wanted to get something working quickly. I will implement a proper monitoring thread which will signal a trigger once I get this implementation working. BTW, what would happen if the monitor thread signal the trigger twice because 2 frames arrived almost simultaneously (and doGetNextFrame() hadn't been called yet for the first frame)? Also, because you're delivering discrete H.264 NAL units (i.e., one at a time), your "OnDemandServerMediaSubclass::createNewStreamSource()" implementation should be feeding your input source object into a "H264VideoStreamDiscreteFramer", not a "H264VideoStreamFramer". (Note that, in this case, the H.264 NAL units from your input source must *not* begin with a 'start code' (0x00 0x00 0x00 0x01).) [Fabrice] I am using an MPEG4VideoStreamDiscreteFramer, as in wis-streamer. Is that wrong? Should I discard the start code if I use MPEG4VideoStreamDiscreteFramer? 
Same question if I use H264VideoStreamDiscreteFramer? Here is the implementation of my createNewStreamSource() method:

FramedSource* CVideoSubsession::createNewStreamSource(unsigned clientSessionId, unsigned& estimatedBitRate)
{
    // TODO: This is a leak, the CAvSource object should be a property
    CAvSource* src = CAvSource::createNew(envir(), "vid_h264_hd", "output");
    if (NULL == src) {
        return NULL;
    }
    return MPEG4VideoStreamDiscreteFramer::createNew(envir(), src);
}

Finally, because you're delivering from a live source (rather than a prerecorded source like a file), you don't need to set "fDurationInMicroseconds". [Fabrice] Is it bad if I do? Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Fri Sep 5 00:28:04 2014 From: finlayson at live555.com (Ross Finlayson) Date: Fri, 5 Sep 2014 00:28:04 -0700 Subject: [Live-devel] Please help: I have difficulties implementing the right classes In-Reply-To: <2C1DDB155C0446419AB4EB1D5AFBD19353280144@OVATIONSBS2011.ovation.local> References: <2C1DDB155C0446419AB4EB1D5AFBD193532800DA@OVATIONSBS2011.ovation.local>, <2C1DDB155C0446419AB4EB1D5AFBD193532800F9@OVATIONSBS2011.ovation.local>, <2C1DDB155C0446419AB4EB1D5AFBD19353280144@OVATIONSBS2011.ovation.local> Message-ID: >> Could it be that calling deliverFrame() multiple times might be wrong? > > Yes, that's wrong. Your "doGetNextFrame()" function should deliver one, and only one, H.264 NAL unit (note, not a H.264 'frame') each time it's called. > [Fabrice] All right, that's good to know. From time to time, the encoder sends an SPS and a PPS NALU before the data NALU (all in one "frame"). So when that's the case, I need to somehow have the doGetNextFrame() method being called a 2nd and a 3rd time... What's the best way to do this? You're thinking about this the wrong way.
"doGetNextFrame()" gets called automatically (by the downstream, 'transmitting' object) whenever it needs a new NAL unit to transmit. So you should just deliver the next NAL unit (just one!) whenever "doGetNextFrame()" is called. If your encoder can generate more than one NAL unit at a time, then you'll need to enqueue them in some way. > BTW, what would happen if the monitor thread signal the trigger twice because 2 frames arrived almost simultaneously (and doGetNextFrame() hadn't been called yet for the first frame)? The triggered event might end up getting handled just once. But that should be OK, as long as you are enqueuing incoming NAL units. (The next time "doGetNextFrame()" gets called, it'll see whatever NAL unit(s) are left in the queue.) > [Fabrice] I am using an MPEG4VideoStreamDiscreteFramer, as in wis-streamer. Is that wrong? No, that's right. Just make sure that the NAL units that you feed to it do *not* begin with a 0x00 0x00 0x00 0x01 start code. > Finally, because you're delivering from a live source (rather than a prerecorded source like a file), you don't need to set "fDurationInMicroseconds". > [Fabrice] Is is bad if I do? No, as long as you're not setting it to a value that's larger than it should be. Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From fabricet at ovation.co.uk Fri Sep 5 01:14:03 2014 From: fabricet at ovation.co.uk (Fabrice Triboix) Date: Fri, 5 Sep 2014 08:14:03 +0000 Subject: [Live-devel] Please help: I have difficulties implementing the right classes In-Reply-To: References: <2C1DDB155C0446419AB4EB1D5AFBD193532800DA@OVATIONSBS2011.ovation.local>, <2C1DDB155C0446419AB4EB1D5AFBD193532800F9@OVATIONSBS2011.ovation.local>, <2C1DDB155C0446419AB4EB1D5AFBD19353280144@OVATIONSBS2011.ovation.local>, Message-ID: <2C1DDB155C0446419AB4EB1D5AFBD1935328015B@OVATIONSBS2011.ovation.local> You're thinking about this the wrong way. 
"doGetNextFrame()" gets called automatically (by the downstream, 'transmitting' object) whenever it needs a new NAL unit to transmit. So you should just deliver the next NAL unit (just one!) whenever "doGetNextFrame()" is called. If your encoder can generate more than one NAL unit at a time, then you'll need to enqueue them in some way. [Fabrice] I would be interested in understanding a bit more here. Is live555 a pull model? How does the transmitting object know when to send the next frame? Who/what decides to call doGetNextFrame() and when? I think I understood all the other bits you mentioned, so I'll make the necessary changes. -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Fri Sep 5 08:38:00 2014 From: finlayson at live555.com (Ross Finlayson) Date: Fri, 5 Sep 2014 08:38:00 -0700 Subject: [Live-devel] Please help: I have difficulties implementing the right classes In-Reply-To: <2C1DDB155C0446419AB4EB1D5AFBD1935328015B@OVATIONSBS2011.ovation.local> References: <2C1DDB155C0446419AB4EB1D5AFBD193532800DA@OVATIONSBS2011.ovation.local>, <2C1DDB155C0446419AB4EB1D5AFBD193532800F9@OVATIONSBS2011.ovation.local>, <2C1DDB155C0446419AB4EB1D5AFBD19353280144@OVATIONSBS2011.ovation.local>, <2C1DDB155C0446419AB4EB1D5AFBD1935328015B@OVATIONSBS2011.ovation.local> Message-ID: On Sep 5, 2014, at 1:14 AM, Fabrice Triboix wrote: > You're thinking about this the wrong way. "doGetNextFrame()" gets called automatically (by the downstream, 'transmitting' object) whenever it needs a new NAL unit to transmit. So you should just deliver the next NAL unit (just one!) whenever "doGetNextFrame()" is called. If your encoder can generate more than one NAL unit at a time, then you'll need to enqueue them in some way. > [Fabrice] I would be interested in understanding a bit more here. Is live555 a pull model? Yes. > How does the transmitting object know when to send the next frame?
Who/what decides to call doGetNextFrame() and when? The transmitting object (a "MultiFramedRTPSink" subclass) uses the frame duration parameter ("fDurationInMicroseconds") to figure out how long to wait - after transmitting a RTP packet - before requesting another 'frame' from the upstream object. (I put 'frame' in quotes here, because - for H.264 streaming - the piece of data being delivered is actually a H.264 NAL unit.) If "fDurationInMicroseconds" is 0 (its default value), then the transmitting object will request another 'frame' immediately after transmitting a RTP packet. If data is being delivered from a live encoder - as in your case - then that's OK, because the encoder won't actually deliver data until it becomes available. That's why you don't need to set "fDurationInMicroseconds" if your data comes from a live source. Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From fabricet at ovation.co.uk Fri Sep 5 10:02:37 2014 From: fabricet at ovation.co.uk (Fabrice Triboix) Date: Fri, 5 Sep 2014 17:02:37 +0000 Subject: [Live-devel] Please help: I have difficulties implementing the right classes In-Reply-To: References: <2C1DDB155C0446419AB4EB1D5AFBD193532800DA@OVATIONSBS2011.ovation.local>, <2C1DDB155C0446419AB4EB1D5AFBD193532800F9@OVATIONSBS2011.ovation.local>, <2C1DDB155C0446419AB4EB1D5AFBD19353280144@OVATIONSBS2011.ovation.local>, <2C1DDB155C0446419AB4EB1D5AFBD1935328015B@OVATIONSBS2011.ovation.local>, Message-ID: <2C1DDB155C0446419AB4EB1D5AFBD193532801A3@OVATIONSBS2011.ovation.local> Hi Ross, Thanks a lot for these details, that makes things a lot clearer. One last question: Let's assume fDurationInMicroseconds is 0; if the transmitting object immediately requests the next frame and doGetNextFrame() returns immediately because no frame is available, isn't there a risk that the application will use 100% of CPU? How does the transmitting object avoid using 100% CPU? 
Best regards, Fabrice ________________________________ From: live-devel [live-devel-bounces at ns.live555.com] on behalf of Ross Finlayson [finlayson at live555.com] Sent: 05 September 2014 16:38 To: LIVE555 Streaming Media - development & use Subject: Re: [Live-devel] Please help: I have difficulties implementing the right classes On Sep 5, 2014, at 1:14 AM, Fabrice Triboix > wrote: You're thinking about this the wrong way. "doGetNextFrame()" gets called automatically (by the downstream, 'transmitting' object) whenever it needs a new NAL unit to transmit. So you should just deliver the next NAL unit (just one!) whenever "doGetNextFrame()" is called. If your encoder can generate more than one NAL unit at a time, then you'll need to enqueue them in some way. [Fabrice] I would be interested in understanding a bit more here. Is live555 a pull model? Yes. How does the transmitting object know when to send the next frame? Who/what decides to call doGetNextFrame() and when? The transmitting object (a "MultiFramedRTPSink" subclass) uses the frame duration parameter ("fDurationInMicroseconds") to figure out how long to wait - after transmitting a RTP packet - before requesting another 'frame' from the upstream object. (I put 'frame' in quotes here, because - for H.264 streaming - the piece of data being delivered is actually a H.264 NAL unit.) If "fDurationInMicroseconds" is 0 (its default value), then the transmitting object will request another 'frame' immediately after transmitting a RTP packet. If data is being delivered from a live encoder - as in your case - then that's OK, because the encoder won't actually deliver data until it becomes available. That's why you don't need to set "fDurationInMicroseconds" if your data comes from a live source. Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed...
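[Editorial note] The pull model discussed in this thread — "doGetNextFrame()" delivers exactly one NAL unit per call, returns immediately when nothing is queued, and delivery resumes when the encoder signals a new unit — can be sketched in plain C++. This is an illustrative stand-in, not LIVE555 API: the class and method names besides doGetNextFrame() are made up, and the delivery callback models FramedSource::afterGetting().

```cpp
#include <cstdint>
#include <functional>
#include <queue>
#include <vector>

// Illustrative model of a FramedSource-like object that owns a queue of
// H.264 NAL units and delivers exactly one unit per downstream request.
class NalQueueSource {
public:
    using Nal = std::vector<uint8_t>;

    explicit NalQueueSource(std::function<void(const Nal&)> deliver)
        : deliver_(std::move(deliver)) {}

    // Called by the downstream object when it wants the next NAL unit
    // (models doGetNextFrame()). If nothing is queued, return immediately
    // and remember that a request is outstanding.
    void doGetNextFrame() {
        if (queue_.empty()) {
            requestPending_ = true;   // deliver later, from onNalUnit()
            return;
        }
        deliverOne();
    }

    // Called when the encoder produces a NAL unit (models the event trigger
    // signaled from the encoder thread). SPS/PPS/slice units produced
    // together are simply enqueued; each later request pops exactly one.
    void onNalUnit(Nal nal) {
        queue_.push(std::move(nal));
        if (requestPending_) {
            requestPending_ = false;
            deliverOne();             // satisfy the outstanding request
        }
    }

private:
    void deliverOne() {
        Nal nal = std::move(queue_.front());
        queue_.pop();
        deliver_(nal);                // models FramedSource::afterGetting()
    }

    std::queue<Nal> queue_;
    bool requestPending_ = false;
    std::function<void(const Nal&)> deliver_;
};
```

The key property is that an encoder burst of SPS+PPS+slice results in three separate deliveries, one per downstream request — which is what Ross's "enqueue them in some way" advice amounts to.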
URL: From finlayson at live555.com Fri Sep 5 12:37:11 2014 From: finlayson at live555.com (Ross Finlayson) Date: Fri, 5 Sep 2014 12:37:11 -0700 Subject: [Live-devel] Please help: I have difficulties implementing the right classes In-Reply-To: <2C1DDB155C0446419AB4EB1D5AFBD193532801A3@OVATIONSBS2011.ovation.local> References: <2C1DDB155C0446419AB4EB1D5AFBD193532800DA@OVATIONSBS2011.ovation.local>, <2C1DDB155C0446419AB4EB1D5AFBD193532800F9@OVATIONSBS2011.ovation.local>, <2C1DDB155C0446419AB4EB1D5AFBD19353280144@OVATIONSBS2011.ovation.local>, <2C1DDB155C0446419AB4EB1D5AFBD1935328015B@OVATIONSBS2011.ovation.local>, <2C1DDB155C0446419AB4EB1D5AFBD193532801A3@OVATIONSBS2011.ovation.local> Message-ID: > One last question: Let's assume fDurationInMicroseconds is 0; if the transmitting object immediately requests the next frame and doGetNextFrame() returns immediately because no frame is available, isn't there a risk that the application will use 100% of CPU? No, because after returning from "doGetNextFrame()", the code will then re-enter the event loop. It will then block (i.e., consuming no CPU) until a new event occurs (such as the arrival of a new frame). Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From jp4work at gmail.com Thu Sep 4 18:10:14 2014 From: jp4work at gmail.com (JIA Pei) Date: Fri, 5 Sep 2014 09:10:14 +0800 Subject: [Live-devel] How to generate shared libraries for live555? Message-ID: To whom it may concern: How to generate the shared libraries for live555? say: libBasicUsageEnvironment.so libgroupsock.so libliveMedia.so libUsageEnvironment.so ??? Cheers -- Pei JIA Email: jp4work at gmail.com cell: +1 604-362-5816 Welcome to Vision Open http://www.visionopen.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From cgomes at iinet.com Thu Sep 4 22:38:05 2014 From: cgomes at iinet.com (Joy Carlos Gomes) Date: Thu, 4 Sep 2014 22:38:05 -0700 Subject: [Live-devel] OpenRTSP with Open Broadcast Software Message-ID: Hi I am trying to receive an RTSP stream into Open Broadcast Software. I currently use VLC (libVLC), however we have some hardware issues which causes BYE messages to be sent. VLC doesn't handle these gracefully, so I am looking for another RTSP streamer. *Question:* libVLC is providing pixelData that is set on the texture. I am wondering how do I get the pixelData via openRTSP? I am new to video programming and wondering where to start. *Call to set pixel data on OBS texture:* GetTexture()->SetImage(*pixelData, GS_IMAGEFORMAT_BGRA, _this->GetTexture()->Width() * 4); Thanks, ~Carlos~ -------------- next part -------------- An HTML attachment was scrubbed... URL: From jshanab at jfs-tech.com Sun Sep 7 03:37:30 2014 From: jshanab at jfs-tech.com (Jeff Shanab) Date: Sun, 7 Sep 2014 06:37:30 -0400 Subject: [Live-devel] OpenRTSP with Open Broadcast Software In-Reply-To: References: Message-ID: Does open broadcast need raw video as input? Live555 is a streamer only. I decoded my frames using libavcodec(FFMPEG) and then apply them to a texture using opengl or directx on windows,linux,mac,android or iphone. same procedure for all. Libavcodec decodes to YUV422 data, That is a luminance value for each pixel and half as many of each chroma values. I used a shader in HSGL to convert the YUV to RGBA in video hardware, but libavcodec has a sws_scale function that can scale and color convert in main memory. http://en.wikipedia.org/wiki/YUV http://en.wikipedia.org/wiki/Libavcodec useful but dated: http://blog.tomaka17.com/2012/03/libavcodeclibavformat-tutorial/ On Fri, Sep 5, 2014 at 1:38 AM, Joy Carlos Gomes wrote: > Hi I am trying to receive an RTSP stream into Open Broadcast Software. 
I > currently use VLC (libVLC), however we have some hardware issues which > causes BYE messages to be sent. VLC doesn't handle these gracefully, so I > am looking for another RTSP streamer. > > *Question:* > libVLC is providing pixelData that is set on the texture. I am wondering > how do I get the pixelData via openRTSP? I am new to video programming and > wondering where to start. > > *Call to set pixel data on OBS texture:* > GetTexture()->SetImage(*pixelData, GS_IMAGEFORMAT_BGRA, > _this->GetTexture()->Width() * 4); > > Thanks, > ~Carlos~ > > _______________________________________________ > live-devel mailing list > live-devel at lists.live555.com > http://lists.live555.com/mailman/listinfo/live-devel > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Sun Sep 7 19:02:55 2014 From: finlayson at live555.com (Ross Finlayson) Date: Sun, 7 Sep 2014 19:02:55 -0700 Subject: [Live-devel] How to generate shared libraries for live555? In-Reply-To: References: Message-ID: <208A2CFE-9ADF-4AF7-9955-E0A7C4FC7F6F@live555.com> > How to generate the shared libraries for live555? say: > > libBasicUsageEnvironment.so > libgroupsock.so > libliveMedia.so > libUsageEnvironment.so ??? If you're building the code for Linux, you can use the "config.linux-with-shared-libraries" configuration file. I.e., run ./genMakefiles linux-with-shared-libraries If you're running some OS other than Linux, you'll need to make your own configuration file (perhaps using "config.linux-with-shared-libraries" as a model). Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... 
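[Editorial note] Going back to Jeff's earlier reply about converting decoded YUV to RGBA for a texture: if you convert on the CPU rather than via a shader or sws_scale, the per-pixel math is small. A minimal sketch using fixed-point BT.601 video-range coefficients — the exact matrix depends on the stream's colorimetry, so treat the constants as an assumption:

```cpp
#include <algorithm>
#include <cstdint>

// Clamp an intermediate value to the 8-bit range.
inline uint8_t clamp8(int v) {
    return static_cast<uint8_t>(std::min(255, std::max(0, v)));
}

// Convert one BT.601 video-range YUV sample to 8-bit RGB.
// Y is nominally in [16, 235]; U and V are centered on 128.
inline void yuvToRgb(uint8_t y, uint8_t u, uint8_t v,
                     uint8_t& r, uint8_t& g, uint8_t& b) {
    // Fixed-point (x256) coefficients to avoid float math per pixel.
    const int c = (y - 16) * 298;   // 1.164 * 256 ~= 298
    const int d = u - 128;
    const int e = v - 128;
    r = clamp8((c + 409 * e + 128) >> 8);            // 1.596 * 256 ~= 409
    g = clamp8((c - 100 * d - 208 * e + 128) >> 8);  // 0.391, 0.813
    b = clamp8((c + 516 * d + 128) >> 8);            // 2.018 * 256 ~= 516
}
```

For 4:2:2 data, each U/V pair is shared by two horizontally adjacent luma samples. In practice sws_scale (or a GPU shader, as Jeff did) is usually faster; this only shows what those paths compute.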
URL: From longnv at elcom.com.vn Sun Sep 7 21:55:37 2014 From: longnv at elcom.com.vn (Nguyen Van Long) Date: Mon, 8 Sep 2014 11:55:37 +0700 Subject: [Live-devel] RTSPClient Auto re-connect when connection lost Message-ID: <000601cfcb21$2262bfd0$67283f70$@com.vn> Hi Ross, Based on "openRTSP", I wrote my own Rtsp client class that handle my own operations and everything works perfectly. My question is how to detect the connection lost (unplug the capable, server down .) and how to re-connect to server? I've thought that if something's wrong after sending play command, the "continueAfterPlay" function will be called and I could handle retrying in this function (sending Describe Command again .). Is that the right way to solve my issue or any your suggestions? Thanks! Regards, Nguyen Van Long (Mr) ESoft - Software Development ---------------------------------------------------------------------------- - ELCOM CORP Add: Elcom Building, Duy Tan Street, Cau Giay District, Ha Noi Mobile: (+84) 936 369 326 | Skype: Pfiev.long | Web: www.elcom.com.vn -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Mon Sep 8 01:00:09 2014 From: finlayson at live555.com (Ross Finlayson) Date: Mon, 8 Sep 2014 01:00:09 -0700 Subject: [Live-devel] RTSPClient Auto re-connect when connection lost In-Reply-To: <000601cfcb21$2262bfd0$67283f70$@com.vn> References: <000601cfcb21$2262bfd0$67283f70$@com.vn> Message-ID: <01EF2B90-6C12-41AE-A791-D1ACD2015640@live555.com> > Based on ?openRTSP?, I wrote my own Rtsp client class that handle my own operations and everything works perfectly. My question is how to detect the connection lost (unplug the capable, server down ?) and how to re-connect to server? Probably the best way to do this is to periodically check whether new incoming RTP packets have been received. If no new RTP packets have been received, then you can assume that the stream has ended. 
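[Editorial note] The check Ross suggests — treat the stream as dead if no new RTP packets arrive within some interval — needs only a per-stream "last packet" timestamp inspected periodically. A minimal sketch; the class name and the timeout value are illustrative, not LIVE555 API:

```cpp
#include <chrono>

// Tracks RTP packet arrivals and reports whether the stream looks dead.
class LivenessMonitor {
public:
    using Clock = std::chrono::steady_clock;

    explicit LivenessMonitor(std::chrono::seconds timeout)
        : timeout_(timeout), lastPacket_(Clock::now()) {}

    // Call from the packet-arrival path (e.g., wherever your client sink
    // receives a frame).
    void onPacket(Clock::time_point when = Clock::now()) { lastPacket_ = when; }

    // Call from a periodic task (e.g., via scheduleDelayedTask). When it
    // returns true, tear the session down and re-run DESCRIBE/SETUP/PLAY.
    bool looksDead(Clock::time_point now = Clock::now()) const {
        return now - lastPacket_ > timeout_;
    }

private:
    std::chrono::seconds timeout_;
    Clock::time_point lastPacket_;
};
```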
Note, for example, how we implement the "-D " option in "openRTSP": http://www.live555.com/openRTSP/#playing-time Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From thilina91 at gmail.com Mon Sep 8 04:42:58 2014 From: thilina91 at gmail.com (Thilina Jayanath) Date: Mon, 8 Sep 2014 11:42:58 +0000 (UTC) Subject: [Live-devel] No audio when streaming mkv , mpg Message-ID: I downloaded the source code and compiled the code to make the live555MediaServer exe file. when i stream .mkv and .mpg file view using vlc, it does not receive an audio stream as shown in the image. Image -> http://lookpic.com/O/i2/1433/9s4lV8AC.jpeg Can someone please help me with this. Thank you in advance! From finlayson at live555.com Mon Sep 8 15:53:03 2014 From: finlayson at live555.com (Ross Finlayson) Date: Mon, 8 Sep 2014 15:53:03 -0700 Subject: [Live-devel] No audio when streaming mkv , mpg In-Reply-To: References: Message-ID: <2333DF61-49DC-44BF-9C53-8C195AD782B8@live555.com> > I downloaded the source code and compiled the code to make the > live555MediaServer exe file. when i stream .mkv and .mpg file view using > vlc, it does not receive an audio stream as shown in the image. > > Image -> http://lookpic.com/O/i2/1433/9s4lV8AC.jpeg > > Can someone please help me with this See http://www.live555.com/liveMedia/faq.html#my-file-doesnt-work Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From streaming-admin at your-wayout.com Tue Sep 9 07:38:53 2014 From: streaming-admin at your-wayout.com (Streaming Admin) Date: Tue, 9 Sep 2014 17:38:53 +0300 Subject: [Live-devel] Configure live mpeg-ts udp input Message-ID: Is it possible to (i) configure live mpeg-ts udp input (ii) store it in mpeg-ts and (iii) create timecode (MPEG2TransportStreamIndexer) on the fly to allow trick play? Thank you. 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Tue Sep 9 11:16:11 2014 From: finlayson at live555.com (Ross Finlayson) Date: Tue, 9 Sep 2014 11:16:11 -0700 Subject: [Live-devel] Configure live mpeg-ts udp input In-Reply-To: References: Message-ID: <7D337537-0B06-42E0-A2D4-0B5149836E62@live555.com> The "MPEG2IFrameIndexFromTransportStream" class (which is used to implement the "MPEG2TransportStreamIndexer" application) can read from any source that delivers discrete 188-byte MPEG Transport Stream packets (one at a time). So it can read from a MPEG Transport Stream UDP source, provided that each UDP packet contains only a single 188-byte Transport Stream packet. You should note, however, that 'trick play' operations will work only with a fixed-size Transport Stream file (with a corresponding fixed-size 'index file'). So you can't do 'trick play' operations on a file while it's still growing. Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From Alan.Martinovic at zenitel.com Wed Sep 10 01:12:53 2014 From: Alan.Martinovic at zenitel.com (Alan Martinovic) Date: Wed, 10 Sep 2014 08:12:53 +0000 Subject: [Live-devel] Subclassing RTSPServer events Message-ID: <916A03CCEB30DF44AD98D4CFDC7448D00D59916C@nooslzsmx1.zenitelcss.com> I have a streaming source that already provides me with a h264 RTP stream and would like to use live555 just as a RTSP server. I require neither RTP packetization nor RTCP generation. The ideal use case would be to be able to run custom scripts on events that represent RTSP commands, and be able to send a modified SDP. Would the recommended approach be to subclass the RTSPServer, completely ignore the ServerMediaSession (and the source-sink mechanism) and reimplement the handleCmd* commands? 
DISCLAIMER: This e-mail may contain confidential and privileged material for the sole use of the intended recipient. Any review, use, distribution or disclosure by others is strictly prohibited. If you are not the intended recipient (or authorized to receive for the recipient), please contact the sender by reply e-mail and delete all copies of this message. -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Wed Sep 10 08:52:41 2014 From: finlayson at live555.com (Ross Finlayson) Date: Wed, 10 Sep 2014 08:52:41 -0700 Subject: [Live-devel] Subclassing RTSPServer events In-Reply-To: <916A03CCEB30DF44AD98D4CFDC7448D00D59916C@nooslzsmx1.zenitelcss.com> References: <916A03CCEB30DF44AD98D4CFDC7448D00D59916C@nooslzsmx1.zenitelcss.com> Message-ID: > I have a streaming source that already provides me with a h264 RTP stream and would like to use live555 just as a RTSP server. > I require neither RTP packetization nor RTCP generation. > The ideal use case would be to be able to run custom scripts on events that represent RTSP commands, and be able to send a modified SDP. > > Would the recommended approach be to subclass the RTSPServer, completely ignore the ServerMediaSession (and the source-sink mechanism) and reimplement the handleCmd* commands? I believe you can get what you want without subclassing "RTSPServer" at all. The trick is to add, to your "RTSPServer" object, a "PassiveServerMediaSubsession". A "PassiveServerMediaSubsession" is used when you want your "RTSPServer" to handle a stream that already exists, rather than one that is created, on demand, for each RTSP client. The stream (that already exists) is usually multicast, rather than unicast (although "PassiveServerMediaSubsession" might still work if the stream is unicast). 
In any case, I suggest that - if you're not already doing so - you change your RTP streaming source to transmit to an IP multicast address (unless you are going across a network that doesn't support IP multicast routing). By doing this, you will allow any client (rather than just one) to receive the stream, if it wishes. Note that "PassiveServerMediaSubsession::createNew()" takes two parameters: A "RTPSink" object, and (optionally, if the stream also contains RTCP packets (which it should, if it wants to be fully standards-compliant)) a "RTCPInstance" object. Just create these objects beforehand (using the destination IP address (ideally, multicast) and port(s) for the stream), and pass them to "PassiveServerMediaSubsession::createNew()". Then add the "PassiveServerMediaSubsession" object as a 'subsession' to a "ServerMediaSession" object, and add the "ServerMediaSession" object to your "RTSPServer". For an illustration of how this is done, note the code for the "testH264VideoStreamer" demo application, in the "testProgs" directory. (Of course, one big difference between your application and "testH264VideoStreamer" is that you *won't* be calling "startPlaying()" to start transmitting RTP packets, because your source is already doing that.) One important note, however. Because your stream is H.264, your "RTPSink" object will actually be a "H264VideoRTPSink". Because this will not be fed by an input source, you *must* use one of the 'optional variants' of "H264VideoRTPSink::createNew()" that specify the stream's SPS and PPS NAL units. Finally, I suggest that initially you use "openRTSP" as a RTSP client to test your server, before testing it with a media player client. Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed...
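[Editorial note] For reference, the DESCRIBE response the server ends up advertising for such a pre-existing H.264 stream is an SDP description whose essential fields look like the sketch below. LIVE555 generates this for you from the "H264VideoRTPSink"; this sketch only shows the shape of the answer, and the address, port, and sprop values used here are made up:

```cpp
#include <sstream>
#include <string>

// Build a minimal SDP media description for an existing H.264/RTP stream.
// All parameter values are caller-supplied; spropParameterSets is the
// base64 "SPS,PPS" pair that H264VideoRTPSink::createNew() would be given.
std::string buildH264Sdp(const std::string& destAddress, int rtpPort,
                         int payloadType,
                         const std::string& spropParameterSets) {
    std::ostringstream sdp;
    sdp << "v=0\r\n"
        << "o=- 0 1 IN IP4 " << destAddress << "\r\n"
        << "s=H.264 stream (passive)\r\n"
        << "t=0 0\r\n"
        << "m=video " << rtpPort << " RTP/AVP " << payloadType << "\r\n"
        << "c=IN IP4 " << destAddress << "\r\n"
        << "a=rtpmap:" << payloadType << " H264/90000\r\n"   // 90 kHz RTP clock
        << "a=fmtp:" << payloadType
        << " packetization-mode=1;sprop-parameter-sets=" << spropParameterSets
        << "\r\n";
    return sdp.str();
}
```

This also shows why the SPS/PPS NAL units must be supplied up front: without an input source feeding the sink, they are the only way the sprop-parameter-sets line can be filled in.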
URL: From gilles.chanteperdrix at xenomai.org Wed Sep 10 14:28:53 2014 From: gilles.chanteperdrix at xenomai.org (Gilles Chanteperdrix) Date: Wed, 10 Sep 2014 23:28:53 +0200 Subject: [Live-devel] live555MediaServer refresh In-Reply-To: References: <53BBA74A.90909@ismb.it> Message-ID: <5410C295.9000602@xenomai.org> On 07/09/2014 06:00 AM, Ross Finlayson wrote: > Actually, thinking about this a bit more - I'm going to include your > 'hack' in the next release of the "LIVE555 Streaming Media" software, > because it's generally useful - for any type of file - if the > underlying file has changed since the last time that it was > requested. Hi Ross, I believe this change causes the media clients to only be able to receive the first track of a multi-track file, such as an mkv with video and audio. The answer to SETUP of the second track and others is a "400 Bad Request". Please find a test file here: http://www.xenomai.org/video-tests/elephant_dreams/ed_1024_512kb/ed_1024_512kb.mkv This may be the same issue as the one reported by Thilina Jayanath. Regards. 
P.S: the entire conversation between client and server:

OPTIONS rtsp://192.168.0.1:8554/ed_1024_512kb.mkv RTSP/1.0
CSeq: 2
User-Agent: LibVLC/2.0.6 (LIVE555 Streaming Media v2013.01.15)

RTSP/1.0 200 OK
CSeq: 2
Date: Wed, Sep 10 2014 21:21:50 GMT
Public: OPTIONS, DESCRIBE, SETUP, TEARDOWN, PLAY, PAUSE, GET_PARAMETER, SET_PARAMETER

DESCRIBE rtsp://192.168.0.1:8554/ed_1024_512kb.mkv RTSP/1.0
CSeq: 3
User-Agent: LibVLC/2.0.6 (LIVE555 Streaming Media v2013.01.15)
Accept: application/sdp

RTSP/1.0 200 OK
CSeq: 3
Date: Wed, Sep 10 2014 21:21:50 GMT
Content-Base: rtsp://192.168.0.1:8554/ed_1024_512kb.mkv/
Content-Type: application/sdp
Content-Length: 815

v=0
o=- 1410384110913775 1 IN IP4 192.168.0.1
s=Matroska video+audio+(optional)subtitles, streamed by the LIVE555 Media Server
i=ed_1024_512kb.mkv
t=0 0
a=tool:LIVE555 Streaming Media v2014.08.26
a=type:broadcast
a=control:*
a=range:npt=0-653.792
a=x-qt-text-nam:Matroska video+audio+(optional)subtitles, streamed by the LIVE555 Media Server
a=x-qt-text-inf:ed_1024_512kb.mkv
m=video 0 RTP/AVP 96
c=IN IP4 0.0.0.0
b=AS:500
a=rtpmap:96 H264/90000
a=fmtp:96 packetization-mode=1;profile-level-id=42C00D;sprop-parameter-sets=Z0LADatA2P8ngIgAAAMACAAAAwGEeKFV,aM48gA==
a=control:track1
m=audio 0 RTP/AVP 97
c=IN IP4 0.0.0.0
b=AS:96
a=rtpmap:97 MPEG4-GENERIC/48000/2
a=fmtp:97 streamtype=5;profile-level-id=1;mode=AAC-hbr;sizelength=13;indexlength=3;indexdeltalength=3;config=1190
a=control:track2

SETUP rtsp://192.168.0.1:8554/ed_1024_512kb.mkv/track1 RTSP/1.0
CSeq: 4
User-Agent: LibVLC/2.0.6 (LIVE555 Streaming Media v2013.01.15)
Transport: RTP/AVP;unicast;client_port=34620-34621

RTSP/1.0 200 OK
CSeq: 4
Date: Wed, Sep 10 2014 21:21:50 GMT
Transport: RTP/AVP;unicast;destination=192.168.0.2;source=192.168.0.1;client_port=34620-34621;server_port=6970-6971
Session: D796241C;timeout=65

SETUP rtsp://192.168.0.1:8554/ed_1024_512kb.mkv/track2 RTSP/1.0
CSeq: 5
User-Agent: LibVLC/2.0.6 (LIVE555 Streaming Media v2013.01.15)
Transport: RTP/AVP;unicast;client_port=54846-54847
Session: D796241C

RTSP/1.0 400 Bad Request
Date: Wed, Sep 10 2014 21:21:50 GMT
Allow: OPTIONS, DESCRIBE, SETUP, TEARDOWN, PLAY, PAUSE, GET_PARAMETER, SET_PARAMETER

PLAY rtsp://192.168.0.1:8554/ed_1024_512kb.mkv/ RTSP/1.0
CSeq: 6
User-Agent: LibVLC/2.0.6 (LIVE555 Streaming Media v2013.01.15)
Session: D796241C
Range: npt=0.000-

RTSP/1.0 200 OK
CSeq: 6
Date: Wed, Sep 10 2014 21:21:50 GMT
Range: npt=0.000-
Session: D796241C
RTP-Info: url=rtsp://192.168.0.1:8554/ed_1024_512kb.mkv/track1;seq=6808;rtptime=2631818198,url=rtsp://192.168.0.1:8554/ed_1024_512kb.mkv/track2;seq=0;rtptime=0

-- Gilles. From gilles.chanteperdrix at xenomai.org Wed Sep 10 14:34:15 2014 From: gilles.chanteperdrix at xenomai.org (Gilles Chanteperdrix) Date: Wed, 10 Sep 2014 23:34:15 +0200 Subject: [Live-devel] live555MediaServer refresh In-Reply-To: <5410C295.9000602@xenomai.org> References: <53BBA74A.90909@ismb.it> <5410C295.9000602@xenomai.org> Message-ID: <5410C3D7.6000902@xenomai.org> On 09/10/2014 11:28 PM, Gilles Chanteperdrix wrote:
> RTSP/1.0 400 Bad Request
> Date: Wed, Sep 10 2014 21:21:50 GMT
> Allow: OPTIONS, DESCRIBE, SETUP, TEARDOWN, PLAY, PAUSE, GET_PARAMETER, SET_PARAMETER
Also, the Bad Request answer does not have a CSeq, which causes an RTSP client to complain. -- Gilles. From streaming-admin at your-wayout.com Thu Sep 11 06:16:31 2014 From: streaming-admin at your-wayout.com (Streaming Admin) Date: Thu, 11 Sep 2014 16:16:31 +0300 Subject: [Live-devel] Configure live mpeg-ts udp input Message-ID: Another approach I was thinking: if I split the incoming stream into 5-10 second chunks with the timestamp as the filename, and run a watcher script to generate index files, can the code be amended to continue on to the next chunk? Sorry, I am not very familiar with C and couldn't find the source which checks for EOL to see if it can be modified. -------------- next part -------------- An HTML attachment was scrubbed...
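[Editorial note] Whatever the chunking scheme, the input constraint Ross described earlier in this thread still applies: the indexer wants discrete 188-byte Transport Stream packets, so each UDP payload must be exactly one TS packet. That is easy to sanity-check before feeding data in; a small sketch (0x47 is the TS sync byte; the function name is made up):

```cpp
#include <cstddef>
#include <cstdint>

const std::size_t TS_PACKET_SIZE = 188;
const uint8_t TS_SYNC_BYTE = 0x47;

// Returns true if a received UDP payload looks like a single, well-formed
// MPEG Transport Stream packet: exactly 188 bytes, starting with 0x47.
bool isSingleTsPacket(const uint8_t* data, std::size_t len) {
    return data != nullptr && len == TS_PACKET_SIZE && data[0] == TS_SYNC_BYTE;
}
```

Many TS-over-UDP senders bundle 7 packets (1316 bytes) per datagram; such streams would need to be re-packetized before indexing.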
URL: From finlayson at live555.com Thu Sep 11 07:32:51 2014 From: finlayson at live555.com (Ross Finlayson) Date: Thu, 11 Sep 2014 07:32:51 -0700 Subject: [Live-devel] live555MediaServer refresh In-Reply-To: <5410C295.9000602@xenomai.org> References: <53BBA74A.90909@ismb.it> <5410C295.9000602@xenomai.org> Message-ID: <4B7AF537-41D3-4FEF-A927-E6DEC26F77A0@live555.com> > I believe this change causes the media clients to only be able to > receive the first track of a multi-track file, such as an mkv with > video and audio. The answer to SETUP of the second track and others is > a "400 Bad Request". Yes, you're right. This slipped by me; thanks for pointing it out. I've now installed a new version (2014.09.11) of the code that fixes this. (I've also updated the pre-built "LIVE555 Media Server" binary applications.) The bug affected only the "LIVE555 Media Server" application. However, to fix it, I made a change to the signature of the virtual function "RTSPServer::lookupServerMediaSession()"; this function now takes an extra (in) parameter "Boolean isFirstLookupInSession". Therefore, if you have subclassed "RTSPServer" and redefined the function "lookupServerMediaSession()", you must update your redefinition to match this new signature. Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Thu Sep 11 07:38:48 2014 From: finlayson at live555.com (Ross Finlayson) Date: Thu, 11 Sep 2014 07:38:48 -0700 Subject: [Live-devel] Configure live mpeg-ts udp input In-Reply-To: References: Message-ID: <18D97807-6928-4A9F-B297-4DB24F70BA8D@live555.com> > Another approach I was thinking, if I split the incoming stream into 5-10 seconds chunks with time-stamp as filename, run a watcher script to generate index files, can the code amended to continue to the next chunk? No. 
The Transport Stream 'trick play' code (for RTSP servers) uses a single index file, which must be static. Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.cassany at i2cat.net Fri Sep 12 06:03:18 2014 From: david.cassany at i2cat.net (David Cassany Viladomat) Date: Fri, 12 Sep 2014 15:03:18 +0200 Subject: [Live-devel] Problem when multi client connect to server using OndemandServerMediaSubsession In-Reply-To: <000201cfc8a8$fef7d730$fce78590$@com.vn> References: <000c01cfc762$6543d360$2fcb7a20$@com.vn> <000f01cfc825$4035da50$c0a18ef0$@com.vn> <000201cfc8a8$fef7d730$fce78590$@com.vn> Message-ID: Dear Nguyen and Ross, I am writing a reply to this email thread because our similar issue might be useful to you. Some days ago we noticed behaviour similar to what Nguyen described recently. We have an RTSP server that uses the live555 library, and we were facing a similar issue of experiencing packet loss when having multiple clients connected to the server. Finally, today we had time to make further tests to isolate the issue, in order to exclude any performance limitation, and we concluded: 1-> one of our old cheap switches was introducing about 3% of packet loss (keeping network equipment updated and fully functional is a must if you don't want to go mad ;-) ) 2-> our server has two interfaces, one with a public IP and another behind the NAT in our company. It seems that we only have packet losses when the server is using both interfaces simultaneously. Having clients only on the public interface, we could stream to up to six clients (with HD H.264 streams) without any losses, and the same happened when checking only clients from our private LAN. But the interesting point is that having just a single client in the private LAN simultaneously with a single client outside our LAN, we faced up to 5% of packet loss.
Our workaround is to run the server with a single network interface (and not be tempted to use both at the same time), which works like a charm right now. Hopefully this might be useful to you. Best regards, David Cassany 2014-09-05 3:30 GMT+02:00 Nguyen Van Long : > Dear Ross, > > I would like to give many thanks for your suggestions. It seems you are > right in my case. I will try to use another codec (H.264) as well as change > from unicast to multicast streaming. > > Thanks again for your help. > > > > *From:* Nguyen Van Long [mailto:longnv at elcom.com.vn] > *Sent:* Thursday, September 04, 2014 4:47 PM > *To:* live-devel at lists.live555.com > *Cc:* longnv at elcom.com.vn > *Subject:* RE: Problem when multi client connect to server using > OndemandServerMediaSubsession > > > > Dear Ross, > > I had seen the answer at > http://lists.live555.com/pipermail/live-devel/2014-September/thread.html > although I didn't receive the email from the mailing list (maybe our mail > server was down, or something else caused the email to be lost?) and I would like to thank > you for your quick answer. > > Following your suggestions, I use VLC as the client, running on 4 separate computers (I think the hardware in each computer is OK to run just one VLC). "reuseFirstSource" is of course set to true, but nothing changed; the video again gets slower with bad images when the 4th or 5th client connects to the server (all clients use RTP-over-UDP to request from the server). > > I also use "openRTSP" to get data from my server and write it into files. The same thing happens when the 4th or 5th client connects to the server (with fewer than 4 connections, everything is OK). > > I'd be glad if you could give me some more suggestions about this problem. > > > > Thanks!
> > > > *From:* Nguyen Van Long [mailto:longnv at elcom.com.vn] > *Sent:* Wednesday, September 03, 2014 5:33 PM > *To:* live-devel at lists.live555.com > *Cc:* longnv at elcom.com.vn > *Subject:* Problem when multi client connect to server using > OndemandServerMediaSubsession > > > > Dear Live555 team, > > I recently developed an application that uses the live555 library and performs > simple tasks: getting video data from a camera, decoding it, drawing > something on the video frames, re-encoding them, and streaming. My application > is a client (to the camera) and also a server when streaming video data. > > To get video data from the camera, I modified the testRTSPClient in testProgs. > Received data is then decoded using ffmpeg, and some text and shapes are drawn > using Qt. After that, I use ffmpeg again to encode the video frames (the codec is MPEG-4) > and put the output into a queue which is streamed later. > > To stream video from the queue, I wrote a class based on DeviceSource, whose > doGetNextFrame function reads an MPEG-4 packet from the queue and calls > FramedSource::afterGetting(this) when data is available. I also wrote a class > called Mpeg4LiveServerMediaSubsession, which is subclassed from > OnDemandServerMediaSubsession and re-implements three virtual functions > (getAuxSDPLine, createNewFrameSource, createNewRTPSink). The > createNewFrameSource function actually returns > MPEG4VideoStreamDiscreteFramer::createNew(), whose input source parameter is > my DeviceSource-based class described above. > > I use VLC as the client to connect to my server and play the video stream. > Everything seems OK and my application works quite well when fewer > than 4 clients are connected to the server. When the 4th client connects, > the video of all clients gets slower, the image is very bad, and I cannot see > the video content clearly...
> > I don't think the problem is with my network, because I use a LAN with good > capacity, and even when my server and client (VLC) are on the same computer, this > problem still happens. > > I have some more information here: > > My processor: Core i3 3.36 GHz, Memory (RAM): 4GB. When 4 clients connect > to the server, the program uses 30% of memory, 49% of CPU. > > When 1 client (VLC) connects to the server, from the VLC tool I see that the > content bitrate is about 6000 - 7000 kb/s. It reduces to 4000 - 5000 kb/s > when 2 clients connect to the server, 2000 - 2500 kb/s when 3 clients connect > to the server, and 600 - 1500 kb/s when 4 clients connect to the server. > > Do you have any ideas about my problem, and any suggestions to improve > video quality when many clients connect to the server? > > Thanks! > > > > *Regards,* > > > > *Nguyen Van Long (Mr)* > > ESoft - Software Development > > ----------------------------------------------------------------------------- > > *ELCOM CORP * > > Add: Elcom Building, Duy Tan Street, Cau Giay District, Ha Noi > > Mobile: (+84) 936 369 326 | Skype: Pfiev.long | Web: www.elcom.com.vn > > _______________________________________________ > live-devel mailing list > live-devel at lists.live555.com > http://lists.live555.com/mailman/listinfo/live-devel > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gerard.castillo at i2cat.net Mon Sep 15 07:37:29 2014 From: gerard.castillo at i2cat.net (Gerard Castillo Lasheras) Date: Mon, 15 Sep 2014 16:37:29 +0200 Subject: [Live-devel] RTSP audio & video synchronization issue Message-ID: Hi all, I have some doubts on liveMedia regarding RTSP audio & video synchronization. We have developed a streaming application using the live555 library to stream via RTSP. It works like a charm when streaming only one stream per session (audio or video). However, problems appear when I try to stream both audio and video using the same RTSP session.
I tried playing it using VLC and it only plays the video. In debug mode, it shows this message continuously: core audio output warning: buffer too late (-541608 us): dropped In order to understand this behaviour I used the testRTSPClient as RTSP client, modified to print also the Normal Play Time. What I could observe is that Presentation Times of both audio and video streams are respectively coherent between them. However, after some seconds running, there is an abrupt change in Presentation Time of one of the streams (usually audio), which I suppose that corresponds to RTCP synchronization. After that, there is a big gap between video and audio NPT which remains there during all the transmission, taking sometimes negative values. This is the testRTSPClient output (the timestamp gap can be observed): Stream "rtsp://192.168.10.77:8554/S22J/"; audio/PCMU: Received 1436 bytes. Presentation time: 1410784970.033291 NPT: 2.911316 Stream "rtsp://192.168.10.77:8554/S22J/"; audio/PCMU: Received 484 bytes. Presentation time: 1410784970.033291 NPT: 2.911316 Stream "rtsp://192.168.10.77:8554/S22J/"; audio/PCMU: Received 1436 bytes. Presentation time: 1410784970.053291 NPT: 2.931316 Stream "rtsp://192.168.10.77:8554/S22J/"; audio/PCMU: Received 484 bytes. Presentation time: 1410784970.053291 NPT: 2.931316 Stream "rtsp://192.168.10.77:8554/S22J/"; audio/PCMU: Received 1436 bytes. Presentation time: 1410784970.073291 NPT: 2.951316 Stream "rtsp://192.168.10.77:8554/S22J/"; audio/PCMU: Received 484 bytes. Presentation time: 1410784970.073291 NPT: 2.951316 Stream "rtsp://192.168.10.77:8554/S22J/"; video/H264: Received 1780 bytes. Presentation time: 1410784970.163530 NPT: 3.041563 Stream "rtsp://192.168.10.77:8554/S22J/"; video/H264: Received 3093 bytes. Presentation time: 1410784970.163530 NPT: 3.041563 Stream "rtsp://192.168.10.77:8554/S22J/"; video/H264: Received 9393 bytes. 
Presentation time: 1410784970.163530 NPT: 3.041563 Stream "rtsp://192.168.10.77:8554/S22J/"; video/H264: Received 5410 bytes. Presentation time: 1410784970.163530 NPT: 3.041563 Stream "rtsp://192.168.10.77:8554/S22J/"; audio/PCMU: Received 1436 bytes. Presentation time: 1410784970.093291 NPT: 2.971316 Stream "rtsp://192.168.10.77:8554/S22J/"; audio/PCMU: Received 484 bytes. Presentation time: 1410784970.093291 NPT: 2.971316 Stream "rtsp://192.168.10.77:8554/S22J/"; audio/PCMU: Received 1436 bytes. Presentation time: 1410784970.113291 NPT: 2.991316 Stream "rtsp://192.168.10.77:8554/S22J/"; audio/PCMU: Received 484 bytes. Presentation time: 1410784970.113291 NPT: 2.991316 Stream "rtsp://192.168.10.77:8554/S22J/"; video/H264: Received 1731 bytes. Presentation time: 1410784977.533251 NPT: 10.411284 Stream "rtsp://192.168.10.77:8554/S22J/"; video/H264: Received 3474 bytes. Presentation time: 1410784977.533251 NPT: 10.411284 Stream "rtsp://192.168.10.77:8554/S22J/"; video/H264: Received 11707 bytes. Presentation time: 1410784977.533251 NPT: 10.411284 Stream "rtsp://192.168.10.77:8554/S22J/"; video/H264: Received 8298 bytes. Presentation time: 1410784977.533251 NPT: 10.411284 Stream "rtsp://192.168.10.77:8554/S22J/"; audio/PCMU: Received 1436 bytes. Presentation time: 1410784970.133291 NPT: 3.011316 Stream "rtsp://192.168.10.77:8554/S22J/"; audio/PCMU: Received 484 bytes. Presentation time: 1410784970.133291 NPT: 3.011316 Stream "rtsp://192.168.10.77:8554/S22J/"; audio/PCMU: Received 1436 bytes. Presentation time: 1410784970.153291 NPT: 3.031316 Stream "rtsp://192.168.10.77:8554/S22J/"; audio/PCMU: Received 484 bytes. Presentation time: 1410784970.153291 NPT: 3.031316 Stream "rtsp://192.168.10.77:8554/S22J/"; video/H264: Received 2468 bytes. Presentation time: 1410784977.574917 NPT: 10.452950 Stream "rtsp://192.168.10.77:8554/S22J/"; video/H264: Received 4043 bytes. 
Presentation time: 1410784977.574917 NPT: 10.452950 Stream "rtsp://192.168.10.77:8554/S22J/"; video/H264: Received 12999 bytes. Presentation time: 1410784977.574917 NPT: 10.452950 Stream "rtsp://192.168.10.77:8554/S22J/"; video/H264: Received 9043 bytes. Presentation time: 1410784977.574917 NPT: 10.452950 Stream "rtsp://192.168.10.77:8554/S22J/"; audio/PCMU: Received 1436 bytes. Presentation time: 1410784970.173291 NPT: 3.051316 Stream "rtsp://192.168.10.77:8554/S22J/"; audio/PCMU: Received 484 bytes. Presentation time: 1410784970.173291 NPT: 3.051316 Stream "rtsp://192.168.10.77:8554/S22J/"; audio/PCMU: Received 1436 bytes. Presentation time: 1410784970.193291 NPT: 3.071316 Stream "rtsp://192.168.10.77:8554/S22J/"; audio/PCMU: Received 484 bytes. Presentation time: 1410784970.193291 NPT: 3.071316 Stream "rtsp://192.168.10.77:8554/S22J/"; video/H264: Received 3153 bytes. Presentation time: 1410784977.616583 NPT: 10.494616 Stream "rtsp://192.168.10.77:8554/S22J/"; video/H264: Received 3766 bytes. Presentation time: 1410784977.616583 NPT: 10.494616 Stream "rtsp://192.168.10.77:8554/S22J/"; video/H264: Received 6609 bytes. Presentation time: 1410784977.616583 NPT: 10.494616 Stream "rtsp://192.168.10.77:8554/S22J/"; video/H264: Received 4695 bytes. Presentation time: 1410784977.616583 NPT: 10.494616 I checked the RTP timestamps (in transmission) and everything seems OK (no gaps and coherent between them). This behaviour is not the one I would expect and I think it is exactly what's happening when I try to play the session using VLC. So my question is what do you think is happening here and how do you think I can solve this issue? I have the feeling that I do something wrong in transmission (maybe defining the Presentation Time) but I can't see what. Thanks in advance for any hint, Kind regards, -------------------------------------------------------- Gerard Castillo Lasheras Enginyer de Projectes Fundaci? 
i2CAT - Unitat Audiovisual SkypeID: gerardcl85 Telf.: +34.93.553.25.48 -------------------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Mon Sep 15 08:06:11 2014 From: finlayson at live555.com (Ross Finlayson) Date: Mon, 15 Sep 2014 08:06:11 -0700 Subject: [Live-devel] RTSP audio & video synchronization issue In-Reply-To: References: Message-ID: <58E4AB7A-7293-4CC8-9424-87D704BA7786@live555.com> First, NPT is completely irrelevant here; it has nothing to do with audio/video synchronization. NPT simply represents the 'time within the currently-playing clip' - i.e., analogous to a VCR display. As you can see, it's aligned with "presentation time", which is the only thing you should be concerned with here. Your problem is the sudden jump (more than 7 seconds) in presentation time - in the video stream only - from 1410784970.163530 to 1410784977.533251. That problem seems to be occurring at the server end. Your server is apparently producing this jump in presentation times (again, in the video stream only). So that's what you need to debug - i.e., try to find out why your server is generating this jump in presentation times. Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From gerard.castillo at i2cat.net Mon Sep 15 08:13:25 2014 From: gerard.castillo at i2cat.net (Gerard Castillo Lasheras) Date: Mon, 15 Sep 2014 17:13:25 +0200 Subject: [Live-devel] RTSP audio & video synchronization issue In-Reply-To: <58E4AB7A-7293-4CC8-9424-87D704BA7786@live555.com> References: <58E4AB7A-7293-4CC8-9424-87D704BA7786@live555.com> Message-ID: Hi Ross, Sorry for not specifying before regarding server side. We already checked this and there are no gaps/jumps on server side. 
Thank you in advance, Kind regards, -------------------------------------------------------- Gerard Castillo Lasheras Enginyer de Projectes Fundació i2CAT - Unitat Audiovisual SkypeID: gerardcl85 Telf.: +34.93.553.25.48 -------------------------------------------------------- 2014-09-15 17:06 GMT+02:00 Ross Finlayson : > First, NPT is completely irrelevant here; it has nothing to do with > audio/video synchronization. NPT simply represents the 'time within the > currently-playing clip' - i.e., analogous to a VCR display. As you can > see, it's aligned with "presentation time", which is the only thing you > should be concerned with here. > > Your problem is the sudden jump (more than 7 seconds) in presentation time > - in the video stream only - from 1410784970.163530 to 1410784977.533251. > That problem seems to be occurring at the server end. Your server is > apparently producing this jump in presentation times (again, in the video > stream only). So that's what you need to debug - i.e., try to find out why > your server is generating this jump in presentation times. > > Ross Finlayson > Live Networks, Inc. > http://www.live555.com/ > > > _______________________________________________ > live-devel mailing list > live-devel at lists.live555.com > http://lists.live555.com/mailman/listinfo/live-devel > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Mon Sep 15 08:32:05 2014 From: finlayson at live555.com (Ross Finlayson) Date: Mon, 15 Sep 2014 08:32:05 -0700 Subject: [Live-devel] RTSP audio & video synchronization issue In-Reply-To: References: <58E4AB7A-7293-4CC8-9424-87D704BA7786@live555.com> Message-ID: <8FC58748-956A-4E3E-BBA5-31D4CF03DA1E@live555.com> > Sorry for not specifying before regarding server side. We already checked this and there are no gaps/jumps on server side.
You should make sure that the presentation times - generated by the server - are aligned with 'wall clock' time - i.e., the times that you'd get by calling "gettimeofday()". But if you are doing this, then sorry - I have no explanation for what you are seeing. Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From gilles.chanteperdrix at xenomai.org Mon Sep 15 13:06:25 2014 From: gilles.chanteperdrix at xenomai.org (Gilles Chanteperdrix) Date: Mon, 15 Sep 2014 22:06:25 +0200 Subject: [Live-devel] live555MediaServer refresh In-Reply-To: <4B7AF537-41D3-4FEF-A927-E6DEC26F77A0@live555.com> References: <53BBA74A.90909@ismb.it> <5410C295.9000602@xenomai.org> <4B7AF537-41D3-4FEF-A927-E6DEC26F77A0@live555.com> Message-ID: <541746C1.3040906@xenomai.org> On 09/11/2014 04:32 PM, Ross Finlayson wrote: >> I believe this change causes the media clients to only be able to >> receive the first track of a multi-track file, such as an mkv with >> video and audio. The answer to SETUP of the second track and others >> is a "400 Bad Request". > > Yes, you're right. This slipped by me; thanks for pointing it out. > > I've now installed a new version (2014.09.11) of the code that fixes > this. (I've also updated the pre-built "LIVE555 Media Server" binary > applications.) Ok, thanks for the fix. > > The bug affected only the "LIVE555 Media Server" application. > However, to fix it, I made a change to the signature of the virtual > function "RTSPServer::lookupServerMediaSession()"; this function now > takes an extra (in) parameter "Boolean isFirstLookupInSession". > > Therefore, if you have subclassed "RTSPServer" and redefined the > function "lookupServerMediaSession()", you must update your > redefinition to match this new signature. Yes, I have subclassed RTSPServer, thanks again. -- Gilles.
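For anyone who needs to update such a subclass, the signature change can be sketched like this. The classes below are minimal self-contained stand-ins for the real live555 `RTSPServer` and `ServerMediaSession` (whose full declarations are not reproduced here); only the two-parameter virtual-function signature itself is taken from the message above.

```cpp
#include <cstddef>

typedef bool Boolean;  // live555 defines its own Boolean type

// Stub standing in for the real ServerMediaSession class:
struct ServerMediaSession {};

// Stub standing in for RTSPServer:
struct RTSPServerStub {
  // Old signature (pre-2014.09.11):
  //   virtual ServerMediaSession* lookupServerMediaSession(char const* streamName);
  // New signature, with the extra (in) parameter:
  virtual ServerMediaSession* lookupServerMediaSession(
      char const* streamName, Boolean isFirstLookupInSession) {
    (void)streamName; (void)isFirstLookupInSession;
    return NULL;
  }
  virtual ~RTSPServerStub() {}
};

struct MyRTSPServer : RTSPServerStub {
  // A redefinition must now match the two-parameter signature exactly;
  // keeping the old one-parameter form would silently stop overriding
  // the base-class function.
  ServerMediaSession* lookupServerMediaSession(
      char const* streamName, Boolean isFirstLookupInSession) {
    (void)streamName; (void)isFirstLookupInSession;
    static ServerMediaSession sms;  // reused across lookups in this sketch
    return &sms;
  }
};
```

With C++11 or later, marking the redefinition `override` makes the compiler reject a stale one-parameter signature instead of silently hiding the base function.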
From gongfen at vgaic.com Mon Sep 15 22:54:27 2014 From: gongfen at vgaic.com (gongfen at vgaic.com) Date: Tue, 16 Sep 2014 13:54:27 +0800 Subject: [Live-devel] How to set H264 and aac live frame timestamp ? Message-ID: <2014091613542585296915@vgaic.com> Dear Sir, How do I set the timestamps of live H264 and AAC frames? I use live555 to build an RTSP server from my H264/AAC live stream. First, I know every frame's timestamp and frame length from two Linux FIFOs, and I use ByteStreamFileSource.cpp and ADTSAudioFileSource.cpp to get the frame data. For H264/AAC sync, I use testProgs/testOnDemandRTSPServer.cpp to do:

ServerMediaSession* sms = ServerMediaSession::createNew(*env, streamName, streamName, descriptionString);
sms->addSubsession(H264VideoFileServerMediaSubsession::createNew(*env, inputFileName, reuseFirstSource));
sms->addSubsession(ADTSAudioFileServerMediaSubsession::createNew(*env, inputFileName3, reuseFirstSource));

Everything is good, but when I use VLC to play, after only about 30 minutes it breaks. The VLC debug message is: avcodec error: more than 5 seconds of late video -> dropping frame (computer too slow ?)
main warning: picture is too late to be displayed (missing 656606 ms)
main warning: picture is too late to be displayed (missing 656602 ms)
main warning: picture is too late to be displayed (missing 656598 ms)
main warning: picture is too late to be displayed (missing 656262 ms)
main warning: picture is too late to be displayed (missing 656298 ms)

I found that the timestamp code in ByteStreamFileSource.cpp is this:

// Set the 'presentation time':
if (fPlayTimePerFrame > 0 && fPreferredFrameSize > 0) {
  if (fPresentationTime.tv_sec == 0 && fPresentationTime.tv_usec == 0) {
    // This is the first frame, so use the current time:
    gettimeofday(&fPresentationTime, NULL);
  } else {
    // Increment by the play time of the previous data:
    unsigned uSeconds = fPresentationTime.tv_usec + fLastPlayTime;
    fPresentationTime.tv_sec += uSeconds/1000000;
    fPresentationTime.tv_usec = uSeconds%1000000;
  }
  // Remember the play time of this data:
  fLastPlayTime = (fPlayTimePerFrame*fFrameSize)/fPreferredFrameSize;
  fDurationInMicroseconds = fLastPlayTime;
} else {
  // We don't know a specific play time duration for this data,
  // so just record the current time as being the 'presentation time':
  gettimeofday(&fPresentationTime, NULL);
}

And I checked the timestamp in liveMedia/H264VideoStreamFramer.cpp; the actual timestamp code is this:

// Note that the presentation time for the next NAL unit will be different:
struct timeval& nextPT = usingSource()->fNextPresentationTime; // alias
nextPT = usingSource()->fPresentationTime;
double nextFraction = nextPT.tv_usec/1000000.0 + 1/usingSource()->fFrameRate;
unsigned nextSecsIncrement = (long)nextFraction;
nextPT.tv_sec += (long)nextSecsIncrement;
nextPT.tv_usec = (long)((nextFraction - nextSecsIncrement)*1000000);

It uses the frame rate to get the timestamp. So, what if I set the true video timestamp into "nextPT.tv_sec and nextPT.tv_usec"? Is there anything that I'm missing? I found that if I change this, the same problem occurs.
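As an aside, the carry arithmetic in the first quoted snippet can be checked in isolation. This is a standalone sketch (the `TimeVal` struct and function name below are illustrative stand-ins, not live555 API): the first frame is stamped with the current wall-clock time, and each later frame adds the previous frame's play time in microseconds, carrying any overflow into the seconds field.

```cpp
// Minimal standalone model of how ByteStreamFileSource advances the
// presentation time. Member names mirror the snippet quoted above, but
// this is a simplified sketch, not library code.
struct TimeVal { long tv_sec; long tv_usec; };

void advancePresentationTime(TimeVal& fPresentationTime, unsigned fLastPlayTime) {
  // Increment by the play time (in microseconds) of the previous data,
  // carrying microsecond overflow into the seconds field:
  unsigned uSeconds = fPresentationTime.tv_usec + fLastPlayTime;
  fPresentationTime.tv_sec += uSeconds / 1000000;
  fPresentationTime.tv_usec = uSeconds % 1000000;
}
```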
-------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Tue Sep 16 01:46:29 2014 From: finlayson at live555.com (Ross Finlayson) Date: Tue, 16 Sep 2014 01:46:29 -0700 Subject: [Live-devel] How to set H264 and aac live frame timestamp ? In-Reply-To: <2014091613542585296915@vgaic.com> References: <2014091613542585296915@vgaic.com> Message-ID: > And I use ByteStreamFileSource.cpp and ADTSAudioFileSource.cpp to get the frame data. > > For h264/aac sync, I use testProgs/testOnDemandRTSPServer.cpp to do: > > ServerMediaSession* sms = ServerMediaSession::createNew(*env, streamName, streamName, descriptionString); > sms->addSubsession(H264VideoFileServerMediaSubsession::createNew(*env, inputFileName, reuseFirstSource)); > sms->addSubsession(ADTSAudioFileServerMediaSubsession::createNew(*env, inputFileName3, reuseFirstSource)); Using a byte stream as input works well when you are streaming just a single medium (audio or video). However, if you are streaming both audio and video, and want them properly synchronized, then you *cannot* use byte streams as input (because, as you discovered, you don't get precise presentation times for each frame). Instead - if you are streaming both audio and video - then each input source must deliver *discrete* frames (i.e., one frame at a time), with each frame being given a presentation time ("fPresentationTime") when it is encoded. Specifically: You will need to define new subclass(es) of "FramedSource" for your audio and video inputs. You will also need to define new subclasses of "OnDemandServerMediaSubsession" for your audio and video streams. In particular: - For audio, your subclass will redefine the "createNewStreamSource()" virtual function to create an instance of your new audio source class (that delivers one AAC frame at a time).
- For video, your subclass will redefine the "createNewStreamSource()" virtual function to create an instance of your new video source class (that delivers one H.264 NAL unit at a time - with each H.264 NAL unit *not* having an initial 0x00 0x00 0x00 0x01 'start code'). It should then feed this into a "H264VideoStreamDiscreteFramer" (*not* a "H264VideoStreamFramer"). Your implementation of the "createNewRTPSink()" virtual function may be the same as in "H264VideoFileServerMediaSubsession", but you may prefer instead to use one of the alternative forms of "H264VideoRTPSink::createNew()" that takes SPS and PPS NAL units as parameters. (If you do that, then you won't need to insert SPS and PPS NAL units into your input stream.) Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed...
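To illustrate the 'no start code' requirement above, here is a small self-contained helper. The function name is hypothetical (it is not part of the live555 API); it merely shows how a source class might detect and skip an Annex B start code so that only the bare NAL unit is delivered to "H264VideoStreamDiscreteFramer".

```cpp
#include <cstddef>

// Hypothetical helper (not part of live555): report the length of an
// Annex B start code (0x00 0x00 0x01 or 0x00 0x00 0x00 0x01) at the
// front of a buffer holding one NAL unit, so the caller can skip it
// and deliver the bare NAL unit to the discrete framer.
size_t startCodeLength(unsigned char const* data, size_t len) {
  if (len >= 4 && data[0] == 0 && data[1] == 0 && data[2] == 0 && data[3] == 1)
    return 4;
  if (len >= 3 && data[0] == 0 && data[1] == 0 && data[2] == 1)
    return 3;
  return 0;  // no start code: the buffer already holds a bare NAL unit
}
```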
URL: From ali at and-or.com Thu Sep 18 05:04:42 2014 From: ali at and-or.com (Muhammad Ali) Date: Thu, 18 Sep 2014 17:04:42 +0500 Subject: [Live-devel] OpenRTSP stream delay increasing with time Message-ID: I am using OpenRTSP to open an RTSP stream (IP camera) and send it to ffplay using Unix pipes as input. I see the video playing roughly 1 to 1.5 sec behind live (a direct RTSP stream view of the IP cam). Here is my command line:

./OpenRTSP -v -Q -D 1 -n rtsp://:554/11 | ffplay -loglevel verbose -fflags nobuffer -i pipe:0

Now this delay slowly keeps increasing, from 1.5 sec up to 4-5 seconds in about 20 minutes, and I assume it will continue to increase with time. So my questions are: 1 - What is causing the initial delay? 2 - Why does that delay keep increasing? Is there a way to disable "pre-caching" of frames in OpenRTSP? I compare the delay by watching two videos of the same IP camera. One is configured at angelcam.com and the other I view locally via the above command line. Surprisingly, it is always the local stream that gets delayed; the Angelcam stream keeps functioning as it started. Any help? -- Muhammad Ali And Or Logic www.andorlogic.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Thu Sep 18 05:45:08 2014 From: finlayson at live555.com (Ross Finlayson) Date: Thu, 18 Sep 2014 05:45:08 -0700 Subject: [Live-devel] OpenRTSP stream delay increasing with time In-Reply-To: References: Message-ID: <79F92D0B-408C-4C58-B43E-B40ECD5B740B@live555.com> > Here is my command line: > > ./OpenRTSP -v -Q -D 1 -n rtsp://:554/11 | ffplay -loglevel verbose -fflags nobuffer -i pipe:0 > > Now this delay slowly keeps increasing, from 1.5 sec up to 4-5 seconds in about 20 minutes, and I assume it will continue to increase with time. > > So my questions are: > 1 - What is causing the initial delay? I don't know, but it's not "openRTSP".
(Our RTSP/RTP client software does not include any buffering, or significant delay.) The delay is probably some combination of the OS pipe (i.e., the "!") and "ffplay" itself (i.e., the decoding/rendering overhead). > Is there a way to disable "pre caching" of frames in OpenRTSP ? There's no such thing. Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Thu Sep 18 05:48:47 2014 From: finlayson at live555.com (Ross Finlayson) Date: Thu, 18 Sep 2014 05:48:47 -0700 Subject: [Live-devel] OpenRTSP stream delay increasing with time In-Reply-To: <79F92D0B-408C-4C58-B43E-B40ECD5B740B@live555.com> References: <79F92D0B-408C-4C58-B43E-B40ECD5B740B@live555.com> Message-ID: > I don't know, but it's not "openRTSP". (Our RTSP/RTP client software does not include any buffering, or significant delay.) The delay is probably some combination of the OS pipe (i.e., the "!" Oops, I meant 'the "|"', of course (I typed the wrong character). > ) and "ffplay" itself (i.e., the decoding/rendering overhead). Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From ali at and-or.com Thu Sep 18 06:21:06 2014 From: ali at and-or.com (Muhammad Ali) Date: Thu, 18 Sep 2014 18:21:06 +0500 Subject: [Live-devel] OpenRTSP stream delay increasing with time In-Reply-To: References: <79F92D0B-408C-4C58-B43E-B40ECD5B740B@live555.com> Message-ID: My next question would be what if some RTP packets are delayed? How will the client behave? Will it wait for "x" seconds and continue. If it waits then there will be a delay that may be carried on to the next time the packets are delayed. Maybe that's where the delay is. On Sep 18, 2014 6:14 PM, "Ross Finlayson" wrote: > I don't know, but it's not "openRTSP". (Our RTSP/RTP client software does > not include any buffering, or significant delay.) 
The delay is probably > some combination of the OS pipe (i.e., the "!" > > > Oops, I meant 'the "|"', of course (I typed the wrong character). > > ) and "ffplay" itself (i.e., the decoding/rendering overhead). > > > > Ross Finlayson > Live Networks, Inc. > http://www.live555.com/ > > > _______________________________________________ > live-devel mailing list > live-devel at lists.live555.com > http://lists.live555.com/mailman/listinfo/live-devel > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Thu Sep 18 06:58:12 2014 From: finlayson at live555.com (Ross Finlayson) Date: Thu, 18 Sep 2014 06:58:12 -0700 Subject: [Live-devel] OpenRTSP stream delay increasing with time In-Reply-To: References: <79F92D0B-408C-4C58-B43E-B40ECD5B740B@live555.com> Message-ID: No. Our code delivers data (in this case, to the OS pipe, and from there to your "ffplay" command) once it arrives. The 'increasing delay' that you're seeing has to be caused by your downstream code (your "ffplay" command) not consuming data fast enough. Either your "ffplay" command is CPU bound, or else it's displaying at a frame rate that's slower than that of the incoming data. In the latter case, what might be happening is that your receiving computer's clock is running at a slightly slower rate than the sending computer's clock. In that case - because you're using a simple FIFO (i.e., pipe) feeding into your application (i.e., "ffplay") - there's nothing that you can do. (A more sophisticated media player application would typically start dropping frames once it noticed that its internal 'jitter buffer' was filling up (due to mismatched clock speeds).) Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... 
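Ross's aside about a 'more sophisticated media player' can be illustrated with a small sketch. This is not LIVE555 code - just an assumed-for-illustration jitter buffer that drops its oldest frame when full, which is what keeps a clock-speed mismatch from growing the delay without bound:

```cpp
#include <cstddef>
#include <deque>

// Illustrative jitter buffer: a bounded FIFO that discards its oldest
// queued frame when a new frame arrives and the buffer is already full,
// so a consumer that runs slightly slow cannot accumulate unbounded delay.
class JitterBuffer {
public:
    explicit JitterBuffer(std::size_t capacity) : fCapacity(capacity) {}

    // Returns true if an old frame had to be dropped to make room.
    bool push(int frame) {
        bool dropped = false;
        if (fFrames.size() == fCapacity) { // full: discard the oldest frame
            fFrames.pop_front();
            dropped = true;
        }
        fFrames.push_back(frame);
        return dropped;
    }

    bool pop(int& frame) {
        if (fFrames.empty()) return false;
        frame = fFrames.front();
        fFrames.pop_front();
        return true;
    }

    std::size_t size() const { return fFrames.size(); }

private:
    std::size_t fCapacity;
    std::deque<int> fFrames;
};
```

Dropping the oldest frame bounds the end-to-end delay at roughly capacity times the frame duration, at the cost of an occasional visible skip - which a plain OS pipe into ffplay cannot do.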
URL: From serg_lvov at yahoo.com Thu Sep 18 05:55:12 2014 From: serg_lvov at yahoo.com (Sergey Lvov) Date: Thu, 18 Sep 2014 05:55:12 -0700 Subject: Possible bug in RTPInterface::sendDataOverTCP Message-ID: <1411044912.92382.YahooMailNeo@web161802.mail.bf1.yahoo.com> Hello everybody! I discovered strange disconnections when I used streaming over TCP. I recompiled the live555 library with -DDEBUG and -DDEBUG_SEND and saw this diagnostic output:

sendRTPorRTCPPacketOverTCP: 1448 bytes over channel 0 (socket 7)
sendDataOverTCP: resending 795-byte send (blocking)
sendDataOverTCP: blocking send() failed (delivering -1 bytes out of 795); closing socket 7
SocketDescriptor(socket 7)::deregisterRTPInterface(channel 255)
sendRTPorRTCPPacketOverTCP: failed! (errno 11)
RTSPClientConnection[0x8e80978]::handleRequestBytes() read 4 new bytes:$
RTSPClientConnection[0x8e80978]::handleRequestBytes() read 52 new bytes:?
schedule(5.170436->1411036332.468457)
RTSPClientConnection[0x8e7baf0]::handleRequestBytes() read 212 new bytes:GET_PARAMETER rtsp://192.168.0.35:8554/archive?record=541697a20c8ac43f&sessionId=35/ RTSP/1.0
CSeq: 21349
User-Agent: LibVLC/2.2.0-pre4-20140908-0202 (LIVE555 Streaming Media v2014.07.25)
Session: CED66A9C

So, errno 11 is EAGAIN, and that's very strange for a socket in blocking mode. However, I found this topic: stackoverflow.com/questions/735249/blocking-socket-returns-eagain and understood that it is quite possible. I tried to fix the problem this way:

- sendResult = send(socketNum, (char const*)(&data[numBytesSentSoFar]), numBytesRemainingToSend, 0/*flags*/);
+
+ do {
+   sendResult = send(socketNum, (char const*)(&data[numBytesSentSoFar]), numBytesRemainingToSend, 0/*flags*/);
+ } while(sendResult == -1 && envir().getErrno() == EAGAIN);

And it works now! Could you possibly investigate this problem? Thank you for your work! Best regards, Sergey. -------------- next part -------------- An HTML attachment was scrubbed...
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: sendDataOverTCP.patch Type: application/octet-stream Size: 1074 bytes Desc: not available URL: From finlayson at live555.com Fri Sep 19 00:13:28 2014 From: finlayson at live555.com (Ross Finlayson) Date: Fri, 19 Sep 2014 00:13:28 -0700 Subject: [Live-devel] Possible bug in RTPInterface::sendDataOverTCP In-Reply-To: References: Message-ID: <9700B3D8-A54C-4315-A149-C26C0E01F12F@live555.com> > I tried to fix the problem by this way: > > - sendResult = send(socketNum, (char const*)(&data[numBytesSentSoFar]), numBytesRemainingToSend, 0/*flags*/); > + > + do { > + sendResult = send(socketNum, (char const*)(&data[numBytesSentSoFar]), numBytesRemainingToSend, 0/*flags*/); > + } while(sendResult == -1 && envir().getErrno() == EAGAIN); No, you can't do this (and there's not a 'problem' that needs fixing)! If the TCP connection gets blocked permanently (e.g., because the receiving client has stopped running, or has a cable disconnected), then you can't just sit in a loop, attempting to send() over and over again. That would starve out all other server activity. There's no 'bug' or 'problem' here. The "EAGAIN" error occurs when the sending OS's TCP buffer is full - which occurs if the stream's bitrate exceeds (at least temporarily) the capacity of the TCP connection. When this happens, the only solution is to discard outgoing data. Our code does so by ensuring that if/when data gets discarded, the discarded data will be a complete RTP (or RTCP) packet - equivalent to the loss of a packet if you were streaming over UDP. Some people seem to think that streaming over TCP is a 'magic wand' that will avoid all data loss, regardless of the bitrate of your stream and the capacity of your network. But that's impossible. If your stream's bitrate exceeds the capacity of your network, you *will* lose data. The only way to prevent this is either to transmit a slower stream, or use a faster network. 
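The discard policy Ross describes can be sketched as follows. This is an illustrative stand-in, not LIVE555's actual API - the function name and the injected trySend callback are assumptions - but it shows the control flow: on a would-block result, the whole remaining packet is dropped instead of retrying in a loop:

```cpp
#include <cstddef>
#include <functional>

// Outcome of attempting to write one complete packet.
enum class SendOutcome { Sent, Discarded };

// 'trySend' mimics a non-blocking send(): it returns how many bytes it
// accepted, or -1 when the OS send buffer is full (EAGAIN). Rather than
// spinning on send() - which would starve every other client served by the
// same event loop - the rest of the packet is simply discarded, which is
// equivalent to losing one RTP packet on a UDP stream.
SendOutcome sendOrDiscardPacket(
    const unsigned char* data, std::size_t len,
    const std::function<long(const unsigned char*, std::size_t)>& trySend) {
    std::size_t sent = 0;
    while (sent < len) {
        long n = trySend(data + sent, len - sent);
        if (n < 0) {
            // OS buffer full: drop the remainder of this packet and move on.
            return SendOutcome::Discarded;
        }
        sent += static_cast<std::size_t>(n);
    }
    return SendOutcome::Sent;
}
```

The point is that a full OS buffer costs one packet, exactly as UDP loss would, and the event loop never stalls waiting for a slow receiver.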
Yet again, let me remind everyone that streaming over TCP is something that you should be doing *only* if you are behind a firewall that blocks UDP packets. Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From serg_lvov at yahoo.com Fri Sep 19 06:12:24 2014 From: serg_lvov at yahoo.com (Sergey Lvov) Date: Fri, 19 Sep 2014 06:12:24 -0700 Subject: [Live-devel] Possible bug in RTPInterface::sendDataOverTCP In-Reply-To: <9700B3D8-A54C-4315-A149-C26C0E01F12F@live555.com> References: <9700B3D8-A54C-4315-A149-C26C0E01F12F@live555.com> Message-ID: <1411132344.87346.YahooMailNeo@web161806.mail.bf1.yahoo.com> Thank you very much for the detailed explanation. Probably increasing the send buffer (SO_SNDBUF) can help; its value depends on estimatedBitrate from the RTPSink:

if (rtpSink != NULL && rtpSink->estimatedBitrate() > 0) streamBitrate = rtpSink->estimatedBitrate();

if (rtpGroupsock != NULL) {
  // Try to use a big send buffer for RTP - at least 0.1 second of
  // specified bandwidth and at least 50 KB
  unsigned rtpBufSize = streamBitrate * 25 / 2; // 1 kbps * 0.1 s = 12.5 bytes
  if (rtpBufSize < 50 * 1024) rtpBufSize = 50 * 1024;
  increaseSendBufferTo(envir(), rtpGroupsock->socketNum(), rtpBufSize);
}

On Friday, 19 September 2014, 11:13, Ross Finlayson wrote: > > >I tried to fix the problem by this way: >> >> >>- sendResult = send(socketNum, (char const*)(&data[numBytesSentSoFar]), numBytesRemainingToSend, 0/*flags*/); >>+ >>+ do { >>+ sendResult = send(socketNum, (char const*)(&data[numBytesSentSoFar]), numBytesRemainingToSend, 0/*flags*/); >>+ } while(sendResult == -1 && envir().getErrno() == EAGAIN); >> > >No, you can't do this (and there's not a 'problem' that needs fixing)! If the TCP connection gets blocked permanently (e.g., because the receiving client has stopped running, or has a cable disconnected), then you can't just sit in a loop, attempting to send() over and over again.
That would starve out all other server activity. > > >There's no 'bug' or 'problem' here. The "EAGAIN" error occurs when the sending OS's TCP buffer is full - which occurs if the stream's bitrate exceeds (at least temporarily) the capacity of the TCP connection. When this happens, the only solution is to discard outgoing data. Our code does so by ensuring that if/when data gets discarded, the discarded data will be a complete RTP (or RTCP) packet - equivalent to the loss of a packet if you were streaming over UDP. > > >Some people seem to think that streaming over TCP is a 'magic wand' that will avoid all data loss, regardless of the bitrate of your stream and the capacity of your network. But that's impossible. If your stream's bitrate exceeds the capacity of your network, you *will* lose data. The only way to prevent this is either to transmit a slower stream, or use a faster network. > > >Yet again, let me remind everyone that streaming over TCP is something that you should be doing *only* if you are behind a firewall that blocks UDP packets > >Ross Finlayson >Live Networks, Inc. >http://www.live555.com/ > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From james.heliker at gmail.com Fri Sep 19 12:26:26 2014 From: james.heliker at gmail.com (James Heliker) Date: Fri, 19 Sep 2014 12:26:26 -0700 Subject: [Live-devel] ASIO input device -> RTP stream Message-ID: Hi All - I have a project being built on Live555 to stream PCM audio from an ASIO device to an RTP stream in as close to real-time as possible. Latency is biggest concern, next-up would be quality of the audio reaching the network. The project so far is built on Windows / VS2012 using the RtAudio lib for ASIO device access and my own circular buffer to make the input data available to Live555, mimicking your WAV file source example to make the project work with AudioBufferSource and SimpleRTPSink. 
I'm not sure if this was the appropriate path with Live555; any insights or guidance would be so appreciated here! Audio is getting on to the network but in a bad state with many pops ticks and glitches, and a lot of variable latency. I've confirmed the basics like appropriate RTP payload format of 11 (L16 PCM @ 44.1) and am receiving data from my ASIO device at 16bit 44.1 through RtAudio. Thanks for your help!! - James -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Fri Sep 19 22:42:57 2014 From: finlayson at live555.com (Ross Finlayson) Date: Fri, 19 Sep 2014 22:42:57 -0700 Subject: [Live-devel] Possible bug in RTPInterface::sendDataOverTCP In-Reply-To: References: <9700B3D8-A54C-4315-A149-C26C0E01F12F@live555.com> Message-ID: <2DDE6282-7DA3-46FB-ABE9-3F6708777FE3@live555.com> > Probably increasing of the send buffer (SO_SNDBUF) can help Yes, you can call "increaseSendBufferTo()" in your application. However, you'll still get data loss if your stream's bitrate exceeds the capacity of your TCP connection (which is *not* the same as the nominal bitrate of your network). You can never avoid that. Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Fri Sep 19 22:52:19 2014 From: finlayson at live555.com (Ross Finlayson) Date: Fri, 19 Sep 2014 22:52:19 -0700 Subject: [Live-devel] ASIO input device -> RTP stream In-Reply-To: References: Message-ID: > Audio is getting on to the network but in a bad state with many pops ticks and glitches, and a lot of variable latency. This suggests that you're probably not packing PCM audio samples into outgoing RTP packets properly. 
Make sure that your source's "doGetNextFrame()" implementation is packing an appropriate number of complete PCM audio samples (remember that you'll have 2x as many bytes if the audio is stereo) before calling "FramedSource::afterGetting()". Also make sure that you're setting "fPresentationTime" properly (it must be aligned with 'wall clock' time: the time that you'd get by calling "gettimeofday()".) You also need to set "fDurationInMicroseconds" appropriately each time. Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From dnyanesh.gate at intelli-vision.com Sun Sep 21 09:11:14 2014 From: dnyanesh.gate at intelli-vision.com (Dnyanesh Gate) Date: Sun, 21 Sep 2014 21:41:14 +0530 Subject: [Live-devel] RTSP over TLS Message-ID: Hi, I am working on a special case where UDP is blocked and only outgoing port 443 is open, so the RTSP client is only able to open an RTSP stream over an SSL connection. I can pass the socket descriptor of a pre-established SSL connection to the RTSPClient class, but SSL calls are not implemented in groupsock to read encrypted packets. Is there any way to write a custom wrapper over groupsock? Can anybody suggest how to add such support, with or without making changes in live555? -- Thanks & Regards, DnyaneshG. From finlayson at live555.com Sun Sep 21 21:11:48 2014 From: finlayson at live555.com (Ross Finlayson) Date: Sun, 21 Sep 2014 21:11:48 -0700 Subject: [Live-devel] RTSP over TLS In-Reply-To: References: Message-ID: <9DDAC167-8465-49CD-8972-4A70C25D8340@live555.com> > I am working on special case where UDP is blocked and only outgoing > port 443 is opened. So that RTSP client only able to open RTSP stream > over SSL connection. I can pass socket descriptor of pre established > SSL connection to RTSPClient class, but SSL calls are not implemented > in groupsock to read encrypted packets. I'm not sure I understand your question.
If you've already set up a socket for an SSL connection, then you're not dealing with encrypted data at all; encryption/decryption is done automatically at a lower level. So you should be able to simply pass this socket number as a parameter to "RTSPClient::createNew()", and your client should work as normal. (Provided, of course, that the *server* knows how to deal with the SSL connection at its end.) Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonathan.brady+live555 at denbridgemarine.com Mon Sep 22 06:40:55 2014 From: jonathan.brady+live555 at denbridgemarine.com (Jonathan Brady) Date: Mon, 22 Sep 2014 14:40:55 +0100 Subject: [Live-devel] RTSP proxy not noticing server restart Message-ID: <542026E7.6070305@denbridgemarine.com> Hello, It is possible to restart an origin server quickly enough that a LIVE555 proxy server does not notice. As a test this can be done by modifying one of the LIVE555 test programs (I was using testH264VideoStreamer) to enable SO_REUSEADDR/SO_REUSEPORT; then the process can be quickly killed and restarted without waiting for the socket to time out. Other examples of this are embedded systems that reboot quickly. When the origin server is restarted, the proxy server simply stops receiving data; the proxy sends DESCRIBE requests, but they succeed because of the fast restart, so the proxy never notices. Disconnecting all clients from the proxy server results in a PAUSE command being sent to the origin server; the proxy receives back an invalid-session-id error, as it does with a PLAY command when another client connects; however, these errors are ignored. Once this has happened, the only fix is to restart the proxy server, or stop the origin server long enough for the LIVE555 proxy to notice. Is it possible to notice this situation and rectify it?
Regards, Jonathan From finlayson at live555.com Mon Sep 22 07:42:01 2014 From: finlayson at live555.com (Ross Finlayson) Date: Mon, 22 Sep 2014 07:42:01 -0700 Subject: [Live-devel] RTSP proxy not noticing server restart In-Reply-To: <542026E7.6070305@denbridgemarine.com> References: <542026E7.6070305@denbridgemarine.com> Message-ID: > It is possible to restart an origin server quickly enough that a live 555 proxy server does not notice. > > As a test this can be done by modifying one of the live 555 test programs (I was using testH264VideoStreamer) to enable SO_REUSEADDR/SO_REUSEPORT, then the process could be quickly killed and restarted without waiting for the socket to timeout. And that's precisely why you shouldn't make such a change to the server's code :-) But anyway, the proxy server sends periodic RTSP "OPTIONS" commands to the 'back-end' server, to test whether it's still alive. If any of these "OPTIONS" commands fails (or if the RTSP (TCP) connection to the back-end server fails), then the proxy server will notice this, and establish a new connection (with a new "DESCRIBE" command) to the back-end server. You should be able to see this by running the proxy server with the -V (uppercase "V") option. But (getting back to the first point) if your server is able to restart in such a way that a previously-set-up TCP connection can get misinterpreted as still being valid, then that's a serious security flaw in your server OS; that's the real bug that you should be fixing. Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed...
URL: From jonathan.brady+live555 at denbridgemarine.com Mon Sep 22 08:17:40 2014 From: jonathan.brady+live555 at denbridgemarine.com (Jonathan Brady) Date: Mon, 22 Sep 2014 16:17:40 +0100 Subject: [Live-devel] RTSP proxy not noticing server restart In-Reply-To: References: <542026E7.6070305@denbridgemarine.com> Message-ID: <54203D94.5090200@denbridgemarine.com> On 22/09/14 15:42, Ross Finlayson wrote: >> It is possible to restart an origin server quickly enough that a live >> 555 proxy server does not notice. >> >> As a test this can be done by modifying one of the live 555 test >> programs (I was using testH264VideoStreamer) to enable >> SO_REUSEADDR/SO_REUSEPORT, then the process could be quickly killed >> and restarted without waiting for the socket to timeout. > > And that's precisely why you shouldn't make such a change to the > server's code :-) It's not that I'm making a change to the server code. It was actually a third-party embedded server that is capable of restarting fast enough that the proxy server never notices. The change to the server code suggested is just a quick way of reproducing the problem on the same machine. > > But anyway, the proxy server sends periodic RTSP "OPTIONS" commands to > the 'back-end' server, to test whether its still alive. If any of > these "OPTIONS" commands fails (or if the RTSP (TCP) connection to the > back-end server fails), then the proxy server will notice this, and > establish a new connection (with a new "DESCRIBE" command) to the > back-end server. > > You should be able to see this by running the proxy server with the -V > (uppercase "V") option. > > But (getting back to the first point) if your server is able to > restart in such a way that a previously-set-up TCP connection can get > misinterpreted as still being valid, then that's a serious security > flaw in your server OS; that's the real bug that you should be fixing. > I can indeed see this running the proxy server with -V.
It is not the case of a TCP connection being misinterpreted as valid. It is the case of when the proxy server retries an OPTIONS command: it sees the TCP connection has gone away, reconnects, and receives a valid response, but it doesn't realise this is because the server has restarted. Perhaps rather than using OPTIONS, something that sends a session id should be sent, and error responses checked for?

Sending request: PLAY rtsp://127.0.0.1:8554/testStream/ RTSP/1.0
CSeq: 4
User-Agent: ProxyRTSPClient (LIVE555 Streaming Media v2014.09.11)
Session: 29A64580

Received 186 new bytes of response data.
Received a complete PLAY response: RTSP/1.0 200 OK
CSeq: 4
Date: Mon, Sep 22 2014 15:07:50 GMT
Range: npt=29.767-
Session: 29A64580
RTP-Info: url=rtsp://127.0.0.1:8554/testStream/track1;seq=24127;rtptime=32393928

================== I restart the origin server here ==================

Opening connection to 127.0.0.1, port 8554...
...remote connection opened
Sending request: OPTIONS rtsp://127.0.0.1:8554/testStream/ RTSP/1.0
CSeq: 6
User-Agent: ProxyRTSPClient (LIVE555 Streaming Media v2014.09.11)
Session: 29A64580

Received 152 new bytes of response data.
Received a complete OPTIONS response: RTSP/1.0 200 OK
CSeq: 6
Date: Mon, Sep 22 2014 15:08:47 GMT
Public: OPTIONS, DESCRIBE, SETUP, TEARDOWN, PLAY, PAUSE, GET_PARAMETER, SET_PARAMETER

============= I close my client here =============

ProxyServerMediaSubsession["H264"]::closeStreamSource()
Sending request: PAUSE rtsp://127.0.0.1:8554/testStream/ RTSP/1.0
CSeq: 7
User-Agent: ProxyRTSPClient (LIVE555 Streaming Media v2014.09.11)
Session: 29A64580

Received 80 new bytes of response data.
Received a complete PAUSE response: RTSP/1.0 454 Session Not Found
CSeq: 7
Date: Mon, Sep 22 2014 15:08:54 GMT

Regards, Jonathan -------------- next part -------------- An HTML attachment was scrubbed...
URL: From finlayson at live555.com Mon Sep 22 12:06:00 2014 From: finlayson at live555.com (Ross Finlayson) Date: Mon, 22 Sep 2014 12:06:00 -0700 Subject: [Live-devel] RTSP proxy not noticing server restart In-Reply-To: <54203D94.5090200@denbridgemarine.com> References: <542026E7.6070305@denbridgemarine.com> <54203D94.5090200@denbridgemarine.com> Message-ID: <3C10F5D8-12CD-46F5-B318-5DDAC90F8F7D@live555.com> > I can indeed see this running the proxy server with -V. It is not the case of a TCP connection being misinterpreted as valid. It is the case of when the proxy server retries an OPTIONS command, it sees the TCP connection has gone away, reconnects and receives a valid response, it doesn't realise this is because the server has restarted. OK, now I understand what's going on. The problem here is that the proxy server can't tell that the server has restarted - merely from the fact that it needed to reopen the TCP connection in order to send an "OPTIONS" command. It's perfectly valid (although admittedly unusual) for a server to close the RTSP (TCP) connection while the RTP (UDP) stream is ongoing. In this case, the server's client (in this case, the proxy server) would need to reopen the RTSP (TCP) connection in order to send the next RTSP command (in this case, "OPTIONS"). Our code does this just fine. The problem is that - because this next RTSP command ("OPTIONS") completes without error - the proxy server can't tell that the server has restarted. > Perhaps rather than using OPTIONS something that sends a session id should be sent and error responses checked for? The "OPTIONS" command sent by the proxy server *does* contain a session id (when a session has been established - i.e., when there's at least one front-end client). Unfortunately, the server (after restarting) didn't see this incoming "OPTIONS" command as being erroneous - because it didn't look at the "Session:" id.
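The missing check can be sketched in a few lines (a hypothetical helper, not the actual LIVE555 handler): if an OPTIONS request carries a "Session:" id that doesn't match any live session, answer 454 instead of 200.

```cpp
#include <set>
#include <string>

// Illustrative sketch of the stricter OPTIONS handling: a request with an
// empty session id (none supplied) is handled as normal, but a request
// naming a session the server doesn't know draws "454 Session Not Found".
std::string responseToOptions(const std::string& sessionIdOrEmpty,
                              const std::set<std::string>& liveSessions) {
    if (!sessionIdOrEmpty.empty() && liveSessions.count(sessionIdOrEmpty) == 0) {
        return "RTSP/1.0 454 Session Not Found"; // stale id: server restarted
    }
    return "RTSP/1.0 200 OK";
}
```

With this behavior on the back-end server, the proxy's periodic OPTIONS (which carries its "Session:" id) turns a silent restart into a visible error the proxy can react to.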
I've now installed a new version (2014.09.22) of the "LIVE555 Streaming Media" code that changes the way in which the RTSP server handles incoming "OPTIONS" commands that contain a "Session:" header. If the specified session id refers to a session that does not exist, the server now responds with a "Session not found" error. (If the session id is valid, or is not present at all in the "OPTIONS" request, then the request is handled as normal.) If you rebuild your 'back-end' server using this new version of the code, then the proxy server will now be able to better handle the case when it restarts. Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From james.heliker at gmail.com Mon Sep 22 13:19:47 2014 From: james.heliker at gmail.com (James Heliker) Date: Mon, 22 Sep 2014 13:19:47 -0700 Subject: [Live-devel] ASIO input device -> RTP stream In-Reply-To: References: Message-ID: Hi Ross - Thanks for getting back to me. I've copied in my doGetNextFrame because I'm not seeing anything that is particularly wrong here from your last email... would you possibly be willing to shed any light on my mistakes? I would so appreciate it!!! 
Kind Regards, - James

void AudioBufferSource::doGetNextFrame() {
  fFrameSize = 0; // until it's set later

  if (fPreferredFrameSize < fMaxSize) {
    fMaxSize = fPreferredFrameSize;
  }

  unsigned bytesPerSample = (fNumChannels*fBitsPerSample) / 8;
  if (bytesPerSample == 0) bytesPerSample = 1; // because we can't read less than a byte at a time

  unsigned samplesToRead = fMaxSize / bytesPerSample;
  unsigned numSamplesRead;
  numSamplesRead = fBufferManager->getFrames((int16_t*)fTo, samplesToRead);
  unsigned numBytesRead = numSamplesRead * bytesPerSample;

  fFrameSize += numBytesRead;
  fTo += numBytesRead; //TODO - eval - is this line necessary
  fMaxSize -= numBytesRead;
  fNumBytesToStream -= numBytesRead;

  // Set the 'presentation time' and 'duration' of this frame:
  if (fPresentationTime.tv_sec == 0 && fPresentationTime.tv_usec == 0) {
    // This is the first frame, so use the current time:
    gettimeofday(&fPresentationTime, NULL);
  } else {
    // Increment by the play time of the previous data:
    unsigned uSeconds = fPresentationTime.tv_usec + fLastPlayTime;
    fPresentationTime.tv_sec += uSeconds / 1000000;
    fPresentationTime.tv_usec = uSeconds % 1000000;
  }

  // Remember the play time of this data:
  fDurationInMicroseconds = fLastPlayTime = (unsigned)((fPlayTimePerSample*fFrameSize) / bytesPerSample);

  // Inform the reader that he has data:
  // To avoid possible infinite recursion, we need to return to the event loop to do this:
  nextTask() = envir().taskScheduler().scheduleDelayedTask(0, (TaskFunc*)FramedSource::afterGetting, this);
}

On Fri, Sep 19, 2014 at 10:52 PM, Ross Finlayson wrote: > Audio is getting on to the network but in a bad state with many pops ticks > and glitches, and a lot of variable latency. > > > This suggests that you're probably not packing PCM audio samples into > outgoing RTP packets properly.
Make sure that your source's > "doGetNextFrame()" implementation is packing an appropriate number of > complete PCM audio samples (remember that you'll have 2x as many bytes if > the audio is stereo) before calling "FramedSource::afterGetting()". Also > make sure that you're setting "fPresentationTime" properly (it must be > aligned with 'wall clock' time: the time that you'd get by calling > "gettimeofday()".) You also need to set "fDurationInMicroseconds" > appropriately each time. > > Ross Finlayson > Live Networks, Inc. > http://www.live555.com/ > > > _______________________________________________ > live-devel mailing list > live-devel at lists.live555.com > http://lists.live555.com/mailman/listinfo/live-devel > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From christiano at techideasbrasil.com Tue Sep 23 18:07:49 2014 From: christiano at techideasbrasil.com (Christiano Belli) Date: Tue, 23 Sep 2014 22:07:49 -0300 Subject: [Live-devel] Open H.264 stream frame by frame Message-ID: Hello, I'm new to live555 and I'm so excited to be learning this. I have some questions: Is there a way I can get the RTSP/H.264 stream from an IP camera and extract each frame from the video, to do video analytics with OpenCV for example? Does the process of getting each frame from an RTSP/H.264 stream need a lot of machine resources? Can I, at the same time as I'm analyzing the video, save and live-stream the same stream? All suggestions will help me a lot. Att. Christiano Belli Cel.: +55 (41) 8444-7468 christiano at techideasbrasil.com -------------- next part -------------- An HTML attachment was scrubbed...
URL: From marcin at speed666.info Tue Sep 23 23:18:26 2014 From: marcin at speed666.info (Marcin) Date: Wed, 24 Sep 2014 08:18:26 +0200 Subject: [Live-devel] Open H.264 stream frame by frame In-Reply-To: References: Message-ID: <54226232.4010706@speed666.info> Hi, Really - I think that you have to decode the stream to get frame-by-frame data for further analysis. Soon Ross will also reply that Live555 is not responsible for decoding/encoding the stream but only for receiving it. I think you have to use ffmpeg or libavcodec, for example, to decode the bitstream received by the live555 libs and then pass it to the analyzing software as RAW data. Marcin On 2014-09-24 03:07, Christiano Belli wrote: > Hello, I'm new with live555 and I'm so exciting in learning this. I > have some questions: > > There is a way where I can get the RTSP/H.264 stream from a ip camera > and get each frame from video to do video analytics with openCV per > example. > > Does the process to get each frame from a RTSP/H.264 stream need a lot > of machine resources? > > Can I in the same time which I'm analyzing the video, save and live > streaming the same stream? > > All suggestions will help me a lot. > > Att. > > Christiano Belli > Cel.:+55 (41) 8444-7468 > christiano at techideasbrasil.com > > > _______________________________________________ > live-devel mailing list > live-devel at lists.live555.com > http://lists.live555.com/mailman/listinfo/live-devel -------------- next part -------------- An HTML attachment was scrubbed... URL: From christiano at techideasbrasil.com Wed Sep 24 04:36:13 2014 From: christiano at techideasbrasil.com (Christiano Belli) Date: Wed, 24 Sep 2014 08:36:13 -0300 Subject: [Live-devel] Open H.264 stream frame by frame In-Reply-To: <54226232.4010706@speed666.info> References: <54226232.4010706@speed666.info> Message-ID: Hi Marcin, Thanks for the response. Where can I get an example of using live555 with ffmpeg? Att.
Christiano Belli Cel.: +55 (41) 8444-7468 christiano at techideasbrasil.com On Wed, Sep 24, 2014 at 3:18 AM, Marcin wrote: > Hi, > Really - i think that you have to decode the stream to get frame-by-frame > data for further analysis. Soon Ross will reply also that Live555 is not > responsible for decoding/encoding stream but only for reciving it. > I think you have to use ffmpeg or libavcodec for example to decode > bitstream recived by live555 libs and then pass it to analyzing software as > RAW data. > Marcin > > On 2014-09-24 03:07, Christiano Belli wrote: > > Hello, I'm new with live555 and I'm so exciting in learning this. I have > some questions: > > There is a way where I can get the RTSP/H.264 stream from a ip camera > and get each frame from video to do video analytics with openCV per example. > > Does the process to get each frame from a RTSP/H.264 stream need a lot > of machine resources? > > Can I in the same time which I'm analyzing the video, save and live > streaming the same stream? > > All suggestions will help me a lot. > > Att. > > Christiano Belli > Cel.: +55 (41) 8444-7468 > christiano at techideasbrasil.com > > > _______________________________________________ > live-devel mailing listlive-devel at lists.live555.comhttp://lists.live555.com/mailman/listinfo/live-devel > > > > _______________________________________________ > live-devel mailing list > live-devel at lists.live555.com > http://lists.live555.com/mailman/listinfo/live-devel > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ali at and-or.com Wed Sep 24 04:47:30 2014 From: ali at and-or.com (Muhammad Ali) Date: Wed, 24 Sep 2014 16:47:30 +0500 Subject: [Live-devel] Open H.264 stream frame by frame In-Reply-To: References: <54226232.4010706@speed666.info> Message-ID: Hi Marcin, You'll have to use libavcodec to get decoded frames (frame by frame) and then convert each received frame to either an IplImage or a Mat, whichever you prefer.
You can check sample code given under ffmpeg sources to see how you can use libavcodec. On Sep 24, 2014 4:38 PM, "Christiano Belli" wrote: > Hi Marcin, > > Thanks for the response. Where can I get a example of use live555 with > ffmpeg? > > > Att. > > Christiano Belli > Cel.: +55 (41) 8444-7468 > christiano at techideasbrasil.com > > On Wed, Sep 24, 2014 at 3:18 AM, Marcin wrote: > >> Hi, >> Really - i think that you have to decode the stream to get frame-by-frame >> data for further analysis. Soon Ross will reply also that Live555 is not >> responsible for decoding/encoding stream but only for reciving it. >> I think you have to use ffmpeg or libavcodec for example to decode >> bitstream recived by live555 libs and then pass it to analyzing software as >> RAW data. >> Marcin >> >> W dniu 2014-09-24 03:07, Christiano Belli pisze: >> >> Hello, I'm new with live555 and I'm so exciting in learning this. I >> have some questions: >> >> There is a way where I can get the RTSP/H.264 stream from a ip camera >> and get each frame from video to do video analytics with openCV per example. >> >> Does the process to get each frame from a RTSP/H.264 stream need a lot >> of machine resources? >> >> Can I in the same time which I'm analyzing the video, save and live >> streaming the same stream? >> >> All suggestions will help me a lot. >> >> Att. 
>> >> Christiano Belli >> Cel.: +55 (41) 8444-7468 >> christiano at techideasbrasil.com >> >> >> _______________________________________________ >> live-devel mailing list live-devel at lists.live555.com http://lists.live555.com/mailman/listinfo/live-devel >> >> >> >> _______________________________________________ >> live-devel mailing list >> live-devel at lists.live555.com >> http://lists.live555.com/mailman/listinfo/live-devel >> >> > > _______________________________________________ > live-devel mailing list > live-devel at lists.live555.com > http://lists.live555.com/mailman/listinfo/live-devel > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jkordani at lsa2.com Wed Sep 24 07:33:09 2014 From: jkordani at lsa2.com (Joshua Kordani) Date: Wed, 24 Sep 2014 10:33:09 -0400 Subject: [Live-devel] Open H.264 stream frame by frame In-Reply-To: References: <54226232.4010706@speed666.info> Message-ID: <5422D625.7070406@lsa2.com> Last year when I did this, decoding H.264 with libavcodec required finding specific NAL units, the SPS and PPS, and holding on to them for all subsequent decode calls. What you'll likely end up needing to do is figure out at what point in the live555 library the encoded data is available to be read, and then do your own handling: look for the SPS and PPS NALs, and then decide when to submit the data you have to the decoder based on when you have a full frame to decode. Since there are a few different ways that a full frame can be shipped to you, you have to handle these on a case-by-case basis. The most common one I've found is that a frame is encoded as one whole NAL, and then for a period of frames following, subsequent frames are shipped as deltas, until such point as the encoder decides to emit a full frame again; OR explicit frame start and end markers are emitted; or other ways I'm sure I'm not familiar with.
Most common for live-streamed media, though, is the full-frame-plus-deltas approach. You'll end up doing something like this: discard data until you read SPS and PPS NALs from live555 (or these are passed to your client at stream start and are available immediately; usually the live555 library can tell you if this is the case). label start: once you see your first full-frame NAL, do: [sps pps frame_nal], and send this in Annex B format to libavcodec. Then, when you see delta NALs: [sps pps frame_nal delta_nal], and send this in Annex B format to libavcodec. Append delta NALs as you get them, so [sps pps frame_nal delta_nal delta_nal ...], and send this in Annex B format to libavcodec. If you see a full-frame NAL, goto start. After each call to libavcodec you'll get a decoded frame, but you will likely have to convert it to the color format your CV library expects (or not; it depends on what you're using). If you do have to convert, look up the swscale functions. Joshua Kordani LSA Autonomy On 9/24/14 7:36 AM, Christiano Belli wrote: > Hi Marcin, > > Thanks for the response. Where can I get an example of using live555 > with ffmpeg? > > > Att. > > Christiano Belli > Cel.:+55 (41) 8444-7468 > christiano at techideasbrasil.com > > On Wed, Sep 24, 2014 at 3:18 AM, Marcin > wrote: > > Hi, > Really - i think that you have to decode the stream to get > frame-by-frame data for further analysis. Soon Ross will reply > also that Live555 is not responsible for decoding/encoding stream > but only for reciving it. > I think you have to use ffmpeg or libavcodec for example to decode > bitstream recived by live555 libs and then pass it to analyzing > software as RAW data. > Marcin > > On 2014-09-24 03:07, Christiano Belli wrote: >> Hello, I'm new with live555 and I'm so exciting in learning this. >> I have some questions: >> >> There is a way where I can get the RTSP/H.264 stream from a ip >> camera and get each frame from video to do video analytics with >> openCV per example.
>> >> Does the process to get each frame from a RTSP/H.264 stream need >> a lot of machine resources? >> >> Can I in the same time which I'm analyzing the video, save and >> live streaming the same stream? >> >> All suggestions will help me a lot. >> >> Att. >> >> Christiano Belli >> Cel.:+55 (41) 8444-7468 >> christiano at techideasbrasil.com >> >> >> _______________________________________________ >> live-devel mailing list >> live-devel at lists.live555.com >> http://lists.live555.com/mailman/listinfo/live-devel > > > _______________________________________________ > live-devel mailing list > live-devel at lists.live555.com > http://lists.live555.com/mailman/listinfo/live-devel > > > > > _______________________________________________ > live-devel mailing list > live-devel at lists.live555.com > http://lists.live555.com/mailman/listinfo/live-devel -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Thu Sep 25 05:48:34 2014 From: finlayson at live555.com (Ross Finlayson) Date: Thu, 25 Sep 2014 05:48:34 -0700 Subject: [Live-devel] Open H.264 stream frame by frame In-Reply-To: <5422D625.7070406@lsa2.com> References: <54226232.4010706@speed666.info> <5422D625.7070406@lsa2.com> Message-ID: As others have noted, the "LIVE555 Streaming Media" code doesn't include any 'codec' (i.e., audio/video decoding or encoding) functionality. If you want to actually decode an incoming RTSP/RTP stream, you'll need to use separate decoding software. If you want to process the H.264 NAL units for an incoming RTSP/RTP stream, then I suggest that you use the "testRTSPClient" demo application as a model (see "testProgs/testRTSPClient.cpp"). Note, in particular, the "DummySink" class. You can use this code as a model for your own class that analyzes each incoming H.264 NAL unit. Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... 
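[Editor's note] As an aside to Ross's "DummySink" suggestion above: in a DummySink-style afterGettingFrame() handler, each delivered buffer is one H.264 NAL unit, and its type can be read from the first byte. A small sketch — the inspection helper is illustrative, not LIVE555 API; the 5-bit type field comes from the H.264 specification:

```cpp
#include <cstdint>
#include <cstdio>

// The NAL unit type is the low 5 bits of the first payload byte
// (H.264 spec): 7 = SPS, 8 = PPS, 5 = IDR slice, 1 = non-IDR slice.
unsigned nalUnitType(uint8_t firstByte) { return firstByte & 0x1F; }

// Hypothetical per-NAL-unit hook, as might be called from a
// DummySink-style afterGettingFrame() with the received buffer.
void analyzeNalUnit(const uint8_t* nal, unsigned frameSize) {
  if (frameSize == 0) return;
  std::printf("NAL unit: type %u, %u bytes\n",
              nalUnitType(nal[0]), frameSize);
}
```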
URL: From Kenneth.Forsythe at activu.com Fri Sep 26 07:01:20 2014 From: Kenneth.Forsythe at activu.com (Kenneth Forsythe) Date: Fri, 26 Sep 2014 10:01:20 -0400 Subject: [Live-devel] rtsp client -> h264 decoder Message-ID: Hello Live555, I have an application that is based on testRTSPClient. In afterGettingFrame I am then passing the data off to an H264 decoder, which is then rendered on a video surface. I have 2 cameras and live555MediaServer to test with. live555MediaServer and one camera work very well. The other camera, on the other hand, not so much. I am seeing distorted/garbled video and lots of green. This camera does work through VLC and other players. What usually causes this type of behavior? What I am doing is quite similar to what I see in H264or5VideoFileSink::afterGettingFrame. On the first frame I am prepending the data with [startcodes][sps] and [startcodes][pps], which were captured earlier on. After the first frame the data is only prepended with the start codes. Is there something else I should be doing, i.e., analyzing the NAL type and modifying differently? Or do you think the problem could be outside the stream and more in the area where I am setting up the video information? Thank You, Ken -------------- next part -------------- An HTML attachment was scrubbed... URL: From hedi_naily at yahoo.fr Fri Sep 26 02:50:14 2014 From: hedi_naily at yahoo.fr (Hedi Naily) Date: Fri, 26 Sep 2014 10:50:14 +0100 Subject: [Live-devel] Stream multiple files Message-ID: <1411725014.44676.YahooMailNeo@web172105.mail.ir2.yahoo.com> Hi, As the subject indicates, I want to play a list of files as if they are one big file and without interruption. Is it possible with live555? I'm new to it and want to know whether it's possible and how to achieve it. I've read about using the ByteStreamMultiFileSource class but I can't find a good example of how to use it. Any help would be appreciated. Regards ______________ Hedi NAILY Software developer Tel.
: +21622866616 -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Fri Sep 26 09:49:27 2014 From: finlayson at live555.com (Ross Finlayson) Date: Fri, 26 Sep 2014 09:49:27 -0700 Subject: [Live-devel] rtsp client -> h264 decoder In-Reply-To: References: Message-ID: <62D17E33-E2D6-4D22-A95E-A22C858BD955@live555.com> > What usually causes this type of behavior? Truncated video frames, due to your camera's frames (actually, H.264 NAL units) being too large for the outgoing "RTPSink"s buffer. If this is happening, you should be seeing warning messages on your server's 'stderr', telling you to increase "OutPacketBuffer::maxSize". You can also check this in your server code by noting whether/when you ever have to set "fNumTruncatedBytes" (due to the NAL unit size being > "fMaxSize"). While increasing "OutPacketBuffer::maxSize" will fix your problem, a better solution is to reconfigure your camera's encoder to not generate such large H.264 NAL units in the first place. Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Fri Sep 26 09:52:41 2014 From: finlayson at live555.com (Ross Finlayson) Date: Fri, 26 Sep 2014 09:52:41 -0700 Subject: [Live-devel] Stream multiple files In-Reply-To: <1411725014.44676.YahooMailNeo@web172105.mail.ir2.yahoo.com> References: <1411725014.44676.YahooMailNeo@web172105.mail.ir2.yahoo.com> Message-ID: On Sep 26, 2014, at 2:50 AM, Hedi Naily wrote: > Hi, > As the subject indicates, I want to play a list of files as if they are one big file and without interruption. Is is possible with live555, I'm new to it and want to know wheher it's possible and how to achieve it. > I've read about using ByteStreamMultipleFileSource class but I can't find a good example on how to use it "ByteStreamMultiFileSource::createNew()" takes a NULL-terminated array of file names. 
E.g., char const** ourFileNames = new char const*[3+1]; ourFileNames[0] = "ourFile0.ts"; ourFileNames[1] = "ourFile1.ts"; ourFileNames[2] = "ourFile2.ts"; ourFileNames[3] = NULL; ByteStreamMultiFileSource* source = ByteStreamMultiFileSource::createNew(envir(), ourFileNames); Alternatively, if your media player client supports playlists, you could give it a playlist consisting of multiple "rtsp://" URLs - one for each stream that you want to play. Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From Kenneth.Forsythe at activu.com Fri Sep 26 10:15:06 2014 From: Kenneth.Forsythe at activu.com (Kenneth Forsythe) Date: Fri, 26 Sep 2014 13:15:06 -0400 Subject: [Live-devel] rtsp client -> h264 decoder In-Reply-To: <62D17E33-E2D6-4D22-A95E-A22C858BD955@live555.com> References: <62D17E33-E2D6-4D22-A95E-A22C858BD955@live555.com> Message-ID: Hi Ross, I think I wrote that a little confusingly. I am not feeding the cameras into the liveMedia server. The liveMedia server is there as another test source. I am using it to play files, no changes to code. With that said, I don't actually have the ability to change much of the camera's server properties. The most relevant setting is max packet size, which is at 1400. Since it works in other players, I can only assume I am doing something wrong on my end. Thank You, Ken From: live-devel [mailto:live-devel-bounces at ns.live555.com] On Behalf Of Ross Finlayson Sent: Friday, September 26, 2014 12:49 PM To: LIVE555 Streaming Media - development & use Subject: Re: [Live-devel] rtsp client -> h264 decoder What usually causes this type of behavior? Truncated video frames, due to your camera's frames (actually, H.264 NAL units) being too large for the outgoing "RTPSink"s buffer. If this is happening, you should be seeing warning messages on your server's 'stderr', telling you to increase "OutPacketBuffer::maxSize".
You can also check this in your server code by noting whether/when you ever have to set "fNumTruncatedBytes" (due to the NAL unit size being > "fMaxSize"). While increasing "OutPacketBuffer::maxSize" will fix your problem, a better solution is to reconfigure your camera's encoder to not generate such large H.264 NAL units in the first place. Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Fri Sep 26 10:24:25 2014 From: finlayson at live555.com (Ross Finlayson) Date: Fri, 26 Sep 2014 10:24:25 -0700 Subject: [Live-devel] rtsp client -> h264 decoder In-Reply-To: References: <62D17E33-E2D6-4D22-A95E-A22C858BD955@live555.com> Message-ID: <0CBFB4E8-B3EB-4A1C-ABE8-3C79A832D559@live555.com> OK, so perhaps it would be best if you clarified what specific problem you are seeing. Is it a problem at the server end, at the client end, or both? And where specifically are you using the "LIVE555 Streaming Media" code? In your server (i.e., camera), in your client (i.e., media player), or both? Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From Kenneth.Forsythe at activu.com Fri Sep 26 11:06:58 2014 From: Kenneth.Forsythe at activu.com (Kenneth Forsythe) Date: Fri, 26 Sep 2014 14:06:58 -0400 Subject: [Live-devel] rtsp client -> h264 decoder In-Reply-To: <0CBFB4E8-B3EB-4A1C-ABE8-3C79A832D559@live555.com> References: <62D17E33-E2D6-4D22-A95E-A22C858BD955@live555.com> <0CBFB4E8-B3EB-4A1C-ABE8-3C79A832D559@live555.com> Message-ID: For servers I have two commercial H264 cameras and one instance of live555MediaServer.exe running stock. VLC can connect fine to all three. No problems here (as far as I can tell). My client, on the other hand only works for the live555MediaServer and one of the cameras. 
With the other camera, after receiving the data and delivering it to the decoder/renderer, I am seeing green distorted video, complete gobbledygook. There don't appear to be any connectivity problems or buffer-size error messages. So I wanted to see if my rtsp code handling is correct or if I need to do some extra work before the data is delivered to the decoder (I am new to this and figuring it out as I go along). As I mentioned before: What I am doing is quite similar to what I see in H264or5VideoFileSink::afterGettingFrame. On the first frame I am prepending the data with [startcodes][sps] and [startcodes][pps] (sps, pps were captured earlier on when setting up the subsession). After the first frame the data is only prepended with the start codes. Is there something else I should be doing, i.e., analyzing the NAL type and modifying differently? Or do you think the problem could be outside the stream and more in the area where I am setting up the video information? Thanks, Ken From: live-devel [mailto:live-devel-bounces at ns.live555.com] On Behalf Of Ross Finlayson Sent: Friday, September 26, 2014 1:24 PM To: LIVE555 Streaming Media - development & use Subject: Re: [Live-devel] rtsp client -> h264 decoder OK, so perhaps it would be best if you clarified what specific problem you are seeing. Is it a problem at the server end, at the client end, or both? And where specifically are you using the "LIVE555 Streaming Media" code? In your server (i.e., camera), in your client (i.e., media player), or both? Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed...
URL: From finlayson at live555.com Fri Sep 26 11:27:49 2014 From: finlayson at live555.com (Ross Finlayson) Date: Fri, 26 Sep 2014 11:27:49 -0700 Subject: [Live-devel] rtsp client -> h264 decoder In-Reply-To: References: <62D17E33-E2D6-4D22-A95E-A22C858BD955@live555.com> <0CBFB4E8-B3EB-4A1C-ABE8-3C79A832D559@live555.com> Message-ID: > My client, on the other hand only works for the live555MediaServer and one of the cameras. The other camera, after receiving the data and delivering to the decoder/renderer and am seeing green distorted video, complete gobbledygook. There doesn't appear to be any connectivity problems nor buffer size error messages. Nonetheless, I suspect that the problem is a buffer size problem - but in your client, not the server. In your "afterGettingFrame()" function, check the "numTruncatedBytes" parameter. If it's ever non-zero, then you'll need to increase the size of the buffer (in the "MediaSink" subclass that's receiving from your "H264VideoRTPSource" object). I also suggest running the "testRTSPClient" and "openRTSP" demo applications (RTSP clients) against the camera that is causing problems, and against the camera that's not. This may give you hints as to what's going wrong. In particular, I suggest running "openRTSP" as a client for the problematic camera, renaming the output file to have a ".h264" filename suffix, and playing it with VLC. Do you see the same artifacts that you see in your client? > What I am doing is quite similar to what I see in H264or5VideoFileSink::afterGettingFrame. On first frame I am prepending the data with [startcodes][sps] and [startcodes] [pps], (sps,pps was captured earlier on when setting up the subsession). After first frame the data is only prepended with the start codes. Is there something else I should be doing, ie analyzing the nal type and modifying differently? No, what you're doing should be enough. Ross Finlayson Live Networks, Inc.
http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From ppulliam at llnw.com Fri Sep 26 13:40:38 2014 From: ppulliam at llnw.com (Pete Pulliam) Date: Fri, 26 Sep 2014 13:40:38 -0700 Subject: [Live-devel] scaling live555 rtsp proxy to hundreds of viewers. Message-ID: I have an implementation of an rtsp proxy server based on the live555ProxyServer that ships with the live555 source. When connecting only a couple of viewers to a single proxied stream, things look great. I'm hoping to get a few hundred viewers per stream though. What I'm seeing is that there is a large drop-off in quality and kbps as I add more viewers. I see this drop-off both with the custom proxy I've written and the stock proxy that ships with the source. While this is happening, the CPU and memory are only lightly used, and the NIC is not that busy. I'm trying to use this to shield a transcoder that provides an RTSP origin (as well as several other formats). When I run the same test directly against the origin I don't see a drop-off in quality like this for fewer than 50 viewers (and haven't tested with more than 50 viewers). Hitting the origin directly looks great. Is there perhaps something I should have tuned in the proxy, or could tune on the Linux box this is running on, that is causing this drop-off? Advice wanted to improve the scaling of single stream => many viewers. The kbps drop-off looks like (# of clients, bitrate in kbps):
2 977.688820
3 976.666311
4 936.494160
5 940.096328
6 944.486316
7 955.723431
8 945.044076
9 944.803396
10 930.465372
11 925.247045
12 931.066158
13 713.205068
14 486.331708
15 492.473134
16 485.362249
17 483.711453
18 490.636454
19 485.919289
20 487.862608
21 489.587224
Pete -- The information in this message may be confidential. It is intended solely for the addressee(s).
If you are not the intended recipient, any disclosure, copying or distribution of the message, or any action or omission taken by you in reliance on it, is prohibited and may be unlawful. Please immediately contact the sender if you have received this message in error. -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Fri Sep 26 14:09:05 2014 From: finlayson at live555.com (Ross Finlayson) Date: Fri, 26 Sep 2014 14:09:05 -0700 Subject: [Live-devel] scaling live555 rtsp proxy to hundreds of viewers. In-Reply-To: References: Message-ID: <9F9BBB1F-1164-4F80-9C15-584AA30F8969@live555.com> Problems like this are often caused by running into an OS-imposed limit on the number of open files (i.e., sockets) that a process can have open at a time. http://www.live555.com/liveMedia/faq.html#scalability Sometimes this limit tends to be a 'soft limit'; if you approach it, the OS's performance may suffer, but not fail completely. So, I would recommend checking whether you can increase this limit in your OS. Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From jshanab at jfs-tech.com Fri Sep 26 14:05:08 2014 From: jshanab at jfs-tech.com (Jeff Shanab) Date: Fri, 26 Sep 2014 17:05:08 -0400 Subject: [Live-devel] scaling live555 rtsp proxy to hundreds of viewers. In-Reply-To: References: Message-ID: Are you making a copy for each connected viewer? The system I worked on just over a year ago could stream around 400 streams, but never was it 400 of 1 stream; it was 5 or 10 each of 100-200 sources. Even then I used a buffer pool and a shared pointer, so when the last unicast client was sent the packet, it returned itself to the pool. Another thing was my architecture, which may be important. I was taking RTSP in from many sources and serving through my own HTTP protocol bastardization. In this I ran "Processors" of a settable number of sources.
Each Processor was its own usage environment and ran in a thread. This allowed the OS to parallelize the I/O to the buffers to the Ethernet interface. If the event-queue design, which does have the best overall throughput, gets too deep (spread across too many sources), then latency can go up and slow overall throughput to multiple clients. I was able to get to the limit of the gigabit network. I usually ran from 10 to 64 sources per "processor". If the cameras had small frames like D1, it was better to run 64 per processor. (FD_SET on Windows is slower beyond 64 sockets.) If they were high-resolution cameras and had big NAL frames, it was faster to break it into 6 processors of 10 and let the OS utilize the cores. (Threads tend to stay on a single core.) On Fri, Sep 26, 2014 at 4:40 PM, Pete Pulliam wrote: > I have an implementation of an rtsp proxy server based on the > live555ProxyServer that ships with the live555 source. When connecting > only a couple of viewers to a single proxied stream, things look great. > I'm hoping to get a few hundred viewers per stream though. What I'm seeing > is that there is a large drop off in quality and kbps as I add more > viewers. I see this drop off both with the custom proxy I've written and > the stock proxy that ships with the source. > > While this is happening, the CPU and memory are only lightly used, and the > NIC is not that busy. > > I'm trying to use this to shield a transcoder that provides an RTSP origin > (as well as several other formats). When I run the same test directly > against the origin I don't see a drop off in quality like this for viewers > less than 50 (and haven't tested that for more than 50 viewers). Hitting > the origin directly looks great. > > Is there perhaps something I should have tuned in the proxy, or could tune > with the Linux box this is running on that is causing this drop off? > Advice wanted to improve the scaling of single stream => many viewer. > > The kbps drop off looks like (# of clients, bitrate in kbps): > 2 977.688820 > 3 976.666311 > 4 936.494160 > 5 940.096328 > 6 944.486316 > 7 955.723431 > 8 945.044076 > 9 944.803396 > 10 930.465372 > 11 925.247045 > 12 931.066158 > 13 713.205068 > 14 486.331708 > 15 492.473134 > 16 485.362249 > 17 483.711453 > 18 490.636454 > 19 485.919289 > 20 487.862608 > 21 489.587224 > > Pete > > > The information in this message may be confidential. It is intended > solely for > the addressee(s). If you are not the intended recipient, any disclosure, > copying or distribution of the message, or any action or omission taken by > you > in reliance on it, is prohibited and may be unlawful. Please immediately > contact the sender if you have received this message in error. > > > _______________________________________________ > live-devel mailing list > live-devel at lists.live555.com > http://lists.live555.com/mailman/listinfo/live-devel > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ppulliam at llnw.com Sun Sep 28 21:08:36 2014 From: ppulliam at llnw.com (Pete Pulliam) Date: Sun, 28 Sep 2014 21:08:36 -0700 Subject: [Live-devel] ubuntu live555 tuning issue Message-ID: I have an implementation of an rtsp proxy server based on the live555ProxyServer that ships with the live555 source. I'm having problems tuning my hardware to efficiently stream one external source to many players. When running test players on the same machine as the proxy, which is proxying an external rtsp origin (rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov, if you must know ;) I can hit the proxy with 1000 test players, getting 0% packet loss, plus perfect playback from an additional player on a cell phone using T-Mobile's network. The single-threaded proxy is using about 30% of the CPU at that point. That would be fantastic, but that's for 1000 players on localhost + one external.
If, instead, I run a similar test with 20 test players on a different machine on the same subnet plus the cellphone, I get about 1.3% packet loss in the test players and horrible-looking video on the cell phone. With 40 external players I'm up to about 8% packet loss and un-viewable video on the phone. This all implies that I have a problem with the network stack or the networking hardware serving the machines. (Correct me if I'm wrong.) uname -a: Linux xxxx.xxxx.xxxx.net 3.2.0-30-generic #48-Ubuntu SMP Fri Aug 24 16:52:48 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux It's a 1-gig NIC. Here are the tuning things I have tried so far: sysctl -w net.core.rmem_max=8388608 sysctl -w net.core.wmem_max=8388608 sysctl -w net.core.rmem_default=65536 sysctl -w net.core.wmem_default=65536 sysctl -w net.ipv4.tcp_rmem='4096 87380 8388608' sysctl -w net.ipv4.tcp_wmem='4096 65536 8388608' sysctl -w net.ipv4.tcp_mem='8388608 8388608 8388608' sysctl -w net.ipv4.route.flush=1 ulimit -n 65536 I'm approaching my wits' end (not representative of a loss of humor), and would appreciate any advice. Pete -- The information in this message may be confidential. It is intended solely for the addressee(s). If you are not the intended recipient, any disclosure, copying or distribution of the message, or any action or omission taken by you in reliance on it, is prohibited and may be unlawful. Please immediately contact the sender if you have received this message in error. -------------- next part -------------- An HTML attachment was scrubbed... URL: From marcin at speed666.info Mon Sep 29 01:35:09 2014 From: marcin at speed666.info (Marcin) Date: Mon, 29 Sep 2014 10:35:09 +0200 Subject: [Live-devel] openRTSP duration parameter for runtime not reciving time? Message-ID: <542919BD.8050305@speed666.info> Hello Ross, If I understand right, the "-d" duration parameter in openRTSP measures the time since openRTSP started negotiating parameters, right?
Because I sometimes receive shorter dumps of the stream than the specified duration (for example, when the negotiation part takes longer than usual). Marcin From ken.ferguson at cubitech.co.uk Mon Sep 29 02:02:09 2014 From: ken.ferguson at cubitech.co.uk (Ken Ferguson) Date: Mon, 29 Sep 2014 10:02:09 +0100 Subject: [Live-devel] rtsp client -> h264 decoder In-Reply-To: References: <62D17E33-E2D6-4D22-A95E-A22C858BD955@live555.com> <0CBFB4E8-B3EB-4A1C-ABE8-3C79A832D559@live555.com> Message-ID: <54292011.5050602@cubitech.co.uk> Hi Ken, Ross, I just wanted to add that there is one extra possibility here. We have noticed that some cameras, especially HD cameras, split images across more than one slice. In the vast majority of cameras the sequence is: SPS PPS IDR -> SLICE -> SLICE -> SLICE... Where -> indicates the image boundary. However in some HD cameras we have seen: SPS PPS IDR IDR -> SLICE SLICE -> SLICE SLICE -> SLICE SLICE. If you do not group all the correct parts of an image together when you decode each *image* then you can see the green distorted video. Just putting it out there as another possibility. Kind Regards, Ken Ferguson. On 26/09/2014 19:06, Kenneth Forsythe wrote: > > For servers I have two commercial H264 cameras and one instance of live555MediaServer.exe running stock. VLC can connect fine to all three. No problems here (as far as I can tell). > > My client, on the other hand only works for the live555MediaServer and one of the cameras. The other camera, after receiving the data and delivering to the decoder/renderer and am seeing green distorted video, complete gobbledygook. There doesn't appear to be any connectivity problems nor buffer size error messages. > > So I wanted to see if my rtsp code handling is correct or if I need to do some extra work before the data is delivered to the decoder (I am new to this and figuring it out as I go along). As I mentioned before: > > /What I am doing is quite similar to what I see in H264or5VideoFileSink::afterGettingFrame.
On first frame I am prepending the data with [startcodes][sps] and [startcodes] [pps], (sps,pps was captured earlier on when setting up the subsession). After first frame the data is only prepended with the start codes. Is there something else I should be doing, ie analyzing the nal type and modifying differently? Or do you think the problem could be outside the stream and more in the area where I am setting up the video information?/ > > Thanks, > > Ken > > *From:*live-devel [mailto:live-devel-bounces at ns.live555.com] *On Behalf Of *Ross Finlayson > *Sent:* Friday, September 26, 2014 1:24 PM > *To:* LIVE555 Streaming Media - development & use > *Subject:* Re: [Live-devel] rtsp client -> h264 decoder > > OK, so perhaps it would be best if you clarified what specific problem you are seeing. Is it a problem at the server end, at the client end, or both? And where specifically are you using the "LIVE555 Streaming Media" code? In your server (i.e., camera), in your client (i.e., media player), or both? > > Ross Finlayson > Live Networks, Inc. 
> http://www.live555.com/ > > > > _______________________________________________ > live-devel mailing list > live-devel at lists.live555.com > http://lists.live555.com/mailman/listinfo/live-devel -- ------------------------------------------------------------ *Ken Ferguson Cubitech Ltd. * Discovery Court Business Centre 551-553 Wallisdown Road Poole Dorset BH12 5AG United Kingdom Tel. +44 (0)1202 853 237 Email: ken.ferguson at cubitech.co.uk Web: www.cubitech.co.uk This email may contain confidential information. If you are not a named recipient, or believe you have been sent this email in error, please inform Cubitech Ltd immediately. All outbound email is scanned for viruses, but we cannot guarantee that it is virus-free when you receive it. -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Mon Sep 29 04:29:25 2014 From: finlayson at live555.com (Ross Finlayson) Date: Mon, 29 Sep 2014 04:29:25 -0700 Subject: [Live-devel] openRTSP duration parameter for runtime not reciving time?
In-Reply-To: <542919BD.8050305@speed666.info>
References: <542919BD.8050305@speed666.info>
Message-ID: <30BC6ECB-BD22-458F-BF48-002EDC8E523F@live555.com>

> If I understand correctly, the "-d" duration parameter in openRTSP measures the time since openRTSP started negotiating parameters, right? I ask because I sometimes receive shorter dumps of the stream than the specified duration (for example, when the negotiation phase takes longer than usual).

I'm not sure I understand your question, but the timing of the duration specified in the "-d" option begins immediately after "openRTSP" receives a response to the "PLAY" command. Note, however, that if a stream ends earlier than the specified duration, then obviously the recorded file will contain less data.

Ross Finlayson
Live Networks, Inc.
http://www.live555.com/

From james.heliker at gmail.com Fri Sep 26 13:59:42 2014
From: james.heliker at gmail.com (James Heliker)
Date: Fri, 26 Sep 2014 13:59:42 -0700
Subject: [Live-devel] fGranularityInMS? Suggestions?
Message-ID:

Hi Ross -

Can you explain what fGranularityInMS does? I have a custom audio source and I'm trying to track down the causes of glitching in my PCM audio stream; the basics, such as presentation time and duration in microseconds, have already been ironed out.

Thanks for your help!

Kind Regards,
- James

From jeanluc.bonnet at barco.com Tue Sep 30 05:16:15 2014
From: jeanluc.bonnet at barco.com (Bonnet, Jean-Luc)
Date: Tue, 30 Sep 2014 12:16:15 +0000
Subject: [Live-devel] Receive H264 packets from GStreamer
Message-ID: <75C4803536D10B4EB6FA0743F98B6A0A1ECE10E7@KUUMEX10.barco.com>

Hi Ross,

I want to receive H264 video frames over RTP from a GStreamer server. I use H264VideoRTPSource, which works fine; I receive all RTP packets. But GStreamer's rtph264pay component, which generates the H264 payload, sends only one NAL unit per RTP packet.
Is there a way to rebuild the whole video frame on the LIVE555 side?

Thanks for your help.

Jean-Luc Bonnet

From finlayson at live555.com Tue Sep 30 08:08:46 2014
From: finlayson at live555.com (Ross Finlayson)
Date: Tue, 30 Sep 2014 08:08:46 -0700
Subject: [Live-devel] Receive H264 packets from GStreamer
In-Reply-To: <75C4803536D10B4EB6FA0743F98B6A0A1ECE10E7@KUUMEX10.barco.com>
References: <75C4803536D10B4EB6FA0743F98B6A0A1ECE10E7@KUUMEX10.barco.com>
Message-ID: <9C97C183-B278-402A-9379-68197CFA496B@live555.com>

> I want to receive H264 video frames over RTP from a GStreamer server. I use H264VideoRTPSource, which works fine; I receive all RTP packets.
>
> But GStreamer's rtph264pay component, which generates the H264 payload, sends only one NAL unit per RTP packet. Is there a way to rebuild the whole video frame on the LIVE555 side?

It's the job of the decoder to figure out how to render the incoming NAL units - which includes deciding when one video frame (called an 'access unit' in H.264 parlance) ends, and the next one begins. However, as a hint, you can use the value of the RTP packet's 'M' (i.e., 'marker') bit, which is (supposed to be) set for the last RTP packet of an 'access unit' (i.e., video frame). I.e., you can call "RTPSource::curPacketMarkerBit()" to test this. Note, though, that this is only a hint, because this last RTP packet may have been lost (or the server might not have properly set the 'M' bit in the first place).

Ross Finlayson
Live Networks, Inc.
http://www.live555.com/

From cuonglm1489 at gmail.com Mon Sep 29 01:37:18 2014
From: cuonglm1489 at gmail.com (Cường Lê)
Date: Mon, 29 Sep 2014 15:37:18 +0700
Subject: [Live-devel] Using QuickTimeFileSink in lib live555
Message-ID:

Hi all!
I'm using openRTSP to receive data from a camera over RTSP and write it into an .mp4 file. The camera's stream has frame rate = 30, width = 800, height = 600, codec H264. Because the DESCRIBE response contains no frame rate, width, or height information (no SPS and PPS info), I configured openRTSP with frame rate = 15, width = 640, height = 480. I configured it to write a file with a duration of 60 seconds, but when I play the recorded file in VLC I see that its duration is 89 seconds. Can you help me understand how QuickTimeFileSink uses the frame rate, width, and height to record the file, and how to check for a data timeout when using QuickTimeFileSink? Thank you very much!

From hedi_naily at yahoo.fr Mon Sep 29 01:59:02 2014
From: hedi_naily at yahoo.fr (Hedi Naily)
Date: Mon, 29 Sep 2014 09:59:02 +0100
Subject: [Live-devel] Stream multiple files
In-Reply-To: 
References: <1411725014.44676.YahooMailNeo@web172105.mail.ir2.yahoo.com>
Message-ID: <1411981142.44264.YahooMailNeo@web172103.mail.ir2.yahoo.com>

Thanks for your reply.

Will the files be played as if they were one file - I mean, without interruptions at the end of each one? And for mp3 files, where (in which class) should I make the changes you suggested?

______________
Hedi NAILY
Software developer
Tel.: +21622866616

On Friday, 26 September 2014 at 18:15, Ross Finlayson wrote:

On Sep 26, 2014, at 2:50 AM, Hedi Naily wrote:

> Hi,
> As the subject indicates, I want to play a list of files as if they were one big file, without interruption. Is this possible with live555? I'm new to it and want to know whether it's possible and how to achieve it.
> I've read about using the ByteStreamMultiFileSource class, but I can't find a good example of how to use it.

"ByteStreamMultiFileSource::createNew()" takes a NULL-terminated array of file names.
E.g.:

	char** ourFileNames = new char*[3+1];
	ourFileNames[0] = "ourFile0.ts";
	ourFileNames[1] = "ourFile1.ts";
	ourFileNames[2] = "ourFile2.ts";
	ourFileNames[3] = NULL;
	ByteStreamMultiFileSource* source
	  = ByteStreamMultiFileSource::createNew(envir(), ourFileNames);

Alternatively, if your media player client supports playlists, you could give it a playlist consisting of multiple "rtsp://" URLs - one for each stream that you want to play.

Ross Finlayson
Live Networks, Inc.
http://www.live555.com/

From finlayson at live555.com Tue Sep 30 22:20:15 2014
From: finlayson at live555.com (Ross Finlayson)
Date: Tue, 30 Sep 2014 22:20:15 -0700
Subject: [Live-devel] fGranularityInMS? Suggestions?
In-Reply-To: 
References: 
Message-ID:

> Can you explain what fGranularityInMS does?

In "AudioInputDevice" subclasses that are implemented using polling, it specifies how often the device should be polled. (It's used in "WindowsAudioInputDevice", but not in "WAVAudioFileSource".)

Ross Finlayson
Live Networks, Inc.
http://www.live555.com/

From Jasleen at beesys.com Tue Sep 30 23:04:34 2014
From: Jasleen at beesys.com (Jasleen Kaur)
Date: Wed, 1 Oct 2014 06:04:34 +0000
Subject: [Live-devel] Writing RTSP server
Message-ID:

We wish to stream our content over RTSP using Live555. I have downloaded the code and built it using Visual Studio 2010. Our requirement is to use the Live555 libraries to stream real-time data (data generated by our software) over RTSP. Is this possible using the Live555 libraries, or will we have to write some kind of plug-in for Live555?

Best regards,
Jasleen
From finlayson at live555.com Tue Sep 30 23:12:48 2014
From: finlayson at live555.com (Ross Finlayson)
Date: Tue, 30 Sep 2014 23:12:48 -0700
Subject: [Live-devel] Writing RTSP server
In-Reply-To: 
References: 
Message-ID: <6BC06FE5-A6A9-43DD-B275-AF14369CE105@live555.com>

> Our requirement is to use the Live555 libraries to stream real-time data (data generated by our software) over RTSP.
>
> Is this possible using the Live555 libraries

Yes, of course. See http://www.live555.com/liveMedia/faq.html#liveInput-unicast

Ross Finlayson
Live Networks, Inc.
http://www.live555.com/
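[The FAQ entry referenced above describes modeling a live input source on the "DeviceSource" class, i.e. subclassing "FramedSource" and delivering application-generated frames from "doGetNextFrame()". A minimal standalone sketch of that delivery pattern follows; note that `FramedSourceStub` is a stand-in defined here so the sketch compiles without the LIVE555 library, and `LiveDataSource` and `pushFrame` are hypothetical names. Real code would subclass liveMedia's FramedSource and call FramedSource::afterGetting().]

```cpp
#include <cstdint>
#include <cstring>
#include <queue>
#include <vector>

// Stand-in for liveMedia's FramedSource, so this sketch is self-contained.
// In a real application you would subclass FramedSource itself, following
// the "DeviceSource" model in the liveMedia directory.
class FramedSourceStub {
public:
  virtual ~FramedSourceStub() {}
  uint8_t* fTo = nullptr;    // destination buffer supplied by the sink
  unsigned fMaxSize = 0;     // capacity of that buffer
  unsigned fFrameSize = 0;   // bytes actually delivered
  virtual void doGetNextFrame() = 0;
};

// A source fed by your own software: frames are pushed into a queue, and
// doGetNextFrame() copies the next one into the sink's buffer.
class LiveDataSource : public FramedSourceStub {
public:
  void pushFrame(std::vector<uint8_t> frame) { fQueue.push(std::move(frame)); }

  void doGetNextFrame() override {
    if (fQueue.empty()) {
      // Real code would instead wait for an event announcing new data.
      fFrameSize = 0;
      return;
    }
    std::vector<uint8_t>& frame = fQueue.front();
    fFrameSize = frame.size() > fMaxSize ? fMaxSize : (unsigned)frame.size();
    memcpy(fTo, frame.data(), fFrameSize);
    fQueue.pop();
    // Real LIVE555 code would also set fPresentationTime here and then
    // call FramedSource::afterGetting(this) to hand the frame to the sink.
  }

private:
  std::queue<std::vector<uint8_t>> fQueue;
};
```

[A source like this is then wrapped in an "OnDemandServerMediaSubsession" subclass so the RTSP server can stream it; the FAQ entry walks through that step.]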