From 6.45.vapuru at gmail.com Wed Feb 1 08:32:18 2012
From: 6.45.vapuru at gmail.com (Novalis Vapuru)
Date: Wed, 1 Feb 2012 18:32:18 +0200
Subject: [Live-devel] Custom FramedSource for testOnDemandRTSPServer Problem
Message-ID: 

Hi,

I am finally able to modify testOnDemandRTSPServer so that it uses my custom FramedSource.

Tests
-----
Test One:
1. I get a video stream from an H.264 IP camera using openRTSP and write each frame to its own file [Boolean oneFilePerFrame = True], for 100 frames.
2. I rename the frame files video1 ... video100.
3. I stream those frames [1...100] with my modified OnDemandRTSPServer, using my custom FramedSource.
4. I use openRTSP to connect to the modified OnDemandRTSPServer.

Test Two:
Same as above, but for step 4 I use VLC as the client.

Test Results
------------
Test Result One:
openRTSP successfully connects and writes the incoming data to a single file [Boolean oneFilePerFrame = True]; call it Result.h264. Result.h264 is not a playable file. I inspected its binary data with a hex editor and the MediaInfo utility; it does not appear to be a correct H.264 file.

Test Result Two:
VLC connects to the server but does not show any stream.

So here are my questions:
1. Is my test logic correct? Should I expect a playable file on the openRTSP side? Should I expect VLC to play the stream?
2. If my logic is correct, what might I be doing wrong in practice?

Any ideas or suggestions are welcome from anyone who has developed such a custom FramedSource for testOnDemandRTSPServer.
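(One thing worth checking: if the per-frame files contain bare NAL units, with no start codes, then simply concatenating them does not yield a valid Annex B elementary stream. A playable .h264 file needs each NAL unit prefixed with the 0x00000001 start code, with the SPS and PPS units written first. A minimal hedged sketch of that rebuilding step; `appendNalUnit` is a hypothetical helper, not a LIVE555 API:)

```cpp
#include <cassert>
#include <ostream>
#include <sstream>
#include <vector>

// Append one bare NAL unit to an Annex B elementary stream by prepending
// the 4-byte 0x00000001 start code. To rebuild a playable file, write the
// SPS and PPS units first, then each frame's NAL unit in order.
void appendNalUnit(std::ostream& out, const std::vector<unsigned char>& nal) {
    static const unsigned char startCode[4] = {0x00, 0x00, 0x00, 0x01};
    out.write(reinterpret_cast<const char*>(startCode), 4);
    out.write(reinterpret_cast<const char*>(nal.data()),
              static_cast<std::streamsize>(nal.size()));
}
```

(Whether this applies depends on exactly what openRTSP's oneFilePerFrame mode wrote into each file; inspect the first bytes of a frame file with a hex editor to see if the start code is already present.)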
Best Wishes

PS: I attach my modified testOnDemandRTSPServer code below.

Files:
MyCustomFramedSource.h : a custom FramedSource for my custom MyCustomServerMediaSubsession
MyCustomServerMediaSubsession.h : a custom OnDemandServerMediaSubsession
TestOnDemandRTSPServer.cpp : a simple test for my server

More PS: As you can see from my MyCustomFramedSource.cpp, I stream the files on a Windows environment, like this, in deliverFrame:

// Just a test of whether my logic is correct
// -- not production-ready code
void MyCustomFramedSource::deliverFrame()
{
  cout << "Now deliverFrame() is called" << endl;

  if (!isCurrentlyAwaitingData()) {
    cout << "  we're not ready for the data yet" << endl;
    return; // we're not ready for the data yet
  }
  cout << "  we're ready for the data" << endl;

  static int frameNumber = 0;
  if (frameNumber >= 100) {
    cout << "finished" << endl;
    return;
  }

  char filename[256];
  sprintf_s(filename, 256, "D:\\h264\\frame%d.bin", frameNumber++);

  struct stat results;
  if (stat(filename, &results) != 0) {
    cout << "stat() failed for " << filename << endl;
    return;
  }

  unsigned newFrameSize = (unsigned)results.st_size;
  if (newFrameSize > fMaxSize) {
    cout << "newFrameSize > fMaxSize" << endl;
    fFrameSize = fMaxSize;
    fNumTruncatedBytes = newFrameSize - fMaxSize;
  } else {
    cout << "fFrameSize = newFrameSize" << endl;
    fFrameSize = newFrameSize;
  }

  // read the frame directly into fTo; no intermediate buffer needed
  fstream fbin(filename, ios::in | ios::binary);
  fbin.read((char*)fTo, fFrameSize);
  fbin.close();

  gettimeofday(&fPresentationTime, NULL);
  fDurationInMicroseconds = 1000000 / 15; // 15 fps
  FramedSource::afterGetting(this);
}

-------------- next part --------------
A non-text attachment was scrubbed...
Name: MyCustomFramedSource.cpp
Type: text/x-c++src
Size: 2041 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: MyCustomFramedSource.h
Type: text/x-chdr
Size: 640 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: MyCustomServerMediaSubsession.cpp
Type: text/x-c++src
Size: 1092 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: MyCustomServerMediaSubsession.h
Type: text/x-chdr
Size: 802 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: TestOnDemandRTSPServer.cpp
Type: text/x-c++src
Size: 2486 bytes
Desc: not available
URL: 

From sachin.taraiya at gmail.com Wed Feb 1 22:11:22 2012
From: sachin.taraiya at gmail.com (Sachin Taraiya)
Date: Thu, 2 Feb 2012 11:41:22 +0530
Subject: [Live-devel] Sending live stream to Darwin Server
In-Reply-To: References: Message-ID: 

Hi,

I need to send a live streaming feed to a Darwin Server. For this, I have created a program "testH264VideoToDarwin", modeled on "testMPEG4VideoToDarwin", which works fine when reading a test file from disk. Now I need to replace the read-from-disk functionality with reading from a stream captured from a remote IP camera, or from a server streaming remotely (the streaming is done using RTSP).

Can you please suggest which places in the source code I need to change? I have already gone through the FAQ but am not getting a clear idea.

Thanks,
Sachin T.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From Jeremiah.Morrill at econnect.tv Thu Feb 2 11:18:24 2012
From: Jeremiah.Morrill at econnect.tv (Jeremiah Morrill)
Date: Thu, 2 Feb 2012 19:18:24 +0000
Subject: [Live-devel] Custom FramedSource for testOnDemandRTSPServer Problem
In-Reply-To: References: Message-ID: <80C795F72B3CB241A9256DABF0A04EC5022D7529@SN2PRD0710MB395.namprd07.prod.outlook.com>

While I didn't look at your code in detail, I didn't see you mention what you did with the H.264 "subsession->fmtp_spropparametersets()". For VLC to play the stream, I believe you need to send those NAL units. Try parsing the fmtp_spropparametersets with parseSPropParameterSets(...), and write each returned SPropRecord::sPropBytes to a file, as you are doing with your frames. Make sure these get sent out first during playback.

Take my advice with a grain of salt as I'm still fairly new to these libs. :)

-Jer

-----Original Message-----
From: live-devel-bounces at ns.live555.com [mailto:live-devel-bounces at ns.live555.com] On Behalf Of Novalis Vapuru
Sent: Wednesday, February 01, 2012 8:32 AM
To: LIVE555 Streaming Media - development & use
Subject: [Live-devel] Custom FramedSource for testOnDemandRTSPServer Problem

[original message quoted in full; snipped]

From xzha286 at aucklanduni.ac.nz Thu Feb 2 11:42:25 2012
From: xzha286 at aucklanduni.ac.nz (James Zhang)
Date: Fri, 3 Feb 2012 08:42:25 +1300
Subject: [Live-devel] question about parseSPropParameterSets()
In-Reply-To: References: Message-ID: 

Hello everyone

I have a question. Many documents mention (7) sps, (8) pps, (6) sei. What does that mean? A NAL unit is binary data; does 7 mean position 7, or 0111, or something else?
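(For reference: these numbers are NAL unit type codes. nal_unit_type occupies the low five bits of the first byte of each NAL unit; the top three bits are forbidden_zero_bit and nal_ref_idc. So 7 = SPS, 8 = PPS, 6 = SEI, 5 = IDR slice. A short sanity check, with `nalUnitType` as a hypothetical helper:)

```cpp
#include <cassert>

// nal_unit_type is the low 5 bits of the first byte of a NAL unit; the top
// three bits are forbidden_zero_bit (1 bit) and nal_ref_idc (2 bits).
unsigned nalUnitType(unsigned char firstByte) {
    return firstByte & 0x1F;
}
```

(For example, 0x67, a typical first byte of an SPS, yields 7; 0x68 yields 8 for PPS.)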
Thanks a lot
Best regards

James

On 31 January 2012 23:17, James Zhang wrote:
> Hello everyone
>
> Thanks for everybody's suggestions.
>
> It looks like I have made it start to do something instead of "no frame, fail to decode". I keep getting output like the following. Is it because I am putting the wrong NAL units into the decoder?
>
> My NAL structure is 0x1 sps 0x1 pps
>
> [h264 @ 0x700f400]slice type too large (2) at 0 0
> [h264 @ 0x700f400]decode_slice_header error
> [h264 @ 0x700f400]non-existing PPS 0 referenced
> [h264 @ 0x700f400]decode_slice_header error
> [h264 @ 0x700f400]B picture before any references, skipping
> [h264 @ 0x700f400]decode_slice_header error
> [h264 @ 0x700f400]B picture before any references, skipping
> [h264 @ 0x700f400]decode_slice_header error
> [h264 @ 0x700f400]B picture before any references, skipping
> [h264 @ 0x700f400]decode_slice_header error
> [h264 @ 0x700f400]B picture before any references, skipping
> [h264 @ 0x700f400]decode_slice_header error
> [h264 @ 0x700f400]slice type too large (3) at 0 0
> [h264 @ 0x700f400]decode_slice_header error
> [h264 @ 0x700f400]non-existing PPS 0 referenced
> [h264 @ 0x700f400]decode_slice_header error
> [h264 @ 0x700f400]B picture before any references, skipping
> [h264 @ 0x700f400]decode_slice_header error
> [h264 @ 0x700f400]B picture before any references, skipping
> [h264 @ 0x700f400]decode_slice_header error
> [h264 @ 0x700f400]slice type too large (3) at 0 0
> [h264 @ 0x700f400]decode_slice_header error
> [h264 @ 0x700f400]AVC: nal size 0
> [h264 @ 0x700f400]AVC: nal size 8459
>
> Thank you very much
> Best regards
>
> James
>
> On 31 January 2012 22:05, Jon Burgess wrote:
>>>> > SPropRecord * data to a NSData and send into extradata to decode?
>>>
>>> I don't know what a "NSData" is (it's apparently something outside our libraries), but I hope it should be obvious from the implementation of the function in "liveMedia/H264VideoRTPSource.cpp" how it works.
>>
>> "NSData" is basically a glorified byte buffer available on Cocoa/Mac/iOS.
>>
>> Cheers,
>> Jon Burgess
>>
>> _______________________________________________
>> live-devel mailing list
>> live-devel at lists.live555.com
>> http://lists.live555.com/mailman/listinfo/live-devel
>
> --
> James Zhang
> BE (Hons)
> Department of Electrical and Computer Engineering
> THE UNIVERSITY OF AUCKLAND

-- 
James Zhang
BE (Hons)
Department of Electrical and Computer Engineering
THE UNIVERSITY OF AUCKLAND

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From finlayson at live555.com Thu Feb 2 11:49:11 2012
From: finlayson at live555.com (Ross Finlayson)
Date: Thu, 2 Feb 2012 11:49:11 -0800
Subject: [Live-devel] question about parseSPropParameterSets()
In-Reply-To: References: Message-ID: 

> I got a question. Lots of documents mentioned that (7) sps, (8) pps, (6) sei. what is that mean?

They're defined in the H.264 specification: ISO/IEC 14496-10, I think.

Ross Finlayson
Live Networks, Inc.
http://www.live555.com/

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From xzha286 at aucklanduni.ac.nz Thu Feb 2 17:37:40 2012
From: xzha286 at aucklanduni.ac.nz (James Zhang)
Date: Fri, 3 Feb 2012 14:37:40 +1300
Subject: [Live-devel] question about parseSPropParameterSets()
In-Reply-To: References: Message-ID: 

Hello Ross

I am using the live555 test server to stream an H.264 video from the hard disk. From the terminal it looks like:

Play this stream using the URL "rtsp://172.28.31.103:8554/testStream"
Beginning streaming...
Beginning to read from file...
My client gets:

2012-02-03 14:32:37.376 rtsp[51110:6c03] *** __NSAutoreleaseNoPool(): Object 0x6608b80 of class NSConcreteMutableData autoreleased with no pool in place - just leaking
2012-02-03 14:32:37.377 rtsp[51110:6c03] getSDP: --> v=0
o=- 1328232613792590 1 IN IP4 172.28.31.103
s=Session streamed by "testH264VideoStreamer"
i=test.264
t=0 0
a=tool:LIVE555 Streaming Media v2011.12.02
a=type:broadcast
a=control:*
a=source-filter: incl IN IP4 * 172.28.31.103
a=rtcp-unicast: reflection
a=range:npt=0-
a=x-qt-text-nam:Session streamed by "testH264VideoStreamer"
a=x-qt-text-inf:test.264
m=video 18888 RTP/AVP 96
c=IN IP4 232.91.104.231/255
b=AS:500
a=rtpmap:96 H264/90000
a=control:track1
2012-02-03 14:32:37.377 rtsp[51110:6c03] *** __NSAutoreleaseNoPool(): Object 0x661a100 of class __NSArrayM autoreleased with no pool in place - just leaking
2012-02-03 14:32:37.378 rtsp[51110:6c03] *** __NSAutoreleaseNoPool(): Object 0x611f6e0 of class RTSPSubsession autoreleased with no pool in place - just leaking
Sending request: SETUP rtsp://172.28.31.103:8554/testStream/track1 RTSP/1.0
CSeq: 3
Transport: RTP/AVP/TCP;unicast;interleaved=0-1
User-Agent: LIVE555 Streaming Media v2010.04.09

Received SETUP response: RTSP/1.0 461 Unsupported Transport
CSeq: 3
Date: Fri, Feb 03 2012 01:32:37 GMT

2012-02-03 14:32:37.379 rtsp[51110:6c03] HEre!!!!!!
2012-02-03 14:32:37.380 rtsp[51110:6c03] *** __NSAutoreleaseNoPool(): Object 0x6120a30 of class NSCFString autoreleased with no pool in place - just leaking
2012-02-03 14:32:37.380 rtsp[51110:6c03] protocol nameRTP
2012-02-03 14:32:37.381 rtsp[51110:6c03] *** __NSAutoreleaseNoPool(): Object 0x6120a40 of class NSCFString autoreleased with no pool in place - just leaking
2012-02-03 14:32:37.381 rtsp[51110:6c03] codec nameH264
2012-02-03 14:32:37.381 rtsp[51110:6c03] *** __NSAutoreleaseNoPool(): Object 0xb305230 of class NSCFString autoreleased with no pool in place - just leaking
2012-02-03 14:32:37.382 rtsp[51110:6c03] get medium namevideo
2012-02-03 14:32:37.382 rtsp[51110:6c03] video height0
2012-02-03 14:32:37.383 rtsp[51110:6c03] video width18888
2012-02-03 14:32:37.383 rtsp[51110:6c03] *** __NSAutoreleaseNoPool(): Object 0x6120da0 of class NSCFString autoreleased with no pool in place - just leaking
2012-02-03 14:32:37.383 rtsp[51110:6c03] error: --> No RTSP session is currently in progress

Why is that? Did I set up the test server wrong?

Thank you

James

On 3 February 2012 08:49, Ross Finlayson wrote:
>> I got a question. Lots of documents mentioned that (7) sps, (8) pps, (6) sei. what is that mean?
>
> They're defined in the H.264 specification: ISO/IEC 14496-10, I think.
>
> Ross Finlayson
> Live Networks, Inc.
> http://www.live555.com/

-- 
James Zhang
BE (Hons)
Department of Electrical and Computer Engineering
THE UNIVERSITY OF AUCKLAND

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From xzha286 at aucklanduni.ac.nz Thu Feb 2 17:40:39 2012
From: xzha286 at aucklanduni.ac.nz (James Zhang)
Date: Fri, 3 Feb 2012 14:40:39 +1300
Subject: [Live-devel] question about parseSPropParameterSets()
In-Reply-To: References: Message-ID: 

Forgot to mention that I am using testH264VideoStreamer.

On 3 February 2012 14:37, James Zhang wrote:
> Hello Ross
>
> I am using the live555 test server to stream an H.264 video from the hard disk.
> [previous message quoted in full; snipped]

-- 
James Zhang
BE (Hons)
Department of Electrical and Computer Engineering
THE UNIVERSITY OF AUCKLAND

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From finlayson at live555.com Thu Feb 2 17:49:43 2012
From: finlayson at live555.com (Ross Finlayson)
Date: Thu, 2 Feb 2012 17:49:43 -0800
Subject: [Live-devel] question about parseSPropParameterSets()
In-Reply-To: References: Message-ID: 

> Received SETUP response: RTSP/1.0 461 Unsupported Transport

This happens because the server's stream is multicast, but your client is asking for it to be streamed via RTP-over-TCP, which you can't do for multicast streams.

If your client *does not* request RTP-over-TCP streaming (e.g., if you're using "openRTSP", then *don't* use the "-t" flag), then it should work. Alternatively, you could use a unicast RTSP server - e.g., "testOnDemandRTSPServer" or "live555MediaServer" - which supports RTP-over-TCP streaming.

Ross Finlayson
Live Networks, Inc.
http://www.live555.com/

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From finlayson at live555.com Thu Feb 2 17:58:07 2012
From: finlayson at live555.com (Ross Finlayson)
Date: Thu, 2 Feb 2012 17:58:07 -0800
Subject: [Live-devel] question about parseSPropParameterSets()
In-Reply-To: References: Message-ID: <7D2352CE-B64F-4001-9D85-92DBC044D5B5@live555.com>

Also, your client is using a very old version of the "LIVE555 Streaming Media" software. You should upgrade it if you can.

Ross Finlayson
Live Networks, Inc.
http://www.live555.com/

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From Marlon at scansoft.co.za Thu Feb 2 23:14:42 2012
From: Marlon at scansoft.co.za (Marlon Reid)
Date: Fri, 3 Feb 2012 09:14:42 +0200
Subject: [Live-devel] PCM data gets corrupt during transport
Message-ID: <002962EA5927BE45B2FFAB0B5B5D67970A8B1F@SSTSVR1.sst.local>

Hi,

I am experiencing a problem that has me stumped. My application uses Live555 to stream PCM data over a network. The problem is that the data received on the client side is corrupt. The bottom half of the right channel contains noise.
If you take a look at the image hosted here: http://www.freeimagehosting.net/q1cgw you will see what I mean. The first file is the PCM data directly before transport (obtained from the input buffer in AfterGettingFrame1 in my FramedFilter); the second PCM file is the data directly after transport (obtained from the input buffer in AfterGettingFrame1 in my MediaSink).

My only conclusion is that I must be doing something wrong that causes the transport to add this noise to the right channel, but I cannot figure out what. Can you perhaps guide me in a direction? As always, your help is highly appreciated.

Thanks

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ddugger at isdcam.com Thu Feb 2 23:21:59 2012
From: ddugger at isdcam.com (Dirk Dugger)
Date: Thu, 2 Feb 2012 23:21:59 -0800
Subject: [Live-devel] Streaming live h.264 video + PCM/ulaw from a fifo
Message-ID: 

I have been struggling with how to stream live audio+video on an embedded device. What I have is a process which feeds an H.264 ES into a video fifo, say /tmp/video.fifo, and a process which feeds ulaw into an audio fifo, say /tmp/audio.fifo.

If I use the H264*.cpp/hh classes and point them at video.fifo, it sends video fine. I modified the WAV*.cpp/hh classes to read from a raw data fifo (removed the seek stuff and the WAV header stuff), and that works fine as well.

The problem is combining audio+video. I think I did the audio incorrectly: I think it's doing a blocking read on my audio fifo and messing up the video, since the whole shebang is single-threaded. I think what I need is the ByteStream class, which does async reads, so that it tells the scheduler to wait for available read data. Is that correct? Does someone have pointers on how to do this? It should be pretty simple but I don't grok the code (yet).

Thanks,
Dirk

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From finlayson at live555.com Fri Feb 3 00:22:26 2012
From: finlayson at live555.com (Ross Finlayson)
Date: Fri, 3 Feb 2012 00:22:26 -0800
Subject: [Live-devel] PCM data gets corrupt during transport
In-Reply-To: <002962EA5927BE45B2FFAB0B5B5D67970A8B1F@SSTSVR1.sst.local> References: <002962EA5927BE45B2FFAB0B5B5D67970A8B1F@SSTSVR1.sst.local> Message-ID: <06D7DAAC-2D7B-4CF8-8812-4EA76800A776@live555.com>

> I am experiencing a problem that has me stumped. My application uses Live555 to stream PCM data over a network. The problem is that the data received on the client side is corrupt. The bottom half of the right channel contains noise. [...]
>
> My only conclusion is that I must be doing something wrong that causes the transport to add this noise to the right channel during transport, but I cannot figure out what. Can you perhaps guide me in a direction?

One possibility that comes immediately to mind is that you're not setting "fFrameSize" correctly (and/or are not copying the correct number of bytes) in your source or filter objects. Remember that PCM is usually 16 bits per sample, so its size in bytes is 2x the number of samples.

Ross Finlayson
Live Networks, Inc.
http://www.live555.com/

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From finlayson at live555.com Fri Feb 3 00:38:07 2012
From: finlayson at live555.com (Ross Finlayson)
Date: Fri, 3 Feb 2012 00:38:07 -0800
Subject: [Live-devel] Streaming live h.264 video + PCM/ulaw from a fifo
In-Reply-To: References: Message-ID: <3C8318AF-DC07-459A-8E61-28A88B1186CA@live555.com>

> The problem is combining audio+video. I think I did the audio incorrectly. I think it's doing a blocking read on my audio fifo and messing up the video since the whole shebang is single threaded. I think what I need is the ByteStream class which does async, that way it tells the scheduler to wait on available read data.

By default, reading from a "ByteStreamFileSource" object *is* asynchronous - unless you're running Windows (because Windows is brain damaged, and doesn't let you call "select()" on open files). So, if you're running Windows, you'll get asynchronous file reading if you switch to a real operating system :-)

But I'm not convinced that that is your problem. One important thing to get right if you're streaming multiple media types is presentation times - i.e., the "fPresentationTime" values that you set in your media source classes (when delivering each frame). These values *must* be properly synchronized, and *must* also be aligned with 'wall clock' time - i.e., the time that you'd get by calling "gettimeofday()".

Ross Finlayson
Live Networks, Inc.
http://www.live555.com/

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From finlayson at live555.com Fri Feb 3 00:57:33 2012
From: finlayson at live555.com (Ross Finlayson)
Date: Fri, 3 Feb 2012 00:57:33 -0800
Subject: [Live-devel] New LIVE555 version, adds 'NAT hole punching' for RTSP clients
Message-ID: <2DE894A4-59C8-4B0E-B5C6-13CCDCA6FEAF@live555.com>

I have installed a new version (2012.02.03) of the "LIVE555 Streaming Media" code that makes a small change to the behavior of "RTSPClient"s. Now, after receiving the response to each RTSP "SETUP" command, the code will send a couple of short 'dummy' UDP packets towards the server. If the client is behind a NAT box (i.e., its IP address is private), then this improves the likelihood that RTP/UDP packets from the server will successfully reach the client. I.e., this change increases the likelihood that regular RTP/UDP streaming will work, so that the client does not have to resort to requesting RTP-over-TCP streaming instead.

It's important to note that this will work only if it's the RTSP *client* that's behind a NAT. As always, the RTSP *server* must be on the public Internet (i.e., have a public IP address). (We currently have no way of supporting an RTSP server that's behind a NAT.)

Ross Finlayson
Live Networks, Inc.
http://www.live555.com/

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From dekarl at spaetfruehstuecken.org Fri Feb 3 01:06:27 2012
From: dekarl at spaetfruehstuecken.org (Karl Dietz)
Date: Fri, 03 Feb 2012 10:06:27 +0100
Subject: [Live-devel] PCM data gets corrupt during transport
In-Reply-To: <002962EA5927BE45B2FFAB0B5B5D67970A8B1F@SSTSVR1.sst.local> References: <002962EA5927BE45B2FFAB0B5B5D67970A8B1F@SSTSVR1.sst.local> Message-ID: <4F2BA393.60708@spaetfruehstuecken.org>

On 03.02.2012 08:14, Marlon Reid wrote:
> The bottom half of the right channel contains noise. If you take a look
> at the image hosted here: http://www.freeimagehosting.net/q1cgw you
> will see what I mean.

That's a great example of where looking closer at the data will give a hint about what might be wrong. Are you transferring the data as 32-bit floats, or as unsigned integers? In the case of the latter, if you see peaks to -max at regular intervals, it could be an off-by-one where the last element of the buffer is always 0.
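(Ross's sizing point earlier in the thread can be made concrete. For interleaved PCM, the byte size of a frame is samples-per-channel x channels x bytes-per-sample; an fFrameSize computed from the sample count alone would be short by those factors, which is exactly the kind of error that corrupts one channel. A sketch, with `pcmFrameBytes` as a hypothetical helper:)

```cpp
#include <cassert>

// Byte size of an interleaved PCM frame: samples per channel, times the
// number of channels, times bytes per sample (2 for 16-bit PCM).
unsigned pcmFrameBytes(unsigned samplesPerChannel, unsigned numChannels,
                       unsigned bitsPerSample) {
    return samplesPerChannel * numChannels * (bitsPerSample / 8);
}
```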
Regards,
Karl

From Marlon at scansoft.co.za Fri Feb 3 03:02:36 2012
From: Marlon at scansoft.co.za (Marlon Reid)
Date: Fri, 3 Feb 2012 13:02:36 +0200
Subject: [Live-devel] PCM data gets corrupt during transport
In-Reply-To: <06D7DAAC-2D7B-4CF8-8812-4EA76800A776@live555.com> References: <002962EA5927BE45B2FFAB0B5B5D67970A8B1F@SSTSVR1.sst.local> <06D7DAAC-2D7B-4CF8-8812-4EA76800A776@live555.com> Message-ID: <002962EA5927BE45B2FFAB0B5B5D67970A8B24@SSTSVR1.sst.local>

Hi Ross,

I found the problem and managed to fix it, even though I am not entirely sure of the reason for it. I have my own custom AudioRTPSink for PCM data. Removing the MultiFramedRTPSink::doSpecialFrameHandling call from the doSpecialFrameHandling function in my derived AudioRTPSink solved the problem. I can only speculate that one of the parameters of the function might have been incorrect. In any case, removing this call gives me crystal-clear sound, and it seems to have no adverse effect on my application.

Thanks for your help.

___________________________________
Marlon Reid
Web: www.scansoft.co.za
Tel: +27 21 913 8664
Cell: +27 72 359 0902

________________________________
From: live-devel-bounces at ns.live555.com [mailto:live-devel-bounces at ns.live555.com] On Behalf Of Ross Finlayson
Sent: 03 February 2012 10:22 AM
To: LIVE555 Streaming Media - development & use
Subject: Re: [Live-devel] PCM data gets corrupt during transport

I am experiencing a problem that has me stumped. My application uses Live555 to stream PCM data over a network. The problem is that the data received on the client side is corrupt. The bottom half of the right channel contains noise. If you take a look at the image hosted here: http://www.freeimagehosting.net/q1cgw you will see what I mean.
The first file is the PCM data directly before transport (obtained from the input buffer in AfterGettingFrame1 in my FramedFilter); the second PCM file is data directly after transport (obtained from the input buffer in AfterGettingFrame1 in my MediaSink). My only conclusion is that I must be doing something wrong that causes this noise to be added to the right channel during transport, but I cannot figure out what. Can you perhaps guide me in a direction? One possibility that comes immediately to mind is that you're not setting "fFrameSize" correctly (and/or are not copying the correct number of bytes) in your source or filter objects. Remember that PCM is usually 16 bits per sample, so its size in bytes is 2x the number of samples. Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From ddugger at isdcam.com Fri Feb 3 07:57:31 2012 From: ddugger at isdcam.com (Dirk Dugger) Date: Fri, 3 Feb 2012 07:57:31 -0800 Subject: [Live-devel] Streaming live h.264 video + PCM/ulaw from a fifo Message-ID: But the wav source class just uses fread instead of the bytestream so my getframe blocks on my audio fifo. Doesn't that also block video since there is only one thread? What do I do if I have no audio data available? I will check presentation time as well. Ross Finlayson wrote: The problem is combining audio+video. I think I did the audio incorrectly. I think it's doing a blocking read on my audio fifo and messing up the video since the whole shebang is single threaded. I think what I need is the ByteStream class which does async, that way it tells the scheduler to wait on available read data.
By default, reading from a "ByteStreamFileSource" object *is* asynchronous - unless you're running Windows (because Windows is brain damaged, and doesn't let you call "select()" on open files). So, if you're running Windows, you'll get asynchronous file reading if you switch to a real operating system :-) But I'm not convinced that that is your problem. One important thing to get right if you're streaming multiple media types is presentation times - i.e., the "fPresentationTime" values that you set in your media source classes (when delivering each frame). These values *must* be properly synchronized, and *must* also be aligned with 'wall clock' time - i.e., the time that you'd get by calling "gettimeofday()". Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Fri Feb 3 12:45:42 2012 From: finlayson at live555.com (Ross Finlayson) Date: Fri, 3 Feb 2012 12:45:42 -0800 Subject: [Live-devel] Streaming live h.264 video + PCM/ulaw from a fifo In-Reply-To: References: Message-ID: <408977BE-FF91-4BF3-BE67-5DADEC080204@live555.com> > But the wav source class just uses fread instead of the bytestream so my getframe blocks on my audio fifo. OK, I didn't realize that you were using the "WAVAudioFileSource" class. Yes, you're right - that still uses a blocking "fread()", rather than asynchronous reading. I'll need to fix that. Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From George.Campbell at vmsuk.com Fri Feb 3 08:59:18 2012 From: George.Campbell at vmsuk.com (George Campbell) Date: Fri, 3 Feb 2012 16:59:18 -0000 Subject: [Live-devel] Contents of fReceiveBuffer Always NULL In-Reply-To: <473752969FF27C46BC57D6036648674038DDBD@vmsserver.VMSGlas.local> References: <473752969FF27C46BC57D6036648674038DDBD@vmsserver.VMSGlas.local> Message-ID: <473752969FF27C46BC57D6036648674038DDC0@vmsserver.VMSGlas.local> Hi, I'm fairly new to using Live 555, so apologies if my question is a stupid one. I've read through the FAQ and looked for an answer to this through the live-devel mailing list, but I've so far come up with nothing. I've subclassed MediaSink for use with an iPhone app I'm developing; I've based my class on testRTSPClient.cpp, and have created an Objective-C wrapper class around it. I've added lines to send as much debugging information to the debug console as I can, and I can see that my application sends all of the correct DESCRIBE, SETUP and PLAY requests, that the correct replies are returned by my RTSP server device, and that my application is receiving frames from the RTSP server, as I've inserted the code from the original testRTSPClient.cpp in the afterGettingFrame method that displays information about the received frame size and presentation time. However, my video decoding routine had been failing, stating that no frame has been found. I've walked my way back through my code and found that NULL data is being passed into my decoder. Tracing it back further, I've found that when I inspect the contents of my receive buffer (invoked from within continuePlaying() by fSource->getNextFrame(fReceiveBuffer, bufferSize, afterGettingFrame, this, onSourceClosure, this)) the contents are always listed as "0\000", even though the debugging info indicates that a valid frame was received.
The main code within my subclass is as follows; I've removed all the code that calls my decoder for the moment, until I figure out why fReceiveBuffer is always empty:

void afterGettingFrame(unsigned frameSize, unsigned numTruncatedBytes, struct timeval presentationTime, unsigned durationInMicroseconds) {
    if (fStreamId != NULL) envir() << "Stream \"" << fStreamId << "\"; ";
    envir() << fSubsession.mediumName() << "/" << fSubsession.codecName() << ":\tReceived " << frameSize << " bytes";
    if (numTruncatedBytes > 0) envir() << " (with " << numTruncatedBytes << " bytes truncated)";
    envir() << "Packet Data:" << fReceiveBuffer;
    char uSecsStr[6+1];
    sprintf(uSecsStr, "%06u", (unsigned)presentationTime.tv_usec);
    envir() << ".\tPresentation time: " << (unsigned)presentationTime.tv_sec << "." << uSecsStr;
    if (fSubsession.rtpSource() != NULL && !fSubsession.rtpSource()->hasBeenSynchronizedUsingRTCP()) {
        envir() << "!"; // mark the debugging output to indicate that this presentation time is not RTCP-synchronized
    }
    envir() << "\n";
    continuePlaying();
}

static void afterGettingFrame(void* clientData, unsigned frameSize, unsigned numTruncatedBytes, struct timeval presentationTime, unsigned durationInMicroseconds) {
    RTSPSubsessionMediaSink* sink = (RTSPSubsessionMediaSink*)clientData;
    sink->afterGettingFrame(frameSize, numTruncatedBytes, presentationTime, durationInMicroseconds);
}

virtual Boolean continuePlaying() {
    if (fSource == NULL) return False;
    fSource->getNextFrame(fReceiveBuffer, VMSiOS_MEDIA_SINK_RECEIVE_BUFFER_SIZE, afterGettingFrame, this, onSourceClosure, this);
    return True;
}

Have I missed something obvious? George Campbell Product Development Manager Visual Management Systems Limited (VMS) 0141 763 2690 07971 992 459 www.vmsuk.com Disclaimer: Nothing in this email shall bind Visual Management Systems Ltd in any contract or obligation unless expressly stated otherwise. This e-mail is for the intended addressee only.
If you believe you have received it in error then please delete it and notify the sender by return e-mail. In case of doubt about the correctness or completeness of this e-mail please contact the sender. Visual Management Systems Ltd make every effort to virus check any files available for downloading on their website or sent as attachments via email. Visual Management Systems Ltd accept no responsibility whatsoever for any loss or damage that may arise from the receipt of such material. We therefore recommend that all recipients check all received material with their own virus check software. -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Fri Feb 3 13:22:18 2012 From: finlayson at live555.com (Ross Finlayson) Date: Fri, 3 Feb 2012 13:22:18 -0800 Subject: [Live-devel] Contents of fReceiveBuffer Always NULL In-Reply-To: <473752969FF27C46BC57D6036648674038DDC0@vmsserver.VMSGlas.local> References: <473752969FF27C46BC57D6036648674038DDBD@vmsserver.VMSGlas.local> <473752969FF27C46BC57D6036648674038DDC0@vmsserver.VMSGlas.local> Message-ID: <12232CF0-6AA1-4080-A227-9F7D92051CF9@live555.com> Because you've made your own custom modifications to the "testRTSPClient" code, it's hard for me to tell what might be going wrong. So you need to start first with the original, unmodified "testRTSPClient" code.
I suggest going back to that code, and verifying - in the implementation of the (second) "DummySink::afterGettingFrame()" function - that "fReceiveBuffer" is getting reasonable-looking data. Then, and only then, you can think about replacing "DummySink" with your own "MediaSink" subclass - using the existing "DummySink" code as a model. As you do this, it should be easy to figure out which of your changes is causing your code to fail. Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From ddugger at isdcam.com Fri Feb 3 14:49:34 2012 From: ddugger at isdcam.com (Dirk Dugger) Date: Fri, 3 Feb 2012 14:49:34 -0800 Subject: [Live-devel] Streaming live h.264 video + PCM/ulaw from a fifo Message-ID: Yes there is a comment about that in the code. Is it difficult to convert to bytestream? Or is there something quick I can do to tell the caller that I don't have data yet? Ross Finlayson wrote: But the wav source class just uses fread instead of the bytestream so my getframe blocks on my audio fifo. OK, I didn't realize that you were using the "WAVAudioFileSource" class. Yes, you're right - that still uses a blocking "fread()", rather than asynchronous reading. I'll need to fix that. Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Fri Feb 3 15:40:38 2012 From: finlayson at live555.com (Ross Finlayson) Date: Fri, 3 Feb 2012 15:40:38 -0800 Subject: [Live-devel] Streaming live h.264 video + PCM/ulaw from a fifo In-Reply-To: References: Message-ID: <96B580FE-EB1C-4291-94F8-A1D9626A8E9D@live555.com> > Yes there is a comment about that in the code. Is it difficult to convert to bytestream? Or is there something quick I can do to tell the caller that I don't have data yet? 
I'll be updating the "WAVAudioFileSource" code very shortly to do asynchronous file reads (except on Windows). (Because you're a professional user, rather than a lame @gmail n00b, this is a high priority for me :-) Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From Anarchist666 at yandex.ru Fri Feb 3 10:05:28 2012 From: Anarchist666 at yandex.ru (Fazleev Maksim) Date: Fri, 03 Feb 2012 22:05:28 +0400 Subject: [Live-devel] RTSP Server in separate thread Message-ID: <643031328292328@web22.yandex.ru> I'm creating a RTSP Server in a separate thread. My encoder (libx264) produces arrays x264 nal units. When the encoder processes the first frame it produces an array of 4 nal units. Then I pass by one unit in my DeviceSource and call signalNewFrameData every time. But it seems this separate thread does not have time to process, and rewrites them to the main thread. If I need to synchronize threads, then please tell me where I need to do it. From 6.45.vapuru at gmail.com Fri Feb 3 05:06:39 2012 From: 6.45.vapuru at gmail.com (Novalis Vapuru) Date: Fri, 3 Feb 2012 15:06:39 +0200 Subject: [Live-devel] How to increase H264VideoRTPSink buffer size ? [ fMaxSize] Message-ID: How can I increase H264VideoRTPSink buffer size? Novalis Best Wishes From ddugger at isdcam.com Fri Feb 3 17:00:24 2012 From: ddugger at isdcam.com (Dirk Dugger) Date: Fri, 3 Feb 2012 17:00:24 -0800 Subject: [Live-devel] Streaming live h.264 video + PCM/ulaw from a fifo Message-ID: Awesome! Ross Finlayson wrote: Yes there is a comment about that in the code. Is it difficult to convert to bytestream? Or is there something quick I can do to tell the caller that I don't have data yet? I'll be updating the "WAVAudioFileSource" code very shortly to do asynchronous file reads (except on Windows).
(Because you're a professional user, rather than a lame @gmail n00b, this is a high priority for me :-) Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Sat Feb 4 03:15:34 2012 From: finlayson at live555.com (Ross Finlayson) Date: Sat, 4 Feb 2012 03:15:34 -0800 Subject: [Live-devel] Streaming live h.264 video + PCM/ulaw from a fifo In-Reply-To: References: Message-ID: <5B05D0D7-1BE3-40E3-8CCA-633B21F81AD9@live555.com> FYI, I've now installed a new version (2012.02.04) of the "LIVE555 Streaming Media" code that reimplements "WAVAudioFileSource" to do asynchronous reads (except on Windoze). Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From Layne.Berge.1 at my.ndsu.edu Sat Feb 4 15:14:45 2012 From: Layne.Berge.1 at my.ndsu.edu (Layne Berge) Date: Sat, 4 Feb 2012 23:14:45 +0000 Subject: [Live-devel] openRTSP Test Program Fails to Export a File Message-ID: Hi, I'm using the live555 library in order to save an RTSP stream from a PTZ camera. I used the openRTSP program in the "testProgs" directory in order to test whether it would work with the camera I was using. From the output, the program was able to connect to and talk to the camera and request frames from it. However, upon examining the output files from the program, the data was corrupt. My next test was to try outputting a quicktime file. When I tried this, the program started printing Klingon to the screen and producing an annoying beeping. The program crashed and did this with the various other output options (avi, mp4). From the behavior I described, does someone have any idea of what I'm doing wrong? Do I not have a needed library or did I compile something wrong? I did test it using VLC and it was able to view and record the RTSP stream from the camera successfully.
From finlayson at live555.com Sat Feb 4 22:47:25 2012 From: finlayson at live555.com (Ross Finlayson) Date: Sat, 4 Feb 2012 22:47:25 -0800 Subject: [Live-devel] openRTSP Test Program Fails to Export a File In-Reply-To: References: Message-ID: > I'm using the live555 library in order to save an RTSP stream from a PTZ camera. I used the openRTSP program in the "testProgs" directory in order to test whether it would work with the camera I was using. From the output, the program was able to connect to and talk to the camera and request frames from it. However, upon examining the output files from the program, the data was corrupt. Are you sure? You haven't said what RTP payload format the stream was using. Please post the RTSP protocol exchange (the debugging output from "openRTSP"). > My next test was to try outputting a quicktime file. When I tried this, the program started printing Klingon to the screen and producing an annoying beeping. That's because the "-q" (and the "-i" and "-4" options) output to 'stdout'. If you want to write the output to a file, then you will need to redirect 'stdout' to it. From Layne.Berge.1 at my.ndsu.edu Sun Feb 5 12:03:24 2012 From: Layne.Berge.1 at my.ndsu.edu (Layne Berge) Date: Sun, 5 Feb 2012 20:03:24 +0000 Subject: [Live-devel] openRTSP Test Program Fails to Export a File In-Reply-To: References: Message-ID: >> I'm using the live555 library in order to save an RTSP stream from a PTZ camera. I used the openRTSP program in the "testProgs" directory in order to test whether it would work with the camera I was using. From the output, the program was able to connect to and talk to the camera and request frames from it. However, upon examining the output files from the program, the data was corrupt. > > Are you sure? You haven't said what RTP payload format the stream was using. Please post the RTSP protocol exchange (the debugging output from "openRTSP"). 
H:\live\testProgs>openRTSP -d 10 -V -Q rtsp://10.248.123.150/live2.sdp
Opened URL "rtsp://10.248.123.150/live2.sdp", returning a SDP description:
v=0
o=RTSP 1327875302 714 IN IP4 0.0.0.0
s=RTSP server
c=IN IP4 0.0.0.0
t=0 0
a=charset:Shift_JIS
a=range:npt=0-
a=control:*
a=etag:1234567890
m=video 0 RTP/AVP 96
b=AS:0
a=rtpmap:96 MP4V-ES/30000
a=control:trackID=2
a=fmtp:96 profile-level-id=3;config=000001B003000001B2464D5F5047204D6F6465000001B509000001000000012000C48881F4514043C1463F;decode_buf=76800
m=audio 0 RTP/AVP 97
a=control:trackID=3
a=rtpmap:97 mpeg4-generic/44100/2
a=fmtp:97 streamtype=5; profile-level-id=15; mode=AAC-hbr; config=1210;SizeLength=13; IndexLength=3; IndexDeltaLength=3; CTSDeltaLength=0; DTSDeltaLength=0;
Created receiver for "video/MP4V-ES" subsession (client ports 50818-50819)
Created receiver for "audio/MPEG4-GENERIC" subsession (client ports 50820-50821)
Setup "video/MP4V-ES" subsession (client ports 50818-50819)
Setup "audio/MPEG4-GENERIC" subsession (client ports 50820-50821)
Created output file: "video-MP4V-ES-1"
Created output file: "audio-MPEG4-GENERIC-2"
Started playing session
Receiving streamed data (for up to 10.000000 seconds)...
begin_QOS_statistics
subsession video/MP4V-ES
num_packets_received 360
num_packets_lost 3
elapsed_measurement_time 10.005074
kBytes_received_total 388.502000
measurement_sampling_interval_ms 1000
kbits_per_second_min 8.895464
kbits_per_second_ave 310.643979
kbits_per_second_max 726.239417
packet_loss_percentage_min 0.000000
packet_loss_percentage_ave 0.826446
packet_loss_percentage_max 8.333333
inter_packet_gap_ms_min 0.016000
inter_packet_gap_ms_ave 27.116086
inter_packet_gap_ms_max 1275.389000
subsession audio/MPEG4-GENERIC
num_packets_received 427
num_packets_lost 3
elapsed_measurement_time 10.005074
kBytes_received_total 158.056000
measurement_sampling_interval_ms 1000
kbits_per_second_min 4.746848
kbits_per_second_ave 126.380674
kbits_per_second_max 301.392489
packet_loss_percentage_min 0.000000
packet_loss_percentage_ave 0.697674
packet_loss_percentage_max 6.818182
inter_packet_gap_ms_min 0.013000
inter_packet_gap_ms_ave 22.985400
inter_packet_gap_ms_max 1051.451000
end_QOS_statistics
> > >> My next test was to try outputting a quicktime file. When I tried this, the program started printing Klingon to the screen and producing an annoying beeping. > > That's because the "-q" (and the "-i" and "-4" options) output to 'stdout'. If you want to write the output to a file, then you will need to redirect 'stdout' to it. Is there a simple switch to do that or do I have to compile the test program in order to allow that? I did see the "-F" switch for specifying the output file prefix but I don't think that is what you are talking about, correct? Thank you for dealing with my software naivete. From finlayson at live555.com Sun Feb 5 12:53:27 2012 From: finlayson at live555.com (Ross Finlayson) Date: Sun, 5 Feb 2012 12:53:27 -0800 Subject: [Live-devel] openRTSP Test Program Fails to Export a File In-Reply-To: References: Message-ID: >>> I'm using the live555 library in order to save an RTSP stream from a PTZ camera.
I used the openRTSP program in the "testProgs" directory in order to test whether it would work with the camera I was using. From the output, the program was able to connect to and talk to the camera and request frames from it. However, upon examining the output files from the program, the data was corrupt. >> >> Are you sure? You haven't said what RTP payload format the stream was using. Please post the RTSP protocol exchange (the debugging output from "openRTSP").
>
> H:\live\testProgs>openRTSP -d 10 -V -Q rtsp://10.248.123.150/live2.sdp
> Opened URL "rtsp://10.248.123.150/live2.sdp", returning a SDP description:
> v=0
> o=RTSP 1327875302 714 IN IP4 0.0.0.0
> s=RTSP server
> c=IN IP4 0.0.0.0
> t=0 0
> a=charset:Shift_JIS
> a=range:npt=0-
> a=control:*
> a=etag:1234567890
> m=video 0 RTP/AVP 96
> b=AS:0
> a=rtpmap:96 MP4V-ES/30000
> a=control:trackID=2
> a=fmtp:96 profile-level-id=3;config=000001B003000001B2464D5F5047204D6F6465000001B509000001000000012000C48881F4514043C1463F;decode_buf=76800
> m=audio 0 RTP/AVP 97
> a=control:trackID=3
> a=rtpmap:97 mpeg4-generic/44100/2
> a=fmtp:97 streamtype=5; profile-level-id=15; mode=AAC-hbr; config=1210;SizeLength=13; IndexLength=3; IndexDeltaLength=3; CTSDeltaLength=0; DTSDeltaLength=0;
OK, this shows that your received video file contained "MPEG-4 Elementary Stream" data, and that your received audio file contained "MPEG-4 (AAC) Elementary Stream" data. It's unlikely that this data really was 'corrupt'. It's more likely that whatever media player you used to try to play these files failed to do so, because it did not know how to play this type of file. >>> My next test was to try outputting a quicktime file. When I tried this, the program started printing Klingon to the screen and producing an annoying beeping. >> That's because the "-q" (and the "-i" and "-4" options) output to 'stdout'. If you want to write the output to a file, then you will need to redirect 'stdout' to it.
> > Is there a simple switch to do that or do I have to compile the test program in order to allow that? Oh dear. Read http://en.wikipedia.org/wiki/Redirection_%28computing%29 This should be common knowledge for anyone who wants to use software like this. Run
openRTSP -q -w frame-width -h frame-height -f frame-rate rtsp://url > outputfile.mov
Note:
- As noted in the "openRTSP" documentation http://www.live555.com/openRTSP/#quicktime the "-w", "-h" and "-f" options are important, and should not be omitted.
- Note also the "important note" at the end of that section of the documentation.
Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From ddugger at isdcam.com Sun Feb 5 13:53:06 2012 From: ddugger at isdcam.com (Dirk Dugger) Date: Sun, 5 Feb 2012 13:53:06 -0800 Subject: [Live-devel] Streaming live h.264 video + PCM/ulaw from a fifo In-Reply-To: <5B05D0D7-1BE3-40E3-8CCA-633B21F81AD9@live555.com> References: <5B05D0D7-1BE3-40E3-8CCA-633B21F81AD9@live555.com> Message-ID: Aha! All is happy now. Thanks Ross! From: live-devel-bounces at ns.live555.com [mailto:live-devel-bounces at ns.live555.com] On Behalf Of Ross Finlayson Sent: Saturday, February 04, 2012 3:16 AM To: LIVE555 Streaming Media - development & use Subject: Re: [Live-devel] Streaming live h.264 video + PCM/ulaw from a fifo FYI, I've now installed a new version (2012.02.04) of the "LIVE555 Streaming Media" code that reimplements "WAVAudioFileSource" to do asynchronous reads (except on Windoze). Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Sun Feb 5 14:11:04 2012 From: finlayson at live555.com (Ross Finlayson) Date: Sun, 5 Feb 2012 14:11:04 -0800 Subject: [Live-devel] How to increase H264VideoRTPSink buffer size ?
[ fMaxSize] In-Reply-To: References: Message-ID: <064ADA42-C6A3-4093-874D-064B978E2E62@live555.com> > How can I increase H264VideoRTPSink buffer size ? Set the variable OutPacketBuffer::maxSize before you create any "RTPSink" objects (or before you create any "ServerMediaSubsession" objects). Note some examples of this in "testProgs". Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Sun Feb 5 14:18:34 2012 From: finlayson at live555.com (Ross Finlayson) Date: Sun, 5 Feb 2012 14:18:34 -0800 Subject: [Live-devel] RTSP Server in separate thread In-Reply-To: <643031328292328@web22.yandex.ru> References: <643031328292328@web22.yandex.ru> Message-ID: <5286797D-2E81-41E5-AFF8-69215514B45D@live555.com> > I'm creating a RTSP Server in a separate thread. My encoder (libx264) produces arrays x264 nal units. When the encoder processes the first frame it produces an array of 4 nal units. Then I pass by one unit in my DeviceSource and call signalNewFrameData every time. But it seems this separate thread does not have time to process, and rewrites them to the main thread. That last sentence makes no sense. > If I need to synchronize threads, then please tell me where I need to do it. No, you don't need to do any 'thread synchronization'. Note that only one of your threads - the one that contains the RTSP server - is running LIVE555 code. The only exception to this is that other thread(s) may call "TaskScheduler::triggerEvent()", to signal an event that the first thread (the LIVE555 thread) will then handle, within the LIVE555 event loop. If this is what you are doing, then your code should work, provided, of course, that the LIVE555 thread is processing its event loop - i.e., has already called "doEventLoop()". Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... 
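The threading model Ross describes, where worker threads only signal and the single LIVE555 thread does all delivery, can be imitated in miniature without the library. In this standalone sketch, std::atomic stands in for TaskScheduler::triggerEvent() and all names are illustrative; note that the encoder waits for each trigger to be consumed before raising the next, because with the real triggerEvent() several signals raised before the event loop runs are coalesced into one, which is one way queued NAL units can appear to be overwritten.

```cpp
#include <atomic>
#include <chrono>
#include <thread>

std::atomic<int> framesDelivered{0};
std::atomic<bool> eventTriggered{false};
std::atomic<bool> quit{false};

// Stands in for doEventLoop(): the only place where 'delivery' work happens.
void eventLoop() {
    while (!quit.load()) {
        if (eventTriggered.exchange(false)) {
            ++framesDelivered; // a real deliverFrame() would copy a NAL unit here
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(1));
    }
}

// Stands in for the libx264 encoder thread: it only signals, never delivers.
void encoderThread(int nFrames) {
    for (int i = 0; i < nFrames; ++i) {
        eventTriggered.store(true);    // triggerEvent()/signalNewFrameData() analogue
        while (eventTriggered.load())  // wait until the loop has consumed it;
            std::this_thread::yield(); // back-to-back triggers would coalesce
    }
}
```

The practical consequence for the questioner above: either hand over one NAL unit per trigger and wait for it to be consumed, or queue the units in a structure the delivery routine drains completely on each trigger.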
URL: From isambhav at gmail.com Sun Feb 5 04:29:29 2012 From: isambhav at gmail.com (Sambhav) Date: Sun, 5 Feb 2012 17:59:29 +0530 Subject: [Live-devel] Multiple network interfaces Message-ID: Hi, When there are multiple network interfaces on a machine, how does live555 decide which interface to use? Is there a way to specify which interface to use? Regards, Sambhav -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Sun Feb 5 22:33:45 2012 From: finlayson at live555.com (Ross Finlayson) Date: Sun, 5 Feb 2012 22:33:45 -0800 Subject: [Live-devel] Multiple network interfaces In-Reply-To: References: Message-ID: > When there are multiple network interfaces on a machine, how does live555 decide which interface to use? It uses the interface on which multicast routing is enabled - i.e., the interface that has a route for 224.0.0.0/4 > Is there a way to specify which interface to use? The easiest way to do this is to make sure that the interface that you want to use has a route for 224.0.0.0/4, as noted above. Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From 6.45.vapuru at gmail.com Sun Feb 5 23:52:34 2012 From: 6.45.vapuru at gmail.com (Novalis Vapuru) Date: Mon, 6 Feb 2012 09:52:34 +0200 Subject: [Live-devel] How to increase H264VideoRTPSink buffer size ? [ fMaxSize] In-Reply-To: <064ADA42-C6A3-4093-874D-064B978E2E62@live555.com> References: <064ADA42-C6A3-4093-874D-064B978E2E62@live555.com> Message-ID: Does not work in my case. Before I create my custom OnDemandServerMediaSubsession objects (which inherit from ServerMediaSubsession) I set OutPacketBuffer::maxSize to a high value, but in my custom FramedSource's deliverFrame() method it still complains that newFrameSize > fMaxSize. It seems that this does not affect the custom framed source's fMaxSize.
Best Wishes Novalis 2012/2/6 Ross Finlayson : > How can I increase H264VideoRTPSink buffer size ? > > > Set the variable > OutPacketBuffer::maxSize > before you create any "RTPSink" objects (or before you create any > "ServerMediaSubsession" objects). Note some examples of this in > "testProgs". > > > Ross Finlayson > Live Networks, Inc. > http://www.live555.com/ > > > _______________________________________________ > live-devel mailing list > live-devel at lists.live555.com > http://lists.live555.com/mailman/listinfo/live-devel > From Anarchist666 at yandex.ru Mon Feb 6 03:53:47 2012 From: Anarchist666 at yandex.ru (Fazleev Maksim) Date: Mon, 06 Feb 2012 15:53:47 +0400 Subject: [Live-devel] RTSP Server in separate thread In-Reply-To: <5286797D-2E81-41E5-AFF8-69215514B45D@live555.com> References: <643031328292328@web22.yandex.ru> <5286797D-2E81-41E5-AFF8-69215514B45D@live555.com> Message-ID: <7051328529227@web62.yandex.ru> An HTML attachment was scrubbed... URL: From sambhav at saranyu.in Mon Feb 6 08:59:14 2012 From: sambhav at saranyu.in (Kumar Sambhav) Date: Mon, 6 Feb 2012 22:29:14 +0530 Subject: [Live-devel] Hardcoding SDP information Message-ID: Hi, When the RTSP server receives a DESCRIBE command, it generates the SDP by parsing the source data. If the description of the source is known, can the SDP be hard-coded in ServerMediaSession::generateSDPDescription() ? Is there any dependency which has to be taken care of if the SDP is hard-coded? Regards, Sambhav From finlayson at live555.com Mon Feb 6 18:12:02 2012 From: finlayson at live555.com (Ross Finlayson) Date: Mon, 6 Feb 2012 18:12:02 -0800 Subject: [Live-devel] How to increase H264VideoRTPSink buffer size ?
[ fMaxSize] In-Reply-To: References: <064ADA42-C6A3-4093-874D-064B978E2E62@live555.com> Message-ID: <4BED4F21-0906-4E4B-8381-F23BC93E6128@live555.com> > Does not work in my case > Before I create custom OnDemandServerMediaSubsession objects(which > inherits from ServerMediaSubsession) i set to a high value > OutPacketBuffer::maxSize, but > at my custom FramedSource deliverFrame() method it still complain > that newFrameSize > fMaxSize .It seems that this does not affect > custom framed source fmaxSize. "OutPacketBuffer::maxSize" does, indeed, define the size of the buffer that's used by "RTPSink" subclasses. You should make sure that you're not assigning it more than once, by mistake. Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Mon Feb 6 18:43:54 2012 From: finlayson at live555.com (Ross Finlayson) Date: Mon, 6 Feb 2012 18:43:54 -0800 Subject: [Live-devel] Hardcoding SDP information In-Reply-To: References: Message-ID: <64CA137C-21C8-41FC-92FF-FE8E4EB74B54@live555.com> > When RTSP server receives a DESCRIBE command, it generates the SDP by parsing the source data. This depends upon the particular type of data - but for H.264, that's true, by default. > If description of the source is known, can the SDP can be hard-coded in ServerMediaSession::generateSDPDescription() ? I presume that you're talking about the "a=fmtp:..." line, because that's the part of the SDP description that it makes the most sense to want to 'hard code'. This line is set by the virtual function "auxSDPLine()", which many "RTPSink" subclasses redefine, to set this line. Note, for example, the implementation of this function in "H264VideoRTPSink". This implementation sets the "a=fmtp:..." line by reading the SPS and PPS NAL units from the upstream 'framer' object, as the input H.264 data is read. 
If, however, you know this information ahead of time, you could define your own subclass of "H264VideoRTPSink" that reimplements "auxSDPLine()" once again. Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From Marlon at scansoft.co.za Tue Feb 7 06:19:51 2012 From: Marlon at scansoft.co.za (Marlon Reid) Date: Tue, 7 Feb 2012 16:19:51 +0200 Subject: [Live-devel] Garbled sound with multiple listeners Message-ID: <002962EA5927BE45B2FFAB0B5B5D67970A8B3B@SSTSVR1.sst.local> Hi Ross, My application streams an MP3 file and it works fine with just one listener, which for the purpose of my testing is VLC. If I open up a second instance of VLC, connect to my stream and play, the sound on both of the listeners is garbled. Note that both instances of VLC are on the same machine. Is this behaviour normal? I assumed that both instances of VLC would connect to the stream and play it fine, but that appears not to be the case. Any direction will be appreciated. Thank you -------------- next part -------------- An HTML attachment was scrubbed... URL: From jarod at solarisadmin.pl Tue Feb 7 01:26:29 2012 From: jarod at solarisadmin.pl (Lukasz Chelek) Date: Tue, 07 Feb 2012 10:26:29 +0100 Subject: [Live-devel] no SIGHUP or SIGUSR1 ... any chance to rescue ? Message-ID: Hi, I'm trying to build a system that will be able to save h.264 streams to a small ARM-embedded Linux platform. It works perfectly even on an ARM926EJ-S (saving a 1080 HD stream to .mov or .mp4 causes almost no visible impact to the platform). However there is one caveat which I have faced, that is well described in the openRTSP documentation: unexpected termination of the application (like powering off the entire platform). Does it mean that in the case of H.264, if openRTSP is not terminated with the appropriate signal, there is no chance to recover the mp4 or mov file?
Maybe some of you have already tried to recover a corrupted openRTSP file? I'm simply trying to save a stream that I can open even if the file is closed without SIGHUP/SIGUSR1 ... Sorry for putting this post here (I know it's the devel list), but I couldn't find a better place to ask. Regards Lukas -------------- next part -------------- An HTML attachment was scrubbed... URL: From gahon0104 at yahoo.com.tw Tue Feb 7 04:23:27 2012 From: gahon0104 at yahoo.com.tw (gahon) Date: Tue, 7 Feb 2012 20:23:27 +0800 (CST) Subject: [Live-devel] RTSP over TCP streaming memory leak. Message-ID: <1328617407.3545.YahooMailNeo@web74202.mail.tp2.yahoo.com> Dear all, MPlayer SVN-r34578-4.5.1, FFMPEG 0.10, live555 rev live.2012.02.03. When I play a stream via RTSP, I found that memory usage increases slowly (every several minutes it grows by about 0.1%). At first it uses 1.0% of memory, but after playing for several hours it uses 14%. If I play a local file, it always stays at 1.0%. Does somebody know what is happening? Why does memory usage keep growing over a long test? Thanks. John -------------- next part -------------- An HTML attachment was scrubbed... URL: From Gaurav.Dwivedi at hcl.com Tue Feb 7 06:16:46 2012 From: Gaurav.Dwivedi at hcl.com (Gaurav Dwivedi -ERS, HCL Tech) Date: Tue, 7 Feb 2012 19:46:46 +0530 Subject: [Live-devel] Injecting live stream to Darwin Server Message-ID: Hi, I want to capture an RTSP stream from an IP camera, then retransmit it using Darwin Server (since I need HTTP tunneling support). I am able to capture/play the stream using the "openRTSP" client test program, and I have the "testH264VideoToDarwin" program to inject test.sdp into the Darwin server. These two applications work fine independently; can anyone please suggest how I should integrate them, so that I can capture RTSP packets from the IP camera and then inject them into Darwin for further streaming?
Thanks and Regards Gaurav Dwivedi ________________________________ ::DISCLAIMER:: ----------------------------------------------------------------------------------------------------------------------- The contents of this e-mail and any attachment(s) are confidential and intended for the named recipient(s) only. It shall not attach any liability on the originator or HCL or its affiliates. Any views or opinions presented in this email are solely those of the author and may not necessarily reflect the opinions of HCL or its affiliates. Any form of reproduction, dissemination, copying, disclosure, modification, distribution and / or publication of this message without the prior written consent of the author of this e-mail is strictly prohibited. If you have received this email in error please delete it and notify the sender immediately. Before opening any mail and attachments please check them for viruses and defect. ----------------------------------------------------------------------------------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Tue Feb 7 16:13:12 2012 From: finlayson at live555.com (Ross Finlayson) Date: Wed, 8 Feb 2012 10:13:12 +1000 Subject: [Live-devel] no SIGHUP or SIGUSR1 ... any chance to rescue ? In-Reply-To: References: Message-ID: <3A143D01-D00A-472A-95BD-88F487C8B10E@live555.com> > Does it mean that in case of H264 if openRTSP is not terminated with appropriate signal there is no chance to recover mp4 or mov file. > Unfortunately not, because to work, these files (".mov" or ".mp4" format) need a structure (a 'moov atom') whose contents aren't known until the entire file has finished being recorded. Therefore, "openRTSP" (actually, the "QuickTimeFileSink" object) needs to end cleanly, in order for this structure to be written to the file properly. 
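Ross's explanation can be checked mechanically: a ".mov"/".mp4" file is a sequence of top-level 'boxes', and a recording that dies mid-stream typically ends inside the 'mdat' box with no 'moov' ever written. The following minimal, self-contained box scanner illustrates the idea; it is a simplification (32-bit box sizes only, no 64-bit 'largesize' handling) and is not LIVE555 code:

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Walk the top-level boxes of an ISO base-media (.mp4/.mov) byte stream
// and report whether a 'moov' box is present. Each box starts with a
// 4-byte big-endian size followed by a 4-byte type tag.
bool hasMoovBox(const std::vector<uint8_t>& data) {
    size_t pos = 0;
    while (pos + 8 <= data.size()) {
        uint32_t size = (uint32_t(data[pos]) << 24) | (uint32_t(data[pos + 1]) << 16)
                      | (uint32_t(data[pos + 2]) << 8) | uint32_t(data[pos + 3]);
        std::string type(reinterpret_cast<const char*>(&data[pos + 4]), 4);
        if (type == "moov") return true;
        if (size < 8) break;  // malformed or truncated box header
        pos += size;          // skip to the next top-level box
    }
    return false;
}
```

A recording cut off by a power failure ends inside 'mdat', so a scan like this never finds a 'moov', which is why such a file cannot be played back without reconstructing the sample tables by other means.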
(Note that - as an alternative to signaling "openRTSP" to end recording - you can also give "openRTSP" the "-d " option.) Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Tue Feb 7 16:24:56 2012 From: finlayson at live555.com (Ross Finlayson) Date: Wed, 8 Feb 2012 10:24:56 +1000 Subject: [Live-devel] Garbled sound with multiple listeners In-Reply-To: <002962EA5927BE45B2FFAB0B5B5D67970A8B3B@SSTSVR1.sst.local> References: <002962EA5927BE45B2FFAB0B5B5D67970A8B3B@SSTSVR1.sst.local> Message-ID: > My application streams an MP3 file and it works fine with just one listener, which for the purpose of my testing is VLC. If I open up a second instance of VLC, connect to my stream and play the sound on both of the listeners is garbled. Note that both instances of VLC are on the same machine. > > Is this behaviour normal? I assumed that both instances of VLC will connect to the stream and play it fine, but that appears not to be the case. There's a lot of information missing here. You haven't said much about what your application does. Does it stream the MP3 via multicast (e.g., like "testMP3Streamer"), or does it stream via unicast (from a RTSP server)? And are you really streaming from a MP3 file, or from a single MP3 source (in which case don't forget to set "reuseFirstSource" to True if you're streaming unicast from a RTSP server)? Also, you say that you are running two instances of VLC on the same computer. I have no idea what VLC is supposed to do with audio in this case (do you have separate sound cards on this computer, or just one??), but any problems that you have with audio in this situation will likely have nothing to do with our software. Why not first try to run your two instances of VLC on *different* machines? And remember that you can also use "openRTSP" to test RTSP client behavior (and in a way that involves only our software, not VLC). 
Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Tue Feb 7 16:26:08 2012 From: finlayson at live555.com (Ross Finlayson) Date: Wed, 8 Feb 2012 10:26:08 +1000 Subject: [Live-devel] Injecting live stream to Darwin Server In-Reply-To: References: Message-ID: > I want to Capture a RTSP stream from an IP camera, then retransmit it using Darwin Server (Since I need HTTP tunneling support). Do you realize that our own RTSP server implementation supports HTTP tunneling? You don't need a 3rd party server (which we don't support) to do this. Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From Layne.Berge.1 at my.ndsu.edu Tue Feb 7 17:03:37 2012 From: Layne.Berge.1 at my.ndsu.edu (Layne Berge) Date: Wed, 8 Feb 2012 01:03:37 +0000 Subject: [Live-devel] openRTSP Test Program Fails to Export a File In-Reply-To: References: Message-ID: <97C2C9BC-7CED-4F6D-9344-E887E50E44EC@my.ndsu.edu> I'm using the live555 library in order to save an RTSP stream from a PTZ camera. I used the openRTSP program in the "testProgs" directory in order to test whether it would work with the camera I was using. From the output, the program was able to connect to and talk to the camera and request frames from it. However, upon examining the output files from the program, the data was corrupt. Are you sure? You haven't said what RTP payload format the stream was using. Please post the RTSP protocol exchange (the debugging output from "openRTSP"). 
H:\live\testProgs>openRTSP -d 10 -V -Q rtsp://10.248.123.150/live2.sdp Opened URL "rtsp://10.248.123.150/live2.sdp", returning a SDP description: v=0 o=RTSP 1327875302 714 IN IP4 0.0.0.0 s=RTSP server c=IN IP4 0.0.0.0 t=0 0 a=charset:Shift_JIS a=range:npt=0- a=control:* a=etag:1234567890 m=video 0 RTP/AVP 96 b=AS:0 a=rtpmap:96 MP4V-ES/30000 a=control:trackID=2 a=fmtp:96 profile-level-id=3;config=000001B003000001B2464D5F5047204D6F6465000001B509000001000000012000C48881F4514043C1463F;decode_buf=76800 m=audio 0 RTP/AVP 97 a=control:trackID=3 a=rtpmap:97 mpeg4-generic/44100/2 a=fmtp:97 streamtype=5; profile-level-id=15; mode=AAC-hbr; config=1210; SizeLength=13; IndexLength=3; IndexDeltaLength=3; CTSDeltaLength=0; DTSDeltaLength=0; OK, this shows that your received video file contained "MPEG-4 Elementary Stream" data, and that your received audio file contained "MPEG-4 (AAC) Elementary Stream" data. It's unlikely that this data really was 'corrupt'. It's more likely that whatever media player you used to try to play these files failed to do so, because it did not know how to play this type of file. My next test was to try outputting a quicktime file. When I tried this, the program started printing Klingon to the screen and producing an annoying beeping. That's because the "-q" (and the "-i" and "-4" options) output to 'stdout'. If you want to write the output to a file, then you will need to redirect 'stdout' to it. Is there a simple switch to do that or do I have to compile the test program in order to allow that? Oh dear. Read http://en.wikipedia.org/wiki/Redirection_%28computing%29 This should be common knowledge for anyone who wants to use software like this. Run openRTSP -q -w frame-width -h frame-height -f frame-rate rtsp://url > outputfile.mov Alright, redirecting stdout, which I now know how to do, solved my problem. I am able to successfully output a file which plays with both video and audio.
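The redirection being discussed is a shell feature, not an openRTSP option, so the mechanism can be tried with any program (the file name out.bin here is arbitrary):

```shell
# '>' tells the shell to connect the program's stdout to a file;
# the program itself just writes to stdout and never sees the filename.
# openRTSP's -q / -i / -4 output modes rely on exactly this mechanism.
printf 'stream bytes' > out.bin
cat out.bin
```

The openRTSP command above has the same shape: everything the program prints on stdout ends up in outputfile.mov.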
Note: - As noted in the "openRTSP" documentation http://www.live555.com/openRTSP/#quicktime the "-w ", "-h " and "-f " options are important, and should not be omitted. - Note also the "important note" at the end of that section of the documentation. Is it possible to not specify a frame-rate? The reason I ask is that the camera dynamically changes its frame-rate based on network traffic. If I guess wrong, then the video track is not the same length as the audio track. Would the "-y" switch fix this, or should I figure out a way to set a constant frame-rate on the camera (which may be interesting, since it's made by Cisco)? Again, thank you for all your help! I've learned a lot from this project. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Gaurav.Dwivedi at hcl.com Tue Feb 7 21:37:56 2012 From: Gaurav.Dwivedi at hcl.com (Gaurav Dwivedi -ERS, HCL Tech) Date: Wed, 8 Feb 2012 11:07:56 +0530 Subject: [Live-devel] Injecting live stream to Darwin Server In-Reply-To: References: Message-ID: Hi Ross, OK, I understand that I can use the LIVE555 RTSP server for HTTP tunneling, but the basic question remains: how do I read from a URL (rtsp://) instead of reading from a file, and send the stream via the RTSP server? Thanks and Regards Gaurav Dwivedi From: live-devel-bounces at ns.live555.com [mailto:live-devel-bounces at ns.live555.com] On Behalf Of Ross Finlayson Sent: Wednesday, February 08, 2012 5:56 AM To: LIVE555 Streaming Media - development & use Subject: Re: [Live-devel] Injecting live stream to Darwin Server Importance: High I want to Capture a RTSP stream from an IP camera, then retransmit it using Darwin Server (Since I need HTTP tunneling support). Do you realize that our own RTSP server implementation supports HTTP tunneling? You don't need a 3rd party server (which we don't support) to do this. Ross Finlayson Live Networks, Inc.
http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From yuri.timenkov at itv.ru Tue Feb 7 22:19:41 2012 From: yuri.timenkov at itv.ru (Yuri Timenkov) Date: Wed, 08 Feb 2012 10:19:41 +0400 Subject: [Live-devel] Injecting live stream to Darwin Server In-Reply-To: References: Message-ID: <4F3213FD.9040907@itv.ru> Hi Gaurav, I don't understand why you can't connect a Darwin server directly to your RTSP source. Why do you need an RTSP proxy? But if you really need this, you can use VLC: just open the RTSP URL and stream it: go to Media -> Streaming (Ctrl + S), choose Network as the source, and enter your RTSP source URL. After that, select the appropriate streaming parameters (port, URL, disable transcoding, etc.). If you need to do this in your program, then you have to write your own RTPSource receiving data from an RTPSink (e.g. using a pipe).
Regards, Yuri On 08.02.2012 9:37, Gaurav Dwivedi -ERS, HCL Tech wrote: > > Hi Ross, > > Ok I understand that I can use LIVE Media RTSP server for http > tunneling, but the basic question still is that how do I read from an > URL (rtsp://) instead of reading from file and send the stream via > RTSP server. > > Thanks and Regards > > Gaurav Dwivedi > > *From:*live-devel-bounces at ns.live555.com > [mailto:live-devel-bounces at ns.live555.com] *On Behalf Of *Ross Finlayson > *Sent:* Wednesday, February 08, 2012 5:56 AM > *To:* LIVE555 Streaming Media - development & use > *Subject:* Re: [Live-devel] Injecting live stream to Darwin Server > *Importance:* High > > I want to Capture a RTSP stream from an IP camera, then retransmit > it using Darwin Server (Since I need HTTP tunneling support). > > Do you realize that our own RTSP server implementation supports HTTP > tunneling? You don't need a 3rd party server (which we don't support) > to do this. > > Ross Finlayson > Live Networks, Inc. > http://www.live555.com/ > > > ------------------------------------------------------------------------ > ::DISCLAIMER:: > ----------------------------------------------------------------------------------------------------------------------- > > The contents of this e-mail and any attachment(s) are confidential and > intended for the named recipient(s) only. > It shall not attach any liability on the originator or HCL or its > affiliates. Any views or opinions presented in > this email are solely those of the author and may not necessarily > reflect the opinions of HCL or its affiliates. > Any form of reproduction, dissemination, copying, disclosure, > modification, distribution and / or publication of > this message without the prior written consent of the author of this > e-mail is strictly prohibited. If you have > received this email in error please delete it and notify the sender > immediately. Before opening any mail and > attachments please check them for viruses and defect. 
> > ----------------------------------------------------------------------------------------------------------------------- > > > _______________________________________________ > live-devel mailing list > live-devel at lists.live555.com > http://lists.live555.com/mailman/listinfo/live-devel -------------- next part -------------- An HTML attachment was scrubbed... URL: From jadder.antonio at gmail.com Tue Feb 7 19:42:32 2012 From: jadder.antonio at gmail.com (Jadder Antonio Moya Urbáez) Date: Tue, 7 Feb 2012 23:42:32 -0400 Subject: [Live-devel] how to install live555 library on a normal IDE like CodeBlocks Message-ID: Hi Members, I am new on this list and also new to this library. I want to know how to install the Live555 library in my CodeBlocks IDE, which uses the GCC compiler. Thanks -- *webmaster Jadder *** -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Tue Feb 7 22:46:01 2012 From: finlayson at live555.com (Ross Finlayson) Date: Wed, 8 Feb 2012 16:46:01 +1000 Subject: [Live-devel] openRTSP Test Program Fails to Export a File In-Reply-To: <97C2C9BC-7CED-4F6D-9344-E887E50E44EC@my.ndsu.edu> References: <97C2C9BC-7CED-4F6D-9344-E887E50E44EC@my.ndsu.edu> Message-ID: <7EE52B71-E0B0-4F36-ABB3-18F5D47D153C@live555.com> > Is it possible to not specify a frame-rate? Of course. However - due to the limitations of the ".mov"/".mp4" file format - if you omit this parameter (or if the frame rate varies), it's unlikely that you'll end up with a file that you'll be able to play. Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed...
URL: From Marlon at scansoft.co.za Tue Feb 7 22:55:03 2012 From: Marlon at scansoft.co.za (Marlon Reid) Date: Wed, 8 Feb 2012 08:55:03 +0200 Subject: [Live-devel] Garbled sound with multiple listeners In-Reply-To: References: <002962EA5927BE45B2FFAB0B5B5D67970A8B3B@SSTSVR1.sst.local> Message-ID: <002962EA5927BE45B2FFAB0B5B5D67970A8B3D@SSTSVR1.sst.local> Hi Ross, My application streams a single mp3 source via unicast. Several instances of VLC are capable of playing a RTSP stream on a single machine with a single sound card. This was tested by streaming from an instance of VLC and connecting several other instances of VLC to this source. In any case if someone else has this problem the solution is to ensure that reuseFirstSource is true. There was a bug in my code that passed a false value to reuseFirstSource and hence the problem. Thanks Ross. ________________________________ From: live-devel-bounces at ns.live555.com [mailto:live-devel-bounces at ns.live555.com] On Behalf Of Ross Finlayson Sent: 08 February 2012 02:25 AM To: LIVE555 Streaming Media - development & use Subject: Re: [Live-devel] Garbled sound with multiple listeners My application streams an MP3 file and it works fine with just one listener, which for the purpose of my testing is VLC. If I open up a second instance of VLC, connect to my stream and play the sound on both of the listeners is garbled. Note that both instances of VLC are on the same machine. Is this behaviour normal? I assumed that both instances of VLC will connect to the stream and play it fine, but that appears not to be the case. There's a lot of information missing here. You haven't said much about what your application does. Does it stream the MP3 via multicast (e.g., like "testMP3Streamer"), or does it stream via unicast (from a RTSP server)? And are you really streaming from a MP3 file, or from a single MP3 source (in which case don't forget to set "reuseFirstSource" to True if you're streaming unicast from a RTSP server)? 
Also, you say that you are running two instances of VLC on the same computer. I have no idea what VLC is supposed to do with audio in this case (do you have separate sound cards on this computer, or just one??), but any problems that you have with audio in this situation will likely have nothing to do with our software. Why not first try to run your two instances of VLC on *different* machines? And remember that you can also use "openRTSP" to test RTSP client behavior (and in a way that involves only our software, not VLC). Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From xzha286 at aucklanduni.ac.nz Wed Feb 8 00:32:49 2012 From: xzha286 at aucklanduni.ac.nz (James Zhang) Date: Wed, 8 Feb 2012 21:32:49 +1300 Subject: [Live-devel] question about parseSPropParameterSets() In-Reply-To: <7D2352CE-B64F-4001-9D85-92DBC044D5B5@live555.com> References: <7D2352CE-B64F-4001-9D85-92DBC044D5B5@live555.com> Message-ID: Hello Ross, Sorry for the late reply, and thanks a lot for your answer. I have tried the unicast stream; the server said: "h264ESVideoTest" stream, from the file "test.264" Play this stream using the URL "rtsp://192.168.1.4:8554/h264ESVideoTest" I do have a test.264 file in that folder. The client said: *2012-02-08 21:22:20.324 rtsp[969:6c03] getSDP: --> (null)* So the whole program crashed.
I also use VLC player to play the link directly, I got some errors like: macosx debug: input has stopped, refreshing interface main debug: dying input main debug: dying input macosx debug: input has changed, refreshing interface main debug: dying input live555 debug: DESCRIBE failed with 0: Failed to read response: Invalid argument live555 debug: connection timeout, retrying live555 error: Failed to connect with rtsp:// 192.168.1.4:8554/h264ESVideoTest main warning: no access_demux module matching "rtsp" could be loaded main debug: TIMER module_Need() : 60004.500 ms - Total 60004.500 ms / 1 intvls (Avg 60004.496 ms) main debug: creating access 'rtsp' path='192.168.1.4:8554/h264ESVideoTest' main debug: looking for access module: 1 candidate main debug: net: connecting to 192.168.1.4 port 8554 main debug: connection: Operation now in progress main debug: connection succeeded (socket = 15) access_realrtsp debug: rtsp connected access_realrtsp warning: only real/helix rtsp servers supported for now main warning: no access module matching "rtsp" could be loaded main debug: TIMER module_Need() : 2.376 ms - Total 2.376 ms / 1 intvls (Avg 2.376 ms) main debug: waitpipe: object killed main error: open of `rtsp://192.168.1.4:8554/h264ESVideoTest' failed: could not create access main debug: thread ended main debug: dead input main debug: thread 2967334912 joined (../../src/playlist/engine.c:244) main debug: starting new item main debug: processing request item h264ESVideoTest node Playlist skip 0 main debug: rebuilding array of current - root Playlist main debug: rebuild done - 1 items, index 0 main debug: creating new input thread main debug: Creating an input for 'h264ESVideoTest' main debug: waiting for thread initialization main debug: thread started main debug: `rtsp://192.168.1.4:8554/h264ESVideoTest' gives access `rtsp' demux `' path `192.168.1.4:8554/h264ESVideoTest' main debug: creating demux: access='rtsp' demux='' path=' 192.168.1.4:8554/h264ESVideoTest' main 
debug: thread 2958893056 (input) created at priority 22 (../../src/input/input.c:370) main debug: looking for access_demux module: 1 candidate macosx debug: input has stopped, refreshing interface main debug: TIMER input launching for 'h264ESVideoTest' : 60023.059 ms - Total 60023.059 ms / 1 intvls (Avg 60023.055 ms) main debug: no fetch required for h264ESVideoTest (art currently (null)) macosx debug: addition to non-blocking error panel received macosx debug: input has changed, refreshing interface Sorry for the long message. Thanks a lot for your help Best Regards James On 3 February 2012 14:58, Ross Finlayson wrote: > Also, your client is using a very old version of the "LIVE555 Streaming > Media" software. You should upgrade it if you can. > > > Ross Finlayson > Live Networks, Inc. > http://www.live555.com/ > > > _______________________________________________ > live-devel mailing list > live-devel at lists.live555.com > http://lists.live555.com/mailman/listinfo/live-devel > > -- James Zhang BE (Hons) Department of Electrical and Computer Engineering THE UNIVERSITY OF AUCKLAND -------------- next part -------------- An HTML attachment was scrubbed... URL: From Layne.Berge.1 at my.ndsu.edu Wed Feb 8 05:39:19 2012 From: Layne.Berge.1 at my.ndsu.edu (Layne Berge) Date: Wed, 8 Feb 2012 13:39:19 +0000 Subject: [Live-devel] openRTSP Test Program Fails to Export a File In-Reply-To: <7EE52B71-E0B0-4F36-ABB3-18F5D47D153C@live555.com> References: <97C2C9BC-7CED-4F6D-9344-E887E50E44EC@my.ndsu.edu> <7EE52B71-E0B0-4F36-ABB3-18F5D47D153C@live555.com> Message-ID: <73CEDE31-8BBE-45DD-B9E4-504577AAF9A9@my.ndsu.edu> >> Is it possible to not specify a frame-rate? > > Of course. However - due to the limitations of the ".mov"/".mp4" file format - if you omit this parameter (or if the frame rate varies), it's unlikely that you'll end up with a file that you'll be able to play. OK, I will work with the camera to maintain a fixed frame-rate. Thanks again for all of your help! 
From jmm55 at psu.edu Wed Feb 8 13:42:23 2012 From: jmm55 at psu.edu (Justin Miller) Date: Wed, 8 Feb 2012 16:42:23 -0500 Subject: [Live-devel] OpenRTSP creating a flattened / self contained file Message-ID: When I record a streamed file from an IP camera the file has perfect audio sync the first time through but upon scrubbing it loses sync and throws a lot of compression artifacts. I found that if I flatten the file or make the movie self contained, essentially moving the tracks from the resource fork to the data fork, I get a perfect file that can be scrubbed. Is there any way to get OpenRTSP to flatten the file or write directly to the data fork from a live stream so that I can skip this extra step? Thanks, Justin M. Miller Multimedia Specialist Media Commons Project Manager 212A Rider bld. University Park, PA 16802 814-863-7764 http://mediacommons.psu.edu -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Wed Feb 8 23:12:42 2012 From: finlayson at live555.com (Ross Finlayson) Date: Thu, 9 Feb 2012 17:12:42 +1000 Subject: [Live-devel] OpenRTSP creating a flattened / self contained file In-Reply-To: References: Message-ID: > When I record a streamed file from an IP camera the file has perfect audio sync the first time through but upon scrubbing it loses sync and throws a lot of compression artifacts. I found that if I flatten the file or make the movie self contained, essentially moving the tracks from the resource fork to the data fork, I get a perfect file that can be scrubbed. Is there any way to get OpenRTSP to flatten the file or write directly to the data fork from a live stream so that I can skip this extra step? No, but feel free to propose improvements to the implementation of our "QuickTimeFileSink" class, which we use to implement ".mov"/".mp4" file output. Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jim at janteq.com Wed Feb 8 15:39:32 2012 From: jim at janteq.com (Jim Van Vorst) Date: Wed, 8 Feb 2012 15:39:32 -0800 Subject: [Live-devel] openRTSP output .mp4 to a pipe Message-ID: <001d01cce6ba$e96fb720$bc4f2560$@janteq.com> Can I use openRTSP to receive a stream, convert to .mp4 or .mov, and pipe to another program, or does the container format prevent this? I want to: openRTSP -4 -d 10 rtsp://myip/ | myfilter - > test.mp4.filter Or do I have to do this in two steps? Thanks. -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Wed Feb 8 23:32:54 2012 From: finlayson at live555.com (Ross Finlayson) Date: Thu, 9 Feb 2012 17:32:54 +1000 Subject: [Live-devel] openRTSP output .mp4 to a pipe In-Reply-To: <001d01cce6ba$e96fb720$bc4f2560$@janteq.com> References: <001d01cce6ba$e96fb720$bc4f2560$@janteq.com> Message-ID: <8ABEB20E-EDC5-4A55-BA53-57D51345CB46@live555.com> On Feb 9, 2012, at 9:39 AM, Jim Van Vorst wrote: > Can I use openRTSP to receive a stream, convert to .mp4 or .mov, and pipe to another program, or does the container format prevent this? Yes, the file format prevents this. ".mov"/".mp4" files must be seekable files; they can't be unseekable byte streams. Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From Gaurav.Dwivedi at hcl.com Thu Feb 9 05:07:31 2012 From: Gaurav.Dwivedi at hcl.com (Gaurav Dwivedi -ERS, HCL Tech) Date: Thu, 9 Feb 2012 18:37:31 +0530 Subject: [Live-devel] Injecting live stream to Darwin Server In-Reply-To: <4F3213FD.9040907@itv.ru> References: <4F3213FD.9040907@itv.ru> Message-ID: Hi Yuri, I don't have any idea how to connect the Darwin server directly to the RTSP source. If the Darwin server can capture an RTSP stream as a client and then retransmit it as a server, then please tell me how to do this.
Thanks and regards Gaurav From: Yuri Timenkov [mailto:yuri.timenkov at itv.ru] Sent: Wednesday, February 08, 2012 11:50 AM To: LIVE555 Streaming Media - development & use Cc: Gaurav Dwivedi -ERS, HCL Tech Subject: Re: [Live-devel] Injecting live stream to Darwin Server Hi Gaurav, I don't understand why can't you connect a Darwin server directly to your RTSP source? Why do you need a RTSP proxy? But if you really need this, you can use VLC: just open RTSP url and stream it: Go Media -> Streaming (Ctrl + S), choose Network in source, enter your RTSP source URL. After that select appropriate streaming parameters (port, URL, disable transcoding, etc). If you need to do this in your program, then you have to write your own RTP Source receiving data from RTP sink (e.g. using PIPE). Regards, Yuri On 08.02.2012 9:37, Gaurav Dwivedi -ERS, HCL Tech wrote: Hi Ross, Ok I understand that I can use LIVE Media RTSP server for http tunneling, but the basic question still is that how do I read from an URL (rtsp://) instead of reading from file and send the stream via RTSP server. Thanks and Regards Gaurav Dwivedi From: live-devel-bounces at ns.live555.com [mailto:live-devel-bounces at ns.live555.com] On Behalf Of Ross Finlayson Sent: Wednesday, February 08, 2012 5:56 AM To: LIVE555 Streaming Media - development & use Subject: Re: [Live-devel] Injecting live stream to Darwin Server Importance: High I want to Capture a RTSP stream from an IP camera, then retransmit it using Darwin Server (Since I need HTTP tunneling support). Do you realize that our own RTSP server implementation supports HTTP tunneling? You don't need a 3rd party server (which we don't support) to do this. Ross Finlayson Live Networks, Inc. 
http://www.live555.com/ _______________________________________________ live-devel mailing list live-devel at lists.live555.com http://lists.live555.com/mailman/listinfo/live-devel -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Thu Feb 9 15:01:55 2012 From: finlayson at live555.com (Ross Finlayson) Date: Fri, 10 Feb 2012 09:01:55 +1000 Subject: [Live-devel] Injecting live stream to Darwin Server In-Reply-To: References: <4F3213FD.9040907@itv.ru> Message-ID: <676648D7-2D5C-4E8E-B1EF-50E0B72B104C@live555.com> Let me remind everyone that this mailing list is intended for discussion of the "LIVE555 Streaming Media" software, not for other companies' RTSP clients or servers (including Apple's 'Darwin Streaming Server'). Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed...
URL: From jshanab at smartwire.com Thu Feb 9 17:15:57 2012 From: jshanab at smartwire.com (Jeff Shanab) Date: Fri, 10 Feb 2012 01:15:57 +0000 Subject: [Live-devel] MPEG4 Message-ID: <615FD77639372542BF647F5EBAA2DBC20B1C4050@IL-BOL-EXCH01.smartwire.com> I have an application modeled after the newer async openRTSP code. It pulls from IP cameras via RTSP. This has been working great for H264, but I am having a bit of trouble with MPEG4. In debugging I have found that the problem is that I get called with a chunk of data that contains 3 or 4 P-VOPs right after the I-VOP. These end up having all the same presentation timestamp, which messes up playback. As far as I understand it, the source is set up to be the MultiFramedRTPSource by the session-handling code. Do I need to insert the MPEG4VideoStreamFramer as a filter to data-mine the timestamps from the VOP headers? The subsession expects to be told who the sink is and has its own source. I tried: ... if mpeg4 { framedSource = MPEG4VideoStreamFramer::createNew(*env_, currentSubsession_->readSource()); ... currentSubsession_->sink->startPlaying(*framedSource, callbackSubsessionAfterPLAYING, (void*)playHandlerData_); } But it just stopped progressing. (Using the MPEG4VideoStreamDiscreteFramer made progress, but had no effect on the timestamps.) Jeff Shanab, Manager, Software Engineering D 630.633.4515 | C 630.453.7764 | F 630.633.4815 | jshanab at smartwire.com -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed...
From finlayson at live555.com Thu Feb 9 23:48:28 2012
From: finlayson at live555.com (Ross Finlayson)
Date: Fri, 10 Feb 2012 17:48:28 +1000
Subject: [Live-devel] MPEG4
In-Reply-To: <615FD77639372542BF647F5EBAA2DBC20B1C4050@IL-BOL-EXCH01.smartwire.com>
References: <615FD77639372542BF647F5EBAA2DBC20B1C4050@IL-BOL-EXCH01.smartwire.com>
Message-ID: <2D03EC88-D421-4209-A033-9F8E4A8E9F04@live555.com>

> Do I need to insert the MPEG4VideoStreamFramer as a filter to data mine the timestamps from the VOP headers?

Yes - same as for H.264: Because you're receiving discrete frames (one at a time), you should use "MPEG4VideoStreamDiscreteFramer". (And as always: Remember, You Have Complete Source Code.)

Ross Finlayson
Live Networks, Inc.
http://www.live555.com/

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From sambhav at saranyu.in Fri Feb 10 05:22:12 2012
From: sambhav at saranyu.in (Kumar Sambhav)
Date: Fri, 10 Feb 2012 18:52:12 +0530
Subject: [Live-devel] Switching from RTP over UDP to TCP
Message-ID: <82F4BEB3-65D3-40F1-904A-C9B5E404741F@saranyu.in>

Hi,

When does the RTSP server decide to switch streaming from RTP over UDP to TCP?

Regards,
Sambhav

From taylesworth at realitymobile.com Fri Feb 10 14:35:36 2012
From: taylesworth at realitymobile.com (Tom Aylesworth)
Date: Fri, 10 Feb 2012 17:35:36 -0500
Subject: [Live-devel] playSIP
Message-ID: 

I'm having trouble with the playSIP sample.
As a test, I'm trying to connect to iptel's music test (sip:music at iptel.org). The invite appears to work, but playSIP doesn't seem to be receiving any data. With verbosity set to 1 there are no further log statements after "Receiving streamed data...", and the file that the data is supposedly being archived to remains empty. Am I doing something wrong, or is there some known incompatibility between Live555's SIP support and iptel?

Here is the Live555 output from playSIP for my session:

Sending request: INVITE sip:music at iptel.org SIP/2.0
From: music ;tag=2003800537
Via: SIP/2.0/UDP 10.99.1.51:54784
To: sip:music at iptel.org
Contact: sip:music at 10.99.1.51:54784
Call-ID: 1528409970 at 10.99.1.51
CSeq: 1 INVITE
Content-Type: application/sdp
User-Agent: playsip (LIVE555 Streaming Media v2011.07.21)
Content-Length: 112

v=0
o=- 1528409970 0 IN IP4 10.99.1.51
s=playsip session
c=IN IP4 10.99.1.51
t=0 0
m=audio 8000 RTP/AVP 0

Received INVITE response: SIP/2.0 100 trying -- your call is important to us
From: music ;tag=2003800537
Via: SIP/2.0/UDP 10.99.1.51:54784;rport=54784;received=98.173.183.20
To: sip:music at iptel.org
Call-ID: 1528409970 at 10.99.1.51
CSeq: 1 INVITE
Server: ser (3.3.0-dev0 (i386/linux))
Content-Length: 0
Warning: 392 217.9.36.145:5060 "Noisy feedback tells: pid=25299 req_src_ip=98.173.183.20 req_src_port=54784 in_uri=sip:music at iptel.org out_uri=sip:music at iptel.org via_cnt==1"

Received INVITE response: SIP/2.0 200 OK
Record-Route: 
From: music ;tag=2003800537
Via: SIP/2.0/UDP 10.99.1.51:54784;rport=54784;received=98.173.183.20
To: sip:music at iptel.org;tag=7E0BB991-4F359838000420F6-B5C6FB90
Call-ID: 1528409970 at 10.99.1.51
CSeq: 1 INVITE
Server: Sip Express Media Server (1.4.2-10-ga03278d (x86/linux))
Contact: 
Content-Type: application/sdp
Content-Length: 142

v=0
o=sems 182045941 2083023842 IN IP4 217.9.36.144
s=session
c=IN IP4 217.9.36.145
t=0 0
m=audio 15600 RTP/AVP 0
a=rtpmap:0 PCMU/8000

Opened URL "sip:music at iptel.org", returning
an SDP description:

v=0
o=- 1528409970 0 IN IP4 10.99.1.51
s=playsip session
c=IN IP4 10.99.1.51
t=0 0
m=audio 8000 RTP/AVP 0

Created receiver for "audio/PCMU" subsession (client ports 8000-8001)
Setup "audio/PCMU" subsession (client ports 8000-8001)
Created output file: "/Users/taylesworth/audio-PCMU-1"
Sending request: ACK sip:music at iptel.org SIP/2.0
From: music ;tag=2003800537
Via: SIP/2.0/UDP 10.99.1.51:54784
To: sip:music at iptel.org;tag=7E0BB991-4F359838000420F6-B5C6FB90
Call-ID: 1528409970 at 10.99.1.51
CSeq: 1 ACK
Content-Length: 0

Started playing session
Receiving streamed data (signal with "kill -HUP 12362" or "kill -USR1 12362" to terminate)...

Thanks,
Tom

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From finlayson at live555.com Fri Feb 10 23:06:08 2012
From: finlayson at live555.com (Ross Finlayson)
Date: Sat, 11 Feb 2012 17:06:08 +1000
Subject: [Live-devel] Switching from RTP over UDP to TCP
In-Reply-To: <82F4BEB3-65D3-40F1-904A-C9B5E404741F@saranyu.in>
References: <82F4BEB3-65D3-40F1-904A-C9B5E404741F@saranyu.in>
Message-ID: <700FAF82-8BA6-4AAA-891B-24B55CE8889E@live555.com>

> When does the RTSP server decide to switch streaming from RTP over UDP to TCP?

It doesn't. It's the client, not the server, that chooses what form of streaming to use (because it's part of the RTSP "SETUP" command that gets sent by the client).

Ross Finlayson
Live Networks, Inc.
http://www.live555.com/

-------------- next part --------------
An HTML attachment was scrubbed...
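For reference, the client makes that choice in the Transport header of its "SETUP" request: `RTP/AVP;unicast;client_port=...` for RTP over UDP versus `RTP/AVP/TCP;unicast;interleaved=...` for RTP interleaved over the RTSP/TCP connection. With the LIVE555 client API this is the `streamUsingTCP` argument of `sendSetupCommand()` — a sketch, assuming the asynchronous "RTSPClient" interface used by the "testRTSPClient" demo (`rtspClient`, `subsession`, and `continueAfterSETUP` are assumed to exist as in that demo):

```cpp
#include "liveMedia.hh"  // LIVE555; sketch only

// The fourth argument (streamUsingTCP) selects RTP-over-TCP, which puts
// "RTP/AVP/TCP;interleaved=..." into the SETUP request's Transport header:
rtspClient->sendSetupCommand(*subsession, continueAfterSETUP,
                             False /*streamOutgoing*/,
                             True  /*streamUsingTCP*/);
```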
URL: 

From jshanab at smartwire.com Fri Feb 10 06:32:13 2012
From: jshanab at smartwire.com (Jeff Shanab)
Date: Fri, 10 Feb 2012 14:32:13 +0000
Subject: [Live-devel] MPEG4
In-Reply-To: <2D03EC88-D421-4209-A033-9F8E4A8E9F04@live555.com>
References: <615FD77639372542BF647F5EBAA2DBC20B1C4050@IL-BOL-EXCH01.smartwire.com> <2D03EC88-D421-4209-A033-9F8E4A8E9F04@live555.com>
Message-ID: <615FD77639372542BF647F5EBAA2DBC20B1C4242@IL-BOL-EXCH01.smartwire.com>

Thanks, that makes sense, but I do not explicitly use the H264 discrete framer, and when I try to use the MPEG4 discrete framer as indicated, everything just stops. So, considering playCommon.cpp, where and how would I add it?

From: live-devel-bounces at ns.live555.com [mailto:live-devel-bounces at ns.live555.com] On Behalf Of Ross Finlayson
Sent: Friday, February 10, 2012 1:48 AM
To: LIVE555 Streaming Media - development & use
Subject: Re: [Live-devel] MPEG4

> Do I need to insert the MPEG4VideoStreamFramer as a filter to data mine the timestamps from the VOP headers?

Yes - same as for H.264: Because you're receiving discrete frames (one at a time), you should use "MPEG4VideoStreamDiscreteFramer". (And as always: Remember, You Have Complete Source Code.)

Ross Finlayson
Live Networks, Inc.
http://www.live555.com/

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From finlayson at live555.com Fri Feb 10 23:13:32 2012
From: finlayson at live555.com (Ross Finlayson)
Date: Sat, 11 Feb 2012 17:13:32 +1000
Subject: [Live-devel] playSIP
In-Reply-To: 
References: 
Message-ID: <54B560F5-417D-4BE2-A384-CA116F1EC398@live555.com>

The only thing I can think of is that perhaps you have a firewall somewhere that's blocking RTP/UDP packets coming back from the server (but not the packets that formed the SIP command and response).
Ross Finlayson
Live Networks, Inc.
http://www.live555.com/

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jshanab at smartwire.com Sat Feb 11 11:25:59 2012
From: jshanab at smartwire.com (Jeff Shanab)
Date: Sat, 11 Feb 2012 19:25:59 +0000
Subject: [Live-devel] Buggy mpeg4 stream and timestamps
Message-ID: <615FD77639372542BF647F5EBAA2DBC20B1C4697@IL-BOL-EXCH01.smartwire.com>

I have been having a tough time with an IP camera that serves out MPEG4. I have dug into the headers and all that bit parsing, and I think I understand why the timestamps are messed up. It has no "Group Of Video Object Plane" headers, but that is OK - they are "optional". But vop_time_increment rolls over while modulo_time_base is ALWAYS 0. :(

My question is about the if condition in MPEG4VideoStreamParser::parseVideoObjectPlane() that handles these kinds of "buggy" MPEG-4 streams. It reads:

...
} else {
  if (newTotalTicks < fPrevNewTotalTicks && vop_coding_type != 2/*B*/
      && modulo_time_base == 0 && vop_time_increment == 0 && !fJustSawTimeCode) {
    // This is another kind of buggy MPEG-4 video stream, in which
    // "vop_time_increment" wraps around, but without
    // "modulo_time_base" changing (or just having had a new time code).
    // Overcome this by pretending that "vop_time_increment" *did* wrap around:
#ifdef DEBUG
    fprintf(stderr, "Buggy MPEG-4 video stream: \"vop_time_increment\" wrapped around, but without \"modulo_time_base\" changing!\n");
#endif
    ++fSecondsSinceLastTimeCode;
    newTotalTicks += vop_time_increment_resolution;
...

In my case vop_time_increment is never (or very rarely) exactly 0; it is usually a number below 100. The ticks condition is also never true. When I add a last_vop_time_increment variable and change the test, I can get it to execute this code with improved results:

...
if ((vop_time_increment - last_vop_time_increment_ < 0)
    && vop_coding_type != 2/*B*/
    && modulo_time_base == 0 && !fJustSawTimeCode_)
...
It still has a negative jump in the timestamps, but they are at least now very predictable and patterned. Is MPEG4 just that way - a messy group of non-compliant streams?

Jeff Shanab, Manager, Software Engineering
D 630.633.4515 | C 630.453.7764 | F 630.633.4815 | jshanab at smartwire.com

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From sambhav at saranyu.in Mon Feb 13 00:16:45 2012
From: sambhav at saranyu.in (Kumar Sambhav)
Date: Mon, 13 Feb 2012 13:46:45 +0530
Subject: [Live-devel] Switching from RTP over UDP to TCP
In-Reply-To: <700FAF82-8BA6-4AAA-891B-24B55CE8889E@live555.com>
References: <82F4BEB3-65D3-40F1-904A-C9B5E404741F@saranyu.in> <700FAF82-8BA6-4AAA-891B-24B55CE8889E@live555.com>
Message-ID: <3658FA2A-7F64-4FFA-9E4A-276BB75E5F22@saranyu.in>

Hi,

I have an RTSP server running on a machine (Amazon EC2) that has a private IP address and an internet-routable public IP address.
When I try to play this stream using VLC, it says "live555 warning: no data received for 10s. Switching to TCP".

One issue I found is that the SDP contained the private IP address: during SDP creation, getsockname() is called to get the IP address, and it was returning the private address. I modified this to put in the public IP address. After this change, clients like ffplay and gstreamer are able to play the stream over UDP, but VLC is still not able to. From debug prints I see that the RTSP server is sending UDP data until VLC sends a request after 10s to switch to TCP.

Regards,
Sambhav

On Feb 11, 2012, at 12:36 PM, Ross Finlayson wrote:

>> When does the RTSP server decide to switch streaming from RTP over UDP to TCP?
>
> It doesn't. It's the client, not the server, that chooses what form of streaming to use (because it's part of the RTSP "SETUP" command that gets sent by the client).
>
> Ross Finlayson
> Live Networks, Inc.
> http://www.live555.com/

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From Shaheed at scansoft.co.za Mon Feb 13 04:33:35 2012
From: Shaheed at scansoft.co.za (Shaheed Abdol)
Date: Mon, 13 Feb 2012 14:33:35 +0200
Subject: [Live-devel] How to assign sinks to subsessions
Message-ID: <002962EA5927BE45B2FFAB0B5B5D67970A8B61@SSTSVR1.sst.local>

Good afternoon,

This is my first post to this list, and I would like to keep it simple. I have adapted the playCommon.cpp example program, which pulls down RTSP streams and saves them to disk or stdout. I want to modify the example to open a single RTSP source containing multiple subsessions (one audio, one video), and to display the video and play the audio. My current approach is to iterate through the subsessions of this session and to assign a new CAudio/CVideo fileSink to the subsession->sink member.
This causes the video to play for one frame, and the audio to play continuously. I have read the live555 source more closely and have noticed that when creating an AVIFileSink, all the data is sent to one sink, where it is split into audio/video components. Is this the only way to accomplish this, or are there simpler methods?

I have done the due debugging work and can confirm that the "after getting" function is called frequently for the audio sink. As a side-effect of the SDP description, first the video sink is created, then the audio. All the functions succeed to the point where we have gotten the first frame from the video sink (I can see this in my decoder "filter", which is handed input data and successfully decodes the first frame). I am at wits' end with this problem, since the library supports what I want - tested by connecting to the RTSP source with VLC.

Thank you for your time

Regards
___________________________________
Shaheed Abdol
Web: www.scansoft.co.za
Tel: +27 21 913 8664
Cell: +27 79 835 8771

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From finlayson at live555.com Mon Feb 13 09:48:24 2012
From: finlayson at live555.com (Ross Finlayson)
Date: Tue, 14 Feb 2012 06:48:24 +1300
Subject: [Live-devel] How to assign sinks to subsessions
In-Reply-To: <002962EA5927BE45B2FFAB0B5B5D67970A8B61@SSTSVR1.sst.local>
References: <002962EA5927BE45B2FFAB0B5B5D67970A8B61@SSTSVR1.sst.local>
Message-ID: 

I suggest using the "testRTSPClient" code - rather than the "openRTSP" code - as a model. The "testRTSPClient" code makes it clearer how to deliver the separate audio and video streams to their own 'sink' objects.

Ross Finlayson
Live Networks, Inc.
http://www.live555.com/

-------------- next part --------------
An HTML attachment was scrubbed...
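The "testRTSPClient" pattern being recommended here boils down to giving each subsession its own sink, fed from that subsession's own read source. A sketch, with a hypothetical `MySink` (a MediaSink subclass) and a hypothetical `subsessionAfterPlaying` handler standing in for the custom audio/video sinks discussed in this thread:

```cpp
#include "liveMedia.hh"  // LIVE555; sketch only

// One sink per subsession, as in "testRTSPClient". "MySink" and
// "subsessionAfterPlaying" are hypothetical stand-ins for your own
// MediaSink subclass and after-playing handler.
void setupSinks(UsageEnvironment& env, MediaSession& session) {
  MediaSubsessionIterator iter(session);
  MediaSubsession* subsession;
  while ((subsession = iter.next()) != NULL) {
    if (subsession->readSource() == NULL) continue; // not initiated
    subsession->sink = MySink::createNew(env, *subsession);
    if (subsession->sink == NULL) continue;
    subsession->sink->startPlaying(*(subsession->readSource()),
                                   subsessionAfterPlaying, subsession);
  }
}
```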
URL: 

From nikola.kolarovic at gmail.com Tue Feb 14 02:41:14 2012
From: nikola.kolarovic at gmail.com (Nikola Kolarovic)
Date: Tue, 14 Feb 2012 11:41:14 +0100
Subject: [Live-devel] what's mean of this sentence?
Message-ID: 

Hi Ross,

I have found out that Live555 uses a TTL-0 multicast message in order to find its own IP address. A solution like that might sound dubious, but there must be a reasonable explanation for such an implementation. Network configurations? Portability? It seems, though, that it creates problems on some network configurations, including mine. I just switched to a usual IP query, and the RTSP startup procedure now works well, and faster than the previous solution. Can you elaborate on this method (the TTL-0 multicast message)?

Thank you very much,
Nikola

>> I use mplayer, which has compiled in the live source, as the client, and I use Helix as the server, but I receive the message:
>>
>> STREAM_LIVE555, URL: rtsp://202.194.20.86/mpg4video.mp4
>> Stream not seekable!
>> Unable to determine our source address: This computer has an invalid IP address: 0x0
>> Unable to determine our source address: This computer has an invalid IP address: 0x0
>> Unable to determine our source address: This computer has an invalid IP address: 0x0
>> Initiated "video/MP4V-ES" RTP subsession on port 32822
>> Unable to determine our source address: This computer has an invalid IP address: 0x0
>> Unable to determine our source address: This computer has an invalid IP address: 0x0
>> Initiated "audio/MPEG4-GENERIC" RTP subsession on port 32824
>> what's wrong with it?
>
> The code is having trouble figuring out your IP address, perhaps (in part) because you don't have multicast configured properly on your system.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From Shaheed at scansoft.co.za Tue Feb 14 05:13:40 2012
From: Shaheed at scansoft.co.za (Shaheed Abdol)
Date: Tue, 14 Feb 2012 15:13:40 +0200
Subject: [Live-devel] How to assign sinks to subsessions
Message-ID: <002962EA5927BE45B2FFAB0B5B5D67970A8B6D@SSTSVR1.sst.local>

Good afternoon,

As a second post to this list, I have to thank you for your first piece of advice - I was definitely using the wrong test program to start with. I have since reworked the testRTSPClient code to function in the fashion that I wish, but still cannot get both subsessions to provide data to me.

Allow me to explain my use-case: I am trying to obtain the audio/video stream from a Vivotek IP camera (FD8161). So far I can obtain either the video stream or the audio stream perfectly (by supplying a flag stating that I only want to create the video or audio sink). The moment I try to create two sinks to handle the data from each subsession, I fail to get data passed to me in the afterGettingFrame function of my custom sinks. I know the sinks should be working fine, since each works individually. I will provide the snippet of code which assigns the sinks to their respective subsessions; this snippet is what I've modified to create separate sinks for audio and video data:

...
void SetupStreams()
{
  static MediaSubsessionIterator* setupIter = NULL;
  if (g_session == NULL) { Shutdown(); return; }
  if (setupIter == NULL) setupIter = new MediaSubsessionIterator(*g_session);
  while ((g_setupSubsession = setupIter->next()) != NULL) {
    // We have another subsession left to set up:
    if (g_setupSubsession->clientPortNum() == 0) continue; // port # was not set
    SetupSubsession(g_setupSubsession, false, &CStreamReceiver::ContinueAfterSETUP);
    g_madeSetupProgress = true;
    return;
  }
  // We're done setting up subsessions.
  delete setupIter; setupIter = NULL;
  //if (!g_madeSetupProgress)
  //  Shutdown();
  g_madeSetupProgress = false;
  MediaSubsessionIterator iter(*g_session);
  while ((g_setupSubsession = iter.next()) != NULL) {
    if (g_setupSubsession->readSource() == NULL) continue; // was not initiated
    PAYLOAD_FORMAT destFormat = PAYLOAD_NONE; //-1
    //params.type - The type of input device
    //payloadType - the input format
    //params.format - the desired output format
    //Find a sink that converts from payload type to output type (params). Use recursive function. Chain together.
    if (g_audioHandler && stricmp(g_setupSubsession->mediumName(), "audio") == 0) {
      //Need to figure out what the required destination format is. PCM/RGB24/etc. (params.type)
      map params = TokenizeParameters(g_audioHandler->GetOutputFormatParams());
      map::iterator itr;
      itr = params.find("DestinationFormatAudio");
      if (itr != params.end())
        destFormat = (PAYLOAD_FORMAT)(strtol((*itr).second.c_str(), NULL, 10));
      //create a payload details struct specifically for each sink; cannot use one global struct for two subsession streams.
      sPayloadInfo info;
      info.payloadType = (PAYLOAD_FORMAT)g_setupSubsession->rtpPayloadFormat(); //hopefully this populates with constant invalid values (that can be checked against)
      info.channels = g_setupSubsession->numChannels();
      info.frequency = g_setupSubsession->rtpTimestampFrequency();
      if (info.payloadType == PAYLOAD_PCMU || info.payloadType == PAYLOAD_PCMA)
        info.bits = 16; //hopefully this is the correct value (based on u-law documentation)
      g_audioMediaSink = g_setupSubsession->sink = static_cast(CFFMPEGAudioMediaSink::createNew(*g_env, g_audioHandler, info, destFormat));
    } else if (g_videoHandler && stricmp(g_setupSubsession->mediumName(), "video") == 0) { //make sure this subsession is a video stream
      //Need to figure out what the required destination format is. PCM/RGB24/etc.
      //(params.type)
      map params = TokenizeParameters(g_videoHandler->GetOutputFormatParams());
      map::iterator itr;
      itr = params.find("DestinationFormatVideo");
      if (itr != params.end())
        destFormat = (PAYLOAD_FORMAT)(strtol((*itr).second.c_str(), NULL, 10));
      sPayloadInfo info;
      info.payloadType = (PAYLOAD_FORMAT)g_setupSubsession->rtpPayloadFormat(); //hopefully this populates with constant invalid values (that can be checked against)
      info.channels = g_setupSubsession->numChannels();
      info.frequency = g_setupSubsession->rtpTimestampFrequency();
      //I'm sure there must be a way to get bpp, etc. out of the subsession.
      g_videoMediaSink = g_setupSubsession->sink = static_cast(CFFMPEGVideoMediaSink::createNew(*g_env, g_videoHandler, info, destFormat));
    } else {
      g_setupSubsession->sink = NULL;
      LOG((*g_logger), "Failed to create sink for - " << g_setupSubsession->mediumName());
    }
    if (g_setupSubsession->sink) {
      LOG((*g_logger), "Playing sink - " << g_setupSubsession->mediumName());
      g_setupSubsession->sink->startPlaying(*(g_setupSubsession->readSource()), SubsessionAfterPlaying, g_setupSubsession);
      // Also set a handler to be called if a RTCP "BYE" arrives for this subsession:
      if (g_setupSubsession->rtcpInstance() != NULL)
        g_setupSubsession->rtcpInstance()->setByeHandler(&CStreamReceiver::SubsessionByeHandler, g_setupSubsession);
      g_madeSetupProgress = true;
    }
  }
  if (!g_madeSetupProgress) Shutdown();
  // Finally, start playing each subsession, to start the data flow:
  if (g_duration == 0) {
    if (g_scale > 0) g_duration = g_session->playEndTime() - g_initialSeekTime; // use SDP end time
    else if (g_scale < 0) g_duration = g_initialSeekTime;
  }
  if (g_duration < 0) g_duration = 0.0;
  g_endTime = g_initialSeekTime;
  if (g_scale > 0) {
    if (g_duration <= 0) g_endTime = -1.0f;
    else g_endTime = g_initialSeekTime + g_duration;
  } else {
    g_endTime = g_initialSeekTime - g_duration;
    if (g_endTime < 0) g_endTime = 0.0f;
  }
  StartPlayingSession(g_session, g_initialSeekTime, g_endTime, g_scale,
                      &CStreamReceiver::ContinueAfterPLAY);

I can confirm that the parameters passed to each subsession and to each sink are correct - I have logs indicating that everything succeeds - but the data simply does not stream; possibly it's a deadlock, or a function I should be calling... I have written a filter for the MPEG4 (using FFMPEG), and a simple audio filter to convert the u-law/a-law data (the camera supports both) to 16-bit signed linear PCM data, which plays perfectly. I just need a tiny nudge in the right direction.

I have included my SDP response from the camera, just in case I have misread something and the camera requires another step:

v=0
o=RTSP 959993539 572 IN IP4 0.0.0.0
s=RTSP server
c=IN IP4 0.0.0.0
t=0 0
a=charset:Shift_JIS
a=range:npt=0-
a=control:*
a=etag:1234567890
m=video 0 RTP/AVP 96
b=AS:0
a=rtpmap:96 MP4V-ES/30000
a=control:trackID=1
a=fmtp:96 profile-level-id=3;config=000001B003000001B509000001010000012000845D4C28C82258A200;decode_buf=76800
m=audio 0 RTP/AVP 0
a=control:trackID=6
a=rtpmap:0 pcmu/8000

Thank you

Regards
___________________________________
Shaheed Abdol
Web: www.scansoft.co.za
Tel: +27 21 913 8664
Cell: +27 79 835 8771

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From taylesworth at realitymobile.com Tue Feb 14 08:50:07 2012
From: taylesworth at realitymobile.com (Tom Aylesworth)
Date: Tue, 14 Feb 2012 11:50:07 -0500
Subject: [Live-devel] playSip
In-Reply-To: 
References: 
Message-ID: <91A2D54C-DA06-458B-8934-998C5BBEE9A7@realitymobile.com>

> The only thing I can think of is that perhaps you have a firewall somewhere that's blocking RTP/UDP packets coming back from the server (but not the packets that formed the SIP command and response).

Thanks, Ross.
Another question: what does it take to extend the sipPlayer test program to support two-way traffic? Sorry if that seems like a basic question, but I didn't see an obvious way to do it, and didn't see it mentioned in the FAQ.

Thanks again,
Tom

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From Shaheed at scansoft.co.za Wed Feb 15 05:43:23 2012
From: Shaheed at scansoft.co.za (Shaheed Abdol)
Date: Wed, 15 Feb 2012 15:43:23 +0200
Subject: [Live-devel] How to assign sinks to subsessions
Message-ID: <002962EA5927BE45B2FFAB0B5B5D67970A8B7A@SSTSVR1.sst.local>

Good afternoon,

It took me the better part of two full working days, but I discovered the solution. I would like to post the answer in case somebody else gets stuck with it.

I am pulling data from an IP camera, subclassing the MediaSink class to decode the data from the camera, and then passing the data to a dialog which displays the output and plays the sound chunks. When I tried to play both video and audio, the video stream got stuck, although I could play the individual streams perfectly. After meticulously making incremental changes to discover the issue, I found that when playing audio using the PortAudio library, it is required to call the Pa_WriteStream(...) function from a separate thread. The only reason the video was getting stuck is some (unknown) side-effect of the way PortAudio writes to the default sound device, which is not compatible with the way in which I am using live555.

To allow for a bit more clarity: I am using live555 to stream RTSP data, ffmpeg to decode/transcode the streamed data, Direct3D to display video, and PortAudio to play sound. With this toolchain it is essential to play the audio in a separate thread, or else the video sink will lock up.

Thank you for all your assistance.
Regards
____________________________
Shaheed Abdol
Web: www.scansoft.co.za
Tel: +27 21 913 8664
Cell: +27 79 835 8771

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From Marlon at scansoft.co.za Thu Feb 16 23:10:54 2012
From: Marlon at scansoft.co.za (Marlon Reid)
Date: Fri, 17 Feb 2012 09:10:54 +0200
Subject: [Live-devel] Server liveness
Message-ID: <002962EA5927BE45B2FFAB0B5B5D67970A8B82@SSTSVR1.sst.local>

Hi,

I was wondering if there is a way for clients to check whether the server is still alive. It may happen that the server dies unexpectedly, and then I want all the clients to stop listening. I looked through the mailing list but was unable to find a way to check server liveness. Any advice will be appreciated.

Regards.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From finlayson at live555.com Fri Feb 17 15:12:33 2012
From: finlayson at live555.com (Ross Finlayson)
Date: Sat, 18 Feb 2012 09:12:33 +1000
Subject: [Live-devel] what's mean of this sentence?
In-Reply-To: 
References: 
Message-ID: <0D888D0D-069D-41AC-9476-5C7C24D7308A@live555.com>

> I have found out that Live555 uses TTL 0 multicast message in order to find its own IP address.

Yes, this is the *first* thing that it tries, because - despite what you might think - it is an approach that is very effective and portable across many different OSs. However, it's important to realize that this is not the *only* technique that it tries. If the 'multicast loopback' technique fails, then the code tries a second technique - one that is more conventional, but less portable:

- Call "gethostname()" to get the computer's domain name; then
- Call "getaddrinfo()" (or "gethostbyname()" if "getaddrinfo()" is not available) to resolve this name into an IP address.
If both of these techniques (multicast loopback and "gethostname()"/"getaddrinfo()") fail, then it usually means that your network interface is not configured properly.

> Solution like that might sound dubious, but there must be a reasonable explanation for such an implementation.
> Network configurations? Portability? Although it seems that it creates more problems (on some network configurations, including mine).

Actually, on most configurations, it works just fine.

> I just switched to usual IP query

I don't know what "usual IP query" is supposed to mean, but - as noted above - you should check to make sure that your network interface is configured properly (and has a route for 224.0.0.0/8 - i.e., for IP multicast).

Ross Finlayson
Live Networks, Inc.
http://www.live555.com/

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From finlayson at live555.com Fri Feb 17 18:21:31 2012
From: finlayson at live555.com (Ross Finlayson)
Date: Sat, 18 Feb 2012 12:21:31 +1000
Subject: [Live-devel] playSip
In-Reply-To: <91A2D54C-DA06-458B-8934-998C5BBEE9A7@realitymobile.com>
References: <91A2D54C-DA06-458B-8934-998C5BBEE9A7@realitymobile.com>
Message-ID: <01FBB2C9-C349-44EC-9257-09E37AF66D73@live555.com>

> Another question: what does it take to extend the sipPlayer test program to support two-way traffic? Sorry if that seems like a basic question but I didn't see an obvious way to do it, and didn't see it mentioned in the FAQ.

"playSIP" was intended basically as a simple 'proof of concept' - to show that code similar to the existing RTSP client code could be used to receive streams via a SIP connection as well. However, extending its functionality to support two-way traffic (effectively, becoming an 'IP telephone') would be a major undertaking, and is not currently on the drawing board.

Ross Finlayson
Live Networks, Inc.
http://www.live555.com/

-------------- next part --------------
An HTML attachment was scrubbed...
URL: From mailme_vinaytyagi at yahoo.com Wed Feb 15 09:38:29 2012 From: mailme_vinaytyagi at yahoo.com (Vinay Tyagi) Date: Wed, 15 Feb 2012 09:38:29 -0800 (PST) Subject: [Live-devel] live-devel Digest, Vol 100, Issue 18 In-Reply-To: References: Message-ID: <1329327509.30118.YahooMailNeo@web113415.mail.gq1.yahoo.com> Hi live-dev family, ? I am new to mailing group, can someone help me one the following queries - ? 1. How can I stream multiple files (100 mp3 files) in a sequence or random using live555MediaServer ? ? 2. I am able to fetch only one audio file at a time using openRTSP. I am fetching rtsp stream?generated from live555MediaServer in?openRTSP using format -> openRTSP -V rtsp://192.168.245.1:8554/Songs/1.mp3? (where 1.mp3 is a single audio file) How can I fetch all .mp3 files in a sequence? ? 3.?openRTSP creates a file name -> audio-MPA-1 in testProgs folder, how can I play this file in VLC player ? I am trying to use a format -> rtsp://192.168.245.1:2146/audio-MPA-1 which is giving an error. is something wrong here ? ? Kindly help! ? Regards, Vinay ? ? ________________________________ From: "live-devel-request at ns.live555.com" To: live-devel at ns.live555.com Sent: Wednesday, February 15, 2012 7:13 PM Subject: live-devel Digest, Vol 100, Issue 18 Send live-devel mailing list submissions to ??? live-devel at lists.live555.com To subscribe or unsubscribe via the World Wide Web, visit ??? http://lists.live555.com/mailman/listinfo/live-devel or, via email, send a message with subject or body 'help' to ??? live-devel-request at lists.live555.com You can reach the person managing the list at ??? live-devel-owner at lists.live555.com When replying, please edit your Subject line so it is more specific than "Re: Contents of live-devel digest..." Today's Topics: ? 1. Re: playSip (Tom Aylesworth) ? 2. 
How to assign sinks to subsessions (Shaheed Abdol) ---------------------------------------------------------------------- Message: 1 Date: Tue, 14 Feb 2012 11:50:07 -0500 From: Tom Aylesworth To: "live-devel at ns.live555.com" Subject: Re: [Live-devel] playSip Message-ID: <91A2D54C-DA06-458B-8934-998C5BBEE9A7 at realitymobile.com> Content-Type: text/plain; charset="us-ascii" The only thing I can think of is that perhaps you have a firewall somewhere that's blocking RTP/UDP packets coming back from the server (but not the packets that formed the SIP command and response). Thanks, Ross. Another question: what does it take to extend the sipPlayer test program to support two-way traffic? Sorry if that seems like a basic question but I didn't see an obvious way to do it, and didn't see it mentioned in the FAQ. Thanks again, Tom -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ Message: 2 Date: Wed, 15 Feb 2012 15:43:23 +0200 From: "Shaheed Abdol" To: Subject: [Live-devel] How to assign sinks to subsessions Message-ID: <002962EA5927BE45B2FFAB0B5B5D67970A8B7A at SSTSVR1.sst.local> Content-Type: text/plain; charset="us-ascii" Good afternoon, It took me the better part of two full working days, but I discovered the solution. I would like to post the answer in case somebody else gets stuck with it. I am pulling data from an IP camera, then subclassing the MediaSink class to decode the data from the camera, then passing the data to a dialog which displays the output and plays the sound chunks. When I try to play both video and audio, the video stream gets stuck; I can play the individual streams perfectly. After meticulously making incremental changes to discover the issue, I found that when playing audio using the PortAudio library, it is required to call the PA_WriteStream(...) function within a separate thread.
The only reason the video was getting stuck is some (unknown) side-effect of the way PortAudio writes to the default sound device, which is not compatible with the way in which I am using live555. To allow for a bit more clarity, I am using live555 to stream rtsp data, ffmpeg to decode/transcode the streamed data, Direct3D to display video, and PortAudio to play sound. With this toolchain it is essential to play the audio in a separate thread, else the video sink will lock up. Thank you for all your assistance. Regards ____________________________ Shaheed Abdol Web: www.scansoft.co.za Tel: +27 21 913 8664 Cell: +27 79 835 8771 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/png Size: 32497 bytes Desc: SST Email.png URL: ------------------------------ _______________________________________________ live-devel mailing list live-devel at lists.live555.com http://lists.live555.com/mailman/listinfo/live-devel End of live-devel Digest, Vol 100, Issue 18 ******************************************* -------------- next part -------------- An HTML attachment was scrubbed... URL: From dkumars1981 at gmail.com Fri Feb 17 05:15:15 2012 From: dkumars1981 at gmail.com (vineeth kumar) Date: Fri, 17 Feb 2012 18:45:15 +0530 Subject: [Live-devel] streaming usecase with ffmpeg Message-ID: Hi, >From your blogs I read that you use ffmpeg for streaming video playback. I wanted to check how seek and trick play work with ffmpeg. 1) Whenever I seek, how is the new position calculated: seek by time or seek by byte? 2) When we get the new position, how does the streaming behave? Does it start new streaming from the new position, or how does the streaming change from that position? Thanks and Regards, Sunil Deshpande -------------- next part -------------- An HTML attachment was scrubbed...
URL: From finlayson at live555.com Sat Feb 18 09:08:54 2012 From: finlayson at live555.com (Ross Finlayson) Date: Sun, 19 Feb 2012 03:08:54 +1000 Subject: [Live-devel] Server liveness In-Reply-To: <002962EA5927BE45B2FFAB0B5B5D67970A8B82@SSTSVR1.sst.local> References: <002962EA5927BE45B2FFAB0B5B5D67970A8B82@SSTSVR1.sst.local> Message-ID: <968F5FFA-6F96-4523-ABB0-611CDB6542F9@live555.com> > I was wondering if there is a way for the clients to check if the server is still alive. Yes, you could listen for RTCP "SR" packets (that the server sends frequently), and close the client's connection if such packets (and, of course, regular data) don't arrive within a certain period of time (e.g., within 30 seconds). Note the function "RTCPInstance::setSRHandler()" (in "liveMedia/include/RTCP.hh"). Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From john.orr at scala.com Mon Feb 20 10:16:37 2012 From: john.orr at scala.com (John Orr) Date: Mon, 20 Feb 2012 13:16:37 -0500 Subject: [Live-devel] odd port number for transport stream over UDP Message-ID: <4F428E05.3070402@scala.com> I have a media player that uses Live555 to stream from RTSP/RTP as well as transport stream over UDP (IPTV style). When playing transport stream over UDP, I put together an SDP description like this: s=MPEG Transport Stream over UDP i=TS over UDP t=0 0 a=type:broadcast m=video UDP 33 c=IN IP4 b=AS:5000 Then I create a MediaSession with it, like this: m_pLive555Session = MediaSession::createNew(*m_pLive555Env, sdpDescription); Then I call initiate() on the subsession. This creates an MPEG2TransportStreamFramer, with a BasicUDPSource to feed it. This works great as long as the port number for the UDP transport stream is even. If it is odd, liveMedia/MediaSession.cpp/MediaSubsession::initiate() forces the data delivery port to be even.
There is convention/rule that says you use even port numbers for RTP and odd for the corresponding RTCP backchannel. I'm not disputing that, but this code can also handle TS over UDP. In the case of raw transport stream over UDP there is no RTCP back channel and the port number isn't generally negotiable. To get around the problem, I hacked initiate() to avoid changing the port number when fProtocolName is UDP, I added the diff to the end of this message. I don't have a very deep understanding of this code base, maybe this was the wrong approach, but this patch met my immediate needs. I'm guessing my case here is either fairly obscure or I missed a better way of dealing with this. Comments and criticisms are welcome. --Johno diff --git a/liveMedia/MediaSession.cpp b/liveMedia/MediaSession.cpp index 8e17d26..9b49bba 100644 --- a/liveMedia/MediaSession.cpp +++ b/liveMedia/MediaSession.cpp @@ -588,9 +588,18 @@ Boolean MediaSubsession::initiate(int useSpecialRTPoffset) { tempAddr.s_addr = connectionEndpointAddress(); // This could get changed later, as a result of a RTSP "SETUP" + // ZZZ: Feb 2012 + // do not bother forcing port to be even when using UDP, should only apply to RTP + // + bool bIsUDP = (strcmp(fProtocolName, "UDP") == 0); + if (fClientPortNum != 0) { - // The sockets' port numbers were specified for us. Use these: - fClientPortNum = fClientPortNum&~1; // even + + if ( !bIsUDP ) + { + // The sockets' port numbers were specified for us. 
Use these: + fClientPortNum = fClientPortNum&~1; // even + } if (isSSM()) { fRTPSocket = new Groupsock(env(), tempAddr, fSourceFilterAddr, fClientPortNum); } else { @@ -600,20 +609,23 @@ Boolean MediaSubsession::initiate(int useSpecialRTPoffset) { env().setResultMsg("Failed to create RTP socket"); break; } - - // Set our RTCP port to be the RTP port +1 - portNumBits const rtcpPortNum = fClientPortNum|1; - if (isSSM()) { - fRTCPSocket = new Groupsock(env(), tempAddr, fSourceFilterAddr, rtcpPortNum); - } else { - fRTCPSocket = new Groupsock(env(), tempAddr, rtcpPortNum, 255); - } - if (fRTCPSocket == NULL) { - char tmpBuf[100]; - sprintf(tmpBuf, "Failed to create RTCP socket (port %d)", rtcpPortNum); - env().setResultMsg(tmpBuf); - break; - } + + if ( !bIsUDP ) + { + // Set our RTCP port to be the RTP port +1 + portNumBits const rtcpPortNum = fClientPortNum|1; + if (isSSM()) { + fRTCPSocket = new Groupsock(env(), tempAddr, fSourceFilterAddr, rtcpPortNum); + } else { + fRTCPSocket = new Groupsock(env(), tempAddr, rtcpPortNum, 255); + } + if (fRTCPSocket == NULL) { + char tmpBuf[100]; + sprintf(tmpBuf, "Failed to create RTCP socket (port %d)", rtcpPortNum); + env().setResultMsg(tmpBuf); + break; + } + } } else { // Port numbers were not specified in advance, so we use ephemeral port numbers. // Create sockets until we get a port-number pair (even: RTP; even+1: RTCP). 
@@ -691,7 +703,7 @@ Boolean MediaSubsession::initiate(int useSpecialRTPoffset) { increaseReceiveBufferTo(env(), fRTPSocket->socketNum(), rtpBufSize); // ASSERT: fRTPSocket != NULL && fRTCPSocket != NULL - if (isSSM()) { + if (isSSM() && !bIsUDP ) { // Special case for RTCP SSM: Send RTCP packets back to the source via unicast: fRTCPSocket->changeDestinationParameters(fSourceFilterAddr,0,~0); } From vindoctor2 at aol.com Sat Feb 18 19:02:49 2012 From: vindoctor2 at aol.com (Vindoctor2) Date: Sat, 18 Feb 2012 22:02:49 -0500 (EST) Subject: [Live-devel] single channel video Message-ID: <8CEBCCA19341E64-1890-2B93F@web-mmc-m10.sysops.aol.com> I'm trying to figure out how it's possible to send a single-channel 24-bit video stream that needs to be lossless alongside an h264 stream. I don't see a way to embed this single channel into the h264 unless it's lossy, which I cannot do. I think I have to make a custom payload and send this extra data that way, as I know some codecs will compress this single channel significantly. The problem is I'm having a hard time figuring out how to do this with the live555 library. The closest way that I can see how to do what I want is described in this thread. http://lists.live555.com/pipermail/live-devel/2009-November/011476.html but I'm so green with live 555 and RTP I need a little more help/example of someone already extending the library to do this. If someone knows of a payload I can use that live 555 supports, that would be the best path. Second would be to try to figure out all the ins and outs of trying to do what that thread above describes as a solution to a problem similar to what I want to try to do.
URL: From mailme_vinaytyagi at yahoo.com Mon Feb 20 07:50:24 2012 From: mailme_vinaytyagi at yahoo.com (Vinay Tyagi) Date: Mon, 20 Feb 2012 07:50:24 -0800 (PST) Subject: [Live-devel] help In-Reply-To: <1329559883.53799.YahooMailNeo@web113401.mail.gq1.yahoo.com> References: <1329559883.53799.YahooMailNeo@web113401.mail.gq1.yahoo.com> Message-ID: <1329753024.56002.YahooMailNeo@web113415.mail.gq1.yahoo.com> Hi, I have got a stream playing on one machine using an http:// address (http://xxx.xxx.xxx.xxx). (The stream input is given from the content server to this machine by physical audio link connectivity, and the media is playing in Windows Media Player by using the Open URL option and giving the http:// address.) I need to fetch/store this stream and broadcast it over MPLS. I also need a receiver to fetch the stream and play it in a media player at the client end (the other end of the MPLS). Questions: 1. How can I fetch this stream using live555MediaServer or openRTSP or others? Please write down the proper format to fetch it. 2. How can I broadcast it for public access (a mass audience)? 3. How can I receive it and play it using an http:// address at the user end (the client end, using any media player)? I hope my queries are clear. Please let me know if further clarity is needed. Kindly help! Regards, Vinay -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Mon Feb 20 17:17:29 2012 From: finlayson at live555.com (Ross Finlayson) Date: Tue, 21 Feb 2012 11:17:29 +1000 Subject: [Live-devel] odd port number for transport stream over UDP In-Reply-To: <4F428E05.3070402@scala.com> References: <4F428E05.3070402@scala.com> Message-ID: > To get around the problem, I hacked initiate() to avoid changing the port number when fProtocolName is UDP, I added the diff to the end of this message. I don't have a very deep understanding of this code base, maybe this was the wrong approach, but this patch met my immediate needs.
I'm guessing my case here is either fairly obscure or I missed a better way of dealing with this. Thanks for the note. Yes, your case is somewhat obscure (and I hope that your network doesn't reorder packets, because - without RTP - you won't be able to handle this properly). If you have any control over your server, you should consider fixing it so that it uses RTP (and with an even-numbered port) rather than raw-UDP. Nonetheless, I'll update this code in a future release of the software to do something similar. Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From liquidl at email.com Mon Feb 20 18:32:04 2012 From: liquidl at email.com (liquidl at email.com) Date: Mon, 20 Feb 2012 21:32:04 -0500 Subject: [Live-devel] Live switch of audio codec Message-ID: <20120221023204.107150@gmx.com> Hi, I was wondering if it's possible to switch audio codecs during live streaming. For instance, if I am streaming AAC, to then switch to MP3 without stopping. I searched the list and you recommend switching the source, so basically changing the audio source from AAC to MP3 without getting the rtsp server involved at all. I can see that the source switching would work if the codecs are the same, but if I want to switch between completely different codecs at the source encoding point, would it work? Any tips to get it working would be appreciated. Thanks. PS: thanks for your earlier info about accessing RR packets, I got that working with your tip. From finlayson at live555.com Mon Feb 20 20:17:17 2012 From: finlayson at live555.com (Ross Finlayson) Date: Tue, 21 Feb 2012 14:17:17 +1000 Subject: [Live-devel] Live switch of audio codec In-Reply-To: <20120221023204.107150@gmx.com> References: <20120221023204.107150@gmx.com> Message-ID: <7BA2C6F2-7462-4FCC-BCF6-158A622005BF@live555.com> > I was wondering if it's possible to switch audio codecs during live streaming.
The standards support it; however, we currently don't (at either the server end or the client end). Sorry. Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From john.orr at scala.com Tue Feb 21 09:39:57 2012 From: john.orr at scala.com (John Orr) Date: Tue, 21 Feb 2012 12:39:57 -0500 Subject: [Live-devel] odd port number for transport stream over UDP In-Reply-To: References: Message-ID: <4F43D6ED.4070300@scala.com> > Thanks for the note. Yes, your case is somewhat obscure (and I hope that your network doesn't reorder packets, because - without RTP - you won't be able to handle this properly). If you have any control over your server, you should consider fixing it so that it uses RTP (and with an even-numbered port) rather than raw-UDP. I don't generally have a say over what the source is, so I have to be accommodating. Sort of falls into the "Be liberal in what you accept" mantra. > Nonetheless, I'll update this code in a future release of the software to do something similar. > Ok, that sounds great. Thanks! --Johno From liquidl at email.com Tue Feb 21 10:26:09 2012 From: liquidl at email.com (liquidl at email.com) Date: Tue, 21 Feb 2012 13:26:09 -0500 Subject: [Live-devel] Live switch of audio codec Message-ID: <20120221182609.107150@gmx.com> >> I was wondering if it's possible to switch audio codecs during live streaming. >The standards support it; however, we currently don't (at either the server end or the client end). Sorry. >Ross Finlayson >Live Networks, Inc. >http://www.live555.com/ Ok, then to support multiple audio codecs, I am presuming the RTSP client would have to stop and restart the stream. So to support a second audio codec, would I add another subsession for that audio codec? Then how do I make the rtsp server switch between the codecs? Thanks.
From finlayson at live555.com Tue Feb 21 17:03:15 2012 From: finlayson at live555.com (Ross Finlayson) Date: Wed, 22 Feb 2012 11:03:15 +1000 Subject: [Live-devel] Live switch of audio codec In-Reply-To: <20120221182609.107150@gmx.com> References: <20120221182609.107150@gmx.com> Message-ID: <5B5C7DDA-F524-48FF-8C7D-CB78B4499C79@live555.com> You apparently misunderstood me. When I said that we don't support this, I meant just that: We don't support it. Meaning: Our library cannot currently be used to stream (or receive) more than one codec within the same RTP stream. But because (as evidenced by your email address) you are just a casual hobbyist, this really isn't something that should concern you. Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From romain.bednarowicz at epitech.eu Thu Feb 23 02:00:46 2012 From: romain.bednarowicz at epitech.eu (bednar_r) Date: Thu, 23 Feb 2012 10:00:46 +0000 Subject: [Live-devel] Get a buffer from a sink Message-ID: <4EB2BBC5AF5AA6419F3B8FAAB08F70FF04F232@AMSPRD0104MB126.eurprd01.prod.exchangelabs.com> Hello, I'm working on a project using an Elphel NC353L camera to feed an RTSP stream. I would like to get a buffer from that stream. I already got one using a v4l2loopback module, but I have to use another piece of software (memcoder) to feed my video device and opencv to read it. That's why I would like to get a buffer directly from the RTSP stream. So I tried to understand the examples in the testProgs directory of live555. Especially testRTSPClient.cpp; is it the right one? I have no comprehension problems until the TaskScheduler event loop. First of all it opens the rtsp url. Then the event loop: what exactly is it doing? Processing all messages sent by the stream? I have found this : http://comments.gmane.org/gmane.comp.multimedia.live555.devel/7642 Must I modify the DummySink to get the buffer?
If yes, can i get the buffer from the fsource (testRTSPClient.cpp:493, don't know where he is defined) ? Here is the output of testRTSPClient : [------------------------------------------------------------------------------ Opening connection to 192.168.0.12, port 554... ...remote connection opened Sending request: DESCRIBE rtsp://192.168.0.12 RTSP/1.0 CSeq: 2 User-Agent: ./testRTSPClient (LIVE555 Streaming Media v2012.01.13) Accept: application/sdp Received 247 new bytes of response data. Received a complete DESCRIBE response: RTSP/1.0 200 OK CSeq: 2 Content-Length: 167 Content-Type: application/sdp m=video 0 RTP/AVP 26 a=type:unicast c=IN IP4 0.0.0.0 a=x-framerate:49.8355 a=x-width:640 a=x-height:480 a=x-dimensions:640,480 a=control:rtsp://192.168.0.12 [URL:"rtsp://192.168.0.12"]: Got a SDP description: m=video 0 RTP/AVP 26 a=type:unicast c=IN IP4 0.0.0.0 a=x-framerate:49.8355 a=x-width:640 a=x-height:480 a=x-dimensions:640,480 a=control:rtsp://192.168.0.12 [URL:"rtsp://192.168.0.12"]: Initiated the "video/JPEG" subsession (client ports 46636-46637) Sending request: SETUP rtsp://192.168.0.12 RTSP/1.0 CSeq: 3 User-Agent: ./testRTSPClient (LIVE555 Streaming Media v2012.01.13) Transport: RTP/AVP;unicast;client_port=46636-46637 Received 100 new bytes of response data. Received a complete SETUP response: RTSP/1.0 200 OK CSeq: 3 Session: 47112344 Transport: RTP/AVP;unicast;destination=0;port=46636 [URL:"rtsp://192.168.0.12"]: Failed to set up the "video/JPEG" subsession: Missing or bad "Transport:" header Sending request: PLAY rtsp://192.168.0.12 RTSP/1.0 CSeq: 4 User-Agent: ./testRTSPClient (LIVE555 Streaming Media v2012.01.13) Session: 47112344 Range: npt=0.000- Received 28 new bytes of response data. Received a complete PLAY response: RTSP/1.0 200 OK CSeq: 4 [URL:"rtsp://192.168.0.12"]: Started playing session... 
--------------------------------------------------------------------------------------] The stream is "Started playing", and I added a cout to see whether the afterGettingFrame or continuePlaying functions are used. Unfortunately there is no output from these functions. I hope you can help me. Thanks for reading. -------------- next part -------------- An HTML attachment was scrubbed... URL: From aviadr1 at gmail.com Thu Feb 23 06:55:20 2012 From: aviadr1 at gmail.com (aviad rozenhek) Date: Thu, 23 Feb 2012 16:55:20 +0200 Subject: [Live-devel] crash in H264VideoRTPSink::stopPlaying() Message-ID: Hi, I'm running an RTSP server based on live555. When a client pauses, my server crashes in H264FUAFragmenter::afterGettingFrame1(). I believe the reason for the crash is that H264VideoRTPSink::stopPlaying() is called after an H264FUAFragmenter::afterGettingFrame1() has already been scheduled to run. When it eventually runs, the fragmenter has already been destroyed, resulting in a crash. -- *Aviad Rozenhek *Media Technologies Architect RayV Technologies Mobile: *+972 54 9764671 *Tel: +972 3 7979200 Ext:216 www.rayv.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Thu Feb 23 13:21:23 2012 From: finlayson at live555.com (Ross Finlayson) Date: Fri, 24 Feb 2012 07:21:23 +1000 Subject: [Live-devel] crash in H264VideoRTPSink::stopPlaying() In-Reply-To: References: Message-ID: <199CB7FD-386B-4384-B61D-197E9423FBB5@live555.com> Are you running an up-to-date version of the software? (A bug similar to what you're reporting was fixed back in September of last year.) Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed...
URL: From finlayson at live555.com Fri Feb 24 07:57:18 2012 From: finlayson at live555.com (Ross Finlayson) Date: Fri, 24 Feb 2012 07:57:18 -0800 Subject: [Live-devel] Get a buffer from a sink In-Reply-To: <4EB2BBC5AF5AA6419F3B8FAAB08F70FF04F232@AMSPRD0104MB126.eurprd01.prod.exchangelabs.com> References: <4EB2BBC5AF5AA6419F3B8FAAB08F70FF04F232@AMSPRD0104MB126.eurprd01.prod.exchangelabs.com> Message-ID: <05621AEC-BC02-40F6-872F-F7EA8FF10CBC@live555.com> Your email indicates a lot of confusion, and a lot of unnecessary flailing around. It's better to ask questions one-at-a-time, in chronological order, as they arise. The first question you should be asking is: Why is "testRTSPClient" failing when it tries to receive a stream from your server? Fortunately, the debugging output from "testRTSPClient" helps tell you the answer: > Sending request: SETUP rtsp://192.168.0.12 RTSP/1.0 > CSeq: 3 > User-Agent: ./testRTSPClient (LIVE555 Streaming Media v2012.01.13) > Transport: RTP/AVP;unicast;client_port=46636-46637 > > > Received 100 new bytes of response data. > Received a complete SETUP response: > RTSP/1.0 200 OK > CSeq: 3 > Session: 47112344 > Transport: RTP/AVP;unicast;destination=0;port=46636 > > > [URL:"rtsp://192.168.0.12"]: Failed to set up the "video/JPEG" subsession: Missing or bad "Transport:" header So the problem here is that your server (your "Elphel NC353L camera") is returning a bad "Transport:" header in its response to the RTSP "SETUP" command. It is incorrectly including a "port" parameter for a unicast stream. The "port" parameter (as noted in RFC 2326, section 12.38) is supposed to be used for a multicast stream. For a unicast stream - such as this one - the server should be including a "server_port" parameter instead. You should contact your camera's manufacturer (Elphel), to check if they have a firmware upgrade. If they don't then please tell them about this problem, and ask them to fix it. Ross Finlayson Live Networks, Inc. 
http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrey at elphel.com Fri Feb 24 10:18:48 2012 From: andrey at elphel.com (Andrey Filippov) Date: Fri, 24 Feb 2012 11:18:48 -0700 Subject: [Live-devel] Get a buffer from a sink In-Reply-To: <05621AEC-BC02-40F6-872F-F7EA8FF10CBC@live555.com> References: <4EB2BBC5AF5AA6419F3B8FAAB08F70FF04F232@AMSPRD0104MB126.eurprd01.prod.exchangelabs.com> <05621AEC-BC02-40F6-872F-F7EA8FF10CBC@live555.com> Message-ID: Ross, We'll look into this problem. Thanks for the explanations. Andrey On Fri, Feb 24, 2012 at 8:57 AM, Ross Finlayson wrote: > Your email indicates a lot of confusion, and a lot of unnecessary flailing > around. ?It's better to ask questions one-at-a-time, in chronological order, > as they arise. > > The first question you should be asking is: Why is "testRTSPClient" failing > when it tries to receive a stream from your server? ?Fortunately, the > debugging output from "testRTSPClient" helps tell you the answer: > > Sending request: SETUP?rtsp://192.168.0.12?RTSP/1.0 > CSeq: 3 > User-Agent: ./testRTSPClient (LIVE555 Streaming Media v2012.01.13) > Transport: RTP/AVP;unicast;client_port=46636-46637 > > > Received 100 new bytes of response data. > Received a complete SETUP response: > RTSP/1.0 200 OK > CSeq: 3 > Session: 47112344 > Transport: RTP/AVP;unicast;destination=0;port=46636 > > > [URL:"rtsp://192.168.0.12"]: Failed to set up the "video/JPEG" subsession: > Missing or bad "Transport:" header > > > So the problem here is that your server (your "Elphel NC353L camera") is > returning a bad "Transport:" header in its response to the RTSP "SETUP" > command. ?It is incorrectly including a "port" parameter for a unicast > stream. ?The "port" parameter (as noted in RFC 2326, section 12.38) is > supposed to be used for a multicast stream. ?For a unicast stream - such as > this one - the server should be including a "server_port" parameter instead. 
> > You should contact your camera's manufacturer (Elphel), to check if they > have a firmware upgrade. ?If they don't then please tell them about this > problem, and ask them to fix it. > > > Ross Finlayson > Live Networks, Inc. > http://www.live555.com/ > > > _______________________________________________ > live-devel mailing list > live-devel at lists.live555.com > http://lists.live555.com/mailman/listinfo/live-devel > From brado at bighillsoftware.com Fri Feb 24 16:38:38 2012 From: brado at bighillsoftware.com (Brad O'Hearne) Date: Fri, 24 Feb 2012 17:38:38 -0700 Subject: [Live-devel] Need help: RTSP Stream -> video render Message-ID: <1DA785A8-30FE-4148-A59E-BAFC5FB26C5C@bighillsoftware.com> Hello, I am reaching out to anyone out there who would be willing to give me some guidance with some issues I'm running into getting the Live555 library integrated into an app I'm writing. My use case is pretty simple to understand: I am writing a mobile app (on both iOS and Android, but for the purposes of discussion here, I am addressing iOS/Objective C first), and I need to consume an RTSP stream over the network, and render the stream to the device's screen. The stream is H.264. I am developing on and targeting only iOS 5. I cannot use built-in video-playback capabilities in iOS because they support HTTP LIVE Streaming but don't support RTSP; and in addition, it would seem that the iOS API controls the source endpoints in AVFoundation, so injecting there is not an option. In this use case, I need real-time video -- minimization of latency is paramount (i.e. I do not want latency due to buffering -- I'd rather drop frames than buffer). I seem to have gotten Live555 compiled properly on iOS 5 (to the best of my knowledge), and linked in with an iOS5 app. I can even run one of the sample clients and pull this RTSP stream, which outputs info to the console using a DummySink. I also have compiled ffmpeg on iOS 5, and have that linked into my project in Xcode. 
So I have both Live555 and ffmpeg libraries in my Xcode project now. What I need to do is take the data received over RTSP, decode the H.264 video, and then output it to the screen. It would seem that this wouldn't be too utterly terrible. However, referencing some of these libraries / headers inside Xcode, and trying to move some of this code around into a more Objective-C friendly fashion is giving me fits. If there is anyone out there familiar with using Live555 on iOS, or anyone who can give guidance here, I would very much appreciate it. Please feel free to reply here, or contact me offline as well at brado at bighillsoftware.com. Regards, Brad Brad O'Hearne Founder / Lead Developer Big Hill Software LLC http://www.bighillsoftware.com From vigosslive at rambler.ru Sat Feb 25 08:08:32 2012 From: vigosslive at rambler.ru (Rustam) Date: Sat, 25 Feb 2012 20:08:32 +0400 Subject: [Live-devel] absent audio when I try to play a VOB file. Message-ID: <1227831526.1330186112.92365640.33615@mperl103.rambler.ru> Hi. Can you help me? When I try to play a VOB file in the test program testOnDemandRTSPServer, it plays without audio (ac3). Maybe your mediaServer cannot play ac3. From finlayson at live555.com Sat Feb 25 12:35:33 2012 From: finlayson at live555.com (Ross Finlayson) Date: Sat, 25 Feb 2012 12:35:33 -0800 Subject: [Live-devel] absent audio when I try to play a VOB file. In-Reply-To: <1227831526.1330186112.92365640.33615@mperl103.rambler.ru> References: <1227831526.1330186112.92365640.33615@mperl103.rambler.ru> Message-ID: <1793BF07-327C-4AD1-AE6B-9CC6990543EA@live555.com> > Hi. Can you help me? When I try to play a VOB file in the test program testOnDemandRTSPServer, it plays without audio (ac3). See http://www.live555.com/liveMedia/faq.html#my-file-doesnt-work Please put your file on a publicly-accessible web (or FTP) server, and post the URL (not the file itself) to this mailing list, so we can download it and take a look at it. Ross Finlayson Live Networks, Inc.
http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From michel.promonet at thalesgroup.com Mon Feb 27 06:32:18 2012 From: michel.promonet at thalesgroup.com (PROMONET Michel) Date: Mon, 27 Feb 2012 15:32:18 +0100 Subject: [Live-devel] video/JPEG streaming Message-ID: <1677_1330353151_4F4B93FF_1677_4457_1_1BE8971B6CFF3A4F97AF4011882AA2550155FAEC4A4E@THSONEA01CMS01P.one.grp> Hi Ross, I have read several times in the live555 mailing list and on your site that you seem to consider MJPEG streaming as something to avoid. But in some situations (a dedicated network, or when precise positioning needs to be guaranteed) it could be interesting. Reading RFC 2435 and the code of live555, it seems that very little is missing, so I tried to improve my understanding of the live555 library. I attached a class inheriting from the JPEGVideoSource interface that gets the RTP information from the JPEG header, as you suggested in http://lists.live555.com/pipermail/live-devel/2003-November/000037.html and a test program. I did not really understand the debate about the qFactor, which does not seem to be extractable from the quantization table data. Using different precisions for the quantization table (0, 8, 255) and different qFactors (128, 255) does not seem to have a big impact on the displayed picture. Using a qFactor below 128 produces an image with very little contrast. But perhaps the viewer is translating values before display? Do you have your own favourite figure for the qFactor, or am I missing things in the RFC? Thanks for your advice. Best Regards, Michel. [@@THALES GROUP RESTRICTED@@] -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: MJPEGVideoSource.cpp URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: MJPEGVideoSource.hh Type: application/octet-stream Size: 1586 bytes Desc: MJPEGVideoSource.hh URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: testMJPEGVideoStreamer.cpp URL: From carlos.duran at justsync.com Mon Feb 27 11:07:50 2012 From: carlos.duran at justsync.com (Carlos Duran) Date: Mon, 27 Feb 2012 14:07:50 -0500 Subject: [Live-devel] openRTSP issue in Mac OSX Message-ID: <08C310B8-BE30-40DE-A256-E9EA459631C6@isaacdaniel.com> Hi, I am new to the framework and to video/audio processing in general, so I am using the openRTSP sample to wrap my head around the ideas, but I am having an issue: I execute the openRTSP command, all of the RTSP commands (OPTIONS, DESCRIBE, SETUP, PLAY) run successfully, and then I get this message: Started playing session Receiving streamed data (signal with "kill -HUP 1587" or "kill -USR1 1587" to terminate)... My problem is that I get no audio at all; also, I know that a file is created: audio-MP4A-LATM-1. I've tried adding an .mp3 extension to hear it using either VLC or QuickTime Player, to no avail. Any orientation with this issue would be greatly appreciated. Regards, Carlos From finlayson at live555.com Mon Feb 27 14:13:00 2012 From: finlayson at live555.com (Ross Finlayson) Date: Mon, 27 Feb 2012 14:13:00 -0800 Subject: [Live-devel] openRTSP issue in Mac OSX In-Reply-To: <08C310B8-BE30-40DE-A256-E9EA459631C6@isaacdaniel.com> References: <08C310B8-BE30-40DE-A256-E9EA459631C6@isaacdaniel.com> Message-ID: > My problem is that I get no audio at all No, you are 'getting audio' just fine, as evidenced by: > also I know that a file is created: audio-MP4A-LATM-1 The "openRTSP" application reads audio and video data from an RTSP stream, but does not 'play' it. Instead, it just writes the incoming data to a file. If you want to actually play the incoming audio and/or video, then you need an actual media player client - e.g., VLC.
> , I've tried adding an .mp3 extension to hear it using either VLC ot Quicktime Player to no avail. That won't work, because the file - as noted by its name - is LATM-formatted MPEG-4 (AAC) audio, not MP3 audio. Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From jkburges at gmail.com Mon Feb 27 14:37:08 2012 From: jkburges at gmail.com (Jon Burgess) Date: Tue, 28 Feb 2012 09:37:08 +1100 Subject: [Live-devel] Need help: RTSP Stream -> video render Message-ID: Hi Brad, > What I need to do is take the data received over RTSP, decode the H.264 > video, and then output it to the screen. It would seem that this wouldn't > be too utterly terrible. However, referencing some of these libraries / > headers inside Xcode, and trying to move some of this code around into a > more Objective-C friendly fashion is giving me fits. > > If there is anyone out there familiar with using Live555 on iOS, or anyone > who can give guidance here, I would very much appreciate it. Please feel > free to reply here, or contact me offline as well at > brado at bighillsoftware.com. > > It's a bit of a vague question, so I'll give you some vague advice. If you want live555 and cocoa/iOS to play nicely with each other, I'd suggest providing a cocoa based implementation of the TaskScheduler (and also potentially the UsageEnvironment), so that live555 events are posted and consumed via the cocoa event loop. See http://www.live555.com/liveMedia/faq.html#threads for more info on what those classes are for. Basically this means you can use live555 on the same thread as the main iOS one, and potentially avoid a lot of threading issues. I've posted my implementation at the bottom of this email, feel free to use, although no guarantee can be made as to the quality! Regarding rendering of data, for my app, I'm rendering audio rather than video. 
I've subclassed MediaSink, with the implementation taking packets of data from live555 and sending them to the iOS audio queue. You will want to do a similar thing, except with an extra decoding step in there, sending to some sort of video/image buffer (not familiar with what's available on iOS, although I would've thought that there'd be an API to decode H.264 in the iOS API so you may not need ffmpeg).

Cheers,
Jon

/*
 * CocoaTaskScheduler.h
 * openRTSP
 *
 * Created by Jon Burgess on 22/10/10.
 * Copyright Jon Burgess. All rights reserved.
 *
 */

#ifndef COCOA_TASK_SCHEDULER_HH
#define COCOA_TASK_SCHEDULER_HH

#include "BasicUsageEnvironment.hh"
#include <map>

class CocoaTaskScheduler : public BasicTaskScheduler {
public:
  CocoaTaskScheduler();
  virtual ~CocoaTaskScheduler();

  virtual TaskToken scheduleDelayedTask(int64_t microseconds, TaskFunc* proc, void* clientData);
      // Schedules a task to occur (after a delay) when we next
      // reach a scheduling point.
      // (Does not delay if "microseconds" <= 0)
      // Returns a token that can be used in a subsequent call to
      // unscheduleDelayedTask()

  virtual void unscheduleDelayedTask(TaskToken& prevTask);
      // (Has no effect if "prevTask" == NULL)
      // Sets "prevTask" to NULL afterwards.

  virtual void rescheduleDelayedTask(TaskToken& task, int64_t microseconds, TaskFunc* proc, void* clientData);
      // Combines "unscheduleDelayedTask()" with "scheduleDelayedTask()"
      // (setting "task" to the new task token).

  // For handling socket operations in the background (from the event loop):
  // typedef void BackgroundHandlerProc(void* clientData, int mask);
  // Possible bits to set in "mask".  (These are deliberately defined
  // the same as those in Tcl, to make a Tcl-based subclass easy.)
  virtual void setBackgroundHandling(int socketNum, int conditionSet, BackgroundHandlerProc* handlerProc, void* clientData);
  virtual void moveSocketHandling(int oldSocketNum, int newSocketNum);
      // Changes any socket handling for "oldSocketNum" so that it occurs with "newSocketNum" instead.

  virtual void doEventLoop(char* watchVariable = NULL);
      // Stops the current thread of control from proceeding,
      // but allows delayed tasks (and/or background I/O handling)
      // to proceed.
      // (If "watchVariable" is not NULL, then we return from this
      // routine when *watchVariable != 0)

private:
  /**
   * This acts as a bridge between the NSInvocation given to an NSTimer
   * and the TaskFunc function pointer.
   */
  void* mTimerInvokee;

  /**
   * Keep a map of socket to run loop ref, so that we can remove socket handling
   * from the run loop.
   */
  // TODO: shouldn't be a void* - compilation problems though.
  std::map<int, void*> mSocketToRunLoopRef;

  /**
   * Map of socket to CFSocketRef (for later invalidation).
   * TODO:
   */
  std::map<int, void*> mSocketToCFSocket;
};

#endif /* COCOA_TASK_SCHEDULER_HH */

/*
 * CocoaTaskScheduler.cpp
 * openRTSP
 *
 * Created by Jon Burgess on 22/10/10.
 * Copyright Jon Burgess. All rights reserved.
 *
 */

#include "CocoaTaskScheduler.h"
#import
#include
#include "constants.h"
#import "TimerInvokee.h"

CocoaTaskScheduler::CocoaTaskScheduler() : BasicTaskScheduler()
{
  mTimerInvokee = [[TimerInvokee alloc] init];
}

CocoaTaskScheduler::~CocoaTaskScheduler()
{
  [(TimerInvokee*)mTimerInvokee release];
}

// Schedules a task to occur (after a delay) when we next
// reach a scheduling point.
// (Does not delay if "microseconds" <= 0)
// Returns a token that can be used in a subsequent call to
// unscheduleDelayedTask()
TaskToken CocoaTaskScheduler::scheduleDelayedTask(int64_t microseconds, TaskFunc* proc, void* clientData)
{
  NSTimeInterval seconds = microseconds / MICROS_IN_SEC;
  SEL timerSelector = @selector(execute:clientData:invoker:);
  NSMethodSignature* methodSig = [(TimerInvokee*)mTimerInvokee methodSignatureForSelector:timerSelector];
  NSInvocation* invocation = [NSInvocation invocationWithMethodSignature:methodSig];
  [invocation setTarget:(TimerInvokee*)mTimerInvokee];
  [invocation setSelector:timerSelector];
  [invocation setArgument:&proc atIndex:2];
  [invocation setArgument:&clientData atIndex:3];
  NSTimer* timer = [NSTimer scheduledTimerWithTimeInterval:seconds invocation:invocation repeats:NO];
  [invocation setArgument:&timer atIndex:4];

  // Need an extra retain here (maybe) because otherwise "unschedule...()"
  // may be called with an invalid NSTimer.
  [timer retain];

  return timer;
}

// (Has no effect if "prevTask" == NULL)
// Sets "prevTask" to NULL afterwards.
void CocoaTaskScheduler::unscheduleDelayedTask(TaskToken& prevTask)
{
  if (prevTask == NULL) {
    return;
  }

  NSTimer* timer = (NSTimer*)prevTask;
  if (timer == nil) {
    // Do nothing.
  } else if (![timer isValid]) {
    NSLog(@"unscheduleDelayedTask: invalid timer, nothing to do.");
  } else {
    [timer invalidate];
    timer = nil;
  }
  //[timer release];
  prevTask = NULL;
}

// Combines "unscheduleDelayedTask()" with "scheduleDelayedTask()"
// (setting "task" to the new task token).
void CocoaTaskScheduler::rescheduleDelayedTask(TaskToken& task, int64_t microseconds, TaskFunc* proc, void* clientData)
{
  unscheduleDelayedTask(task);
  task = scheduleDelayedTask(microseconds, proc, clientData);
}

struct HandlerAndClientData {
  CocoaTaskScheduler::BackgroundHandlerProc* handlerProc;
  void* clientData;
};
typedef struct HandlerAndClientData HandlerAndClientData;

static void MyCallBack(CFSocketRef s, CFSocketCallBackType callbackType, CFDataRef address, const void *data, void *info)
{
  HandlerAndClientData* handlerAndClientData = (HandlerAndClientData*)info;
  CocoaTaskScheduler::BackgroundHandlerProc* handlerProc = handlerAndClientData->handlerProc;
  void* clientData = handlerAndClientData->clientData;

  int resultConditionSet = 0;
  if (callbackType == kCFSocketReadCallBack) {
    resultConditionSet |= SOCKET_READABLE;
  }
  if (callbackType == kCFSocketWriteCallBack) {
    resultConditionSet |= SOCKET_WRITABLE;
  }
  // TODO: SOCKET_EXCEPTION

  (*handlerProc)(clientData, resultConditionSet);
}

void CocoaTaskScheduler::setBackgroundHandling(int socketNum, int conditionSet, BackgroundHandlerProc* handlerProc, void* clientData)
{
  if ((conditionSet == 0) && (mSocketToRunLoopRef.find(socketNum) != mSocketToRunLoopRef.end())) {
    // Remove socket handling.
    CFRunLoopSourceRef runLoopSourceRef = (CFRunLoopSourceRef)mSocketToRunLoopRef[socketNum];
    assert(runLoopSourceRef != NULL);
    CFRunLoopRemoveSource(CFRunLoopGetCurrent(), runLoopSourceRef, kCFRunLoopDefaultMode);

    // Remove from the map.
    mSocketToRunLoopRef.erase(socketNum);

    // Invalidate the CFSocket (so that the same native socket can be used
    // again in a different CFSocket).
    // TODO: revisit, probably very inefficient to not re-use CFSockets.
    CFSocketRef socketToRemove = (CFSocketRef)mSocketToCFSocket[socketNum];
    assert(socketToRemove != NULL);
    CFSocketInvalidate(socketToRemove);
    mSocketToCFSocket.erase(socketNum);
  } else {
    // If the socket is already being used as a run loop source, clean up
    // before trying to register it again (or the new handler won't be used).
    if (mSocketToRunLoopRef.find(socketNum) != mSocketToRunLoopRef.end()) {
      setBackgroundHandling(socketNum, 0,
                            NULL,  // handlerProc
                            NULL); // clientData
    }

    CFOptionFlags optionFlags = kCFSocketNoCallBack;
    if (conditionSet & SOCKET_READABLE) {
      optionFlags |= kCFSocketReadCallBack;
    }
    if (conditionSet & SOCKET_WRITABLE) {
      optionFlags |= kCFSocketWriteCallBack;
    }
    if (conditionSet & SOCKET_EXCEPTION) {
      // optionFlags |= TODO
    }
    CFSocketCallBack callback = MyCallBack;

    // TODO
    HandlerAndClientData* handlerAndClientData = (HandlerAndClientData*)malloc(sizeof(HandlerAndClientData));
    handlerAndClientData->handlerProc = handlerProc;
    handlerAndClientData->clientData = clientData;

    // TODO: leak.
    CFSocketContext* pSocketContext = (CFSocketContext*)malloc(sizeof(CFSocketContext));
    pSocketContext->version = 0;
    pSocketContext->info = handlerAndClientData;
    pSocketContext->retain = NULL; // TODO: revisit this.
    pSocketContext->release = NULL;
    pSocketContext->copyDescription = NULL;

    CFSocketRef socketRef = CFSocketCreateWithNative(NULL, socketNum, optionFlags, callback, pSocketContext);
    mSocketToCFSocket[socketNum] = socketRef;
    NSLog(@"Created CFSocket %u for socketNum %i", socketRef, socketNum);

    // Don't close the native socket when the CFSocket is invalidated.
    optionFlags = CFSocketGetSocketFlags(socketRef);
    optionFlags &= ~kCFSocketCloseOnInvalidate;
    CFSocketSetSocketFlags(socketRef, optionFlags);

    CFRunLoopSourceRef runLoopSourceRef = CFSocketCreateRunLoopSource(NULL, socketRef, 0);
    CFRunLoopAddSource(CFRunLoopGetCurrent(), runLoopSourceRef, kCFRunLoopDefaultMode);

    // Record the mapping between socket and run loop source, so that we can
    // remove it later.
    mSocketToRunLoopRef[socketNum] = runLoopSourceRef;
  }
}

// Changes any socket handling for "oldSocketNum" so that it occurs with "newSocketNum" instead.
void CocoaTaskScheduler::moveSocketHandling(int oldSocketNum, int newSocketNum)
{
  assert(false);
}

// Stops the current thread of control from proceeding,
// but allows delayed tasks (and/or background I/O handling)
// to proceed.
// (If "watchVariable" is not NULL, then we return from this
// routine when *watchVariable != 0)
void CocoaTaskScheduler::doEventLoop(char* watchVariable)
{
  // Don't think it is necessary to do anything here,
  // as the iOS main event loop will be used.
  NSLog(@"CocoaTaskScheduler::doEventLoop() called");
}

/**
  // TODO: maybe we do need to handle watchVariable though?
  if ((watchVariable != NULL) && (*watchVariable != 0)) {
    // NSAssert(false, "watchVariable is set");
    assert(false);
  }
}
*/

> > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From bstump at codemass.com Mon Feb 27 14:41:11 2012 From: bstump at codemass.com (Barry Stump) Date: Mon, 27 Feb 2012 14:41:11 -0800 Subject: [Live-devel] Need help: RTSP Stream -> video render In-Reply-To: References: <1DA785A8-30FE-4148-A59E-BAFC5FB26C5C@bighillsoftware.com> Message-ID: I am working on an iOS project similar to what you describe: H.264 video + AAC audio with Live555 and FFmpeg handling RTSP and video decoding (respectively). I recommend basing your work on testRTSPClient.cpp, with an Objective-C++ wrapper class as the interface between the rest of your code and the C++ Live555 library. You will need to create one or more custom subclasses of MediaSink to handle the media streams, with the data payload being delivered in the afterGettingFrame() functions. For AAC, an RTSP frame of data is equivalent to Core Audio's concept of an audio "packet" not an audio "frame". On the video side, at a minimum, you will need to add the MPEG start code 0x00000001 to each frame before you hand it off to FFmpeg to decode. See the H264VideoFileSink.cpp file for an example of this. The details of using FFmpeg inside iOS are beyond the scope of this list, but the Dropcam source code may be of help to you. Note that it also uses Live555 for RTSP handling, but uses the deprecated synchronous methods. You are better off following the testRTSPClient example for Live555. https://github.com/dropcam/dropcam_for_iphone -Barry On Mon, Feb 27, 2012 at 2:39 PM, Barry Stump wrote: > I am working on an iOS project similar to what you describe: H.264 video + > AAC audio with Live555 and FFmpeg handling RTSP and video decoding > (respectively). I recommend basing your work on testRTSPClient.cpp, with > an Objective-C++ wrapper class as the interface between the rest of your > code and the C++ Live555 library. 
You will need to create one or more > custom subclasses of MediaSink to handle the media streams, with the data > payload being delivered in the afterGettingFrame() functions. For AAC, an > RTSP frame of data is equivalent to Core Audio's concept of an audio > "packet" not an audio "frame". On the video side, at a minimum, you will > need to add the MPEG start code 0x00000001 to each frame before you hand it > off to FFmpeg to decode. See the H264VideoFileSink.cpp file for an example > of this. > > The details of using FFmpeg inside iOS are beyond the scope of this list, > but the Dropcam source code may be of help to you. Note that it also uses > Live555 for RTSP handling, but uses the deprecated synchronous methods. > You are better off following the testRTSPClient example for Live555. > > https://github.com/dropcam/dropcam_for_iphone > > -Barry > > > On Fri, Feb 24, 2012 at 4:38 PM, Brad O'Hearne wrote: > >> Hello, >> >> I am reaching out to anyone out there who would be willing to give me >> some guidance with some issues I'm running into getting the Live555 library >> integrated into an app I'm writing. My use case is pretty simple to >> understand: >> >> I am writing a mobile app (on both iOS and Android, but for the purposes >> of discussion here, I am addressing iOS/Objective C first), and I need to >> consume an RTSP stream over the network, and render the stream to the >> device's screen. The stream is H.264. I am developing on and targeting only >> iOS 5. I cannot use built-in video-playback capabilities in iOS because >> they support HTTP LIVE Streaming but don't support RTSP; and in addition, >> it would seem that the iOS API controls the source endpoints in >> AVFoundation, so injecting there is not an option. In this use case, I need >> real-time video -- minimization of latency is paramount (i.e. I do not want >> latency due to buffering -- I'd rather drop frames than buffer). 
>> >> I seem to have gotten Live555 compiled properly on iOS 5 (to the best of >> my knowledge), and linked in with an iOS5 app. I can even run one of the >> sample clients and pull this RTSP stream, which outputs info to the console >> using a DummySink. I also have compiled ffmpeg on iOS 5, and have that >> linked into my project in Xcode. So I have both Live555 and ffmpeg >> libraries in my Xcode project now. >> >> What I need to do is take the data received over RTSP, decode the H.264 >> video, and then output it to the screen. It would seem that this wouldn't >> be too utterly terrible. However, referencing some of these libraries / >> headers inside Xcode, and trying to move some of this code around into a >> more Objective-C friendly fashion is giving me fits. >> >> If there is anyone out there familiar with using Live555 on iOS, or >> anyone who can give guidance here, I would very much appreciate it. Please >> feel free to reply here, or contact me offline as well at >> brado at bighillsoftware.com. >> >> Regards, >> >> Brad >> >> Brad O'Hearne >> Founder / Lead Developer >> Big Hill Software LLC >> http://www.bighillsoftware.com >> >> >> _______________________________________________ >> live-devel mailing list >> live-devel at lists.live555.com >> http://lists.live555.com/mailman/listinfo/live-devel >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jkburges at gmail.com Mon Feb 27 14:45:19 2012 From: jkburges at gmail.com (Jon Burgess) Date: Tue, 28 Feb 2012 09:45:19 +1100 Subject: [Live-devel] Fwd: Need help: RTSP Stream -> video render In-Reply-To: References: Message-ID: Further to my previous email, you'll need the following class to run my code: // // TimerInvokee.m // openRTSP // // Created by Jon Burgess on 22/10/10. // Copyright Jon Burgess. All rights reserved. 
//

#import "TimerInvokee.h"

@implementation TimerInvokee

- (void)execute:(TaskFunc*)proc clientData:(void*)clientData invoker:(NSTimer*)invoker
{
  proc(clientData);

  // We did an extra retain when scheduling, so release equally here now that the timer
  // has fired, to avoid a leak.
  [invoker release];
}

@end

-------------- next part -------------- An HTML attachment was scrubbed... URL: From brado at bighillsoftware.com Mon Feb 27 15:25:53 2012 From: brado at bighillsoftware.com (Brad O'Hearne) Date: Mon, 27 Feb 2012 16:25:53 -0700 Subject: [Live-devel] Need help: RTSP Stream -> video render In-Reply-To: References: <1DA785A8-30FE-4148-A59E-BAFC5FB26C5C@bighillsoftware.com> Message-ID: <0DB63F42-7226-4C9A-BFF6-5E25A089D76D@bighillsoftware.com>

Barry,

Thank you very much for your reply. What you have spoken to here is exactly the model that I have followed, but the crux of the issue I am trying to solve involves exactly what you are touching on. In short, what I have is:

H.264 video -> RTSP -> RTSPClient -> MediaSink subclass

exactly as you have described. My present challenge is properly grabbing the received data in the afterGettingFrame() function and feeding it to the ffmpeg avcodec. Specifically, I am trying to make sure that the data received from Live555 and handed to ffmpeg is the proper format and quantity. In other words, the ffmpeg avcodec is seeking a packet (or more properly put, an "AVPacket", in their parlance).
I know this isn't an ffmpeg discussion list, so my purpose isn't to discuss ffmpeg, but I do need to gather and construct (if necessary) the received data to build this packet.

My questions are:

1. Is such a packet the payload within the afterGettingFrame() function, and if so, how do I grab it?

2. If the answer to 1 is no, then I need some pointers on how to construct it.

I do indeed currently have adapted code similar to the testRTSPClient example and can receive RTSP data successfully. However, what I'm seeing is that when the afterGettingFrame() function is getting called, I am seeing a cycle of data received: 2 calls with 9 and 5 bytes respectively, followed by the receipt of about 23K +/- of data (see the attached log). I suspect that these aren't all full packets.

Any insight you can give would be greatly appreciated, thx!

Brad

Brad O'Hearne Founder / Lead Developer Big Hill Software LLC http://www.bighillsoftware.com

On Feb 27, 2012, at 3:41 PM, Barry Stump wrote: > I am working on an iOS project similar to what you describe: H.264 video + AAC audio with Live555 and FFmpeg handling RTSP and video decoding (respectively). I recommend basing your work on testRTSPClient.cpp, with an Objective-C++ wrapper class as the interface between the rest of your code and the C++ Live555 library. You will need to create one or more custom subclasses of MediaSink to handle the media streams, with the data payload being delivered in the afterGettingFrame() functions. For AAC, an RTSP frame of data is equivalent to Core Audio's concept of an audio "packet" not an audio "frame". On the video side, at a minimum, you will need to add the MPEG start code 0x00000001 to each frame before you hand it off to FFmpeg to decode. See the H264VideoFileSink.cpp file for an example of this. > > The details of using FFmpeg inside iOS are beyond the scope of this list, but the Dropcam source code may be of help to you.
Note that it also uses Live555 for RTSP handling, but uses the deprecated synchronous methods. You are better off following the testRTSPClient example for Live555. > > https://github.com/dropcam/dropcam_for_iphone > > -Barry > > > On Fri, Feb 24, 2012 at 4:38 PM, Brad O'Hearne wrote: > Hello, > > I am reaching out to anyone out there who would be willing to give me some guidance with some issues I'm running into getting the Live555 library integrated into an app I'm writing. My use case is pretty simple to understand: > > I am writing a mobile app (on both iOS and Android, but for the purposes of discussion here, I am addressing iOS/Objective C first), and I need to consume an RTSP stream over the network, and render the stream to the device's screen. The stream is H.264. I am developing on and targeting only iOS 5. I cannot use built-in video-playback capabilities in iOS because they support HTTP LIVE Streaming but don't support RTSP; and in addition, it would seem that the iOS API controls the source endpoints in AVFoundation, so injecting there is not an option. In this use case, I need real-time video -- minimization of latency is paramount (i.e. I do not want latency due to buffering -- I'd rather drop frames than buffer). > > I seem to have gotten Live555 compiled properly on iOS 5 (to the best of my knowledge), and linked in with an iOS5 app. I can even run one of the sample clients and pull this RTSP stream, which outputs info to the console using a DummySink. I also have compiled ffmpeg on iOS 5, and have that linked into my project in Xcode. So I have both Live555 and ffmpeg libraries in my Xcode project now. > > What I need to do is take the data received over RTSP, decode the H.264 video, and then output it to the screen. It would seem that this wouldn't be too utterly terrible. However, referencing some of these libraries / headers inside Xcode, and trying to move some of this code around into a more Objective-C friendly fashion is giving me fits. 
> > If there is anyone out there familiar with using Live555 on iOS, or anyone who can give guidance here, I would very much appreciate it. Please feel free to reply here, or contact me offline as well at brado at bighillsoftware.com. > > Regards, > > Brad > > Brad O'Hearne > Founder / Lead Developer > Big Hill Software LLC > http://www.bighillsoftware.com > > > _______________________________________________ > live-devel mailing list > live-devel at lists.live555.com > http://lists.live555.com/mailman/listinfo/live-devel > > > _______________________________________________ > live-devel mailing list > live-devel at lists.live555.com > http://lists.live555.com/mailman/listinfo/live-devel -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: data.txt URL: -------------- next part -------------- An HTML attachment was scrubbed... URL: From brado at bighillsoftware.com Mon Feb 27 15:34:06 2012 From: brado at bighillsoftware.com (Brad O'Hearne) Date: Mon, 27 Feb 2012 16:34:06 -0700 Subject: [Live-devel] Fwd: Need help: RTSP Stream -> video render In-Reply-To: References: Message-ID: Jon, Thank you so much for your reply, and the code, it helps tremendously. In regards to being vague, the simplest I can describe the use case is this: - I have an external device streaming H.264 video over Wifi via RTSP. - I am constructing an iOS app to receive the RTSP stream, decode the H.264 video, and render it to the device screen. - I need to eliminate any and all latency (buffering). I just posted a response to Barry, that will probably give even more insight to what I am trying to do. Presently, I have both Live555 and ffmpeg compiled on iOS5, and both libraries linked into an iOS 5 Xcode project. I have implemented a MediaSink object, though I don't believe it does completely what I need it to do yet. 
What I need to do is gather the received video stream data and construct an appropriate video packet to feed to ffmpeg. I'm not completely sure about the nature of the data available in the MediaSink afterGettingFrame() method, and so I'm hoping that a few of the familiar minds here can help bridge that gap. Thanks again for your reply -- any additional help on this is greatly appreciated. Feel free also to contact me offline if necessary. Brad Brad O'Hearne Founder / Lead Developer Big Hill Software LLC http://www.bighillsoftware.com On Feb 27, 2012, at 3:45 PM, Jon Burgess wrote: > Further to my previous email, you'll need the following class to run my code: > > // > // TimerInvokee.m > // openRTSP > // > // Created by Jon Burgess on 22/10/10. > // Copyright Jon Burgess. All rights reserved. > // > > #import "TimerInvokee.h" > > @implementation TimerInvokee > > - (void)execute:(TaskFunc*)proc clientData:(void*)clientData invoker:(NSTimer*)invoker > { > // NSLog(@"Invoking task for timer: %u", invoker); > proc(clientData); > > // We did an extra retain when scheduling, so release equally here now that the timer > // has fired to avoid leak. > [invoker release]; > } > > @end > > > // > // TimerInvokee.m > // openRTSP > // > // Created by Jon Burgess on 22/10/10. > // Copyright 2010 Jon Burgess. All rights reserved. > // > > #import "TimerInvokee.h" > > @implementation TimerInvokee > > - (void)execute:(TaskFunc*)proc clientData:(void*)clientData invoker:(NSTimer*)invoker > { > // NSLog(@"Invoking task for timer: %u", invoker); > proc(clientData); > > // We did an extra retain when scheduling, so release equally here now that the timer > // has fired to avoid leak. 
> [invoker release]; } > > @end > > _______________________________________________ > live-devel mailing list > live-devel at lists.live555.com > http://lists.live555.com/mailman/listinfo/live-devel From bstump at codemass.com Mon Feb 27 17:11:19 2012 From: bstump at codemass.com (Barry Stump) Date: Mon, 27 Feb 2012 17:11:19 -0800 Subject: [Live-devel] Need help: RTSP Stream -> video render In-Reply-To: <0DB63F42-7226-4C9A-BFF6-5E25A089D76D@bighillsoftware.com> References: <1DA785A8-30FE-4148-A59E-BAFC5FB26C5C@bighillsoftware.com> <0DB63F42-7226-4C9A-BFF6-5E25A089D76D@bighillsoftware.com> Message-ID:

For H.264, the payload delivered in the afterGettingFrame() method is a NAL unit. The big ones are (usually) the coded video frames (there are various types depending on your encoder settings) and the small ones are (usually) SPS and PPS NAL units, which contain various settings needed by your decoder. You can read up on all the technical details in RFC 3984: http://tools.ietf.org/html/rfc3984

My particular hardware encoder device sends NAL units in the following order: 7, 8, 5, 1, 1, 1, 1, 1, 1, 1, 1, 1, 7, 8, 5, ... The 5 and the 1's are actual frame data; the 7's and 8's are SPS/PPS units. All that is necessary for me to decode these in FFmpeg is to prepend the 4 bytes 0x00000001 to each NAL unit, bundle each one up in an AVPacket struct, and hand it off to FFmpeg's avcodec_decode_video2() function.

-Barry

On Mon, Feb 27, 2012 at 3:25 PM, Brad O'Hearne wrote: > Barry, > > Thank you very much for your reply. What you have spoken to here is > exactly the model that I have followed, but the crux of the issue I am > trying to solve involves exactly what you are touching on. In short what I > have is: > > H.264 video -> RTSP -> RTSPClient -> MediaSink subclass > > exactly as you have described. My present challenge is properly grabbing > the received data in the afterGettingFrame() function, and feeding to the > ffmpeg avcodec.
Specifically, I am trying to make sure that the data > received from Live555 and handed to ffmpeg is the proper format and > quantity. In other words, the ffmpeg avcodec is seeking a packet (or more > properly put, an "AVPacket" according to their parlance. I know this isn't > an ffmpeg discussion list, so my purpose isn't to discuss ffmpeg, but I do > need to gather and construct (if necessary) the received data to construct > this packet. > > My questions are: > > 1. Is such a packet the payload within the afterGettingFrame() function, > and if so, how do I grab it? > > 2. If the answer to 1 is no, then I need some any pointers on how to > construct it. > > I do indeed currently have adapted code similar to the testRTSPClient > example and can receive RTSP data successfully. However, what I'm seeing is > that when the afterGettingFrame() function is getting called, I am seeing a > cycle of data received....2 calls with a 9 and 5 bytes respectively, > followed by the receipt of about 23K+/- of data. (see the attached log). I > am suspecting that these aren't all full packets. > > Any insight you can give would be greatly appreciated, thx! > > Brad > > Brad O'Hearne > Founder / Lead Developer > Big Hill Software LLC > http://www.bighillsoftware.com > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From yanggr at hotmail.com Mon Feb 27 17:38:12 2012 From: yanggr at hotmail.com (YangGuanRong) Date: Tue, 28 Feb 2012 09:38:12 +0800 Subject: [Live-devel] fast forward / backward switch not smooth In-Reply-To: References: , Message-ID:

Hi all,

I'm doing some tests with "live555MediaServer" as the RTSP server. Everything works pretty well except the switch into / out of / between fast forward / backward mode and regular mode. Version / code: 2012.01.13. Client: VLC, latest from the videolan website, "VLC media player 1.1.11 The Luggage".

Phenomenon:
1, when set to fast forward at 16 or 32 times speed, it works smoothly most of the time and video appears quickly;
2, when set to fast backward at 16 or 32 times speed, sometimes I need to wait several seconds for the video to display;
3, when set to fast forward or backward at any other speed, such as forward at 2 or 4 times speed, most of the time I need to wait a long time for the video to display, or it doesn't move at all.

Has anyone observed, or received reports of, this kind of issue? If so, is there any way to avoid or improve it?

Thanks!

GuanRong Yang -------------- next part -------------- An HTML attachment was scrubbed...
URL: From finlayson at live555.com Mon Feb 27 18:31:31 2012 From: finlayson at live555.com (Ross Finlayson) Date: Mon, 27 Feb 2012 18:31:31 -0800 Subject: [Live-devel] fast forward / backward switch not smooth In-Reply-To: References: , Message-ID: <7D2D5586-CCB7-49A0-869E-6B7243C67A32@live555.com> > I'm doing some tests with "live555MediaServer" as the RTSP server. Everything works pretty well except the switch in / out / between fast forward / backward mode and regular mode. Version / code: 2012.01.13 > Client: VLC, the latest from the videolan website, "VLC media player 1.1.11 The Luggage" > > Phenomenon: > 1, when set to fast forward at 16 or 32 times speed, most of the time it works smoothly and gets video quickly > 2, when set to fast backward at 16 or 32 times speed, sometimes I need to wait several seconds for the video to display > 3, when set to fast forward or backward at any other speed, such as forward at 2 or 4 times speed, most of the time I need to wait a long time for the video to display, or it doesn't move at all; VLC is not our software (although it does use our library for its RTSP client implementation), and this is not a VLC mailing list. Unfortunately, it's not clear if the problems that you're seeing are due to issues with our software, or issues with VLC. If the problems are with VLC, then you'll need to report them on a VLC mailing list instead. One way to check where your 'trick play' problems lie is to use our "openRTSP" command-line client instead of VLC. Note the 'trick play' options: http://www.live555.com/openRTSP/#trick-play Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From sambhav at saranyu.in Mon Feb 27 19:35:28 2012 From: sambhav at saranyu.in (Kumar Sambhav) Date: Tue, 28 Feb 2012 09:05:28 +0530 Subject: [Live-devel] Non blocking read in ByteStreamFileSource Message-ID: Hi, I am using named pipes instead of a file for an on-demand server. 
In ByteStreamFileSource::doReadFromFile(), read/fread blocks until there is any data. I want to make this non-blocking by calling select on the pipe to check if there is any data on the pipe. If there is no data, can doReadFromFile() be scheduled after a specified time? Regards, Sambhav From finlayson at live555.com Mon Feb 27 19:50:50 2012 From: finlayson at live555.com (Ross Finlayson) Date: Mon, 27 Feb 2012 19:50:50 -0800 Subject: [Live-devel] Non blocking read in ByteStreamFileSource In-Reply-To: References: Message-ID: > In ByteStreamFileSource::doReadFromFile(), read/fread blocks until there is any data. No, that happens only if READ_FROM_FILES_SYNCHRONOUSLY is defined, and that's defined only for Windows, where you have no choice in the matter (because Windows doesn't let you treat open files as select()able sockets). To overcome this, you need to use some other OS. Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From sambhav at saranyu.in Mon Feb 27 23:20:49 2012 From: sambhav at saranyu.in (Kumar Sambhav) Date: Tue, 28 Feb 2012 12:50:49 +0530 Subject: [Live-devel] Non blocking read in ByteStreamFileSource In-Reply-To: References: Message-ID: <98354C93-FBE1-4685-8A29-F354BE34FC2F@saranyu.in> Hi Ross, I am using Linux. Before getting to the read function, the program blocks at ByteStreamFileSource::createNew when it tries to do OpenInputFile on a pipe. I was not able to find any options to call fopen in non-blocking mode. Regards, Sambhav On Feb 28, 2012, at 9:20 AM, Ross Finlayson wrote: >> In ByteStreamFileSource::doReadFromFile(), read/fread blocks until there is any data. > > No, that happens only if READ_FROM_FILES_SYNCHRONOUSLY is defined, and that's defined only for Windows, where you have no choice in the matter (because Windows doesn't let you treat open files as select()able sockets). To overcome this, you need to use some other OS. 
> > > Ross Finlayson > Live Networks, Inc. > http://www.live555.com/ > > _______________________________________________ > live-devel mailing list > live-devel at lists.live555.com > http://lists.live555.com/mailman/listinfo/live-devel -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Mon Feb 27 23:31:37 2012 From: finlayson at live555.com (Ross Finlayson) Date: Mon, 27 Feb 2012 23:31:37 -0800 Subject: [Live-devel] Non blocking read in ByteStreamFileSource In-Reply-To: <98354C93-FBE1-4685-8A29-F354BE34FC2F@saranyu.in> References: <98354C93-FBE1-4685-8A29-F354BE34FC2F@saranyu.in> Message-ID: > I am using Linux. Good. > Before getting to the read function, the program blocks at ByteStreamFileSource::createNew when it tries to do OpenInputFile on a pipe. OK, now you're talking about something else. Beforehand, you were talking about reading from a pipe - which is done (in your case) using "read()", is called from the event loop (only when data is available to be read), and should not block. Now, you seem to be talking about *opening* the pipe, which is done using "OpenInputFile()", which is implemented as a call to "fopen()". I don't know why that would block, however... Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From sambhav at saranyu.in Tue Feb 28 00:39:47 2012 From: sambhav at saranyu.in (Kumar Sambhav) Date: Tue, 28 Feb 2012 14:09:47 +0530 Subject: [Live-devel] Non blocking read in ByteStreamFileSource In-Reply-To: References: <98354C93-FBE1-4685-8A29-F354BE34FC2F@saranyu.in> Message-ID: <1C70E2F9-DC58-4531-B7F9-BCEE2BF87D02@saranyu.in> Initially I thought the pipe read would block until data is available. You clarified that on Linux it will not block. Then I figured out that the blocking is happening at fopen. As an alternative to fopen, open() can be used to open a PIPE in non-blocking mode. 
open("name" ,O_RDONLY | O_NONBLOCK) On Feb 28, 2012, at 1:01 PM, Ross Finlayson wrote: >> I am using Linux. > > Good. > > >> Before going to read function, the program blocks at ByteStreamFileSource::createNew when it tries to do OpenInputFile on a pipe. > > OK, now you're talking about something else. Beforehand, you were talking about reading from a pipe - which is done (in your case) using "read()", is called from the event loop (only when data is available to be read), and should not block. > > Now, you seem to be talking about *opening* the pipe, which is done using "OpenInputFile()", which is implemented as a call to "fopen()". I don't know why that would block, however... > > > Ross Finlayson > Live Networks, Inc. > http://www.live555.com/ > > _______________________________________________ > live-devel mailing list > live-devel at lists.live555.com > http://lists.live555.com/mailman/listinfo/live-devel -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Tue Feb 28 00:55:12 2012 From: finlayson at live555.com (Ross Finlayson) Date: Tue, 28 Feb 2012 00:55:12 -0800 Subject: [Live-devel] Non blocking read in ByteStreamFileSource In-Reply-To: <1C70E2F9-DC58-4531-B7F9-BCEE2BF87D02@saranyu.in> References: <98354C93-FBE1-4685-8A29-F354BE34FC2F@saranyu.in> <1C70E2F9-DC58-4531-B7F9-BCEE2BF87D02@saranyu.in> Message-ID: <59292966-026F-4606-BA29-7690B4903921@live555.com> > Then figured out that blocking is happening at fopen. > > alternative to fopen, open() can be used to open a PIPE in non-blocking mode. > open("name" ,O_RDONLY | O_NONBLOCK) If you want to do this, then I suggest that you subclass "ByteStreamFileSource", and define a new function "yourByteStreamFileSourceSubclass::createNew()" that works the same as "ByteStreamFileSource::createNew()", except for how it creates the FID that ends up getting passed to the "ByteStreamFileSource" constructor. Ross Finlayson Live Networks, Inc. 
http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From aviadr1 at gmail.com Tue Feb 28 01:53:55 2012 From: aviadr1 at gmail.com (aviad rozenhek) Date: Tue, 28 Feb 2012 11:53:55 +0200 Subject: [Live-devel] crash in H264VideoRTPSink::stopPlaying() In-Reply-To: <199CB7FD-386B-4384-B61D-197E9423FBB5@live555.com> References: <199CB7FD-386B-4384-B61D-197E9423FBB5@live555.com> Message-ID: I'm running a version from 2010.09.22. If I upgrade to a later version, streaming to Android stops working. The Android RTSP client complains about timestamps in messages like "Huh? Time moving backwards? 29733148 > 28809389 " On Thu, Feb 23, 2012 at 23:21, Ross Finlayson wrote: > Are you running an up-to-date version of the software? (A bug similar to what you're reporting was fixed back in September of last year.) > > Ross Finlayson > Live Networks, Inc. > http://www.live555.com/ > > > _______________________________________________ > live-devel mailing list > live-devel at lists.live555.com > http://lists.live555.com/mailman/listinfo/live-devel > > -- Aviad Rozenhek -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Tue Feb 28 07:40:42 2012 From: finlayson at live555.com (Ross Finlayson) Date: Tue, 28 Feb 2012 07:40:42 -0800 Subject: [Live-devel] crash in H264VideoRTPSink::stopPlaying() In-Reply-To: References: <199CB7FD-386B-4384-B61D-197E9423FBB5@live555.com> Message-ID: <005A285F-1317-44FC-99CD-5D64F2A16230@live555.com> > I'm running a version from 2010.09.22 > if I upgrade to a later version Not just 'a later' version. The only version we support is *the latest* version. > , streaming to Android stops working. > The Android RTSP client complains about timestamps in messages like "Huh? Time moving backwards? 29733148 > 28809389 " OK, so you'll need to ask an appropriate mailing list for the Android RTSP client - not this mailing list - about this. 
(Note that RTP timestamps - and thus presentation times - can go 'backwards' for codecs that have 'B' frames (or their equivalent in H.264). This is not a bug.) Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From vigosslive at rambler.ru Tue Feb 28 10:37:08 2012 From: vigosslive at rambler.ru (Rustam) Date: Tue, 28 Feb 2012 22:37:08 +0400 Subject: [Live-devel] ubsent audio when i try play VOB file. Message-ID: <926930449.1330454228.71391384.73222@mperl113.rambler.ru> >Please put your file on a publicly-accessible web (or FTP) server, and post the URL (not the file itself) to this mailing >list, so we can download it and take a look at it. Ross, I have tried about 50 discs; do you think the problem is in the file? Can you play VOB files with sound? Thanks. From yanggr at hotmail.com Tue Feb 28 10:49:06 2012 From: yanggr at hotmail.com (YangGuanRong) Date: Wed, 29 Feb 2012 02:49:06 +0800 Subject: [Live-devel] fast forward / backward switch not smooth In-Reply-To: <7D2D5586-CCB7-49A0-869E-6B7243C67A32@live555.com> References: , , , , <7D2D5586-CCB7-49A0-869E-6B7243C67A32@live555.com> Message-ID: Hi, I reported the issue to live555 because I have done some investigation and believe it is a live555 RTSP server issue; VLC is mentioned here just as a way to reproduce the issue - you can use any client to reproduce it. Actually I found the issue with our own RTSP client first, and then tried with VLC to confirm. I added some prints where the socket sends data out: when the issue happens, the sending is pretty slow, whereas at the same scale in the good case there are lots of prints. And most of the time the issue recovers automatically if you wait long enough; when this happens, there are very few prints at the beginning (bad case) and then lots of prints (good case); during this period I don't perform any operation. It is easy to reproduce: just play a *.ts file with index file *.tsx, 
switch the speed to 2 or 4 times fast forward / backward, and you will see it; From: finlayson at live555.com Date: Mon, 27 Feb 2012 18:31:31 -0800 To: live-devel at ns.live555.com Subject: Re: [Live-devel] fast forward / backward switch not smooth I'm doing some tests with "live555MediaServer" as the RTSP server. Everything works pretty well except the switch in / out / between fast forward / backward mode and regular mode. Version / code: 2012.01.13 Client: VLC, the latest from the videolan website, "VLC media player 1.1.11 The Luggage" Phenomenon: 1, when set to fast forward at 16 or 32 times speed, most of the time it works smoothly and gets video quickly 2, when set to fast backward at 16 or 32 times speed, sometimes I need to wait several seconds for the video to display 3, when set to fast forward or backward at any other speed, such as forward at 2 or 4 times speed, most of the time I need to wait a long time for the video to display, or it doesn't move at all; VLC is not our software (although it does use our library for its RTSP client implementation), and this is not a VLC mailing list. Unfortunately, it's not clear if the problems that you're seeing are due to issues with our software, or issues with VLC. If the problems are with VLC, then you'll need to report them on a VLC mailing list instead. One way to check where your 'trick play' problems lie is to use our "openRTSP" command-line client instead of VLC. Note the 'trick play' options: http://www.live555.com/openRTSP/#trick-play Ross Finlayson Live Networks, Inc. http://www.live555.com/ _______________________________________________ live-devel mailing list live-devel at lists.live555.com http://lists.live555.com/mailman/listinfo/live-devel -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Tue Feb 28 12:00:08 2012 From: finlayson at live555.com (Ross Finlayson) Date: Tue, 28 Feb 2012 12:00:08 -0800 Subject: [Live-devel] ubsent audio when i try play VOB file. 
In-Reply-To: <926930449.1330454228.71391384.73222@mperl113.rambler.ru> References: <926930449.1330454228.71391384.73222@mperl113.rambler.ru> Message-ID: <0E718C63-9C47-4784-A38D-31CD3AB8575D@live555.com> >> Please put your file on a publicly-accessible web (or FTP) server, and post the URL (not the file itself) to this mailing list, so we can download it and take a look at it. > > Ross, I have tried about 50 discs; do you think the problem is in the file? Unless you can give me a specific example of a file that fails for you, and a specific description of why it fails, then I won't be able to help you. Do not post again about this until you do. Note also that you should use "openRTSP" - not a media player application - as your RTSP client. (If you use a media player application, and find that it doesn't play the audio, then that might just be the fault of the media player application. It might be receiving the data OK, but, for some reason, not playing it.) Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Tue Feb 28 12:22:00 2012 From: finlayson at live555.com (Ross Finlayson) Date: Tue, 28 Feb 2012 12:22:00 -0800 Subject: [Live-devel] fast forward / backward switch not smooth In-Reply-To: References: , , , , <7D2D5586-CCB7-49A0-869E-6B7243C67A32@live555.com> Message-ID: <0B80FACF-3679-42DE-9C72-36CAE2056BD9@live555.com> > I reported the issue to live555 because I have done some investigation and believe it is a live555 RTSP server issue 'Believing' is not enough. To confirm that it is a problem with our software, you will need to use our software alone, and not VLC. I suggest that you use our "testMPEG2TransportStreamTrickPlay" application - see - to test this. 
If you can find - using this application - that 'trick play' operations do not work properly on one of your input transport stream files, then please point us at this input transport stream file, and tell us the "testMPEG2TransportStreamTrickPlay" commands that you used. If you can't do this, then please don't post about this again. Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From sambhav at saranyu.in Tue Feb 28 21:15:41 2012 From: sambhav at saranyu.in (Kumar Sambhav) Date: Wed, 29 Feb 2012 10:45:41 +0530 Subject: [Live-devel] Live input to RTSP Server Message-ID: <9B90760C-ACDD-4E96-98E1-A8D62AD6FA92@saranyu.in> Hi, I am trying to get live RTP data as input to the RTSP Server. Modified H264VideoFileServerMediaSubsession::createNewStreamSource to create an RTPSource instead of ByteStreamFileSource. char const* inputAddressStr = "0.0.0.0"; struct in_addr inputAddress; inputAddress.s_addr = our_inet_addr(inputAddressStr); Port const inputPort(50003); unsigned char const inputTTL = 0; // we're only reading from this mcast group Groupsock inputGroupsock(envir(), inputAddress, inputPort, inputTTL); H264VideoRTPSource *rtpSource = H264VideoRTPSource::createNew(envir(), &inputGroupsock,96,90000); return H264VideoStreamDiscreteFramer::createNew(envir(), rtpSource); The program is crashing when RTPInterface::startNetworkReading calls BasicTaskScheduler::setBackgroundHandling. From the gdb trace I found that the socketNum in RTPInterface is corrupted. When the RTPInterface object was initialized its value was 6. Any idea where things might be going wrong? GDB Trace when the application crashed. 
BasicTaskScheduler::setBackgroundHandling (this=0x1002008c0, socketNum=1606412024, conditionSet=2, handlerProc=0xbf7fe24, clientData=0x7fff8d02a600) at BasicTaskScheduler.cpp:197 RTPInterface::startNetworkReading (this=0x100204d90, handlerProc=0x10001b2d8 ) at UsageEnvironment.hh:156 MultiFramedRTPSource::doGetNextFrame (this=0x0) at MultiFramedRTPSource.cpp:119 H264VideoStreamDiscreteFramer::doGetNextFrame (this=0x100204d00) at H264VideoStreamDiscreteFramer.cpp:46 H264FUAFragmenter::doGetNextFrame (this=0x0) at H264VideoRTPSink.cpp:167 MultiFramedRTPSink::packFrame (this=0x0) at MultiFramedRTPSink.cpp:216 MultiFramedRTPSink::continuePlaying (this=0x0) at MultiFramedRTPSink.cpp:152 StreamState::startPlaying (this=0x1002055d0, dests=0x7fff5fbfe6f8, rtcpRRHandler=0x100020ac4 , rtcpRRHandlerClientData=0x7fff5fbff1c0, serverRequestAlternativeByteHandler=0x100022dac , serverRequestAlternativeByteHandlerClientData=0x100801600) at OnDemandServerMediaSubsession.cpp:427 OnDemandServerMediaSubsession::startStream (this=0x0, clientSessionId=1606412024, streamToken=0x1002055d0, rtcpRRHandler=0, rtcpRRHandlerClientData=0x10001b2d8, rtpSeqNum=@0x7fff5fbff266, rtpTimestamp=@0x7fff5fbff260, serverRequestAlternativeByteHandler=0x100022dac , serverRequestAlternativeByteHandlerClientData=0x100801600) at OnDemandServerMediaSubsession.cpp:210 RTSPServer::RTSPClientSession::handleCmd_PLAY (this=0x100801600, subsession=0x7fff5fbff320, cseq=0x7fff5fbff320 "??_?", fullRequestStr=0x7fff5fbff320 "??_?") at RTSPServer.cpp:1209 Regards, Sambhav -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From finlayson at live555.com Tue Feb 28 21:31:13 2012 From: finlayson at live555.com (Ross Finlayson) Date: Tue, 28 Feb 2012 21:31:13 -0800 Subject: [Live-devel] Live input to RTSP Server In-Reply-To: <9B90760C-ACDD-4E96-98E1-A8D62AD6FA92@saranyu.in> References: <9B90760C-ACDD-4E96-98E1-A8D62AD6FA92@saranyu.in> Message-ID: > I am trying to get live RTP data as input to the RTSP Server. > > Modified H264VideoFileServerMediaSubsession::createNewStreamSource to create an RTPSource instead of ByteStreamFileSource. As explained in the FAQ: http://www.live555.com/liveMedia/faq.html#modifying-and-extending you should not be modifying the existing code 'in place'. Instead, you should be defining and implementing your own class (a subclass of "OnDemandServerMediaSubsession") - using the existing code as a model, where appropriate. > Groupsock inputGroupsock(envir(), inputAddress, inputPort, inputTTL); I suspect that this is the problem: You are declaring "inputGroupsock" on the stack, which means that it will automatically get destroyed when the function exits. You don't want this. Instead, do: Groupsock* inputGroupsock = new Groupsock(envir(), inputAddress, inputPort, inputTTL); and then H264VideoRTPSource *rtpSource = H264VideoRTPSource::createNew(envir(), inputGroupsock,96,90000); Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From sambhav at saranyu.in Tue Feb 28 21:59:10 2012 From: sambhav at saranyu.in (Kumar Sambhav) Date: Wed, 29 Feb 2012 11:29:10 +0530 Subject: [Live-devel] Live input to RTSP Server In-Reply-To: References: <9B90760C-ACDD-4E96-98E1-A8D62AD6FA92@saranyu.in> Message-ID: <3327161A-9577-488B-B839-64D728A25ED6@saranyu.in> Thanks Ross. Creating the Groupsock object on the heap solved the problem. Groupsock* inputGroupsock = new Groupsock(envir(), inputAddress, inputPort, inputTTL); I was quickly trying to get things to work. 
Once it's done I will implement a custom class :) On Feb 29, 2012, at 11:01 AM, Ross Finlayson wrote: > Groupsock* inputGroupsock = new Groupsock(envir(), inputAddress, inputPort, inputTTL); -------------- next part -------------- An HTML attachment was scrubbed... URL: From ricardo at kafemeeting.com Wed Feb 29 03:58:30 2012 From: ricardo at kafemeeting.com (Ricardo Acosta) Date: Wed, 29 Feb 2012 12:58:30 +0100 Subject: [Live-devel] RTCP functions when using BasicUDPSource Message-ID: On Thu, Jan 12, 2012 at 5:08 AM, Ross Finlayson wrote: > We have implemented a sender and a receiver for MPEG2TS using Livemedia. > > Do 'we' not have our own domain name? :-) Now "we" have a domain :-) ! Hi Ross, I would like to know what the best way is to get some of the RTCP info when using UDP on the server side. Server side: we are using BasicUDPSource and StreamReplicator to send replicas towards the client apps. Our client app uses RTP (RTPSource and RTPSink). For example, using the BasicUDPSource, can we detect a missing SR packet on the server side (using RTCPInstance::setSRHandler), to cut the connection if the client finishes streaming? I saw that the SR packet is arriving at the server side. Also, I would like to confirm that we can get all the NetInterfaceTrafficStats from BasicUDPSource; we need them to take some actions toward the clients. Thank you in advance Ricardo -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Wed Feb 29 07:00:45 2012 From: finlayson at live555.com (Ross Finlayson) Date: Wed, 29 Feb 2012 07:00:45 -0800 Subject: [Live-devel] RTCP functions when using BasicUDPSource In-Reply-To: References: Message-ID: <939658E7-96A8-4C8D-B6CD-0ABE8050926C@live555.com> > I would like to know what the best way is to get some of the RTCP info when using UDP on the server side. > > Server side: we are using BasicUDPSource and StreamReplicator to send replicas towards the client apps. 
So, is your server's input data RTP/UDP, or raw-UDP? I.e., is your intention to: 1/ Make a direct copy of incoming RTP/UDP packets into outgoing RTP/UDP packets (i.e., keeping the RTP headers exactly the same), or 2/ Copy data from incoming raw-UDP packets (which are *not* RTP packets) into outgoing RTP/UDP packets? If you're trying to do 1/ (a simple 'UDP relay'), then you should also be copying the RTCP stream (that comes from the same source as the input RTP stream). Note that this RTCP stream will (normally) be using the RTP stream's port number +1; and you should do the same for the output ('relayed') RTCP packets. And, ideally, you should also 'relay' RTCP packets from the receiver back to the original source (i.e., also set up a 'relay' for RTCP packets that come in the reverse direction). But if you're trying to do 2/ (a 'raw-UDP-to-RTP relay'), then your server should be using an appropriate "RTPSink" subclass, *not* a "BasicUDPSink". And then you should also be creating a "RTCPInstance", tied to this "RTPSink". And once again, the output "RTPSink" should use an even-numbered port, and the corresponding "RTCPInstance" should use that port number +1 (i.e., odd-numbered). Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Wed Feb 29 13:13:07 2012 From: finlayson at live555.com (Ross Finlayson) Date: Wed, 29 Feb 2012 13:13:07 -0800 Subject: [Live-devel] Latest LIVE555 version no longer supports the synchronous RTSPClient interface (by default) Message-ID: In the latest version (2012.02.29) of the "LIVE555 Streaming Media" code, the old, synchronous "RTSPClient" interface is no longer supported, by default. If you are still using this old interface, then you should upgrade to the asynchronous interface. (You can use the code for the "testRTSPClient" demo application for guidance.) 
If, however, you really want to continue to use the old synchronous "RTSPClient" interface, you can do so by "#define"ing RTSPCLIENT_SYNCHRONOUS_INTERFACE before "include/RTSPClient.hh" is included for the first time. (At some point in the future the old synchronous interface might get removed entirely.) Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From umar at janteq.com Wed Feb 29 18:31:29 2012 From: umar at janteq.com (Umar Qureshey) Date: Wed, 29 Feb 2012 18:31:29 -0800 Subject: [Live-devel] Selectively disabling subsessions Message-ID: <000601ccf753$6a45fe60$3ed1fb20$@janteq.com> Hi, If I have my live555 rtsp_server running advertising say two streams: MyStream0 MyStream1 Would it be possible for me to disable MyStream1 while MyStream0 is actively being streamed so that the client cannot stream MyStream1 (and vice versa)? Then upon the termination of MyStream1, MyStream0 will be automatically enabled. Best solution I can think of is to remove MyStream1's socket from select to disable it and then to restore it when MyStream0 exits. I'd like to do this without hacking into the original live555 library code i.e. within my subsession's code. Any tips would be great. Thanks, Umar From finlayson at live555.com Wed Feb 29 21:44:21 2012 From: finlayson at live555.com (Ross Finlayson) Date: Wed, 29 Feb 2012 21:44:21 -0800 Subject: [Live-devel] Selectively disabling subsessions In-Reply-To: <000601ccf753$6a45fe60$3ed1fb20$@janteq.com> References: <000601ccf753$6a45fe60$3ed1fb20$@janteq.com> Message-ID: > If I have my live555 rtsp_server running advertising say two streams: > > MyStream0 > MyStream1 > > Would it be possible for me to disable MyStream1 while MyStream0 is actively > being streamed so that the client cannot stream MyStream1 (and vice versa)? > Then upon the termination of MyStream1, MyStream0 will be automatically > enabled. 
I'm not sure I totally understand your question. You refer to your server advertising two "streams", but the "Subject:" line of your email refers to "subsessions". Note the difference between "ServerMediaSessions" and "ServerMediaSubsessions": - A "ServerMediaSession" refers to a named multimedia stream, that contains one or more "ServerMediaSubsession"s - i.e., 'tracks' - one for each medium (audio, video, text) that forms the stream. - A "ServerMediaSubsession" refers to a particular 'track' (audio, video, or text) that makes up a stream. I'm assuming here - despite your "Subject:" line - that you want to to 'selectively disable "ServerMediaSessions"'. You can do this quite easily, using the "RTSPServer::removeServerMediaSession()" function. If you do this, then any clients that are currently streaming the session will continue, but no new clients will be able to play the stream.