From amit.yedidia at elbitsystems.com Mon Feb 2 06:16:42 2009
From: amit.yedidia at elbitsystems.com (Yedidia Amit)
Date: Mon, 2 Feb 2009 16:16:42 +0200
Subject: [Live-devel] Serial port
Message-ID:

Hi all, I know this is not exactly the right place to ask this, so feel free to ignore me. I want to use the task scheduler provided in live555 for (among other things) waiting for data on the serial port. On Linux it is very easy (it's always easy in Linux...) since the COM port is a file descriptor. What should I do in Windows? Should I implement a thread that blocks on the port and then sends the data to a socket that is inserted into the task scheduler?

Regards,
Amit Yedidia
Elbit System Ltd.
Email: amit.yedidia at elbitsystems.com
Tel: 972-4-8318905
----------------------------------------------------------
The information in this e-mail transmission contains proprietary and business sensitive information. Unauthorized interception of this e-mail may constitute a violation of law. If you are not the intended recipient, you are hereby notified that any review, dissemination, distribution or duplication of this communication is strictly prohibited. You are also asked to contact the sender by reply email and immediately destroy all copies of the original message.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From amit.yedidia at elbitsystems.com Mon Feb 2 06:19:40 2009
From: amit.yedidia at elbitsystems.com (Yedidia Amit)
Date: Mon, 2 Feb 2009 16:19:40 +0200
Subject: [Live-devel] Serial port on the TaskSchedualer
Message-ID:

> Hi all,
>
> I know this is not exactly the right place to ask this, so feel free to
> ignore me.
>
> I want to use the task scheduler provided in live555 for (among
> other things) waiting for data on the serial port.
> On Linux it is very easy (it's always easy in Linux...) since the COM
> port is a file descriptor.
>
> What should I do in Windows? Should I implement a thread that blocks
> on the port and then sends the data to a socket that is inserted into
> the task scheduler?
>
> Regards,
>
> Amit Yedidia
> Elbit System Ltd.

From patbob at imoveinc.com Mon Feb 2 09:37:13 2009
From: patbob at imoveinc.com (Patrick White)
Date: Mon, 2 Feb 2009 09:37:13 -0800
Subject: [Live-devel] Serial port
In-Reply-To:
References:
Message-ID: <200902020937.13892.patbob@imoveinc.com>

Why not just use a scheduled task? It can either poll to check for characters on the serial port, or get them from a queue.

later,
patbob

On Monday 02 February 2009 6:16 am, Yedidia Amit wrote:
> I want to use the task scheduler provided in live555 for (among
> other things) waiting for data on the serial port.
> On Linux it is very easy (it's always easy in Linux...) since the COM
> port is a file descriptor.
>
> What should I do in Windows? Should I implement a thread that blocks
> on the port and then sends the data to a socket that is inserted into
> the task scheduler?

From debargha.mukherjee at hp.com Tue Feb 3 09:43:19 2009
From: debargha.mukherjee at hp.com (Mukherjee, Debargha)
Date: Tue, 3 Feb 2009 17:43:19 +0000
Subject: [Live-devel] OpenRTSP w/ Axis 207 series cameras: AAC issue
Message-ID: <73833378E80044458EC175FF8C1E63D56DF7FE07A0@GVW0433EXB.americas.hpqcorp.net>

Hi, I am experimenting with receiving streamed data from Axis 207W cameras using (a modified version of) the openRTSP utility, and am confronted with the following issue. When I use the -q option to receive a QuickTime file, I can later use the ffmpeg avcodec libraries to read packets and decode the audio and video streams. However, when I just save the raw streamed data into separate video and audio elementary streams, I can no longer decode the AAC audio stream. My guess is that it is an issue related to the transport format parsing, but I may be wrong. Any help would be appreciated.

Thanks, DM.
From finlayson at live555.com Tue Feb 3 18:06:38 2009
From: finlayson at live555.com (Ross Finlayson)
Date: Tue, 3 Feb 2009 18:06:38 -0800
Subject: [Live-devel] OpenRTSP w/ Axis 207 series cameras: AAC issue
In-Reply-To: <73833378E80044458EC175FF8C1E63D56DF7FE07A0@GVW0433EXB.americas.hpqcorp.net>
Message-ID:

>I am experimenting with receiving streamed data from Axis 207W
>cameras using (a modified version of) the openRTSP utility, and am
>confronted with the following issue. When I use the -q option to
>receive a QuickTime file, I can later use the ffmpeg avcodec
>libraries to read packets and decode the audio and video streams.
>However, when I just save the raw streamed data into separate video
>and audio elementary streams, I can no longer decode the AAC audio
>stream.

The output file contains raw AAC audio data - the exact same data that was sent within the RTP packets. There is nothing wrong with this data, and audio decoders can be used to decode it as it arrives (note, for example, VLC, which does this). However, a media player might not be able to play the data when it is read from a file. If this is the case, then this is a problem with the media player, not our software.

-- Ross Finlayson, Live Networks, Inc. http://www.live555.com/

From finlayson at live555.com Tue Feb 3 18:31:30 2009
From: finlayson at live555.com (Ross Finlayson)
Date: Tue, 3 Feb 2009 18:31:30 -0800
Subject: [Live-devel] Serial port
In-Reply-To:
Message-ID:

>I know this is not exactly the right place to ask this

Actually, this mailing list was exactly the right place to ask this (but only once :-)

>I want to use the task scheduler provided in live555 for
>(among other things) waiting for data on the serial port.
>
>On Linux it is very easy (it's always easy in Linux...) since the
>COM port is a file descriptor.
>
>What should I do in Windows? Should I implement a thread that
>blocks and then sends the data to a socket that is inserted
>into the task scheduler?

That sounds like a good idea, yes.

-- Ross Finlayson, Live Networks, Inc. http://www.live555.com/

From marcosolari at gmail.com Wed Feb 4 02:58:48 2009
From: marcosolari at gmail.com (Marco Solari)
Date: Wed, 4 Feb 2009 11:58:48 +0100
Subject: [Live-devel] Live streaming?
Message-ID:

Is it possible to publish a live stream with LIVE555? Thanks in advance!

Marco

From finlayson at live555.com Wed Feb 4 06:39:01 2009
From: finlayson at live555.com (Ross Finlayson)
Date: Wed, 4 Feb 2009 06:39:01 -0800
Subject: [Live-devel] Live streaming?
In-Reply-To:
Message-ID:

>Is it possible to publish a live stream with LIVE555?

Please read the FAQ!

-- Ross Finlayson, Live Networks, Inc. http://www.live555.com/

From neville.bradbury at opensoftaustralia.com.au Wed Feb 4 20:17:19 2009
From: neville.bradbury at opensoftaustralia.com.au (neville bradbury)
Date: Thu, 5 Feb 2009 15:17:19 +1100
Subject: [Live-devel] Live555 performance configuration
Message-ID: <200902050417.n154HKke008117@mail02.syd.optusnet.com.au>

Hi, I am putting together a specification for the best way to ensure Live555 on Windows Advanced Server 2003 runs as well as it can. Some of the areas I am looking at for guidance are:
1. best speed of disks (7200 rpm);
2. how many disks, and what type of RAID configuration (RAID 1);
3. regarding TCP/IP, I have heard of, and would like to understand, whether there are decoder issues with buffer delay of TCP/IP packets, as TCP/IP over RTSP can have dropped packets;
4. decoder configuration.
Basically, I am looking for a benchmark configuration I can set up on the machine to ensure live555 can be as robust and fast as possible, though I do understand there could be other areas I have not been able to address (if they are missed, it would be great to have them as well). I am looking to service around 200 Amino set-top-box requests, with possibly 100 VOD movies downloaded at any one time. I am also looking at purchasing any web jmac frameworks that we can extend our development with, outside of what Amino has. I am just looking for some information that will help set a base level for setting up a server, disks, etc.

Many thanks,
Neville Bradbury
OpenSoft Australia

From amadorim at vdavda.com Wed Feb 4 23:48:30 2009
From: amadorim at vdavda.com (Marco Amadori)
Date: Thu, 5 Feb 2009 08:48:30 +0100
Subject: [Live-devel] Live555 performance configuration
In-Reply-To: <200902050417.n154HKke008117@mail02.syd.optusnet.com.au>
Message-ID: <200902050848.30427.amadorim@vdavda.com>

On Thursday 05 February 2009, 05:17:19, neville bradbury wrote:
> 1. best speed of disks (7200 rpm)

Better 15k SAS/SCSI; to serve a lot of streams, the seek time is a really important feature. Load it up with RAM too, which is cheap nowadays and serves very well as a "seek buffer" (on Linux systems).

> 2. how many disks, and what type of RAID configuration (RAID 1)

I would use RAID 5 for a VOD server if you have a lot of content; it is cheaper in $/GB.

> I am looking to service around 200 Amino set-top-box requests, with
> possibly 100 VOD movies downloaded at any one time.

100 movies at which rate?
I use live555 on Debian/Linux, and I assure you it can serve that load, with the above-mentioned configuration, with 4.5 Mbps MPEG-2 streams without problems. I cannot spell "windows" and "server" in the same sentence, I'm sorry :-)

-- ESC:wq

--
This message has been scanned for viruses and dangerous content by MailScanner, and is believed to be clean.

From matt at schuckmannacres.com Thu Feb 5 16:06:49 2009
From: matt at schuckmannacres.com (Matt Schuckmann)
Date: Thu, 05 Feb 2009 16:06:49 -0800
Subject: [Live-devel] Patch against release dated 1/26/2009
Message-ID: <498B7F19.1040807@schuckmannacres.com>

Attached is a patch file for changes we've made to LIVE555 to support functionality we needed for our application. The patch is a diff of the 1/26/2009 release and our changes. I hope this patch isn't too big, and that all of the changes make sense and work within your vision for the library. If anything looks wrong or doesn't follow the direction you had intended for the library, please let me know. We intend to make more changes, and would like to submit any improvements we make back to the project, so let me know if there are any problems. Thanks for the great library and all the support.

Matt Schuckmann

A synopsis of the changes is listed below.
1. Modified BasicUsageEnvironment0::reportBackgroundError() to get the errorResultMsgBuffer contents via a call to getResultMsg() instead of accessing the private member directly, so that it will work with a derived class that doesn't use fResultMsgBuffer.
2. Modified RTSPServer to use a virtual factory function, createNewClientSession(), to create RTSPClientSession objects (or derived children of that class), so that users who want to use a class derived from RTSPClientSession don't have to re-implement all of the incomingConnectionHandler() code.
3. Added support for recognizing SET_PARAMETER commands and passing the command body into the RTSPClientSession.
4. Added support for specifying a range of ports that can be used for RTP/RTCP streams, instead of just a starting port. Also added a failure condition for when you run out of ports. The range is specified by a starting port and a count of following ports. The count can be set to -1 to allow an open-ended range.
5. Added an iterator class to iterate over ServerMediaSession objects.
6. Added accessors to OnDemandServerMediaSubsession to get the RTP port and RTCP port for a client session ID associated with a void* streamToken. This is primarily so that the server application can list the ports in use with each session. We will probably be adding more of these types of accessors; please tell me if we aren't following your vision for this type of access.
7. Changed the MS Visual Studio Makefile.tail make files to name the .pdb files uniquely for each library, so that all the libraries and their .pdb files can be copied to a common directory.

-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: LIVE555_mschuck_20090205.patch
URL:

From prbhagwat at gmail.com Fri Feb 6 01:07:11 2009
From: prbhagwat at gmail.com (Pramod Bhagwat)
Date: Fri, 6 Feb 2009 14:37:11 +0530
Subject: [Live-devel] Regarding JPEG streaming
Message-ID:

Hi Ross, This is regarding streaming JPEG over RTP. I read the FAQ, and in the mail mentioned below http://lists.live555.com/pipermail/live-devel/2005-January/001908.html you mentioned the following: Note that "JPEGVideoSource" is an abstract base class. You must define and implement your own subclass of this class that delivers (in the "doGetNextFrame()" virtual function) complete JPEG video frames (but without the usual JPEG header). Here I am not clear about the meaning of *JPEG frame without the usual JPEG header*. Does it mean that a JPEG frame starting from marker SOF0, i.e. 0xFF 0xC0, is considered for streaming over RTP? Or is my understanding not correct? Please let me know.
Warm Regards,
pramod

From finlayson at live555.com Fri Feb 6 01:17:19 2009
From: finlayson at live555.com (Ross Finlayson)
Date: Fri, 6 Feb 2009 01:17:19 -0800
Subject: [Live-devel] Regarding JPEG streaming
In-Reply-To:
Message-ID:

>Here I am not clear about the meaning of *JPEG frame without the
>usual JPEG header*.

See RFC 2435, which defines how JPEG frame data is carried in RTP packets. This is what we implement.

-- Ross Finlayson, Live Networks, Inc. http://www.live555.com/

From gabriele.deluca at hotmail.com Fri Feb 6 01:07:49 2009
From: gabriele.deluca at hotmail.com (Gabriele De Luca)
Date: Fri, 6 Feb 2009 10:07:49 +0100
Subject: [Live-devel] Patch against release dated 1/26/2009
Message-ID:

In answer to Matt Schuckmann: I think that the RTCP client port can be set from the SDP description (MediaSession). See RFC 3650. For the server RTCP port, I think that it can be implemented in the object OnDemandServerSubsession. What do you think, Ross?

From skramer at inbeeld.eu Fri Feb 6 02:04:33 2009
From: skramer at inbeeld.eu (Steven Kramer)
Date: Fri, 6 Feb 2009 11:04:33 +0100
Subject: [Live-devel] Locale bug in RTSP range: header
Message-ID:

Hi, my system is apparently not using a US locale. This affected the generated range string (dots were replaced by commas). I fixed this by explicitly scoping the locale with a local variable. I.e., in createRangeString I replaced Locale("C", LC_NUMERIC); with Locale locale("C", LC_NUMERIC); That might generate 'unused' warnings in other compilers, though. I'm using gcc version 4.0.1 (Apple Inc. build 5488).

Thanks,
Steven Kramer

From finlayson at live555.com Fri Feb 6 02:14:19 2009
From: finlayson at live555.com (Ross Finlayson)
Date: Fri, 6 Feb 2009 02:14:19 -0800
Subject: [Live-devel] Locale bug in RTSP range: header
In-Reply-To:
Message-ID:

>my system is apparently not using a US locale. This affected the
>generated range string (dots were replaced by commas). I fixed this by
>explicitly scoping the locale with a local variable.

In that case, the 'bug' is probably in your compiler, not our code. Is anyone else encountering this problem?

-- Ross Finlayson, Live Networks, Inc. http://www.live555.com/

From skramer at inbeeld.eu Fri Feb 6 02:35:01 2009
From: skramer at inbeeld.eu (Steven Kramer)
Date: Fri, 6 Feb 2009 11:35:01 +0100
Subject: [Live-devel] Locale bug in RTSP range: header
In-Reply-To:
Message-ID: <1642B1E4-EC32-486F-9986-003E0CAEB52E@inbeeld.eu>

On 6 Feb 2009, at 11:14, Ross Finlayson wrote:
>> my system is apparently not using a US locale. This affected the
>> generated range string (dots were replaced by commas). I fixed this by
>> explicitly scoping the locale with a local variable.
>
> In that case, the 'bug' is probably in your compiler, not our code.

Could be; I'm not sure what the lifetime is for such a construct - an 'anonymous' object, perhaps? I think the compiler is at liberty to call the destructor right away, though, just as if it were an argument to a function after the function completed... But I'll gladly defer to ANSI experts.

From matt at schuckmannacres.com Fri Feb 6 10:21:42 2009
From: matt at schuckmannacres.com (Matt Schuckmann)
Date: Fri, 06 Feb 2009 10:21:42 -0800
Subject: [Live-devel] Patch against release dated 1/26/2009
In-Reply-To:
Message-ID: <498C7FB6.3020106@schuckmannacres.com>

Yes, the RTP and RTCP ports could be obtained by parsing the SDP; however, that's rather ugly and inefficient, don't you think?
The new code is implemented in the OnDemandServerSubsession class. We couldn't just subclass OnDemandServerSubsession, as the information is kept in the StreamState class, which is only defined in OnDemandServerSubSession.cpp; instances of it are kept and referenced via opaque void pointers elsewhere in the code. So we had to implement new accessors on OnDemandServerSubsession to get at the data in the StreamState object associated with the session ID. The other option would have been to move the definition of StreamState to a header file and expose that structure to the rest of the code. We felt that the path we chose was less invasive and more in line with how the code works now.

Matt S.

Gabriele De Luca wrote:
> In answer to Matt Schuckmann:
> I think that the RTCP client port can be set from the SDP description (MediaSession). See RFC 3650.
> For the server RTCP port, I think that it can be implemented in the object OnDemandServerSubsession.
> What do you think, Ross?
> _______________________________________________
> live-devel mailing list
> live-devel at lists.live555.com
> http://lists.live555.com/mailman/listinfo/live-devel

From finlayson at live555.com Fri Feb 6 10:46:10 2009
From: finlayson at live555.com (Ross Finlayson)
Date: Fri, 6 Feb 2009 10:46:10 -0800
Subject: [Live-devel] Locale bug in RTSP range: header
In-Reply-To:
Message-ID:

I tested this with another compiler, and got the same result as Steven - the "Locale" destructor was getting called immediately after the constructor, rather than at the end of the block. I still think that this is a compiler bug, but because it appears to be common, I've decided to change all uses of "Locale" to use explicit variable names, as Steven did. This change will appear in the next release of the code.
-- Ross Finlayson Live Networks, Inc. http://www.live555.com/ From live-devel at lists.lammerts.org Sat Feb 7 12:11:52 2009 From: live-devel at lists.lammerts.org (Eric Lammerts) Date: Sat, 07 Feb 2009 15:11:52 -0500 Subject: [Live-devel] OpenRTSP w/ Axis 207 series cameras: AAC issue In-Reply-To: References: <73833378E80044458EC175FF8C1E63D56DF7FE07A0@GVW0433EXB.americas.hpqcorp.ne t> Message-ID: <498DEB08.8070502@lists.lammerts.org> >> I am experimenting with receiving streamed data from Axis 207W >> cameras using (a modified version of) the openRTSP utility, and is >> confronted with the following issue. When I use the -q option to >> receive a quicktime file, I can later use the ffmpeg avcodec >> libraries to read packets and decode the audio and video streams. >> However, when I just save the raw streamed data into separate video >> and audio elementary streams, I cannot decode the AAC audio stream >> anymore. > > The output file contains raw AAC audio data - the exact same data > that was sent within the RTP packets. There is nothing wrong with > this data, and audio decoders can be used to decode it as it arrives > (note, for example, VLC, which does this). I had the same problem with a Vivotek camera. The SDP contains some additional header info that you need for decoding. This info will be saved into the quicktime file but not into the raw file. If you know the sample frequency of the audio you could try "faad -s ". Sometimes that's sufficient info to make it work. Otherwise, get the header info with ->fmtp_config() from your audio MediaSubsession. This gives you a hex string. When you decode with libfaad, convert the hex string to binary and pass it as the AudioSpecificConfig to NeAACDecInit2(). That should make it work. 
Eric

From morgan.torvolt at gmail.com Sat Feb 7 14:00:07 2009
From: morgan.torvolt at gmail.com (Morgan Tørvolt)
Date: Sat, 7 Feb 2009 23:00:07 +0100
Subject: [Live-devel] Live555 performance configuration
In-Reply-To: <200902050848.30427.amadorim@vdavda.com>
Message-ID: <3cc3561f0902071400n2908d760m25072f4028b5c48c@mail.gmail.com>

>> 1. best speed of disks (7200 rpm)
>
> Better 15k SAS/SCSI; to serve a lot of streams, the seek time is a really
> important feature.

Not necessarily.

> Load it up with RAM too, which is cheap nowadays and serves very well
> as a "seek buffer" (on Linux systems).

This is of course true. As much as the motherboard can handle.

>> 2. how many disks, and what type of RAID configuration (RAID 1)
>
> I would use RAID 5 for a VOD server if you have a lot of content; it is
> cheaper in $/GB.

This is not a very good idea, though. What you achieve by doing this is that you make sure that all disks move their heads for every read of the disk. Given 100 different open files and an average disk access time of 10ms, each process will be able to read exactly once per second if in a RAID 5/6. Even though you get a very high read speed, it is only for a very short period, and not very often. If you only read a couple of TS packets each time, you are in a shitload of trouble right there. More explanation further down.

>> I am looking to service around 200 Amino set-top-box requests, with
>> possibly 100 VOD movies downloaded at any one time.
>
> 100 movies at which rate?
>
> I use live555 on Debian/Linux, and I assure you it can serve that load
> with the above-mentioned configuration with 4.5 Mbps MPEG-2 streams
> without problems.

100 movies at 4.5 Mbit is more than 50MB/s. With real random access, there is no hard drive in the world that will be able to give you that throughput, no matter how big a RAID 5 system you have.
If you have lots of cache hits, then maybe, but on many large files that is very unlikely. RAID does not help you here at all. The problem is simple. Random access will cause the disks to spend most of their time moving the read head around. Have a look at storagereview.com and their random access tests: 100MB/s sustained on sequential data, but <2MB/s on random access. Say your RAID has a 1GB/s transfer rate; it will still not be able to overcome the issue of the >8 ms it takes to move the disk read heads around.

> I cannot spell "windows" and "server" in the same sentence, I'm sorry :-)

Your main problem with this is loss of control, actually, but putting that aside for now. What you need to do to be able to "survive" a huge number of simultaneous clients is to do the RAID thing manually. Do not set up a RAID, because that causes the disks to read and move the disk heads at the same time. If you have two disks with the same content on them, but not in RAID, then each disk can be used better. Making the VOD server read ahead a lot (like 1 MB) would make a 4Mbit/s stream read every 2 seconds, but for longer periods. The disk would actually spend only 20ms every 2 seconds on this (given 10ms seek time and 100MB/s sustained data rate). If reads don't collide, this would be able to give you 100 streams off one single disk. This gives a higher read duty cycle, giving you a lot higher throughput, as you see. Also, by doing this on two separate disks instead of one RAID 1 (which would give you 15ms read time per 2 seconds per process, and a maximum of 133 streams in total), you get a >50% higher throughput using two separate disks. In essence, forget RAID; RAID is for high-throughput sequential reads. Make a manual mirror, and utilize the disks well by reading ahead a lot. Avoid a lot of threads reading a little data each time from the same disk.

If you are really serious about getting high throughput, abstract the disk access away into a separate service whose only job is to make sure that all reads and writes to the disk are serialized, and make all data go through this. Now the reads will never collide, and the theoretical 100 clients per hard drive is not so theoretical any more.

The calculations are easy. A disk with 100MB/s read speed and 10ms access time will give you 1MB per 10ms of read time. If you read 1MB at a time at random spots on the disk, you will use 10ms per read, and 10ms on average for moving to the next spot on the disk, effectively giving you a maximum of 50MB/s, because half of your read time goes to moving the disk read heads. If you only read 10KB per read, the read will take only 0.1ms, but the head will still use 10ms to move around. Theory will then give you 0.1ms of read time every 10.1ms, giving you somewhere around 1MB/s throughput. Cache and block sizes might improve this somewhat, but you get the idea. Read large chunks.

Now for a _very_ important lesson if you need disk access speed. 7200rpm disks can have faster access times than 10k disks. When moving the head, the disk also needs to rotate to the correct position for a read to start - on average 0.5 rotations, which is 4.2ms on a 7200rpm disk and 3ms on a 10k disk. A regular 7200rpm disk has an average access time of 13ms or so, while a 10k disk has 8ms. This makes the 7200rpm disk use 9ms moving the head, and the 10k disk 5ms. Now, given an expensive 147GB 10k disk and a 1TB 7200rpm disk: if you make a 147GB partition on the 1TB disk, and only use that part of the disk, the head will only need to move 10% of what it would using the entire disk (there is more data per rotation on the outer parts of the disk platters). This could reduce the 9ms to about 1ms. 4.2ms + 1ms = 5.2ms, which is _faster_ than the 10k disk, and really more in the 15k realm, and you have 800GB to spare. Test it if you don't believe it.
This can also give you 4 times as many disks for the same price, which will make absolutely sure that the total throughput from these 4 cheap disks surpasses the 15k disks by a mile. So, go for huge SATA disks before expensive 10-15k disks; have lots of them (1/4 of the price hints at 4 times as many :-p, which alone will heavily increase the throughput); use as small a main partition as you can; and make sure you do not use RAID, but spread movies over several pairs of manually mirrored disks to increase the read duty cycle. Also make sure to read a lot at a time, and do it less frequently. I might of course be totally wrong about all this, but the tests I have done seem to agree. Hopefully I got it all right in my explanation here. Obviously, doing this will demand more from the developers, but it might very well be worth it.

Best regards,
-Morgan-

From finlayson at live555.com Sat Feb 7 22:54:55 2009
From: finlayson at live555.com (Ross Finlayson)
Date: Sat, 7 Feb 2009 22:54:55 -0800
Subject: [Live-devel] OpenRTSP w/ Axis 207 series cameras: AAC issue
In-Reply-To: <498DEB08.8070502@lists.lammerts.org>
Message-ID:

>Otherwise, get the header info with ->fmtp_config() from your audio
>MediaSubsession. This gives you a hex string. When you decode with
>libfaad, convert the hex string to binary and pass it as the
>AudioSpecificConfig to NeAACDecInit2(). That should make it work.

FYI, we provide a function "parseGeneralConfigStr()" that will convert the ASCII configuration ('AudioSpecificConfig') string into binary form.
(See "liveMedia/include/MPEG4LATMAudioRTPSource.hh".) We also provide a function "samplingFrequencyFromAudioSpecificConfig()" that can be used to extract just the sampling frequency from the ASCII configuration string. (See "liveMedia/include/MPEG4GenericRTPSource.hh".) We use this function in "QuickTimeFileSink" when writing ".mov" or ".mp4" format files.

-- Ross Finlayson, Live Networks, Inc. http://www.live555.com/

From venugopalpaikr at tataelxsi.co.in Sun Feb 8 20:49:07 2009
From: venugopalpaikr at tataelxsi.co.in (venugopalpaikr)
Date: Mon, 9 Feb 2009 10:19:07 +0530
Subject: [Live-devel] Bad file descriptor
Message-ID: <000c01c98a71$be6cd1f0$3c033c0a@telxsi.com>

Hi, I have ported live555 to the DM355, and am using only the RTSP part of live555 to establish the connection with VLC. GStreamer is used for streaming the RTP data. The setup works fine for single unicast sessions and multiple unicast. But if I issue a simultaneous request from VLC with RTP-over-RTSP enabled in VLC, I get the following error: BasicTaskScheduler: select() fails: Bad file descriptor. This occurs after DESCRIBE's response. Can anyone tell me the reason for the occurrence of this error message? Is it because the previous sessions were not closed properly?

Regards,
Venugopal

The information contained in this electronic message and any attachments to this message are intended for the exclusive use of the addressee(s) and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you should not disseminate, distribute or copy this e-mail. Please notify the sender immediately and destroy all copies of this message and any attachments contained in it.
From fuzzylai at dynagrid.net Mon Feb 9 02:37:10 2009
From: fuzzylai at dynagrid.net (Fuzzy Lai)
Date: Mon, 9 Feb 2009 18:37:10 +0800
Subject: [Live-devel] Modern Streaming Application Design
Message-ID:

Dear Sir: Live555 has done a great job on streaming technology, but today's streaming applications have also become more challenging than before. I hope you don't mind my asking for your opinions on these design issues, according to the philosophy of live555 streaming media.
1. Multiple streaming formats that can be added or changed dynamically while the current streaming clients are kept: stop the main loop and then add a new ServerMediaSession, or insert a scheduler task to do such management jobs, or is there a better approach?
2. Multiple event-triggered recordings/snapshots that save media locally or remotely while streaming is operating simultaneously on the same stream: one streaming source with multiple dynamically created streaming sinks, which may be performed by multiple threads. Does the single-threaded design fit such an application (real-time streaming and recording simultaneously)? How about creating a special filter that puts frames into a shared queue that can be accessed by the streaming sink and the external recording threads?

Glad to hear your responses.

BR,
Fuzzy Lai

From patbob at imoveinc.com Mon Feb 9 08:31:35 2009
From: patbob at imoveinc.com (Patrick White)
Date: Mon, 9 Feb 2009 08:31:35 -0800
Subject: [Live-devel] Live555 performance configuration
In-Reply-To: <3cc3561f0902071400n2908d760m25072f4028b5c48c@mail.gmail.com>
Message-ID: <200902090831.35076.patbob@imoveinc.com>

On Saturday 07 February 2009 2:00 pm, Morgan Tørvolt wrote:
> The calculations are easy.
> A disk with a 100MB/s read speed and 10ms access time will give you 1MB per 10ms of read time. If you read 1MB at a time from random spots on the disk, you will use 10ms per read, and 10ms on average for moving to the next spot on the disk, effectively giving you a maximum of 50MB/s, because half of your read time goes to moving the disk read heads.

This is only valid if A) the filesystem subsystem supports 1MB atomic reads, and B) the file isn't fragmented. Windows, for example, reads in 64KB chunks. It's still faster to ask the filesystem to read 1MB with a single syscall than to ask it to read sixteen 64KB chunks, but if you try to do simultaneous reads on that filesystem you'll probably interleave the reading of 64KB chunks and kill your throughput with the seeks.

> If you only read 10KB per read, the read will take only 0.1ms, but the head will still use 10ms to move around. Theory will then give you 0.1ms of read time every 10.1ms, giving you somewhere around 1MB/s throughput. Cache and block sizes might improve this somewhat, but you get the idea. Read large chunks.

This is true, but not for those reasons. Most filesystems, and the disk hardware too these days, read ahead. Unfortunately, they only read ahead sequentially on the same track, so it only helps if the file is laid out on the disk sequentially.

The suggestion to use multiple spindles and manage disk load in your code is probably the best solution. An ordinary DMA100 PATA drive (128GB drive, 3.5", Linux, sans filesystem) can do 40 MB/sec sustained sequential read or write, and only 3-4 MB/sec sustained simultaneous sequential read and sequential write. Newer generations of disks and SATA should be able to handle more, but you'll still want to ensure the file is on the media sequentially and not write anything to the spindle that you're reading from. With more spindles on the job, you'll have to worry less about those sorts of issues, and be at less risk if one fails.
With a hand-written mirroring program designed not to saturate the disk interfaces, you might even be able to mirror from an existing drive to a new one while up and serving -- if downtime is an issue, this could be a significant time savings, as you're only completely down long enough to swap the physical media.

From matt at schuckmannacres.com Mon Feb 9 14:40:15 2009 From: matt at schuckmannacres.com (Matt Schuckmann) Date: Mon, 09 Feb 2009 14:40:15 -0800 Subject: [Live-devel] Making RTSP commands work when streaming RTP over TCP Message-ID: <4990B0CF.8020502@schuckmannacres.com>

I need to make RTSP commands work when streaming RTP over TCP because I need to be able to use the SET_PARAMETER and GET_PARAMETER commands to control my live camera during the session.

The way I thought I understood the problem with the current implementation is that, once the PLAY command is issued, the TCP connection is hijacked to stream the RTP data and the server and the client no longer have a communication channel; does that sound right?

My initial thought on how to fix this was to change the RTSPServer class to support non-persistent connections for the RTSP commands and accept multiple connections for each RTSP session. I'd then modify the client to open another TCP connection to the server once the initial connection is hijacked for streaming.

So far, modifying the server to accept multiple connections per session (persistent or non-persistent) was pretty easy (it took me about a day), and I'm moving on to the client side, which seems a little more tricky. Looking at the RTSPClient code, it looks like the simplest thing to do might be to add some sort of "connected" boolean that the send() method could use to determine whether a new TCP connection needs to be established. Then PlayMediaSession() and PlayMediaSubSession() could clear this flag if we are streaming over TCP, so that the next command that called send() would open a new connection.
The thing I'm not completely sure about is how to make sure the original connection is properly cleaned up when the session is torn down; I don't fully understand whether the RTPSource and RTCPSource objects will clean it up, or whether the RTSPClient needs to keep track of it and clean it up.

Does this sound like a reasonable approach to fixing RTSP support when streaming RTP over TCP? Or had you intended something else, or does the standard for RTSP and RTP over TCP specify something else (and where is that standard)?

I am actively working on this and will submit anything I do back to the Live555 project, so any guidance would be very helpful. Thanks Matt S.

From finlayson at live555.com Mon Feb 9 22:55:29 2009 From: finlayson at live555.com (Ross Finlayson) Date: Mon, 9 Feb 2009 22:55:29 -0800 Subject: [Live-devel] Modern Streaming Application Design In-Reply-To: References: Message-ID:

>1. multiple streaming formats that can be added or changed >dynamically while current streaming clients are kept >insert a scheduler task to do such management jobs

Yes, this would work. The code was designed to allow adding, or removing, "ServerMediaSession" objects to/from a "RTSPServer", even while it is servicing clients.

>2. multiple event-triggered recording/snapshotting that saves >media locally or remotely while streaming is operating >simultaneously on the same stream >does the single-threaded design fit such an application (real-time >streaming and recording simultaneously)?

Yes. -- Ross Finlayson Live Networks, Inc. http://www.live555.com/

From matt at schuckmannacres.com Tue Feb 10 10:00:43 2009 From: matt at schuckmannacres.com (Matt Schuckmann) Date: Tue, 10 Feb 2009 10:00:43 -0800 Subject: [Live-devel] RTSPClient::openConnectionFromURL and fBaseURL Message-ID: <4991C0CB.4010901@schuckmannacres.com>

In looking at openConnectionFromURL() I think I see a potential problem with fBaseURL.
Basically, the first thing this method does is free fBaseURL and then strDup() the new URL. Next, the method parses the URL and then checks to see whether an input socket is open and, if not, opens one using the URL. If parsing fails, or an input socket is already open, the function aborts and doesn't clean up fBaseURL.

The problem I see is that if a socket were opened to a URL and never closed before calling openConnectionFromURL() with another URL, fBaseURL could be set to a URL that doesn't match the one the socket is connected to. This seems like a potential source of bugs, or at the very least much confusion. Matt S.

From matt at schuckmannacres.com Tue Feb 10 10:08:02 2009 From: matt at schuckmannacres.com (Matt Schuckmann) Date: Tue, 10 Feb 2009 10:08:02 -0800 Subject: [Live-devel] Making RTSP commands work when streaming RTP over TCP In-Reply-To: <4990B0CF.8020502@schuckmannacres.com> References: <4990B0CF.8020502@schuckmannacres.com> Message-ID: <4991C282.5070102@schuckmannacres.com>

Matt Schuckmann wrote: > I need to make RTSP commands work when streaming RTP over TCP because > I need to be able to use the SET_PARAMETER and GET_PARAMETER commands > to control my live camera during the session. > > The way I thought I understood the problem with the current > implementation is that once the PLAY command is issued the TCP > connection is hijacked to stream the RTP data and the server and the > client no longer have a communication channel, does that sound right? > > > My initial thought on how to fix this was to change the RTSPServer > class to support non-persistent connections for the RTSP commands and > accept multiple connections for each RTSP session. > I'd then modify the client to open another TCP connection to the > server once the initial connection is hijacked for streaming.
> So far modifying the server to accept multiple connections per session > (persistent or non-persistent) was pretty easy (took me about a day) > and I'm moving on to the client side which seems a little more tricky. > Looking at the RTSPClient code it looks like the simplest thing to do > might be to add some sort of connected boolean that the send() method > could use to determine whether a new TCP connection needs to be established. > Then the PlayMediaSession() and PlayMediaSubSession() could clear this > flag if we are streaming over TCP so that the next command that called > send() would open a new connection. The thing I'm not completely sure > about is how to make sure the original connection is properly cleaned > up when the session is torn down, I don't fully understand if the > RTPSource and RTCPSource objects will clean it up or if the RTSPClient > needs to keep track and clean it up. > > Does this sound like a reasonable approach to fixing RTSP support when > streaming RTP over TCP? Or had you intended something else, or does the > standard for RTSP and RTP over TCP specify something else (and where is > that standard)? > > I am actively working on this and will submit anything I do back to > the Live555 project so any guidance would be very helpful. > > Thanks > Matt S. > > _______________________________________________ > live-devel mailing list > live-devel at lists.live555.com > http://lists.live555.com/mailman/listinfo/live-devel >

Ok, the more I look at this (and the longer I go without a response), the more I wonder if I'm going about this the wrong way. I'm starting to think that the problem is even more insidious than RTSP replies simply not getting recognized by the client. I think the current implementation prevents multiple streams in the same session from being received using RTP over TCP; I think I experienced this the first time I tried it. Does this sound correct?
Is the problem that the RTPSource that is created for the first stream now owns the TCP socket, and it has no way to forward data that's not for it on to the other interested parties (other RTPSources and the RTSPClient)? Thanks, Matt S.

From finlayson at live555.com Tue Feb 10 17:36:13 2009 From: finlayson at live555.com (Ross Finlayson) Date: Tue, 10 Feb 2009 17:36:13 -0800 Subject: [Live-devel] Making RTSP commands work when streaming RTP over TCP In-Reply-To: <4991C282.5070102@schuckmannacres.com> References: <4990B0CF.8020502@schuckmannacres.com> <4991C282.5070102@schuckmannacres.com> Message-ID:

>I'm starting to think that the problem is even more insidious than just >RTSP replies not getting recognized by the client. I think the current >implementation prevents multiple streams in the same session from >being received using RTP over TCP, I think I experienced this the >first time I tried it. Does this sound correct?

No, that's not correct. There is no problem having multiple streams (e.g., audio + video) in a single RTP-over-TCP connection. (The data format includes a tag which identifies each sub-stream, and our receiving software handles this OK.)

> >Is the problem that the RTPSource that is created for the first >stream now owns the TCP socket and it has no way to forward data >that's not for it on to the other interested parties (other >RTPSources and the RTSPClient)?

Basically, yes. -- Ross Finlayson Live Networks, Inc. http://www.live555.com/

From finlayson at live555.com Tue Feb 10 18:17:18 2009 From: finlayson at live555.com (Ross Finlayson) Date: Tue, 10 Feb 2009 18:17:18 -0800 Subject: [Live-devel] RTSPClient::openConnectionFromURL and fBaseURL In-Reply-To: <4991C0CB.4010901@schuckmannacres.com> References: <4991C0CB.4010901@schuckmannacres.com> Message-ID:

>In looking at openConnectionFromURL() I think I see a potential >problem with fBaseURL. >Basically the first thing this method does is free fBaseURL and then >strDup the new URL.
>Next this method parses the URL and then checks to see whether an input >socket is open and, if not, opens one using the URL. >If parsing fails or an input socket is already open, the function >aborts and doesn't clean up fBaseURL. > >The problem I see is that if a socket were opened to a URL and then >never closed before calling openConnectionFromURL with another URL, >fBaseURL could be set to a URL that doesn't match the URL the socket >is connected to. >This seems like a potential source of bugs, or at the very least much confusion.

A single "RTSPClient" instance (at least, in its current implementation) was never intended to support more than one RTSP URL - either concurrently, or successively. You should use a different "RTSPClient" object for each RTSP URL that you want to handle. -- Ross Finlayson Live Networks, Inc. http://www.live555.com/

From prbhagwat at gmail.com Wed Feb 11 02:41:33 2009 From: prbhagwat at gmail.com (Pramod Bhagwat) Date: Wed, 11 Feb 2009 16:11:33 +0530 Subject: [Live-devel] Bug in AVIFileSink Message-ID:

Hi Ross, This is regarding the AVISubsessionIOState class in AVIFileSink.cpp. The constructor of AVISubsessionIOState should initialize fIsByteSwappedAudio to false. If the SDP file contains only video and no audio, then fIsByteSwappedAudio will have some random value, which causes all the data to be byte-swapped in the AVISubsessionIOState::useFrame function.

One more question related to AVI, but not related to the above bug. If I try to play back the AVI file created by the openRTSP program using the VLC player, VLC complains: *This AVI File is broken. Seeking will not work correctly.* Do you have any suggestion for this? Warm Regards, pramod

From finlayson at live555.com Wed Feb 11 06:56:02 2009 From: finlayson at live555.com (Ross Finlayson) Date: Wed, 11 Feb 2009 06:56:02 -0800 Subject: [Live-devel] Bug in AVIFileSink In-Reply-To: References: Message-ID:

>This is regarding the AVISubsessionIOState class present in >AVIFileSink.cpp.
>The constructor of AVISubsessionIOState should >initialize fIsByteSwappedAudio to false.

Thanks. This will be fixed in the next release of the code.

>One more question related to AVI, but not related to the above bug. If I >try to play back the AVI file created by the openRTSP program using the VLC >player, VLC complains: *This AVI File is broken. Seeking >will not work correctly.* Do you have any suggestion for this?

VLC is not our product. You need to look at the VLC source code, to figure out why it thinks that the AVI file is 'broken' (and then come back and look for any bug in our code that might be causing this). -- Ross Finlayson Live Networks, Inc. http://www.live555.com/

From matt at schuckmannacres.com Wed Feb 11 10:44:57 2009 From: matt at schuckmannacres.com (Matt Schuckmann) Date: Wed, 11 Feb 2009 10:44:57 -0800 Subject: [Live-devel] Making RTSP commands work when streaming RTP over TCP In-Reply-To: References: <4990B0CF.8020502@schuckmannacres.com> <4991C282.5070102@schuckmannacres.com> Message-ID: <49931CA9.3010805@schuckmannacres.com>

Ah yes, now I see how the routing to the correct stream handlers works; that's a piece of work. It took me a while to see the statics in RTPInterface.cpp and see how all the routing works.

So once I figured that out, I started to plumb in a callback so that the SocketDescriptor could notify a non-RTP entity (i.e. the RTSPClient object) that non-RTP data had come in. That seemed to be going well until I got to the RTSPClient class and realized that it assumes it owns the TCP stream: whenever it sends data, it immediately blocks on the socket for the reply, throwing away any RTP data it encounters while waiting (I'm actually puzzled why this doesn't work). This poses a problem for my nonRTPCallback solution, because the SocketDescriptor in the RTPInterface will never get a chance to see the data and do the forwarding.
I think to make this work we'd have to change the way most of the RTSPClient commands work; it would have to be a little more event-driven. That is, RTSPClient sends the command and then lets the TaskScheduler take over; when the data arrives, RTSPClient::incomingRequestHandler() would get called (via a callback from the SocketDescriptor), and the RTSPClient could parse the response and notify its client of the result via a callback. This would probably be a breaking change to how RTSPClient works now, but it would certainly simplify getting this feature working. I suppose one could provide two RTSPClient classes: the current one, and a new one that is event-driven and works well with RTP-over-TCP. What would you prefer to see: a radical breaking change to how RTSPClient works, or a second, event-driven RTSPClient class, perhaps with the common code between the two refactored out to a base class? Or can you see yet another solution?

So while I debate the merits of changing RTSPClient, I went ahead with my original idea of creating a second TCP connection once the first is taken over for streaming, and by golly it works (with the server mods I mentioned before). I don't have all the kinks and corner cases worked out yet, but it does work. Would this be something you'd want to have in the library?

Thanks, Matt S.

Ross Finlayson wrote: >> I'm starting to think that the problem is even more insidious than just >> RTSP replies not getting recognized by the client. I think the current >> implementation prevents multiple streams in the same session from >> being received using RTP over TCP, I think I experienced this the >> first time I tried it. Does this sound correct? > > No, that's not correct. There is no problem having multiple streams > (e.g., audio + video) in a single RTP-over-TCP connection. (The data > format includes a tag which identifies each sub-stream, and our > receiving software handles this OK.)
> >> Is the problem that the RTPSource that is created for the first >> stream now owns the TCP socket and it has no way to forward data >> that's not for it on to the other interested parties (other >> RTPSources and the RTSPClient)? > > Basically, yes.

From finlayson at live555.com Wed Feb 11 14:54:14 2009 From: finlayson at live555.com (Ross Finlayson) Date: Wed, 11 Feb 2009 14:54:14 -0800 Subject: [Live-devel] Making RTSP commands work when streaming RTP over TCP In-Reply-To: <49931CA9.3010805@schuckmannacres.com> References: <4990B0CF.8020502@schuckmannacres.com> <4991C282.5070102@schuckmannacres.com> <49931CA9.3010805@schuckmannacres.com> Message-ID:

>So while I debate the merits of changing RTSPClient I went ahead >with my original idea of creating a second TCP connection once the >first is taken over for streaming and by golly it works (with the >server mods I mentioned before). I don't have all the kinks and >corner cases worked out yet but it does work. Would this be >something you'd want to have in the library?

At some point soon I'll be making major changes to the "RTSPClient" implementation (primarily to make it do asynchronous socket reads, in line with most of the rest of the code, but perhaps addressing the RTSP/RTP-over-TCP issue as well). Until that time, I don't plan on accepting any patches to this code (except for simple, obvious bug fixes). -- Ross Finlayson Live Networks, Inc.
http://www.live555.com/

From matt at schuckmannacres.com Thu Feb 12 10:28:55 2009 From: matt at schuckmannacres.com (Matt Schuckmann) Date: Thu, 12 Feb 2009 10:28:55 -0800 Subject: [Live-devel] Making RTSP commands work when streaming RTP over TCP In-Reply-To: References: <4990B0CF.8020502@schuckmannacres.com> <4991C282.5070102@schuckmannacres.com> <49931CA9.3010805@schuckmannacres.com> Message-ID: <49946A67.3050605@schuckmannacres.com>

Ross Finlayson wrote: >> So while I debate the merits of changing RTSPClient I went ahead >> with my original idea of creating a second TCP connection once the >> first is taken over for streaming and by golly it works (with the >> server mods I mentioned before). I don't have all the kinks and >> corner cases worked out yet but it does work. Would this be something >> you'd want to have in the library? > > At some point soon I'll be making major changes to the "RTSPClient" > implementation (primarily to make it do asynchronous socket reads, in > line with most of the rest of the code, but perhaps addressing the > RTSP/RTP-over-TCP issue as well). Until that time, I don't plan on > accepting any patches to this code (except for simple, obvious bug > fixes).

Totally understandable. In the meantime, to make RTP over TCP work in my limited, closed-world application, I'll subclass RTSPClient to implement my second-TCP-socket approach and move on. I'll keep an eye out for your changes to RTSPClient; let me know if there is anything I can do to help get standardized RTP over TCP working. At some point I'd like to provide you with a patch for my server changes to allow non-persistent RTSP connections, as I think it has good application beyond this problem (notably with unreliable network connections, i.e. cellular and satellite networks). Thanks Matt S.
From finlayson at live555.com Fri Feb 13 00:33:13 2009 From: finlayson at live555.com (Ross Finlayson) Date: Fri, 13 Feb 2009 00:33:13 -0800 Subject: [Live-devel] Patch against release dated 1/26/2009 In-Reply-To: <498B7F19.1040807@schuckmannacres.com> References: <498B7F19.1040807@schuckmannacres.com> Message-ID:

I've now released a new version (2009.02.13) that includes some, but not all, of your suggested changes.

>A synopsis of the changes is listed below. > >1. Modified BasicUsageEnvironment0::reportBackgroundError() to get >the errorResultMsgBuffer contents via a call to getResultMsg() >instead of accessing the private member directly, so that it will >work with a derived class that doesn't use fResultMsgBuffer > >2. Modified RTSPServer to use a virtual factory function, >createNewClientSession(), to create RTSPClientSession objects (or >derived children of that class) so that users who want to use a >class derived from RTSPClientSession don't have to re-implement all >of the incomingConnectionHandler() code. > >3. Added support for recognizing SET_PARAMETER commands and passing the >command body into the RTSPClientSession.

I have added these.

>4. Added support to specify a range of ports that can be used for >RTP/RTCP streams instead of just a starting port. Also added a failure >condition for when you run out of ports. The range is specified by a >starting port and a count of following ports. The count can be set >to -1 to allow an open-ended range.

This change is likely to be generally useful, but I haven't added it yet, because it's quite a substantial change to the code, which I need more time to review. This might get added sometime in the future, though.

>5. Added an iterator class to iterate over ServerMediaSession objects.

I have added this.

>6. Added accessors to OnDemandServerMediaSubsession to get the RTP >port and RTCP port for a client session ID associated with a void* >streamToken.
This is primarily so that the server application can list the ports in use with each session. We will probably be adding more of these types of accessors; please tell me if we aren't following your vision for this type of access.

No, I don't want to add stuff like this to "OnDemandServerMediaSubsession" if we can avoid it; that class is complicated enough as it is. Instead, I think you can probably set this information in your subclasses, in your "createNewStreamSource()" and/or "createNewRTPSink()" virtual function implementations, and access it via new functions that you could define in your subclasses.

>7. Changed MS Visual Studio Makefile.tail make files to name the pdb >files uniquely for each library so that all the libraries and their >pdb files can be copied to a common directory.

No, you can't add Windows-specific stuff like this to the "Makefile.tail" files, because those files are used to create Makefiles for both Windows and Unix platforms. Instead, if necessary, change your "win32config" file, and then run "GenWindowsMakefiles" as usual. -- Ross Finlayson Live Networks, Inc. http://www.live555.com/

From wouter.dhondt at vsk.be Fri Feb 13 04:31:20 2009 From: wouter.dhondt at vsk.be (Wouter Dhondt) Date: Fri, 13 Feb 2009 13:31:20 +0100 Subject: [Live-devel] Membership Report Message-ID: <49956818.2040403@vsk.be>

Hi. While tracing a stream using Wireshark, I noticed the following packets sent from the client between a DESCRIBE and a SETUP:

Destination 224.0.0.22 IGMP V3 Membership Report
Destination 228.67.43.91 UDP Source port: 15947 Destination port: 15947
Destination 224.0.0.22 IGMP V3 Membership Report

Can anyone tell me what these are for? Can we disable them?
Kind regards, Wouter Dhondt From SRawling at pelco.com Fri Feb 13 06:29:46 2009 From: SRawling at pelco.com (Rawling, Stuart) Date: Fri, 13 Feb 2009 06:29:46 -0800 Subject: [Live-devel] Membership Report In-Reply-To: <49956818.2040403@vsk.be> Message-ID: They are the IGMP messages sent to join the multicast group the server will be sending to, which in this case seems to be 228.67.43.91. On most decent (and properly configured) pieces of network hardware you have to join the group in order to receive packets sent to the multicast address. These messages are required for multicast, but not for unicast, so if you configure your server to stream unicast these messages should not appear. For more information see: Stuart On 2/13/09 4:31 AM, "Wouter Dhondt" wrote: > Hi. > > While tracing a stream using wireshark I noticed the following packets > sent from the client between a describe and setup: > > Destination 224.0.0.22 IGMP V3 Membership Report > Destination 228.67.43.91 UDP Source port: 15947 Destination port: 15947 > Destination 224.0.0.22 IGMP V3 Membership Report > > Can anyone tell me what these are for? Can we disable them? > > Kind regards, > > Wouter Dhondt > > > _______________________________________________ > live-devel mailing list > live-devel at lists.live555.com > http://lists.live555.com/mailman/listinfo/live-devel > - ------------------------------------------------------------------------------ Confidentiality Notice: The information contained in this transmission is legally privileged and confidential, intended only for the use of the individual(s) or entities named above. This email and any files transmitted with it are the property of Pelco. 
If the reader of this message is not the intended recipient, or an employee or agent responsible for delivering this message to the intended recipient, you are hereby notified that any review, disclosure, copying, distribution, retention, or any action taken or omitted to be taken in reliance on it is prohibited and may be unlawful. If you receive this communication in error, please notify us immediately by telephone call to +1-559-292-1981 or forward the e-mail to administrator at pelco.com and then permanently delete the e-mail and destroy all soft and hard copies of the message and any attachments. Thank you for your cooperation. - ------------------------------------------------------------------------------

From geniusbill at gmail.com Fri Feb 13 07:40:31 2009 From: geniusbill at gmail.com (Xiao Li) Date: Fri, 13 Feb 2009 15:40:31 +0000 Subject: [Live-devel] DeviceSource.cpp problem Message-ID: <6b34e06f0902130740h1bc79d4fw33e2aafecf8281c3@mail.gmail.com>

Dear All, I followed the instructions in DeviceSource.cpp to try to stream live video from my ARM encoder platform. My current program for the encoder is thread-based, i.e. a capture thread, an encoding thread, and a writer thread; the three threads share a common buffer. In the writer thread, once the buffer is accessed, it is delivered to a global variable, and then the RTSP server is started (as in testMPEG4VideoStreamer.cpp). This global variable is then accessed by deliverFrame() in the DeviceSource subclass (e.g. EncoderSource) and passed to fTo; deliverFrame() is also simply called in EncoderSource::doGetNextFrame(). Those are all the changes I have made, and the remaining parts are exactly the same as in testMPEG4VideoStreamer.cpp, except that I changed the ByteStreamFileSource instance to EncoderSource in the play() function.
When the program is running, I can see the encoded packets being passed to fTo (I use printf to get the frame size). The RTSP server starts as well, since I can use VLC to connect to it remotely. But the problem is that I couldn't see the live video in VLC; it looks like no data is pushed out (the Ethernet card's activity light doesn't flash). Could anybody please help me figure out the problem? Do I need to do anything in doEventLoop(), or anything else? Any help is appreciated! Have a nice weekend! Bill

From geniusbill at gmail.com Fri Feb 13 07:53:12 2009 From: geniusbill at gmail.com (Xiao Li) Date: Fri, 13 Feb 2009 15:53:12 +0000 Subject: [Live-devel] DeviceSource.cpp problem (code) Message-ID: <6b34e06f0902130753u7b3df94dl899d6a57bcfc1193@mail.gmail.com>

Below is the code to assist you in understanding my previous question, many thanks! The key line is "envpW = envp;" (note: envpW is the global variable that I use to get the encoded frame from the writer thread).

void EncoderSource::doGetNextFrame() {
  deliverFrame();
  if (0 /* the source stops being readable */) {
    handleClosure(this);
    return;
  }
}

void EncoderSource::deliverFrame() {
  // Deliver the data here:
  // Below from original write.cpp
  //FILE *outputFp = NULL;
  //WriterEnv *envp = (WriterEnv *) arg;
  //void *status = THREAD_SUCCESS;
  extern WriterEnv *envpW;
  WriterBufferElement we2;
  //while (TRUE) {

  /* Get an encoded buffer from the video thread */
  if (FifoUtil_get(&envpW->inFifo, &we2) == FIFOUTIL_FAILURE) {
  }

  // Set DeviceSource params...
  if (we2.frameSize > fMaxSize) {
    we2.frameSize = fMaxSize;
  }

  // Copy to DeviceSource...
  //fTo = (unsigned char*)malloc(we2.frameSize);
  memcpy(fTo, we2.encodedBuffer, we2.frameSize);
  printf("Frame %d bytes %d\n", we2.frameSize, fMaxSize);

  /* Send back the buffer to the video thread */
  if (FifoUtil_put(&envpW->outFifo, &we2) == FIFOUTIL_FAILURE) {
    ERR("Failed to put buffer in output fifo\n");
  }

  // Final cleanup.
  // After delivering the data, inform the reader that it is now available:
  nextTask() = envir().taskScheduler().scheduleDelayedTask(0,
      (TaskFunc*)FramedSource::afterGetting, this);
}

void *writerThread(void *arg) {
  extern WriterEnv *envpW;
  WriterBufferElement wFlush = { WRITER_FLUSH };
  WriterEnv *envp = (WriterEnv *) arg;
  void *status = THREAD_SUCCESS;
  WriterBufferElement we;

  envpW = envp;

  /* Signal that initialization is done and wait for other threads */
  Rendezvous_meet(envp->hRendezvousInit);
  ERR("Entering writer main loop.\n");
  startMPEG4Streamer();

cleanup:
  /* Make sure the other threads aren't waiting for init to complete */
  Rendezvous_force(envp->hRendezvousInit);
  /* Make sure the other threads aren't stuck pausing */
  Pause_off(envp->hPause);
  /* Make sure the video thread isn't stuck in FifoUtil_get() */
  FifoUtil_put(&envp->outFifo, &wFlush);
  /* Meet up with other threads before cleaning up */
  Rendezvous_meet(envp->hRendezvousCleanup);
}

From ratin3 at gmail.com Sat Feb 14 12:58:26 2009 From: ratin3 at gmail.com (Ratin) Date: Sat, 14 Feb 2009 12:58:26 -0800 Subject: [Live-devel] H.264 frame via Mplayer's RTSP (live 555 library) Message-ID: <5c70701a0902141258i5978a977t72383b2c45540cb1@mail.gmail.com>

Hi, I am trying to dump an H.264 1080i source stream into a raw bitstream file; the stream plays fine in MPlayer over RTSP. However, when I dump the video stream using -dumpvideo and -dumpfile, MPlayer can't play it back. It looks like the bitstream only has one IDR frame (00 00 00 01 65) and no other NALUs like SPS/PPS, and MPlayer complains about not finding them. But the RTSP-streamed version works flawlessly. If I look at the Wireshark trace, I see that the stream starts with the RTP payload

0x7c 85 88 84 ..very first RTP packet
0x7c05d34f ... 2nd RTP packet
:::::
:::::
0x7c457c1ff being the last one in the GOP (marker bit set)

I think that's an IDR frame..
The subsequent frames' RTP payloads are:

5c819a20 first RTP packet of 2nd frame (possibly a P-frame?)
5c01ff37 2nd RTP packet
:
534172c4 last RTP packet of 2nd frame, and marker set

roughly about 35 frames of type 5c81 .. 5c01 .. 5c41's, then it repeats with 7c858884 etc. etc.

Does anybody have any clue how to interpret these NALU headers in terms of an Annex B-style bitstream? Why would MPlayer only convert the first IDR and not the other ones? Is there anything missing from the subsequent IDR frames? Where are the SPS and PPS? thnx

From satheesh at streamprocessors.com Sat Feb 14 20:13:28 2009 From: satheesh at streamprocessors.com (Satheesh Ram) Date: Sun, 15 Feb 2009 09:43:28 +0530 Subject: [Live-devel] H.264 frame via Mplayer's RTSP (live 555 library) In-Reply-To: <5c70701a0902141258i5978a977t72383b2c45540cb1@mail.gmail.com> References: <5c70701a0902141258i5978a977t72383b2c45540cb1@mail.gmail.com> Message-ID: <49979668.5000904@streamprocessors.com>

Hi, The sprop-parameter-sets attribute in the SDP can be used to transmit the SPS and PPS. If the streaming server used sprop-parameter-sets to convey the SPS and PPS, you will not get them in the raw dumped stream. Please refer to http://www.rfc-archive.org/getrfc.php?rfc=3984

Ratin wrote: > Hi, I am trying to dump an H.264 1080i source stream into a raw bitstream > file; the stream plays fine in MPlayer over RTSP. However, when I dump > the video stream using -dumpvideo and -dumpfile, MPlayer can't play it > back. It looks like the bitstream only has one IDR frame (00 00 00 01 > 65) and no other NALUs like SPS/PPS, and MPlayer complains about not > finding them. But the RTSP-streamed version works flawlessly. If I look > at the Wireshark trace, I see that the stream starts with the RTP payload > > 0x7c 85 88 84 ..very first RTP packet > 0x7c05d34f ... 2nd RTP packet > > ::::: > > ::::: > > 0x7c457c1ff being the last one in the GOP (marker bit set) > > > I think that's an IDR frame..
> > > > the subsequent frames' RTP payloads are: > > 5c819a20 first RTP packet of 2nd frame (possibly a P frame?) > 5c01ff37 2nd RTP packet > : > 534172c4 last RTP of 2nd frame, and marker set > > > roughly about 35 frames of type 5c81 .. 5c01 .. 5c41's > > then it repeats with > > 7c858884 etc > > > > etc etc > > > Anybody have any clue how to interpret these NALU headers in terms of > an Annex-B style bitstream? > > Why would MPlayer only convert the first IDR and not the other ones? > Is there anything missing from the subsequent IDR frames? Where are > the SPS and PPS? > > thnx > > > ------------------------------------------------------------------------ > > _______________________________________________ > live-devel mailing list > live-devel at lists.live555.com > http://lists.live555.com/mailman/listinfo/live-devel > -- Satheesh Ram -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Sat Feb 14 21:05:03 2009 From: finlayson at live555.com (Ross Finlayson) Date: Sat, 14 Feb 2009 21:05:03 -0800 Subject: [Live-devel] H.264 frame via Mplayer's RTSP (live 555 library) In-Reply-To: <5c70701a0902141258i5978a977t72383b2c45540cb1@mail.gmail.com> References: <5c70701a0902141258i5978a977t72383b2c45540cb1@mail.gmail.com> Message-ID: >Hi, I am trying to dump an H.264 1080i source stream into a raw >bitstream file; the stream plays fine in MPlayer over RTSP. However, >when I dump the video stream using -dumpvideo and -dumpfile, MPlayer >can't play it back. A "MPlayer"-specific mailing list would be the best place to ask this question. -- Ross Finlayson Live Networks, Inc.
http://www.live555.com/ From geniusbill at gmail.com Sun Feb 15 15:29:43 2009 From: geniusbill at gmail.com (Xiao Li) Date: Sun, 15 Feb 2009 23:29:43 +0000 Subject: [Live-devel] No packet streamed out for customized testMPEG4VideoStreamer with live video encoder Message-ID: <6b34e06f0902151529p35dfe9ccwc3923c317b8f0a5@mail.gmail.com> Hi Ross, I'm new to Live555 development. I changed DeviceSource.cpp to interface with my hardware encoder, but I noticed that no packets are streamed out after the RTSP server starts. The only code I changed is as shown in http://lists.live555.com/pipermail/live-devel/2009-February/010154.html I'd appreciate it if you could help me figure out the problem. Best regards, Bill -------------- next part -------------- An HTML attachment was scrubbed... URL: From matt at schuckmannacres.com Mon Feb 16 08:47:16 2009 From: matt at schuckmannacres.com (Matt Schuckmann) Date: Mon, 16 Feb 2009 08:47:16 -0800 Subject: [Live-devel] Patch against release dated 1/26/2009 In-Reply-To: References: <498B7F19.1040807@schuckmannacres.com> Message-ID: <49999894.9050706@schuckmannacres.com> Right, thanks for the feedback. We'll look into making the changes to our subclasses. Ross Finlayson wrote: > I've now released a new version (2009.02.13) that includes some, but > not all, of your suggested changes. > > >> A synopsis of the changes is listed below. >> >> 1. Modified BasicUsageEnvironment0::reportBackgroundError() to get >> the errorResultMsgBuffer contents via a call to getResultMsg() >> instead of accessing the private member directly, so that it will work >> with a derived class that doesn't use fResultMsgBuffer. >> >> 2. Modified RTSPServer to use a virtual factory function, >> createNewClientSession(), to create RTSPClientSession objects (or >> derived children of that class) so that users who want to use a >> class derived from RTSPClientSession don't have to re-implement all >> of the incomingConnectionHandler() code. >> >> 3. 
Added support for recognizing SET_PARAMETER commands and passing the >> command body into the RTSPClientSession. > I have added these. > >> 4. Added support to specify a range of ports that can be used for >> RTP/RTCP streams instead of just a starting port. Also added a failure >> condition for when you run out of ports. The range is specified by a >> starting port and a count of following ports. The count can be set to >> -1 to allow an open-ended range. > This change is likely to be generally useful, but I haven't added it > yet, because it's quite a substantial change to the code, which I need > more time to review. This might get added sometime in the future, > though. > >> 5. Added an iterator class to iterate over ServerMediaSession objects. > I have added this. > >> 6. Added accessors to OnDemandServerMediaSubsession to get the RTP >> port and RTCP port for a client session ID associated with a void* >> streamToken. This is primarily so that the server application can list >> the ports in use by each session. We will probably be adding more >> of these types of accessors; please tell me if we aren't following >> your vision for this type of access. > No, I don't want to add stuff like this to > "OnDemandServerMediaSubsession" if we can avoid it; that class is > complicated enough as it is. Instead, I think you can probably set > this information in your subclasses, in your "createNewStreamSource()" > and/or "createNewRTPSink()" virtual function implementations, and > access it via new functions that you could define in your subclasses. > > >> 7. Changed MS Visual Studio Makefile.tail make files to name the pdb >> files uniquely for each library so that all the libraries and their >> pdb files can be copied to a common directory. > > No, you can't add Windows-specific stuff like this to the > "Makefile.tail" files, because those files are used to create > Makefiles for both Windows and Unix platforms. 
Instead, if necessary, > change your "win32config" file, and then run "GenWindowsMakefiles" as > usual. > From patbob at imoveinc.com Mon Feb 16 10:38:11 2009 From: patbob at imoveinc.com (Patrick White) Date: Mon, 16 Feb 2009 10:38:11 -0800 Subject: [Live-devel] Multicast teardown bug? In-Reply-To: <6b34e06f0902130740h1bc79d4fw33e2aafecf8281c3@mail.gmail.com> References: <6b34e06f0902130740h1bc79d4fw33e2aafecf8281c3@mail.gmail.com> Message-ID: <200902161038.11877.patbob@imoveinc.com> This is probably a stupid question, but... In RTSPServer::RTSPClientSession::livenessTimeoutTask(), the RTSP client session is deleted only if it is not a multicast session. The comment claims this is to avoid closing all client sessions, not just the one that has timed out. The only thing that is avoided in this case is deleting the RTSPClientSession instance. In RTSPServer::RTSPClientSession::incomingRequestHandler1(), if there is a read error on the RTSP socket for the client, or it sends a TEARDOWN request, its RTSPClientSession instance is deleted. No check for multicast is made. For multicast sessions, why is it unsafe to delete the RTSPClientSession because it has timed out, yet still safe to delete that same object when the client requests a TEARDOWN or its RTSP socket gets a read error? Wouldn't those deletes have the same undesirable side effect? From baxkstreet at 163.com Mon Feb 16 18:02:17 2009 From: baxkstreet at 163.com (baxkstreet) Date: Tue, 17 Feb 2009 10:02:17 +0800 (CST) Subject: [Live-devel] H264 streaming Message-ID: <12913474.65721234836138157.JavaMail.coremail@bj163app25.163.com> Hi all! I want to stream H.264 with live555, but I don't know how to implement this. Could anyone be kind enough to tell me the procedure, step by step? best regards kaka -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From finlayson at live555.com Mon Feb 16 19:24:18 2009 From: finlayson at live555.com (Ross Finlayson) Date: Mon, 16 Feb 2009 19:24:18 -0800 Subject: [Live-devel] Multicast teardown bug? In-Reply-To: <200902161038.11877.patbob@imoveinc.com> References: <6b34e06f0902130740h1bc79d4fw33e2aafecf8281c3@mail.gmail.com> <200902161038.11877.patbob@imoveinc.com> Message-ID: The code in question (that "if" statement) is almost 3 years old. At the time, it was there for a reason, but I've forgotten exactly why. I suspect that you're right - it's no longer needed (because closing the "RTSPClientSession" object has no effect on the entire stream other than calling the "deleteStream()" virtual function - but for multicast streams, the implementation of that virtual function is a 'no op'). Therefore, I think we can remove that "if" statement now. Please go ahead and do this, and let us know if it causes any problems. If I don't hear of any problems, I'll remove it from the next release of the code. -- Ross Finlayson Live Networks, Inc. http://www.live555.com/ From baxkstreet at 163.com Mon Feb 16 19:53:41 2009 From: baxkstreet at 163.com (baxkstreet) Date: Tue, 17 Feb 2009 11:53:41 +0800 (CST) Subject: [Live-devel] Could somebody send me a copy of a Tutorial of H264 RTP Streaming Message-ID: <25742952.138001234842821197.JavaMail.coremail@app157.163.com> Hi all! I found a tutorial of H.264 RTP streaming posted by Mojtaba Hosseini at http://lists.live555.com/pipermail/live-devel/2007-June/007030.html but the download URL is unavailable now. If someone has a copy, could you send it to me? Thanks! best regards kaka -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From fuzzylai at dynagrid.net Tue Feb 17 01:02:46 2009 From: fuzzylai at dynagrid.net (Fuzzy Lai) Date: Tue, 17 Feb 2009 17:02:46 +0800 Subject: [Live-devel] Handle frame dropping elegantly on clients with limited output bandwidth Message-ID: Dear Sir: After reviewing the server-side implementation, I think the streaming process is a pull model, isn't it? Besides, when the frame data is ready to be pulled, the event loop will be scheduled to send the frame immediately, no matter whether the output channel is over UDP or TCP, won't it? The problem is that with two clients, one over UDP and the other over TCP with only limited bandwidth, the transmitting rate of the UDP client seems to be influenced by the slow TCP client, right? If such a problem does exist, how about separating the TCP sending process from the frame-pulling one, and doing the sending in the event loop only when the output TCP socket is writable? Of course, one would then have to handle dropping coded frames if the prepared TCP sending buffer is full. Is my reasoning OK? BR Fuzzy Lai -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Tue Feb 17 01:26:02 2009 From: finlayson at live555.com (Ross Finlayson) Date: Tue, 17 Feb 2009 01:26:02 -0800 Subject: [Live-devel] Handle frame dropping elegantly on clients with limited output bandwidth In-Reply-To: References: Message-ID: >The problem is that if two clients, one of which is over UDP while >the other is over TCP and has only limited bandwidth, the >transmitting rate of the UDP client seems to be influenced by the >slow TCP client, right? No, because the server's writes to the TCP socket will be non-blocking. The underlying OS (in its implementation of TCP) will accept the data immediately, but, if necessary, buffer the outgoing data until it can be sent. -- Ross Finlayson Live Networks, Inc. 
http://www.live555.com/ From wouter.dhondt at vsk.be Tue Feb 17 06:33:47 2009 From: wouter.dhondt at vsk.be (Wouter Dhondt) Date: Tue, 17 Feb 2009 15:33:47 +0100 Subject: [Live-devel] Multiple livemedia libs in the same process Message-ID: <499ACACB.8070907@vsk.be> Hello. I have the following problem. I have a basic application that uses livemedia (a server to stream to clients). The application links to a static livemedia library. This main application also uses a second, external library. This library also uses livemedia, dynamically (as a client to connect to a server). The problem is: the doEventLoop() in the external library fails immediately. If we remove the statically linked livemedia library from the main application, the external library works fine. There seems to be a shared resource in use, even though we use 2 different libraries and 2 different threads. The ports are different, so I'm not sure which other resource is causing this. Does anyone have an idea? From finlayson at live555.com Tue Feb 17 13:25:02 2009 From: finlayson at live555.com (Ross Finlayson) Date: Tue, 17 Feb 2009 13:25:02 -0800 Subject: [Live-devel] Multiple livemedia libs in the same process In-Reply-To: <499ACACB.8070907@vsk.be> References: <499ACACB.8070907@vsk.be> Message-ID: >I have the following problem. I have a basic application that uses >livemedia (a server to stream to clients). The application links to a >static livemedia library. This main application also uses a second, >external library. This library also uses livemedia, dynamically >(as a client to connect to a server). The problem is: the doEventLoop() >in the external library fails immediately. If we remove the statically >linked livemedia library from the main application, the external >library works fine. > >There seems to be a shared resource in use, even though we use 2 >different libraries and 2 different threads. The ports are different, so >I'm not sure which other resource is causing this. > >Does anyone have an idea? 
Well, I'm not convinced that you can have more than one instance of *any* library linked into a process - static or dynamic - and expect things to work properly. Why not just use a single instance of the library (i.e., the static one)? You can use more than one thread within a process that uses our library only if either (i) only one thread uses the library, or (ii) if more than one thread uses the library, each does so using a completely different "UsageEnvironment" object, and they do not access each other's objects. See the FAQ! Your best bet, though, is to use separate processes, not multiple threads within the same process. -- Ross Finlayson Live Networks, Inc. http://www.live555.com/ From fuzzylai at dynagrid.net Tue Feb 17 19:03:02 2009 From: fuzzylai at dynagrid.net (Fuzzy Lai) Date: Wed, 18 Feb 2009 11:03:02 +0800 Subject: [Live-devel] Handle frame dropping elegantly on clients with limited output bandwidth In-Reply-To: References: Message-ID: > > No, because the server's writes to the TCP socket will be non-blocking. > > The underlying OS (in its implementation of TCP) will accept the data > immediately, but, if necessary, buffer the outgoing data until it can > be sent. > > If the underlying OS socket buffer were unlimited, the reasoning would be all right. However, according to the manpage of send(): > If no messages space is available at the socket to hold the message to be > transmitted, then *send*() normally *blocks*, unless the socket has been > placed in non-blocking I/O mode. The *select(2)* call may be used to > determine when it is possible to send more data. > > That is, the call to send() actually blocks if no underlying message space is available! Besides, after reviewing the wis-streamer, I think the streaming process can apply either a pull or a push model on different ServerMediaSubsessions, right? 
2009/2/17 Fuzzy Lai > Dear Sir: > > After reviewing the server side implementation, I think the streaming > process is a pull model, isn't it? > > Besides, when the frame data is pulled readily, the event loop will > scheduled to send the frame immediately no matter the output channel is over > UDP or TCP, won't it? > > The problem is that if two clients, one of which is over UDP while the > other is over TCP and has only limited bandwidth, the transmitting rate of > the UDP client seems to be influenced by the slow TCP client, right? > > If such a problem does exist, how about separating the tcp sending process > from the frame pulling one and doing the sending in the event loop only when > the output tcp socket is writable? > > Of course, one should handle the coded frame dropping issue if the prepared > tcp sending buffer is full. > > Is my reasoning OK? > > BR > Fuzzy Lai > -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Wed Feb 18 00:17:46 2009 From: finlayson at live555.com (Ross Finlayson) Date: Wed, 18 Feb 2009 00:17:46 -0800 Subject: [Live-devel] Handle frame dropping elegantly on clients with limited output bandwidth In-Reply-To: References: Message-ID: >No, because the server's writes to the TCP socket will be non-blocking. > >The underlying OS (in its implementation of TCP) will accept the data > >immediately, but, if necessary, buffer the outgoing data until it can >be sent. > >If the underlying OS socket buffer is unlimited, the reasoning >should be all right. However, according to the manpage of >send(): > >If no messages space is available at the socket to hold the message to be >transmitted, then send() normally blocks, unless the socket has been > >placed in non-blocking I/O mode. But the TCP socket in this case *has* been placed in non-blocking mode. That was my point. -- Ross Finlayson Live Networks, Inc. 
http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From PhilipH at gmx.at Wed Feb 18 08:29:01 2009 From: PhilipH at gmx.at (Philip Herrmann) Date: Wed, 18 Feb 2009 17:29:01 +0100 Subject: [Live-devel] Detecting a connection loss with openRTSP Message-ID: <20090218162901.29110@gmx.net> Hi, I wrote a client application to receive an MJPEG stream from a camera, based on the openRTSP sample. It works fine, but I don't know how to detect a loss of the network connection. Is there a possibility to set a timeout and to get something like a callback? The openRTSP sample also does not seem to recognize a loss of the connection. Regards, Philip -- Psssst! Schon vom neuen GMX MultiMessenger gehört? 
Der kann`s mit allen: http://www.gmx.net/de/go/multimessenger01 From moonlit99 at gmail.com Wed Feb 18 09:38:40 2009 From: moonlit99 at gmail.com (moonlit moonlit) Date: Wed, 18 Feb 2009 12:38:40 -0500 Subject: [Live-devel] Video and audio completely out of sync Message-ID: Hi, I'm using openRTSP to connect to an Axis camera and record the video and audio to an .mp4 file. I do this: openRTSP.exe -4 -y -l -w 640 -h 480 -f 30 -b 100000 -d 10 rtsp:// 192.168.10.56/mpeg4/media.amp > a.mp4 When I play back the file with VideoLAN or KMPlayer, the audio and video are out of sync. However, if I connect to the camera and record to .mp4 using VideoLAN, the file is good. Am I doing something wrong? Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From moonlit99 at gmail.com Wed Feb 18 09:43:11 2009 From: moonlit99 at gmail.com (moonlit moonlit) Date: Wed, 18 Feb 2009 12:43:11 -0500 Subject: [Live-devel] Using Live555 as a relay server Message-ID: Hi all, I have an Axis camera that is only accessible from a specific computer (A). I also have other computers that need to have access to the MPEG4 feed of this camera, but these computers ONLY have access to computer A. Is it possible to stream the live feed from the camera using computer A (working as a relay server) so the other computers can access the feed by using a URL similar to rtsp://x.x.x.x/mpeg4/media.amp where x.x.x.x is the IP address of computer A? Thanks!! -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Wed Feb 18 15:30:02 2009 From: finlayson at live555.com (Ross Finlayson) Date: Wed, 18 Feb 2009 15:30:02 -0800 Subject: [Live-devel] Detecting a connection loss with openRTSP In-Reply-To: <20090218162901.29110@gmx.net> References: <20090218162901.29110@gmx.net> Message-ID: >I wrote a client application to receive an MJPEG stream from a camera >based on the openRTSP sample. 
It works fine, but I don't know how >to detect a loss of the network connection. Is there a possibility >to set a timeout and to get something like a callback? I assume you're talking about the loss of the RTSP TCP connection, rather than just a normal end of stream (which we detect by listening for a RTCP "BYE" packet from the server). Unfortunately there's no easy way in the current code to do this. You would need to write your own "TaskScheduler" subclass (and use this instead of "BasicTaskScheduler") that adds a 'socket error' handler to the "select()" call. -- Ross Finlayson Live Networks, Inc. http://www.live555.com/ From finlayson at live555.com Thu Feb 19 01:35:48 2009 From: finlayson at live555.com (Ross Finlayson) Date: Thu, 19 Feb 2009 01:35:48 -0800 Subject: [Live-devel] Using Live555 as a relay server In-Reply-To: References: Message-ID: >I have an axis camera that is only accessible from a specific computer (A). >I also have other computers that need to have access to the MPEG4 >feed of this camera, >but these computers ONLY have access to computer A. It seems to me that this is your real problem. You should fix your network so that the computers that want to access your stream can contact your server. > >Is it possible to stream the live feed from the camera using >computer A (working as a relay server) No, not with the software that we currently make available. -- Ross Finlayson Live Networks, Inc. http://www.live555.com/ From finlayson at live555.com Thu Feb 19 02:07:10 2009 From: finlayson at live555.com (Ross Finlayson) Date: Thu, 19 Feb 2009 02:07:10 -0800 Subject: [Live-devel] Video and audio completely out of sync In-Reply-To: References: Message-ID: >I'm using openRTSP to connect to an axis camera and record the video >and audio to an .mp4 file >I do this: > >openRTSP.exe -4 -y -l -w 640 -h 480 -f 30 -b 100000 -d 10 >rtsp://192.168.10.56/mpeg4/media.amp > >a.mp4 Are you *sure* that the video's frame rate is 30 frames per second? 
You can't just guess this if you want to get an output file that plays properly; you need to specify the correct frame rate. -- Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From matt at schuckmannacres.com Thu Feb 19 16:33:15 2009 From: matt at schuckmannacres.com (Matt Schuckmann) Date: Thu, 19 Feb 2009 16:33:15 -0800 Subject: [Live-devel] RTSPClient::PlayMediaSession and PlayMediaSubSession Message-ID: <499DFA4B.2080800@schuckmannacres.com> I noticed that the following lines appear near the end of RTSPClient::PlayMediaSession but not in RTSPClient::PlayMediaSubsession().

if (fTCPStreamIdCount == 0) { // we're not receiving RTP-over-TCP
    // Arrange to handle incoming requests sent by the server
    envir().taskScheduler().turnOnBackgroundReadHandling(fInputSocketNum,
        (TaskScheduler::BackgroundHandlerProc*)&incomingRequestHandler, this);
}

This looks suspicious to me. Also, I kind of wonder why you wait until a play command has been issued to start responding to server requests; couldn't the server send a request to the client at any time after the session has been set up? Matt S. From moonlit99 at gmail.com Thu Feb 19 07:52:05 2009 From: moonlit99 at gmail.com (moonlit moonlit) Date: Thu, 19 Feb 2009 10:52:05 -0500 Subject: [Live-devel] Using Live555 as a relay server In-Reply-To: References: Message-ID: > It seems to me that this is your real problem. You should fix your network so that the computers that want to access your stream can contact your server. There is nothing really bad that needs to be fixed. We have tons of cameras located everywhere, and the same with the clients that connect to these cameras. It would be a really bad design if we needed to have connectivity from each client to each camera. That's why we need to have everything centralized, and the clients only need to have connectivity to a central video server. 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From moonlit99 at gmail.com Thu Feb 19 07:54:25 2009 From: moonlit99 at gmail.com (moonlit moonlit) Date: Thu, 19 Feb 2009 10:54:25 -0500 Subject: [Live-devel] Video and audio completely out of sync In-Reply-To: References: Message-ID: Well, if I remove the -f 30 parameter, the video still looks bad. I also have some artifacts while the video is playing. This doesn't happen with videolan or using the Axis ActiveX component. 2009/2/19 Ross Finlayson > I'm using openRTSP to connect to an axis camera and record the video and > audio to an .mp4 file > I do this: > > openRTSP.exe -4 -y -l -w 640 -h 480 -f 30 -b 100000 -d 10 rtsp:// > 192.168.10.56/mpeg4/media.amp > a.mp4 > > > Are you *sure* that the video's frame rate is 30 frames-per-second. You > can't just guess this if you want to get an output file that plays properly; > you need to specify the correct frame rate. > > -- > > > Ross Finlayson > Live Networks, Inc. > http://www.live555.com/ > > _______________________________________________ > live-devel mailing list > live-devel at lists.live555.com > http://lists.live555.com/mailman/listinfo/live-devel > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Thu Feb 19 17:45:19 2009 From: finlayson at live555.com (Ross Finlayson) Date: Thu, 19 Feb 2009 17:45:19 -0800 Subject: [Live-devel] Using Live555 as a relay server In-Reply-To: References: Message-ID: >It would be a really bad design if we need to have a connectivity >from each client to each camera. Nonsense. The whole point of the Internet is that anyone can connect to anyone. Whenever you prevent one node from contacting another, your network is broken (and you're not on the Internet, as far as I'm concerned). -- Ross Finlayson Live Networks, Inc. 
http://www.live555.com/ From finlayson at live555.com Thu Feb 19 17:53:03 2009 From: finlayson at live555.com (Ross Finlayson) Date: Thu, 19 Feb 2009 17:53:03 -0800 Subject: [Live-devel] Video and audio completely out of sync In-Reply-To: References: Message-ID: >Well, if I remove the -f 30 parameter, the video still looks bad. No, I didn't say that you should 'remove' the "-f " argument; I said that you should make sure that it's accurate. You can use a client like QuickTime Player to check what the stream's frame rate really is. Finally, do you really expect to be taken seriously on a mailing list like this if you use a "From:" line like "moonlit moonlit "?? This is not MySpace. -- Ross Finlayson Live Networks, Inc. http://www.live555.com/ From finlayson at live555.com Thu Feb 19 22:15:28 2009 From: finlayson at live555.com (Ross Finlayson) Date: Thu, 19 Feb 2009 22:15:28 -0800 Subject: [Live-devel] RTSPClient::PlayMediaSession and PlayMediaSubSession In-Reply-To: <499DFA4B.2080800@schuckmannacres.com> References: <499DFA4B.2080800@schuckmannacres.com> Message-ID: As I said before, the "RTSPClient" code will shortly be undergoing significant changes - in particular, to support asynchronous handling of responses *and* requests from the server. -- Ross Finlayson Live Networks, Inc. http://www.live555.com/ From sebastien-devel at celeos.eu Thu Feb 19 23:09:49 2009 From: sebastien-devel at celeos.eu (Sébastien Escudier) Date: Fri, 20 Feb 2009 08:09:49 +0100 Subject: [Live-devel] Using Live555 as a relay server In-Reply-To: References: Message-ID: <1235113789.499e573de0868@imp.celeos.eu> You can use VLC to do what you want. Ross: another reason is that network cameras can't handle a lot of clients at the same time. Relay servers can. 
From finlayson at live555.com Thu Feb 19 23:17:17 2009 From: finlayson at live555.com (Ross Finlayson) Date: Thu, 19 Feb 2009 23:17:17 -0800 Subject: [Live-devel] Using Live555 as a relay server In-Reply-To: <1235113789.499e573de0868@imp.celeos.eu> References: <1235113789.499e573de0868@imp.celeos.eu> Message-ID: >Ross: another reason is that network cameras can't handle a lot of clients at >the same time. Relay servers can. Yes, that's true. However, the questioner was asking how to connect a single client to the camera. The best way to do this is to fix his network so that the client can communicate with the camera directly. Developing a relay server - supporting the handling of (and stream duplication to) multiple clients - would be a much more significant task. (Of course, the best way to handle multiple clients is IP multicast, if it's available on the network.) -- Ross Finlayson Live Networks, Inc. http://www.live555.com/ From berechitai at gmail.com Fri Feb 20 01:57:16 2009 From: berechitai at gmail.com (Alexander) Date: Fri, 20 Feb 2009 12:57:16 +0300 Subject: [Live-devel] Some kind of live file streaming in MediaServer Message-ID: <555e2e1f0902200157q557ed258je0c4c0895e994d7@mail.gmail.com> The RTSP Media Server has an option "Reuse source", so if one user is already receiving a stream, another user who connects would get the same one. But if no one is connected, the stream is stopped. My question is how to start playing (something like streaming to null) even if no one is connected, so that if a user connects to the RTSP server after 5 minutes, he receives the stream not from the beginning. Such a thing works in the demo examples with RTP streaming (and RTPSink). But I want only RTSP. Regards, Alexander. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From finlayson at live555.com Fri Feb 20 06:10:41 2009 From: finlayson at live555.com (Ross Finlayson) Date: Fri, 20 Feb 2009 06:10:41 -0800 Subject: [Live-devel] Some kind of live file streaming in MediaServer In-Reply-To: <555e2e1f0902200157q557ed258je0c4c0895e994d7@mail.gmail.com> References: <555e2e1f0902200157q557ed258je0c4c0895e994d7@mail.gmail.com> Message-ID: >RTSP Media server has an option "Reuse source", so, if one user is >already receiving a stream, another user connected would have the >same. >But if no one connected, the stream is stopped. My question is how >to start playing (something like streaming to null) even if no one >is connected. And if a user connects to RTSP server after 5 minutes, >he receives stream not from the beginning. That's not a problem, because the server's streaming software will read from the input source object only when it actually needs data (to send to a client). Therefore, as long as the input source object delivers data 'live', upon request, the system will work properly. -- Ross Finlayson Live Networks, Inc. http://www.live555.com/ From igor.milavec at lsi.si Fri Feb 20 06:19:34 2009 From: igor.milavec at lsi.si (Igor Milavec) Date: Fri, 20 Feb 2009 15:19:34 +0100 Subject: [Live-devel] Remote close handling in RTSPClient Message-ID: Hi. I have noticed a tiny bug in RTSPClient disconnect handling. The current line 2513 in RTSPClient.cpp reads: envir().setResultErrMsg("Failed to read response: "); However, reading 0 bytes is a normal operation and not an error; that's why errno will not be set, and the caller will get incomplete status information with this implementation. I propose to change this line to: envir().setResultMsg("Failed to read response: Connection was closed by the remote host."); Sorry if this post is inappropriate; I haven't found any directions about reporting bugs on the web, that's why I'm posting it here. Regards, Igor ----- Igor Milavec Li?er Solutions d.o.o. 
Cesta Andreja Bitenca 68 SI-1000 Ljubljana Tel: +386 1 5101-780 Fax: +386 1 5101-785 -------------- next part -------------- An HTML attachment was scrubbed... URL: From berechitai at gmail.com Fri Feb 20 07:59:50 2009 From: berechitai at gmail.com (Alexander) Date: Fri, 20 Feb 2009 18:59:50 +0300 Subject: [Live-devel] Some kind of live file streaming in MediaServer In-Reply-To: References: <555e2e1f0902200157q557ed258je0c4c0895e994d7@mail.gmail.com> Message-ID: <555e2e1f0902200759q2d99e9b9v1695e712b46c72b4@mail.gmail.com> But the problem is exactly that 'live source'. It should be possible to stream live, for example, into a named pipe to be streamed by your server to end users. When the server does not request data, the 'live' source emulator should skip as much file data as the elapsed time dictates (according to the file's bit rate). So the emulator would have to parse the TS container itself. But this duplicates your server's functionality! I am absolutely sure that your server could emulate a live source with only a few modifications. Similar functionality is present in Microsoft Windows Media Services: it can create broadcast publishing points on an RTSP/HTTP server. These points use file playlists. When a publishing point is started by an administrator, the playlist plays virtually even if no one is connected. On Fri, Feb 20, 2009 at 5:10 PM, Ross Finlayson wrote: > RTSP Media server has an option "Reuse source", so, if one user is already >> receiving a stream, another user connected would have the same. >> But if no one connected, the stream is stopped. My question is how to >> start playing (something like streaming to null) even if no one is >> connected. And if a user connects to RTSP server after 5 minutes, he >> receives stream not from the beginning. >> > > That's not a problem, because the server's streaming software will read > from the input source object only when it actually needs data (to send to a > client). 
Therefore, as long as the input source object delivers data > 'live', upon request, the system will work properly. > -- > > Ross Finlayson > Live Networks, Inc. > http://www.live555.com/ > _______________________________________________ > live-devel mailing list > live-devel at lists.live555.com > http://lists.live555.com/mailman/listinfo/live-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Fri Feb 20 13:02:21 2009 From: finlayson at live555.com (Ross Finlayson) Date: Fri, 20 Feb 2009 13:02:21 -0800 Subject: [Live-devel] Remote close handling in RTSPClient In-Reply-To: References: Message-ID: >I have noticed a tiny bug in RTSPClient disconnect handling. The >current line 2513 in RTSPClient.cpp reads: > envir().setResultErrMsg("Failed to read response: "); > >However, reading 0 bytes is a normal operation and not an error, >thats why errno will not be set and the caller will get incomplete >status information with this implementation. I propose to change >this line to: > envir().setResultMsg("Failed to read response: Connection was >closed by the remote host."); Thanks. This will be fixed in the next release of the code. > Sorry if this post is inappropriate No, this mailing list is exactly the right place for reporting bugs. -- Ross Finlayson Live Networks, Inc. http://www.live555.com/ From microqq001 at gmail.com Sat Feb 21 23:57:01 2009 From: microqq001 at gmail.com (qiqi z) Date: Sun, 22 Feb 2009 15:57:01 +0800 Subject: [Live-devel] Could somebody send me a copy of a Tutorial of H264 RTP Streaming In-Reply-To: <25742952.138001234842821197.JavaMail.coremail@app157.163.com> References: <25742952.138001234842821197.JavaMail.coremail@app157.163.com> Message-ID: <78b5e57e0902212357n4df0081ftb00dda2e45849b94@mail.gmail.com> Hi,baxkstreet , Is this the one? http://www.fileden.com/files/2008/12/4/2210768/live555_H.264_tutorial.tar.gz QiQi 2009/2/17 baxkstreet : > HI all! 
> I found a tutorial of h264 rtp streaming post by Mojtaba Hosseini on > http://lists.live555.com/pipermail/live-devel/2007-June/007030.html .but the > download url is unavailable now .if someone have a copy ,could you send it > to me thinks! > best regards > kaka > > > > ________________________________ > _______________________________________________ > live-devel mailing list > live-devel at lists.live555.com > http://lists.live555.com/mailman/listinfo/live-devel > > From finlayson at live555.com Mon Feb 23 02:20:33 2009 From: finlayson at live555.com (Ross Finlayson) Date: Mon, 23 Feb 2009 02:20:33 -0800 Subject: [Live-devel] Change to 'trick play' Transport Stream generation - reduces output bit rate Message-ID: Some people have reported having problems with 'trick play' operations on Transport Streams, due to the high bitrate of the 'trick play' output streams. By popular demand, I have now released a new version (2009.02.23) of the "LIVE555 Streaming Media" software that changes the way that Transport Streams are generated for 'trick play' operations (fast-forward or reverse play). Now, each I-frame (i.e., key frame) appears no more than once in the output Transport Stream for 'trick play' operations. This will have the effect of reducing the average output bitrate for 'trick play' streams, except for high 'scale' values. For those of you who have been having problems with the high bit rate of 'trick play' Transport Stream data - please try this new code, and let us know if this new version of the code improves things. Note that because these changes are experimental, I have not yet changed the prebuilt binary versions of the "LIVE555 Media Server" application - therefore, if you use this application, you will need to build your own version from the new source code.
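[Editor's note] The I-frame selection described above can be pictured with a toy model. This is an illustration only; the real logic lives in "MPEG2TransportStreamTrickModeFilter.cpp", and the function below (name, parameters, and the fixed-GOP assumption) is hypothetical, not library code. When fast-forwarding at 'scale', each output slot maps back to the nearest preceding I-frame; the old behavior re-emitted that I-frame to keep the original frame rate, while the new behavior emits each I-frame at most once:

```cpp
#include <vector>

// Toy model: frames 0..numFrames-1, with an I-frame every gopSize frames.
// Stepping through the source at 'scale', emit the nearest preceding
// I-frame for each output slot.  keepOriginalFrameRate=true models the
// old behavior (I-frames may be duplicated); false models the new one.
std::vector<int> trickPlayIFrames(int numFrames, int gopSize, int scale,
                                  bool keepOriginalFrameRate) {
  std::vector<int> out;
  int lastEmitted = -1;
  for (int pos = 0; pos < numFrames; pos += scale) {
    int iFrame = (pos / gopSize) * gopSize;  // nearest preceding I-frame
    if (keepOriginalFrameRate || iFrame != lastEmitted) {
      out.push_back(iFrame);
      lastEmitted = iFrame;
    }
  }
  return out;
}
```

Note that once 'scale' exceeds the GOP size, every step lands in a new GOP, so no I-frame is duplicated and the two behaviors coincide; that matches the remark above that the bit-rate reduction disappears for high 'scale' values.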
If - for whatever reason - you wish to go back to the old behavior (in which we always keep the original frame rate, even if it means duplicating I-frames), then you can do so by changing the definition of "KEEP_ORIGINAL_FRAME_RATE" in "liveMedia/MPEG2TransportStreamTrickModeFilter.cpp" to "True". However, if you find you need to do this, please let us know why. This change to the code is experimental, and I will back it out if people end up having problems with it. -- Ross Finlayson Live Networks, Inc. http://www.live555.com/ From patbob at imoveinc.com Mon Feb 23 16:57:11 2009 From: patbob at imoveinc.com (Patrick White) Date: Mon, 23 Feb 2009 16:57:11 -0800 Subject: [Live-devel] UDP sockets don't use ReceivingInterfaceAddr? In-Reply-To: References: Message-ID: <200902231657.11289.patbob@imoveinc.com> I'm trying to use ReceivingInterfaceAddr & SendingInterfaceAddr to control which interface RTSP/RTP traffic is going in and out on. In setupDatagramSocket(), there is the line: if (port.num() == 0) addr = ReceivingInterfaceAddr; This line is just prior to the bind() call. So.. UDP ports are only bound to a particular interface if portnum is also 0. Because of this, all the UDP sockets used for RTP traffic are bound to IP_ADDRANY. Is there a particular reason for this? Is it a multicast thing or something? Can I change it to make it always set addr from ReceivingInterfaceAddr? thanks, patbob From matt at schuckmannacres.com Mon Feb 23 17:26:58 2009 From: matt at schuckmannacres.com (Matt Schuckmann) Date: Mon, 23 Feb 2009 17:26:58 -0800 Subject: [Live-devel] Problem with RTSPServer::RTSPClientSession::incomingRequestHandler1() and SET_PARAMETER & GET_PARAMETER Message-ID: <49A34CE2.4090106@schuckmannacres.com> I see a problem with the while loop for detecting the end of an RTSP command in RTSPServer::RTSPClientSession::incomingRequestHandler1() and the commands SET_PARAMETER and GET_PARAMETER, either of which may carry actual parameters to set.
Basically, the while loop looks for the "\r\n\r\n" sequence and then determines that it has gotten the full command. However, in both SET_PARAMETER and GET_PARAMETER this sequence appears between the Content-type: header and the parameters to set or get. I guess the solution is going to have to be to make this while loop smarter: detect what type of command is being received, and if it's a GET or SET, look at the content-length header to determine how much data needs to be read. Any thoughts on the best way to accomplish this? Is there generalized header parsing code anywhere in the library? Thanks, Matt S. PS. The current code appears to work most of the time, in that generally an entire message comes in at once and the server passes all the data it received on to the lower level handling code, even the stuff that is beyond what it thinks was the end of the message. However, if the socket receive code ever split the message up after the first "\r\n\r\n", the server could get very confused. PPS. Am I using the term header correctly in referring to things like the Content-type and Content-length lines? From matt at schuckmannacres.com Mon Feb 23 18:10:36 2009 From: matt at schuckmannacres.com (Matt Schuckmann) Date: Mon, 23 Feb 2009 18:10:36 -0800 Subject: [Live-devel] Server getting confused with RTCP message before PLAY command (when using RTP over TCP) Message-ID: <49A3571C.8090906@schuckmannacres.com> This is sort of related to my last message, mostly because I found the 2 problems at the same time and around the same place in the code. I'm testing RTP over TCP (client and server are both based on LiveMedia), and occasionally (perhaps 50% of the time) the server is responding to the PLAY command with a "400 Bad Request" response. It looks like what is happening is that just before the RTSPClient object sends the PLAY command, an RTCP message is sent from the client (probably from one of the RTCP objects that got created as a result of the earlier SETUP command).
Anyway, the RTSPServer::RTSPClientSession::incomingRequestHandler() method is getting very confused by the RTCP message that precedes the PLAY command (in fact it never sees the PLAY command, because it stops parsing when it sees $\001). I would assume that the RTSPServer::RTSPClientSession::incomingRequestHandler() code should watch for RTCP messages and properly ignore them, or forward them on to the session or subsession RTCP handler objects (I'm really not clear on how it could forward the data, but maybe it's possible). Or perhaps the client code shouldn't be allowed to send RTCP messages until after the PLAY command has been issued. Any suggestions on what should be done here? Thanks Matt S. PS. I should probably note my code isn't like openRTSP in that it doesn't do DESCRIBE, SETUP, and PLAY all at once; I do DESCRIBE followed by SETUP, then I let the TaskScheduler run while I let my UI do some work, then I do a couple of SET_PARAMETER commands, then some more UI work, followed by the PLAY command. I might be able to change my code to do everything at once, but I figure somebody isn't working according to the standard, and it would be nice if a general purpose server like liveMedia could handle this type of sequence. From finlayson at live555.com Mon Feb 23 18:27:47 2009 From: finlayson at live555.com (Ross Finlayson) Date: Mon, 23 Feb 2009 18:27:47 -0800 Subject: [Live-devel] Problem with RTSPServer::RTSPClientSession::incomingRequestHandler1() and SET_PARAMETER & GET_PARAMETER In-Reply-To: <49A34CE2.4090106@schuckmannacres.com> References: <49A34CE2.4090106@schuckmannacres.com> Message-ID: >I guess the solution is going to have to be to make this while loop >smarter to detect what type of command is being received and if it's >a GET or SET look at the content-length header to determine how much >data needs to be read. Yes.
In fact, the code should really be looking for (and handling, if present) a "Content-Length:" header for *any* command (not just SET_PARAMETER or GET_PARAMETER), in case people want to subclass the server code to handle non-standard custom data of some sort (ugh). >Any on the best way to accomplish this? Is there generalized header >parsing code anywhere in the library. No, not really. >PS. The current code appears to work most of the time in that >generally an entire message comes in at once and the server passes >all the data it received on to the lower level handling code, even >the stuff that is beyond what it thinks was the end of the message. >However, if the socket receive code ever split the message up after >the first the server could get very confused. Yes. We already handle messages that get split before the \r\n\r\n, or even those that get split in the midst of the \r\n\r\n. The code would also need to handle the possibility of getting the "Content-Length:" bytes of data in multiple chunks. -- Ross Finlayson Live Networks, Inc. http://www.live555.com/ From finlayson at live555.com Mon Feb 23 22:11:49 2009 From: finlayson at live555.com (Ross Finlayson) Date: Mon, 23 Feb 2009 22:11:49 -0800 Subject: [Live-devel] Server getting confused with RTCP message before PLAY command (when using RTP over TCP) In-Reply-To: <49A3571C.8090906@schuckmannacres.com> References: <49A3571C.8090906@schuckmannacres.com> Message-ID: >Or perhaps the client code shouldn't be allowed to send RTCP >messages until after the PLAY command has been issued. You're right - it shouldn't. You've discovered a bug. >PS. I should probably note my code isn't like openRTSP in that it >doesn't do DESCRIBE, SETUP, and PLAY all at once I do DESCRIBE >followed by SETUP, then I let the TaskScheduler run while I let my >UI do some work, then I do a couple of SET_PARAMETER commands, then >some more UI work followed by the PLAY command. Yes, that's why you managed to discover the bug. Thanks.
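[Editor's note] Background on the "$\001" that trips up the parser in this thread: when RTP/RTCP is interleaved over the RTSP TCP connection, RFC 2326 (section 10.12) frames each packet as an ASCII '$' (0x24), one channel-identifier byte, and a two-byte big-endian payload length. A minimal sketch of recognizing that frame header (a hypothetical helper, not the library's code):

```cpp
#include <cstdint>
#include <cstddef>

// RFC 2326 sec. 10.12 interleaved framing: '$', channel byte, then a
// 16-bit big-endian payload length.  A server-side parser could use a
// check like this to recognize (and skip over) interleaved RTCP data
// that arrives on the connection before the PLAY request.
bool parseInterleavedHeader(const uint8_t* buf, size_t len,
                            uint8_t& channel, uint16_t& payloadLen) {
  if (len < 4 || buf[0] != '$') return false;  // not an interleaved frame
  channel = buf[1];
  payloadLen = (uint16_t)((buf[2] << 8) | buf[3]);
  return true;
}
```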
I'll fix this as part of my forthcoming planned major upgrade of "RTSPClient". -- Ross Finlayson Live Networks, Inc. http://www.live555.com/ From finlayson at live555.com Mon Feb 23 22:31:11 2009 From: finlayson at live555.com (Ross Finlayson) Date: Mon, 23 Feb 2009 22:31:11 -0800 Subject: [Live-devel] UDP sockets don't use ReceivingInterfaceAddr? In-Reply-To: <200902231657.11289.patbob@imoveinc.com> References: <200902231657.11289.patbob@imoveinc.com> Message-ID: >I'm trying to use ReceivingInterfaceAddr & SendingInterfaceAddr to control >which interface RTSP/RTP traffic is going in and out on. > >In setupDatagramSocket(), there is the line: > > if (port.num() == 0) addr = ReceivingInterfaceAddr; > >This line is just prior to the bind() call. So.. UDP ports are only bound to >a particular interface if portnum is also 0. Because of this, all the UDP >sockets used for RTP traffic are bound to IP_ADDRANY. > >Is there a particular reason for this? Is it a multicast thing or something? Yes, I think so. I think the intention was that you would want to create a datagram socket with an initial non-zero port number only for multicast streams, in which case you probably wouldn't want to bind() to something other than INADDR_ANY. But I'm not sure. But anyway, if you're really doing this for unicast RTSP/RTP, then you shouldn't run into this issue, because - in this case - the port number should be 0 when the socket is created, I think. >Can I change it to make it always set addr from ReceivingInterfaceAddr? This is Open Source; you can change it to whatever you want :-) I can't guarantee that it will work, though. -- Ross Finlayson Live Networks, Inc. 
http://www.live555.com/ From gbzbz at yahoo.com Tue Feb 24 01:26:50 2009 From: gbzbz at yahoo.com (gather bzbz) Date: Tue, 24 Feb 2009 01:26:50 -0800 (PST) Subject: [Live-devel] H264 streaming problem Message-ID: <877255.35193.qm@web51304.mail.re2.yahoo.com> Hi, I am trying to implement the H264 stream framer: I wrote a simple app that does multicast streaming, then I use openRTSP to receive the stream. The attached log file contains the output from openRTSP. The saved file is hellovideo-H264-2; when I do "file hellovideo-H264-2", I get "hellovideo-H264-2: JVT NAL sequence", but when I use mplayer or vlc to play back the file, it shows junk color bars on the screen. Nothing got printed out from the server side... Please give me some hints! Thanks -------------- next part -------------- A non-text attachment was scrubbed... Name: log Type: application/octet-stream Size: 3389 bytes Desc: not available URL: From amit.yedidia at elbitsystems.com Tue Feb 24 02:01:21 2009 From: amit.yedidia at elbitsystems.com (Yedidia Amit) Date: Tue, 24 Feb 2009 12:01:21 +0200 Subject: [Live-devel] H264 streaming problem In-Reply-To: <877255.35193.qm@web51304.mail.re2.yahoo.com> Message-ID: Try renaming it to have the extension .264. ("hellovideo-H264-2.264") That way the vlc player will know to treat it as an annex-B H264 stream. Regards, Amit Yedidia Elbit System Ltd. Email: amit.yedidia at elbitsystems.com Tel: 972-4-8318905 ---------------------------------------------------------- > -----Original Message----- > From: live-devel-bounces at ns.live555.com > [mailto:live-devel-bounces at ns.live555.com] On Behalf Of gather bzbz > Sent: Tuesday, February 24, 2009 11:27 AM > To: live-devel at ns.live555.com > Subject: [Live-devel] H264 streaming problem > > Hi, > > I try to implement the H264 streamframer, I write a simple > app that does the multicast streaming. then I use the > openRTSP to receive the stream. The attached log file > contains the output from openRTSP.
The saved file is > hellovideo-H264-2, when I do "file hellovideo-H264-2", I get > "hellovideo-H264-2: JVT NAL sequence", but when I use mplayer > or vlc to playback the file, it shows junk color bars on the > screen. Nothing got printed out from the server side... > Please give me some hints! > > Thanks > > > > The information in this e-mail transmission contains proprietary and business sensitive information. Unauthorized interception of this e-mail may constitute a violation of law. If you are not the intended recipient, you are hereby notified that any review, dissemination, distribution or duplication of this communication is strictly prohibited. You are also asked to contact the sender by reply email and immediately destroy all copies of the original message. From gabriele.deluca at hotmail.com Tue Feb 24 02:03:06 2009 From: gabriele.deluca at hotmail.com (Gabriele De Luca) Date: Tue, 24 Feb 2009 11:03:06 +0100 Subject: [Live-devel] Question on GroupSock Message-ID: Hi Ross, I am studying the following classes: GroupSock->OutputSock->Socket, and setupDatagramSocket() in the GroupSockHelper. I have seen that setupDatagramSocket() creates a socket and binds it to the port parameter. So, if I have already created the socket and done the bind, how can I avoid the setupDatagramSocket() function without significantly changing the library? Thanks in advance for your feedback, Gabriele _________________________________________________________________ Party? con Eventi! http://events.live.com/?showunauth=1 From gbzbz at yahoo.com Tue Feb 24 02:55:48 2009 From: gbzbz at yahoo.com (gather bzbz) Date: Tue, 24 Feb 2009 02:55:48 -0800 (PST) Subject: [Live-devel] H264 streaming problem Message-ID: <569433.99993.qm@web51308.mail.re2.yahoo.com> I renamed the file hellovideo-H264-2 to hello.264; "file hello.264" gives "hello.264: JVT NAL sequence", and mplayer gives Playing hello.264. H264-ES file format detected. Video: Cannot read properties. No stream found.
At least mplayer can play hellovideo-H264-2 with color bars, but it cannot play hello.264 at all. Now I am officially lost..... From anna.richter1 at gmx.net Tue Feb 24 02:59:24 2009 From: anna.richter1 at gmx.net (Anna Richter) Date: Tue, 24 Feb 2009 11:59:24 +0100 Subject: [Live-devel] Streaming mpeg1 videos only! Message-ID: <20090224105924.6790@gmx.net> Hello! I have one question and hope somebody can help me. I use live555mediaServer and I need to stream mpeg1-videos ONLY, meaning just a video file (no audio). Is it possible? The comments I see when starting the server say that only mpeg4 video elementary streams are accepted; mpeg1 can be used only for a program stream (audio+video). Is there a way to stream mpeg1-videos only? Best wishes Anna -- Psssst! Schon vom neuen GMX MultiMessenger gehört? Der kann`s mit allen: http://www.gmx.net/de/go/multimessenger01 From soumya.patra at lge.com Tue Feb 24 03:03:47 2009 From: soumya.patra at lge.com (soumya patra) Date: Tue, 24 Feb 2009 16:33:47 +0530 Subject: [Live-devel] IPV6 implementation Message-ID: <20090224110348.068DC55800B@LGEMRELSE7Q.lge.com> Hi Ross, I want to make live 555 IPV6 compatible. Can you please give some idea of how to port live 555 from IPV4 to IPV6? What are all the changes required to make it IPV6 compatible? Are the changes required in the groupsock library? Will the IPV6 implementation affect the RTP transmission or not? Waiting for your response. Regards Soumya -------------- next part -------------- An HTML attachment was scrubbed... URL: From amit.yedidia at elbitsystems.com Tue Feb 24 03:26:39 2009 From: amit.yedidia at elbitsystems.com (Yedidia Amit) Date: Tue, 24 Feb 2009 13:26:39 +0200 Subject: [Live-devel] H264 streaming problem In-Reply-To: <569433.99993.qm@web51308.mail.re2.yahoo.com> Message-ID: The problem is that special headers called SPS and PPS (sequence parameter set and picture parameter set) are not included in the file.
Those headers may be carried in-band (which is probably not your case) or in the SDP. My guess is that your source sent them in the SDP and not in-band (in the RTP stream), and that's why they are not found in the file. Regards, Amit Yedidia Elbit System Ltd. Email: amit.yedidia at elbitsystems.com Tel: 972-4-8318905 ---------------------------------------------------------- > -----Original Message----- > From: live-devel-bounces at ns.live555.com > [mailto:live-devel-bounces at ns.live555.com] On Behalf Of gather bzbz > Sent: Tuesday, February 24, 2009 12:56 PM > To: live-devel at ns.live555.com > Subject: Re: [Live-devel] H264 streaming problem > > > I rename the file hellovideo-H264-2 to hello.264, "file > hello.264" gives "hello.264: JVT NAL sequence", mplayer gives > > Playing hello.264. > H264-ES file format detected. > Video: Cannot read properties. > No stream found. > > at least, mplayer can play hellovideo-H264-2 with color bar, > but it can not play hello.264 at all. Now I am officially lost..... > > > > _______________________________________________ > live-devel mailing list > live-devel at lists.live555.com > http://lists.live555.com/mailman/listinfo/live-devel > The information in this e-mail transmission contains proprietary and business sensitive information. Unauthorized interception of this e-mail may constitute a violation of law. If you are not the intended recipient, you are hereby notified that any review, dissemination, distribution or duplication of this communication is strictly prohibited. You are also asked to contact the sender by reply email and immediately destroy all copies of the original message. 
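[Editor's note] For readers hitting the same problem: one common workaround is to take the SPS and PPS from the SDP's "sprop-parameter-sets" attribute (base64-encoded), decode them, and prepend each to the recorded stream with an Annex-B start code so players can decode the file. A minimal sketch of the prepending step; the function name is hypothetical, and the byte values used below are placeholders, not a real SPS/PPS:

```cpp
#include <vector>
#include <cstdint>

// Prepend each decoded parameter set (SPS, PPS, ...) with the 4-byte
// Annex-B start code 00 00 00 01, then append the recorded NAL stream.
// Writing the result to a .264 file gives players the parameter sets
// they need up front.
std::vector<uint8_t> prependParameterSets(
    const std::vector<std::vector<uint8_t>>& paramSets,
    const std::vector<uint8_t>& recordedStream) {
  static const uint8_t startCode[4] = {0x00, 0x00, 0x00, 0x01};
  std::vector<uint8_t> out;
  for (const auto& ps : paramSets) {
    out.insert(out.end(), startCode, startCode + 4);
    out.insert(out.end(), ps.begin(), ps.end());
  }
  out.insert(out.end(), recordedStream.begin(), recordedStream.end());
  return out;
}
```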
From etienne.bomcke at uclouvain.be Tue Feb 24 03:39:48 2009 From: etienne.bomcke at uclouvain.be (=?ISO-8859-1?Q?Etienne_B=F6mcke?=) Date: Tue, 24 Feb 2009 12:39:48 +0100 Subject: [Live-devel] H264 streaming problem In-Reply-To: <569433.99993.qm@web51308.mail.re2.yahoo.com> References: <569433.99993.qm@web51308.mail.re2.yahoo.com> Message-ID: Are you sure you correctly sent the SPS/PPS nals? Post the .264 file if you're not sure, I might be able to take a look and help. Etienne On 24 Feb 2009, at 11:55, gather bzbz wrote: > > I rename the file hellovideo-H264-2 to hello.264, "file hello.264" > gives "hello.264: JVT NAL sequence", mplayer gives > > Playing hello.264. > H264-ES file format detected. > Video: Cannot read properties. > No stream found. > > at least, mplayer can play hellovideo-H264-2 with color bar, but it > can not play hello.264 at all. Now I am officially lost..... > > > > _______________________________________________ > live-devel mailing list > live-devel at lists.live555.com > http://lists.live555.com/mailman/listinfo/live-devel -- Etienne Bömcke Laboratoire de Télécommunications et Télédétections Université Catholique de Louvain Bâtiment Stevin - Place du Levant, 2 B-1348 Louvain-la-Neuve e-mail : etienne.bomcke at uclouvain.be tel : +32 10 47 85 51 From finlayson at live555.com Tue Feb 24 07:14:23 2009 From: finlayson at live555.com (Ross Finlayson) Date: Tue, 24 Feb 2009 07:14:23 -0800 Subject: [Live-devel] Question on GroupSock In-Reply-To: References: Message-ID: >I have seen that setupDatagramSocket() create a socket and bind it >to the port parameter. >So, if I have already created the socket and the bind, how to avoid >the setupDatagramSocket() function without change significantly the >library? No, you can't do this without changing the existing code. -- Ross Finlayson Live Networks, Inc.
http://www.live555.com/ From finlayson at live555.com Tue Feb 24 07:17:11 2009 From: finlayson at live555.com (Ross Finlayson) Date: Tue, 24 Feb 2009 07:17:11 -0800 Subject: [Live-devel] IPV6 implementation In-Reply-To: <20090224110348.068DC55800B@LGEMRELSE7Q.lge.com> References: <20090224110348.068DC55800B@LGEMRELSE7Q.lge.com> Message-ID: > I want to make live 555 IPV6 compatible. Can you please give >some idea to port live 555 IPV4 to IPV6? > What are all changes required to make it IPV6 compatible. Unfortunately, the changes required to support IPv6 will be extensive. Support for IPv6 is on our 'to do' list, but unfortunately there is no ETA right now. -- Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Tue Feb 24 07:39:44 2009 From: finlayson at live555.com (Ross Finlayson) Date: Tue, 24 Feb 2009 07:39:44 -0800 Subject: [Live-devel] Streaming mpeg1 videos only! In-Reply-To: <20090224105924.6790@gmx.net> References: <20090224105924.6790@gmx.net> Message-ID: >I have one question and hope, somebody can help me. I use >live555mediaServer and I need to stream mpeg1-videos ONLY, means >just a video-file (no audio). Is it possible? The comments I see, >when starting the server tell, that only mpeg4 video elementary >streams are accepted. mpeg1 can be used only for a program stream >(audio+video). > >Is there a way to stream mpe1-videos only? Yes. 
You could do this by changing lines 130-135 in "DynamicRTSPServer.cpp" from

  // Assumed to be a MPEG-1 or 2 Program Stream (audio+video) file:
  NEW_SMS("MPEG-1 or 2 Program Stream");
  MPEG1or2FileServerDemux* demux
    = MPEG1or2FileServerDemux::createNew(env, fileName, reuseSource);
  sms->addSubsession(demux->newVideoServerMediaSubsession());
  sms->addSubsession(demux->newAudioServerMediaSubsession());

to

  // Assumed to be a MPEG-1 or 2 Video Elementary Stream file:
  NEW_SMS("MPEG-1 or 2 Video Elementary Stream");
  sms->addSubsession(MPEG1or2VideoFileServerMediaSubsession
    ::createNew(env, fileName, reuseSource, False));

If you do this, your MPEG-1 or 2 Video Elementary Stream files must have the filename extension ".mpg", and you will no longer be able to stream MPEG Program Stream files. -- Ross Finlayson Live Networks, Inc. http://www.live555.com/ From patbob at imoveinc.com Tue Feb 24 08:32:34 2009 From: patbob at imoveinc.com (Patrick White) Date: Tue, 24 Feb 2009 08:32:34 -0800 Subject: [Live-devel] UDP sockets don't use ReceivingInterfaceAddr? In-Reply-To: References: <200902231657.11289.patbob@imoveinc.com> Message-ID: <200902240832.34619.patbob@imoveinc.com> > >I'm trying to use ReceivingInterfaceAddr & SendingInterfaceAddr to control > >which interface RTSP/RTP traffic is going in and out on. > >In setupDatagramSocket(), there is the line: > > if (port.num() == 0) addr = ReceivingInterfaceAddr; > >This line is just prior to the bind() call. So.. UDP ports are only bound > > to a particular interface if portnum is also 0. Because of this, all the > > UDP sockets used for RTP traffic are bound to IP_ADDRANY. > >Is there a particular reason for this? Is it a multicast thing or > > something? > > Yes, I think so. I think the intention was that you would want to > create a datagram socket with an initial non-zero port number only > for multicast streams, in which case you probably wouldn't want to > bind() to something other than INADDR_ANY. But I'm not sure.
> > But anyway, if you're really doing this for unicast RTSP/RTP, then > you shouldn't run into this issue, because - in this case - the port > number should be 0 when the socket is created, I think. We're doing unicast, and the port number is never 0 through this code -- it's always 6970+, so the adapter assignment can never happen. By default, ReceivingInterfaceAddr is 0 (IP_ADDRANY), which is also what addr is by default, so normally this assignment makes no difference. I've tried it with unicast RTP and it works fine. It should also work fine with multicast, restricting the outbound traffic to a particular adapter, but I don't have any way to test multicast anything at the current time. I figured I'd ask before committing the change, just in case you or someone else knows why it needs to be this way. Since we're trying to push as many changes as we can back into your library, I don't want to push back bugs if we can help it. > >Can I change it to make it always set addr from ReceivingInterfaceAddr? > This is Open Source; you can change it to whatever you want :-) I > can't guarantee that it will work, though. The change works fine for us.. and yes, we have the source so we can change it. thanks, patbob From matt at schuckmannacres.com Tue Feb 24 11:31:17 2009 From: matt at schuckmannacres.com (Matt Schuckmann) Date: Tue, 24 Feb 2009 11:31:17 -0800 Subject: [Live-devel] Problem with RTSPServer::RTSPClientSession::incomingRequestHandler1() and SET_PARAMETER & GET_PARAMETER In-Reply-To: References: <49A34CE2.4090106@schuckmannacres.com> Message-ID: <49A44B05.60607@schuckmannacres.com> Thanks for the info. I'm working on writing an RTSPRequest handler class to handle receiving requests across multiple transport packets and deal with the Content-length header, and I'll try to roll the ParseRTSPRequestString() code into it. I'm also going to add some minimal code to ignore interleaved RTP/RTCP messages, just because that other problem is getting in my way right now.
One question I have is: does the capitalization of the headers matter? In the RTSP standard it looks like it is supposed to be "Content-Length"; however, I see it as both "Content-Length" and "Content-length". I suppose, to be truly versatile, it should be a case-insensitive match. Thanks, Matt S. Ross Finlayson wrote: >> I guess the solution is going to have to be to make this while loop >> smarter to detect what type of command is being received and if it's >> a GET or SET look at the content-length header to determine how much >> data needs to be read. > > Yes. In fact, the code should really be looking for (and handling, if > present) a "Content-Length:" header for *any* command (not just > SET_PARAMETER or GET_PARAMETER), in case people want to subclass the server code to > handle non-standard custom data of some sort (ugh). > > >> Any on the best way to accomplish this? Is there generalized header >> parsing code anywhere in the library. > > No, not really. > > >> PS. The current code appears to work most of the time in that >> generally an entire message comes in at once and the server passes >> all the data it received on to the lower level handling code, even >> the stuff that is beyond what it thinks was the end of the message. >> However, if the socket receive code ever split the message up after >> the first the server could get very confused. > > Yes. We already handle messages that get split before the \r\n\r\n, > or even those that get split in the midst of the \r\n\r\n. The code > would also need to handle the possibility of getting the > "Content-Length:" bytes of data in multiple chunks.
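[Editor's note] On the capitalization question: RTSP (RFC 2326) follows HTTP's conventions, where header field names are case-insensitive, so "Content-Length" and "Content-length" must be treated as the same header. A minimal sketch of a case-insensitive lookup (a hypothetical helper, not the library's parser):

```cpp
#include <string>
#include <cctype>
#include <cstdlib>

// Scan the raw request text for "content-length:" ignoring case, and
// return the header's numeric value, or -1 if the header is absent.
long findContentLength(const std::string& request) {
  static const std::string name = "content-length:";  // already lowercase
  for (size_t i = 0; i + name.size() <= request.size(); ++i) {
    size_t j = 0;
    while (j < name.size() &&
           std::tolower((unsigned char)request[i + j]) == name[j]) ++j;
    if (j == name.size())
      return std::atol(request.c_str() + i + j);  // skips leading spaces
  }
  return -1;
}
```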
From rkunert at wisc.edu Tue Feb 24 08:53:40 2009 From: rkunert at wisc.edu (Richard Kunert) Date: Tue, 24 Feb 2009 10:53:40 -0600 Subject: [Live-devel] QuickTime Broadcaster, Axis Network Cameras and Live555 Message-ID: I've noticed some interesting / aggravating behavior using live555 as included in VLC to record rtsp audio streams from QuickTime Broadcaster and video from a pair of Axis network cameras. I'm hoping someone here can shed some light on it. The problem is that mpeg-4 rtsp audio and streams saved to QuickTime .mov files are set to 75% of their correct duration (file size is normal). This is very repeatable. After a (long) period of troubleshooting I found the RTCP decoder in Wireshark and looked at the packets Broadcaster is sending. The NTP timestamps are in local time, not UTC as defined in RFC 3550. The obvious workaround for this was to set the time zone of the machine running QuickTime Broadcaster to GMT. That fixes the problem completely. With no other changes I can turn the data truncation on and off just by changing the time zone on that machine. Putting it in GMT results in properly recorded files, any other time zone and I'm back at 75% duration. A few more data points: I have identical results with mpeg-4 rtsp video streams from my Axis cameras. 75% duration. Unfortunately they don't seem to be capable of producing a correct RTCP timestamp at all. I'm really curious about this 75% number as it seems to be unrelated to any parameters of the streams. These are actually part of a system that's been in production for about a year. I've been running my streams through QuickTime Streaming Server and recording the resulting stream. QTSS somehow "fixes" the stream so that the duration is correct (or at least it used to, Mac OS X updates last December appear to have broken it). This is in spite of the fact that it doesn't appear to modify the RTCP timestamps that Broadcaster puts out. 
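[Editor's note] For reference on the timestamps discussed above: RFC 3550 defines the NTP timestamp in an RTCP sender report as counting seconds since 1 January 1900, UTC, so a conforming sender must not fold a local time-zone offset into it. Converting a Unix time (seconds since 1970, UTC) to the most-significant 32-bit NTP seconds word is a fixed offset (a sketch; the function name is ours, not from any library):

```cpp
#include <cstdint>
#include <ctime>

// The NTP era starts 1 Jan 1900 UTC; the Unix era starts 1 Jan 1970 UTC.
// The gap is exactly 2208988800 seconds (70 years, 17 of them leap years).
uint32_t ntpSecondsFromUnix(time_t unixSeconds) {
  const uint32_t kNtpUnixOffset = 2208988800u;
  return (uint32_t)unixSeconds + kNtpUnixOffset;
}
```

A sender that instead stamps local wall-clock time, as described above for QuickTime Broadcaster, produces NTP values shifted by the zone offset, which is consistent with the reported fix of setting the broadcasting machine's time zone to GMT.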
My preliminary hypothesis is that the RTCP timestamps, if present and correct, are used to fix some other problem in the chain. The setup: Quicktime Broadcaster streaming MPEG-4 audio over RTSP (44.1KHz AAC, 64Kbps). Two Axis network cameras streaming MPEG-4 video over RTSP. Lectures are streamed live using QuickTime Streaming Server. Streams are captured to disk using three scripted instances of VLC pointed at the streaming server. All software is the latest version as of this date. All streams from the cameras and Broadcaster are multicast, but I get the same results with unicast streams. No firewall issues. Nothing interesting gets logged from VLC with it set to maximum verboseness. Any illumination / speculation would be MUCH appreciated. If this sounds like some other part of VLC is likely to be closer to the issue I'll take my question over there. -- Richard Kunert From patbob at imoveinc.com Tue Feb 24 15:06:00 2009 From: patbob at imoveinc.com (Patrick White) Date: Tue, 24 Feb 2009 15:06:00 -0800 Subject: [Live-devel] UDP sockets don't use ReceivingInterfaceAddr? In-Reply-To: <200902240832.34619.patbob@imoveinc.com> References: <200902240832.34619.patbob@imoveinc.com> Message-ID: <200902241506.00857.patbob@imoveinc.com> In a word.. nevermind. Oops.. got my logic backwards.. Yes, the port number is always 0 for RTP UDP unicast... and that's why the if never happens and ReceivingInterfaceAddr is never used. With your added bit about multicast using a !0 port number, and thinking through all the logic reasoning, I can see now why it is done this way -- it is the only way to control which adapter multicast traffic goes out. ..And I can also see now that it would be unwise (not wrong, just unwise) to change the code, so I won't :) Sorry about any confusion.. trying to juggle too many closely related logic threads at once and wires got crossed in the 'ol noggin. 
later,
patbob

On Tuesday 24 February 2009 8:32 am, Patrick White wrote:
> > > I'm trying to use ReceivingInterfaceAddr & SendingInterfaceAddr to
> > > control which interface RTSP/RTP traffic is going in and out on.
> > > In setupDatagramSocket(), there is the line:
> > >     if (port.num() == 0) addr = ReceivingInterfaceAddr;
> > > This line is just prior to the bind() call. So UDP ports are only
> > > bound to a particular interface if portnum is also 0. Because of this,
> > > all the UDP sockets used for RTP traffic are bound to INADDR_ANY.
> > > Is there a particular reason for this? Is it a multicast thing or
> > > something?
> >
> > Yes, I think so. I think the intention was that you would want to
> > create a datagram socket with an initial non-zero port number only
> > for multicast streams, in which case you probably wouldn't want to
> > bind() to something other than INADDR_ANY. But I'm not sure.
> >
> > But anyway, if you're really doing this for unicast RTSP/RTP, then
> > you shouldn't run into this issue, because - in this case - the port
> > number should be 0 when the socket is created, I think.
>
> We're doing unicast, and the port number is never 0 through this code --
> it's always 6970+, so the adapter assignment can never happen.
>
> By default, ReceivingInterfaceAddr is 0 (INADDR_ANY), which is also what
> addr is by default, so normally this assignment makes no difference. I've
> tried it with unicast RTP and it works fine. It should also work fine with
> multicast, restricting the outbound traffic to a particular adapter, but I
> don't have any way to test multicast anything at the current time.
>
> I figured I'd ask before committing the change, just in case you or someone
> else knows why it needs to be this way. Since we're trying to push as many
> changes as we can back into your library, I don't want to push back bugs if
> we can help it.
>
> > > Can I change it to make it always set addr from ReceivingInterfaceAddr?
> >
> > This is Open Source; you can change it to whatever you want :-) I
> > can't guarantee that it will work, though.
>
> The change works fine for us.. and yes, we have the source, so we can
> change it.
>
> thanks,
> patbob
> _______________________________________________
> live-devel mailing list
> live-devel at lists.live555.com
> http://lists.live555.com/mailman/listinfo/live-devel

From braitmaier at hlrs.de Wed Feb 25 01:48:02 2009
From: braitmaier at hlrs.de (Michael Braitmaier)
Date: Wed, 25 Feb 2009 10:48:02 +0100
Subject: [Live-devel] Streaming from network source
Message-ID: <49A513D2.5020008@hlrs.de>

Hello everyone!

Please excuse me if this question is rather simple. I wondered whether there is something already implemented in Live555 to stream video not from a file but from a network socket. Currently I have looked at the code, starting with testOnDemandRTSPServer, and I came to the conclusion (following the source code across several classes) that I have to write a different version of ByteStreamFileSource. So I started writing ByteStreamSocketSource and FramedSocketSource. However, before going on, I would like to know whether I am missing something obvious in Live555. If so, a hint to the relevant source code section or class would be very nice and helpful. Thanks in advance.

Dipl.-Inf. Michael Braitmaier
HLRS - Visualization / Video Conferencing
University of Stuttgart, Germany
Website: http://www.hlrs.de/

From anna.richter1 at gmx.net Wed Feb 25 02:00:01 2009
From: anna.richter1 at gmx.net (Anna Richter)
Date: Wed, 25 Feb 2009 11:00:01 +0100
Subject: [Live-devel] CPU load for live555MediaServer
Message-ID: <20090225100001.221550@gmx.net>

Hi everybody!

I use live555MediaServer for streaming an MPEG-1 video stream to two receivers. My question concerns the CPU load of this server. When starting the server and streaming the requested video, live555MediaServer needs about 5% of the CPU. While streaming, the server is in the "sleep" state.
After the streaming is completed, the server switches to the "running" state and consumes about 80-90% of the CPU. But why is that? I tried to study the documentation as well as the source code for live555MediaServer, but since I am really new to C++ programming, I could not find the reason for this high CPU consumption. Can anybody help me? What does the server do after it stops streaming? Why does it need so much CPU?

Thanks,
Anna

From sk.wong at pacificworld.com.hk Wed Feb 25 01:55:57 2009
From: sk.wong at pacificworld.com.hk (Wong Shuen Kong)
Date: Wed, 25 Feb 2009 17:55:57 +0800
Subject: [Live-devel] Receiving Audio Data from Client While Streaming Out Media Stream
Message-ID: <49A515AD.9090705@pacificworld.com.hk>

Hi all,

Is it possible to extend testOnDemandRTSPServer to have a media session receive audio data from a remote client? We would like to use this to accomplish duplex audio transmission on our existing RTSP server, which uses the live555 media library, instead of adding another daemon process/task on our ARM9 embedded Linux platform. If possible, could you provide some hints on the necessary steps to complete the task?

Kong

-- 
Wong Shuen Kong
System Analyst
Pacific World Industrial Ltd
Unit 809, 8/F, Westley Square, 48 Hoi Yuen Road, Kwun Tong, HKSAR
Tel: (852) 2797 2733  Fax: (852) 2790 7778
http://www.pacificworld.com.hk

From finlayson at live555.com Wed Feb 25 03:02:43 2009
From: finlayson at live555.com (Ross Finlayson)
Date: Wed, 25 Feb 2009 03:02:43 -0800
Subject: [Live-devel] Streaming from network source
In-Reply-To: <49A513D2.5020008@hlrs.de>
References: <49A513D2.5020008@hlrs.de>
Message-ID: 

> Please excuse me if this question is rather simple. I wondered
> whether there is something already implemented in Live555 to stream
> video not from a file but from a network socket.
> Currently I looked at the code, starting with testOnDemandRTSPServer,
> and I came to the conclusion (following the source code across several
> classes) that I have to write a different version of ByteStreamFileSource.

Not if your input network socket has a name in the OS's file system. See
http://www.live555.com/liveMedia/faq.html#liveInput
and
http://www.live555.com/liveMedia/faq.html#liveInput-unicast

-- 
Ross Finlayson
Live Networks, Inc.
http://www.live555.com/

From finlayson at live555.com Wed Feb 25 03:08:41 2009
From: finlayson at live555.com (Ross Finlayson)
Date: Wed, 25 Feb 2009 03:08:41 -0800
Subject: [Live-devel] QuickTime Broadcaster, Axis Network Cameras and Live555
In-Reply-To: 
References: 
Message-ID: 

> I've noticed some interesting / aggravating behavior using live555
> as included in VLC to record rtsp audio streams

Although VLC uses our software to receive RTSP/RTP streams, it does not use our software to *write* files from the incoming stream data. Therefore, if you have problems with VLC's file writing, you should send your questions to a VLC mailing list, not this one.

-- 
Ross Finlayson
Live Networks, Inc.
http://www.live555.com/

From finlayson at live555.com Wed Feb 25 03:10:31 2009
From: finlayson at live555.com (Ross Finlayson)
Date: Wed, 25 Feb 2009 03:10:31 -0800
Subject: [Live-devel] Receiving Audio Data from Client While Streaming Out Media Stream
In-Reply-To: <49A515AD.9090705@pacificworld.com.hk>
References: <49A515AD.9090705@pacificworld.com.hk>
Message-ID: 

> Is it possible to extend testOnDemandRTSPServer for having a media
> session receiving audio data from remote client?

No. Our software is not set up to support full-duplex communication. For that, you should use some other software that supports SIP, not RTSP.

-- 
Ross Finlayson
Live Networks, Inc.
http://www.live555.com/

From matt at schuckmannacres.com Wed Feb 25 10:36:56 2009
From: matt at schuckmannacres.com (Matt Schuckmann)
Date: Wed, 25 Feb 2009 10:36:56 -0800
Subject: [Live-devel] Problem with RTSPServer::RTSPClientSession::incomingRequestHandler1() and SET_PARAMETER & GET_PARAMETER
In-Reply-To: <49A44B05.60607@schuckmannacres.com>
References: <49A34CE2.4090106@schuckmannacres.com> <49A44B05.60607@schuckmannacres.com>
Message-ID: <49A58FC8.4060304@schuckmannacres.com>

Am I correct in assuming that all RTSP requests must start with an alpha character? I'm trying to do some simple validation, ignoring garbage that might come across the RTSP TCP socket, and I'm thinking that while we are waiting to receive the start of a new RTSP request we can throw away anything that's not an alpha character (as determined by the isAlpha() function). Sound reasonable?

Thanks,
Matt S.

Matt Schuckmann wrote:
> Thanks for the info.
> I'm working on writing an RTSPRequest handler class to handle receiving
> a request across multiple transport packets, deal with the
> Content-Length header, and I'll try to roll the
> parseRTSPRequestString() code into it. I'm also going to add some
> minimal code to ignore interleaved RTP/RTCP messages, just because that
> other problem is getting in my way right now.
>
> One question I have: does the capitalization of the headers matter?
> In the RTSP standard it looks like it is supposed to be "Content-Length",
> however I see it as both "Content-Length" and "Content-length". I
> suppose to be truly versatile it should be a case-insensitive match.
>
> Thanks,
> Matt S.
>
> Ross Finlayson wrote:
>>> I guess the solution is going to have to be to make this while loop
>>> smarter, to detect what type of command is being received and, if it's
>>> a GET or SET, look at the Content-Length header to determine how much
>>> data needs to be read.
>>
>> Yes.
>> In fact, the code should really be looking for (and handling, if
>> present) a "Content-Length:" header for *any* command (not just
>> GET_PARAMETER / SET_PARAMETER), in case people want to subclass the
>> server code to handle non-standard custom data of some sort (ugh).
>>
>>> Any thoughts on the best way to accomplish this? Is there generalized
>>> header parsing code anywhere in the library?
>>
>> No, not really.
>>
>>> PS. The current code appears to work most of the time, in that
>>> generally an entire message comes in at once and the server passes
>>> all the data it received on to the lower-level handling code, even
>>> the stuff that is beyond what it thinks was the end of the message.
>>> However, if the socket receive code ever split the message up after
>>> the first chunk, the server could get very confused.
>>
>> Yes. We already handle messages that get split before the \r\n\r\n,
>> or even those that get split in the midst of the \r\n\r\n. The code
>> would also need to handle the possibility of getting the
>> "Content-Length:" bytes of data in multiple chunks.
> _______________________________________________
> live-devel mailing list
> live-devel at lists.live555.com
> http://lists.live555.com/mailman/listinfo/live-devel

From andy at j2kvideo.com Wed Feb 25 10:10:20 2009
From: andy at j2kvideo.com (Andy Bell)
Date: Wed, 25 Feb 2009 19:10:20 +0100
Subject: [Live-devel] Newbie: How to create a memory sink
Message-ID: 

Hi,

I am new to Live555 -- I just built it today! What I would like to do with it is connect to an RTSP server and stream the frame data into my own codec and render objects. I have had a look at openRTSP, and everything looks pretty easy for setting up a stream, but I can't work out how to get the data streaming into some memory handle which I can use with my own objects. Can anyone help me out?
-- 
Andy Bell
J2K Video Limited
T: +44 (0)20 8133 2473  M: +34 685 130097
E: andy at j2kvideo.com  W: www.j2kvideo.com

From RGlobisch at csir.co.za Wed Feb 25 20:54:28 2009
From: RGlobisch at csir.co.za (Ralf Globisch)
Date: Thu, 26 Feb 2009 06:54:28 +0200
Subject: [Live-devel] Newbie: How to create a memory sink
Message-ID: <49A63CA2.5DA9.004D.0@csir.co.za>

As a starting point, have a look at the FileSink class. The two main steps involved are:
1) Inherit from MediaSink.
2) Override the virtual afterGettingFrame1(unsigned frameSize, struct timeval presentationTime) method and call your own "addData"-like method, where you can do whatever it is you need to.

Then, similar to openRTSP's playCommon.cpp file, create an instance of your own sink before starting the event loop.

From gbzbz at yahoo.com Wed Feb 25 14:27:53 2009
From: gbzbz at yahoo.com (gather bzbz)
Date: Wed, 25 Feb 2009 14:27:53 -0800 (PST)
Subject: [Live-devel] H264 streaming problem
Message-ID: <775657.6379.qm@web51306.mail.re2.yahoo.com>

> The problem is that special headers called SPS and PPS (sequence
> parameter set and picture parameter set) are not included in the file.
> Those headers may be carried in-band (which is probably not your case)
> or in the SDP. My guess is that your source sent them in the SDP and
> not in-band (in the RTP stream), and that's why they are not found in
> the file.

Now I am trying to understand the live555 code:

1. The SPS and PPS NALs are treated the same way as the DATA NALs, right?
I mean, looking at H264FUAFragmenter::doGetNextFrame() in H264VideoRTPSink.cpp, do we need to actually parse fInputBuffer for SPS and PPS? I assume that SPS and PPS NALs are small enough to be delivered "as is".

2. My understanding is that most sources will give a single NAL per frame, so it is normally the case that currentNALUnitEndsAccessUnit() returns TRUE. The FU-A mode means that, even in the single-NAL-per-frame case, the NAL may be too big for the MTU to carry, but we usually do not need to do anything special because H264FUAFragmenter::doGetNextFrame() in H264VideoRTPSink.cpp has done all the work.

Thanks

From finlayson at live555.com Wed Feb 25 21:43:37 2009
From: finlayson at live555.com (Ross Finlayson)
Date: Wed, 25 Feb 2009 21:43:37 -0800
Subject: [Live-devel] Newbie: How to create a memory sink
In-Reply-To: <49A63CA2.5DA9.004D.0@csir.co.za>
References: <49A63CA2.5DA9.004D.0@csir.co.za>
Message-ID: 

> As a starting point, have a look at the FileSink class. The two main
> steps involved are:
> 1) Inherit from MediaSink.
> 2) Override the virtual afterGettingFrame1(unsigned frameSize,
> struct timeval presentationTime)

Actually, the virtual function to redefine is
    virtual Boolean continuePlaying();
See the "FileSink" implementation for a good example of how to do this.

-- 
Ross Finlayson
Live Networks, Inc.
http://www.live555.com/

From gbzbz at yahoo.com Wed Feb 25 22:25:39 2009
From: gbzbz at yahoo.com (gather bzbz)
Date: Wed, 25 Feb 2009 22:25:39 -0800 (PST)
Subject: [Live-devel] H264 streaming problem
In-Reply-To: <775657.6379.qm@web51306.mail.re2.yahoo.com>
Message-ID: <250750.95615.qm@web51301.mail.re2.yahoo.com>

Right after I capture the buffer from the hardware, and right before I pass the buffer on for processing (memmove(fTo)), I write the buffer to a file; that file contains the correct SPS and PPS. VLC/MPlayer can play back that file without problems. But the file saved by openRTSP does not have SPS and PPS. So who dropped them?
--- On Wed, 2/25/09, gather bzbz wrote:
> [earlier messages quoted in full; see above]

From andy at j2kvideo.com Thu Feb 26 00:39:20 2009
From: andy at j2kvideo.com (Andy Bell)
Date: Thu, 26 Feb 2009 09:39:20 +0100
Subject: [Live-devel] Newbie: How to create a memory sink
In-Reply-To: 
References: <49A63CA2.5DA9.004D.0@csir.co.za>
Message-ID: 

On Thu, Feb 26, 2009 at 6:43 AM, Ross Finlayson wrote:
> > As a starting point have a look at the FileSink class, the two main
> > steps involved are
> > 1) Inherit from MediaSink
> > 2) Override the virtual afterGettingFrame1(unsigned frameSize, struct
> > timeval presentationTime)
>
> Actually, the virtual function to redefine is
>     virtual Boolean continuePlaying();
>
> See the "FileSink" implementation for a good example of how to do this.

Thanks for that. Are there any things to look out for when running the event loop in a separate thread?

Andy

From finlayson at live555.com Thu Feb 26 21:48:02 2009
From: finlayson at live555.com (Ross Finlayson)
Date: Fri, 27 Feb 2009 15:48:02 +1000
Subject: [Live-devel] Newbie: How to create a memory sink
In-Reply-To: 
References: <49A63CA2.5DA9.004D.0@csir.co.za>
Message-ID: 

> Are there any things to look out for when running the event loop in
> a separate thread?

Separate from what?

Have you read the FAQ??

-- 
Ross Finlayson
Live Networks, Inc.
http://www.live555.com/

From andy at j2kvideo.com Fri Feb 27 01:36:31 2009
From: andy at j2kvideo.com (Andy Bell)
Date: Fri, 27 Feb 2009 10:36:31 +0100
Subject: [Live-devel] Newbie: How to create a memory sink
In-Reply-To: 
References: <49A63CA2.5DA9.004D.0@csir.co.za>
Message-ID: 

On Fri, Feb 27, 2009 at 6:48 AM, Ross Finlayson wrote:
> > Are there any things to look out for when running the event loop in a
> > separate thread?
>
> Separate from what?
Running the event loop in a thread apart from the main thread.

> Have you read the FAQ??

No. Is there documentation?

Andy

From dmaljur at elma.hr Fri Feb 27 03:32:06 2009
From: dmaljur at elma.hr (Dario)
Date: Fri, 27 Feb 2009 12:32:06 +0100
Subject: [Live-devel] Live555 crashes when opening/closing a series of RTSPClients
Message-ID: <000801c998cf$056272c0$ec03000a@gen47>

Hi.

I have a list of RTSP addresses from which we are streaming MPEG-4 via FileSink. I traverse each address and create an RTSP client, from which we stream content for about 2-5 seconds. The creation and deletion of each client is done by the functions in playCommon.cpp, wrapped inside a class. At some point Live555 crashes, and the debugger mostly points at FileSink::afterGettingFrame1() { -> continuePlaying() }, but that's not every time. My guess is that some frames still in the buffer continue to arrive after the session and subsessions start to close, but I'm not sure that's the case. Did anyone have this problem? I tried the clean playCommon.cpp and openRTSP.cpp functions and the problem persists. I would appreciate any help I could get, since I've been stuck here for a week.

ELMA Kurtalj d.o.o. (ELMA Kurtalj Ltd.)
Vitezićeva 1a, 10000 Zagreb, Croatia
Tel: ++385-1-3035555, Fax: ++385-1-3035599
Www: www.elma.hr; shop.elma.hr
E-mail: elma at elma.hr, pitanje at elma.hr (questions), prodaja at elma.hr (sales), servis at elma.hr (servicing)
From andre-lists at thenot.org Fri Feb 27 14:57:37 2009
From: andre-lists at thenot.org (Andre Thenot)
Date: Fri, 27 Feb 2009 17:57:37 -0500
Subject: [Live-devel] Newbie: How to create a memory sink
In-Reply-To: 
References: <49A63CA2.5DA9.004D.0@csir.co.za>
Message-ID: 

On Feb 27, 2009, at 4:36, Andy Bell wrote:
> On Fri, Feb 27, 2009 at 6:48 AM, Ross Finlayson wrote:
> > Have you read the FAQ??
>
> No. Is there documentation?

The FAQ is at http://www.live555.com/liveMedia/faq.html

Also, the test programs provide some good insight as to how the library works, as well as the Doxygen docs.

A.

From gbzbz at yahoo.com Fri Feb 27 02:24:25 2009
From: gbzbz at yahoo.com (gather bzbz)
Date: Fri, 27 Feb 2009 02:24:25 -0800 (PST)
Subject: [Live-devel] H264 streaming problem
In-Reply-To: <250750.95615.qm@web51301.mail.re2.yahoo.com>
Message-ID: <538690.91393.qm@web51305.mail.re2.yahoo.com>

The SPS and PPS showed up on the network once, per Wireshark, so they did get sent out once at the beginning of the multicast. So the question: does that mean all clients that join later will not see the SPS and/or PPS at all? How should that work? What do I need to do? Through the SDP? Thanks!

--- On Wed, 2/25/09, gather bzbz wrote:
> Right after I capture the buffer from the hardware, and right
> before I pass the buffer on for processing (memmove(fTo)), I
> write the buffer to a file; that file contains the correct
> SPS and PPS. VLC/MPlayer can play back that file without
> problems. But the file saved by openRTSP does not have SPS
> and PPS. So who dropped them?
> > [earlier messages quoted in full; see above]
From finlayson at live555.com Sat Feb 28 00:52:37 2009
From: finlayson at live555.com (Ross Finlayson)
Date: Sat, 28 Feb 2009 18:52:37 +1000
Subject: [Live-devel] H264 streaming problem
In-Reply-To: <538690.91393.qm@web51305.mail.re2.yahoo.com>
References: <538690.91393.qm@web51305.mail.re2.yahoo.com>
Message-ID: 

> The SPS and PPS showed up on the network once, per Wireshark, so
> they did get sent out once at the beginning of the multicast. So
> the question: does that mean all clients that join later will not
> see the SPS and/or PPS at all? How should that work? What do I need
> to do? Through the SDP?

Yes, in the "sprop-parameter-sets" string. Note that the function "H264VideoRTPSink::createNew()" takes this string as a parameter; it creates the appropriate SDP string for you automatically.

-- 
Ross Finlayson
Live Networks, Inc.
http://www.live555.com/