From yogesh_marathe at ti.com Fri Feb 1 00:06:09 2013 From: yogesh_marathe at ti.com (Marathe, Yogesh) Date: Fri, 1 Feb 2013 08:06:09 +0000 Subject: [Live-devel] Observing dataloss in linux user space with testRTSPClient In-Reply-To: <2471B04D-1F02-45B1-BCEA-FBDA2509A591@live555.com> References: <2471B04D-1F02-45B1-BCEA-FBDA2509A591@live555.com> Message-ID: <6EFDC7CD849764409289BF330AF9704A3E9F6B99@DBDE01.ent.ti.com> Hi, I added simple logic and a few parameters to DummySink to calculate the received bitrate in afterGettingFrame() of testRTSPClient, and printed it at 30-second intervals. This showed the bitrate received per stream. When I opened 4 connections from IP cameras (streaming at 8 Mbps CBR) I saw 30-32 Mbps of data received in the application, as expected. When I opened 8 streams (effectively 64 Mbps of data coming in) from different IP cameras, I saw that testRTSPClient could not receive more than 25 Mbps of data collectively. I can see 'ifconfig' showing 64 Mbps of data being received; I mean, if I execute 'ifconfig' twice at an interval of 30 secs and calculate the bitrate from the difference between the 'RX bytes' values, it comes out to approximately 64 Mbps. I think that means the driver is not dropping much data. I also observed that the CPU load was less than 50%, so the CPU is not overloaded. Why is the same bitrate not observed in user space? Where is the data being dropped? Is it that the application is not consuming data at a sufficient rate (i.e., select() is not getting called fast enough)? Before doing this experiment I ensured the following: 1. Changed DUMMY_SINK_RECEIVE_BUFFER_SIZE to 10000000. 2. Set unsigned RTSPClient::responseBufferSize = 10000000;. 3. Tuned my system's net.core.rmem_max and other parameters. I am also using setReceiveBufferTo() to increase the socket receive buffer to 0xDA000. Please let me know if you can foresee where the bottleneck could be. I'm running Linux on the receiving side. Regards. Yogesh. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From temp2010 at forren.org Fri Feb 1 03:23:39 2013 From: temp2010 at forren.org (temp2010 at forren.org) Date: Fri, 1 Feb 2013 06:23:39 -0500 Subject: [Live-devel] Am I accidentally H.264 encoding twice??? In-Reply-To: <2471B04D-1F02-45B1-BCEA-FBDA2509A591@live555.com> References: <2471B04D-1F02-45B1-BCEA-FBDA2509A591@live555.com> Message-ID: (((Jacob, please see Ross's quoted email response further below. ABORT the removal of double H.264...))) Ross, Thanks very much for this info. Will do. Regarding your statement [your 'framer' object should then be fed into a "H264VideoRTPSink" object, for streaming], please help me understand this. Currently, the 'framer' object is sent to videoSink->startPlaying(), where videoSink is a much more fundamental RTPSink. This is exactly as inherited from the original code of testH264VideoStreamer. There are several layers of inheritance between RTPSink and H264VideoRTPSink. Might this be part of my quality problem? Or might you have misspoken? Or does the use of H264VideoRTPSink vs. RTPSink not really matter in this case? After all, the testH264VideoStreamer program that uses RTPSink and not H264VideoRTPSink ought to work (can't say I ever ran it). Thanks very much, -Helmut P.S. To further help my understanding, please confirm that ByteStreamFileSource must simply read files that include H.264 encoding and provide that encoding forward, at least in the testH264VideoStreamer case. On Thu, Jan 31, 2013 at 7:52 PM, Ross Finlayson wrote: > WHAT? Just a while ago I realized I'm passing H.264 encoded buffers to > H264VideoStreamFramer, which is perhaps doubly encoding them to H.264 again. > > Am I accidentally H.264 encoding twice? Does the original > ByteStreamFileSource fed to H264VideoStreamFramer feed raw buffers to > H264VideoStreamFramer? > > > I think you're confused about what our software does. *None* of our > software does *any* encoding. 
In particular, the "H264VideoStreamFramer" > and "H264VideoStreamDiscreteFramer" classes each take - as input - > already-encoded H.264 video data. They don't do any 'encoding' (because > the input data is already encoded). All they do is parse the input H.264 > video data, and output a sequence of H.264 'NAL units', with proper > 'presentation time' and 'duration' values. > > The difference between these two classes is that "H264VideoStreamFramer" > takes - as input - H.264 video data that appears in a byte stream (e.g. a > file or pipe). "H264VideoStreamDiscreteFramer", on the other hand, takes > as input discrete NAL units (i.e., one NAL unit at a time), *without* any > preceding 'start code'. > > So, the choice of which of these 'framer' classes to use depends on what > kind of data comes out of your "MF_H264_DeviceSource" class. If this class > outputs an unstructured byte stream (that contains H.264 video data, with > 'start codes' preceding each NAL unit), then use a > "H264VideoStreamFramer". If, however, your "MF_H264_DeviceSource" class > outputs a sequence of NAL units (one at a time, without a preceding 'start > code'), then use a "H264VideoStreamDiscreteFramer" instead. > > In either case, your 'framer' object should then be fed into a > "H264VideoRTPSink" object, for streaming. > > Ross Finlayson > Live Networks, Inc. > http://www.live555.com/ > > > _______________________________________________ > live-devel mailing list > live-devel at lists.live555.com > http://lists.live555.com/mailman/listinfo/live-devel > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Fri Feb 1 05:26:31 2013 From: finlayson at live555.com (Ross Finlayson) Date: Fri, 1 Feb 2013 05:26:31 -0800 Subject: [Live-devel] Am I accidentally H.264 encoding twice??? 
In-Reply-To: References: <2471B04D-1F02-45B1-BCEA-FBDA2509A591@live555.com> Message-ID: <2ECB684F-B0D2-476A-8538-FAFF5EEA6C1E@live555.com> > Regarding your statement [your 'framer' object should then be fed into a "H264VideoRTPSink" object, for streaming], please help me understand this. Currently, the 'framer' object is sent to videoSink->startPlaying(), where videoSink is a much more fundamental RTPSink. This is exactly as inherited from the original code of testH264VideoStreamer. Yes, but if you look at that code, you'll see that "videoSink" was created to be a "H264VideoRTPSink". > P.S. To further help my understanding, please confirm that ByteStreamFileSource must simply read files that include H.264 encoding and provide that encoding forward, at least in the testH264VideoStreamer case. I'm not 100% sure I understand what you're asking here, but I think the answer is "yes". Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From saravanan.s at fossilshale.com Fri Feb 1 08:45:23 2013 From: saravanan.s at fossilshale.com (saravanan) Date: Fri, 1 Feb 2013 22:15:23 +0530 Subject: [Live-devel] Two client sessions to different servermedia sessions In-Reply-To: <2ECB684F-B0D2-476A-8538-FAFF5EEA6C1E@live555.com> References: <2471B04D-1F02-45B1-BCEA-FBDA2509A591@live555.com> <2ECB684F-B0D2-476A-8538-FAFF5EEA6C1E@live555.com> Message-ID: <00ec01ce009b$8a238e80$9e6aab80$@s@fossilshale.com> Hi, I am using OnDemandServerMediaSession and added two servermedia sessions with the name "live0" and "live1". Both serverMediaSessions will stream MJPG video data. I have set the reuseFirstSource flag, so that more than one client session to the same serverMediaSession (e.g. "live0") will be served with a single source; that is working fine and I could get 25 fps per session. 
But when I try to play two different client sessions to "live0" and "live1", I could only get 3-5 frames per second on the client side!!! Your input would be much appreciated. Regards, Saravanan S -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Fri Feb 1 11:15:00 2013 From: finlayson at live555.com (Ross Finlayson) Date: Fri, 1 Feb 2013 11:15:00 -0800 Subject: [Live-devel] Two client sessions to different servermedia sessions In-Reply-To: <00ec01ce009b$8a238e80$9e6aab80$@s@fossilshale.com> References: <2471B04D-1F02-45B1-BCEA-FBDA2509A591@live555.com> <2ECB684F-B0D2-476A-8538-FAFF5EEA6C1E@live555.com> <00ec01ce009b$8a238e80$9e6aab80$@s@fossilshale.com> Message-ID: <383E3150-B954-495A-AF72-B53079AB035B@live555.com> > I am using OnDemandServerMediaSession and added two servermedia sessions with the name "live0" and "live1". Both serverMediaSessions will stream MJPG video data. > > I have set the reuseFirstSource flag, so that more than one client sessions to the same serverMediaSession (e.g. "live0") will be served with the single source, that is working fine and I could get 25 fps per session. But when I try to play two different client sessions to "live0" and "live1" then I could get 3-5 frames per second in the client side!!! MJPEG streams tend to be extremely high bitrate. Wastefully so - which is why, in 2013, nobody should be sending MJPEG anymore! (See ) You are probably approaching the capacity of your network, increasing the frequency of lost packets (which *significantly* increases the frequency of lost MJPEG frames!). To help understand this - suppose, for example, that each MJPEG frame is 50 kBytes, and therefore consists of about 35 consecutive RTP packets. Note that if *any* of these 35 RTP packets gets lost, then the receiver will lose (and therefore drop) the *entire* MJPEG frame. Suppose that the packet loss rate is 1% - i.e., assume (for this example) a random loss rate of 1/100. 
Then, the probability that a (35-packet) MJPEG frame will be received correctly at the receiver's end is (99/100)^35 = 0.70 (approx). I.e., a 1% packet loss rate produces a 30% frame loss rate! Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Fri Feb 1 11:21:53 2013 From: finlayson at live555.com (Ross Finlayson) Date: Fri, 1 Feb 2013 11:21:53 -0800 Subject: [Live-devel] RTP stream retransmit In-Reply-To: References: Message-ID: <0C3F8ABE-AFCF-4348-B673-E53CF19D56AD@live555.com> > I think I figured the cause - many of the incoming RTP frames "get" the same presentation time (via 'receptionStatsDB().noteIncomingPacket(...)'. No, this is a 'wild goose chase'. The presentation times that get set for the outgoing Transport Stream packets are determined *entirely* by the "MPEG2TransportStreamFramer" (that feeds into the "SimpleRTPSink"). They have nothing to do with the presentation times of the incoming packets. > Here is my object chain, maybe you can see something wrong: > > MPEG1or2VideoRTPSource --> > MPEG2TransportStreamFromESSource --> > FrameQueue(this is mine) --> > MPEG2TransportStreamFramer --> > SimpleRTPSink This looks correct. Note that - as I noted earlier - it's the "MPEG2TransportStreamFramer" that computes the 'presentation times' of the outgoing packets. It does this by inspecting the PCR timestamps in the Transport Stream packets that are fed to it. Assuming that the input Transport Stream packets have (occasional) PCR timestamps (as all Transport Streams should have), I suspect that the problem is in the implementation of your 'FrameQueue'. You can verify this by replacing the "SimpleRTPSink" with a "FileSink" - and then trying to play the output file (with a media player). Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From saravanan.s at fossilshale.com Sat Feb 2 04:57:32 2013 From: saravanan.s at fossilshale.com (saravanan) Date: Sat, 2 Feb 2013 18:27:32 +0530 Subject: [Live-devel] Two client sessions to different servermedia sessions In-Reply-To: <383E3150-B954-495A-AF72-B53079AB035B@live555.com> References: <2471B04D-1F02-45B1-BCEA-FBDA2509A591@live555.com> <2ECB684F-B0D2-476A-8538-FAFF5EEA6C1E@live555.com> <00ec01ce009b$8a238e80$9e6aab80$@s@fossilshale.com> <383E3150-B954-495A-AF72-B53079AB035B@live555.com> Message-ID: <003001ce0144$e0cf50d0$a26df270$@s@fossilshale.com> Hi Ross, The issue is not just for MJPEG video. I tried to stream H264 (640x480) in the "live0" serverMediaSession, and MP4V (640x480) in the "live1" serverMediaSession. But the result is the same as with MJPEG streaming. Below are my environment and observations. Environment : - I am using OnDemandServerMediaSession and created "live0" and "live1" serverMediaSessions - I am capturing the video from the device, hence I am setting the fPresentationTime based on the encoder data, and the fDurationInMicroseconds to zero. Observation : - I enabled the flag "reuseFirstSource", so I could play any number of sessions to live0 or live1 without any issues. I could get 25 fps. Noticed ~1500 kbps bitrate in each client session. - But if I try to play live0 and live1 simultaneously, the streaming frame rate is reduced to 1 or 2 frames. Noticed only ~20 kbps!!! - Also noticed that there is no frame loss at the client side, but there is no data from the Live555 server!!! Any suggestion from your side will be helpful. Regards, Saravanan S From: Ross Finlayson [mailto:finlayson at live555.com] Sent: Saturday, February 02, 2013 12:45 AM To: LIVE555 Streaming Media - development & use Subject: Re: [Live-devel] Two client sessions to different servermedia sessions I am using OnDemandServerMediaSession and added two servermedia sessions with the name "live0" and "live1". Both serverMediaSessions will stream MJPG video data. 
I have set the reuseFirstSource flag, so that more than one client sessions to the same serverMediaSession(for e.g "live0") will be served with the single source, that is working fine and I could get 25 fps per session. But when I try to play two different client sessions to "live0" and "live1" then the I could get 3-5 frames per second in the client side!!! MJPEG streams tend to be extremely high bitrate. Wastefully so - which is why, in 2013, nobody should be sending MJPEG anymore! (See ) You are probably approaching the capacity of your network, increasing the frequency of lost packets (which *significantly* increases the frequency of lost MJPEG frames!). To help understand this - suppose, for example, that each MJPEG frame is 50 kBytes, and therefore consists of about 35 consecutive RTP packets. Note that if *any* of these 35 RTP packets gets lost, then the receiver will lose (and therefore drop) the *entire* MJPEG frame. Suppose that the packet loss rate is 1% - i.e., assume (for this example) a random loss rate of 1/100. Then, the probability that a (35-packet) MJPEG frame will be received correctly at the receiver's end is (99/100)^35 = 0.70 (approx). I.e., a 1% packet loss rate produces a 30% frame loss rate! Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From finlayson at live555.com Sat Feb 2 08:23:03 2013 From: finlayson at live555.com (Ross Finlayson) Date: Sat, 2 Feb 2013 08:23:03 -0800 Subject: [Live-devel] Two client sessions to different servermedia sessions In-Reply-To: <003001ce0144$e0cf50d0$a26df270$@s@fossilshale.com> References: <2471B04D-1F02-45B1-BCEA-FBDA2509A591@live555.com> <2ECB684F-B0D2-476A-8538-FAFF5EEA6C1E@live555.com> <00ec01ce009b$8a238e80$9e6aab80$@s@fossilshale.com> <383E3150-B954-495A-AF72-B53079AB035B@live555.com> <003001ce0144$e0cf50d0$a26df270$@s@fossilshale.com> Message-ID: <33AFDA0E-1693-4780-93CA-CFC11632822B@live555.com> > Environment : > - I am using OnDemandServerMediaSession and created "live0" and "live1" serverMediasessions > - I am capturing the video from the device, hence I am setting the fPresentationTime based on the encoder data. And the fDurationInMicroseconds to zero. > Observation : > - I enabled the flag "reuseFirstSource", so I could play any number of sessions to live0 or live1 without any issues. I could get 25fps. Noticed ~1500kbps bitrate in each client session. > - But if I try to play live0 and live1 simultaneously, the streaming frame rate is reduced to 1 or 2 frames. Noticed only ~20kbps!!! > - Also noticed that there is no frame loss at client side, but there is no data from the Live555 server !!! > > Any suggestion from your side will be helpful. It appears, then, that there's a problem in the way that you have implemented your 'live input source' class - i.e., the subclass of "FramedSource" that encapsulates your MPEG-4 video encoder. Your implementation is apparently allowing some encoded MPEG-4 video frames to get dropped if there are two (or more) concurrent input sources. Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From zvika.meiseles at gmail.com Sun Feb 3 10:17:11 2013 From: zvika.meiseles at gmail.com (Zvika Meiseles) Date: Sun, 3 Feb 2013 20:17:11 +0200 Subject: [Live-devel] RTP stream retransmit Message-ID: This looks correct. Note that - as I noted earlier - it's the "MPEG2TransportStreamFramer" that computes the 'presentation times' of the outgoing packets. It does this by inspecting the PCR timestamps in the Transport Stream packets that are fed to it. Ok, I got that. Assuming that the input Transport Stream packets have (occasional) PCR timestamps (as all Transport Streams should have), I suspect that the problem is in the implementation of your 'FrameQueue'. You can verify this by replacing the "SimpleRTPSink" with a "FileSink" - and then trying to play the output file (with a media player). My queue does not do anything with the video packets except store them in a list, and feed them into the "MPEG2TransportStreamFramer". My problem, I think, is in the PCR timestamps of the incoming packets. The incoming RTP packets are not in TS format - they are Elementary streams, which are "multiplexed" (not exactly, as there is only 1 source) and converted into TS. If I'm not mistaken, then "MultiFramedRTPSource" is the one giving the incoming packets their presentation-time (via "receptionStatsDB() .noteIncomingPacket"), which "MPEG2TransportStreamFromESSource" turns into PCR (via "InputESSourceRecord"). Am I right? Zvika -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From finlayson at live555.com Sun Feb 3 10:52:58 2013 From: finlayson at live555.com (Ross Finlayson) Date: Sun, 3 Feb 2013 10:52:58 -0800 Subject: [Live-devel] RTP stream retransmit In-Reply-To: References: Message-ID: Nonetheless, I suggest that you debug your code by first doing: MPEG1or2VideoRTPSource --> MPEG2TransportStreamFromESSource --> FileSink Then, verify (by trying to play the resulting file with a media player (like VLC)) that the resulting file is a proper Transport Stream file. Then, and only then, try streaming your file using our "testMPEG2TransportStreamer" demo application (and playing the received stream). Then, and only then, update your application to use > FrameQueue(this is mine) --> > MPEG2TransportStreamFramer --> > SimpleRTPSink instead of the "FileSink". Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From zvika.meiseles at gmail.com Tue Feb 5 08:06:00 2013 From: zvika.meiseles at gmail.com (Zvika Meiseles) Date: Tue, 5 Feb 2013 18:06:00 +0200 Subject: [Live-devel] RTP stream retransmit Message-ID: Ok, 10x Ross. My initial implementation was a little simpler: MPEG1or2VideoRTPSource --> MPEG2TransportStreamFromESSource --> SimpleRTPSink This produced valid TS video, but some of the frames were sent a little late (causing a "frame may be too late to be displayed (0.0)" error in VLC and some jitter in the decoder appliance), so I decided to add the "Queue" and "Framer" objects to the chain. I'll try debugging this a little more and post my findings. Zvika -------------- next part -------------- An HTML attachment was scrubbed... URL: From akalankadesilva at gmail.com Tue Feb 5 15:52:03 2013 From: akalankadesilva at gmail.com (Akalanka De Silva) Date: Wed, 6 Feb 2013 05:22:03 +0530 Subject: [Live-devel] where to place media files in CentOS Message-ID: I installed the live555MediaServer on CentOS using an rpm and it was successful. 
But I have no idea where to place the media files for the server. Please help me Thanks in advance -- ------------------------------- Akalanka Tharashvin De Silva Undergraduate Dept. of Electronic &Telecommunication engineering University of Moratuwa. +94716443463 "Success is getting up one more time than you have got knocked down" -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Tue Feb 5 22:40:46 2013 From: finlayson at live555.com (Ross Finlayson) Date: Tue, 5 Feb 2013 20:40:46 -1000 Subject: [Live-devel] where to place media files in CentOS In-Reply-To: References: Message-ID: <3DBA69FC-AA2B-4C22-B3C4-22FD0571F431@live555.com> > I installed the live555MediaServer on CentOS using an rpm and it was successful. But I have no idea where to place the media files for the server. Put them in the same directory as the "live555MediaServer" application binary. (However, you should also make sure that you have the latest version of the server - accessible from http://www.live555.com/mediaServer/#downloading ) Linux 'distributions' sometimes contain very old versions of our software. Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From hassan.nust at gmail.com Tue Feb 5 23:01:43 2013 From: hassan.nust at gmail.com (Muhammad Hassan) Date: Wed, 6 Feb 2013 12:01:43 +0500 Subject: [Live-devel] Required Help in Receiving Audio Video Stream using OpenRTSP Message-ID: Hi, I am getting problems in getting video stream from the following address rtsp://admin:12345 at 86.47.236.162:9001/h264/ch1/main/av_stream I have tried openRTSP but the saved file is empty. Thanks, -- Muhammad Hassan , Senior Software Engineer, TeReSol , Pakistan. www.teresol.com Email : hassan.nust at gmail.com mhassan at teresol.com Cell No : +92-334-5357854 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From finlayson at live555.com Tue Feb 5 23:12:24 2013 From: finlayson at live555.com (Ross Finlayson) Date: Tue, 5 Feb 2013 21:12:24 -1000 Subject: [Live-devel] Required Help in Receiving Audio Video Stream using OpenRTSP In-Reply-To: References: Message-ID: <9F7E3D10-3A52-4B8C-AD53-A608C641152D@live555.com> > I am getting problems in getting video stream from the following address > rtsp://admin:12345 at 86.47.236.162:9001/h264/ch1/main/av_stream > > I have tried openRTSP but the saved file is empty. This is answered in the FAQ (that everyone was asked to read before posting to the mailing list). Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From saravanan.s at fossilshale.com Thu Feb 7 08:11:08 2013 From: saravanan.s at fossilshale.com (saravanan) Date: Thu, 7 Feb 2013 21:41:08 +0530 Subject: [Live-devel] simultaneous access to different sessions In-Reply-To: <33AFDA0E-1693-4780-93CA-CFC11632822B@live555.com> References: <2471B04D-1F02-45B1-BCEA-FBDA2509A591@live555.com> <2ECB684F-B0D2-476A-8538-FAFF5EEA6C1E@live555.com> <00ec01ce009b$8a238e80$9e6aab80$@s@fossilshale.com> <383E3150-B954-495A-AF72-B53079AB035B@live555.com> <003001ce0144$e0cf50d0$a26df270$@s@fossilshale.com> <33AFDA0E-1693-4780-93CA-CFC11632822B@live555.com> Message-ID: <00de01ce054d$bfb42d00$3f1c8700$@s@fossilshale.com> Hi, I have built the live555 testOnDemandServer application for my target and am testing two different client sessions to two separate sessions, "mpeg4ESVideoTest" and "h264ESVideoTest". The stream is not smooth; I see it getting stuck and then resuming. I thought the target's resources were not enough to play two simultaneous formats, so I verified the CPU and memory usage. But the total CPU usage for simultaneous streaming was a maximum of 40%, and memory usage is less than 10%. If I play one client session for either H264 or MPEG4, the streaming is OK. 
Any suggestion to help me understand this problem would be appreciated. Regards, Saravanan S -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Thu Feb 7 23:00:36 2013 From: finlayson at live555.com (Ross Finlayson) Date: Fri, 8 Feb 2013 18:00:36 +1100 Subject: [Live-devel] simultaneous access to different sessions In-Reply-To: <00de01ce054d$bfb42d00$3f1c8700$@s@fossilshale.com> References: <2471B04D-1F02-45B1-BCEA-FBDA2509A591@live555.com> <2ECB684F-B0D2-476A-8538-FAFF5EEA6C1E@live555.com> <00ec01ce009b$8a238e80$9e6aab80$@s@fossilshale.com> <383E3150-B954-495A-AF72-B53079AB035B@live555.com> <003001ce0144$e0cf50d0$a26df270$@s@fossilshale.com> <33AFDA0E-1693-4780-93CA-CFC11632822B@live555.com> <00de01ce054d$bfb42d00$3f1c8700$@s@fossilshale.com> Message-ID: > I have built live555 testOnDemandServer application for my target and testing two different client sessions to two separate sessions "mpeg4ESVideoTest" and "h264ESVideoTest". The stream is not smooth and seeing struck and resuming the streaming. I thought the target resource is not enough to play two simultaneous format and verified the CPU and memory usage. But the total CPU usage for simultaneous streaming was maximum of 40% and memory usage is less than 10%. If you are seeing 40% CPU usage while concurrently streaming from two *files* (which is what the "testOnDemandRTSPServer" (sic) application does), then you must have an extremely wimpy CPU. (If, instead, you are streaming from live, encoded video - rather than from a file - then you are *not* using the "testOnDemandRTSPServer" application; you are using a *modified* version of this application, which we cannot, in general, help you with.) Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From hackeron at gmail.com Wed Feb 6 14:45:27 2013 From: hackeron at gmail.com (Roman Gaufman) Date: Wed, 6 Feb 2013 22:45:27 +0000 Subject: [Live-devel] RTCPInstance error: Hit limit when reading incoming packet over TCP. Increase "maxRTCPPacketSize" In-Reply-To: References: <078A8D67-C134-4FDC-B0F1-AF6051D30385@live555.com> <0FE7DAB7-4704-4A4D-9F4C-DEAF37E13D0A@live555.com> Message-ID: I have 4 HD cameras on my network: 2 Chinese, 1 D-Link and 1 Sony. All cameras work as expected except the D-Link one, which I'm connecting to over wireless. I just see an endless loop of: RTCPInstance error: Hit limit when reading incoming packet over TCP. Increase "maxRTCPPacketSize" RTCPInstance error: Hit limit when reading incoming packet over TCP. Increase "maxRTCPPacketSize" RTCPInstance error: Hit limit when reading incoming packet over TCP. Increase "maxRTCPPacketSize" RTCPInstance error: Hit limit when reading incoming packet over TCP. Increase "maxRTCPPacketSize" RTCPInstance error: Hit limit when reading incoming packet over TCP. Increase "maxRTCPPacketSize" The wireless camera is the only one affected. I am using version 2013.01.25. I am using this command to connect to the camera: ./live555ProxyServer -t rtsp://admin:@192.168.0.217/live1.sdp Any ideas? On 3 December 2012 08:30, ?????? ????????? wrote: > Of course I always do update the source code; I mean that it was exactly after > this update that the behavior of the proxy changed. > > > 2012/12/3 Ross Finlayson > >> If your 'back end' server is using our software, then *it* will need to >> be upgraded as well. Upgrading just the proxy won't be enough > > > > _______________________________________________ > live-devel mailing list > live-devel at lists.live555.com > http://lists.live555.com/mailman/listinfo/live-devel > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From michel.promonet at thalesgroup.com Fri Feb 8 08:08:19 2013 From: michel.promonet at thalesgroup.com (PROMONET Michel) Date: Fri, 8 Feb 2013 17:08:19 +0100 Subject: [Live-devel] SEGV handling GET_PARAMETER response. Message-ID: <4378_1360339777_51152341_4378_1252_1_1BE8971B6CFF3A4F97AF4011882AA255015609FB3A2B@THSONEA01CMS01P.one.grp> Hi Ross, Using an RTSPClient that has a task sending periodic GET_PARAMETER requests, a crash sometimes occurs with the following backtrace : Thread 1 (Thread 24732): #0 __strlen_sse2 () at ../sysdeps/x86_64/multiarch/../strlen.S:31 #1 0x00000000008291b2 in RTSPClient::handleGET_PARAMETERResponse (this=0x7f6ae04d3490, parameterName=0x7f6ad51aed00 "", resultValueString=@0x7f6abd3c1ee0) at RTSPClient.cpp:1147 #2 0x000000000082b047 in RTSPClient::handleResponseBytes (this=0x7f6ae04d3490, newBytesRead=86) at RTSPClient.cpp:1605 #3 0x0000000000829e8c in RTSPClient::incomingDataHandler1 (this=0x7f6ae04d3490) at RTSPClient.cpp:1376 #4 0x0000000000829dff in RTSPClient::incomingDataHandler (instance=0x7f6ae04d3490) at RTSPClient.cpp:1369 #5 0x000000000086b9cc in BasicTaskScheduler::SingleStep (this=0x7f6ac400f5c0, maxDelayTime=0) at BasicTaskScheduler.cpp:146 #6 0x000000000086a184 in BasicTaskScheduler0::doEventLoop (this=0x7f6ac400f5c0, watchVariable=0x7f6ae05f8d32 "") at BasicTaskScheduler0.cpp:81 In this context we are in RTSPClient.cpp around 1555 : // If we saw a "Content-Length:" header, then make sure that we have the amount of data that it specified: unsigned bodyOffset = nextLineStart - headerDataCopy; bodyStart = &fResponseBuffer[bodyOffset]; numBodyBytes = fResponseBytesAlreadySeen - bodyOffset; if (contentLength > numBodyBytes) { Gdb says that nextLineStart is NULL, bodyOffset is a big number, and finally bodyStart points to non-allocated memory (which raises a SEGV). 
Do you think it is possible to add a check on nextLineStart before using it: if (nextLineStart != 0) { // If we saw a "Content-Length:" header, then make sure that we have the amount of data that it specified: unsigned bodyOffset = nextLineStart - headerDataCopy; bodyStart = &fResponseBuffer[bodyOffset]; numBodyBytes = fResponseBytesAlreadySeen - bodyOffset; .... } Thanks & Regards, Michel. [@@THALES GROUP RESTRICTED@@] -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Fri Feb 8 09:47:57 2013 From: finlayson at live555.com (Ross Finlayson) Date: Sat, 9 Feb 2013 04:47:57 +1100 Subject: [Live-devel] SEGV handling GET_PARAMETER response. In-Reply-To: <4378_1360339777_51152341_4378_1252_1_1BE8971B6CFF3A4F97AF4011882AA255015609FB3A2B@THSONEA01CMS01P.one.grp> References: <4378_1360339777_51152341_4378_1252_1_1BE8971B6CFF3A4F97AF4011882AA255015609FB3A2B@THSONEA01CMS01P.one.grp> Message-ID: > In this context we are in RTSPClient.cpp around 1555 : > > // If we saw a "Content-Length:" header, then make sure that we have the amount of data that it specified: > unsigned bodyOffset = nextLineStart - headerDataCopy; > bodyStart = &fResponseBuffer[bodyOffset]; > numBodyBytes = fResponseBytesAlreadySeen - bodyOffset; > if (contentLength > numBodyBytes) { > > Gdb says that nextLineStart is NULL, bodyOffset is a big number, and finally bodyStart points to non-allocated memory (which raises a SEGV). > > Do you think it is possible to add a check on nextLineStart before using it: Instead, please replace the line unsigned bodyOffset = nextLineStart - headerDataCopy; with unsigned bodyOffset = nextLineStart == NULL ? fResponseBytesAlreadySeen : nextLineStart - headerDataCopy; If you still have a problem after this change, please let me know. (Otherwise I'll make that change in the next release of the software.) Ross Finlayson Live Networks, Inc. 
http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From Jesse.Hemingway at nerdery.com Fri Feb 8 15:05:42 2013 From: Jesse.Hemingway at nerdery.com (Jesse Hemingway) Date: Fri, 8 Feb 2013 17:05:42 -0600 Subject: [Live-devel] H.264 via RTP - ugly artifacts Message-ID: Hello, I apologize if this is noise - my question may well have nothing to do with Live555, but I thought I'd post here in case anyone can help me rule it out. It appears I'm successfully consuming H.264 via RTSP and acquiring frames in my mediasink. Next, I set up ffmpeg's decoder with the SPS and PPS, and then proceed to pass all the raw NAL units from Live555 to avcodec_decode_video2(...), adding the bytestream start code prefix and trailing zero byte (I add 0x00000001 before the raw NAL, and 0x00 after). I've enabled debug output in ffmpeg, and it appears to be happily decoding without errors, other than the frequent, and perhaps expected log of the form: *concealing 900 DC, 900 AC, 900 MV errors in P frame*. However, when I turn this into a displayable RGBA buffer using swscale() and display the result -- there are lots of ugly artifacts. At certain resolutions, the I frames result in a pretty clear picture. In-between, only part of the image is even reasonably decoded, and with half to 3/4 of the image being an interpolated blur. Even the healthier parts exhibit a regular grid of dots and major glitches in regions where the source video has motion. I wanted to rule out Live555 as a potential source of such trouble - does this sound familiar to anyone? Advice where to focus? A relevant log follows... Thanks! 
Jesse LOG: Received 24 bytes NAL type [ 7 ] Priming buffer started Received 4 bytes NAL type [ 8 ] Received 14 bytes NAL type [ 1 ] Received 3112 bytes NAL type [ 1 ] Received 3444 bytes NAL type [ 1 ] Received 16 bytes NAL type [ 1 ] Received 14 bytes NAL type [ 1 ] Received 2143 bytes NAL type [ 1 ] Received 1498 bytes NAL type [ 1 ] Received 16 bytes NAL type [ 1 ] Received 14 bytes NAL type [ 1 ] Received 2395 bytes NAL type [ 1 ] Received 3512 bytes NAL type [ 1 ] Received 16 bytes NAL type [ 1 ] Received 14 bytes NAL type [ 1 ] Received 3808 bytes NAL type [ 1 ] Received 2966 bytes NAL type [ 1 ] Received 16 bytes NAL type [ 1 ] Received 14 bytes NAL type [ 1 ] Received 1909 bytes NAL type [ 1 ] Received 1774 bytes NAL type [ 1 ] Received 16 bytes NAL type [ 1 ] Received 14 bytes NAL type [ 1 ] Received 3915 bytes NAL type [ 1 ] Received 3859 bytes NAL type [ 1 ] Received 16 bytes NAL type [ 1 ] Received 14 bytes NAL type [ 1 ] Received 3219 bytes NAL type [ 1 ] Received 3355 bytes NAL type [ 1 ] Received 16 bytes NAL type [ 1 ] Received 2304 bytes NAL type [ 5 ] Priming buffer complete [h264 @ 0x90a2400] concealing 900 DC, 900 AC, 900 MV errors in I frame Picture decoded Initializing decoder frame of size: 640x480 [swscaler @ 0x8898600] No accelerated colorspace conversion found from yuv420p to rgba. Received 6900 bytes NAL type [ 5 ] [h264 @ 0x90a2400] concealing 900 DC, 900 AC, 900 MV errors in I frame Picture decoded Received 6945 bytes NAL type [ 5 ] [h264 @ 0x90a2400] concealing 900 DC, 900 AC, 900 MV errors in I frame Picture decoded Received 73 bytes NAL type [ 5 ] [h264 @ 0x90a2400] concealing 900 DC, 900 AC, 900 MV errors in I frame Picture decoded Received 40 bytes NAL type [ 1 ] [h264 @ 0x90a2400] concealing 900 DC, 900 AC, 900 MV errors in P frame Picture decoded ... -------------- next part -------------- An HTML attachment was scrubbed... 
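The framing step Jesse describes (prepending a byte-stream start code to each raw NAL unit received from LIVE555 before handing it to avcodec_decode_video2()) can be sketched in a self-contained way. This is illustrative code, not LIVE555 or ffmpeg source; it uses the 4-byte start code 0x00000001 and omits the trailing 0x00 byte, whose necessity is exactly what the thread goes on to debate:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Prepend the 4-byte Annex B start code (0x00 0x00 0x00 0x01) to one raw NAL
// unit, producing the byte-stream form expected by byte-stream H.264 decoders.
std::vector<uint8_t> annexBFrame(uint8_t const* nal, size_t len) {
  std::vector<uint8_t> out;
  out.reserve(len + 4);
  out.push_back(0x00);
  out.push_back(0x00);
  out.push_back(0x00);
  out.push_back(0x01);
  out.insert(out.end(), nal, nal + len);
  return out;
}

// The NAL unit type (7 = SPS, 8 = PPS, 5 = IDR slice, 1 = non-IDR slice,
// as in the log above) is the low 5 bits of the first NAL header byte.
int nalType(uint8_t const* nal) { return nal[0] & 0x1F; }
```

A 3-byte start code (0x000001) is also legal in Annex B for non-initial NAL units; the 4-byte form shown here is the one commonly used at access-unit boundaries.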
URL: From chris at gotowti.com Fri Feb 8 16:28:38 2013 From: chris at gotowti.com (Chris Richardson (WTI)) Date: Fri, 8 Feb 2013 16:28:38 -0800 Subject: [Live-devel] H.264 via RTP - ugly artifacts In-Reply-To: References: Message-ID: <042401ce065c$68ca2840$3a5e78c0$@com> Hi, We use a similar set up for our software (LIVE555 -> FFMPEG decode) and it is working well. The first obvious difference I can see is that normally the SPS and PPS precede the IDR frame (type 5) rather than the non-IDR frame (type 1), so we have 7, 8, 5, 1, 1, 1, …, 7, 8, 5, 1, 1, 1, …. For a quick test, I generated a NAL sequence similar to what you are generating and FFMPEG gave me many similar errors. So I would ask, how are you generating the NAL sequence on the encoder side? Is there a reason for generating 7, 8, 1, 1, 1, …, 5? Chris Richardson WTI From: live-devel-bounces at ns.live555.com [mailto:live-devel-bounces at ns.live555.com] On Behalf Of Jesse Hemingway Sent: Friday, February 08, 2013 3:06 PM To: LIVE555 Streaming Media - development & use Subject: [Live-devel] H.264 via RTP - ugly artifacts Hello, I apologize if this is noise - my question may well have nothing to do with Live555, but I thought I'd post here in case anyone can help me rule it out. It appears I'm successfully consuming H.264 via RTSP and acquiring frames in my mediasink. Next, I set up ffmpeg's decoder with the SPS and PPS, and then proceed to pass all the raw NAL units from Live555 to avcodec_decode_video2(...), adding the bytestream start code prefix and trailing zero byte (I add 0x00000001 before the raw NAL, and 0x00 after). I've enabled debug output in ffmpeg, and it appears to be happily decoding without errors, other than the frequent, and perhaps expected log of the form: concealing 900 DC, 900 AC, 900 MV errors in P frame. However, when I turn this into a displayable RGBA buffer using swscale() and display the result -- there are lots of ugly artifacts.
At certain resolutions, the I frames result in a pretty clear picture. In-between, only part of the image is even reasonably decoded, and with half to 3/4 of the image being an interpolated blur. Even the healthier parts exhibit a regular grid of dots and major glitches in regions where the source video has motion. I wanted to rule out Live555 as a potential source of such trouble - does this sound familiar to anyone? Advice where to focus? A relevant log follows... Thanks! Jesse LOG: [log omitted here; it is identical to the log in the original message above] -------------- next part -------------- An HTML attachment was scrubbed... URL: From jhemingw at nerdery.com Fri Feb 8 18:36:19 2013 From: jhemingw at nerdery.com (Jesse Hemingway) Date: Fri, 8 Feb 2013 20:36:19 -0600 Subject: [Live-devel] H.264 via RTP - ugly artifacts In-Reply-To: <042401ce065c$68ca2840$3a5e78c0$@com> References: <042401ce065c$68ca2840$3a5e78c0$@com> Message-ID: Interesting, thank you for the response. I actually discard all but 7, 8 and 5 frames that occur before the initial 'priming' is complete; they are just logged for completeness. So my first buffer passed is of the form [7 8 5], after which I pass all frames willy-nilly, even the timing NAL [ 6 ]. I'm wondering if my bytestream framing is wrong? 0x000001.NAL.0x00. I read the Annex B bytestream syntax and also the ffmpeg NAL detection code, and it seems sufficient, but I've also read posts that indicated this simple approach was incorrect. {iPhone} On Feb 8, 2013, at 6:28 PM, "Chris Richardson \(WTI\)" wrote: > Hi, > > We use a similar set up for our software (LIVE555 -> FFMPEG decode) and it is working well. The first obvious difference I can see is that normally the SPS and PPS precede the IDR frame (type 5) rather than the non-IDR frame (type 1), so we have 7, 8, 5, 1, 1, 1, …, 7, 8, 5, 1, 1, 1, …. For a quick test, I generated a NAL sequence similar to what you are generating and FFMPEG gave me many similar errors. > > So I would ask, how are you generating the NAL sequence on the encoder side?
Is there a reason for generating 7, 8, 1, 1, 1, …, 5? > > Chris Richardson > WTI > > > From: live-devel-bounces at ns.live555.com [mailto:live-devel-bounces at ns.live555.com] On Behalf Of Jesse Hemingway > Sent: Friday, February 08, 2013 3:06 PM > To: LIVE555 Streaming Media - development & use > Subject: [Live-devel] H.264 via RTP - ugly artifacts > > [quoted message and log omitted; they are identical to the original message earlier in this thread]
> _______________________________________________ > live-devel mailing list > live-devel at lists.live555.com > http://lists.live555.com/mailman/listinfo/live-devel -------------- next part -------------- An HTML attachment was scrubbed... URL: From ferielbenghorbel at gmail.com Thu Feb 7 02:09:00 2013 From: ferielbenghorbel at gmail.com (feriel ben ghorbel) Date: Thu, 7 Feb 2013 11:09:00 +0100 Subject: [Live-devel] Questions about Performance of live555 In-Reply-To: References: <7699A6D7-B5E9-49F7-8BBD-50595C3BF123@live555.com> Message-ID: Hi all, Yes, I agree about the RTSP client applications (only a small number of requests through the prompt), but is it possible to find other software that can bombard the server with a large number of requests, to judge its performance? Thanks and regards _______________________________________________ live-devel mailing list live-devel at lists.live555.com http://lists.live555.com/mailman/listinfo/live-devel > > > > 2013/1/29 Ross Finlayson > >> How bombard live555 by streams "*.ts" to test the performance of >> live555 in terms of throughput and number of streams >> that can diffuse it in the same time. >> >> >> (I'm assuming that the above sentence was intended to be a question...) >> >> You can do this easily just by running multiple RTSP client applications >> (e.g., "testRTSPClient" or "openRTSP") concurrently. >> >> >> Ross Finlayson >> Live Networks, Inc. >> http://www.live555.com/ >> >> >> _______________________________________________ >> live-devel mailing list >> live-devel at lists.live555.com >> http://lists.live555.com/mailman/listinfo/live-devel >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tbatra18 at gmail.com Thu Feb 7 05:59:09 2013 From: tbatra18 at gmail.com (Tarun Batra) Date: Thu, 7 Feb 2013 19:29:09 +0530 Subject: [Live-devel] Please reply query on Proxy Server Message-ID: Hello Ross sir, How can the LIVE555 proxy server be used in a thread?
I mean, creating multiple LIVE555 proxy servers for receiving streams from multiple back-end RTSP servers? As there are only two ports, 554 and 8554, on which an RTSP server can be created, how can it be done? Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From cloverobert at gmail.com Thu Feb 7 22:02:40 2013 From: cloverobert at gmail.com (Robert Clove) Date: Fri, 8 Feb 2013 11:32:40 +0530 Subject: [Live-devel] Proxy Server issue Message-ID: Hello Sir, Here is my setup: I have 3 machines. Machine 1 is for streamer 1, machine 2 is for streamer 2, and machine 3 is for the proxy server. In my project there is some handshaking done between the streamer and server machines, so that the server comes to know which streamer is active. When a client wishes to see the stream from any of the streamers, it sends a start-stream request to the available streamer, and on the streamer the code for "testMPEG2TransportStreamer" executes with "#define IMPLEMENT_RTSP_SERVER 1" un-commented; I mean, an RTSP server is also created on the streamer. On a stop-stream request, the streaming is stopped and the RTSP server is also destroyed, using Medium::close(). I have created the RTSP server (in the proxy server case) only once, and the variables that I have made global are:

int Current_Streamer = 0;
int http_port;
ServerMediaSession* sms[50];
char StreamName[50][50];
CString MulticastIp[50];
int StreamerPort[50];
int taskScheduler = 0, tunneling = 0; // call env->taskScheduler().doEventLoop(); for first time only
char const* descriptionString = {"Session streamed by \"testOnDemandRTSPServer\""};
RTSPServer* rtspServer;
char* url[50];
Boolean reuseFirstSource = False;
Boolean iFramesOnly = False;
UsageEnvironment* env;
TaskScheduler* scheduler;
UserAuthenticationDatabase* authDB;
char* username = NULL;
char* password = NULL;

On a start-stream request, the server creates a thread which runs the following code:

DWORD WINAPI START_STREAM_Thread() {
  // this is the address of the streamer for which start-stream was given;
  // this address we get by implementing an RTSP server in the streamer
  char const* inputAddressStr = MulticastIp[Current_Streamer];
  // port on which the streamer is streaming
  portNumBits const inputPortNum = StreamerPort[Current_Streamer];
  CString StreamURL = "rtsp://";
  StreamURL += inputAddressStr;
  int verbosityLevel = 2;
  Boolean streamRTPOverTCP = False;
  portNumBits tunnelOverHTTPPortNum = inputPortNum;
  char const* proxiedStreamURL = StreamURL;
  char const* streamName = StreamName[Current_Streamer];
  sms[Current_Streamer] = ProxyServerMediaSession::createNew(*env, rtspServer, proxiedStreamURL, streamName, username, password, tunnelOverHTTPPortNum, verbosityLevel);
  rtspServer->addServerMediaSession(sms[Current_Streamer]);
  url[Current_Streamer] = rtspServer->rtspURL(sms[Current_Streamer]);
  delete[] url[Current_Streamer];
  taskScheduler++;
  if (taskScheduler == 1) // call env->taskScheduler().doEventLoop(); for the first time only
    env->taskScheduler().doEventLoop(); // does not return
}

and on stop-stream I am doing this:

rtspServer->deleteServerMediaSession(sms[Current_Streamer]);
Medium::close(sms[Current_Streamer]);
sms[Current_Streamer] = NULL;

Now the problem is: client 1 receives the stream from streamer 1, and client 2 receives the stream from streamer 2.
No matter how many times client 2 sends start-stream and stop-stream requests to streamer 2, client 1 continues to receive the stream from streamer 1; but if client 1 sends a stop-stream request, then client 2 also can no longer receive the stream from streamer 2. I think the proxy server closes all the sockets. The Current_Streamer variable keeps track of which streamer the request has been sent to. Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Sat Feb 9 11:46:42 2013 From: finlayson at live555.com (Ross Finlayson) Date: Sun, 10 Feb 2013 06:46:42 +1100 Subject: [Live-devel] Please reply query on Proxy Server In-Reply-To: References: Message-ID: > How can the LIVE555 proxy server be used in a thread? > I mean, creating multiple LIVE555 proxy servers for receiving streams from multiple back-end RTSP servers? The existing "LIVE555 Proxy Server" can *already* stream concurrently from multiple back-end servers. It does this as a single-threaded application (using an event loop, rather than threads, for concurrency). This is all explained clearly in the FAQ that everyone is asked to read before posting to the mailing list. Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Sat Feb 9 13:43:56 2013 From: finlayson at live555.com (Ross Finlayson) Date: Sun, 10 Feb 2013 08:43:56 +1100 Subject: [Live-devel] RTCPInstance error: Hit limit when reading incoming packet over TCP. Increase "maxRTCPPacketSize" In-Reply-To: References: <078A8D67-C134-4FDC-B0F1-AF6051D30385@live555.com> <0FE7DAB7-4704-4A4D-9F4C-DEAF37E13D0A@live555.com> Message-ID: <61C777C4-E104-4341-B8E6-2727CB0CB26B@live555.com> > I just see an endless loop of: > > RTCPInstance error: Hit limit when reading incoming packet over TCP. Increase "maxRTCPPacketSize" > RTCPInstance error: Hit limit when reading incoming packet over TCP.
Increase "maxRTCPPacketSize" > RTCPInstance error: Hit limit when reading incoming packet over TCP. Increase "maxRTCPPacketSize" > RTCPInstance error: Hit limit when reading incoming packet over TCP. Increase "maxRTCPPacketSize" > RTCPInstance error: Hit limit when reading incoming packet over TCP. Increase "maxRTCPPacketSize" > > The wireless camera is the only one affected. I already addressed this back on December 3rd: > 2012/12/3 Ross Finlayson > If your 'back end' server is using our software, then *it* will need to be upgraded as well. Upgrading just the proxy won't be enough. As I already explained back then, the problem is with your 'back end' server - i.e., the server software that runs in your network camera. Its firmware will need to be upgraded. (Alternatively, don't access that camera using RTP-over-TCP - i.e., don't give "live555ProxyServer" the "-t" option.) Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From akalankadesilva at gmail.com Sat Feb 9 18:51:03 2013 From: akalankadesilva at gmail.com (Akalanka De Silva) Date: Sun, 10 Feb 2013 08:21:03 +0530 Subject: [Live-devel] where to place media files in CentOS In-Reply-To: <3DBA69FC-AA2B-4C22-B3C4-22FD0571F431@live555.com> References: <3DBA69FC-AA2B-4C22-B3C4-22FD0571F431@live555.com> Message-ID: I downloaded the file (around 1.09MB) from the link you mentioned on your site, and it had no extension. I had no idea whether it was an rpm or a tar file. How do I install that file? Please help me with it. Thanks again On Wed, Feb 6, 2013 at 12:10 PM, Ross Finlayson wrote: > I installed the live555MediaServer on CentOS using an rpm and it was > successful. But I have no idea where to place the media files for the > server. > > > Put them in the same directory as the "live555MediaServer" application > binary.
> > (However, you should also make sure that you have the latest version of > the server - accessible from > http://www.live555.com/mediaServer/#downloading ) Linux 'distributions' > sometimes contain very old versions of our software. > > > Ross Finlayson > Live Networks, Inc. > http://www.live555.com/ > > > _______________________________________________ > live-devel mailing list > live-devel at lists.live555.com > http://lists.live555.com/mailman/listinfo/live-devel > > -- ------------------------------- Akalanka Tharashvin De Silva Undergraduate Dept. of Electronic &Telecommunication engineering University of Moratuwa. +94716443463 "Success is getting up one more time than you have got knocked down" -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Sun Feb 10 02:15:01 2013 From: finlayson at live555.com (Ross Finlayson) Date: Sun, 10 Feb 2013 21:15:01 +1100 Subject: [Live-devel] where to place media files in CentOS In-Reply-To: References: <3DBA69FC-AA2B-4C22-B3C4-22FD0571F431@live555.com> Message-ID: <2DC6F74B-FF77-4E62-9C4B-C76390DE9DF7@live555.com> > I downloaded the file (around 1.09MB) in your site from the link you mentioned and it had no extension. I had no idea whether it was a rpm or a tar file. It's an application binary. You just run it 'as is'. Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From sebastien-devel at celeos.eu Sun Feb 10 09:58:51 2013 From: sebastien-devel at celeos.eu (=?ISO-8859-1?Q?S=E9bastien_Escudier?=) Date: Sun, 10 Feb 2013 18:58:51 +0100 Subject: [Live-devel] getNormalPlayTime after a pause and play Message-ID: <5117DFDB.5080203@celeos.eu> Hi Ross, I tried to issue a PAUSE and then a PLAY without specifying a start value (so it resumes from where it paused), and the return value of the function getNormalPlayTime() starts from 0 again. Why ? 
I was using testOnDemandRTSPServer for the server. Best regards, Sébastien. From finlayson at live555.com Sun Feb 10 10:06:26 2013 From: finlayson at live555.com (Ross Finlayson) Date: Mon, 11 Feb 2013 05:06:26 +1100 Subject: [Live-devel] getNormalPlayTime after a pause and play In-Reply-To: <5117DFDB.5080203@celeos.eu> References: <5117DFDB.5080203@celeos.eu> Message-ID: > I tried to issue a PAUSE and then a PLAY without specifying a start > value (so it resumes from where it paused), and the return value of the > function getNormalPlayTime() starts from 0 again. Why ? I don't know. Please show us the RTSP protocol exchange. (Did the media playback actually resume from where it was paused (as you wanted), or did it start again from the beginning?) Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From sebastien-devel at celeos.eu Sun Feb 10 10:28:09 2013 From: sebastien-devel at celeos.eu (=?ISO-8859-1?Q?S=E9bastien_Escudier?=) Date: Sun, 10 Feb 2013 19:28:09 +0100 Subject: [Live-devel] getNormalPlayTime after a pause and play In-Reply-To: References: Message-ID: <5117E6B9.4060803@celeos.eu> It did resume from where it paused. I did the same test with VLC for the client to check the video resume, and I also had the NPT reset to 0. Here is the RTSP exchange : Sending request: PAUSE rtsp://192.168.1.13:8554/mpeg2TransportStreamTest/ RTSP/1.0 CSeq: 5 User-Agent: ./testRTSPClient (LIVE555 Streaming Media v2013.02.05) Session: F9D406CD Received 269 new bytes of response data.
Received a complete PAUSE response: RTSP/1.0 200 OK CSeq: 5 Date: Sun, Feb 10 2013 18:23:19 GMT Session: F9D406CD (plus 185 additional bytes) Sending request: PLAY rtsp://192.168.1.13:8554/mpeg2TransportStreamTest/ RTSP/1.0 CSeq: 6 User-Agent: ./testRTSPClient (LIVE555 Streaming Media v2013.02.05) Session: F9D406CD Received a complete PLAY response: RTSP/1.0 200 OK CSeq: 6 Date: Sun, Feb 10 2013 18:23:19 GMT Session: F9D406CD RTP-Info: url=rtsp://192.168.1.13:8554/mpeg2TransportStreamTest/track1;seq=32531;rtptime=2191593320 From sou07 at mail.ru Sun Feb 10 12:58:58 2013 From: sou07 at mail.ru (=?UTF-8?B?TWF4IC4uLg==?=) Date: Mon, 11 Feb 2013 00:58:58 +0400 Subject: [Live-devel] =?utf-8?q?Playing_RTSP_stream_widget_on_Samsung_and_?= =?utf-8?q?its_freezing?= Message-ID: <1360529938.761655258@f327.mail.ru> Hello. I use your LIVE555 Media Server to serve an RTSP stream, with containers in *.ts and *.tsx format. The RTSP stream itself plays steadily in VLC players. But on a Samsung Smart TV, for some reason, it plays for an average of 10 minutes at most, and then the picture freezes; if the stream is cut off and restarted, all is well again, though it freezes again after roughly another 10 minutes. What might the problem be, and how can it be fixed? Could packets be getting lost and interrupting the flow from your server? Here are Samsung's specifications for the RTSP stream: http://samsungdforum.com/Guide/art00071/index.html . Maybe your server does not quite meet them; if so, where can that be fixed? Thank you in advance for your reply. ---------------------------------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed...
URL: From jshanab at smartwire.com Sun Feb 10 15:29:10 2013 From: jshanab at smartwire.com (Jeff Shanab) Date: Sun, 10 Feb 2013 23:29:10 +0000 Subject: [Live-devel] H.264 via RTP - ugly artifacts In-Reply-To: References: <042401ce065c$68ca2840$3a5e78c0$@com>, Message-ID: <615FD77639372542BF647F5EBAA2DBC2252591AC@IL-BOL-EXCH01.smartwire.com> I had the same problem a while back. I also use live555 feeding libavcodec. While the standard only says you need to have a 7 and an 8 before the first 5, and that after that a 5 or 1 is valid, I have had decoding trouble because of it. So, while all of the following are technically legal:

7,8,5,1,1,1,1,5,1,1,1...
7,8,5,1,1,1,1,1,1......
7,8,5,5,5,1,1,1,5,1,1,1,5,1,1,1 (some Axis cameras by default)

it is the decoder, not live555, that needs the 7,8,5 at the beginning of every key frame. I have gotten arguments about this; it seems to vary by version. The 7 and 8 packets are so small compared to the key frame slices [5] and the diff frames [1] that I just cache them and inject them if missing. Not only do I get rock-solid playback, I do not need to worry about a second user starting late on a multicast, or trouble when I seek. BTW, each and every frame needs the 0x00 0x00 0x00 0x01 prefix (4 literal bytes, in network byte order, not a 32-bit value that could end up with endian issues). ________________________________ From: live-devel-bounces at ns.live555.com [live-devel-bounces at ns.live555.com] on behalf of Jesse Hemingway [jhemingw at nerdery.com] Sent: Friday, February 08, 2013 8:36 PM To: LIVE555 Streaming Media - development & use Cc: LIVE555 Streaming Media - development & use Subject: Re: [Live-devel] H.264 via RTP - ugly artifacts Interesting, thank you for the response. I actually discard all but 7, 8 and 5 frames that occur before the initial 'priming' is complete; they are just logged for completeness. So my first buffer passed is of the form [7 8 5], after which I pass all frames willy-nilly, even the timing NAL [ 6 ].
I'm wondering if my bytestream framing is wrong? 0x000001.NAL.0x00. I read the Annex B bytestream syntax and also the ffmpeg NAL detection code, and it seems sufficient, but I've also read posts that indicated this simple approach was incorrect. This message and any attachments contain confidential and proprietary information, and may contain privileged information, belonging to one or more affiliates of Windy City Wire Cable & Technology Products, LLC. No privilege is waived by this transmission. Unauthorized use, copying or disclosure of such information is prohibited and may be unlawful. If you receive this message in error, please delete it from your system, destroy any printouts or copies of it, and notify the sender immediately by e-mail or phone. -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Sun Feb 10 20:06:17 2013 From: finlayson at live555.com (Ross Finlayson) Date: Mon, 11 Feb 2013 15:06:17 +1100 Subject: [Live-devel] Playing RTSP stream widget on Samsung and its freezing In-Reply-To: <1360529938.761655258@f327.mail.ru> References: <1360529938.761655258@f327.mail.ru> Message-ID: > Hello. I use your LIVE555 Media Server to serve an RTSP stream, with containers in *.ts and *.tsx format. The stream plays steadily in VLC players, but on a Samsung Smart TV it plays for an average of 10 minutes at most, and then the picture freezes Because the problem occurs only with Samsung's media player, and not with VLC, this suggests that the problem is with Samsung's media player, not our server software. Therefore, you should contact Samsung, asking them to fix the problem. Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed...
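Jeff Shanab's caching-and-injection suggestion from the H.264 artifacts thread above can be sketched in a self-contained way. This is a hypothetical helper, not LIVE555 or libavcodec code, and for simplicity it re-injects the cached SPS/PPS before every IDR frame rather than only "if missing" (duplicated parameter sets are generally harmless to decoders):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical helper sketching the idea: cache the last SPS (type 7) and
// PPS (type 8) seen, and emit them (each with a 4-byte Annex B start code)
// ahead of every IDR slice (type 5), so the decoder always sees 7, 8, 5 at
// every key frame.
class ParameterSetInjector {
public:
  // Feed one raw NAL unit; returns the Annex B bytes to pass to the decoder.
  std::vector<uint8_t> feed(uint8_t const* nal, size_t len) {
    int type = nal[0] & 0x1F;  // NAL unit type is the low 5 bits
    if (type == 7) sps_.assign(nal, nal + len);
    if (type == 8) pps_.assign(nal, nal + len);
    std::vector<uint8_t> out;
    if (type == 5) {  // IDR: inject the cached parameter sets first
      append(out, sps_);
      append(out, pps_);
    }
    append(out, std::vector<uint8_t>(nal, nal + len));
    return out;
  }

private:
  static void append(std::vector<uint8_t>& out, std::vector<uint8_t> const& nal) {
    if (nal.empty()) return;
    uint8_t const startCode[4] = {0x00, 0x00, 0x00, 0x01};
    out.insert(out.end(), startCode, startCode + 4);
    out.insert(out.end(), nal.begin(), nal.end());
  }
  std::vector<uint8_t> sps_, pps_;
};
```

This also covers the "second user starting late on a multicast" case Jeff mentions: once one SPS/PPS pair has been seen, every subsequent key frame is self-describing.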
URL: From finlayson at live555.com Sun Feb 10 20:24:25 2013 From: finlayson at live555.com (Ross Finlayson) Date: Mon, 11 Feb 2013 15:24:25 +1100 Subject: [Live-devel] Playing RTSP stream widget on Samsung and its freezing In-Reply-To: References: <1360529938.761655258@f327.mail.ru> Message-ID: <1DDF5B91-A65B-4D53-BE65-44950E259D64@live555.com> > Because the problem occurs only with Samsung's media player, and not with VLC, this suggests that the problem is with Samsung's media player, not our server software. Therefore, you should contact Samsung, asking them to fix the problem. Plus, I might add - If Samsung is interested in having me consult with them to improve their media player, then they should get in touch with me... Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From sou07 at mail.ru Sun Feb 10 21:34:30 2013 From: sou07 at mail.ru (=?UTF-8?B?TWF4IC4uLg==?=) Date: Mon, 11 Feb 2013 09:34:30 +0400 Subject: [Live-devel] =?utf-8?q?Playing_RTSP_stream_widget_on_Samsung_and_?= =?utf-8?q?its=09freezing?= Message-ID: <1360560870.445626336@f147.mail.ru> Message-ID: < 1DDF5B91-A65B-4D53-BE65-44950E259D64 at live555.com > Content-Type: text/plain; charset="us-ascii" > Because the problem occurs only with Samsung's media player, and not with VLC, this suggests that the problem is with Samsung's media player, not our server software. Therefore, you should contact Samsung, asking them to fix the problem. Plus, I might add - If Samsung is interested in having me consult with them to improve their media player, then they should get in touch with me... 
Thanks for the answer. But can you check something for me: is there a constant in the server that is responsible for the loss of packets? I think that if the loss exceeds the current value, the transmission stops; or am I mistaken, having misunderstood the abbreviated name of the constant? Samsung is not very active about taking responsibility for their mistakes, but at least they have Russian-language support. And tell me, how can I view the RTSP exchange between the client and the server? -------------- next part -------------- An HTML attachment was scrubbed... URL: From akalankadesilva at gmail.com Sun Feb 10 22:26:30 2013 From: akalankadesilva at gmail.com (Akalanka De Silva) Date: Mon, 11 Feb 2013 11:56:30 +0530 Subject: [Live-devel] where to place media files in CentOS In-Reply-To: <2DC6F74B-FF77-4E62-9C4B-C76390DE9DF7@live555.com> References: <3DBA69FC-AA2B-4C22-B3C4-22FD0571F431@live555.com> <2DC6F74B-FF77-4E62-9C4B-C76390DE9DF7@live555.com> Message-ID: It worked! Thanks a lot for the help, Ross. Guess I was being stupid :):) On Sun, Feb 10, 2013 at 3:45 PM, Ross Finlayson wrote: > I downloaded the file (around 1.09MB) in your site from the link you > mentioned and it had no extension. I had no idea whether it was a rpm or a > tar file. > > > It's an application binary. You just run it 'as is'. > > > Ross Finlayson > Live Networks, Inc. > http://www.live555.com/ > > > _______________________________________________ > live-devel mailing list > live-devel at lists.live555.com > http://lists.live555.com/mailman/listinfo/live-devel > > -- ------------------------------- Akalanka Tharashvin De Silva Undergraduate Dept. of Electronic &Telecommunication engineering University of Moratuwa.
+94716443463 "Success is getting up one more time than you have got knocked down" -------------- next part -------------- An HTML attachment was scrubbed... URL: From cloverobert at gmail.com Mon Feb 11 02:22:55 2013 From: cloverobert at gmail.com (Robert Clove) Date: Mon, 11 Feb 2013 15:52:55 +0530 Subject: [Live-devel] Query Related to GroupSock Library Message-ID: Hello All, Sir, as I am new to the LIVE555 libraries, I built your sample "testMPEG2TransportStreamer.cpp", which streams from a file, and I was able to receive the stream. I saw that UDP packets are being streamed. Then I went into your code:

Groupsock rtpGroupsock(*env, destinationAddress, rtpPort, ttl);

This constructor is defined in GroupSock.cpp:

Groupsock::Groupsock(UsageEnvironment& env, struct in_addr const& groupAddr, Port port, u_int8_t ttl) : OutputSocket(env, port), deleteIfNoMembers(False), isSlave(False), fIncomingGroupEId(groupAddr, port.num(), ttl), fDests(NULL), fTTL(ttl) { addDestination(groupAddr, port); if (!socketJoinGroup(env, socketNum(), groupAddr.s_addr)) { if (DebugLevel >= 1) { env << *this << ": failed to join group: " << env.getResultMsg() << "\n"; } } }

The OutputSocket(env, port) constructor in turn calls Socket(env, port). The Socket constructor is defined like this:

Socket::Socket(UsageEnvironment& env, Port port) : fEnv(DefaultUsageEnvironment != NULL ? *DefaultUsageEnvironment : env), fPort(port) { fSocketNum = setupDatagramSocket(fEnv, port); //original }

So, to create a TCP socket, I changed the Socket constructor like this and recompiled the library:

Socket::Socket(UsageEnvironment& env, Port port) : fEnv(DefaultUsageEnvironment != NULL ? *DefaultUsageEnvironment : env), fPort(port) { fSocketNum = setupStreamSocket(fEnv, port); //fSocketNum = setupDatagramSocket(fEnv, port); //original }

Everything builds fine, but I didn't get the streaming. Can you explain why, and what to do to stream TCP packets instead of UDP?
Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Mon Feb 11 04:16:51 2013 From: finlayson at live555.com (Ross Finlayson) Date: Mon, 11 Feb 2013 23:16:51 +1100 Subject: [Live-devel] Query Related to GroupSock Library In-Reply-To: References: Message-ID: <3AF6FEB8-55F6-4311-9630-CEA40EA84F65@live555.com> If you had read the FAQ (as everyone is asked to do before posting to the mailing list), you would have realized that if you modify the supplied source code, you can expect *no* support on this mailing list (especially for people, like you, who use unprofessional email addresses). The "testMPEG2TransportStreamer" application transmits its data over RTP packets - which, by default, are carried in UDP packets. You cannot simply change the output socket to a TCP socket, because there would be no way for the receiver to know where each RTP packet begins and ends. It *is* possible to transmit this data over TCP, but only if you enable the application's built-in RTSP server, and use an RTSP client that explicitly requests RTP-over-TCP streaming. However, because your email address announces to the world that you are just a casual hobbyist rather than a serious professional, this will be my last posting on this topic. Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From michel.promonet at thalesgroup.com Mon Feb 11 05:04:43 2013 From: michel.promonet at thalesgroup.com (PROMONET Michel) Date: Mon, 11 Feb 2013 14:04:43 +0100 Subject: [Live-devel] SEGV handling GET_PARAMETER response.
In-Reply-To: References: <4378_1360339777_51152341_4378_1252_1_1BE8971B6CFF3A4F97AF4011882AA255015609FB3A2B@THSONEA01CMS01P.one.grp> Message-ID: <19604_1360587919_5118EC8F_19604_9128_1_1BE8971B6CFF3A4F97AF4011882AA25501560A00A17C@THSONEA01CMS01P.one.grp> Hi Ross, Even though I have not identified what's wrong with the GET_PARAMETER response sent by the camera, I confirmed that your modification fixes the problem. Thanks, Michel. [@@THALES GROUP RESTRICTED@@] De : live-devel-bounces at ns.live555.com [mailto:live-devel-bounces at ns.live555.com] De la part de Ross Finlayson Envoyé : vendredi 8 février 2013 18:48 À : LIVE555 Streaming Media - development & use Objet : Re: [Live-devel] SEGV handling GET_PARAMETER response. In this context we are in RTSPClient.cpp, around line 1555:

// If we saw a "Content-Length:" header, then make sure that we have the amount of data that it specified: unsigned bodyOffset = nextLineStart - headerDataCopy; bodyStart = &fResponseBuffer[bodyOffset]; numBodyBytes = fResponseBytesAlreadySeen - bodyOffset; if (contentLength > numBodyBytes) {

Gdb says that nextLineStart is NULL, so bodyOffset is a huge number, and bodyStart points to non-allocated memory (which raises a SEGV). Do you think it is possible to add a check on nextLineStart before using it? Instead, please replace the line unsigned bodyOffset = nextLineStart - headerDataCopy; with unsigned bodyOffset = nextLineStart == NULL ? fResponseBytesAlreadySeen : nextLineStart - headerDataCopy; If you still have a problem after this change, please let me know. (Otherwise I'll make that change in the next release of the software.) Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed...
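The effect of the suggested guard can be seen in isolation in the sketch below. The parameter names mirror those in the excerpt, but this is a simplified, hypothetical stand-in for the surrounding RTSPClient state, not the actual library code.

```cpp
#include <cassert>

// Mirrors the proposed fix: if no line follows the headers
// (nextLineStart == NULL, e.g. a malformed response with no blank line),
// treat everything seen so far as header data instead of computing a
// bogus offset by subtracting from a NULL pointer.
unsigned computeBodyOffset(const char* headerDataCopy,
                           const char* nextLineStart,
                           unsigned fResponseBytesAlreadySeen) {
  return nextLineStart == nullptr
             ? fResponseBytesAlreadySeen
             : unsigned(nextLineStart - headerDataCopy);
}
```

With the guard, a truncated response yields numBodyBytes == 0 (bytes seen minus the clamped offset), so the later Content-Length comparison simply waits for more data rather than dereferencing unmapped memory.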
URL: From gauravb at interfaceinfosoft.com Mon Feb 11 06:11:50 2013 From: gauravb at interfaceinfosoft.com (Gaurav Badhan) Date: Mon, 11 Feb 2013 19:41:50 +0530 Subject: [Live-devel] TCP Streaming Message-ID: Hello Ross, I came across the question on the mailing list "Query Related to GroupSock Library"; I am a newbie to LIVE555. In "testMPEG2TransportStreamer.cpp" there is support to create a built-in RTSP server. When a proxy server requests "testMPEG2TransportStreamer.cpp" with the "-t" option, will "testMPEG2TransportStreamer.cpp" stream TCP packets? I mean, will Wireshark show TCP packets being sent rather than UDP, as I want TCP streaming? Please help. Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From Jesse.Hemingway at nerdery.com Mon Feb 11 08:10:30 2013 From: Jesse.Hemingway at nerdery.com (Jesse Hemingway) Date: Mon, 11 Feb 2013 10:10:30 -0600 Subject: [Live-devel] H.264 via RTP - ugly artifacts In-Reply-To: <615FD77639372542BF647F5EBAA2DBC2252591AC@IL-BOL-EXCH01.smartwire.com> References: <042401ce065c$68ca2840$3a5e78c0$@com> <615FD77639372542BF647F5EBAA2DBC2252591AC@IL-BOL-EXCH01.smartwire.com> Message-ID: Thanks Jeff, I tried out your suggestion of caching and passing the 7,8 frames before every keyframe, so my sequence looked like 7,8,5,7,8,5,7,8,5,1,1,1,1,1,1,7,8,5,7,8,5,... however, I got exactly the same visual artifacts. I think I'm also safe on the prefix endian-ness, as I pack my buffer with: uint8_t nalStartSequence[] = { 0x00, 0x00, 0x00, 0x01 }; I'm pretty stymied, especially in light of the fact avcodec_decode_video2() is not reporting any errors, other than the disturbingly-consistent DC, AC and MV concealment on every frame (the error count is a constant function of the picture dimensions). -Jesse On Sun, Feb 10, 2013 at 5:29 PM, Jeff Shanab wrote: > I had the same problem a while back. I also use live555 feeding > libavcodec.
> While the standard only says you need to have a 7 and an 8 before the > first 5 and that after that the 5 or 1 is valid, I have had decoding > trouble because of it. > So while all the following are technically legal > 7,8,5,1,1,1,1,5,1,1,1... > 7,8,5,1,1,1,1,1,1...... > 7,8,5,5,5,1,1,1,5,1,1,1,5,1,1,1 (some axis cameras by default) > > It is the decoder not live555 that needs the 7,8,5 at the beginning of > every key frame. > I have gotten arguments about this, It seems to vary by version. The 7 and > 8 packets are so small compared to > the key frame slices [5] and the diff frames [1] that I just cache them > and inject them if missing. Not only do I get > Rock solid playback, I do not need to worry about a second user starting > late on a multicast or trouble when I seek. > > BTW each and every frame needs the 0x00 0x00 0x00 0x01 (4 bytes, aka > network byte order. not a 32 bit byte that could end up with endian issues) > ------------------------------ > * From:* live-devel-bounces at ns.live555.com [ > live-devel-bounces at ns.live555.com] on behalf of Jesse Hemingway [ > jhemingw at nerdery.com] > *Sent:* Friday, February 08, 2013 8:36 PM > > *To:* LIVE555 Streaming Media - development & use > *Cc:* LIVE555 Streaming Media - development & use > *Subject:* Re: [Live-devel] H.264 via RTP - ugly artifacts > > Interesting, thank you for the response. I actually discard all but 7, > 8 and 5 frames that occur before the initial 'priming' is complete, they > are just logged for completeness. So my first buffer passed is of the form > [7 8 5], after which I pass all frames willy-nilly, even the timing NAL [ 6 > ]. I'm wondering if my bytestream framing is wrong? 0x000001.NAL.0x00. I > read the Appendix B bytestream syntax and also the ffmpeg NAL detection > code and it seems sufficient, but I've also read posts that indicated this > simple approach was incorrect. 
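The caching-and-injection approach Jeff describes can be sketched as follows. The sketch assumes each input buffer is a single Annex-B NAL unit beginning with a 4-byte start code, and that nal_unit_type is the low 5 bits of the byte after the start code (7 = SPS, 8 = PPS, 5 = IDR slice, 1 = non-IDR slice); the class name is made up for illustration and is not part of live555.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// nal_unit_type is the low 5 bits of the NAL header byte, which sits
// immediately after the 4-byte 00 00 00 01 start code.
static int nalType(const std::vector<uint8_t>& nal) {
  return nal.size() > 4 ? (nal[4] & 0x1F) : -1;
}

// Caches the latest SPS/PPS and prepends them to every IDR slice, so
// the decoder always sees 7,8 immediately before each key frame.
class SpsPpsInjector {
  std::vector<uint8_t> sps_, pps_;
public:
  // Returns the bytes to hand to the decoder for this NAL unit.
  std::vector<uint8_t> feed(const std::vector<uint8_t>& nal) {
    switch (nalType(nal)) {
      case 7: sps_ = nal; return {};   // cache SPS, emit nothing yet
      case 8: pps_ = nal; return {};   // cache PPS, emit nothing yet
      case 5: {                        // IDR: inject cached 7,8 first
        std::vector<uint8_t> out(sps_);
        out.insert(out.end(), pps_.begin(), pps_.end());
        out.insert(out.end(), nal.begin(), nal.end());
        return out;
      }
      default: return nal;             // P slices etc. pass through
    }
  }
};
```

This also covers the late-join case Jeff mentions: a receiver that starts mid-stream still gets parameter sets ahead of the first key frame it decodes, because they are replayed from the cache rather than awaited from the wire.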
> > This message and any attachments contain confidential and proprietary > information, and may contain privileged information, belonging to one or > more affiliates of Windy City Wire Cable & Technology Products, LLC. No > privilege is waived by this transmission. Unauthorized use, copying or > disclosure of such information is prohibited and may be unlawful. If you > receive this message in error, please delete it from your system, destroy > any printouts or copies of it, and notify the sender immediately by e-mail > or phone. > > > _______________________________________________ > live-devel mailing list > live-devel at lists.live555.com > http://lists.live555.com/mailman/listinfo/live-devel > > -- Jesse Hemingway Interactive Developer | Co-President The Nerdery (952) 582.6507 // office (612) 205.4682 // mobile jesse.hemingway at nerdery.com http://nerdery.com/people#hw -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Mon Feb 11 10:18:54 2013 From: finlayson at live555.com (Ross Finlayson) Date: Tue, 12 Feb 2013 05:18:54 +1100 Subject: [Live-devel] TCP Streaming In-Reply-To: References: Message-ID: > I came across the question on the mailing list "Query Related to GroupSock Library"; I am a newbie to LIVE555. In "testMPEG2TransportStreamer.cpp" there is support to create a built-in RTSP server. When a proxy server requests "testMPEG2TransportStreamer.cpp" with the "-t" option, will "testMPEG2TransportStreamer.cpp" stream TCP packets? I mean, will Wireshark show TCP packets being sent rather than UDP? No, not in this case, because the "testMPEG2TransportStreamer" application streams via multicast, rather than unicast. (Multicast streams cannot be carried over TCP.) So in this respect my answer to the previous question was incorrect. If you want to stream RTP-over-TCP, you will need to use a unicast RTSP server - such as "testOnDemandRTSPServer" or "live555MediaServer".
(However RTP-over-TCP streaming should be used only if your server is behind a firewall that blocks UDP packets.) Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From ferielbenghorbel at gmail.com Mon Feb 11 06:38:21 2013 From: ferielbenghorbel at gmail.com (feriel ben ghorbel) Date: Mon, 11 Feb 2013 15:38:21 +0100 Subject: [Live-devel] Problem with testOnDemandRTSPServer Message-ID: Hi all, I have a problem when I use "testOnDemandRTSPServer" to run a ".ts" stream; VLC can't display it. Note that I put all the streams under the testProgs directory. Thanks and regards _______________________________________________ live-devel mailing list live-devel at lists.live555.com http://lists.live555.com/mailman/listinfo/live-devel -------------- next part -------------- An HTML attachment was scrubbed... URL: From Jesse.Hemingway at nerdery.com Mon Feb 11 11:33:19 2013 From: Jesse.Hemingway at nerdery.com (Jesse Hemingway) Date: Mon, 11 Feb 2013 13:33:19 -0600 Subject: [Live-devel] H.264 via RTP - ugly artifacts In-Reply-To: References: <042401ce065c$68ca2840$3a5e78c0$@com> <615FD77639372542BF647F5EBAA2DBC2252591AC@IL-BOL-EXCH01.smartwire.com> Message-ID: Ross, I'm sorry to continue this thread on your forum, but I've gotten more traction here than anywhere -- feel free to reject if you feel it's noise. Jeff, to your notes: I spent a little more time experimenting. I find that for low-res video (e.g. 240x160) I'll just get a single IDR slice at any time, and then decoding works as well as VLC (I think). At higher res (e.g. 320x240) I start getting multiple IDR slices in a row, and then it's artifact-city. If I buffer all IDR slices before passing to avcodec_decode_video2(), then I finally get clear-looking frames. The sequence looks something like: [7,8,5,5], [1], [1], [1], [1], [7,8,5,5], [1], [1]...
So even if I solve the keyframe issue as above, the intervening P-frames seem to be pushing pixels more than they should. Basically at each keyframe, I now get a clear image, and in-between, the whole scene kind of bulges and gets more and more distorted until the next keyframe. BTW my test example is this one: rtsp://media1.law.harvard.edu/Media/policy_a/2012/02/02_unger.mov (UDP blocked, as Ross pointed out). So two questions: A) Is it really necessary to collect all successive I-frames to send all at once to avcodec_decode_video2(), or might this indicate some other, larger issue? If I don't collect them all, only one fraction of the image is clear at a time, with the rest of it totally blurred. B) Why would the P-frames (sent to decoder one at a time) result in such additive artifacts? -Jesse On Mon, Feb 11, 2013 at 10:10 AM, Jesse Hemingway < Jesse.Hemingway at nerdery.com> wrote: > Thanks Jeff, > > I tried out your suggestion of caching and passing the 7,8 frames before > every keyframe, so my sequence looked like > 7,8,5,7,8,5,7,8,5,1,1,1,1,1,1,7,8,5,7,8,5,... however, I got exactly the > same visual artifacts. I think I'm also safe on the prefix endian-ness, as > I pack my buffer with: > > uint8_t nalStartSequence[] = { 0x00, 0x00, 0x00, 0x01 }; > > > I'm pretty stymied, especially in light of the fact > avcodec_decode_video2() is not reporting any errors, other than the > disturbingly-consistent DC, AC and MV concealment on every frame (the error > count is a constant function of the picture dimensions). > > -Jesse > > > On Sun, Feb 10, 2013 at 5:29 PM, Jeff Shanab wrote: > >> I had the same problem a while back. I also use live555 feeding >> libavcodec. >> While the standard only says you need to have a 7 and an 8 before the >> first 5 and that after that the 5 or 1 is valid, I have had decoding >> trouble because of it. >> So while all the following are technically legal >> 7,8,5,1,1,1,1,5,1,1,1... >> 7,8,5,1,1,1,1,1,1......
>> 7,8,5,5,5,1,1,1,5,1,1,1,5,1,1,1 (some axis cameras by default) >> >> It is the decoder not live555 that needs the 7,8,5 at the beginning of >> every key frame. >> I have gotten arguments about this, It seems to vary by version. The 7 >> and 8 packets are so small compared to >> the key frame slices [5] and the diff frames [1] that I just cache them >> and inject them if missing. Not only do I get >> Rock solid playback, I do not need to worry about a second user starting >> late on a multicast or trouble when I seek. >> >> BTW each and every frame needs the 0x00 0x00 0x00 0x01 (4 bytes, aka >> network byte order. not a 32 bit byte that could end up with endian issues) >> ------------------------------ >> * From:* live-devel-bounces at ns.live555.com [ >> live-devel-bounces at ns.live555.com] on behalf of Jesse Hemingway [ >> jhemingw at nerdery.com] >> *Sent:* Friday, February 08, 2013 8:36 PM >> >> *To:* LIVE555 Streaming Media - development & use >> *Cc:* LIVE555 Streaming Media - development & use >> *Subject:* Re: [Live-devel] H.264 via RTP - ugly artifacts >> >> Interesting, thank you for the response. I actually discard all but >> 7, 8 and 5 frames that occur before the initial 'priming' is complete, they >> are just logged for completeness. So my first buffer passed is of the form >> [7 8 5], after which I pass all frames willy-nilly, even the timing NAL [ 6 >> ]. I'm wondering if my bytestream framing is wrong? 0x000001.NAL.0x00. I >> read the Appendix B bytestream syntax and also the ffmpeg NAL detection >> code and it seems sufficient, but I've also read posts that indicated this >> simple approach was incorrect. >> >> >> This message and any attachments contain confidential and proprietary >> information, and may contain privileged information, belonging to one or >> more affiliates of Windy City Wire Cable & Technology Products, LLC. No >> privilege is waived by this transmission. 
Unauthorized use, copying or >> disclosure of such information is prohibited and may be unlawful. If you >> receive this message in error, please delete it from your system, destroy >> any printouts or copies of it, and notify the sender immediately by e-mail >> or phone. >> >> >> _______________________________________________ >> live-devel mailing list >> live-devel at lists.live555.com >> http://lists.live555.com/mailman/listinfo/live-devel >> >> > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Mon Feb 11 11:52:45 2013 From: finlayson at live555.com (Ross Finlayson) Date: Tue, 12 Feb 2013 06:52:45 +1100 Subject: [Live-devel] Problem with testOnDemandRTSPServer In-Reply-To: References: Message-ID: <11666D5B-E522-4FA2-BB5F-5C9BA222BF8E@live555.com> > I have a problem when I use "testOnDemandRTSPServer" to run a ".ts" stream > VLC can't display it http://www.live555.com/liveMedia/faq.html#my-file-doesnt-work Also, VLC is not our software, so you should first use "testRTSPClient" or "openRTSP" as your client. Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris at gotowti.com Mon Feb 11 12:16:57 2013 From: chris at gotowti.com (Chris Richardson (WTI)) Date: Mon, 11 Feb 2013 12:16:57 -0800 Subject: [Live-devel] H.264 via RTP - ugly artifacts In-Reply-To: References: <042401ce065c$68ca2840$3a5e78c0$@com> <615FD77639372542BF647F5EBAA2DBC2252591AC@IL-BOL-EXCH01.smartwire.com> Message-ID: <04d001ce0894$bf0b64a0$3d222de0$@com> Hi, I also collect SPS, PPS and IDR NALS prior to sending them to FFMPEG and had forgotten about that until you mentioned it. So I send FFMPEG buffers with either [7,8,5] or [1] each time. The P frames are probably distorted because they refer to IDR reference pictures that are not correct. 
I would guess that the P frames are referring to the first [5] in your [7, 8, 5, 5] sequence, and the last [5] in that sequence is incorrect. Also, there is no purpose to having more than one IDR in a row, unless the whole sequence is IDRs (for enabling seeking to every single frame perhaps). What is the data source for these sequences? It does seem odd to me that an encoder would generate these by default. Chris Richardson WTI From: live-devel-bounces at ns.live555.com [mailto:live-devel-bounces at ns.live555.com] On Behalf Of Jesse Hemingway Sent: Monday, February 11, 2013 11:33 AM To: LIVE555 Streaming Media - development & use Subject: Re: [Live-devel] H.264 via RTP - ugly artifacts Ross, I'm sorry to continue this thread on your forum, but I've gotten more traction here than anywhere -- feel free to reject if you feel it's noise. Jeff, to your notes: I spent a little more time experimenting. I find that for low-res video (e.g. 240x160) I'll just get a single IDR slice at any time, and then decoding works as well as VLC (I think). At higher res (e.g. 320x240) I start getting multiple IDR slices in a row, and then it's artifact-city. If I buffer all IDR slices before passing to avcodec_decode_video2(), then I finally get clear-looking frames. The sequence looks something like: [7,8,5,5], [1], [1], [1], [1], [7,8,5,5], [1], [1]... So even if I solve the keyframe issue as above, the intervening P-frames seem to be pushing pixels more than they should. Basically at each keyframe, I now get a clear image, and in-between, the whole scene kind of bulges and gets more and more distorted until the next keyframe. BTW my test example is this one: rtsp://media1.law.harvard.edu/Media/policy_a/2012/02/02_unger.mov (UDP blocked, as Ross pointed out). So two question: A) Is it really necessary to collect all, successive I-frames to send all at once to avcodec_decode_video2(), or might this indicate some other, larger issue? 
If I don't collect them all, only one fraction of the image is clear at a time, with the rest of it totally blurred. B) Why would the P-frames (sent to decoder one at a time) result in such additive artifacts? -Jesse On Mon, Feb 11, 2013 at 10:10 AM, Jesse Hemingway wrote: Thanks Jeff, I tried out your suggestion of caching and passing the 7,8 frames before every keyframe, so my sequence looked like 7,8,5,7,8,5,7,8,5,1,1,1,1,1,1,7,8,5,7,8,5,... however, I got exactly the same visual artifacts. I think I'm also safe on the prefix endian-ness, as I pack my buffer with: uint8_t nalStartSequence[] = { 0x00, 0x00, 0x00, 0x01 }; I'm pretty stymied, especially in light of the fact avcodec_decode_video2() is not reporting any errors, other than the disturbingly-consistent DC, AC and MV concealment on every frame (the error count is a constant function of the picture dimensions). -Jesse On Sun, Feb 10, 2013 at 5:29 PM, Jeff Shanab wrote: I had the same problem a while back. I also use live555 feeding libavcodec. While the standard only says you need to have a 7 and an 8 before the first 5 and that after that the 5 or 1 is valid, I have had decoding trouble because of it. So while all the following are technically legal 7,8,5,1,1,1,1,5,1,1,1... 7,8,5,1,1,1,1,1,1...... 7,8,5,5,5,1,1,1,5,1,1,1,5,1,1,1 (some axis cameras by default) It is the decoder not live555 that needs the 7,8,5 at the beginning of every key frame. I have gotten arguments about this, It seems to vary by version. The 7 and 8 packets are so small compared to the key frame slices [5] and the diff frames [1] that I just cache them and inject them if missing. Not only do I get Rock solid playback, I do not need to worry about a second user starting late on a multicast or trouble when I seek. BTW each and every frame needs the 0x00 0x00 0x00 0x01 (4 bytes, aka network byte order. 
not a 32 bit byte that could end up with endian issues) _____ From: live-devel-bounces at ns.live555.com [live-devel-bounces at ns.live555.com] on behalf of Jesse Hemingway [jhemingw at nerdery.com] Sent: Friday, February 08, 2013 8:36 PM To: LIVE555 Streaming Media - development & use Cc: LIVE555 Streaming Media - development & use Subject: Re: [Live-devel] H.264 via RTP - ugly artifacts Interesting, thank you for the response. I actually discard all but 7, 8 and 5 frames that occur before the initial 'priming' is complete, they are just logged for completeness. So my first buffer passed is of the form [7 8 5], after which I pass all frames willy-nilly, even the timing NAL [ 6 ]. I'm wondering if my bytestream framing is wrong? 0x000001.NAL.0x00. I read the Appendix B bytestream syntax and also the ffmpeg NAL detection code and it seems sufficient, but I've also read posts that indicated this simple approach was incorrect. This message and any attachments contain confidential and proprietary information, and may contain privileged information, belonging to one or more affiliates of Windy City Wire Cable & Technology Products, LLC. No privilege is waived by this transmission. Unauthorized use, copying or disclosure of such information is prohibited and may be unlawful. If you receive this message in error, please delete it from your system, destroy any printouts or copies of it, and notify the sender immediately by e-mail or phone. _______________________________________________ live-devel mailing list live-devel at lists.live555.com http://lists.live555.com/mailman/listinfo/live-devel -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From saravanan.s at fossilshale.com Mon Feb 11 20:34:49 2013 From: saravanan.s at fossilshale.com (saravanan) Date: Tue, 12 Feb 2013 10:04:49 +0530 Subject: [Live-devel] simultaneous access to different sessions In-Reply-To: References: <2471B04D-1F02-45B1-BCEA-FBDA2509A591@live555.com> <2ECB684F-B0D2-476A-8538-FAFF5EEA6C1E@live555.com> <00ec01ce009b$8a238e80$9e6aab80$@s@fossilshale.com> <383E3150-B954-495A-AF72-B53079AB035B@live555.com> <003001ce0144$e0cf50d0$a26df270$@s@fossilshale.com> <33AFDA0E-1693-4780-93CA-CFC11632822B@live555.com> <00de01ce054d$bfb42d00$3f1c8700$@s@fossilshale.com> Message-ID: <003c01ce08da$4c60d430$e5227c90$@s@fossilshale.com> Hi Ross, Yes, you are exactly right. We are streaming from live, encoded video and slightly modified the testOnDemandRTSPServer application. We thought of getting your valuable input to understand the issue, if that is OK with you. Below is the sequence of operations we perform to stream MPEG4 and H264 video simultaneously:
. Start the MPEG4 video streaming from VLC through the "mpeg4ESVideoTest" session; it's working fine.
. Stop the MPEG4 video streaming.
. Start the H264 video streaming from VLC through the "h264ESVideoTest" session; it's working fine.
. Stop the H264 video streaming.
. Start the MPEG4 video streaming again; working fine, and we see 25fps.
. Start the H264 video streaming.
. Both H264 and MPEG4 streaming happen and we get 25fps for ~15 seconds, and notice that the doGetNextFrame() functions of the MPEG4 and H264 sources are called alternately.
. After 15 seconds, doGetNextFrame() of H264 gets called 4-5 times in a row before doGetNextFrame() of MPEG4 is called. At this point the problem starts: both streams get stuck and drop to 1 or 2 fps!
. If we close either session, the other session resumes well and we get normal streaming at 25 fps.
. We have gone completely through the FramedSource and MultiFramedRTPSink source flow but could not figure out what is wrong.
Hoping to get your suggestions. Regards, Saravanan S From: Ross Finlayson [mailto:finlayson at live555.com] Sent: Friday, February 08, 2013 12:31 PM To: LIVE555 Streaming Media - development & use Subject: Re: [Live-devel] simultaneous access to different sessions I have built live555 testOnDemandServer application for my target and testing two different client sessions to two separate sessions "mpeg4ESVideoTest" and "h264ESVideoTest". The stream is not smooth and seeing struck and resuming the streaming. I thought the target resource is not enough to play two simultaneous format and verified the CPU and memory usage. But the total CPU usage for simultaneous streaming was maximum of 40% and memory usage is less than 10%. If you are seeing 40% CPU usage while concurrently streaming from two *files* (which is what the "testOnDemandRTSPServer" (sic) application does), then you must have an extremely wimpy CPU. (If, instead, you are streaming from live, encoded video - rather than from a file - then you are *not* using the "testOnDemandRTSPServer" application; you are using a *modified* version of this application, which we cannot, in general, help you with.) Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From gauravb at interfaceinfosoft.com Mon Feb 11 23:51:41 2013 From: gauravb at interfaceinfosoft.com (Gaurav Badhan) Date: Tue, 12 Feb 2013 13:21:41 +0530 Subject: [Live-devel] Proxy Server implementation query In-Reply-To: References: Message-ID: Hello, There was a query on your mailing list with subject "Please reply query on Proxy Server" in which someone asked "How can Live 555 proxy server can be used in Thread? I mean creating multiple live 555 proxy server for receiving stream from multiple back end rtsp server? " and you have replied "The existing "LIVE555 Proxy Server" can *already* stream concurrently from multiple back-end servers.
It does this as a single-threaded application (using an event loop, rather than threads, for concurrency). This is all explained clearly in the FAQ that everyone is asked to read before posting to the mailing list." Your answer is absolutely correct: we can give multiple back-end RTSP server URLs to the proxy server. But suppose we want to create a new "proxy server media session" in a thread each time a new back-end server is ready to stream. How can that be done? Your FAQ says that "Another possible way to access the code from multiple threads is to have each thread use its own "UsageEnvironment" and "TaskScheduler" objects, and thus its own event loop." So will creating a UsageEnvironment, a TaskScheduler, and its own event loop for each session solve the problem? As creating a "proxy server media session" requires an RTSP server, I have created the RTSP server only once, and whenever a back-end server is ready I start a thread that creates the new "proxy server media session". Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From jstafford at ampltd.com Tue Feb 12 01:31:50 2013 From: jstafford at ampltd.com (James Stafford) Date: Tue, 12 Feb 2013 09:31:50 +0000 Subject: [Live-devel] H.264 via RTP - ugly artifacts In-Reply-To: References: <042401ce065c$68ca2840$3a5e78c0$@com> <615FD77639372542BF647F5EBAA2DBC2252591AC@IL-BOL-EXCH01.smartwire.com> Message-ID: <511A0C06.1020208@ampltd.com> Hi Jesse, > Jeff, to your notes: I spent a little more time experimenting. I find > that for low-res video (e.g. 240x160) I'll just get a single IDR slice > at any time, and then decoding works as well as VLC (I think). At > higher res (e.g. 320x240) I start getting multiple IDR slices in a > row, and then it's artifact-city. If I buffer all IDR slices before > passing to avcodec_decode_video2(), then I finally get clear-looking > frames. The sequence looks something like: [7,8,5,5], [1], [1], [1], > [1], [7,8,5,5], [1], [1]...
> > So even if I solve the keyframe issue as above, the intervening > P-frames seem to be pushing pixels more than they should. Basically > at each keyframe, I now get a clear image, and in-between, the whole > scene kind of bulges and gets more and more distorted until the next > keyframe. BTW my test example is this one: > rtsp://media1.law.harvard.edu/Media/policy_a/2012/02/02_unger.mov > > (UDP blocked, as Ross pointed out). > > So two question: > A) Is it really necessary to collect all, successive I-frames to send > all at once to avcodec_decode_video2(), or might this indicate some > other, larger issue? If I don't collect them all, only one fraction > of the image is clear at a time, with the rest of it totally blurred. Do you set the context->flags2=CODEC_FLAG2_CHUNKS for the context passed to avcodec_decode_video2()? I think this is necessary in order for avcodec_decode_video2() to properly decode H.264 frames that span multiple calls to it. -- From Jesse.Hemingway at nerdery.com Mon Feb 11 12:54:19 2013 From: Jesse.Hemingway at nerdery.com (Jesse Hemingway) Date: Mon, 11 Feb 2013 14:54:19 -0600 Subject: [Live-devel] H.264 via RTP - ugly artifacts In-Reply-To: <04d001ce0894$bf0b64a0$3d222de0$@com> References: <042401ce065c$68ca2840$3a5e78c0$@com> <615FD77639372542BF647F5EBAA2DBC2252591AC@IL-BOL-EXCH01.smartwire.com> <04d001ce0894$bf0b64a0$3d222de0$@com> Message-ID: If I go ahead and pass the successive IDR frames, and look at the decoded picture after each, I see that each one contains a slice of the whole image. E.g. at 640x480, I get about 5 IDR frames in a row, and each one contains a certain range of the horizontal dimension, e.g. top 5th, second 5th etc, with the entire rest of the image turned into a blur. If I collect them all together before passing, I get one clear image. But as you say, then the P-frames seem to be referring to the wrong image data. 
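Collecting all slices of a picture (for example the several IDR slices mentioned above) before one decode call can be sketched as follows. live555 delivers a presentation time alongside each NAL unit, and slices belonging to the same picture share it, so a change in presentation time marks a completed access unit. The helper below is a hypothetical illustration, not a live555 or FFmpeg API.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Accumulates NAL units that share a presentation time into one buffer,
// so that all slices of a picture reach the decoder in a single call.
class AccessUnitAssembler {
  double curTime_ = -1;          // presentation time of the unit in progress
  std::vector<uint8_t> cur_;     // concatenated Annex-B NAL units
public:
  // Feed one Annex-B NAL unit with its presentation time. Returns a
  // completed access unit when the time changes, else an empty vector.
  std::vector<uint8_t> feed(const std::vector<uint8_t>& nal, double pts) {
    std::vector<uint8_t> done;
    if (curTime_ >= 0 && pts != curTime_) done.swap(cur_);  // flush old picture
    curTime_ = pts;
    cur_.insert(cur_.end(), nal.begin(), nal.end());
    return done;
  }
};
```

A real client would also flush on the RTP marker bit or at end-of-stream; the time-change heuristic alone leaves the final picture buffered.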
I'm wondering if P-frames are also being sent as slices so I need to figure out which ones constitute a full output frame. It seems like others have had more luck than I have with this stuff! -Jesse On Mon, Feb 11, 2013 at 2:16 PM, Chris Richardson (WTI) wrote: > Hi, > > I also collect SPS, PPS and IDR NALs prior to sending them to FFMPEG and > had forgotten about that until you mentioned it. So I send FFMPEG buffers > with either [7,8,5] or [1] each time. The P frames are probably distorted > because they refer to IDR reference pictures that are not correct. I would > guess that the P frames are referring to the first [5] in your [7, 8, 5, 5] > sequence, and the last [5] in that sequence is incorrect. Also, there is > no purpose to having more than one IDR in a row, unless the whole sequence > is IDRs (for enabling seeking to every single frame perhaps). > > What is the data source for these sequences? It does seem odd to me that > an encoder would generate these by default. > > Chris Richardson > > WTI > > From: live-devel-bounces at ns.live555.com [mailto: > live-devel-bounces at ns.live555.com] On Behalf Of Jesse Hemingway > Sent: Monday, February 11, 2013 11:33 AM > > To: LIVE555 Streaming Media - development & use > Subject: Re: [Live-devel] H.264 via RTP - ugly artifacts > > Ross, I'm sorry to continue this thread on your forum, but I've gotten > more traction here than anywhere -- feel free to reject if you feel it's > noise. > > Jeff, to your notes: I spent a little more time experimenting. I find > that for low-res video (e.g. 240x160) I'll just get a single IDR slice at > any time, and then decoding works as well as VLC (I think). At higher res > (e.g. 320x240) I start getting multiple IDR slices in a row, and then it's > artifact-city. If I buffer all IDR slices before passing to > avcodec_decode_video2(), then I finally get clear-looking frames.
The > sequence looks something like: [7,8,5,5], [1], [1], [1], [1], [7,8,5,5], > [1], [1]... > > So even if I solve the keyframe issue as above, the intervening P-frames > seem to be pushing pixels more than they should. Basically at each > keyframe, I now get a clear image, and in-between, the whole scene kind of > bulges and gets more and more distorted until the next keyframe. BTW my > test example is this one: rtsp://media1.law.harvard.edu/Media/policy_a/2012/02/02_unger.mov (UDP blocked, > as Ross pointed out). > > So two questions: > A) Is it really necessary to collect all, successive I-frames to send all > at once to avcodec_decode_video2(), or might this indicate some other, > larger issue? If I don't collect them all, only one fraction of the image > is clear at a time, with the rest of it totally blurred. > B) Why would the P-frames (sent to decoder one at a time) result in such > additive artifacts? > > -Jesse -------------- next part -------------- An HTML attachment was scrubbed... URL: From gauravb at interfaceinfosoft.com Tue Feb 12 06:37:57 2013 From: gauravb at interfaceinfosoft.com (Gaurav Badhan) Date: Tue, 12 Feb 2013 20:07:57 +0530 Subject: [Live-devel] SubClassing "OnDemandServerMediaSubsession" In-Reply-To: References: Message-ID: Hello, By studying the LIVE555 libraries I understand that to do on-demand streaming I need to subclass "OnDemandServerMediaSubsession". So my query is: after subclassing, what should my sequence of function calls be? 1> Create the RTSP server. 2> Call getStreamParameters() (but from where do I get the sessionId, tcpSocketNum, rtpChannelId, rtcpChannelId?) 3> Call the startStream() function. Am I correct? Please guide me where I am wrong. I will give a live source as input, so I have created my own device source class inherited from your DeviceSource class. Thanks -------------- next part -------------- An HTML attachment was scrubbed...
URL: From finlayson at live555.com Tue Feb 12 09:23:58 2013 From: finlayson at live555.com (Ross Finlayson) Date: Wed, 13 Feb 2013 04:23:58 +1100 Subject: [Live-devel] SubClassing "OnDemandServerMediaSubsession" In-Reply-To: References: Message-ID: > By studying live 555 libraries i can understand that to do streaming when demande i need to subclass the > "OnDemandServerMediaSubsession" > So my queriy is after subclassing what should be my function calling? See http://www.live555.com/liveMedia/faq.html#liveInput-unicast Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From Jesse.Hemingway at nerdery.com Tue Feb 12 04:55:32 2013 From: Jesse.Hemingway at nerdery.com (Jesse Hemingway) Date: Tue, 12 Feb 2013 06:55:32 -0600 Subject: [Live-devel] H.264 via RTP - ugly artifacts In-Reply-To: <511A0C06.1020208@ampltd.com> References: <042401ce065c$68ca2840$3a5e78c0$@com> <615FD77639372542BF647F5EBAA2DBC2252591AC@IL-BOL-EXCH01.smartwire.com> <511A0C06.1020208@ampltd.com> Message-ID: Jeff -- A) Is it really necessary to collect all, successive I-frames to send all >> at once to avcodec_decode_video2(), or might this indicate some other, >> larger issue? If I don't collect them all, only one fraction of the image >> is clear at a time, with the rest of it totally blurred. >> > > Do you set the context->flags2=CODEC_FLAG2_**CHUNKS for the context > passed to avcodec_decode_video2()? > I think this is necessary in order for avcodec_decode_video2() to properly > decode H.264 frames that span multiple calls to it. You just switched on the light!! I was not aware of this flag, even though I vaguely remember coming across it in my reading, I forgot it again without realizing its context. I spent hours trying to figure out if I was missing a whole piece of logic, and this is all it took - thank you, sir! 
If this wasn't a public forum, I'd ask your city & preferences to make sure you get a nice Groupon or whatever flies in your district. I was dreading another day of slogging over the same codes and forums again. -Jesse -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Tue Feb 12 09:47:23 2013 From: finlayson at live555.com (Ross Finlayson) Date: Wed, 13 Feb 2013 04:47:23 +1100 Subject: [Live-devel] H.264 via RTP - ugly artifacts In-Reply-To: References: <042401ce065c$68ca2840$3a5e78c0$@com> <615FD77639372542BF647F5EBAA2DBC2252591AC@IL-BOL-EXCH01.smartwire.com> <511A0C06.1020208@ampltd.com> Message-ID: <18541EE1-3BAC-4605-BCBE-DB17A0110CF8@live555.com> OK, I'm now declaring this thread over. It has gone way off-topic for this mailing list - which is for discussion of the "LIVE555 Streaming Media" software only. Please do not post any more questions about decoders; they are off-topic for this mailing list. Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... 
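[Editorial note: for readers who reach this thread later, the flag James pointed to is set on the decoder context before opening the codec. The sketch below uses the FFmpeg API of this thread's era (avcodec_decode_video2, CODEC_FLAG2_CHUNKS, CODEC_ID_H264); these names were renamed or removed in later FFmpeg releases, and the snippet is an untested illustration, not LIVE555 code.]

```cpp
// Sketch (circa-2013 libavcodec API): create an H.264 decoder context
// with CODEC_FLAG2_CHUNKS so that avcodec_decode_video2() accepts input
// buffers whose frames/slices span multiple calls.
extern "C" {
#include <libavcodec/avcodec.h>
}

AVCodecContext* makeH264Decoder() {
  avcodec_register_all();
  AVCodec* codec = avcodec_find_decoder(CODEC_ID_H264);
  if (codec == NULL) return NULL;
  AVCodecContext* ctx = avcodec_alloc_context3(codec);
  if (ctx == NULL) return NULL;
  ctx->flags2 |= CODEC_FLAG2_CHUNKS; // input may hold partial frames/slices
  if (avcodec_open2(ctx, codec, NULL) < 0) {
    av_free(ctx);
    return NULL;
  }
  return ctx;
}
```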
URL: From ferielbenghorbel at gmail.com Tue Feb 12 02:34:03 2013 From: ferielbenghorbel at gmail.com (feriel ben ghorbel) Date: Tue, 12 Feb 2013 11:34:03 +0100 Subject: [Live-devel] Problem with testOnDemandRTSPServer In-Reply-To: <11666D5B-E522-4FA2-BB5F-5C9BA222BF8E@live555.com> References: <11666D5B-E522-4FA2-BB5F-5C9BA222BF8E@live555.com> Message-ID: Hi all, sorry, I didn't mention that before, but I first tried to test with "openRTSP". I got this error: "Failed to get a SDP description from URL "rtsp://192.168.11.148:8554/alcarte_steno.ts": cannot handle DESCRIBE response: RTSP/1.0 404 Stream Not Found" *note: the stream file is in the testProgs directory* As for "testRTSPClient": when I run it, I get "command not found". Thanks and regards _______________________________________________ live-devel mailing list live-devel at lists.live555.com http://lists.live555.com/mailman/listinfo/live-devel 2013/2/11 Ross Finlayson > I have a problem when I use "testOnDemandRTSPServer" to run a ".ts" stream > VLC can't display it > > > http://www.live555.com/liveMedia/faq.html#my-file-doesnt-work > > Also, VLC is not our software, so you should first use "testRTSPClient" or > "openRTSP" as your client. > > > Ross Finlayson > Live Networks, Inc. > http://www.live555.com/ > > > _______________________________________________ > live-devel mailing list > live-devel at lists.live555.com > http://lists.live555.com/mailman/listinfo/live-devel > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From cloverobert at gmail.com Tue Feb 12 10:05:09 2013 From: cloverobert at gmail.com (Robert Clove) Date: Tue, 12 Feb 2013 23:35:09 +0530 Subject: [Live-devel] SubClassing "OnDemandServerMediaSubsession" In-Reply-To: References: Message-ID: Really nice question. Hey Ross, I have implemented - createNewStreamSource() - createNewRTPSink() and now, after getStreamParameters(), I was going to call startStream(), but from where do I get these parameters? - void* rtcpRRHandlerClientData, - ServerRequestAlternativeByteHandler* serverRequestAlternativeByteHandler, - void* serverRequestAlternativeByteHandlerClientData) On Tue, Feb 12, 2013 at 10:53 PM, Ross Finlayson wrote: > By studying live 555 libraries i can understand that to do streaming when > demande i need to subclass the > "OnDemandServerMediaSubsession" > So my queriy is after subclassing what should be my function calling? > > > See http://www.live555.com/liveMedia/faq.html#liveInput-unicast > > Ross Finlayson > Live Networks, Inc. > http://www.live555.com/ > > > _______________________________________________ > live-devel mailing list > live-devel at lists.live555.com > http://lists.live555.com/mailman/listinfo/live-devel > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From finlayson at live555.com Tue Feb 12 10:22:20 2013 From: finlayson at live555.com (Ross Finlayson) Date: Wed, 13 Feb 2013 05:22:20 +1100 Subject: [Live-devel] Problem with testOnDemandRTSPServer In-Reply-To: References: <11666D5B-E522-4FA2-BB5F-5C9BA222BF8E@live555.com> Message-ID: <79790EDC-D0EF-4031-929F-012D05EA4A71@live555.com> > sorry i don't mentioned that but I tried at first to test by "openRTSP" > I have as error: > "Failed to get a SDP description from URL "rtsp://192.168.11.148:8554/alcarte_steno.ts": cannot handle DESCRIBE response: RTSP/1.0 404 Stream Not Found " > > note: that the stream is in the directory testProgs ONCE AGAIN, see http://www.live555.com/liveMedia/faq.html#my-file-doesnt-work As we explained in that FAQ: If you are sure that your file really is a Transport Stream file, then please put the file on a publically-accessible web (or FTP) server, and post the URL (not the file itself) to the "live-devel" mailing list, and we'll take a look at it, to see if we can figure out what's wrong. Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Tue Feb 12 10:37:07 2013 From: finlayson at live555.com (Ross Finlayson) Date: Wed, 13 Feb 2013 05:37:07 +1100 Subject: [Live-devel] SubClassing "OnDemandServerMediaSubsession" In-Reply-To: References: Message-ID: <347E6D1F-5914-4AC7-9FA7-EBF5E6396DEB@live555.com> On Feb 13, 2013, at 5:05 AM, Robert Clove wrote: > Really nice ques fox.Hey Ross i have implemented > createNewStreamSource() > createNewRTPSink() Those are the ONLY functions that you need to implement. All of the other functions you mentioned are already implemented by the "OnDemandServerMediaSubsession" class. YOU DO NOT NEED TO CONCERN YOURSELF WITH THAT CODE! Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... 
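[Editorial note: Ross's answer above, that only createNewStreamSource() and createNewRTPSink() need implementing, takes roughly the following shape. This is an untested sketch against the LIVE555 headers, not code from the library: "MyLiveSubsession", "MyDeviceSource", and the 500 kbps estimate are placeholders, and it assumes an H.264 device source that delivers discrete NAL units without start codes.]

```cpp
#include "liveMedia.hh"

// Sketch of a minimal OnDemandServerMediaSubsession subclass for a live
// H.264 source. Only the two pure-virtual functions are overridden;
// getStreamParameters(), startStream(), etc. are inherited as-is.
class MyLiveSubsession : public OnDemandServerMediaSubsession {
public:
  static MyLiveSubsession* createNew(UsageEnvironment& env, Boolean reuseFirstSource) {
    return new MyLiveSubsession(env, reuseFirstSource);
  }
protected:
  MyLiveSubsession(UsageEnvironment& env, Boolean reuseFirstSource)
    : OnDemandServerMediaSubsession(env, reuseFirstSource) {}

  virtual FramedSource* createNewStreamSource(unsigned /*clientSessionId*/,
                                              unsigned& estBitrate) {
    estBitrate = 500; // kbps; a rough estimate used for RTCP
    // Wrap the device source in a discrete framer, because it delivers
    // one NAL unit at a time (without Annex-B start codes):
    return H264VideoStreamDiscreteFramer::createNew(
        envir(), MyDeviceSource::createNew(envir())); // MyDeviceSource: your FramedSource subclass
  }
  virtual RTPSink* createNewRTPSink(Groupsock* rtpGroupsock,
                                    unsigned char rtpPayloadTypeIfDynamic,
                                    FramedSource* /*inputSource*/) {
    return H264VideoRTPSink::createNew(envir(), rtpGroupsock, rtpPayloadTypeIfDynamic);
  }
};
```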
URL: From gauravb at interfaceinfosoft.com Wed Feb 13 07:40:24 2013 From: gauravb at interfaceinfosoft.com (Gaurav Badhan) Date: Wed, 13 Feb 2013 21:10:24 +0530 Subject: [Live-devel] SubClassing "OnDemandServerMediaSubsession" In-Reply-To: <347E6D1F-5914-4AC7-9FA7-EBF5E6396DEB@live555.com> References: <347E6D1F-5914-4AC7-9FA7-EBF5E6396DEB@live555.com> Message-ID: Hello Ross, Thanks for the guidance. I was able to stream the live source. The only issue I have is this: I create the RTSP server on startup, and when a predefined command such as "stream" arrives I add the session; when the "stop" message arrives I break out of the event loop and clean up like this. First of all, I have made the TaskScheduler* scheduler; RTSPServer* rtspServer; ServerMediaSession* sms; objects global.

int StartRTSPServer()
{
  scheduler = BasicTaskScheduler::createNew();
  env = BasicUsageEnvironment::createNew(*scheduler);
  UserAuthenticationDatabase* authDB = NULL;
  portNumBits rtspServerPortNum = 554;
  unsigned reclamationTestSeconds = 65U;
  rtspServer = RTSPServer::createNew(*env, rtspServerPortNum, authDB, reclamationTestSeconds); // creating RTSP server
  if (rtspServer == NULL) {
    *env << "Failed to create RTSP server: " << env->getResultMsg() << "\n";
    // If RTSP server creation fails on 554, it will be created on 8554
    rtspServerPortNum = 8554;
    rtspServer = RTSPServer::createNew(*env, rtspServerPortNum);
    flag = false;
    if (rtspServer == NULL) {
      return 0;
    } else {
      pDailyLogger->LogInfoString("Created RTSP server.. on port 8554");
      *env << "Created RTSP server..\n";
    }
  } else {
    *env << "Created RTSP server..\n";
  }
}

// The following code executes on the start message
int InitLive555()
{
  g_ExitEventLoop = 0;
  Boolean const inputStreamIsRawUDP = False;
  char const* descriptionString = "Session streamed by \"testOnDemandRT\"";
  sms = ServerMediaSession::createNew(*env, streamName, streamName, descriptionString);
  MyOnDemandServerMediaSubsession* session = new MyOnDemandServerMediaSubsession(*env, false, sport); // created my subsession
  sms->addSubsession(session);
  rtspServer->addServerMediaSession(sms);
  char* url = rtspServer->rtspURL(sms);
  *env << "Play this stream using the URL \"" << url << "\"\n";
  delete[] url;
  env->taskScheduler().doEventLoop(&g_ExitEventLoop);
  // the try/catch executes when the stop message comes, and my code breaks here
  try {
    if (sms) {
      Medium::close(sms);
    }
    rtspServer->deleteServerMediaSession(sms);
  } catch (...) {
    if (sms) {
      Medium::close(sms);
    }
    rtspServer->deleteServerMediaSession(sms);
  }
}

Can you please guide me on why my code is breaking. On Wed, Feb 13, 2013 at 12:07 AM, Ross Finlayson wrote: > > On Feb 13, 2013, at 5:05 AM, Robert Clove wrote: > > Really nice question. Hey Ross, I have implemented > > - createNewStreamSource() > - createNewRTPSink() > > Those are the ONLY functions that you need to implement. All of the other > functions you mentioned are already implemented by the > "OnDemandServerMediaSubsession" class. YOU DO NOT NEED TO CONCERN YOURSELF > WITH THAT CODE! > > > Ross Finlayson > Live Networks, Inc. > http://www.live555.com/ > > > _______________________________________________ > live-devel mailing list > live-devel at lists.live555.com > http://lists.live555.com/mailman/listinfo/live-devel > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From finlayson at live555.com Wed Feb 13 09:14:26 2013 From: finlayson at live555.com (Ross Finlayson) Date: Thu, 14 Feb 2013 04:14:26 +1100 Subject: [Live-devel] SubClassing "OnDemandServerMediaSubsession" In-Reply-To: References: <347E6D1F-5914-4AC7-9FA7-EBF5E6396DEB@live555.com> Message-ID: > //try catch execute when stop message come and my code break here > try > { > if(sms) > { > Medium::close(sms); > } > > rtspServer->deleteServerMediaSession(sms); > } > catch(...) > { > if(sms) > { > Medium::close(sms); > } > rtspServer->deleteServerMediaSession(sms); > } > > Can you please guide why my code is breaking. It's because you are calling "Medium::close(sms);" as well as "rtspServer->deleteServerMediaSession(sms);". You shouldn't do that! You should only be calling rtspServer->deleteServerMediaSession(sms); That will delete "sms" as well. Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From gauravb at interfaceinfosoft.com Thu Feb 14 06:07:55 2013 From: gauravb at interfaceinfosoft.com (Gaurav Badhan) Date: Thu, 14 Feb 2013 19:37:55 +0530 Subject: [Live-devel] SubClassing "OnDemandServerMediaSubsession" In-Reply-To: References: <347E6D1F-5914-4AC7-9FA7-EBF5E6396DEB@live555.com> Message-ID: Hello Ross, First of all thanks for your support on the topic "OnDemandServerMediaSubsession". And sorry if i am making noise on the mailing list by again asking that someone has already asked under the subject "Please reply query on Proxy Server" that How can Live 555 proxy server code can be used in Thread? and you have replied that The existing "LIVE 555 Proxy Server" can *already* stream concurrently from multiple back-end servers. 
It does this as a single-threaded application (using an event loop, rather than threads, for concurrency). But the issue is this: when I am receiving one stream from a back-end RTSP server and then want to receive another stream, I create the proxy server media session with a new env and scheduler, without calling the event loop again. This works fine. But then, if I stop the client that is receiving the stream from the first streamer and call deleteServerMediaSession for that particular session (I have made an array of ServerMediaSession objects), the stream from the other streamer also stops displaying on the other client. What needs to be done? Please advise. The code is something like this:

env = BasicUsageEnvironment::createNew(*scheduler);
sms[index] = ProxyServerMediaSession::createNew(*env, rtspServer, proxiedStreamURL, streamName, username, password, tunnelOverHTTPPortNum, verbosityLevel);
onlyonce++;
if (onlyonce == 1) env->taskScheduler().doEventLoop();

and on stop:

rtspServer->deleteServerMediaSession(sms[index]);

The index variable keeps track of which session the stop command was for. Thanks. On Wed, Feb 13, 2013 at 10:44 PM, Ross Finlayson wrote: > //try catch execute when stop message come and my code break here > try > { > if(sms) > { > Medium::close(sms); > } > > rtspServer->deleteServerMediaSession(sms); > } > catch(...) > { > if(sms) > { > Medium::close(sms); > } > rtspServer->deleteServerMediaSession(sms); > } > > Can you please guide why my code is breaking. > > > It's because you are calling "Medium::close(sms);" as well as > "rtspServer->deleteServerMediaSession(sms);". You shouldn't do that! You > should only be calling > rtspServer->deleteServerMediaSession(sms); > That will delete "sms" as well. > > > Ross Finlayson > Live Networks, Inc.
> http://www.live555.com/ > > > _______________________________________________ > live-devel mailing list > live-devel at lists.live555.com > http://lists.live555.com/mailman/listinfo/live-devel > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Thu Feb 14 07:17:20 2013 From: finlayson at live555.com (Ross Finlayson) Date: Fri, 15 Feb 2013 02:17:20 +1100 Subject: [Live-devel] SubClassing "OnDemandServerMediaSubsession" In-Reply-To: References: <347E6D1F-5914-4AC7-9FA7-EBF5E6396DEB@live555.com> Message-ID: First, DO NOT post the same question to the list multiple times. This is a basic violation of 'netiquette' that everyone should know, and is explained clearly in the FAQ (that everyone is asked to read before posting to the mailing list). You have (effectively) posted the same question to the list three times. Because of this, all future postings from you (and anyone else from "interfaceinfosoft.com") will be moderated. My answer to the original question was very clear: > > The existing "LIVE 555 Proxy Server" can *already* stream concurrently from multiple back-end servers. It does this as a single-threaded application (using an event loop, rather than threads, for concurrency) > > But the issue is that if am receiving one stream from backend rtsp server and then i want another stream to receive i call the proxy server media with new env and scheduler and without calling the do event loop again. As I explained before, you do not need to (and should not) do this. You can receive from multiple streams using a *single* event loop, and thus a *single* "UsageEnvironment" and "TaskScheduler". All operations (including your 'stop' operation) are performed by handling events from within the (single) event loop. As explained in the FAQ, you *can* run multiple threads, each with their own "UsageEnvironment" and "TaskScheduler". 
However, if you do this, these threads must not interact (except perhaps via global variables). So, that's probably not what you want. Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From mczarnek at objectvideo.com Thu Feb 14 07:42:24 2013 From: mczarnek at objectvideo.com (Czarnek, Matt) Date: Thu, 14 Feb 2013 10:42:24 -0500 Subject: [Live-devel] Streaming RTSP and pulling single frames Message-ID: Hello, I would like to is pull from a RTSP stream, and then pull individual frames (whole images) from it and be able to pass pointers around and use those frames from there. Seems like it should be simple but I'm going in circles. I see testRTSPClient & openRTSP and more or less see how they work & that they can save to a file but am still having trouble figuring out how to pull individual images from the stream. The more versatile the better but I at least need to be able to support H264. Any thoughts? Thank you! Matt -- Matt Czarnek, Software Engineer Work Phone: (760) 462-5843 ObjectVideo Inc. http://www.objectvideo.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Thu Feb 14 08:11:48 2013 From: finlayson at live555.com (Ross Finlayson) Date: Fri, 15 Feb 2013 03:11:48 +1100 Subject: [Live-devel] Streaming RTSP and pulling single frames In-Reply-To: References: Message-ID: <50890B73-EFAD-4ADA-90D3-E6AFDEF48A05@live555.com> > I would like to is pull from a RTSP stream, and then pull individual frames (whole images) from it and be able to pass pointers around and use those frames from there. Seems like it should be simple but I'm going in circles. > > I see testRTSPClient & openRTSP and more or less see how they work & that they can save to a file No, "testRTSPClient" does not save to a file. It receives each frame into a memory buffer, but does not do anything with the frame data. 
However, it is this code that you should use as a model for your own application. In particular, look at the "DummySink" class that the "testRTSPClient" demo application uses. Note, in particular, the (non-static) "DummySink::afterGettingFrame()" function ("testRTSPClient.cpp", lines 479-500. Note that when this function is called, a complete 'frame' (for H.264, this will be a "NAL unit") will have already been delivered into "fReceiveBuffer". Note that our "DummySink" implementation doesn't actually do anything with this data; that's why it's called a 'dummy' sink. If you want to decode (or otherwise process) these frames, you would replace "DummySink" with your own "MediaSink" subclass. It's "afterGettingFrame()" function would pass the data (at "fReceiveBuffer", of length "frameSize") to a decoder. If you are receiving H.264 video data, there is one more thing that you have to do before you start feeding frames to your decoder. H.264 streams have out-of-band configuration information (SPS and PPS NAL units) that you may need to feed to the decoder to initialize it. To get this information, call "MediaSubsession::fmtp_spropparametersets()" (on the video 'subsession' object). This will give you a (ASCII) character string. You can then pass this to "parseSPropParameterSets()", to generate binary NAL units for your decoder.) Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From jshanab at smartwire.com Thu Feb 14 08:33:45 2013 From: jshanab at smartwire.com (Jeff Shanab) Date: Thu, 14 Feb 2013 16:33:45 +0000 Subject: [Live-devel] Streaming RTSP and pulling single frames In-Reply-To: References: Message-ID: <615FD77639372542BF647F5EBAA2DBC22525BC77@IL-BOL-EXCH01.smartwire.com> I allow snapshots from my video to be saved when a user pauses and navigates to a frame. This is done by encodeing the RGB image into a JPEG image with avcodec. 
I also allow pulling a snapshot from a file at a timestamp. In this case I navigate to the closest keyframe, decode it from h264 and enode to JPEG and offer it up for view or save. H264 like mpeg4 deals with keyframes and difference frames so you must find the frames with the nal types [7][8] and [5] and send all three to the decoder for it to know how to decode it. This message and any attachments contain confidential and proprietary information, and may contain privileged information, belonging to one or more affiliates of Windy City Wire Cable & Technology Products, LLC. No privilege is waived by this transmission. Unauthorized use, copying or disclosure of such information is prohibited and may be unlawful. If you receive this message in error, please delete it from your system, destroy any printouts or copies of it, and notify the sender immediately by e-mail or phone. From: live-devel-bounces at ns.live555.com [mailto:live-devel-bounces at ns.live555.com] On Behalf Of Czarnek, Matt Sent: Thursday, February 14, 2013 9:42 AM To: live-devel at ns.live555.com Subject: [Live-devel] Streaming RTSP and pulling single frames Hello, I would like to is pull from a RTSP stream, and then pull individual frames (whole images) from it and be able to pass pointers around and use those frames from there. Seems like it should be simple but I'm going in circles. I see testRTSPClient & openRTSP and more or less see how they work & that they can save to a file but am still having trouble figuring out how to pull individual images from the stream. The more versatile the better but I at least need to be able to support H264. Any thoughts? Thank you! Matt -- Matt Czarnek, Software Engineer Work Phone: (760) 462-5843 ObjectVideo Inc. http://www.objectvideo.com -------------- next part -------------- An HTML attachment was scrubbed... 
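[Editorial note: Ross's point above about initializing an H.264 decoder from "sprop-parameter-sets" can be illustrated standalone. The SDP attribute value is a comma-separated list of base64-encoded NAL units, typically the SPS followed by the PPS. LIVE555's parseSPropParameterSets() does this for you; the code below is an independent re-implementation for illustration only (the function names are ours), and it simply stops at base64 '=' padding.]

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Decode one base64 piece into bytes; stops at '=' padding or any
// character outside the base64 alphabet.
static std::vector<uint8_t> base64Decode(const std::string& in) {
    static const std::string tbl =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
    std::vector<uint8_t> out;
    int val = 0, bits = 0;
    for (char c : in) {
        std::size_t pos = tbl.find(c);
        if (pos == std::string::npos) break; // '=' padding or junk ends input
        val = (val << 6) | int(pos);
        bits += 6;
        if (bits >= 8) { bits -= 8; out.push_back(uint8_t((val >> bits) & 0xFF)); }
    }
    return out;
}

// Split a "sprop-parameter-sets" value on commas and base64-decode each
// piece, yielding the binary NAL units (SPS, PPS, ...) for the decoder.
std::vector<std::vector<uint8_t>> parseSProp(const std::string& sprop) {
    std::vector<std::vector<uint8_t>> nals;
    std::size_t start = 0;
    while (start <= sprop.size()) {
        std::size_t comma = sprop.find(',', start);
        std::string piece = sprop.substr(start,
            comma == std::string::npos ? std::string::npos : comma - start);
        if (!piece.empty()) nals.push_back(base64Decode(piece));
        if (comma == std::string::npos) break;
        start = comma + 1;
    }
    return nals;
}
```

Each decoded NAL unit can then be fed to the decoder (with a start code prepended, if the decoder expects Annex-B input) before the first video frame.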
URL: From felix at embedded-sol.com Thu Feb 14 22:19:04 2013 From: felix at embedded-sol.com (Felix Radensky) Date: Fri, 15 Feb 2013 08:19:04 +0200 Subject: [Live-devel] G.711 audio in MPEG2 TS Message-ID: <511DD358.80902@embedded-sol.com> Hi, I'm trying to wrap live H.264 video and G.711 audio into MPEG-2 TS using MPEG2TransportStreamFromESSource and stream the resulting TS. Video streaming works fine, but audio format is recognized by VLC as AAC. The value of mpegVersion (PMT stream_type) passed to addNewInputSource is obviously not correct. What value should be used for G.711 (a-law). Thanks a lot. Felix. From finlayson at live555.com Thu Feb 14 22:25:45 2013 From: finlayson at live555.com (Ross Finlayson) Date: Fri, 15 Feb 2013 17:25:45 +1100 Subject: [Live-devel] G.711 audio in MPEG2 TS In-Reply-To: <511DD358.80902@embedded-sol.com> References: <511DD358.80902@embedded-sol.com> Message-ID: > I'm trying to wrap live H.264 video and G.711 audio into MPEG-2 TS > using MPEG2TransportStreamFromESSource and stream the resulting > TS. Video streaming works fine, but audio format is recognized by VLC > as AAC. The value of mpegVersion (PMT stream_type) passed to addNewInputSource > is obviously not correct. What value should be used for G.711 (a-law). I'm not sure that it's even possible to put G.711 audio in a MPEG Transport Stream. Usually this is defined only for MPEG-defined audio (or video) codecs. Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From cloverobert at gmail.com Fri Feb 15 06:25:36 2013 From: cloverobert at gmail.com (Robert Clove) Date: Fri, 15 Feb 2013 19:55:36 +0530 Subject: [Live-devel] SubClassing "OnDemandServerMediaSubsession" In-Reply-To: References: <347E6D1F-5914-4AC7-9FA7-EBF5E6396DEB@live555.com> Message-ID: Hello Ross sir, As stated in your reply i ran multiple threads, each with their own "UsageEnvironment" and "TaskScheduler". It works fine. 
But when I stop receiving the stream from the back-end server (that is, when I call "deleteServerMediaSession" for that particular session) and then want to receive the stream again, the back-end server does not stream. When I tried to figure it out, I saw that the port on which the back-end server was streaming still shows a status of "ESTABLISHED", even after I have called "deleteServerMediaSession" for that particular session. I really need your help. Searching the mailing list, I found a question in which someone wanted to stream multiple streams (like one today and another the day after tomorrow), and you asked him to use "TaskScheduler::scheduleDelayedTask". Can something like that be used in our case? Thanks On Thu, Feb 14, 2013 at 8:47 PM, Ross Finlayson wrote: > First, DO NOT post the same question to the list multiple times. This is > a basic violation of 'netiquette' that everyone should know, and is > explained clearly in the FAQ (that everyone is asked to read before posting > to the mailing list). You have (effectively) posted the same question to > the list three times. Because of this, all future postings from you (and > anyone else from "interfaceinfosoft.com") will be moderated. > > My answer to the original question was very clear: > > > The existing "LIVE 555 Proxy Server" can *already* stream concurrently > from multiple back-end servers. It does this as a single-threaded > application (using an event loop, rather than threads, for concurrency) > > > > > > But the issue is that if am receiving one stream from backend rtsp server > and then i want another stream to receive i call the proxy server media > with new env and scheduler and without calling the do event loop again. > > > As I explained before, you do not need to (and should not) do this. You > can receive from multiple streams using a *single* event loop, and thus a > *single* "UsageEnvironment" and "TaskScheduler".
All operations (including > your 'stop' operation) are performed by handling events from within the > (single) event loop. > > As explained in the FAQ, you *can* run multiple threads, each with their > own "UsageEnvironment" and "TaskScheduler". However, if you do this, these > threads must not interact (except perhaps via global variables). So, > that's probably not what you want. > > > Ross Finlayson > Live Networks, Inc. > http://www.live555.com/ > > > _______________________________________________ > live-devel mailing list > live-devel at lists.live555.com > http://lists.live555.com/mailman/listinfo/live-devel > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From temp2010 at forren.org Fri Feb 15 13:46:11 2013 From: temp2010 at forren.org (temp2010 at forren.org) Date: Fri, 15 Feb 2013 16:46:11 -0500 Subject: [Live-devel] Where can I trap and debug response to signalNewFrameData() ? Message-ID: I've successfully cobbled some things together in the past, but in doing it again this time it's not working. I come to the point of wanting to trace the action that happens in response to DeviceSource.cpp's signalNewFrameData() . In detail, signalNewFrameData() does this: ourScheduler->triggerEvent(DeviceSource::eventTriggerId, ourDevice); and I want to set a breakpoint elsewhere in 555 to see this event getting handled. Perhaps you need more info to answer the question... More correctly, I have derived my own source from DeviceSource.cpp and so it's no longer DeviceSource.cpp but MF_H264_DeviceSource.cpp, no longer DeviceSource but MF_H264_DeviceSource. I'm creating frames from somewhere else and then calling MF_H264_DeviceSource.cpp's signalNewFrameData(). My successful cobble in the past was derived from H264VideoDeviceStreamer.cpp and it worked. My frames got created, I called signalNewFrameData(), and magically the video went out IP and I viewed it using VLC. I'm pretty sure this was multicast. 
Now I'm trying to reproduce my cobble, but for unicast and derived from SPIIROnDemandRTSPServer.cpp instead. Exactly as before, my frames get created and I call signalNewFrameData(), but then no frames are coming out IP. With all this cobbling, I'm not sure I have all the loose ends reconnected. Therefore, I'm interested in setting a breakpoint deep inside Live555 somewhere. I'll run my successful cobble first, see that I call signalNewFrameData(), and then stop on a breakpoint somewhere in response to that, where this new frame data is handled and sent out IP. After that, I'll run my new unsuccessful cobble, see that I call signalNewFrameData(), and then see if I hit the breakpoint or not. If I do hit the breakpoint, I'll follow down through to see if I can find the problem. If I don't hit the breakpoint, I'll be alerted to the fact that I must have left something unconnected. One other thing: both H264VideoDeviceStreamer.cpp and SPIIROnDemandRTSPServer.cpp use a ByteStreamFileSource. In both cases I've replaced that with my MF_H264_DeviceSource. As mentioned before, this worked for the first cobble (H264VideoDeviceStreamer) but not the current cobble (SPIIROnDemandRTSPServer). Your help and advice are greatly appreciated. -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Fri Feb 15 15:43:16 2013 From: finlayson at live555.com (Ross Finlayson) Date: Sat, 16 Feb 2013 10:43:16 +1100 Subject: [Live-devel] Where can I trap and debug response to signalNewFrameData() ? In-Reply-To: References: Message-ID: <709ACF88-9831-4240-AEB2-CAFAEB70DA47@live555.com> > Now I'm trying to reproduce my cobble, but for unicast and derived from SPIIROnDemandRTSPServer.cpp instead. Exactly as before, my frames get created and I call signalNewFrameData(), but then no frames are coming out IP. With all this cobbling, I'm not sure I have all the loose ends reconnected.
Therefore, I'm interested in setting a breakpoint deep inside Live555 somewhere. No, you shouldn't be setting breakpoints 'deep inside LIVE555', because that code is working just fine, and it's code that you're unfamiliar with. Instead, you should be setting breakpoints in *your own* code. Specifically, you should be setting a breakpoint in your "deliverFrame0()" function - i.e., in the function that you passed as a parameter to the "createEventTrigger()" call. That function is the function that should be getting called after each event is triggered. Don't forget, of course, that all of this happens within the LIVE555 event loop, so don't forget to call "doEventLoop()" in your application to start the event loop running. > One other thing, both H264VideoDeviceStreamer.cpp and SPIIROnDemandRTSPServer.cpp use a ByteStreamFileSource. In both cases I've replaced that with my MF_H264_DeviceSource. OK, but don't forget that - because your "MF_H264_DeviceSource" code (presumably) delivers discrete H.264 NAL units - you must feed this into a "H264VideoStreamDiscreteFramer", *not* a "H264VideoStreamFramer" (as was used when the input was a byte stream). Also, the H.264 NAL units that come out of your "MF_H264_DeviceSource" code *must not* include a preceding 'start code' (i.e., must not start with 0x00 0x00 0x00 0x01). Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From Jesse.Hemingway at nerdery.com Fri Feb 15 09:30:27 2013 From: Jesse.Hemingway at nerdery.com (Jesse Hemingway) Date: Fri, 15 Feb 2013 11:30:27 -0600 Subject: [Live-devel] Packet reordering vs Jitter buffer design Message-ID: Hi Ross, I've seen threads in which you state that Live555 does not offer a jitter buffer, and I've also seen posts (in which you weren't involved) that claim it does. 
I'm fairly certain that your logic is reordering packets so that frames are delivered to a media sink in presentation order (which is sometimes considered part of a jitter buffer's duty). Can you verify what you mean by saying Live555 does not offer a jitter buffer? Is this in effect saying that your logic may delay delivery for the maximum time necessary to deliver ordered and complete frames, but makes no attempt to deliver these frames at a steady rate? Thanks, Jesse -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Fri Feb 15 16:00:37 2013 From: finlayson at live555.com (Ross Finlayson) Date: Sat, 16 Feb 2013 11:00:37 +1100 Subject: [Live-devel] Packet reordering vs Jitter buffer design In-Reply-To: References: Message-ID: > Can you verify what you mean by saying Live555 does not offer a jitter buffer? Is this in effect saying that your logic may delay delivery for the maximum time necessary to deliver ordered and complete frames, but makes no attempt to deliver these frames at a steady rate? That's correct. The LIVE555 library code (basically, "*RTPSource") delivers complete frames in their proper order as they arrive, along with 'presentation times' for each. It is up to the application code to use these 'presentation times' to render the received frames at the appropriate time. Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From zanglan at yahoo.com Fri Feb 15 19:44:50 2013 From: zanglan at yahoo.com (Lan Zang) Date: Sat, 16 Feb 2013 11:44:50 +0800 (CST) Subject: [Live-devel] SimpleRTPSource and RTCP Message-ID: <1360986290.44260.YahooMailNeo@web15804.mail.cnb.yahoo.com> Hi, I am trying to use the code of the "mpeg2TransportStreamFromUDPSourceTest" stream in testProgs/testOnDemandRTSPServer to read from a source that sends MPEG-2 TS over RTP, and the server does read data from the RTP source, which is good.
But I found that the RTSPServer does not have an RTCP port. So, is that normal? If I want to use RTCP to sync up data of this incoming RTP stream, what else do I need to do? The class "MPEG2TransportUDPServerMediaSubsession" seems not to have an RTCPInstance member. Thanks, Lan Zang (Sander) -------------- next part -------------- An HTML attachment was scrubbed... URL: From temp2010 at forren.org Sat Feb 16 07:39:45 2013 From: temp2010 at forren.org (temp2010 at forren.org) Date: Sat, 16 Feb 2013 10:39:45 -0500 Subject: [Live-devel] Where can I trap and debug response to signalNewFrameData() ? In-Reply-To: <709ACF88-9831-4240-AEB2-CAFAEB70DA47@live555.com> References: <709ACF88-9831-4240-AEB2-CAFAEB70DA47@live555.com> Message-ID: Ross, Thanks for the advice. I found MF_H264_DeviceSource::deliverFrame() as well as an old trace statement I put in it back on my successful cobble. This is in fact the "deep inside" I was looking for, albeit having been called back from deep inside and now back in my code. The trace is coming out, which proves that this is connected. I also made the H264VideoStreamDiscreteFramer change and removed the 'start code', but it didn't help. My new cobble is still not working. Perhaps I should tell you the actual problem! When I go back to using ByteStreamFileSource as the FramedSource, VLC successfully opens the announced rtsp://192.168.123.5:8554/h264ESVideoTest and plays it great. This proves the VLC side is working. However, when I use MF_H264_DeviceSource as the FramedSource, VLC says it can't even open it. Nevertheless, when VLC does try to open it, I do see a trace for my MF_H264_DeviceSource being created and passed back as FramedSource, and I see my trace in MF_H264_DeviceSource::deliverFrame() start happening. For some reason, however, there's a difference such that VLC can no longer open the stream. Here's my intercept.
Just for the easiest method and testing, I trap the filename in H264VideoFileServerMediaSubsession::createNewStreamSource. This function normally creates a ByteStreamFileSource. But if the filename is "", I cause it to instead create a MF_H264_DeviceSource. You can also see the H264VideoStreamDiscreteFramer change I just made.

FramedSource* H264VideoFileServerMediaSubsession::createNewStreamSource(unsigned /*clientSessionId*/, unsigned& estBitrate) {
  // HgF: Add this DEBUG stuff
#ifdef _DEBUG
  //#define TRACE(msg) OutputDebugString(TEXT(msg)) /* HgF */
#define TRACE TRACEX
  void CDECL TRACEX(PCSTR pszFormat, ...);
#else
#define TRACE(...) /* OMIT IF NOT DEBUGGING */
#endif
  FramedSource* returnFramedSource;
  FramedSource* videoES; // HgF: Add this var as common place for source pointer. Note its type better matches what H264VideoStreamFramer::createNew() wants anyway
  estBitrate = 500; // kbps, estimate

  // Create the video source:
  if (0 == strcmp(fFileName, "")) { // HgF: Add this hook for using MF_H264_DeviceSource instead of ByteStreamFileSource
    TRACE("BEFORE HOOK - H264VideoFileServerMediaSubsession::createNewStreamSource() using MF_H264_DeviceSource to receive live SPIIR-Capture H.264 stream\n");
    MF_H264_DeviceSource* theSource = MF_H264_DeviceSource::createNew(
        *SPIIROnDemandRTSPServer::env /* UsageEnvironment& */,
        &SPIIROnDemandRTSPServer::H264VideoDeviceStreamer_params /* MF_H264_DeviceParameters */);
    videoES = theSource;
    TRACE("AFTER HOOK - H264VideoFileServerMediaSubsession::createNewStreamSource() using MF_H264_DeviceSource to receive live SPIIR-Capture H.264 stream\n");
    // Create a framer for the Video Elementary Stream:
    returnFramedSource = H264VideoStreamDiscreteFramer::createNew(envir(), videoES); // HgF-20130215 per advice from Ross Finlayson, must use H264VideoStreamDiscreteFramer, not H264VideoStreamFramer
  } else { // HgF: The ByteStreamFileSource code in this clause is unchanged, aside from copying fileSource into videoES and TRACE statements
    TRACE("NOT HOOKED - H264VideoFileServerMediaSubsession::createNewStreamSource() using ByteStreamFileSource to play file %s\n", fFileName);
    ByteStreamFileSource* fileSource = ByteStreamFileSource::createNew(envir(), fFileName);
    if (fileSource == NULL) {
      TRACE("NOT HOOKED - H264VideoFileServerMediaSubsession::createNewStreamSource() ERROR OPENING FILE %s\n", fFileName);
      return NULL;
    }
    fFileSize = fileSource->fileSize();
    videoES = fileSource; // HgF: Add this copy to common place for source pointer
    TRACE("NOT HOOKED - H264VideoFileServerMediaSubsession::createNewStreamSource() ready to play %s\n", fFileName);
    // Create a framer for the Video Elementary Stream:
    returnFramedSource = H264VideoStreamFramer::createNew(envir(), videoES); // HgF-20130215 per advice from Ross Finlayson, must use H264VideoStreamDiscreteFramer, not H264VideoStreamFramer
  }
  return (returnFramedSource);
}

ADDITIONAL CONFUSION... I thought tracking this down might help me find my problem, but it appears not. Please don't let the question below subtract from focus on the more important question above, about VLC not being able to open the stream in the MF_H264_DeviceSource case when it can in the ByteStreamFileSource case. I know VLC isn't yours. The question is about how Live555 is reacting to my code, not about VLC. It appears that BasicTaskScheduler0::doEventLoop is being called recursively. SPIIROnDemandRTSPServer (derived from testOnDemandRTSPServer) calls it. Then, when an incoming request event occurs, it ends up being called again. This causes the location of the watchVariable to be changed for this recursive instance. When using ByteStreamFileSource, it seems to set this new watchVariable, and the recursive instance stops, returning to the original instance.
But in my MF_H264_DeviceSource case, where VLC causes an incoming request event yet can't connect for some reason, no such set of the watchVariable occurs, so the recursive instance won't stop, and when my mainline sets the watchVariable of the first instance of doEventLoop, it does nothing because the recursive instance is still running. Below is the call stack when the recursive doEventLoop() first begins.

>  Capture.exe!BasicTaskScheduler0::doEventLoop(char * watchVariable) Line 85  C++
   Capture.exe!H264VideoFileServerMediaSubsession::getAuxSDPLine(RTPSink * rtpSink, FramedSource * inputSource) Line 99  C++
   Capture.exe!OnDemandServerMediaSubsession::setSDPLinesFromRTPSink(RTPSink * rtpSink, FramedSource * inputSource, unsigned int estBitrate) Line 325  C++
   Capture.exe!OnDemandServerMediaSubsession::sdpLines() Line 68  C++
   Capture.exe!ServerMediaSession::generateSDPDescription() Line 233  C++
   Capture.exe!RTSPServer::RTSPClientConnection::handleCmd_DESCRIBE(const char * urlPreSuffix, const char * urlSuffix, const char * fullRequestStr) Line 398  C++
   Capture.exe!RTSPServer::RTSPClientConnection::handleRequestBytes(int newBytesRead) Line 759  C++
   Capture.exe!RTSPServer::RTSPClientConnection::incomingRequestHandler1() Line 623  C++
   Capture.exe!RTSPServer::RTSPClientConnection::incomingRequestHandler(void * instance, int __formal) Line 616  C++
   Capture.exe!BasicTaskScheduler::SingleStep(unsigned int maxDelayTime) Line 164  C++
   Capture.exe!BasicTaskScheduler0::doEventLoop(char * watchVariable) Line 88  C++
   Capture.exe!SPIIROnDemandRTSPServer::doEventLoop(char * watchVariable) Line 125  C++
   Capture.exe!OutputThreadFunction(void * lpParam) Line 113  C++
   kernel32.dll!BaseThreadInitThunk () Unknown
   ntdll.dll!RtlUserThreadStart () Unknown

On further analysis, its watchVariable is the fDoneFlag member of H264VideoFileServerMediaSubsession. I see this var initialized to zero, but can find nowhere that it's ever set. So I don't know how it's ever set in the ByteStreamFileSource case.
On Fri, Feb 15, 2013 at 6:43 PM, Ross Finlayson wrote: > Now I'm trying to reproduce my cobble, but for unicast and derived > from SPIIROnDemandRTSPServer.cpp instead. Exactly as before, my frames get > created and I call signalNewFrameData(), but then no frames are coming out > IP. With all this cobbling, I'm not sure I have all the lose ends > reconnected. Therefore, I'm interested in setting a breakpoint deep inside > Live555 somewhere. > > > No, you shouldn't be setting breakpoints 'deep inside LIVE555', because > that code is working just fine, and it's code that you're unfamiliar with. > Instead, you should be setting breakpoints in *your own* code. > Specifically, you should be setting a breakpoint in your "deliverFrame0()" > function - i.e., in the function that you passed as a parameter to the > "createEventTrigger()" call. That function is the function that should be > getting called after each event is triggered. > > Don't forget, of course, that all of this happens within the LIVE555 event > loop, so don't forget to call "doEventLoop()" in your application to start > the event loop running. > > > One other thing, both H264VideoDeviceStreamer.cpp and > SPIIROnDemandRTSPServer.cpp use a ByteStreamFileSource. In both cases I've > replaced that with my MF_H264_DeviceSource. > > > OK, but don't forget that - because your "MF_H264_DeviceSource" code > (presumably) delivers discrete H.264 NAL units - you must feed this into a > "H264VideoStreamDiscreteFramer", *not* a "H264VideoStreamFramer" (as was > used when the input was a byte stream). Also, the H.264 NAL units that > come out of your "MF_H264_DeviceSource" code *must not* include a preceding > 'start code' (i.e., must not start with 0x00 0x00 0x00 0x01). > > > Ross Finlayson > Live Networks, Inc. 
> http://www.live555.com/ > > > _______________________________________________ > live-devel mailing list > live-devel at lists.live555.com > http://lists.live555.com/mailman/listinfo/live-devel > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Sat Feb 16 17:48:57 2013 From: finlayson at live555.com (Ross Finlayson) Date: Sun, 17 Feb 2013 12:48:57 +1100 Subject: [Live-devel] Where can I trap and debug response to signalNewFrameData() ? In-Reply-To: References: <709ACF88-9831-4240-AEB2-CAFAEB70DA47@live555.com> Message-ID: <2CBA9CA2-7817-4CF1-9B57-2EB247EC8F8F@live555.com> First, you should *not* modify the existing "H264VideoFileServerMediaSubsession" code. Instead, write your own "OnDemandServerMediaSubsession" subclass, and use that instead. This is explained very clearly in the FAQ. > Perhaps I should tell you the actual problem! Sorry, but I don't have time to debug your code for free on this mailing list. If, however, your organization is interested in hiring me as a consultant on your project, then please let me know (by private email). Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Sat Feb 16 17:48:58 2013 From: finlayson at live555.com (Ross Finlayson) Date: Sun, 17 Feb 2013 12:48:58 +1100 Subject: [Live-devel] SimpleRTPSource and RTCP In-Reply-To: <1360986290.44260.YahooMailNeo@web15804.mail.cnb.yahoo.com> References: <1360986290.44260.YahooMailNeo@web15804.mail.cnb.yahoo.com> Message-ID: <3C5C83F1-4416-491D-A636-EB63DFB57666@live555.com> > I am trying to use the codes of stream "mpeg2TransportStreamFromUDPSourceTest" of testProgs/testOnDemandRTSPServer to read from a source which will send mpeg2ts over RTP, and the server do read data from RTP source, which is good. But I found that the RTSPServer has not a RTCP port. No, that's incorrect. 
When the RTSP server streams to any client, there will be one (even) port for RTP, and the next (i.e., odd) port for RTCP. > The class "MPEG2TransportUDPServerMediaSubsession" Seems don't have a RTCPInstance member. That's because it's automatically created by the superclass ("OnDemandServerMediaSubsession"). Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From zanglan at yahoo.com Sat Feb 16 22:00:13 2013 From: zanglan at yahoo.com (Lan Zang) Date: Sun, 17 Feb 2013 14:00:13 +0800 (CST) Subject: [Live-devel] SimpleRTPSource and RTCP In-Reply-To: <3C5C83F1-4416-491D-A636-EB63DFB57666@live555.com> References: <1360986290.44260.YahooMailNeo@web15804.mail.cnb.yahoo.com> <3C5C83F1-4416-491D-A636-EB63DFB57666@live555.com> Message-ID: <1361080813.89232.YahooMailNeo@web15808.mail.cnb.yahoo.com> Ross, I tested the testProgs/testOnDemandRTSPServer. It does not have the odd port for RTCP (i.e. only port 1234 is open). Did I miss anything? Regards, Lan Zang (Sander) ________________________________ From: Ross Finlayson To: LIVE555 Streaming Media - development & use Sent: Sunday, February 17, 2013 9:48 AM Subject: Re: [Live-devel] SimpleRTPSource and RTCP I am trying to use the codes of stream "mpeg2TransportStreamFromUDPSourceTest" of testProgs/testOnDemandRTSPServer to read from a source which will send mpeg2ts over RTP, and the server do read data from RTP source, which is good. But I found that the RTSPServer has not a RTCP port. No, that's incorrect. When the RTSP server streams to any client, there will be one (even) port for RTP, and the next (i.e., odd) port for RTCP. The class "MPEG2TransportUDPServerMediaSubsession" Seems don't have a RTCPInstance member. That's because it's automatically created by the superclass ("OnDemandServerMediaSubsession"). Ross Finlayson Live Networks, Inc.
http://www.live555.com/ _______________________________________________ live-devel mailing list live-devel at lists.live555.com http://lists.live555.com/mailman/listinfo/live-devel -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Sat Feb 16 22:15:09 2013 From: finlayson at live555.com (Ross Finlayson) Date: Sun, 17 Feb 2013 16:15:09 +1000 Subject: [Live-devel] SimpleRTPSource and RTCP In-Reply-To: <1361080813.89232.YahooMailNeo@web15808.mail.cnb.yahoo.com> References: <1360986290.44260.YahooMailNeo@web15804.mail.cnb.yahoo.com> <3C5C83F1-4416-491D-A636-EB63DFB57666@live555.com> <1361080813.89232.YahooMailNeo@web15808.mail.cnb.yahoo.com> Message-ID: <23EF6256-7EE6-4ED1-8FAB-5F1B176AD49F@live555.com> > I tested the testProgs/testOnDemandRTSPServer. It does not have the odd port for RTCP (i.e. only port 1234 is open). OK, I misunderstood your question. I thought you were asking about the *output* from the RTSP server. But yes, you're right. We should really be creating a "RTCPInstance" as well as the "SimpleRTPSource". But in this case it doesn't really matter, because we don't need accurate 'presentation times' for these packets. Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From cloverobert at gmail.com Sun Feb 17 07:38:33 2013 From: cloverobert at gmail.com (Robert Clove) Date: Sun, 17 Feb 2013 21:08:33 +0530 Subject: [Live-devel] Is there any way to know socket no in Proxy server Message-ID: Hi Ross, Is there any way to know the TCP socket number on which the proxy server is receiving the stream from the back-end RTSP server, when the -t option is given in the URL that is passed to the proxy server? Thanks -------------- next part -------------- An HTML attachment was scrubbed...
URL: From gauravb at interfaceinfosoft.com Mon Feb 18 05:42:06 2013 From: gauravb at interfaceinfosoft.com (Gaurav Badhan) Date: Mon, 18 Feb 2013 19:12:06 +0530 Subject: [Live-devel] Please help me in following problems In-Reply-To: References: Message-ID: Hello Ross, Please help me with the following problems: Can you please tell me when the proxy server sends the TEARDOWN request to the back-end RTSP server, and when the proxy server sends the PAUSE request to the back-end RTSP server? Sometimes when I delete a server media session I see that the proxy server sends the TEARDOWN request:

ProxyServerMediaSession["rtsp://192.168.15.192:8554/STREAMER/"]::~ProxyServerMediaSession()
Sending request: TEARDOWN rtsp://192.168.15.192:8554/STREAMER/ RTSP/1.0
CSeq: 6
User-Agent: ProxyRTSPClient (LIVE555 Streaming Media v2013.01.25)
Session: 8DA7818C

and sometimes it sends the PAUSE request:

ProxyServerMediaSubsession["MP2T"]::closeStreamSource()
Sending request: PAUSE rtsp://192.168.15.192:8554/STREAMER/ RTSP/1.0
CSeq: 5
User-Agent: ProxyRTSPClient (LIVE555 Streaming Media v2013.01.25)
Session: EE8585A1

One more thing: after sending the TEARDOWN request it should close the socket on which the proxy server is receiving data, but the connection seems to remain established. How do I close the sockets? How can the proxy server be made to send the TEARDOWN request each time? Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From Jesse.Hemingway at nerdery.com Mon Feb 18 20:20:29 2013 From: Jesse.Hemingway at nerdery.com (Jesse Hemingway) Date: Mon, 18 Feb 2013 22:20:29 -0600 Subject: [Live-devel] Interpreting Presentation Time Message-ID: Sorry if this is a stupid question, but I can't fully understand how Live555's Presentation Times are to be applied, *aside* from synchronizing parallel media streams. Is there any way to use these presentation times to determine the receiver's time offset from the server?
In my case, I'm trying to achieve stable, consistent and, most importantly, low-latency streaming. I've pulled this off fairly well under controlled conditions, but I'm still a bit mystified about how (or whether) it's possible to relate presentation times as computed for afterGettingFrame() to absolute time, as the server understands it. This would help calculate round-trip latency or ask the server to skip ahead when the client falls behind due to network or other delays. I know that SR's and RR's have something to do with it and Live555 'takes care of it' for me, but I'd like to understand more and can't seem to find the resources that would explain it clearly. Can anyone point me to a resource (other than the IETF docs which I've already read and somewhat understand)? Right now my method involves recording the presentation time at the first moment that synchronization occurs, and using that as my 'absolute' server-client time offset - but this is obviously a flawed approach. Thanks, Jesse -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Mon Feb 18 23:27:50 2013 From: finlayson at live555.com (Ross Finlayson) Date: Tue, 19 Feb 2013 17:27:50 +1000 Subject: [Live-devel] Interpreting Presentation Time In-Reply-To: References: Message-ID: <55939621-0FF6-4E93-8A07-877079DBF048@live555.com> > Sorry if this is a stupid question, but I can't fully understand how Live555's Presentation Times are to be applied, *aside* from synchronizing parallel media streams. That's correct. The presentation time indicates the time of each frame, relative to other frames of the same media type, and to frames of other media type(s) (if any). They are used to render each frame at the appropriate time (again, relative to other frames of the same media type, and to frames of other media type(s) (if any)). > Is there any way to use these presentation times to determine the receiver's time offset from the server? No.
Although the presentation times are generated by the server, they are not 'absolute' times, because you don't know what 'absolute' clock, if any, the server's clock is synchronized to. > I'm still a bit mystified about how (or whether) it's possible to relate presentation times as computed for afterGettingFrame() to absolute time, as the server understands it. It's not possible (without extensions to RTP/RTCP that we don't currently support). But it's rarely something that you really need. (See below.) > This would help calculate round-trip latency or ask the server to skip ahead when the client falls behind due to network or other delays. Do you really need to know how much time the receiver/player is behind the sender? You usually shouldn't - unless you have *multiple* receivers (e.g., in the same room) that you need to keep synchronized with each other. (As I noted above, there are RTP/RTCP extensions - currently being standardized by the IETF - to handle this situation, but it's not something that I plan to support any time soon, and it's rarely something that people really need.) > I know that SR's and RR's have something to do with it Our *server* uses its received "RR" packets (from each receiver) to estimate the round-trip time to the receiver (note the function "RTPTransmissionStats::roundTripTime()"). However, this functionality is not available to receivers. Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From felix at embedded-sol.com Tue Feb 19 01:32:50 2013 From: felix at embedded-sol.com (Felix Radensky) Date: Tue, 19 Feb 2013 11:32:50 +0200 Subject: [Live-devel] Streaming live PCM audio Message-ID: <512346C2.4030304@embedded-sol.com> Hi, I have problems streaming live PCM audio. Audio comes either directly from a microphone (16-bit LE) or from a hardware encoder (A-LAW). Sampling frequency is 8k, mono.
In both cases I receive a buffer (320 bytes for 16-bit, 160 bytes for 8-bit) in the thread interfacing with the hardware, and use an event trigger to signal my live555 thread. I've created my own class based on the DeviceSource template, which sets fFrameSize and fPresentationTime (using gettimeofday()) and delivers the buffer to SimpleRTPSink via deliverFrame(). 16-bit data is also converted to BE. I'm setting OutPacketBuffer::maxSize to 20480. I know that the audio is correct, because if I save it to a file instead of sending it to the RTPSink, convert it to WAV and stream it via testWAVAudioStreamer, VLC reproduces it correctly. I also know that my DeviceSource class is correct, because I use it successfully to stream live H.264 video. However, with PCM audio, VLC correctly recognizes the stream format, sampling frequency and number of channels, but produces noisy output that cuts out frequently. I get the following errors in VLC:

main error: ES_OUT_RESET_PCR called
main warning: PTS is out of range (-9982), dropping buffer
main warning: buffer too early (-55875), down-sampling
main warning: timing screwed, stopping resampling
main warning: buffer too early (-86375), down-sampling
main warning: resampling stopped after 256448 usec (drift: 58884)
main warning: buffer too early (-56084), down-sampling
main warning: timing screwed, stopping resampling
main warning: buffer way too early (-125031), clearing queue
main warning: buffer too late (60316), up-sampling
main warning: resampling stopped after 610634 usec (drift: -7483)
main warning: buffer too early (-54829), down-sampling
main warning: timing screwed, stopping resampling
main warning: buffer too early (-80356), down-sampling
main warning: buffer way too early (-133937), clearing queue
main warning: timing screwed, stopping resampling
main warning: buffer too early (-97938), down-sampling
main warning: buffer way too early (-150441), clearing queue
main warning: timing screwed, stopping resampling

I'm currently out of ideas as to what I'm doing wrong.
Any help would be greatly appreciated. Thanks. Felix. -------------- next part -------------- An HTML attachment was scrubbed... URL: From temp2010 at forren.org Tue Feb 19 10:31:05 2013 From: temp2010 at forren.org (temp2010 at forren.org) Date: Tue, 19 Feb 2013 13:31:05 -0500 Subject: [Live-devel] Where can I trap and debug response to signalNewFrameData() ? In-Reply-To: <2CBA9CA2-7817-4CF1-9B57-2EB247EC8F8F@live555.com> References: <709ACF88-9831-4240-AEB2-CAFAEB70DA47@live555.com> <2CBA9CA2-7817-4CF1-9B57-2EB247EC8F8F@live555.com> Message-ID: Ross, I apologize for offending you. It was not intended. A snafu led me from one piece of advice in the FAQ down the trail I was following, and I didn't realize there was other germane advice in the FAQ that I missed. Meanwhile, my mod of the existing code was only a tentative attempt at trapping what was going on in order to see if the end objective would work. Finally, I wasn't asking you to debug my code, but simply sent you a copy for context if you needed it. Well, per the FAQ, I already had a working FramedSource subclass, and I went on to derive my own class from OnDemandServerMediaSubsession using H264VideoFileServerMediaSubsession as a model. It turns out that this indeed puts me exactly where I was before, but now with my own derived code doing the same things that the original code I hooked was doing. And the result is the same, as one might expect to be the case. When I try to connect with VLC, it fails to connect. I don't understand what's going on or why this happens. I'm seeking advice as to either. I don't understand what actually constitutes this failure to connect. When VLC tries to connect, my own createNewStreamSource() and createNewRTPSink() get called, taking advantage of my own MF_H264_DeviceSource() and the same ol' H264VideoRTPSink. All return codes seem fine to me. But VLC reports that it's unable to connect and tries again, over and over.
The only thing I can fathom is that the env or taskscheduler isn't connected up properly. But I haven't been able to get to the bottom of this yet. I ask, is this possibly the reason? Or is there some other thing I totally don't understand? Again, your advice is greatly appreciated. -Forren On Sat, Feb 16, 2013 at 8:48 PM, Ross Finlayson wrote: > First, you should *not* modify the existing > "H264VideoFileServerMediaSubsession" code. Instead, write your own > "OnDemandServerMediaSubsession" subclass, and use that instead. This is > explained very clearly in the FAQ. > > > Perhaps I should tell you the actual problem! > > > Sorry, but I don't have time to debug your code for free on this mailing > list. If, however, your organization is interested in hiring me as a > consultant on your project, then please let me know (by private email). > > Ross Finlayson > Live Networks, Inc. > http://www.live555.com/ > > > _______________________________________________ > live-devel mailing list > live-devel at lists.live555.com > http://lists.live555.com/mailman/listinfo/live-devel > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Tue Feb 19 16:51:03 2013 From: finlayson at live555.com (Ross Finlayson) Date: Wed, 20 Feb 2013 10:51:03 +1000 Subject: [Live-devel] Streaming live PCM audio In-Reply-To: <512346C2.4030304@embedded-sol.com> References: <512346C2.4030304@embedded-sol.com> Message-ID: > I have problems streaming live PCM audio. Audio comes either directly > from microphone (16-bit LE) or from hardware encoder (A-LAW). The problem is that a-law audio is *not* PCM, and therefore has a different RTP payload format (if a-law audio is what you're sending). Specifically, if you're streaming a-law audio, then when you create your "SimpleRTPSink" object, the "rtpPayloadFormatName" parameter should be "PCMA" (and, of course, the "sdpMediaTypeString" parameter will be "audio"). 
Because a-law audio is 8 bits per sample, you don't do any byte swapping. If, on the other hand, you are converting the audio from (8-bit) a-law to (16-bit) PCM before streaming it, then you need to (1) make sure that the 16-bit audio is in big-endian order, and (2) use "L16" as the "rtpPayloadFormatName" parameter when you create your "SimpleRTPSink". > I know that audio is correct, because if I save it to file instead of sending to > RTPSink, convert to WAV and stream via testWAVAudioStreamer, VLC reproduces > it correctly. That works because (presumably) there's an appropriate header in the WAV file that tells "testWAVAudioStreamer" what kind of audio this is. If you run "testRTSPClient" on the stream, and look at the SDP description, you'll see the proper RTP payload format name ("PCMA" or "L16") for this audio. Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From chensc at corp.21cn.com Mon Feb 18 23:56:25 2013 From: chensc at corp.21cn.com (chensc) Date: Tue, 19 Feb 2013 15:56:25 +0800 Subject: [Live-devel] building live555 in VS 2010 Message-ID: <000601ce0e76$a02d82b0$e0888810$@corp.21cn.com> Hello, all, I would like to build live555 in VS 2010, and I think someone has already done so; could you send me your VC10 project? Thank you very much! Best regards, Oscar -------------- next part -------------- An HTML attachment was scrubbed... URL: From amar.sontakke1988 at gmail.com Tue Feb 19 01:59:00 2013 From: amar.sontakke1988 at gmail.com (amar sontakke) Date: Tue, 19 Feb 2013 15:29:00 +0530 Subject: [Live-devel] How to make video call from sip phone to rtsp server Message-ID: Hi, I am new to liveMedia. I want to set up an RTP/RTSP server for real-time audio-video, and want to make a video call from a SIP phone to the server for live audio-video streaming. Is it possible using liveMedia? If yes, then how can I do this? Thanks in advance. Regards, Amar.
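On the byte-order point in the "Streaming live PCM audio" reply above: the "L16" RTP payload carries 16-bit samples in network (big-endian) order, so little-endian capture buffers must have their bytes swapped before being handed to the sink, while 8-bit a-law ("PCMA") needs no swapping. A minimal sketch, independent of live555 (the function name is mine, not part of the library):

```cpp
#include <cstdint>
#include <cstddef>

// Swap each 16-bit PCM sample from little-endian (typical capture order)
// to big-endian (network order), as required by the "L16" RTP payload.
// A-law ("PCMA") is 8 bits per sample, so no swapping is needed there.
void pcm16ToNetworkOrder(int16_t* samples, size_t count) {
  for (size_t i = 0; i < count; ++i) {
    uint16_t s = static_cast<uint16_t>(samples[i]);
    uint16_t swapped = static_cast<uint16_t>((s << 8) | (s >> 8));
    samples[i] = static_cast<int16_t>(swapped);
  }
}
```

This would be run over each capture buffer just before delivering it to the sink.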
-------------- next part -------------- An HTML attachment was scrubbed... URL: From hinavinmath at gmail.com Tue Feb 19 02:02:51 2013 From: hinavinmath at gmail.com (Navin Math) Date: Tue, 19 Feb 2013 15:32:51 +0530 Subject: [Live-devel] Play same same stream in all clients Message-ID: Hi, I have live555MediaServer.exe and a 1.mp3 file in a folder. I run live555MediaServer.exe and play 1.mp3 in VLC player with the address rtsp:///1.mp3. Say 1.mp3 is a 4-minute song. I am able to play it successfully; suppose the song has now reached the 2nd minute, with 2 more minutes left. Now I open rtsp:///1.mp3 in a second instance of VLC player. The second VLC player starts playing 1.mp3 from the beginning (the 0th second) instead of from the 2nd minute. I want the second VLC player to play the song from the 2nd minute, since the first VLC player has already reached the 2-minute mark. I want both VLC players to stream the same data, in sync. Is it possible? If yes, what is the option? Which binary do I have to use? Thanks Navin -------------- next part -------------- An HTML attachment was scrubbed... URL: From vinodh.g.us at gmail.com Tue Feb 19 05:46:22 2013 From: vinodh.g.us at gmail.com (Vinodh Kumar) Date: Tue, 19 Feb 2013 19:16:22 +0530 Subject: [Live-devel] Regarding start streaming of video at given postion(time). Message-ID: Hi there, I have the following requirement: I start the live555 server for streaming video, and client1 connects and starts receiving the streamed video. If client2 then connects to the live555 server to receive the streamed video, client2 should receive the streamed video from the position where client1 is watching. Can you get back to me on this, and on how it can be implemented? Regards, Vinodh -------------- next part -------------- An HTML attachment was scrubbed...
URL: From cloverobert at gmail.com Tue Feb 19 07:28:13 2013 From: cloverobert at gmail.com (Robert Clove) Date: Tue, 19 Feb 2013 20:58:13 +0530 Subject: [Live-devel] Can You Please only if possible Message-ID: Hello, Sorry if I am wrong, but could you please give me pseudo code for the following: "Another possible way to access the code from multiple threads is to have each thread use its own "UsageEnvironment" and "TaskScheduler" objects, and thus its own event loop. The objects created by each thread (i.e., using its own "UsageEnvironment") must not interact (except via global variables)." How can I do this? -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Tue Feb 19 18:26:10 2013 From: finlayson at live555.com (Ross Finlayson) Date: Wed, 20 Feb 2013 12:26:10 +1000 Subject: [Live-devel] Where can I trap and debug response to signalNewFrameData() ? In-Reply-To: References: <709ACF88-9831-4240-AEB2-CAFAEB70DA47@live555.com> <2CBA9CA2-7817-4CF1-9B57-2EB247EC8F8F@live555.com> Message-ID: <3239C0D0-D709-4C0F-826B-62CB053283FC@live555.com> I suspect that your input source does not contain SPS and PPS NAL units. Our server code needs to see these (to generate stream 'configuration' information) before it can initialize completely. Therefore, you should either (1) make sure that your input stream contains SPS and PPS NAL units (preferably at or near the start), or (2) change your "createNewRTPSink()" implementation to use one of the alternative forms of "H264VideoRTPSink::createNew()" that take SPS and PPS NAL unit data as parameters. Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed...
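Option (1) in the reply above can be checked quickly: in an Annex-B H.264 byte stream, SPS and PPS are NAL units of type 7 and 8 respectively, and the type is the low 5 bits of the byte following each start code. A small, live555-independent scanner sketch (the function name is mine):

```cpp
#include <cstdint>
#include <cstddef>
#include <vector>

// Scan an Annex-B H.264 byte stream for start codes (00 00 01, optionally
// preceded by a further 00) and collect the nal_unit_type of each NAL unit.
// SPS = 7, PPS = 8; an IDR slice is 5.
std::vector<int> nalUnitTypes(const uint8_t* data, size_t size) {
  std::vector<int> types;
  for (size_t i = 0; i + 3 < size; ++i) {
    if (data[i] == 0 && data[i + 1] == 0 && data[i + 2] == 1) {
      types.push_back(data[i + 3] & 0x1F);  // low 5 bits of the NAL header byte
      i += 2;  // skip past this start code
    }
  }
  return types;
}
```

Running this over the first few kilobytes of a captured stream would show whether types 7 and 8 appear before the first IDR slice.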
URL: From Jesse.Hemingway at nerdery.com Tue Feb 19 06:35:33 2013 From: Jesse.Hemingway at nerdery.com (Jesse Hemingway) Date: Tue, 19 Feb 2013 08:35:33 -0600 Subject: [Live-devel] Interpreting Presentation Time In-Reply-To: <55939621-0FF6-4E93-8A07-877079DBF048@live555.com> References: <55939621-0FF6-4E93-8A07-877079DBF048@live555.com> Message-ID: I see - thank you. That was a bit of a sanity check. In our case, we need to keep server-to-client latency low and predictable. Our use case is indeed like the exception you mention, with multiple receivers in the same 'room' that need to stay synced, i.e. with requirements similar to those of VoIP applications. After reading the specs, I already know that having a simple 'presentation time' (and, if I read this post correctly, adjusted Normal Time) is a big convenience. But since I'm sure it's far from trivial to alter Live555 to support these new RTP/RTCP extensions, it appears the only viable alternative would be to add another proprietary, out-of-band connection to exchange ping information. Another thing I'm unclear on is who is buffering this data when the client falls behind. Let's say the client has a small socket buffer, so it can obviously only take in so much before discarding. Even in this case, I notice that the client can continue to play back at a much slower rate for an indeterminate period of time, with UDP packet loss similar to what it would have if it were keeping up. This holds true against at least a couple of different RTSP servers in my tests. Does a typical RTSP server buffer large amounts of outgoing data, or is this again a matter of how it's configured? It looks like the RR contains the highest sequence number received, as well as some timing info, so it seems the server at least would know that the client has fallen behind. Since the server already has the info, maybe it makes more sense to have the server automatically drop data, skip ahead, or adjust bandwidth requirements to maintain latency.
In which case, I've been thinking about this thing backwards. -Jesse On Tue, Feb 19, 2013 at 1:27 AM, Ross Finlayson wrote: > Sorry if this is a stupid question, but I can't fully understand how > Live555's Presentation Times are to be applied, *aside* from synchronizing > parallel media streams. > > > That's correct. The presentation time indicates the time of each frame, > relative to other frames of the same media type, and to frames of other > media type(s) (if any). They are used to render each frame at the > appropriate time (again, relative to other frames of the same media type, > and to frames of other media type(s) (if any)). > > > Is there any way to use these presentation times to determine the > receiver's time offset from the server? > > > No. Although the presentation times are generated by the server, they are > not 'absolute' times, because you don't know what 'absolute' clock, if any, > the server's clock is synchronized to. > > > I'm still a bit mystified about how (or whether) it's possible to relate > presentation times as computed for afterGettingFrame() to absolute time, as > the server understands it. > > > It's not possible (without extensions to RTP/RTCP that we don't currently > support). But it's rarely something that you really need. (See below.) > > > This would help calculate round-trip latency or ask the server to skip > ahead when the client falls behind due to network or other delays. > > > Do you really need to know much time the receiver/player is behind the > sender? You usually shouldn't - unless you have *multiple* receivers > (e.g., in the same room) that you need to keep synchronized with each > other. (As I noted above, there are RTP/RTCP extensions - currently being > standardized by the IETF - to handle this situation, but it's not something > that I plan to support any time soon, and it's rarely something that people > really need.) 
> > > I know that SR's and RR's have something to do with it > > > Our *server* uses its received "RR" packets (from each receiver) to > estimate the round-trip time to the receiver (note the function > "RTPTransmissionStats::roundTripTime()"). However, this functionality is > not available to receivers. > > > Ross Finlayson > Live Networks, Inc. > http://www.live555.com/ > > > _______________________________________________ > live-devel mailing list > live-devel at lists.live555.com > http://lists.live555.com/mailman/listinfo/live-devel > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Jesse.Hemingway at nerdery.com Tue Feb 19 15:40:51 2013 From: Jesse.Hemingway at nerdery.com (Jesse Hemingway) Date: Tue, 19 Feb 2013 17:40:51 -0600 Subject: [Live-devel] Interpreting Presentation Time In-Reply-To: References: <55939621-0FF6-4E93-8A07-877079DBF048@live555.com> Message-ID: I realized that there is semi-documented support for manually getting ping times via RTSP, by sending an empty GET_PARAMETERS command. The round-trip latency can be measured from the response time. Apparently this is problematic on certain servers, which do not respond to that command after the PLAY command has occurred, so YMMV. Thanks, Jesse On Tue, Feb 19, 2013 at 8:35 AM, Jesse Hemingway < Jesse.Hemingway at nerdery.com> wrote: > I see - thank you. That was a bit of a sanity check. In our case, we > need to keep server-to-client latency low and predictable. Our use case is > indeed like the exception you mention, with multiple receivers in the same > 'room' that need to stay synced, i.e. having similar requirements as VoIP > applications. > > After reading the specs, I already know that having a simple 'presentation > time' (and if I read this post correctly, > adjusted Normal Time) is a big convenience. 
But since I'm sure it's far > from trivial to alter Live555 to support these new RTP/RTCP extensions, it > appears the only viable alternative would be to add another proprietary, > out-of-band connection to exchange ping information. > > Another thing I'm unclear on is who is buffering this data when the client > falls behind. Let's say the client has a small socket buffer, so it can > obviously only take in so much before discarding. Even in this case, I > notice that the client can continue to play back at a much slower rate for > an indeterminate period of time, with similar UDP packet loss as it would > have if it was keeping up. This attains against at least a couple > different RTSP servers in my tests. Does a typical RTSP server buffer > large amounts of outgoing data, or is this again a matter of how it's > configured? > > It looks like the RR contains the highest sequence number received, as > well as some timing info, so then it seems the server at least would know > that the client has fallen behind. Since the server already has the info, > maybe it makes more sense to have the server automatically drop data, skip > ahead or adjust bandwidth requirements to maintain latency. In which case, > I've been thinking about this thing backwards. > > -Jesse > > > On Tue, Feb 19, 2013 at 1:27 AM, Ross Finlayson wrote: > >> Sorry if this is a stupid question, but I can't fully understand how >> Live555's Presentation Times are to be applied, *aside* from synchronizing >> parallel media streams. >> >> >> That's correct. The presentation time indicates the time of each frame, >> relative to other frames of the same media type, and to frames of other >> media type(s) (if any). They are used to render each frame at the >> appropriate time (again, relative to other frames of the same media type, >> and to frames of other media type(s) (if any)). >> >> >> Is there any way to use these presentation times to determine the >> receiver's time offset from the server? 
>> >> >> No. Although the presentation times are generated by the server, they >> are not 'absolute' times, because you don't know what 'absolute' clock, if >> any, the server's clock is synchronized to. >> >> >> I'm still a bit mystified about how (or whether) it's possible to relate >> presentation times as computed for afterGettingFrame() to absolute time, as >> the server understands it. >> >> >> It's not possible (without extensions to RTP/RTCP that we don't currently >> support). But it's rarely something that you really need. (See below.) >> >> >> This would help calculate round-trip latency or ask the server to skip >> ahead when the client falls behind due to network or other delays. >> >> >> Do you really need to know much time the receiver/player is behind the >> sender? You usually shouldn't - unless you have *multiple* receivers >> (e.g., in the same room) that you need to keep synchronized with each >> other. (As I noted above, there are RTP/RTCP extensions - currently being >> standardized by the IETF - to handle this situation, but it's not something >> that I plan to support any time soon, and it's rarely something that people >> really need.) >> >> >> I know that SR's and RR's have something to do with it >> >> >> Our *server* uses its received "RR" packets (from each receiver) to >> estimate the round-trip time to the receiver (note the function >> "RTPTransmissionStats::roundTripTime()"). However, this functionality is >> not available to receivers. >> >> >> Ross Finlayson >> Live Networks, Inc. >> http://www.live555.com/ >> >> >> _______________________________________________ >> live-devel mailing list >> live-devel at lists.live555.com >> http://lists.live555.com/mailman/listinfo/live-devel >> >> > > -------------- next part -------------- An HTML attachment was scrubbed... 
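The GET_PARAMETER "ping" idea in this thread reduces to timestamping a request just before it is sent and again when its response arrives. A live555-independent sketch of just the timing (the sendPing stand-in is hypothetical; with live555 one would presumably time RTSPClient::sendGetParameterCommand() against its response handler firing):

```cpp
#include <chrono>
#include <functional>
#include <thread>

// Measure round-trip time by timestamping just before a request goes out
// and just after its response is received. 'sendPing' is a hypothetical
// stand-in for issuing an empty RTSP GET_PARAMETER and blocking until
// the reply arrives.
double measureRttMs(const std::function<void()>& sendPing) {
  auto t0 = std::chrono::steady_clock::now();
  sendPing();  // blocks until the response is received
  auto t1 = std::chrono::steady_clock::now();
  return std::chrono::duration<double, std::milli>(t1 - t0).count();
}
```

For a rough self-test, passing a lambda that sleeps 10 ms should report an RTT of roughly 10 ms or more; with a real client, the lambda would wrap the actual request/response exchange.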
URL: From finlayson at live555.com Tue Feb 19 19:36:34 2013 From: finlayson at live555.com (Ross Finlayson) Date: Wed, 20 Feb 2013 13:36:34 +1000 Subject: [Live-devel] How to make video call from sip phone to rtsp server In-Reply-To: References: Message-ID: > I am new in live media. I want to setup rtp/rtsp server for real time audio-video and want to make video call from sip phone to the server for live audio-video streaming. > Is it possible using livemedia ? No, because our server code does not implement SIP. Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Tue Feb 19 19:39:10 2013 From: finlayson at live555.com (Ross Finlayson) Date: Wed, 20 Feb 2013 13:39:10 +1000 Subject: [Live-devel] Play same same stream in all clients In-Reply-To: References: Message-ID: <22D04C5B-6659-402D-BD38-9466A4465AD9@live555.com> > I am successfully able to play the 1.mp3 song, consider the song is reached now 2nd min, still 2 more min left. Now, I run the rtsp:///1.mp3 in second instance of VLC player. > > In the second VLC player, it starts playing 1.mp3 song from beginning (starting 0th sec) instead of 2nd min. > > I want this second VLC player should play the song from 2nd min, since the first vlc player reached 2 min already. > > I wanted both VLC player stream same data, both should be sync. > > Is it possible Yes, you can get this behavior by changing the constant "reuseSource" (in the file "mediaServer/DynamicRTSPServer.cpp", line 97) from False to True. Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Tue Feb 19 19:39:44 2013 From: finlayson at live555.com (Ross Finlayson) Date: Wed, 20 Feb 2013 13:39:44 +1000 Subject: [Live-devel] Regarding start streaming of video at given postion(time). 
In-Reply-To: References: Message-ID: > I Have a requirement like starting live555server for streaming video, then client1 connects ands starts receiving the streamed video, if client2 connects to live555server for receiving the streamed video, the client2 should receive streamed video from the position were client1 is watching, can you get back to me on this, and how this can be implemented. Yes, see my answer to the previous questioner; he's apparently a classmate of yours. Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From cloverobert at gmail.com Tue Feb 19 22:00:58 2013 From: cloverobert at gmail.com (Robert Clove) Date: Wed, 20 Feb 2013 11:30:58 +0530 Subject: [Live-devel] building live555 in VS 2010 In-Reply-To: <000601ce0e76$a02d82b0$e0888810$@corp.21cn.com> References: <000601ce0e76$a02d82b0$e0888810$@corp.21cn.com> Message-ID: Hi, here are the steps to compile:
1. Modify the line "TOOLS32 = ..." in win32config to point to the VS2005 install directory on your host machine. For example, "TOOLS32 = C:\Program Files\Microsoft Visual Studio 8\VC" corresponds to my desktop's configuration.
2. Modify the line "LINK_OPTS_0 = $(linkdebug) msvicrt.lib" in win32config to "LINK_OPTS_0 = $(linkdebug) msvcrt.lib"
3. Go to the liveMedia and other directories and run the following commands in a VS command prompt:
cd liveMedia
nmake /B -f liveMedia.mak
cd ../groupsock
nmake /B -f groupsock.mak
cd ../UsageEnvironment
nmake /B -f UsageEnvironment.mak
cd ../BasicUsageEnvironment
nmake /B -f BasicUsageEnvironment.mak
cd ../testProgs
nmake /B -f testProgs.mak
cd ../mediaServer
nmake /B -f mediaServer.mak
Thanks On Tue, Feb 19, 2013 at 1:26 PM, chensc wrote: > Hello, all, > > I would like to building live555 in VS 2010, and I think someone have > done, so, who can send me your VC10 project?
Thank you very much! > > Best regards, > > Oscar > > _______________________________________________ > live-devel mailing list > live-devel at lists.live555.com > http://lists.live555.com/mailman/listinfo/live-devel > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From temp2010 at forren.org Wed Feb 20 01:20:53 2013 From: temp2010 at forren.org (temp2010 at forren.org) Date: Wed, 20 Feb 2013 04:20:53 -0500 Subject: [Live-devel] Where can I trap and debug response to signalNewFrameData() ? In-Reply-To: <3239C0D0-D709-4C0F-826B-62CB053283FC@live555.com> References: <709ACF88-9831-4240-AEB2-CAFAEB70DA47@live555.com> <2CBA9CA2-7817-4CF1-9B57-2EB247EC8F8F@live555.com> <3239C0D0-D709-4C0F-826B-62CB053283FC@live555.com> Message-ID: Ross, Thanks. Cool... I figured out this exact same problem on my own yesterday, perhaps 5 hours before your email. I got up in the middle of the night tonight and implemented a fix. I just got it working... yeah!... and went to email you back. Then I found your email. I was proud of myself for figuring it out. I had to add or re-enable a bunch of tracing, comparing the working file-based case to the failing my-device-stream case (sorry, I had to temporarily punch a bunch of code to do this). But this helped me find it. After all, there were no positive indicators (aka warning messages) about this absence, so it could have been any one of a number of problems. You, of course, are very familiar with the software, so you nailed it on the first suggestion! The problem was that my live stream starts before the [VLC] client connects, and so the SPS and PPS NAL units have long passed by before the Live555 code begins looking for them. So, I figured out how to add a callback function pointer to my device, so that when it's created it calls this callback, which posts a message that starts up my live stream. Thus, the live stream won't start until after the [VLC] client connects.
This fixed the problem. The Live555 code sees the SPS and PPS NAL units and begins working. Thanks for your assistance nevertheless. If I hadn't had a sudden breakthrough on my own yesterday, your advice now would have made it happen for me today. And your prior assistance got me up to the pre-breakthrough point as well. NEXT, I'm back on to my horrible video quality. You may recall an old post from me about it, from my first otherwise-working cobble, and then my confusion about double encoding (which turned out not to be happening). It simply MUST BE a problem with my stream origination. I'm going to go search for some H.264 analysis tools, and I'll have to better educate myself on it. FYI, I'm using Microsoft Media Foundation in the stream origination, and using their H.264 encoding function. Something must be wrong with the setup or use, because the quality problem can't be on the Live555 side (my code included), since the from-file version works beautifully. On Tue, Feb 19, 2013 at 9:26 PM, Ross Finlayson wrote: > I suspect that your input source does not contain SPS and PPS NAL units. > Our server code needs to see these (to generate stream 'configuration' > information) before it can initialize completely. > > Therefore, you should either (1) make sure that your input stream contains > SPS and PPS NAL units (preferably at or near the start), or (2) change your > "createNewRTPSink()" implementation to use one of the alternative forms of > "H264VideoRTPSink::createNew()" that take SPS and PPS NAL unit data as > parameters. > > > Ross Finlayson > Live Networks, Inc. > http://www.live555.com/ > > > _______________________________________________ > live-devel mailing list > live-devel at lists.live555.com > http://lists.live555.com/mailman/listinfo/live-devel > > -------------- next part -------------- An HTML attachment was scrubbed...
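The deferred-start arrangement described in this message can be sketched generically (all names here are mine, not live555's; in the real code the callback would post a message that starts the Media Foundation capture, so encoding begins only once the client has connected and SPS/PPS will be seen):

```cpp
#include <functional>

// A device source that notifies its creator the moment it comes into
// existence, so the live encoder can be started only after the client
// has connected and the source is ready to consume SPS/PPS and frames.
class DeviceSourceSketch {
public:
  using OnCreated = std::function<void()>;
  explicit DeviceSourceSketch(OnCreated onCreated) {
    if (onCreated) onCreated();  // tell the app to start the live stream now
  }
};
```

The key property is simply that stream startup is sequenced after source creation, rather than both being kicked off independently at program start.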
URL: From temp2010 at forren.org Wed Feb 20 06:34:08 2013 From: temp2010 at forren.org (temp2010 at forren.org) Date: Wed, 20 Feb 2013 09:34:08 -0500 Subject: [Live-devel] For raw H.264, live device source horribly ragged while byte stream file source beautiful Message-ID: I have a MF_H264_DeviceSource derived from DeviceSource, being used by H264VideoOnDemandServerMediaSubsession derived from OnDemandServerMediaSubsession, from top level XXXOnDemandRTSPServer derived from testOnDemandRTSPServer. Viewing my output with VLC, the output is HORRIBLY RAGGED. I took the same data being fed live to the MF_H264_DeviceSource and wrote it out to a file test.h264 instead. First, outside of Live555, I can convert my test.h264 to mp4 with a separate utility, and it plays BEAUTIFUL in Windows Media Player. Second, when I play my test.h264 through existing Live555 ByteStreamFileSource, being used by H264VideoFileServerMediaSubsession, from top level XXXOnDemandRTSPServer derived from testOnDemandRTSPServer, it views BEAUTIFUL on VLC. So it seems I must have some connectivity problem with my MF_H264_DeviceSource substitute for ByteStreamFileSource. But what? It mostly just passes frames through, so I doubt it's dropping them. Is it maybe a timestamp thing? What else might it be? Your advice is yet again greatly appreciated... -------------- next part -------------- An HTML attachment was scrubbed... URL: From markuss at sonicfoundry.com Wed Feb 20 08:00:12 2013 From: markuss at sonicfoundry.com (Markus Schumann) Date: Wed, 20 Feb 2013 16:00:12 +0000 Subject: [Live-devel] building live555 in VS 2010 In-Reply-To: <000601ce0e76$a02d82b0$e0888810$@corp.21cn.com> References: <000601ce0e76$a02d82b0$e0888810$@corp.21cn.com> Message-ID: <1ED2F9A76678E0428E90FB2B6F93672D0108C381@postal.sonicfoundry.net> I attached my VS2010 project and solution files with all sources removed. I left the directory structure intact - so it will be obvious where to place the sources. Enjoy Markus. 
From: live-devel-bounces at ns.live555.com [mailto:live-devel-bounces at ns.live555.com] On Behalf Of chensc Sent: Tuesday, February 19, 2013 1:56 AM To: live-devel at ns.live555.com Subject: [Live-devel] building live555 in VS 2010 Hello, all, I would like to building live555 in VS 2010, and I think someone have done, so, who can send me your VC10 project? Thank you very much! Best regards, Oscar -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: live555.zip Type: application/x-zip-compressed Size: 19426 bytes Desc: live555.zip URL: From felix at embedded-sol.com Wed Feb 20 08:10:35 2013 From: felix at embedded-sol.com (Felix Radensky) Date: Wed, 20 Feb 2013 18:10:35 +0200 Subject: [Live-devel] Streaming live PCM audio In-Reply-To: References: <512346C2.4030304@embedded-sol.com> Message-ID: <5124F57B.3050801@embedded-sol.com> An HTML attachment was scrubbed... URL: From temp2010 at forren.org Wed Feb 20 08:22:42 2013 From: temp2010 at forren.org (temp2010 at forren.org) Date: Wed, 20 Feb 2013 11:22:42 -0500 Subject: [Live-devel] For raw H.264, live device source horribly ragged while byte stream file source beautiful In-Reply-To: References: Message-ID: FYI, I've re-enabled tracing in H264VideoStreamParser::parse(), and am comparing the working ByteStreamFileSource to the failing MF_H264_DeviceSource, of essentially the same stream. So far I find that the working ByteStreamFileSource is causing the parse to update the presentation time every eight (8) NAL units, due to "(frame_num differs in value)". However, the failing MF_H264_DeviceSource is causing the parse to update the presentation time only every 64 (one example) NAL units. Thus I think it's squeezing presentation time so much it can't be rendered properly. So I'm trying to figure out where frame_num comes from and how my live send of the same stream might be messed up... 
Any other advice is greatly appreciated. On Wed, Feb 20, 2013 at 9:34 AM, temp2010 at forren.org wrote: > I have a MF_H264_DeviceSource derived from DeviceSource, being used > by H264VideoOnDemandServerMediaSubsession derived > from OnDemandServerMediaSubsession, from top level XXXOnDemandRTSPServer > derived from testOnDemandRTSPServer. > > Viewing my output with VLC, the output is HORRIBLY RAGGED. > > I took the same data being fed live to the MF_H264_DeviceSource and wrote > it out to a file test.h264 instead. > > First, outside of Live555, I can convert my test.h264 to mp4 with a > separate utility, and it plays BEAUTIFUL in Windows Media Player. > > Second, when I play my test.h264 through existing > Live555 ByteStreamFileSource, being used > by H264VideoFileServerMediaSubsession, from top level XXXOnDemandRTSPServer > derived from testOnDemandRTSPServer, it views BEAUTIFUL on VLC. > > So it seems I must have some connectivity problem with my > MF_H264_DeviceSource substitute for ByteStreamFileSource. But what? It > mostly just passes frames through, so I doubt it's dropping them. Is it > maybe a timestamp thing? What else might it be? > > Your advice is yet again greatly appreciated... > -------------- next part -------------- An HTML attachment was scrubbed... URL: From felix at embedded-sol.com Wed Feb 20 08:46:07 2013 From: felix at embedded-sol.com (Felix Radensky) Date: Wed, 20 Feb 2013 18:46:07 +0200 Subject: [Live-devel] Streaming live PCM audio In-Reply-To: <5124F57B.3050801@embedded-sol.com> References: <512346C2.4030304@embedded-sol.com> <5124F57B.3050801@embedded-sol.com> Message-ID: <5124FDCF.2080103@embedded-sol.com> An HTML attachment was scrubbed...
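For background on where frame_num comes from: per the H.264 spec, each slice header begins with first_mb_in_slice, slice_type, and pic_parameter_set_id, all Exp-Golomb (ue(v)) coded, followed by frame_num as a fixed-length field of log2_max_frame_num bits (log2_max_frame_num_minus4 + 4, taken from the SPS). A simplified, live555-independent sketch of that bit-level read, ignoring emulation-prevention bytes (names are mine):

```cpp
#include <cstdint>
#include <cstddef>

// Minimal MSB-first bit reader over a byte buffer.
struct BitReader {
  const uint8_t* data;
  size_t bitPos = 0;
  explicit BitReader(const uint8_t* d) : data(d) {}
  unsigned readBit() {
    unsigned b = (data[bitPos >> 3] >> (7 - (bitPos & 7))) & 1;
    ++bitPos;
    return b;
  }
  unsigned readBits(unsigned n) {
    unsigned v = 0;
    while (n--) v = (v << 1) | readBit();
    return v;
  }
  // Unsigned Exp-Golomb: count leading zeros z, then value = 2^z - 1 + next z bits.
  unsigned readUE() {
    unsigned zeros = 0;
    while (readBit() == 0) ++zeros;
    return (1u << zeros) - 1 + readBits(zeros);
  }
};

// Read frame_num from the start of a (simplified) slice header.
unsigned parseFrameNum(const uint8_t* sliceHeader, unsigned log2MaxFrameNum) {
  BitReader br(sliceHeader);
  br.readUE();  // first_mb_in_slice
  br.readUE();  // slice_type
  br.readUE();  // pic_parameter_set_id
  return br.readBits(log2MaxFrameNum);  // frame_num, fixed-length field
}
```

This makes the failure mode above concrete: parsing the same slice header with the wrong log2_max_frame_num (5 instead of 8) reads the wrong bits and yields a different frame_num, so "frame_num differs in value" fires far less often than it should.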
URL: From temp2010 at forren.org Wed Feb 20 09:29:12 2013 From: temp2010 at forren.org (temp2010 at forren.org) Date: Wed, 20 Feb 2013 12:29:12 -0500 Subject: [Live-devel] For raw H.264, live device source horribly ragged while byte stream file source beautiful In-Reply-To: References: Message-ID: ...I'm getting closer, but still hoping for a hint on how to solve the puzzle. In H264VideoStreamParser, log2_max_frame_num defaults to 5, but the SPS NAL unit sets it to 8. In the ByteStreamFileSource case, the SPS NAL occurs twice, with the stream seemingly restarting a moment after it starts in the first place. In my live MF_H264_DeviceSource case, there's a restart of sorts, and thus a re-default to 5, but my actual stream isn't restarting. So from there down, log2_max_frame_num is erroneously 5 rather than 8. This causes the wrong frame_num to get parsed out, leading to frame_num NOT differing, leading to overly compressed presentation time. So WHY is this restart happening? Do I simply save the SPS (and PPS) NAL's from my stream and drop a repeat in later? Or do I make the restart not happen? Or something else. This "restart", in detail, relates to checkForAuxSDPLine1() function, which seems to be simply checking for something to be OK. Then after it the real job starts. Thus the apparent restart... On Wed, Feb 20, 2013 at 11:22 AM, temp2010 at forren.org wrote: > FYI, I've re-enabled tracing in H264VideoStreamParser::parse(), and am > comparing the working ByteStreamFileSource to the > failing MF_H264_DeviceSource, of essentially the same stream. So far I > find that the working ByteStreamFileSource is causing the parse to update > the presentation time every eight (8) NAL units, due to "(frame_num differs > in value)". However, the failing MF_H264_DeviceSource is causing the parse > to update the presentation time only every 64 (one example) NAL units. > Thus I think it's squeezing presentation time so much it can't be rendered > properly. 
So I'm trying to figure out where frame_num comes from and how > my live send of the same stream might be messed up... > > Another other advice is greatly appreciated. > > > On Wed, Feb 20, 2013 at 9:34 AM, temp2010 at forren.org wrote: > >> I have a MF_H264_DeviceSource derived from DeviceSource, being used >> by H264VideoOnDemandServerMediaSubsession derived >> from OnDemandServerMediaSubsession, from top level XXXOnDemandRTSPServer >> derived from testOnDemandRTSPServer. >> >> Viewing my output with VLC, the output is HORRIBLY RAGGED. >> >> I took the same data being fed live to the MF_H264_DeviceSource and wrote >> it out to a file test.h264 instead. >> >> First, outside of Live555, I can convert my test.h264 to mp4 with a >> separate utility, and it plays BEAUTIFUL in Windows Media Player. >> >> Second, when I play my test.h264 through existing >> Live555 ByteStreamFileSource, being used >> by H264VideoFileServerMediaSubsession, from top level XXXOnDemandRTSPServer >> derived from testOnDemandRTSPServer, it views BEAUTIFUL on VLC. >> >> So it seems I must have some connectivity problem with my >> MF_H264_DeviceSource substitute for ByteStreamFileSource. But what? It >> mostly just passes frames through, so I doubt it's dropping them. Is it >> maybe a timestamp thing? What else might it be? >> >> Your advice is yet again greatly appreciated... >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From CERLANDS at arinc.com Wed Feb 20 11:12:15 2013 From: CERLANDS at arinc.com (Erlandsson, Claes P (CERLANDS)) Date: Wed, 20 Feb 2013 19:12:15 +0000 Subject: [Live-devel] Connection failure - socket error 10057 Message-ID: Once in a while, about 1 out of maybe 3000 connections, I notice a connection failure. The client runs on Windows. The DESCRIBE response handler receives resultCode -10057 and resultString "Unknown error". 
I first suspected the server was to blame by not responding, but when I look at socket error 10057 the description says "Socket is not connected." (WSAENOTCONN), which sounds like a client issue. By the way, I don't know why it says -10057, but I assume it is socket error 10057. Does this happen to anyone else? Does anyone have more insight into why/when this socket error occurs? /Claes -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/x-pkcs7-signature Size: 5740 bytes Desc: not available URL: From Anton.Chmelev at transas.com Wed Feb 20 00:32:03 2013 From: Anton.Chmelev at transas.com (Chmelev, Anton) Date: Wed, 20 Feb 2013 08:32:03 +0000 Subject: [Live-devel] Stack overflow caused by recursive destructors Message-ID: <532656B720FE4E48A88D9D98B11941667766155B@VOITV-EXCH-BE02.transas.com> Hello, I'm using the liveMedia library to develop a video recording application. It turns out that when you try to record a really big file (several hours, say), you get a stack overflow. The problem is that linked lists are destroyed recursively in the code (e.g. AVIIndexRecord in AVIFileSink, ChunkDescriptor in QuickTimeFileSink). Although this issue can be addressed by increasing the stack size, why not use iterative destruction?

// Then, delete the index records:
/// delete fIndexRecordsHead;
AVIIndexRecord *cur = fIndexRecordsHead;
while (cur) {
  AVIIndexRecord *next = cur->next();
  delete cur;
  cur = next;
}

____________ Best Regards, Anton Chmelev Senior Software Developer Transas Technologies www.transas.com -------------- next part -------------- An HTML attachment was scrubbed...
URL: From finlayson at live555.com Wed Feb 20 18:33:52 2013 From: finlayson at live555.com (Ross Finlayson) Date: Thu, 21 Feb 2013 12:33:52 +1000 Subject: [Live-devel] Stack overflow caused by recursive destructors In-Reply-To: <532656B720FE4E48A88D9D98B11941667766155B@VOITV-EXCH-BE02.transas.com> References: <532656B720FE4E48A88D9D98B11941667766155B@VOITV-EXCH-BE02.transas.com> Message-ID: > Although this issue can be addressed by incrementing stack size, why not use iterated destruction? Yes - good idea. This will be fixed in the next release of the software. Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From alokkat at gmail.com Wed Feb 20 09:14:51 2013 From: alokkat at gmail.com (Alok Kumar) Date: Wed, 20 Feb 2013 12:14:51 -0500 Subject: [Live-devel] Debian vlc 2.0.4 with live555 liblive555_plugin.so: undefined symbol: _ZN10RTSPClient11sendRequestEPNS_13RequestRecordE) Message-ID: Hi, I am trying to compile vlc 2.0.4 with live555 (latest version release in Feb 2013) support on debian x86. I am getting following undefined symbol. main libvlc warning: cannot load module `/home/alok/workdir/vlc-2.0.4/modules/demux/.libs/liblive555_plugin.so' (/home/alok/workdir/vlc-2.0.4/modules/demux/.libs/liblive555_plugin.so: undefined symbol: _ZN10RTSPClient11sendRequestEPNS_13RequestRecordE) Could you please provide some advice how to fix this. Full logs: ./vlc -vv VLC media player 2.0.4 Twoflower (revision 2.0.3-289-g6e6100a) [0x82739e8] main libvlc debug: VLC media player - 2.0.4 Twoflower [0x82739e8] main libvlc debug: Copyright ? 
1996-2012 VLC authors and VideoLAN [0x82739e8] main libvlc debug: revision 2.0.3-289-g6e6100a [0x82739e8] main libvlc debug: configured with ./configure '--prefix=/usr' '--enable-x11' '--enable-xvideo' '--enable-sdl' '--enable-avcodec' '--enable-avformat' '--enable-swscale' '--enable-mad' '--enable-libdvbpsi' '--enable-a52' '--enable-libmpeg2' '--enable-dvdnav' '--enable-faad' '--enable-vorbis' '--enable-ogg' '--enable-theora' '--enable-faac' '--enable-mkv' '--enable-freetype' '--enable-fribidi' '--enable-speex' '--enable-flac' '--enable-live555' '--enable-caca' '--enable-skins' '--enable-skins2' '--enable-alsa' '--enable-qt4' [0x82739e8] main libvlc debug: searching plug-in modules [0x82739e8] main libvlc debug: loading plugins cache file /home/alok/workdir/vlc-2.0.4/src/.libs/vlc/plugins/plugins.dat [0x82739e8] main libvlc warning: cannot read /home/alok/workdir/vlc-2.0.4/src/.libs/vlc/plugins/plugins.dat (No such file or directory) [0x82739e8] main libvlc debug: recursively browsing `/home/alok/workdir/vlc-2.0.4/src/.libs/vlc/plugins' [0x82739e8] main libvlc debug: saving plugins cache /home/alok/workdir/vlc-2.0.4/src/.libs/vlc/plugins/plugins.dat [0x82739e8] main libvlc debug: loading plugins cache file /home/alok/workdir/vlc-2.0.4/modules/plugins.dat [0x82739e8] main libvlc debug: recursively browsing `/home/alok/workdir/vlc-2.0.4/modules' [0x82739e8] main libvlc warning: cannot load module `/home/alok/workdir/vlc-2.0.4/modules/demux/.libs/liblive555_plugin.so' (/home/alok/workdir/vlc-2.0.4/modules/demux/.libs/liblive555_plugin.so: undefined symbol: _ZN10RTSPClient11sendRequestEPNS_13RequestRecordE) [0x82739e8] main libvlc debug: saving plugins cache /home/alok/workdir/vlc-2.0.4/modules/plugins.dat [0x82739e8] main libvlc debug: plug-ins loaded: 369 modules [0x82739e8] main libvlc debug: opening config file (/home/alok/.config/vlc/vlcrc) [0x82739e8] main libvlc debug: translation test: code is "C" [0x82739e8] main libvlc debug: CPU has capabilities MMX MMXEXT 
SSE SSE2 SSE3 SSSE3 FPU -- Regards Alok ~ -- Regards Alok Kumar -------------- next part -------------- An HTML attachment was scrubbed... URL: From sebastien-devel at celeos.eu Thu Feb 21 11:20:54 2013 From: sebastien-devel at celeos.eu (=?ISO-8859-1?Q?S=E9bastien_Escudier?=) Date: Thu, 21 Feb 2013 20:20:54 +0100 Subject: [Live-devel] getNormalPlayTime after a pause and play In-Reply-To: <5117DFDB.5080203@celeos.eu> References: <5117DFDB.5080203@celeos.eu> Message-ID: <51267396.9040706@celeos.eu> Hi Ross, The return value of getNormalPlayTime() is false in case of a "resume" play because in this case the value of playStartTime() is incorrect. playStartTime() returns 0 if the stream resume from the previous position, and then fNPT_PTS_Offset is set to the last presentation time value. So when the stream resume, getNormalPlayTime starts from 0 instead of the current NPT. Regards, Sebastien From tpack at zmicro.com Thu Feb 21 14:41:57 2013 From: tpack at zmicro.com (Trenton Pack) Date: Thu, 21 Feb 2013 14:41:57 -0800 Subject: [Live-devel] Live Trick Play with Growing TS File Message-ID: Hi, We are working on a project that involves live h264/TS video streaming over RTSP. The live555 release code base supports trick play for TS files only if a hint file is generated. Has anyone tried to generate the hint file on a growing TS file while streaming the TS file? Looking at the code base, this might be achievable with some modification to four or five source files. Does anyone consult on small projects like that? Thanks, Trenton Pack -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From finlayson at live555.com Thu Feb 21 15:31:09 2013 From: finlayson at live555.com (Ross Finlayson) Date: Fri, 22 Feb 2013 09:31:09 +1000 Subject: [Live-devel] Live Trick Play with Growing TS File In-Reply-To: References: Message-ID: <17420F3B-55B3-4EB9-864D-8D8459EE4CFF@live555.com> > We are working on a project that involves live h264/TS video streaming over RTSP. The live555 release code base supports trick play for TS files only if a hint file is generated. Has anyone tried to generate the hint file on a growing TS file while streaming the TS file? Looking at the code base, this might be achievable with some modification to four or five source files. Yes, I think it would be possible to do this. (The most difficult part of this would be to modify the application that generates the index file (not 'hint file') for each transport stream so that it doesn't halt when it reaches the end of the input stream, but instead (somehow) continues, after waiting for more data to arrive.) This couldn't work, however, if you had a file 'buffer' that didn't grow indefinitely, but instead 'chopped off' old data from the start of the file (so that the file maintained a bounded size, rather than growing indefinitely). If that's what you want to do, then the index file mechanism probably could not be modified to accommodate this. Instead, you would probably need a different implementation of 'trick play'. > Does anyone consult on small projects like that? Yes, I ("Live Networks, Inc.") am available for hire to develop custom upgrades to the source code. If you're interested, please let me know (by private email). Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed...
URL: From gauravb at interfaceinfosoft.com Thu Feb 21 07:10:40 2013 From: gauravb at interfaceinfosoft.com (Gaurav Badhan) Date: Thu, 21 Feb 2013 20:40:40 +0530 Subject: [Live-devel] Why the following messages come on proxy server console? Message-ID: Hello, I ran one "testondemandRTSPServer" program on my machine and another on a second machine, connected over the LAN, to stream the .ts file. Then I gave the URLs of both RTSP servers to the proxy server, with the option to stream over TCP. I opened the first URL given by the proxy server in VLC player. Then I opened the other URL in another instance of VLC player. Both VLC players showed the stream correctly. Then I closed the first instance of VLC player, and I saw the following statements on the console of the proxy server: Received 3996 new bytes of response data. Received 5328 new bytes of response data. Received 2664 new bytes of response data. Received 2616 new bytes of response data. Received 2664 new bytes of response data. Received 4 new bytes of response data. Received 14692 new bytes of response data. Received 19872 new bytes of response data.
ProxyServerMediaSubsession["MP2T"]::createNewRTPSink() ProxyServerMediaSubsession["MP2T"]::closeStreamSource() ProxyServerMediaSubsession["MP2T"]::createNewStreamSource(session id 3880907106) ProxyServerMediaSubsession["MP2T"]::createNewRTPSink() ProxyServerMediaSubsession["MP2T"]::closeStreamSource() ProxyServerMediaSubsession["MP2T"]::createNewStreamSource(session id 2740275749) ProxyServerMediaSubsession["MP2T"]::createNewRTPSink() ProxyServerMediaSubsession["MP2T"]::closeStreamSource() ProxyServerMediaSubsession["MP2T"]::createNewStreamSource(session id 529879796) ProxyServerMediaSubsession["MP2T"]::createNewRTPSink() ProxyServerMediaSubsession["MP2T"]::closeStreamSource() ProxyServerMediaSubsession["MP2T"]::createNewStreamSource(session id 2309179375) ProxyServerMediaSubsession["MP2T"]::createNewRTPSink() ProxyServerMediaSubsession["MP2T"]::closeStreamSource() ProxyRTSPClient["rtsp://192.168.15.192:8554/STREAMER"]: RTSP "DESCRIBE" command failed; trying again in 128 seconds Received 413 new bytes of response data. Then when I tried to open the first stream again in VLC player, it didn't show the stream. The same thing happened with the RTSP client. If I directly give the URL of the first "testondemandRTSPServer" program to VLC or the RTSP client, they receive the stream. To be more specific, the following messages: Received 3996 new bytes of response data. Received 5328 new bytes of response data. Received 2664 new bytes of response data. Received 2616 new bytes of response data. Received 2664 new bytes of response data. Received 4 new bytes of response data. Received 14692 new bytes of response data. Received 19872 new bytes of response data. are displayed on the proxy RTSP server console when I get the message: Sending request: GET_PARAMETER rtsp://192.168.15.189:8554/STREAMER/ RTSP/1.0 CSeq: 8 User-Agent: ProxyRTSPClient (LIVE555 Streaming Media v2013.01.25) Session: CAAD6BD1 Content-Length: 2 WARNING: The server did not respond to our "PLAY" request (CSeq: 7).
The server appears to be buggy (perhaps not handling pipelined requests properly). Received a complete GET_PARAMETER response: RTSP/1.0 200 OK CSeq: 8 Date: Thu, Feb 21 2013 14:11:40 GMT Session: CAAD6BD1 And then when I close the VLC or RTSP client that is receiving the stream, I get the following messages: Received 3996 new bytes of response data. Received 5328 new bytes of response data. Received 2664 new bytes of response data. Received 2616 new bytes of response data. Received 2664 new bytes of response data. Received 4 new bytes of response data. Received 14692 new bytes of response data. Received 19872 new bytes of response data. Can you tell me how to resolve this issue and what's going wrong? Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From temp2010 at forren.org Thu Feb 21 15:40:09 2013 From: temp2010 at forren.org (temp2010 at forren.org) Date: Thu, 21 Feb 2013 18:40:09 -0500 Subject: [Live-devel] OnDemandRTSPServer working with File as File source but not File as Pipe Message-ID: We want our PC to send a video stream to an iPad, and we sometimes test by sending to VLC over the local network. The video stream gets into linked-in Live555 code via ByteStreamFileSource, but in two different ways. One way is from a file on disk (File as File). The other way is through a named pipe (File as Pipe). Both File as File and File as Pipe look great on VLC over the LAN. But to the iPad, the File as File looks great but the File as Pipe is very blocky, as if compression info is missing or whatever. This tells me two things: (#1) it seems VLC may be more robust and forgiving than our iPad app, as expected; and (#2) there's some difference in what Live555 sends (perhaps presentation time?) depending on whether the stream came from File as File or File as Pipe. I'm writing you to ask about #2. If we can fix #2, then we don't have to fix #1.
In detail (sorry, but there's no other way for the quick turn we need this week), Live555 InputFile.cpp already had a hook in it for filename "stdin". We tentatively added a hook for "pipe". Thus, we can easily switch between File as File (real filename) or File as Pipe (hooked "pipe" filename). This also means that the ONLY difference in the code base from this point all the way out to the iPad is this tiny difference. One would expect the output and iPad behavior to be the same... but it's not. Meanwhile, the actual stream being sent INTO the Pipe is simultaneously written to a raw h.264 file, and on later tests, it's that raw h.264 that's being read in the File as File case. Therefore, the content of the stream in both cases should be indistinguishable. We tried File as Pipe, and we got the blockiness. We then took that very raw h.264 file just written, and ran again with File as File, and we did NOT get the blockiness. So, back to #2, there simply must be some difference in what Live555 sends between the two cases. Do you have any clue at all as to what this difference might be, and what we might do to get rid of it? My guess is something about presentation time, but I'm so ignorant on this stuff in general that I don't trust my own guess. (Once we figure it out, we can write alternate InputFile.cpp and ByteStreamFileSource.cpp (and on up the rather long derivation chain) in order to return InputFile.cpp to its pre-pipe-hook content. But it's too wasteful to do that first, in case this strategy fails to ever work at all...) Thanks very much for your help. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From sebastien-devel at celeos.eu Fri Feb 22 06:41:05 2013 From: sebastien-devel at celeos.eu (=?ISO-8859-1?Q?S=E9bastien_Escudier?=) Date: Fri, 22 Feb 2013 15:41:05 +0100 Subject: [Live-devel] TS discontinuity when seeking Message-ID: <51278381.10101@celeos.eu> Hi Ross, I noticed that after a seek, the first TS packet from live555 server, does not have the correct continuity counter value (with respect to the last packet before the seek). There is a discontinuity. This is error is reported in VLC and wireshark. Regards, S?bastien. From sebastien-devel at celeos.eu Fri Feb 22 13:04:35 2013 From: sebastien-devel at celeos.eu (=?ISO-8859-1?Q?S=E9bastien_Escudier?=) Date: Fri, 22 Feb 2013 22:04:35 +0100 Subject: [Live-devel] getNormalPlayTime after a pause and play In-Reply-To: <5117DFDB.5080203@celeos.eu> References: <5117DFDB.5080203@celeos.eu> Message-ID: <5127DD63.7080400@celeos.eu> Another solution would be to modify your server to return the actual npt in the PLAY response. Currently, when your server resume from the previous position, it does not return any npt value in the PLAY response. I guess both fixes would be valid. Best regards, Sebastien. From finlayson at live555.com Fri Feb 22 02:31:21 2013 From: finlayson at live555.com (Ross Finlayson) Date: Fri, 22 Feb 2013 20:31:21 +1000 Subject: [Live-devel] OnDemandRTSPServer working with File as File source but not File as Pipe In-Reply-To: References: Message-ID: <88BFA771-C746-4CC1-98BE-729F2BC86C27@live555.com> > Both File as File and File as Pipe look great on VLC over the lan. But to the iPad, the File as File looks great but the File as Pipe is very blocky, as if compression info is missing or whatever. > > This tells me two things: (#1) it seems VLC may be more robust and forgiving than our iPad app, as expected; and (#2) there's some difference in what Live555 sends (perhaps presentation time?) depending on whether the stream came from File as File or File as Pipe. 
No, the difference is not the frames' presentation times, because they are derived from the input H.264 data, which should be the same in both cases. The difference is likely that reads from the pipe - and thus transmissions over the network - are not happening smoothly. I.e., you are probably reading a large chunk of data from the pipe, then waiting a long time (for more data to become available in the pipe), then reading a large chunk of data from the pipe, etc. Because of these 'bursty' reads from the pipe, network transmission (of RTP packets) is also happening in a 'bursty' fashion. Your receiving application *should* be able to handle this OK, if it does reasonable buffering of incoming data (i.e., data that has just been read from the "RTPSource" object). But the fact that it's not handling this OK (but VLC does) suggests that your receiving player (on the iPad) is not buffering incoming data enough before decoding/rendering it. That is the first thing that you should fix (because bursty/jittery data can happen in many environments). However, you should also fix whatever is causing reads from your pipe (i.e., on your server) to be so bursty. Because I don't know anything about your server OS, I can't help you here. But if you are running Windows (which often has a notoriously coarse process-switching granularity), then please stop. In this day and age, Windows is simply not a serious OS for running server applications. Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From eric.hoffmann at thalesgroup.com Fri Feb 22 05:54:25 2013 From: eric.hoffmann at thalesgroup.com (HOFFMANN Eric) Date: Fri, 22 Feb 2013 14:54:25 +0100 Subject: [Live-devel] testH264VideoStreamer multi client play stop Message-ID: <16632_1361541318_512778C6_16632_206_1_8AF751CD48F5A14AADA5E935BB6A406501EC21D29610@THSONEA01CMS03P.one.grp> Hi, I would like to implement the same functionality as testH264VideoStreamer (rtsp, multicast, loop file), but I need to start streaming when the first client connects and stop when the last one quits. Any suggestion would be helpful. Regards, eric [@@ THALES GROUP INTERNAL @@] -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Fri Feb 22 16:16:14 2013 From: finlayson at live555.com (Ross Finlayson) Date: Sat, 23 Feb 2013 10:16:14 +1000 Subject: [Live-devel] testH264VideoStreamer multi client play stop In-Reply-To: <16632_1361541318_512778C6_16632_206_1_8AF751CD48F5A14AADA5E935BB6A406501EC21D29610@THSONEA01CMS03P.one.grp> References: <16632_1361541318_512778C6_16632_206_1_8AF751CD48F5A14AADA5E935BB6A406501EC21D29610@THSONEA01CMS03P.one.grp> Message-ID: > I would like to implement the same functionality as testH264VideoStreamer (rtsp, multicast, loop file), but I need to start streaming when the first client connects and stop when the last one quits. > Any suggestion would be helpful. I'm not sure how easy this would be to do without a lot of work. One thing that might work, though, would be to define your own subclass of "PassiveServerMediaSubsession" and redefine the "startStream()", "deleteStream()", and (perhaps) "getStreamParameters()" virtual functions to do , and then call the original (superclass) version of each virtual function. You're pretty much on your own here, though... Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed...
URL: From vinodh.g.us at gmail.com Fri Feb 22 02:11:42 2013 From: vinodh.g.us at gmail.com (Vinodh Kumar) Date: Fri, 22 Feb 2013 15:41:42 +0530 Subject: [Live-devel] Regarding Stopping and resuming the Live555Server Message-ID: Hi There, I have an iPhone user interface to start and stop the live555server. I am able to stop the server by setting _watchVariable to 0; env->taskScheduler().doEventLoop(_watchVariable); but I am able neither to restart the server nor to resume the services. Can you get back to me on this? Regards Vinodh -------------- next part -------------- An HTML attachment was scrubbed... URL: From zanglan at yahoo.com Fri Feb 22 05:11:34 2013 From: zanglan at yahoo.com (Lan Zang) Date: Fri, 22 Feb 2013 21:11:34 +0800 (CST) Subject: [Live-devel] SimpleRTPSource and RTCP In-Reply-To: <23EF6256-7EE6-4ED1-8FAB-5F1B176AD49F@live555.com> References: <1360986290.44260.YahooMailNeo@web15804.mail.cnb.yahoo.com> <3C5C83F1-4416-491D-A636-EB63DFB57666@live555.com> <1361080813.89232.YahooMailNeo@web15808.mail.cnb.yahoo.com> <23EF6256-7EE6-4ED1-8FAB-5F1B176AD49F@live555.com> Message-ID: <1361538694.39309.YahooMailNeo@web15808.mail.cnb.yahoo.com> Ross, I observed that sometimes the testOnDemandRTSPServer increases the delay of the mpegts/rtp stream. Everything goes well, but then suddenly the server keeps increasing its memory usage. In this case, the player slows down, and the delay increases. The delay of the video could be up to 1 minute. The resident memory used by the server could reach ~20 megabytes (usually it only costs ~1 megabyte). It is hard for me to inspect the internal memory usage of the server. Can you shed some light on the issue? Thanks, Lan Zang (Sander) ________________________________ From: Ross Finlayson To: LIVE555 Streaming Media - development & use Sent: Sunday, February 17, 2013 2:15 PM Subject: Re: [Live-devel] SimpleRTPSource and RTCP I tested the testProgs/testOnDemandRTSPServer.
It does not have the odd port for RTCP (i.e., only port 1234 is open). OK, I misunderstood your question. I thought you were asking about the *output* from the RTSP server. But yes, you're right. We should really be creating a "RTCPInstance" as well as the "SimpleRTPSource". But in this case it doesn't really matter, because we don't need accurate 'presentation times' for these packets. Ross Finlayson Live Networks, Inc. http://www.live555.com/ _______________________________________________ live-devel mailing list live-devel at lists.live555.com http://lists.live555.com/mailman/listinfo/live-devel -------------- next part -------------- An HTML attachment was scrubbed... URL: From temp2010 at forren.org Sat Feb 23 03:00:40 2013 From: temp2010 at forren.org (temp2010 at forren.org) Date: Sat, 23 Feb 2013 06:00:40 -0500 Subject: [Live-devel] OnDemandRTSPServer working with File as File source but not File as Pipe In-Reply-To: <88BFA771-C746-4CC1-98BE-729F2BC86C27@live555.com> References: <88BFA771-C746-4CC1-98BE-729F2BC86C27@live555.com> Message-ID: Ross, Thanks. We got it working. You are correct about the bursty transmission. I'll add your buffering comment to our work task list, to keep it in mind for when we may need it in the future. ... To assist others in figuring out their own problems, I'll mention that my remote associate on the actual hardware finally pointed out that the iPad was processing random frame counts per second, not always 30. This led to noticing a debug message output stream available from VLC, which was frequently claiming "picture is too late to be displayed (missing ???ms)", where "???ms" varied from 30ms to 300ms. The iPad random frame counts and the VLC too-late messages didn't occur when using File as File. So this was the smoking gun. It was also repeatable for me locally. We found that running out of processor horsepower was the cause.
We stripped a bunch of temporary stuff out of the program, including three unused potential stream connections from OnDemand..., plus compiled in Release mode rather than Debug mode (VS2012). An additional cause was the amount of memory given to the pipe. Increasing it from 4K to 300K was also necessary. This is Windows 7 and we have no choice about it. Fortunately, it is working well now. Note that the File as Pipe case uses more horsepower because of the real activity feeding the pipe, activity that's not consuming horsepower in the File as File case. Regarding buffering, we need to run as short a latency as possible, so I'm hesitant to bump up the buffering in the iPad (software my remote associate wrote, not me, although I have iOS programming experience). So, in terms of my original post, (#1) it must be that VLC handles the bursty data better than the iPad, very likely related to buffering, and (#2) the difference in what Live555 sends was about info being late due to insufficient horsepower On Fri, Feb 22, 2013 at 5:31 AM, Ross Finlayson wrote: > Both File as File and File as Pipe look great on VLC over the lan. But to > the iPad, the File as File looks great but the File as Pipe is very blocky, > as if compression info is missing or whatever. > > This tells me two things: (#1) it seems VLC may be more robust and > forgiving than our iPad app, as expected; and (#2) there's some difference > in what Live555 sends (perhaps presentation time?) depending on whether the > stream came from File as File or File as Pipe. > > > No, the difference is not the frames' presentation times, because they are > derived from the input H.264 data, which should be the same in both cases. > > The difference is likely that reads from the pipe - and thus transmissions > over the network - are not happening smoothly. 
I.e., you are probably > reading a large chunk of data from the pipe, then waiting a long time (for > more data to become available in the pipe), then reading a large chunk of > data from the pipe, etc. Because of these 'bursty' reads from the pipe, > network transmission (of RTP packets) is also happening in a 'bursty' > fashion. > > Your receiving application *should* be able to handle this OK, if it does > reasonable buffering of incoming data (i.e., data that has just been read > from the "RTPSource" object). But the fact that it's not handling this OK > (but VLC does) suggests that your receiving player (on the iPad) is not > buffering incoming data enough before decoding/rendering it. That is the > first thing that you should fix (because bursty/jittery data can happen in > many environments). > > However, you should also fix whatever is causing reads from your pipe > (i.e., on your server) to be so bursty. Because I don't know anything > about your server OS, I can't help you here. But if you are running > Windows (which often has a notoriously coarse process-switching > granularity), then please stop. In this day and age, Windows is simply not > a serious OS for running server applications. > > Ross Finlayson > Live Networks, Inc. > http://www.live555.com/ > > > _______________________________________________ > live-devel mailing list > live-devel at lists.live555.com > http://lists.live555.com/mailman/listinfo/live-devel > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tayeb.dotnet at gmail.com Sun Feb 24 06:14:36 2013 From: tayeb.dotnet at gmail.com (Tayeb Meftah) Date: Sun, 24 Feb 2013 15:14:36 +0100 Subject: [Live-devel] Live555Proxy server multicast to unicast Proxying Message-ID: Hello. it's pocible to proxy a Multicast UDP MPEG2-TS stream to Unicast over RTSP using Live555 proxy server? 
my stream source is udp://@239.100.1.27 and I want to make it unicast. Thanks, Tayeb Meftah Voice of the blind T Broadcast Freedom http://www.vobradio.org Phone:447559762242 -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Sun Feb 24 21:04:45 2013 From: finlayson at live555.com (Ross Finlayson) Date: Mon, 25 Feb 2013 18:04:45 +1300 Subject: [Live-devel] Live555Proxy server multicast to unicast Proxying In-Reply-To: References: Message-ID: > it's pocible to proxy a Multicast UDP MPEG2-TS stream to Unicast over RTSP using Live555 proxy server? Only if the multicast stream is served by a RTSP server (because a "RTSP proxy server" requires that the back-end input source have a RTSP server). > my stream source is udp://@239.100.1.27 and i want to make it unicast However, you should be able to use the existing "testOnDemandRTSPServer" demo application (the "mpeg2TransportStreamFromUDPSourceTest" stream) for this stream, simply by changing the multicast input string ("inputAddressStr") to "239.100.1.27", and changing "inputStreamIsRawUDP" to True. Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From zanglan at yahoo.com Mon Feb 25 00:56:23 2013 From: zanglan at yahoo.com (Lan Zang) Date: Mon, 25 Feb 2013 16:56:23 +0800 (CST) Subject: [Live-devel] mpegts over RTP causes live555 to increase memory usage Message-ID: <1361782583.68724.YahooMailNeo@web15808.mail.cnb.yahoo.com> Hi, I want to use the class MPEG2TransportUDPServerMediaSubsession, as in testOnDemandRTSPServer.cpp, to conduct a mpegts/RTP broadcast. I found that about one or two minutes after the start of broadcasting, the process using MPEG2TransportUDPServerMediaSubsession increases its memory usage. With the help of mcheck, I find the memory increase is caused by the new operator (which is of almost no help).
I cannot locate where the memory is allocated or why memory keeps growing. This increased memory can eventually be freed if I stop the RTP source. Could you give a hint as to where the memory is being consumed, or how to avoid it? Regards, Lan Zang (Sander) -------------- next part -------------- An HTML attachment was scrubbed... URL: From yangzx at acejet.com.cn Mon Feb 25 02:06:07 2013 From: yangzx at acejet.com.cn (=?gb2312?B?0e7Wvs/p?=) Date: Mon, 25 Feb 2013 18:06:07 +0800 Subject: [Live-devel] only one client play h264 stream Message-ID: <41FC1A94FE13684D8249B089C38119707261B5@mailserver-nj1.njacejet.com> Hi: I am using the latest version of live555. I wrote my own h264streamserver and set reuseSource to true. I find that when one VLC client opens the live stream, it works OK, but when multiple clients open the stream, only the most recently started client plays it OK; the other clients play nothing, and their debug message is "no video data". Why? I use H264VideoRTPSink(....) and H264VideoStreamDiscreteFramer(....) to create the sink and source. From finlayson at live555.com Mon Feb 25 02:09:02 2013 From: finlayson at live555.com (Ross Finlayson) Date: Mon, 25 Feb 2013 23:09:02 +1300 Subject: [Live-devel] only one client play h264 stream In-Reply-To: <41FC1A94FE13684D8249B089C38119707261B5@mailserver-nj1.njacejet.com> References: <41FC1A94FE13684D8249B089C38119707261B5@mailserver-nj1.njacejet.com> Message-ID: > I am using the latest version of live555. I wrote my own h264streamserver and set reuseSource to true. I find that when one VLC client opens the live stream, it works OK, but when multiple clients open the stream, only the most recently started client plays it OK; the other clients play nothing, and their debug message is "no video data". Why? I don't know. However, I suggest that you first use "testRTSPClient" - rather than VLC - as your client. That should tell you more about what is (and is not) happening. Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed...
URL: From mczarnek at objectvideo.com Mon Feb 25 06:45:50 2013 From: mczarnek at objectvideo.com (Czarnek, Matt) Date: Mon, 25 Feb 2013 09:45:50 -0500 Subject: [Live-devel] help Message-ID: I decided to go with ffmpeg instead. So since I'm not using you guys and probably wouldn't be much help when it comes to helping other people solve their problems either, I'd like to unsubscribe. Thank you! Matt On Fri, Feb 22, 2013 at 7:16 PM, wrote: > Send live-devel mailing list submissions to > live-devel at lists.live555.com > > To subscribe or unsubscribe via the World Wide Web, visit > http://lists.live555.com/mailman/listinfo/live-devel > or, via email, send a message with subject or body 'help' to > live-devel-request at lists.live555.com > > You can reach the person managing the list at > live-devel-owner at lists.live555.com > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of live-devel digest..." > > > Today's Topics: > > 1. OnDemandRTSPServer working with File as File source but not > File as Pipe (temp2010 at forren.org) > 2. TS discontinuity when seeking (S?bastien Escudier) > 3. Re: getNormalPlayTime after a pause and play (S?bastien Escudier) > 4. Re: OnDemandRTSPServer working with File as File source but > not File as Pipe (Ross Finlayson) > 5. testH264VideoStreamer multi client play stop (HOFFMANN Eric) > 6. 
Re: testH264VideoStreamer multi client play stop (Ross Finlayson) > > > ---------------------------------------------------------------------- > > Message: 1 > Date: Thu, 21 Feb 2013 18:40:09 -0500 > From: "temp2010 at forren.org" > To: live-devel at ns.live555.com > Subject: [Live-devel] OnDemandRTSPServer working with File as File > source but not File as Pipe > Message-ID: > < > CAK0dNZidNpdvxTqazWosKp3zADqiaq1sb3Vtozw_6ai4wz1UkA at mail.gmail.com> > Content-Type: text/plain; charset="iso-8859-1" > > We want our PC to send a video stream to an iPad, and we sometimes test by > sending to VLC over the local network. > > The video stream gets into linked-in Live555 code via ByteStreamFileSource, > but in two different ways. One way is from a file on disk (File as File). > The other way is through a named pipe (File as Pipe). > > Both File as File and File as Pipe look great on VLC over the lan. But to > the iPad, the File as File looks great but the File as Pipe is very blocky, > as if compression info is missing or whatever. > > This tells me two things: (#1) it seems VLC may be more robust and > forgiving than our iPad app, as expected; and (#2) there's some difference > in what Live555 sends (perhaps presentation time?) depending on whether the > stream came from File as File or File as Pipe. > > I'm writing you to ask about #2. If we can fix #2, then we don't have to > fix #1. > > In detail (sorry, but there's no other way for the quick turn we need this > week), Live555 InputFile.cpp already had a hood in it for filename "stdin". > We tentatively added a hook for "pipe". Thus, we can easily switch > between File as File (real filename) or File as Pipe (hooked "pipe" > filename). This also means that the ONLY difference in the code base from > this point all the way out to the iPad is this tiny difference. One would > expect the output and iPad behavior to be the same... but it's not. 
> > Meanwhile, the actual stream being sent INTO the Pipe is simultaneously > written to a raw h.264 file, and on later tests, it's that raw h.264 that's > being read in the File as File case. Therefore, the content of the stream > in both cases should be indistinguishable. We tried File as Pipe, and we > got the blockiness. We then took that very raw h.264 file just written, > and ran again with File as File, and we did NOT get the blockiness. > > So, back to #2, there simply must be some difference in what Live555 sends > between the two cases. > > Do you have any clue at all as to what this difference might be, and what > we might do to get rid of it? My guess is something about presentation > time, but I'm so ignorant on this stuff in general that I don't trust my > own guess. > > (Once we figure it out, we can write alternate InputFile.cpp and > ByteStreamFileSource.cpp (and on up the rather long derivation chain) > in order to return InputFile.cpp to its pre-pipe-hook content. But it's > too wasteful to do that first, in case this strategy fails to ever work at > all...) > > Thanks very much for your help. > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: < > http://lists.live555.com/pipermail/live-devel/attachments/20130221/4df1802a/attachment-0001.html > > > > ------------------------------ > > Message: 2 > Date: Fri, 22 Feb 2013 15:41:05 +0100 > From: Sébastien Escudier > To: live-devel at ns.live555.com > Subject: [Live-devel] TS discontinuity when seeking > Message-ID: <51278381.10101 at celeos.eu> > Content-Type: text/plain; charset=ISO-8859-1 > > Hi Ross, > > I noticed that after a seek, the first TS packet from the live555 server > does not have the correct continuity counter value (with respect to the > last packet before the seek). There is a discontinuity. > This error is reported in VLC and Wireshark. > > Regards, > Sébastien. 
> > > > ------------------------------ > > Message: 3 > Date: Fri, 22 Feb 2013 22:04:35 +0100 > From: S?bastien Escudier > To: live-devel at ns.live555.com > Subject: Re: [Live-devel] getNormalPlayTime after a pause and play > Message-ID: <5127DD63.7080400 at celeos.eu> > Content-Type: text/plain; charset=ISO-8859-1 > > Another solution would be to modify your server to return the actual npt > in the PLAY response. > Currently, when your server resume from the previous position, it does > not return any npt value in the PLAY response. > > I guess both fixes would be valid. > > Best regards, > Sebastien. > > > > ------------------------------ > > Message: 4 > Date: Fri, 22 Feb 2013 20:31:21 +1000 > From: Ross Finlayson > To: LIVE555 Streaming Media - development & use > > Subject: Re: [Live-devel] OnDemandRTSPServer working with File as File > source but not File as Pipe > Message-ID: <88BFA771-C746-4CC1-98BE-729F2BC86C27 at live555.com> > Content-Type: text/plain; charset="us-ascii" > > > Both File as File and File as Pipe look great on VLC over the lan. But > to the iPad, the File as File looks great but the File as Pipe is very > blocky, as if compression info is missing or whatever. > > > > This tells me two things: (#1) it seems VLC may be more robust and > forgiving than our iPad app, as expected; and (#2) there's some difference > in what Live555 sends (perhaps presentation time?) depending on whether the > stream came from File as File or File as Pipe. > > No, the difference is not the frames' presentation times, because they are > derived from the input H.264 data, which should be the same in both cases. > > The difference is likely that reads from the pipe - and thus transmissions > over the network - are not happening smoothly. I.e., you are probably > reading a large chunk of data from the pipe, then waiting a long time (for > more data to become available in the pipe), then reading a large chunk of > data from the pipe, etc. 
Because of these 'bursty' reads from the pipe, > network transmission (of RTP packets) is also happening in a 'bursty' > fashion. > > Your receiving application *should* be able to handle this OK, if it does > reasonable buffering of incoming data (i.e., data that has just been read > from the "RTPSource" object). But the fact that it's not handling this OK > (but VLC does) suggests that your receiving player (on the iPad) is not > buffering incoming data enough before decoding/rendering it. That is the > first thing that you should fix (because bursty/jittery data can happen in > many environments). > > However, you should also fix whatever is causing reads from your pipe > (i.e., on your server) to be so bursty. Because I don't know anything > about your server OS, I can't help you here. But if you are running > Windows (which often has a notoriously coarse process-switching > granularity), then please stop. In this day and age, Windows is simply not > a serious OS for running server applications. > > Ross Finlayson > Live Networks, Inc. > http://www.live555.com/ > > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: < > http://lists.live555.com/pipermail/live-devel/attachments/20130222/674243c1/attachment-0001.html > > > > ------------------------------ > > Message: 5 > Date: Fri, 22 Feb 2013 14:54:25 +0100 > From: HOFFMANN Eric > To: "live-devel at lists.live555.com" > Subject: [Live-devel] testH264VideoStreamer multi client play stop > Message-ID: > > <16632_1361541318_512778C6_16632_206_1_8AF751CD48F5A14AADA5E935BB6A406501EC21D29610 at THSONEA01CMS03P.one.grp > > > > Content-Type: text/plain; charset="us-ascii" > > Hi, > I would like to implement the same functionality than > testH264VideoStreamer (rtsp, multicast, loop file) but i need to start > streaming at first client and stop when the last quit. 
> Any suggestion would be helpful > > Regards, > eric > > > [@@ THALES GROUP INTERNAL @@] > > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: < > http://lists.live555.com/pipermail/live-devel/attachments/20130222/b4253d31/attachment-0001.html > > > > ------------------------------ > > Message: 6 > Date: Sat, 23 Feb 2013 10:16:14 +1000 > From: Ross Finlayson > To: LIVE555 Streaming Media - development & use > > Subject: Re: [Live-devel] testH264VideoStreamer multi client play stop > Message-ID: > Content-Type: text/plain; charset="us-ascii" > > > I would like to implement the same functionality than > testH264VideoStreamer (rtsp, multicast, loop file) but i need to start > streaming at first client and stop when the last quit. > > Any suggestion would be helpful > > I'm not sure how easy this would be to do without a lot of work. > > One thing that might work, though, would be to define your own subclass of > "PassiveServerMediaSubsession" and redefine the "startStream()", > "deleteStream()", and (perhaps) "getStreamParameters()" virtual functions > to do , and then call the original (superclass) version of each > virtual function. You're pretty much on your own here, though... > > Ross Finlayson > Live Networks, Inc. > http://www.live555.com/ > > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: < > http://lists.live555.com/pipermail/live-devel/attachments/20130223/272f786e/attachment.html > > > > ------------------------------ > > _______________________________________________ > live-devel mailing list > live-devel at lists.live555.com > http://lists.live555.com/mailman/listinfo/live-devel > > > End of live-devel Digest, Vol 112, Issue 27 > ******************************************* > -- Matt Czarnek, Software Engineer Work Phone: (760) 4-OBJVID aka: (760) 462-5843 Cell Phone: HAHAHOORAY ObjectVideo Inc. 
http://www.objectvideo.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Mon Feb 25 07:30:04 2013 From: finlayson at live555.com (Ross Finlayson) Date: Tue, 26 Feb 2013 04:30:04 +1300 Subject: [Live-devel] help In-Reply-To: References: Message-ID: I absolutely will NOT unsubscribe you myself, because you have committed two violations of basic email 'netiquette'. Everyone should know this before they use email. First, you replied to a 'digest' while quoting the entire digest - i.e., without trimming your email. Second, one NEVER unsubscribes from a mailing list by sending email directly to the mailing list. There is always a separate email address or - as in this case - a web URL that you use instead. The last line of this - and every - "live-devel" mailing list message is the URL that you should follow in order to unsubscribe from the mailing list (or change your list subscription in any way). Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Mon Feb 25 23:07:08 2013 From: finlayson at live555.com (Ross Finlayson) Date: Tue, 26 Feb 2013 20:07:08 +1300 Subject: [Live-devel] A quick survey: How are you using the "LIVE555 Streaming Media" software? Message-ID: <9965646B-2A92-412F-9C95-B5C28D55E67B@live555.com> With almost 2000 members of this mailing list alone, the "LIVE555 Streaming Media" software has become very popular. I'd like to get a better feel for how it is being used. I invite you to take the following quick survey: http://www.surveymonkey.com/s/T5NNYY9 This survey will run for the next few weeks. There will likely be one or more follow-up surveys, asking about possible new features, modifications, extensions, etc. Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From finlayson at live555.com Mon Feb 25 23:27:18 2013 From: finlayson at live555.com (Ross Finlayson) Date: Tue, 26 Feb 2013 20:27:18 +1300 Subject: [Live-devel] A quick survey: How are you using the "LIVE555 Streaming Media" software? In-Reply-To: <9965646B-2A92-412F-9C95-B5C28D55E67B@live555.com> References: <9965646B-2A92-412F-9C95-B5C28D55E67B@live555.com> Message-ID: <9C37CB7A-ECF8-4E61-9620-9382428798F8@live555.com> On Feb 26, 2013, at 8:07 PM, Ross Finlayson wrote: > With almost 2000 members of this mailing list alone, the "LIVE555 Streaming Media" software has become very popular. I'd like to get a better feel for how it is being used. > > I invite you to take the following quick survey: > > http://www.surveymonkey.com/s/T5NNYY9 One thing about this survey that I want to make clear: In a few places, the survey asks (optionally) for a specific description of how you are using this software. I want to assure everyone that these responses will be kept confidential. I realize that several of you are using the software to develop proprietary products, and I want to assure you that if you give details of how you are using the software, your responses will be kept confidential. (The only results of the survey that I'm likely to make public are general statistics about which OS platforms are being used, etc.) Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From eric at sound4.biz Tue Feb 26 01:48:29 2013 From: eric at sound4.biz (Eric HEURTEL) Date: Tue, 26 Feb 2013 10:48:29 +0100 Subject: [Live-devel] RTSPClient::parseTransportParams() crashes if "Transport:" line is missing In-Reply-To: References: Message-ID: <512C84ED.7050503@sound4.biz> Hi Ross, just to inform you that if the "Transport:" line is missing from the server's SETUP response, Live555 crashes (RTSPClient::parseTransportParams() does not accept a NULL paramStr). (Yes, this line is required; this check is just for extra safety...) 
Regards, E. HEURTEL -------------- next part -------------- An HTML attachment was scrubbed... URL: From michel.promonet at thalesgroup.com Tue Feb 26 04:50:25 2013 From: michel.promonet at thalesgroup.com (PROMONET Michel) Date: Tue, 26 Feb 2013 13:50:25 +0100 Subject: [Live-devel] setting rtpTime return by the RTSP PLAY Message-ID: <4963_1361883085_512CAFCD_4963_10994_1_1BE8971B6CFF3A4F97AF4011882AA25501560A431AE6@THSONEA01CMS01P.one.grp> Hi Ross, I am trying to see how to propagate a timestamp between an RTSP server and an RTSP client. One aspect involves the RTCP SR, but this information is sent less often than the RTP packets, so the first frames are not synchronized. I saw that the rtptime returned by the PLAY response is computed in RTPSink::presetNextTimestamp() based on the current time. As this method is not virtual and the member fTimestampBase is private, I have no way to implement different behavior. What do you suggest for customizing the timestamp sent in the RTP-Info header? Best Regards, Michel. [@@ THALES GROUP INTERNAL @@] -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Tue Feb 26 07:03:53 2013 From: finlayson at live555.com (Ross Finlayson) Date: Wed, 27 Feb 2013 04:03:53 +1300 Subject: [Live-devel] setting rtpTime return by the RTSP PLAY In-Reply-To: <4963_1361883085_512CAFCD_4963_10994_1_1BE8971B6CFF3A4F97AF4011882AA25501560A431AE6@THSONEA01CMS01P.one.grp> References: <4963_1361883085_512CAFCD_4963_10994_1_1BE8971B6CFF3A4F97AF4011882AA25501560A431AE6@THSONEA01CMS01P.one.grp> Message-ID: > I am trying to see how to propagate a timestamp between an RTSP server and an RTSP client. Your question makes no sense, because the things that get 'propagated' from servers to clients are *presentation times*. (RTP timestamps are used only internally, by the RTP/RTCP protocol, as a mechanism for conveying presentation times from servers to clients. 
Our software automatically converts presentation times to RTP timestamps (at the server end) and RTP timestamps to presentation times (at the client end); application code (server or client) never needs to set or see RTP timestamp values.) Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From sourabh.jain at xpointers.com Mon Feb 25 10:05:55 2013 From: sourabh.jain at xpointers.com (Sourabh Jain) Date: Mon, 25 Feb 2013 23:35:55 +0530 Subject: [Live-devel] Just a query Message-ID: Hello Developers, I am new to your software. I was using proxy server for receiving stream from the "testOnDemandRTSPServer", While using the stream that we receive from the proxy server i saw the following messages on the proxy server console:- Sending request: GET_PARAMETER rtsp://192.158.15.189/test_stream/RTSP/1.0 CSeq: 8 User-Agent: ProxyRTSPClient (LIVE555 Streaming Media v2013.01.25) Session: CAAD6BD1 Content-Length: 2 WARNING: The server did not respond to our "PLAY" request (CSeq: 7). The server appears to be buggy (perhaps not handling pipelined requests properly). Received a complete GET_PARAMETER response: RTSP/1.0 200 OK CSeq: 8 Date: Thu, Feb 21 2013 14:11:40 GMT Session: CAAD6BD1 Received 1332 new bytes of response data. Received 1332 new bytes of response data. Received 1332 new bytes of response data. Received 1332 new bytes of response data. Received 1332 new bytes of response data. Received 4 new bytes of response data. Received 14692 new bytes of response data. Received 19872 new bytes of response data. Received 3996 new bytes of response data. Received 5328 new bytes of response data. Received 2664 new bytes of response data. Received 2616 new bytes of response data. Received 2664 new bytes of response data. Received 4 new bytes of response data. Received 14692 new bytes of response data. Received 19872 new bytes of response data. And then the i don't get the stream. 
Can you tell whats the issue. -- ** Thanks & Regards, Sourabh Jain | QA engineer xPointers Consulting Pvt. Ltd. Contact :+91 8796855143 Mail : sourabh.jain at xpointers.com Website : www.xpointers.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Tue Feb 26 19:48:18 2013 From: finlayson at live555.com (Ross Finlayson) Date: Wed, 27 Feb 2013 16:48:18 +1300 Subject: [Live-devel] Just a query In-Reply-To: References: Message-ID: <321966EF-1088-414B-A80C-393FFBEB0A1C@live555.com> > Can you tell whats the issue. No, because you haven't provided nearly enough information. First, you need to make sure that your 'back-end' RTSP server - not just the proxy server - is using an up-to-date version of our software. Then, you will need to run the proxy server with the -V (that's 'upper case' V) option, to print full diagnostic output. Make *no* changes to *any* of the supplied source code. Then please send the *complete* diagnostic output from the proxy server. Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Tue Feb 26 20:17:23 2013 From: finlayson at live555.com (Ross Finlayson) Date: Wed, 27 Feb 2013 17:17:23 +1300 Subject: [Live-devel] getNormalPlayTime after a pause and play In-Reply-To: <5127DD63.7080400@celeos.eu> References: <5117DFDB.5080203@celeos.eu> <5127DD63.7080400@celeos.eu> Message-ID: <16B99E11-8FE4-4F75-B09A-E52E0D27E6C9@live555.com> On Feb 23, 2013, at 10:04 AM, S?bastien Escudier wrote: > Another solution would be to modify your server to return the actual npt > in the PLAY response. Sorry for the delay in responding to this. Yes, you're right - this is actually the best solution (because it's the server, not the client, that best knows the actual NPT). I have just installed a new version (2013.02.27) of the library that includes this change. Ross Finlayson Live Networks, Inc. 
http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Tue Feb 26 20:26:24 2013 From: finlayson at live555.com (Ross Finlayson) Date: Wed, 27 Feb 2013 17:26:24 +1300 Subject: [Live-devel] TS discontinuity when seeking In-Reply-To: <51278381.10101@celeos.eu> References: <51278381.10101@celeos.eu> Message-ID: <03520A26-B9CC-441D-9816-5C890352995B@live555.com> > I noticed that after a seek, the first TS packet from live555 server, > does not have the correct continuity counter value (with respect to the > last packet before the seek). There is a discontinuity. > This is error is reported in VLC and wireshark. Mumble... When streaming from a Transport Stream source, our server code does not change the contents of the Transport Stream input data at all (except when doing a 'fast forward' or 'reverse play'). It just copies each of the 188-byte Transport Stream packets into outgoing RTP packets 'as is'. That's why you'll usually see a discontinuity in the 4-bit "continuity_counter" value after a seek. I don't plan on changing this, unless it causes a major problem (which it sounds like it doesn't). Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Tue Feb 26 20:36:27 2013 From: finlayson at live555.com (Ross Finlayson) Date: Wed, 27 Feb 2013 17:36:27 +1300 Subject: [Live-devel] RTSPClient::parseTransportParams() crashes if "Transport:" line is missing In-Reply-To: <512C84ED.7050503@sound4.biz> References: <512C84ED.7050503@sound4.biz> Message-ID: <8841A5F1-BB3A-4F40-AA71-CC6AB0D17529@live555.com> Thanks for the note. This has been fixed in the latest version of software (2013.02.27), released today. Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... 
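To make the seek-discontinuity discussion above concrete: the "continuity_counter" that Ross mentions is a 4-bit field in the low nibble of the fourth byte of every 188-byte Transport Stream packet, and it wraps modulo 16. The following is a minimal, self-contained sketch of the check that players such as VLC (and Wireshark) effectively perform; it deliberately ignores per-PID tracking and adaptation-field rules, and is not live555 code.

```cpp
#include <cstdint>
#include <cassert>

// Extract the 4-bit continuity_counter from a 188-byte TS packet:
// it is the low nibble of the 4th header byte.
inline unsigned tsContinuityCounter(const uint8_t* pkt) {
    return pkt[3] & 0x0F;
}

// True if 'curr' directly follows 'prev'.  The counter wraps modulo 16,
// so 15 -> 0 is continuous; anything else is a discontinuity.
// (A real demuxer tracks this per PID, and only for packets carrying payload.)
inline bool tsIsContinuous(unsigned prev, unsigned curr) {
    return curr == ((prev + 1) & 0x0F);
}
```

After a live555 seek, the first packet's counter is whatever the file contains at the new position, so `tsIsContinuous()` will usually report a gap there - which is exactly the warning Sébastien observed.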
URL: From xuemingqiang at gmail.com Mon Feb 25 17:57:03 2013 From: xuemingqiang at gmail.com (XUE MINGQIANG) Date: Tue, 26 Feb 2013 09:57:03 +0800 Subject: [Live-devel] Embedding sequence numbers in frames Message-ID: Hi All, I am very new to both streaming and live555. Basically, I want to achieve the following task: I have a LAN and an RTSP video source machine. I want to do some CV tasks, e.g. face detection, on another machine, and display the result as the continuous original video stream (with detected faces outlined in rectangles) on a client machine in the same LAN. I want to use the following design: Ideally, each frame of the video stream is labelled with a unique sequence number. On the CV-task machine, I select the frames needed for face detection, and record their sequence numbers and the coordinates of the detected faces. Then I forward the (sequence number, coordinates) pairs to the client machine. The client machine runs an RTSP client and receives the stream from the source, but instead of playing the stream directly, it temporarily stores the received frames in a buffer until it receives the (sequence number, coordinates) pairs from the CV-task machine. The sequence number is used to find the frame onto which the face rectangles should be drawn. Once the rectangles are added to the right frames, the video is read from the buffer and played. My questions are as follows: 1. Does each frame of a video stream, e.g. an H.264 stream, contain a unique identifier? If so, how do I get it? 2. If such a unique identifier does not exist, I am thinking of adding a proxy machine between the video source and the other machines in the network. The proxy server basically reads the RTSP stream from the source, adds a unique sequence number to each extracted frame, and then re-streams the video to the other machines. That way, each machine in the network can see frames with unique IDs. Since I am quite new to video streaming, my questions could be naive. 
Please help me. Thank you. Regards Mitchell -- Xue Mingqiang Computer Science Department, School of Computing, National University of Singapore. Contact: (+65)81573418 MSN : xuemingqiang2008 at hotmail.com Google: xuemingqiang at gmail.com From finlayson at live555.com Wed Feb 27 03:03:33 2013 From: finlayson at live555.com (Ross Finlayson) Date: Thu, 28 Feb 2013 00:03:33 +1300 Subject: [Live-devel] Embedding sequence numbers in frames In-Reply-To: References: Message-ID: <4385B604-DFAE-4C9B-BA75-40E1F0E40085@live555.com> > 1. Does each frame of a video stream, e.g. an H.264 stream, contain a > unique identifier? You could probably use each frame's "presentation time" as a 'unique id'. Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From zanglan at yahoo.com Tue Feb 26 01:17:08 2013 From: zanglan at yahoo.com (Lan Zang) Date: Tue, 26 Feb 2013 17:17:08 +0800 (CST) Subject: [Live-devel] MultiFramedRTPSource storage question Message-ID: <1361870228.60490.YahooMailNeo@web15803.mail.cnb.yahoo.com> Hi, Per my understanding, in the function MultiFramedRTPSource::networkReadHandler1(), live555 creates BufferedPackets and stores them in fReorderingBuffer. These BufferedPackets are consumed by calling doGetNextFrame1(). I discovered that once the stored BufferedPackets pile up, for any reason, their number never gets lower again. I just wonder: is this normal? Since this could be a live broadcast, it will introduce delays. Finally, if the RTP source stops, the stored packets are eventually consumed anyway. 
Regards, Lan Zang(Sander) From finlayson at live555.com Wed Feb 27 03:21:04 2013 From: finlayson at live555.com (Ross Finlayson) Date: Thu, 28 Feb 2013 00:21:04 +1300 Subject: [Live-devel] MultiFramedRTPSource storage question In-Reply-To: <1361870228.60490.YahooMailNeo@web15803.mail.cnb.yahoo.com> References: <1361870228.60490.YahooMailNeo@web15803.mail.cnb.yahoo.com> Message-ID: <967D34C4-7A5F-47E0-A6EC-7E5A67945373@live555.com> > Per my understanding, In function MultiFramedRTPSource::networkReadHandler1(), Live555 will create BufferedPacket and store them to fReorderingBuffer. These BufferedPackets are consumed by calling doGetNextFrame1(). I discovered that once, for any reason, the stored BufferedPackets are piled up, but the number of these packets will never get lower. I just wander is it normal? Yes, but note that this happens only when there's network packet loss, and - in any case - the length of this queue is bounded by the 'reordering threshold time' (default value: 100ms). It's there to compensate for possibly out-of-order packets. You shouldn't be concerning yourself with this at all. Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From michel.promonet at thalesgroup.com Wed Feb 27 04:05:40 2013 From: michel.promonet at thalesgroup.com (PROMONET Michel) Date: Wed, 27 Feb 2013 13:05:40 +0100 Subject: [Live-devel] setting rtpTime return by the RTSP PLAY In-Reply-To: References: <4963_1361883085_512CAFCD_4963_10994_1_1BE8971B6CFF3A4F97AF4011882AA25501560A431AE6@THSONEA01CMS01P.one.grp> Message-ID: <12167_1361966742_512DF696_12167_834_1_1BE8971B6CFF3A4F97AF4011882AA25501560AC08020@THSONEA01CMS01P.one.grp> Ross, Perhaps I missed some pieces. The purpose is to get from the client end the presentation time that was sent by the server. By now the client presentation time start from gettimeofday and will be synchronized by the first RTSP Sender report. 
I am trying to find a way to initialize the presentation time differently (in my case it has no relation to the frame timestamp). My guess was that the rtpTime in the RTP-Info header embedded in the PLAY response could help reach this aim, but it seems your answer is no! I don't need to use RTP time; what I would like to have is the NTP time that is in RTCP (I always ask myself why this time is not in the RTP header!). Do you have any suggestions for initializing the presentation time at the client end that are not based on the client's gettimeofday? Best Regards, Michel. [@@ THALES GROUP INTERNAL @@] From: live-devel-bounces at ns.live555.com [mailto:live-devel-bounces at ns.live555.com] On behalf of Ross Finlayson Sent: Tuesday, 26 February 2013 16:04 To: LIVE555 Streaming Media - development & use Subject: Re: [Live-devel] setting rtpTime return by the RTSP PLAY I am trying to see how to propagate a timestamp between an RTSP server and a RTSP client. Your question makes no sense, because the things that get 'propagated' from servers to clients are *presentation times*. (RTP timestamps are used only internally, by the RTP/RTCP protocol, as a mechanism for conveying presentation times from servers to clients. Our software automatically converts presentation times to RTP timestamps (at the server end) and RTP timestamps to presentation times (at the client end); application code (server or client) never needs to set or see RTP timestamp values.) Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... 
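For readers following this thread: the presentation-time-to-RTP-timestamp conversion being discussed is just a scaling by the payload format's clock frequency, truncated to 32 bits. The sketch below is a self-contained illustration of that mapping (90000 Hz is the standard video clock); it is not the live555 implementation - real senders, per RFC 3550, also add a random base offset, omitted here so the arithmetic stays visible.

```cpp
#include <cstdint>
#include <cassert>

// Map a presentation time (seconds + microseconds) to a 32-bit RTP
// timestamp at the given clock frequency (e.g. 90000 Hz for video).
// Truncating the 64-bit tick count to 32 bits gives the modulo-2^32
// wraparound that the RTP header field requires.
uint32_t toRtpTimestamp(uint32_t sec, uint32_t usec, uint32_t clockFreq) {
    uint64_t ticks = (uint64_t)sec * clockFreq
                   + ((uint64_t)usec * clockFreq) / 1000000;
    return (uint32_t)ticks;
}
```

Because only these 32 bits travel in each RTP packet, the receiver cannot invert the mapping into a full wall-clock presentation time until an RTCP Sender Report supplies the (RTP timestamp, NTP time) correspondence - which is exactly why the first frames are unsynchronized, as described above.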
URL: From sebastien-devel at celeos.eu Wed Feb 27 07:17:17 2013 From: sebastien-devel at celeos.eu (=?ISO-8859-1?Q?S=E9bastien_Escudier?=) Date: Wed, 27 Feb 2013 16:17:17 +0100 Subject: [Live-devel] TS discontinuity when seeking In-Reply-To: <03520A26-B9CC-441D-9816-5C890352995B@live555.com> References: <03520A26-B9CC-441D-9816-5C890352995B@live555.com> Message-ID: <512E237D.3020400@celeos.eu> Ok, thanks for the answer. Another issue when seeking a stream from your RTSP server, is that the first packet after the seek has a PTS from before the seek. The following packets are fine. This confuses VLC when it tries to re-buffer after the seek. Regards, S?bastien. From finlayson at live555.com Wed Feb 27 08:27:42 2013 From: finlayson at live555.com (Ross Finlayson) Date: Thu, 28 Feb 2013 05:27:42 +1300 Subject: [Live-devel] setting rtpTime return by the RTSP PLAY In-Reply-To: <12167_1361966742_512DF696_12167_834_1_1BE8971B6CFF3A4F97AF4011882AA25501560AC08020@THSONEA01CMS01P.one.grp> References: <4963_1361883085_512CAFCD_4963_10994_1_1BE8971B6CFF3A4F97AF4011882AA25501560A431AE6@THSONEA01CMS01P.one.grp> <12167_1361966742_512DF696_12167_834_1_1BE8971B6CFF3A4F97AF4011882AA25501560AC08020@THSONEA01CMS01P.one.grp> Message-ID: <490239F8-513E-4704-9EEB-422457242905@live555.com> > The purpose is to get from the client end the presentation time that was sent by the server. > By now the client presentation time start from gettimeofday and will be synchronized by the first RTSP Sender report. You meant to say "RTCP Sender Report". But yes, that's correct. > what I would like to have is the NTP time that is in RCTP (I always ask myself why this time is not in the RTP header !) I agree. This is a flaw in the design of RTP, IMHO. To save space in the RTP header, the protocol's designers decided that each RTP packet would contain only a 32-bit timestamp. 
Then, because 32 bits is not enough to convey a full presentation time, they added this convoluted mechanism whereby a separate control protocol (RTCP) would be used to map the 32-bit RTP timestamp into a full (64-bit) presentation time. But this means that - until the first RTCP "SR" packet is received by the client (i.e., in our code, before "RTPSource::hasBeenSynchronizedUsingRTCP()" returns True) - the client does not (and cannot) know the server's presentation time, and has to 'guess' it. (The RTSP "RTP-Info" information does *not* give you this; it gives you 'normal play time', which is different.) Sorry. Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From sebastien-devel at celeos.eu Wed Feb 27 12:52:40 2013 From: sebastien-devel at celeos.eu (=?ISO-8859-1?Q?S=E9bastien_Escudier?=) Date: Wed, 27 Feb 2013 21:52:40 +0100 Subject: [Live-devel] Vorbis and config parse Message-ID: <512E7218.9080401@celeos.eu> Hi Ross, I am trying to add VP8/VORBIS codecs to VLC when it uses your library to receive such streams. To do this I need to get the VORBIS parameters (included in the a=fmtp configuration line). I noticed you have some functions to parse this config string, so I tried to call parseGeneralConfigStr but it fails. Is it because this function can't parse vorbis config ? should I write my own parse function in VLC for this case ? Regards, S?bastien. From finlayson at live555.com Wed Feb 27 13:51:11 2013 From: finlayson at live555.com (Ross Finlayson) Date: Thu, 28 Feb 2013 10:51:11 +1300 Subject: [Live-devel] Vorbis and config parse In-Reply-To: <512E7218.9080401@celeos.eu> References: <512E7218.9080401@celeos.eu> Message-ID: <02AA4895-132B-4606-8AA3-A0A5DFA43811@live555.com> > I am trying to add VP8/VORBIS codecs to VLC when it uses your library to > receive such streams. > To do this I need to get the VORBIS parameters (included in the a=fmtp > configuration line). 
> > I noticed you have some functions to parse this config string, so I > tried to call parseGeneralConfigStr but it fails. > Is it because this function can't parse vorbis config ? Correct - because "parseGeneralConfigStr()" isn't the right function for decoding that data. (That function is used only for payload formats (e.g., MPEG-4 audio or video) where the configuration string is encoded as a hexadecimal string: two hex digits for each byte of binary data - e.g. "DEADBEEFB00B".) Vorbis, like H.264, uses a more efficient encoding: Base64. According to RFC 5215 (the Vorbis audio RTP payload format specification), section 6, the configuration string (which is accessible, from client code, using "MediaSubsession::fmtp_configuration()" (or, equivalently, "MediaSubsession::fmtp_config()")) is the Base64 representation of the packed structure defined in section 3.2.1. Therefore, first call "base64Decode()" (see "liveMedia/include/Base64.hh") on the configuration string. That should give you a binary data structure that's formatted according to section 3.2.1 of RFC 5215. Depending on what your Vorbis decoder accepts, you might need to do further parsing of that binary data. BTW, if, for testing, you need a RTSP Vorbis audio (and VP8 video) stream, then you can use the "LIVE555 Media Server" to stream a ".webm" file (which contains Vorbis audio and VP8 video tracks). Our server code automatically demultiplexes these tracks from the file for streaming. (If you need examples of such files, just let me know.) Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From temp2010 at forren.org Wed Feb 27 14:02:48 2013 From: temp2010 at forren.org (temp2010 at forren.org) Date: Wed, 27 Feb 2013 17:02:48 -0500 Subject: [Live-devel] Totally flummoxed about video quality, myDeviceSource->myOnDemandRTSPServer Message-ID: To Ross, I assume... 
You're so insightful and you always seem to nail it on the head. Perhaps you can give me some guidance here again. I got my named-pipe (actually I think it was an anonymous pipe) on Windows working with good quality, from my video source software, through Live555, and out to either VLC or custom iPad app. But the pipe latency was over one second. Far too long. So I'm back to my prior attempt that used a myDeviceSource-based (MF_H264_DeviceSource in prior emails) solution. Back on the pipe, I had similar bad quality. It turned out to be mostly due to the horsepower and speed with which I pushed into the pipe, as well as the amount of memory given to the pipe (whatever that actually controlled). I discovered this by noticing VLC message logs about frames being late by 100, 200, even 400ms. Along the way, I put trace statements into your Live555 code so that I could understand what's happening. Presentation time seemed important, although I recall you saying once it wasn't so important. But presentation time changes related directly to the VLC late messages, and when I got rid of the VLC late messages, good quality came through. So now back on myDeviceSource, I've tweaked some presentation time stuff and the VLC late messages have gone away. In essence, just as is done by ByteStreamFileSource and even H264VideoStreamFramer, the presentation time is set once in the beginning, and then only ever incremented (25fps=40ms) after that. But even though VLC no longer complains about late frames, the quality is still horrible. I temporarily added a trace to H264VideoStreamParser to report both the presentation time and the current time. On the trace output, this helps me tell not only what the presentation times are, but also whether I'm feeding frames into the parser (indirectly through myDeviceSource) at a high enough speed. It seems that I am. So this should not be a case of not sending fast enough, as was previously the case (on a different computer remote from me) for the pipe.
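The presentation-time scheme described above (stamp the first frame with the current wall-clock time, then advance by the frame duration, 40 ms at 25 fps) can be sketched as follows. This is a hypothetical, self-contained illustration, not code from the library:

```cpp
#include <sys/time.h>

// Sketch of the scheme described in this thread: the presentation time is
// set once at the start (from gettimeofday), then only ever incremented by
// the frame duration (40000 us per frame at 25 fps) for each later frame.
struct PresentationClock {
  timeval fNext;
  bool fStarted = false;

  timeval nextPresentationTime() {
    if (!fStarted) { gettimeofday(&fNext, nullptr); fStarted = true; }
    timeval t = fNext;
    fNext.tv_usec += 40000;              // 1/25 s per frame
    if (fNext.tv_usec >= 1000000) {      // carry into seconds
      fNext.tv_usec -= 1000000;
      ++fNext.tv_sec;
    }
    return t;
  }
};
```

Successive calls then yield timestamps exactly 40 ms apart, regardless of how irregularly the frames actually arrive from the source.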
I have no clue what to try or what to do or what day it is. HELP! (BTW, I tried to look up the specs for H.264. I found a cryptic 500+ page pdf. That's just too overwhelming to even start. I'm smart and I know a lot, but I don't know this already and can't afford the years it would take to really understand it... like you do!) Thanks... -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Wed Feb 27 18:32:03 2013 From: finlayson at live555.com (Ross Finlayson) Date: Thu, 28 Feb 2013 15:32:03 +1300 Subject: [Live-devel] Totally flummoxed about video quality, myDeviceSource->myOnDemandRTSPServer In-Reply-To: References: Message-ID: <6364C5C6-F4E4-4A2D-85EB-63474F9E1E74@live555.com> > To Ross, I assume... You're so insightful and you always seem to nail it on the head. Perhaps you can give me some guidance here again. I already have. However, I'll try to explain again what I've said in the past. In any case, this will be my last posting on this topic. > I got my named-pipe (actually I think it was an anonymous pipe) on Windows working with good quality, from my video source software, through Live555, and out to either VLC or custom iPad app. > > But the pipe latency was over one second. Far too long. You already know what the problem is. Your decoder (hardware) has to receive frames at a smooth, evenly paced rate, otherwise it won't be able to display them properly. That's what's causing the video quality problems that you're seeing. However, network packets do not, in general, arrive at a smooth, even pace. I.e., network jitter is an unavoidable fact of the Internet. Even if you could overcome it in your own carefully-controlled LAN (with nothing else going on), you can't avoid it on other networks, and certainly not on the general Internet. To compensate for jitter, buffering (and thus some delay) *has to* be added somewhere. There is no alternative!
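The buffering being described (a delay between the network source and the decoder that absorbs arrival-time jitter) can be sketched as a minimal, self-contained queue. This is hypothetical illustration code, not part of LIVE555; in a real client it would sit between the "*RTPSource" object and the decoder, and it ignores RTP timestamp wrap-around for simplicity:

```cpp
#include <cstdint>
#include <map>
#include <string>
#include <utility>

// Minimal jitter buffer: frames are inserted in network-arrival order
// (possibly out of presentation order), and are released to the decoder in
// timestamp order only once more than `depth` frames are queued. The depth
// is the delay that absorbs network jitter.
class JitterBuffer {
public:
  explicit JitterBuffer(size_t depth) : fDepth(depth) {}

  void insert(uint32_t rtpTimestamp, std::string frame) {
    fQueue[rtpTimestamp] = std::move(frame);
  }

  // Returns true and fills `out` with the earliest queued frame, but only
  // after the buffer has filled past its configured depth.
  bool release(std::string& out) {
    if (fQueue.size() <= fDepth) return false; // still absorbing jitter
    auto it = fQueue.begin();                  // smallest timestamp first
    out = std::move(it->second);
    fQueue.erase(it);
    return true;
  }

private:
  size_t fDepth;
  std::map<uint32_t, std::string> fQueue; // ordered by RTP timestamp
};
```

Tuning `depth` trades latency against tolerance for late packets, which is exactly the trade-off discussed here.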
The best place to add this buffering is in the receiver - i.e., in your client application, so that it can compensate for jitter on the network. This is what VLC does (which is why you say that it plays your stream with no problem), and it's what you should do also in your iPad player application. I.e., add a buffer between the "*RTPSource" object and the decoder. Of course, you can tune the size of this buffer to try to keep the (unavoidable) delay low. If you add buffering to your receiver application, then you won't need to add it to your server. I.e., you then can get rid of your pipe-based server (and its associated delay). Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From agritsay at cnord.ru Wed Feb 27 22:39:00 2013 From: agritsay at cnord.ru (agritsay at cnord.ru) Date: Thu, 28 Feb 2013 09:39:00 +0300 Subject: [Live-devel] rtsp proxy: memory leak Message-ID: Hello, I believe I found a huge memory leak in the rtsp proxy server. The problem is that I can only reliably reproduce it on qemu-arm or physical armv7 device. It doesn't happen when I run rtsp proxy on my x86-64 machine. Moreover, it only reveals when there are four or more clients connected to the proxy server: everything works fine when there are just three clients (memory usage stays at the level of about 6 MiB) but when the fourth client connects, memory consumption increases very quickly until the process gets killed by oom killer. Unfortunately I can not use valgrind for debugging the leak because it is not available on the platforms the problem is reproducible on. What can be done in order to debug the leak? Or maybe I'm just doing something wrong and this behaviour is sort of expected? Thanks! 
-- Anton Gritsay From felix at embedded-sol.com Wed Feb 27 23:15:35 2013 From: felix at embedded-sol.com (Felix Radensky) Date: Thu, 28 Feb 2013 09:15:35 +0200 Subject: [Live-devel] rtsp proxy: memory leak In-Reply-To: References: Message-ID: <512F0417.3090705@embedded-sol.com> Hi Anton, On 02/28/2013 08:39 AM, agritsay at cnord.ru wrote: > Hello, > > I believe I found a huge memory leak in the rtsp proxy server. The > problem is that I can only reliably reproduce it on qemu-arm or > physical armv7 device. It doesn't happen when I run rtsp proxy on my > x86-64 machine. Moreover, it only reveals when there are four or more > clients connected to the proxy server: everything works fine > when there are just three clients (memory usage stays at the level of > about 6 MiB) but when the fourth client connects, memory consumption > increases very quickly until the process gets killed by oom killer. > > Unfortunately I can not use valgrind for debugging the leak because it > is not available on the platforms the problem is reproducible on. Valgrind is available for ARMv7, see http://valgrind.org/info/platforms.html Felix. From agritsay at cnord.ru Wed Feb 27 23:18:09 2013 From: agritsay at cnord.ru (agritsay at cnord.ru) Date: Thu, 28 Feb 2013 10:18:09 +0300 Subject: [Live-devel] rtsp proxy: memory leak In-Reply-To: <512F0417.3090705@embedded-sol.com> References: <512F0417.3090705@embedded-sol.com> Message-ID: You are right, it is available. I tried to use it but it fails with disInstr(arm): unhandled instruction: 0xEEBA7BC1 cond=14(0xE) 27:20=235(0xEB) 4:4=0 3:0=1(0x1) immediately after the log message "ProxyServerMediaSubsession["H264"]::createNewStreamSource". -- Anton 2013/2/28 Felix Radensky : > Hi Anton, > > > On 02/28/2013 08:39 AM, agritsay at cnord.ru wrote: >> >> Hello, >> >> I believe I found a huge memory leak in the rtsp proxy server. The >> problem is that I can only reliably reproduce it on qemu-arm or >> physical armv7 device. 
It doesn't happen when I run rtsp proxy on my >> x86-64 machine. Moreover, it only reveals when there are four or more >> clients connected to the proxy server: everything works fine >> when there are just three clients (memory usage stays at the level of >> about 6 MiB) but when the fourth client connects, memory consumption >> increases very quickly until the process gets killed by oom killer. >> >> Unfortunately I can not use valgrind for debugging the leak because it >> is not available on the platforms the problem is reproducible on. > > Valgrind is available for ARMv7, see http://valgrind.org/info/platforms.html > > Felix. From finlayson at live555.com Wed Feb 27 23:19:58 2013 From: finlayson at live555.com (Ross Finlayson) Date: Thu, 28 Feb 2013 20:19:58 +1300 Subject: [Live-devel] rtsp proxy: memory leak In-Reply-To: References: Message-ID: > Unfortunately I can not use valgrind for debugging the leak because it > is not available on the platforms the problem is reproducible on. And what platforms are these? Let me guess - Windows? Sigh... Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From agritsay at cnord.ru Wed Feb 27 23:58:00 2013 From: agritsay at cnord.ru (agritsay at cnord.ru) Date: Thu, 28 Feb 2013 10:58:00 +0300 Subject: [Live-devel] rtsp proxy: memory leak In-Reply-To: References: Message-ID: 2013/2/28 Ross Finlayson : > Unfortunately I can not use valgrind for debugging the leak because it > is not available on the platforms the problem is reproducible on. > > > And what platforms are these? Let me guess - Windows? Sigh... No, they are qemu-arm (which emulates armv5) and TI AM3359 (which is armv7). I use Linux on both. -- Anton > Ross Finlayson > Live Networks, Inc.
> http://www.live555.com/ > > > _______________________________________________ > live-devel mailing list > live-devel at lists.live555.com > http://lists.live555.com/mailman/listinfo/live-devel > From sebastien-devel at celeos.eu Thu Feb 28 03:28:28 2013 From: sebastien-devel at celeos.eu (=?ISO-8859-1?Q?S=E9bastien_Escudier?=) Date: Thu, 28 Feb 2013 12:28:28 +0100 Subject: [Live-devel] Vorbis and config parse In-Reply-To: <02AA4895-132B-4606-8AA3-A0A5DFA43811@live555.com> References: <02AA4895-132B-4606-8AA3-A0A5DFA43811@live555.com> Message-ID: <512F3F5C.9000802@celeos.eu> Ok, thanks for the hint. > BTW, if, for testing, you need a RTSP Vorbis audio (and VP8 video) > stream, then you can use the "LIVE555 Media Server" to stream a ".webm" Yes, that's what I am doing. I took some samples from here: http://www.webmfiles.org/demo-files/ However, I don't understand the length field in the packed structure defined in section 3.2.1. I tried it with your server and VLC. VLC set it to the total length of the Packed Headers. Your server set it to the length of the first header minus 8. For example, when streaming this file: http://video.webmfiles.org/big-buck-bunny_trailer.webm Once converted into binary format, the total length of the configuration field is: 3106. If we remove the 4 byte count field, the length of the first (and only) packed header is 3102. VLC set its length field to 3106. Your server set it to 3094. Regards, Sebastien.
From finlayson at live555.com Thu Feb 28 03:58:23 2013 From: finlayson at live555.com (Ross Finlayson) Date: Fri, 1 Mar 2013 00:58:23 +1300 Subject: [Live-devel] Vorbis and config parse In-Reply-To: <512F3F5C.9000802@celeos.eu> References: <02AA4895-132B-4606-8AA3-A0A5DFA43811@live555.com> <512F3F5C.9000802@celeos.eu> Message-ID: <5786169A-805C-4E0F-8DCF-06C90F0A08C4@live555.com> > However, I don't understand the length field in the packed structure > defined in section 3.2.1 Although it's not stated explicitly, I'm pretty sure that it (and the other fields) are the same as for the structure that's used for "In-band Header Transmission" (in section 3.1). In particular, at the end of page 10 of RFC 5215, we see: "The 2 byte length tag defines the length of the packed headers as the sum of the Configuration, Comment, and Setup lengths." I suspect that they meant to say "Identification" instead of "Configuration" there. So, I inferred that the "length" field should be just the sum of the lengths of the "Identification", "Comment", and "Setup" headers, but *not* including the lengths of any of the preceding fields ("Ident", "length", "n. of headers", "length1", "length2"). So, in your example, I think the LIVE555 server is correct in setting the "length" field to 3094, and not 3106. So, if you agree with this, I suggest modifying VLC's code accordingly. But if you disagree, then we should probably ask the Xiph folks for clarification... Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... 
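The interpretation above can be checked numerically. A self-contained sketch, assuming the layout discussed in this thread: a 4-byte header-count field in front, then a packed header whose preamble (Ident 3 bytes + length 2 + number-of-headers 1 + two 1-byte header lengths) totals 8 bytes, as in the big-buck-bunny example. The helper name is made up for illustration:

```cpp
// Hypothetical helper reproducing the arithmetic from this thread.
// Assumptions (from the example, not stated in the RFC text itself):
//   - the configuration blob starts with a 4-byte header-count field;
//   - the packed header's preamble is 8 bytes in total:
//     Ident(3) + length(2) + n-of-headers(1) + length1(1) + length2(1).
unsigned expectedLengthField(unsigned totalConfigBytes) {
  const unsigned kCountField = 4; // number-of-packed-headers count
  const unsigned kPreamble = 8;   // fields preceding the header payload
  return totalConfigBytes - kCountField - kPreamble;
}
```

For the 3106-byte configuration blob, this gives 3094 (the value the LIVE555 server writes), while VLC's 3106 also counts the preamble and the count field.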
URL: From sebastien-devel at celeos.eu Thu Feb 28 04:41:37 2013 From: sebastien-devel at celeos.eu (=?ISO-8859-1?Q?S=E9bastien_Escudier?=) Date: Thu, 28 Feb 2013 13:41:37 +0100 Subject: [Live-devel] Vorbis and config parse In-Reply-To: <5786169A-805C-4E0F-8DCF-06C90F0A08C4@live555.com> References: <5786169A-805C-4E0F-8DCF-06C90F0A08C4@live555.com> Message-ID: <512F5081.4010105@celeos.eu> > So, if you agree with this, I suggest modifying VLC's code accordingly. Ok, Thanks Ross. From sebastien-devel at celeos.eu Thu Feb 28 07:25:53 2013 From: sebastien-devel at celeos.eu (=?ISO-8859-1?Q?S=E9bastien_Escudier?=) Date: Thu, 28 Feb 2013 16:25:53 +0100 Subject: [Live-devel] live555 server and webm issue Message-ID: <512F7701.1080705@celeos.eu> Hi Ross, While testing your server with webm files, I noticed a strange bug. When I launch the server (testOnDemandRTSPServer), the first time I try to view the webm stream (/webmFileTest) it always works well. But then, when I try to view this stream again, I see a lot of errors (in VLC). And I have to relaunch your server to make it work again. So I tried to connect with your client (openRTSP) and I noticed that on the first launch, the output files (audio-VORBIS-2 video-VP8-1) have a good size (more or less the same size as the original test.webm), but on the second try, the size is far less (like half the size), and each time I try again the size decreases. If I restart the server, I get 2 files with correct size (audio-VORBIS-2 video-VP8-1). So I think it is a bug in the server. The file I am streaming is: http://video.webmfiles.org/big-buck-bunny_trailer.webm This bug does not affect mpeg2TransportStreamTest for example. I always have the same output. Regards, Sebastien.
From Schoenstedt.H at st-sportservice.com Thu Feb 28 04:20:16 2013 From: Schoenstedt.H at st-sportservice.com (Schoenstedt, Holger) Date: Thu, 28 Feb 2013 12:20:16 +0000 Subject: [Live-devel] memory leak in RTSPClient::handleRequestError() Message-ID: <34CE1F2F79A9324EAB825D735D038CA159150703@GDC00216.swatchgroup.net> Hello, I'm developing a RTSP client application. While testing I've found a possible memory leak in RTSPClient::handleRequestError(). When the handler is called, the error string is duplicated. IMHO this string should be freed inside RTSPClient::handleRequestError() after the call to the handler. Am I right? Or should the string be freed inside the handler? Cheers, Holger Schönstedt Software Engineer ST SPORTSERVICE GmbH Wiesenring 11 D - 04159 Leipzig - Germany Phone: +49 341 46 21 100 Fax: +49 341 46 21 400 schoenstedt.h at st-sportservice.com www.swisstiming.com Firmensitz · Registered Offices: Leipzig Amtsgericht · District Court: Leipzig HRB 2325 Geschäftsführer · Managing Directors: Dr. Ulrich Heilfort, Eckhard Frank Aufsichtsratsvorsitzender · Chairman of the Supervisory Board: Georges N. Hayek ******************************************************************************* This e-mail message is intended only for the addressee(s) and contains information which may be confidential. If you are not the intended recipient please do not read, save, forward, disclose or copy the contents of this e-mail. If this e-mail has been sent to you in error, please delete this e-mail and any copies or links to this e-mail completely and immediately from your system. We would also like to inform you that communication via e-mail over the Internet is insecure because third parties may have the possibility to access and manipulate e-mails. Any views expressed in this message are those of the individual sender, except where the sender specifically states them to be the views of The Swatch Group Ltd.
******************************************************************************* -------------- next part -------------- An HTML attachment was scrubbed... URL: From jshanab at smartwire.com Thu Feb 28 07:58:43 2013 From: jshanab at smartwire.com (Jeff Shanab) Date: Thu, 28 Feb 2013 15:58:43 +0000 Subject: [Live-devel] rtsp proxy: memory leak In-Reply-To: References: Message-ID: <615FD77639372542BF647F5EBAA2DBC22525E457@IL-BOL-EXCH01.smartwire.com> I am currently designing an app around rtspProxy server to run on arm. (raspberry pi) As a test I let it run 4 channels and 4 clients (1 per channel) for 48 hours. I did not see any leak or cpu % growth in that time. If you tell me your setup so I reproduce it exactly, I can run it on my pi. This message and any attachments contain confidential and proprietary information, and may contain privileged information, belonging to one or more affiliates of Windy City Wire Cable & Technology Products, LLC. No privilege is waived by this transmission. Unauthorized use, copying or disclosure of such information is prohibited and may be unlawful. If you receive this message in error, please delete it from your system, destroy any printouts or copies of it, and notify the sender immediately by e-mail or phone -----Original Message----- From: live-devel-bounces at ns.live555.com [mailto:live-devel-bounces at ns.live555.com] On Behalf Of agritsay at cnord.ru Sent: Thursday, February 28, 2013 1:58 AM To: LIVE555 Streaming Media - development & use Subject: Re: [Live-devel] rtsp proxy: memory leak 2013/2/28 Ross Finlayson : > Unfortunately I can not use valgrind for debugging the leak because it > is not available on the platforms the problem is reproducible on. > > > And what platforms are these? Let me guess - Windows? Sigh... No, they are are qemu-arm (which emulates armv5) and TI AM3359 (which is armv7). I use linux on both. -- Anton > Ross Finlayson > Live Networks, Inc. 
> http://www.live555.com/ > > > _______________________________________________ > live-devel mailing list > live-devel at lists.live555.com > http://lists.live555.com/mailman/listinfo/live-devel > _______________________________________________ live-devel mailing list live-devel at lists.live555.com http://lists.live555.com/mailman/listinfo/live-devel From finlayson at live555.com Thu Feb 28 10:03:04 2013 From: finlayson at live555.com (Ross Finlayson) Date: Fri, 1 Mar 2013 07:03:04 +1300 Subject: [Live-devel] memory leak in RTSPClient::handleRequestError() In-Reply-To: <34CE1F2F79A9324EAB825D735D038CA159150703@GDC00216.swatchgroup.net> References: <34CE1F2F79A9324EAB825D735D038CA159150703@GDC00216.swatchgroup.net> Message-ID: <17CE42D6-F892-45B9-9E5B-F47C62CE23F5@live555.com> > I'm developing a RTSP client application. While testing I've found a possible memory leak in RTSPClient::handleRequestError(). While calling the handler there is a string duplication of the error string. No, there's no memory leak here. In *all* calls to a RTSP "responseHandler" (including the one called to implement "handleRequestError()"), the "resultString" parameter is assumed to have been dynamically allocated, and must be delete[]d after the handler function has been called. (See the comment in "liveMedia/include/RTSPClient.hh", lines 59-60.) However, your question did turn out to be useful, because after I read it, I went through the code, looking for places where we might not be delete[]ing the "resultString" afterwards. I did find a handful of places - in the "testRTSPClient" and "openRTSP" applications - where the "resultString" was not being delete[]d; thus, there was a minor memory leak in those places (although only in situations where a RTSP request failed). These will be fixed in the next release of the software. Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... 
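The ownership rule described here (the "resultString" passed to a response handler is heap-allocated and must be delete[]d by the handler) can be illustrated with a self-contained sketch. The RTSPClient type below is a bare stand-in so the example compiles on its own; the real handler signature is documented in liveMedia/include/RTSPClient.hh, and invokeHandler simulates what the library does:

```cpp
#include <cstring>

// Stand-in for the real class, so this sketch is self-contained:
struct RTSPClient {};
typedef void (responseHandler)(RTSPClient*, int resultCode, char* resultString);

int handledCount = 0; // just for the demonstration below

// A handler honoring the contract: resultString was allocated with new[]
// by the library, and the handler owns it -- delete[] it on every path.
void continueAfterDESCRIBE(RTSPClient*, int resultCode, char* resultString) {
  if (resultCode != 0) {
    // On failure, resultString holds the error message; inspect it here.
  }
  ++handledCount;
  delete[] resultString; // freed on success *and* on failure
}

// Simulates the library invoking a handler with a heap-allocated string:
void invokeHandler(responseHandler* handler, int code, const char* msg) {
  RTSPClient client;
  char* copy = new char[strlen(msg) + 1];
  strcpy(copy, msg);
  handler(&client, code, copy); // ownership passes to the handler
}
```

Forgetting the delete[] on the error path is exactly the kind of minor leak found in testRTSPClient and openRTSP above.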
URL: From cvanbrederode at gmail.com Thu Feb 28 13:49:13 2013 From: cvanbrederode at gmail.com (Chris Van Brederode) Date: Thu, 28 Feb 2013 16:49:13 -0500 Subject: [Live-devel] Question on streams in Windows Message-ID: Hello, I'm trying to develop a small program that reads output from an xvid encoder through a pipe (named "\\.\pipe\file.m4e" as an example) and serves it over RTSP - unicast. I've got a small application based off of testOnDemandRTSPServer.cpp, and it works if I save the bitstream to a file and give it that. But once I use the pipe, VLC times out trying to connect to the server, and the server doesn't give any errors. Upon digging into the code, I found this in ByteStreamFileSource.cpp:

#ifdef READ_FROM_FILES_SYNCHRONOUSLY
  fFrameSize = fread(fTo, 1, fMaxSize, fFid);
#else
  if (fFidIsSeekable) {
    fFrameSize = fread(fTo, 1, fMaxSize, fFid);
  } else {
    // For non-seekable files (e.g., pipes), call "read()" rather than "fread()", to ensure that the read doesn't block:
    fFrameSize = read(fileno(fFid), fTo, fMaxSize);
  }
#endif

...and looking for READ_FROM_FILES_SYNCHRONOUSLY I find it defined in InputFile.hh with the following note:

#define READ_FROM_FILES_SYNCHRONOUSLY 1
// Because Windows is a silly toy operating system that doesn't (reliably) treat
// open files as being readable sockets (which can be handled within the default
// "BasicTaskScheduler" event loop, using "select()"), we implement file reading
// in Windows using synchronous, rather than asynchronous, I/O. This can severely
// limit the scalability of servers using this code that run on Windows.
// If this is a problem for you, then either use a better operating system,
// or else write your own Windows-specific event loop ("TaskScheduler" subclass)
// that can handle readable data in Windows open files as an event.

I'm working under the assumption that the fread call is blocking and causing the problems.
My question is this: can I safely undefine READ_FROM_FILES_SYNCHRONOUSLY (on Windows) and have non-blocking calls to read, or do I need to extend ByteStreamFileSource with my own version that uses non-blocking I/O on Windows? Thank you for any help... Chris V. -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Thu Feb 28 15:20:05 2013 From: finlayson at live555.com (Ross Finlayson) Date: Fri, 1 Mar 2013 12:20:05 +1300 Subject: [Live-devel] Question on streams in Windows In-Reply-To: References: Message-ID: > My question is this: can I safely undefine READ_FROM_FILES_SYNCHRONOUSLY NO! You should not modify the supplied source code. (Windows developers who use "@gmail.com" email addresses should especially not modify the supplied source code :-) The whole point of this code is that, in Windows, reads from open files (including pipes) can only be done synchronously - i.e., as a blocking read. This means that if you are reading from a pipe, then the writing to the (other end of the) pipe must be done by another process. However, if your encoder (the thing that writes to the other end of the pipe) is running as another process, then things should still work. Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Thu Feb 28 16:26:16 2013 From: finlayson at live555.com (Ross Finlayson) Date: Fri, 1 Mar 2013 13:26:16 +1300 Subject: [Live-devel] live555 server and webm issue In-Reply-To: <512F7701.1080705@celeos.eu> References: <512F7701.1080705@celeos.eu> Message-ID: <7E416C80-0B55-4C09-9997-08CA1E47400F@live555.com> > So I think it is a bug in the server. Yes, you're right. I'm looking at it now. Thanks for the report. Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed...
URL: From cvanbrederode at gmail.com Thu Feb 28 17:45:39 2013 From: cvanbrederode at gmail.com (Chris Van Brederode) Date: Thu, 28 Feb 2013 20:45:39 -0500 Subject: [Live-devel] Question on streams in Windows In-Reply-To: References: Message-ID: Yes, the encoder is another process (which is in turn reading raw frames from yet another process). I'll test my pipe code in the encoder; I'm doing it differently than in the 3D program going to the encoder. As far as Windows developers who use gmail...I can understand the anti-windows sentiment, but I don't know what you have against gmail... And I code in Windows because I'm paid to...and I know *exactly* how to do asynchronous, non-blocking file I/O in Windows. Be careful with the word "impossible." ;-) (Hint: don't use stdio) Thank you for the quick response. Chris On Thu, Feb 28, 2013 at 6:20 PM, Ross Finlayson wrote: > My question is this: can I safely undefine READ_FROM_FILES_SYNCHRONOUSLY > > > NO! You should not modify the supplied source code. (Windows developers > who use "@gmail.com" email addresses should especially not modify the > supplied source code :-) > > The whole point of this code is that, in Windows, reads from open files > (including pipes) can only be done synchronously - i.e., as a blocking > read. This means that if you are reading from a pipe, then the writing to > the (other end of the) pipe must be done by another process. > > However, if your encoder (the thing that writes to the other end of the > pipe) is running as another process, then things should still work. > > > Ross Finlayson > Live Networks, Inc. > http://www.live555.com/ > > > _______________________________________________ > live-devel mailing list > live-devel at lists.live555.com > http://lists.live555.com/mailman/listinfo/live-devel > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From finlayson at live555.com Thu Feb 28 18:14:36 2013 From: finlayson at live555.com (Ross Finlayson) Date: Fri, 1 Mar 2013 15:14:36 +1300 Subject: [Live-devel] Question on streams in Windows In-Reply-To: References: Message-ID: > As far as Windows developers who use gmail...I can understand the anti-windows sentiment, but I don't know what you have against gmail... This is explained clearly in the FAQ (that everyone was asked to read before posting to the mailing list :-) > And I code in Windows because I'm paid too...and I know *exactly* how to do asynchronous, non-blocking file IO in windows. Be careful with the word "impossible." ;-) The issue with doing asynchronous file reading in Windows is that - in Windows - extra work needs to be done to handle the 'data is available on the open file' event. In other OSs, open files are sockets that can be passed to "select()", as we do in the the implementation of "BasicTaskScheduler" (the "TaskScheduler" subclass that we provide with the supplied code). In (at least some versions of) Windows, however, open files are not "select()"able sockets. Therefore, to do asynchronous file reads in Windows, you would need to write your own subclass of "TaskScheduler" (that reimplements the "setBackgroundHandling()" virtual function). Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL:
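The distinction drawn above can be seen in a small POSIX demo: on Unix-like systems a pipe file descriptor is "select()"-able, which is exactly the readiness test a select()-based event loop like BasicTaskScheduler's relies on. This is a self-contained sketch for POSIX only; on Windows, open file handles are precisely the case where this does not work:

```cpp
#include <sys/select.h>
#include <sys/time.h>
#include <unistd.h>

// Returns true if fd becomes readable within timeoutSec seconds; this is
// the same readiness check a select()-based event loop performs before
// scheduling a read handler for the descriptor.
bool waitReadable(int fd, int timeoutSec) {
  fd_set readSet;
  FD_ZERO(&readSet);
  FD_SET(fd, &readSet);
  timeval tv = { timeoutSec, 0 };
  return select(fd + 1, &readSet, nullptr, nullptr, &tv) > 0;
}
```

With pipe(fds), the read end fds[0] reports not-readable while the pipe is empty, and readable as soon as the writer process puts data in, so the event loop never has to issue a blocking read.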