From gotar at polanet.pl Sat Nov 1 03:39:54 2014
From: gotar at polanet.pl (Tomasz Pala)
Date: Sat, 1 Nov 2014 11:39:54 +0100
Subject: [Live-devel] [patch] Authentication hiccups
Message-ID: <20141101103953.GA24173@polanet.pl>

Hello,

recently I've tried to watch some recordings on my DVR. While there were no problems with the online streams from the attached cameras, I couldn't find a way to review anything older - all I got was _the first frame_ of the specified period. I've tried vlc, mplayer and finally openRTSP, without success. However, ffmpeg did the job and fetched the entire stream as requested. After some digging I've found what causes this behaviour and made a workaround/fix - patch attached, description follows.

My DVR is password-protected; a session using the live555 libs looks like this:

-> Sending request: OPTIONS
<- RTSP/1.0 401 Unauthorized
   WWW-Authenticate: Digest
-> Sending request: OPTIONS
   Authorization: Digest
<- RTSP/1.0 200 OK
-> Sending request: DESCRIBE
<- RTSP/1.0 401 Unauthorized
   WWW-Authenticate: Digest
-> Sending request: DESCRIBE
   Authorization: Digest
<- RTSP/1.0 200 OK
-> Sending request: SETUP
<- RTSP/1.0 401 Unauthorized
   WWW-Authenticate: Digest
-> Sending request: SETUP
   Authorization: Digest
<- RTSP/1.0 200 OK
-> Sending request: SETUP
<- RTSP/1.0 401 Unauthorized
   WWW-Authenticate: Digest
-> Sending request: SETUP
   Authorization: Digest
<- RTSP/1.0 200 OK
-> Sending request: PLAY
<- RTSP/1.0 401 Unauthorized
   WWW-Authenticate: Digest
-> Sending request: PLAY
   Authorization: Digest
<- RTSP/1.0 455 Method Not Valid In This State
-> Sending request: TEARDOWN
<- RTSP/1.0 401 Unauthorized
   WWW-Authenticate: Digest
-> Sending request: TEARDOWN
   Authorization: Digest
<- RTSP/1.0 200 OK

This is a bit ugly compared to ffmpeg, which keeps using the realm and nonce once authenticated - not only because of the excessive traffic, but because, at least in the case of my DVR, it disturbs the SETUPed state and prevents the PLAY request from doing what it was supposed to do.
The rationale behind the ffmpeg approach (keeping the realm and nonce) is straightforward: if access is protected, then every request from a session should be authorized; there's no point in trying unauthenticated access. The problem with RTSPClient is that it operates on fCurrentAuthenticator, which is only a copy of the main authenticator instance; this in turn is not passed to any response handler, so the realm and nonce are lost at the beginning of every send command (and reappear after the command is resent); fCurrentAuthenticator is overwritten and there's no path that allows saving these values back into the original authenticator instance. My first workaround was to save them in sendDescribeCommand:

unsigned RTSPClient::sendDescribeCommand(responseHandler* responseHandler, Authenticator* authenticator) {
  authenticator->setRealmAndNonce(fCurrentAuthenticator.realm(), fCurrentAuthenticator.nonce());
  // note the missing sanity checks above /when authenticator == NULL/, this is just a PoC
  if (authenticator != NULL) fCurrentAuthenticator = *authenticator;
  return sendRequest(new RequestRecord(++fCSeq, "DESCRIBE", responseHandler));
}

and that works like a charm. However, I don't know if it is the proper solution (what if '401 Unauthorized' is returned later, i.e. after SETUP? Or if the nonce changes? May I alter anything in the authenticator itself anyway?), so I've decided to create an even less intrusive fix. It simply prevents overwrites of fCurrentAuthenticator unless this is necessary. This way there are no behaviour changes visible on the application side (resends were already handled by RTSPClient internally), so please review the attached patch.

Note that I'm not an RTSP specialist of any kind and everything above is just my yesterday's observations; whether this is close to proper, or some other solution is required, is up to you - I can't assist in this issue any further.

One more suggestion, if I might: it would be nice to be able to forcefully disable Basic auth.
In the current code this could be enforced by assuring nonce != NULL, e.g. by setting it to an empty string ("") in the authenticator, but this is not straightforward and might change in the future.

regards,
-- 
Tomasz Pala
-------------- next part --------------
diff -ur -x .svn -x .git -x .bzr -x CVS -ur live/liveMedia/DigestAuthentication.cpp live.new/liveMedia/DigestAuthentication.cpp
--- live/liveMedia/DigestAuthentication.cpp	2014-10-28 22:09:59.000000000 +0100
+++ live.new/liveMedia/DigestAuthentication.cpp	2014-11-01 10:04:01.884124153 +0100
@@ -48,6 +55,17 @@
   return *this;
 }
 
+Boolean Authenticator::operator<(const Authenticator* rightSide) {
+  if (rightSide != NULL && rightSide != this && (
+      rightSide->realm() != NULL ||
+      rightSide->nonce() != NULL ||
+      strcmp(rightSide->username(), this->username()) ||
+      strcmp(rightSide->password(), this->password()) ) )
+    return True;
+
+  return False;
+}
+
 Authenticator::~Authenticator() {
   reset();
 }
diff -ur -x .svn -x .git -x .bzr -x CVS -ur live/liveMedia/include/DigestAuthentication.hh live.new/liveMedia/include/DigestAuthentication.hh
--- live/liveMedia/include/DigestAuthentication.hh	2014-10-28 22:09:59.000000000 +0100
+++ live.new/liveMedia/include/DigestAuthentication.hh	2014-11-01 09:55:41.113389683 +0100
@@ -37,6 +37,7 @@
       // by md5(::)
   Authenticator(const Authenticator& orig);
   Authenticator& operator=(const Authenticator& rightSide);
+  Boolean operator<(const Authenticator* rightSide);
   virtual ~Authenticator();
 
   void reset();
diff -ur -x .svn -x .git -x .bzr -x CVS -ur live/liveMedia/RTSPClient.cpp live.new/liveMedia/RTSPClient.cpp
--- live/liveMedia/RTSPClient.cpp	2014-10-28 22:09:59.000000000 +0100
+++ live.new/liveMedia/RTSPClient.cpp	2014-11-01 09:55:13.433349096 +0100
@@ -37,7 +37,7 @@
 }
 
 unsigned RTSPClient::sendDescribeCommand(responseHandler* responseHandler, Authenticator* authenticator) {
-  if (authenticator != NULL) fCurrentAuthenticator = *authenticator;
+  if (fCurrentAuthenticator < authenticator) fCurrentAuthenticator = *authenticator;
   return sendRequest(new RequestRecord(++fCSeq, "DESCRIBE", responseHandler));
 }
 
@@ -47,7 +52,7 @@
 }
 
 unsigned RTSPClient::sendAnnounceCommand(char const* sdpDescription, responseHandler* responseHandler, Authenticator* authenticator) {
-  if (authenticator != NULL) fCurrentAuthenticator = *authenticator;
+  if (fCurrentAuthenticator < authenticator) fCurrentAuthenticator = *authenticator;
   return sendRequest(new RequestRecord(++fCSeq, "ANNOUNCE", responseHandler, NULL, NULL, False, 0.0, 0.0, 0.0, sdpDescription));
 }
 
@@ -55,7 +60,7 @@
 			    Boolean streamOutgoing, Boolean streamUsingTCP, Boolean forceMulticastOnUnspecified, Authenticator* authenticator) {
   if (fTunnelOverHTTPPortNum != 0) streamUsingTCP = True; // RTSP-over-HTTP tunneling uses TCP (by definition)
-  if (authenticator != NULL) fCurrentAuthenticator = *authenticator;
+  if (fCurrentAuthenticator < authenticator) fCurrentAuthenticator = *authenticator;
 
   u_int32_t booleanFlags = 0;
   if (streamUsingTCP) booleanFlags |= 0x1;
@@ -67,7 +72,7 @@
 unsigned RTSPClient::sendPlayCommand(MediaSession& session, responseHandler* responseHandler,
 				     double start, double end, float scale,
 				     Authenticator* authenticator) {
-  if (authenticator != NULL) fCurrentAuthenticator = *authenticator;
+  if (fCurrentAuthenticator < authenticator) fCurrentAuthenticator = *authenticator;
   sendDummyUDPPackets(session); // hack to improve NAT traversal
   return sendRequest(new RequestRecord(++fCSeq, "PLAY", responseHandler, &session, NULL, 0, start, end, scale));
 }
 
@@ -75,7 +80,7 @@
 unsigned RTSPClient::sendPlayCommand(MediaSubsession& subsession, responseHandler* responseHandler,
 				     double start, double end, float scale,
 				     Authenticator* authenticator) {
-  if (authenticator != NULL) fCurrentAuthenticator = *authenticator;
+  if (fCurrentAuthenticator < authenticator) fCurrentAuthenticator = *authenticator;
   sendDummyUDPPackets(subsession); // hack to improve NAT traversal
   return sendRequest(new RequestRecord(++fCSeq, "PLAY", responseHandler, NULL, &subsession, 0, start, end, scale));
 }
 
@@ -83,7 +88,7 @@
 unsigned RTSPClient::sendPlayCommand(MediaSession& session, responseHandler* responseHandler,
 				     char const* absStartTime, char const* absEndTime, float scale,
 				     Authenticator* authenticator) {
-  if (authenticator != NULL) fCurrentAuthenticator = *authenticator;
+  if (fCurrentAuthenticator < authenticator) fCurrentAuthenticator = *authenticator;
   sendDummyUDPPackets(session); // hack to improve NAT traversal
   return sendRequest(new RequestRecord(++fCSeq, responseHandler, absStartTime, absEndTime, scale, &session, NULL));
 }
 
@@ -91,45 +96,45 @@
 unsigned RTSPClient::sendPlayCommand(MediaSubsession& subsession, responseHandler* responseHandler,
 				     char const* absStartTime, char const* absEndTime, float scale,
 				     Authenticator* authenticator) {
-  if (authenticator != NULL) fCurrentAuthenticator = *authenticator;
+  if (fCurrentAuthenticator < authenticator) fCurrentAuthenticator = *authenticator;
   sendDummyUDPPackets(subsession); // hack to improve NAT traversal
   return sendRequest(new RequestRecord(++fCSeq, responseHandler, absStartTime, absEndTime, scale, NULL, &subsession));
 }
 
 unsigned RTSPClient::sendPauseCommand(MediaSession& session, responseHandler* responseHandler, Authenticator* authenticator) {
-  if (authenticator != NULL) fCurrentAuthenticator = *authenticator;
+  if (fCurrentAuthenticator < authenticator) fCurrentAuthenticator = *authenticator;
   return sendRequest(new RequestRecord(++fCSeq, "PAUSE", responseHandler, &session));
 }
 
 unsigned RTSPClient::sendPauseCommand(MediaSubsession& subsession, responseHandler* responseHandler, Authenticator* authenticator) {
-  if (authenticator != NULL) fCurrentAuthenticator = *authenticator;
+  if (fCurrentAuthenticator < authenticator) fCurrentAuthenticator = *authenticator;
   return sendRequest(new RequestRecord(++fCSeq, "PAUSE", responseHandler, NULL, &subsession));
 }
 
 unsigned RTSPClient::sendRecordCommand(MediaSession& session, responseHandler* responseHandler, Authenticator* authenticator) {
-  if (authenticator != NULL) fCurrentAuthenticator = *authenticator;
+  if (fCurrentAuthenticator < authenticator) fCurrentAuthenticator = *authenticator;
   return sendRequest(new RequestRecord(++fCSeq, "RECORD", responseHandler, &session));
 }
 
 unsigned RTSPClient::sendRecordCommand(MediaSubsession& subsession, responseHandler* responseHandler, Authenticator* authenticator) {
-  if (authenticator != NULL) fCurrentAuthenticator = *authenticator;
+  if (fCurrentAuthenticator < authenticator) fCurrentAuthenticator = *authenticator;
   return sendRequest(new RequestRecord(++fCSeq, "RECORD", responseHandler, NULL, &subsession));
 }
 
 unsigned RTSPClient::sendTeardownCommand(MediaSession& session, responseHandler* responseHandler, Authenticator* authenticator) {
-  if (authenticator != NULL) fCurrentAuthenticator = *authenticator;
+  if (fCurrentAuthenticator < authenticator) fCurrentAuthenticator = *authenticator;
   return sendRequest(new RequestRecord(++fCSeq, "TEARDOWN", responseHandler, &session));
 }
 
 unsigned RTSPClient::sendTeardownCommand(MediaSubsession& subsession, responseHandler* responseHandler, Authenticator* authenticator) {
-  if (authenticator != NULL) fCurrentAuthenticator = *authenticator;
+  if (fCurrentAuthenticator < authenticator) fCurrentAuthenticator = *authenticator;
   return sendRequest(new RequestRecord(++fCSeq, "TEARDOWN", responseHandler, NULL, &subsession));
 }
 
 unsigned RTSPClient::sendSetParameterCommand(MediaSession& session, responseHandler* responseHandler,
 					     char const* parameterName, char const* parameterValue,
 					     Authenticator* authenticator) {
-  if (authenticator != NULL) fCurrentAuthenticator = *authenticator;
+  if (fCurrentAuthenticator < authenticator) fCurrentAuthenticator = *authenticator;
   char* paramString = new char[strlen(parameterName) + strlen(parameterValue) + 10];
   sprintf(paramString, "%s: %s\r\n", parameterName, parameterValue);
   unsigned result = sendRequest(new RequestRecord(++fCSeq, "SET_PARAMETER", responseHandler, &session, NULL, False, 0.0, 0.0, 0.0, paramString));
@@ -139,7 +144,7 @@
 unsigned RTSPClient::sendGetParameterCommand(MediaSession& session, responseHandler* responseHandler, char const* parameterName,
 					     Authenticator* authenticator) {
-  if (authenticator != NULL) fCurrentAuthenticator = *authenticator;
+  if (fCurrentAuthenticator < authenticator) fCurrentAuthenticator = *authenticator;
 
   // We assume that:
   //    parameterName is NULL means: Send no body in the request.

From finlayson at live555.com Sat Nov 1 14:15:20 2014
From: finlayson at live555.com (Ross Finlayson)
Date: Sat, 1 Nov 2014 14:15:20 -0700
Subject: [Live-devel] [patch] Authentication hiccups
In-Reply-To: <20141101103953.GA24173@polanet.pl>
References: <20141101103953.GA24173@polanet.pl>
Message-ID: <83905EF8-E05C-4763-8284-A9C2959D80E8@live555.com>

Tomasz,

Thanks for the suggestion. This is a good idea, and I've just released a new version (2014.11.01) of the "LIVE555 Streaming Media" code that includes your patch.

It seems, however, that there may be a bug in your RTSP server, because I don't think this should be happening:

> -> Sending request: PLAY
> <- RTSP/1.0 401 Unauthorized
>    WWW-Authenticate: Digest
> -> Sending request: PLAY
>    Authorization: Digest
> <- RTSP/1.0 455 Method Not Valid In This State

I don't understand why your server would first send back a "401 Unauthorized" response, but then, after the client followed up with a valid "Authorization:" header, respond "455 Method Not Valid In This State". I suggest filing a bug report for your server.

> One more suggestion if I might, that it would be nice to be able to
> forcefully disable Basic auth.

No, I don't want to do this. If a server is foolish enough to request Basic Authentication, then the client should use it. (Few servers these days use Basic Authentication, fortunately.)

Ross Finlayson
Live Networks, Inc.
http://www.live555.com/

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From finlayson at live555.com Sat Nov 1 15:03:22 2014
From: finlayson at live555.com (Ross Finlayson)
Date: Sat, 1 Nov 2014 15:03:22 -0700
Subject: [Live-devel] please help, openrtsp recording audio/video sync problem
In-Reply-To: <916e3b57ad1a304326f5eb747e5d67b@cweb12.nm.nhnsystem.com>
References: <916e3b57ad1a304326f5eb747e5d67b@cweb12.nm.nhnsystem.com>
Message-ID: 

> I recorded a stream from camera by using openrtsp like this
> 
> openrtsp -4 -F test -P 120 rtsp://admin:4321@192.168.1.121/profile2/media.smp
> 
> then test-00000-00120.mp4 file is created.
> 
> I expected the play time would be 120 seconds, but it was 136 seconds.

The problem is that you didn't specify the frame rate, so "openRTSP" assumed a default frame rate of 15 fps. You need to use the "-w <width>", "-h <height>" and "-f <frame-rate>" options; see http://www.live555.com/openRTSP/#quicktime

Ross Finlayson
Live Networks, Inc.
http://www.live555.com/

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From markb at virtualguard.com Mon Nov 3 13:02:46 2014
From: markb at virtualguard.com (Mark Bondurant)
Date: Mon, 3 Nov 2014 21:02:46 +0000
Subject: [Live-devel] Buffering an H264 Video Stream
In-Reply-To: 
References: <6555ad5ce84f48d189928894094ae3e1@VGI-EX1.vg.local>
	<708eef3d62f641b7ac0cbbbd8a3a74b6@VGI-EX1.vg.local>
Message-ID: <782e1b3f8cbe4209ae3831669d1f7fe5@VGI-EX1.vg.local>

Hello,

Sorry if this is a repeat, but I/we have constant email problems (political issues), which I'm fairly sure I've now found a workaround for. What I'm saying is that this may be a repeat. If it is, I apologize, I didn't get your responses.

I need to keep a constant 3 second buffer of an H264 video stream. It's for security cameras. When something trips the camera, I replay the 3 seconds and then 6 more to see the event (One hopes I'll catch some ghosts or mountain lions, but really it's to catch car thieves!).
With mpeg it was easy, because mpeg has discrete frames, but the much better-definition H264 doesn't. I mean it does, but they're spread out over an indefinite series of NAL packets that can contain various varieties of slices. Squishing them together into a discrete frame is a problem.

It seems to me that there are two different paradigms at work in live555: modules that derive from Medium and modules that derive from MediaSource. One seems to clock out of env and the other out of session, and they don't fit together with each other.

I need RTSPClient to interface with the camera, which derives from Medium. It understands the RTSP Describe and Setup responses, but it only fits with its FileSinks. I need the H264DiscreteFramer filter because it understands how to gather NAL packets together into frames. Unfortunately the discrete framer wants its input from an input source that derives from MediaSource, not Medium, which RTSPClient derives from. RTSPClient doesn't fit together with H264DiscreteFramer! (clunk, clunk, me trying to squish them together).

When things don't fit, I think that I'm missing something important. So what I'm asking for is a clue. What am I missing?

Mark

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From finlayson at live555.com Mon Nov 3 13:37:30 2014
From: finlayson at live555.com (Ross Finlayson)
Date: Mon, 3 Nov 2014 13:37:30 -0800
Subject: [Live-devel] Buffering an H264 Video Stream
In-Reply-To: <782e1b3f8cbe4209ae3831669d1f7fe5@VGI-EX1.vg.local>
References: <6555ad5ce84f48d189928894094ae3e1@VGI-EX1.vg.local>
	<708eef3d62f641b7ac0cbbbd8a3a74b6@VGI-EX1.vg.local>
	<782e1b3f8cbe4209ae3831669d1f7fe5@VGI-EX1.vg.local>
Message-ID: 

> I need to keep a constant 3 second buffer of an H264 video stream. It's for security cameras. When something trips the camera, I replay the 3 seconds and then 6 more to see the event (One hopes I'll catch some ghosts or mountain lions, but really it's to catch car thieves!).
> With mpeg it was easy because mpeg has discrete frames, but the much better definition h264 doesn't. I mean it does, but they're spread out over an indefinite series of NAL packets that can contain various varieties of slices. Squishing together into a discrete frame is a problem.

> It seems to me that there are two different paradigms at work in live555. Modules that derive from Medium and modules that derive from MediaSource.

No, this is completely wrong. (Note that "MediaSource" is a subclass of "Medium".)

I suggest that you begin by reviewing the "testRTSPClient" demo application (in the "testProgs" directory). Note, in particular, the "DummySink" object that receives "frames" (for H.264 video, each "frame" will actually be a NAL unit). Note the code for "DummySink::afterGettingFrame()" ("testRTSPClient.cpp", lines 500-521).

For your own RTSP client application, you could write your own "sink" class (a subclass of "MediaSink") that receives H.264 NAL units, and processes them however you want.

Ross Finlayson
Live Networks, Inc.
http://www.live555.com/

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jshanab at jfs-tech.com Mon Nov 3 13:40:49 2014
From: jshanab at jfs-tech.com (Jeff Shanab)
Date: Mon, 3 Nov 2014 16:40:49 -0500
Subject: [Live-devel] Buffering an H264 Video Stream
In-Reply-To: <782e1b3f8cbe4209ae3831669d1f7fe5@VGI-EX1.vg.local>
References: <6555ad5ce84f48d189928894094ae3e1@VGI-EX1.vg.local>
	<708eef3d62f641b7ac0cbbbd8a3a74b6@VGI-EX1.vg.local>
	<782e1b3f8cbe4209ae3831669d1f7fe5@VGI-EX1.vg.local>
Message-ID: 

You need to create a filter and insert it into the chain. I had this exact scenario, and what I had was my own filter that handled the incoming frames. All my frames were small POD classes with a bit of metadata and a buffer holding the frame.
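Such reference-counted frame objects can be sketched standalone like this, using std::shared_ptr in place of the boost reference counting described above; the field names are illustrative, not live555 API:

```cpp
#include <cstdint>
#include <memory>
#include <utility>
#include <vector>

// Illustrative POD-style frame: a bit of metadata plus the encoded bytes.
struct Frame {
  int64_t ptsMicros = 0;      // presentation timestamp, microseconds
  bool isKeyframe = false;    // metadata carried along with the buffer
  std::vector<uint8_t> data;  // the encoded NAL unit / frame bytes
};

using FramePtr = std::shared_ptr<Frame>;

// Every consumer (disk writer, HTTP interface, plugin) holds the same
// FramePtr; the buffer is freed when the last consumer releases it, so
// no copying is needed when fanning one frame out to several sinks.
FramePtr makeFrame(int64_t pts, bool key, std::vector<uint8_t> bytes) {
  auto f = std::make_shared<Frame>();
  f->ptsMicros = pts;
  f->isKeyframe = key;
  f->data = std::move(bytes);
  return f;
}
```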
I had a pool of these of different sizes, and they were boost reference-counted to save on memory and copying overhead as they went to disk, or to HTTP Live Streaming, or out the HTTP interface, or to the browser plugin.

I then had a GOP buffer. Each GOP contained the reference to one keyframe and a list of references to all the difference frames. Then I had an adjustable buffer of GOPs. At most 2-3 GOPs at a time were fully or partially decoded, to allow forward and backward play, and fast forward on keyframes only, depending on whether it was live, recorded or cached playback.

On Mon, Nov 3, 2014 at 4:02 PM, Mark Bondurant wrote:

> Hello,
>
> Sorry if this is a repeat, but I/we have constant email problems
> (political issues), which I'm fairly sure I've now found a workaround for.
> What I'm saying is that this may be a repeat. If it is, I apologize, I
> didn't get your responses.
>
> I need to keep a constant 3 second buffer of an H264 video stream. It's
> for security cameras. When something trips the camera, I replay the 3
> seconds and then 6 more to see the event (One hopes I'll catch some ghosts
> or mountain lions, but really it's to catch car thieves!). With mpeg it
> was easy because mpeg has discrete frames, but the much better definition
> h264 doesn't. I mean it does, but they're spread out over an indefinite
> series of NAL packets that can contain various varieties of slices.
> Squishing together into a discrete frame is a problem.
>
> It seems to me that there are two different paradigms at work in live555.
> Modules that derive from Medium and modules that derive from MediaSource.
> One seems to clock out of env and the other out of session and they don't
> fit together with each other.
>
> I need RTSPClient to interface with the camera, which derives from Medium.
> It understands the RTSP Describe and Setup responses, but it only fits with
> its Filesinks.
I need H264DiscreteFramer filter because it understands how > to gather together NAL packets into frames. Unfortunately Discrete Framer > wants it's input from an input source that derives from MediaSource, not > Medium, which RTSPClient derives from. RTSPClient doesn't fit together > with H264DiscreteFramer! (clunk, clunk, me trying to squish them together). > > > > When things don't fit, I think that I'm missing something important. So > what I'm asking for is a clue. What am I missing? > > > > Mark > > _______________________________________________ > live-devel mailing list > live-devel at lists.live555.com > http://lists.live555.com/mailman/listinfo/live-devel > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From markb at virtualguard.com Mon Nov 3 14:36:37 2014 From: markb at virtualguard.com (Mark Bondurant) Date: Mon, 3 Nov 2014 22:36:37 +0000 Subject: [Live-devel] Buffering an H264 Video Stream In-Reply-To: References: <6555ad5ce84f48d189928894094ae3e1@VGI-EX1.vg.local> <708eef3d62f641b7ac0cbbbd8a3a74b6@VGI-EX1.vg.local> <782e1b3f8cbe4209ae3831669d1f7fe5@VGI-EX1.vg.local> Message-ID: <1bb41c8a63c14a2d968f0c98e74f0519@VGI-EX1.vg.local> Sorry. That wasn't clear. Yes, FramedSource derives from MediaSource, which derives from Medium. RTSPClient derives from Medium. In the example program you have the ourRTSPClient with a StreamClientState object attached. You "strobe" the session object to cause the client to pump frames through. I suppose in my case, I would have a session a camera. In testH264VideoToTransportStream you have a FramedSource file input, ByteStreamFileSource, which you pass in to H264VideoStreamFramer as an argument within the context of a session object. Looking at these two pieces together, I see what I need, but I also see two disparate paradigms. The framer wants a FramedSource and RTSPClient presents a Media source. RTSPClient understands RTSP, the framer understands H264. 
But they don't fit together in any way I can see. I see a Framed paradigm and a Media paradigm. From: live-devel [mailto:live-devel-bounces at ns.live555.com] On Behalf Of Ross Finlayson Sent: Monday, November 03, 2014 1:38 PM To: LIVE555 Streaming Media - development & use Subject: Re: [Live-devel] Buffering an H264 Video Stream I need to keep a constant 3 second buffer of an H264 video stream. It's for security cameras. When something trips the camera, I replay the 3 seconds and then 6 more to see the event (One hopes I'll catch some ghosts or mountain lions, but really it's to catch car thieves!). With mpeg it was easy because mpeg has discrete frames, but the much better definition h264 doesn't. I mean it does, but they're spread out over an indefinite series of NAL packets that can contain various varieties of slices. Squishing together into a discrete frame is a problem. It seems to me that there are two different paradigms at work in live555. Modules that derive from Medium and modules that derive from MediaSource. No, this is completely wrong. (Note that ?MediaSource? is a subclass of ?Medium?.) I suggest that you begin by reviewing the ?testRTSPClient? demo application (in the ?testProgs? directory). Note, in particular, the ?DummySink? object that receives ?frames? (for H.264 video, each ?frame? will actually be a NAL unit). Note the code for ?DummyRTPSink::afterGettingFrame()? ("testRTSPClient.cpp?, lines 500-521). For your own RTSP client application, you could write your own ?sink? class (a subclass of ?MediaSink?) that receives H.264 NAL units, and processes them however you want. Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... 
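The custom "sink" approach Ross recommends - a class that is handed one H.264 NAL unit at a time - can be modeled in isolation like this. This is a hypothetical simplification for illustration, not the real live555 MediaSink API; an actual subclass would override continuePlaying() and receive each NAL unit via an afterGettingFrame() callback:

```cpp
#include <cstdint>
#include <deque>
#include <utility>
#include <vector>

// Standalone model of a rolling-window sink: it keeps only the NAL units
// whose timestamps fall within the last `windowMicros` microseconds,
// which is the "constant 3 second buffer" described in the thread.
class RollingNalSink {
public:
  explicit RollingNalSink(int64_t windowMicros) : window_(windowMicros) {}

  // Called once per incoming NAL unit (stands in for the per-"frame"
  // delivery a real MediaSink subclass would get).
  void onNalUnit(int64_t ptsMicros, std::vector<uint8_t> nal) {
    buf_.push_back(Unit{ptsMicros, std::move(nal)});
    // Drop units that have fallen out of the rolling window.
    while (!buf_.empty() && ptsMicros - buf_.front().pts > window_) {
      buf_.pop_front();
    }
  }

  std::size_t buffered() const { return buf_.size(); }

private:
  struct Unit { int64_t pts; std::vector<uint8_t> nal; };
  int64_t window_;
  std::deque<Unit> buf_;
};
```

When the camera trips, the application would snapshot the buffered units (starting from the most recent keyframe, so the clip is decodable) and keep appending until the post-event interval has elapsed.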
URL: From markb at virtualguard.com Mon Nov 3 14:40:18 2014 From: markb at virtualguard.com (Mark Bondurant) Date: Mon, 3 Nov 2014 22:40:18 +0000 Subject: [Live-devel] Buffering an H264 Video Stream In-Reply-To: References: <6555ad5ce84f48d189928894094ae3e1@VGI-EX1.vg.local> <708eef3d62f641b7ac0cbbbd8a3a74b6@VGI-EX1.vg.local> <782e1b3f8cbe4209ae3831669d1f7fe5@VGI-EX1.vg.local> Message-ID: <0542bf2bb4464beea0590b6825eea4d0@VGI-EX1.vg.local> You need to create a filter and insert it into the chain. I had this exact scenario and what I had was my own filter that handled the incoming frames. All my frames were small POD classes with a bit of meta data and a buffer holding the frame. I had a pool of these of different sizes and they were boost reference counted to save on memory and copying overhead as they went to disk or HTTP live Streaming or out the HTTP interface or to the browser plugin. I may be mistaken, but I think the Framer filters do just this. I then had a GOP buffer. Each GOP contained the reference to one keyframe and a list of references to all the difference frames. Then I had a adjustable buffer of gops. 2 -3 gops at a time max were fully decoded or partially decoded to allow forward and backward play and fast forward on key frame only depending on if it was live or recorded or cached playback. Now, here you've lost me. I don't know what a GOP is. I don't need forward or backward. I just need to spit out a discrete autonomous ten second clip. Someone else will play it. On Mon, Nov 3, 2014 at 4:02 PM, Mark Bondurant > wrote: Hello, Sorry if this is a repeat, but I/we have constant email problems (political issues), which I'm fairly sure I've now found a workaround for. What I'm saying is that this may be a repeat. If it is, I apologize, I didn't get your responses. I need to keep a constant 3 second buffer of an H264 video stream. It's for security cameras. 
When something trips the camera, I replay the 3 seconds and then 6 more to see the event (One hopes I'll catch some ghosts or mountain lions, but really it's to catch car thieves!). With mpeg it was easy because mpeg has discrete frames, but the much better definition h264 doesn't. I mean it does, but they're spread out over an indefinite series of NAL packets that can contain various varieties of slices. Squishing together into a discrete frame is a problem. It seems to me that there are two different paradigms at work in live555. Modules that derive from Medium and modules that derive from MediaSource. One seems to clock out of env and the other out of session and they don't fit together with each other. I need RTSPClient to interface with the camera, which derives from Medium. It understands the RTSP Describe and Setup responses, but it only fits with its Filesinks. I need H264DiscreteFramer filter because it understands how to gather together NAL packets into frames. Unfortunately Discrete Framer wants it's input from an input source that derives from MediaSource, not Medium, which RTSPClient derives from. RTSPClient doesn't fit together with H264DiscreteFramer! (clunk, clunk, me trying to squish them together). When things don't fit, I think that I'm missing something important. So what I'm asking for is a clue. What am I missing? Mark _______________________________________________ live-devel mailing list live-devel at lists.live555.com http://lists.live555.com/mailman/listinfo/live-devel -------------- next part -------------- An HTML attachment was scrubbed... 
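For reference, a GOP ("group of pictures") is one keyframe plus all the difference frames that depend on it, up to the next keyframe. The bounded GOP buffer Jeff describes might be sketched standalone like this (illustrative types, not live555 API):

```cpp
#include <cstddef>
#include <deque>
#include <vector>

// One GOP: frames[0] is the keyframe, the rest are difference frames
// that depend on it. Frame ids stand in for real frame references.
struct Gop {
  std::vector<int> frames;
};

// Bounded history of GOPs: each arriving keyframe opens a new GOP, and
// the oldest GOP is discarded once the bound is exceeded.
class GopBuffer {
public:
  explicit GopBuffer(std::size_t maxGops) : maxGops_(maxGops) {}

  void addFrame(int frameId, bool isKeyframe) {
    if (isKeyframe) {
      gops_.push_back(Gop{});                          // start a new GOP
      if (gops_.size() > maxGops_) gops_.pop_front();  // bound the history
    }
    if (!gops_.empty()) gops_.back().frames.push_back(frameId);
    // Difference frames arriving before any keyframe are undecodable
    // on their own, so they are simply dropped.
  }

  std::size_t gopCount() const { return gops_.size(); }

private:
  std::size_t maxGops_;
  std::deque<Gop> gops_;
};
```

Discarding whole GOPs (rather than individual frames) keeps everything in the buffer decodable, since a difference frame is useless without its keyframe.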
URL: 

From finlayson at live555.com Mon Nov 3 15:03:53 2014
From: finlayson at live555.com (Ross Finlayson)
Date: Mon, 3 Nov 2014 15:03:53 -0800
Subject: [Live-devel] Buffering an H264 Video Stream
In-Reply-To: <1bb41c8a63c14a2d968f0c98e74f0519@VGI-EX1.vg.local>
References: <6555ad5ce84f48d189928894094ae3e1@VGI-EX1.vg.local>
	<708eef3d62f641b7ac0cbbbd8a3a74b6@VGI-EX1.vg.local>
	<782e1b3f8cbe4209ae3831669d1f7fe5@VGI-EX1.vg.local>
	<1bb41c8a63c14a2d968f0c98e74f0519@VGI-EX1.vg.local>
Message-ID: 

> In the example program you have the ourRTSPClient with a StreamClientState object attached. You "strobe" the session object to cause the client to pump frames through.

I don't know what you mean here. I don't use the word "strobe" anywhere in the code or documentation. Please stop making up your own terminology to describe our code!

> Looking at these two pieces together, I see what I need, but I also see two disparate paradigms. The framer wants a FramedSource and RTSPClient presents a Media source. RTSPClient understands RTSP, the framer understands H264. But they don't fit together in any way I can see. I see a Framed paradigm and a Media paradigm.

You need to completely discard whatever view (or "paradigm") that you have of the code, because it's at best confusing, and at worst completely wrong!

Instead, start thinking of the code this way (which corresponds to what it actually does!):
----------
Your RTSP client application (again, I recommend using "testRTSPClient" as a model) would consist of a "MediaSink" object (in the "testRTSPClient" code, this is called "DummySink"), which receives data from a "FramedSource" object (in the "testRTSPClient" code, this is "scs.subsession->readSource()"). The application starts the flow of data by calling "startPlaying()" on the "MediaSink" object. (This is done just once.)

The "MediaSink" object receives one "frame" at a time. (For H.264 video, this "frame" is actually a H.264 NAL unit; not a complete "picture".) Your "MediaSink" subclass would do whatever processing that you want on these incoming H.264 NAL units. Once again, I suggest that you review the code for "DummySink::afterGettingFrame()" ("testRTSPClient.cpp", lines 500-521). In your own application, you would rewrite this code to do whatever you want to the incoming NAL units.
----------

(And yes, as Jeff Shanab implied, you could write a "FramedFilter" subclass, and insert an object of this subclass in front of the "scs.subsession->readSource()", and then modify your call to "MediaSink::startPlaying()" accordingly. But before you start playing around with "FramedFilter"s, you first need to understand the basic "testRTSPClient" application code - and, right now, you seem to be way away from this.)

If you don't want to use the "testRTSPClient" code as a model for your own application, then that's your choice - but then you can't expect much help (at least, not for free) on this mailing list.

Ross Finlayson
Live Networks, Inc.
http://www.live555.com/

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From finlayson at live555.com Mon Nov 3 15:35:00 2014
From: finlayson at live555.com (Ross Finlayson)
Date: Mon, 3 Nov 2014 15:35:00 -0800
Subject: [Live-devel] Buffering an H264 Video Stream
In-Reply-To: <1bb41c8a63c14a2d968f0c98e74f0519@VGI-EX1.vg.local>
References: <6555ad5ce84f48d189928894094ae3e1@VGI-EX1.vg.local>
	<708eef3d62f641b7ac0cbbbd8a3a74b6@VGI-EX1.vg.local>
	<782e1b3f8cbe4209ae3831669d1f7fe5@VGI-EX1.vg.local>
	<1bb41c8a63c14a2d968f0c98e74f0519@VGI-EX1.vg.local>
Message-ID: <00115A6A-D5E6-43BB-94B4-1523AD4046D1@live555.com>

Also, part of the problem here, I think, is that you seem to be confused by what the class "H264VideoStreamFramer" does. This class takes as input an unstructured H.264 byte stream, and parses it into discrete H.264 NAL units. It *does not* combine multiple H.264 NAL units into a single "access unit" (i.e., picture).
If you want to do this (or any other processing on the incoming H.264 NAL units), then you'll need to do this yourself. The LIVE555 libraries do not contain any video "codec" (i.e., decoding or encoding) functionality. Ross Finlayson Live Networks, Inc. http://www.live555.com/ From jshanab at jfs-tech.com Mon Nov 3 15:35:39 2014 From: jshanab at jfs-tech.com (Jeff Shanab) Date: Mon, 3 Nov 2014 18:35:39 -0500 Subject: [Live-devel] Buffering an H264 Video Stream In-Reply-To: References: <6555ad5ce84f48d189928894094ae3e1@VGI-EX1.vg.local> <708eef3d62f641b7ac0cbbbd8a3a74b6@VGI-EX1.vg.local> <782e1b3f8cbe4209ae3831669d1f7fe5@VGI-EX1.vg.local> <1bb41c8a63c14a2d968f0c98e74f0519@VGI-EX1.vg.local> Message-ID: Understanding the RTSPClient code is the first requirement! It succinctly and completely shows the minimum needed. "Strobe" is an interesting word, but I get what he means: it is a pull system. The FAQ has a great explanation of this. A getNextFrame() call goes to the source on its left (with the provided callback), which calls the source on its left, and so on. The source on the far left then calls afterGettingFrame() and invokes the client callback with the data, which calls its neighbor on the right, until it hits the sink on the far right. At that point getNextFrame() must be called again to keep the process going. On Mon, Nov 3, 2014 at 6:03 PM, Ross Finlayson wrote: > In the example program you have the ourRTSPClient with a StreamClientState > object attached. You "strobe" the session object to cause the client to > pump frames through. > > > I don't know what you mean here. I don't use the word "strobe" anywhere > in the code or documentation. Please stop making up your own terminology > to describe our code! > > > Looking at these two pieces together, I see what I need, but I also see > two disparate paradigms. The framer wants a FramedSource and RTSPClient > presents a Media source.
RTSPClient understands RTSP, the framer > understands H264. But they don't fit together in any way I can see. I see a > Framed paradigm and a Media paradigm. > > > You need to completely discard whatever view (or "paradigm") that you have > of the code, because it's at best confusing, and at worst completely wrong! > > Instead, start thinking of the code this way (which corresponds to what it > actually does!): > ---------- > Your RTSP client application (again, I recommend using "testRTSPClient" as > a model) would consist of a "MediaSink" object (in the "testRTSPClient" > code, this is called "DummySink"), which receives data from a > "FramedSource" object (in the "testRTSPClient" code, this is > "scs.subsession->readSource()"). The application starts the flow of data > by calling "startPlaying()" on the "MediaSink" object. (This is done just > once.) > > The "MediaSink" object receives one "frame" at a time. (For H.264 video, > this "frame" is actually an H.264 NAL unit; not a complete "picture".) Your > "MediaSink" subclass would do whatever processing that you want on these > incoming H.264 NAL units. Once again, I suggest that you review the code > for "DummySink::afterGettingFrame()" ("testRTSPClient.cpp", lines > 500-521). In your own application, you would rewrite this code to do > whatever you want to the incoming NAL units. > ---------- > > (And yes, as Jeff Shanab implied, you could write a "FramedFilter" > subclass, and insert an object of this subclass in front of the > "scs.subsession->readSource()", and then modify your call to > "MediaSink::startPlaying()" accordingly. But before you start playing > around with "FramedFilter"s, you first need to understand the basic > "testRTSPClient" application code - and, right now, you seem to be a long way away > from this.) > > If you don't want to use the "testRTSPClient"
code as a model for your own > application, then that's your choice - but then you can't expect much help > (at least, not for free) on this mailing list. > > > Ross Finlayson > Live Networks, Inc. > http://www.live555.com/ > > > _______________________________________________ > live-devel mailing list > live-devel at lists.live555.com > http://lists.live555.com/mailman/listinfo/live-devel > > From markb at virtualguard.com Mon Nov 3 15:52:30 2014 From: markb at virtualguard.com (Mark Bondurant) Date: Mon, 3 Nov 2014 23:52:30 +0000 Subject: [Live-devel] Buffering an H264 Video Stream In-Reply-To: <00115A6A-D5E6-43BB-94B4-1523AD4046D1@live555.com> References: <6555ad5ce84f48d189928894094ae3e1@VGI-EX1.vg.local> <708eef3d62f641b7ac0cbbbd8a3a74b6@VGI-EX1.vg.local> <782e1b3f8cbe4209ae3831669d1f7fe5@VGI-EX1.vg.local> <1bb41c8a63c14a2d968f0c98e74f0519@VGI-EX1.vg.local> Message-ID: <89fcf4d5f501486385c4e0399b0b969b@VGI-EX1.vg.local> As you surely must know, I'm a noob thrust unwillingly by circumstances into this. This is helpful. I don't need frames, just ten seconds of stream. But doesn't H264 have a definite beginning, with the following NAL packets updating the initial packet? Isn't that what all those predictive slices and three-dimensional compression and such do? I don't think I can just jump into the middle of the stream, start grabbing packets, and still have a usable stream, which is why I was thinking about converting it to MPEG or something. From: live-devel [mailto:live-devel-bounces at ns.live555.com] On Behalf Of Ross Finlayson Sent: Monday, November 03, 2014 3:35 PM To: LIVE555 Streaming Media - development & use Subject: Re: [Live-devel] Buffering an H264 Video Stream Also, part of the problem here, I think, is that you seem to be confused by what the class "H264VideoStreamFramer" does.
This class takes as input an unstructured H.264 byte stream, and parses it into discrete H.264 NAL units. It *does not* combine multiple H.264 NAL units into a single "access unit" (i.e., picture). If you want to do this (or any other processing on the incoming H.264 NAL units), then you'll need to do this yourself. The LIVE555 libraries do not contain any video "codec" (i.e., decoding or encoding) functionality. Ross Finlayson Live Networks, Inc. http://www.live555.com/ From jshanab at jfs-tech.com Mon Nov 3 16:46:27 2014 From: jshanab at jfs-tech.com (Jeff Shanab) Date: Mon, 3 Nov 2014 19:46:27 -0500 Subject: [Live-devel] Buffering an H264 Video Stream In-Reply-To: <89fcf4d5f501486385c4e0399b0b969b@VGI-EX1.vg.local> References: <6555ad5ce84f48d189928894094ae3e1@VGI-EX1.vg.local> <708eef3d62f641b7ac0cbbbd8a3a74b6@VGI-EX1.vg.local> <782e1b3f8cbe4209ae3831669d1f7fe5@VGI-EX1.vg.local> <1bb41c8a63c14a2d968f0c98e74f0519@VGI-EX1.vg.local> <00115A6A-D5E6-43BB-94B4-1523AD4046D1@live555.com> <89fcf4d5f501486385c4e0399b0b969b@VGI-EX1.vg.local> Message-ID: Security cameras keep it simple. They will not usually have bidirectionally predictive frames, as that generally takes a two-pass encoder and adds latency. Either way, that is not the concern at the streaming level, although I think live555 will reorder frames if need be. Remember that this is an RTSP stream, probably unlike the MJPEG you are used to. Wikipedia has a good explanation of the basic RTSP conversation. Live555 implements this protocol (both as server and as client). Look at RTSPClient.cpp and playCommon.cpp to understand the client. The necessary steps start up the conversation, with the response from each step responsible for sending the next one:
OPTIONS DESCRIBE SETUP PLAY

You will call getNextFrame() and live555 will listen on the socket, collect the data into frames, handle all the lower-level protocol assembly, and pass only complete NAL units to your afterGetting function, complete with presentation timestamps. At that point you need to collect the frames into your buffer for decoding and display, and call getNextFrame() again to keep everything moving. Oversimplified...

class GOP {
public:
  uint8* m_keyFrame;
  uint8** m_diffFrames;
  int framecount;
  ~GOP() { /* delete all allocated buffers */ }
};

So let's say you have a simple producer/consumer ring buffer holding pointers to GOP instances. You maintain a state machine watching the transitions of NAL unit types and fill in the GOPs with the bytes for each frame. Each GOP now represents 1 second of video. When the current GOP is full you (1) put a pointer to it into the ring buffer, (2) get the pointer to the next space in the buffer and reallocate and fill it in, and so on. You always have 10 seconds' worth of video available; you just copy them out or write them out, etc.

NAL transitions: 7 -> 8: start frame, push 7 into keyframe buffer; 8 -> 5: push 8 into keyframe buffer; 5 -> 5: push 5 into keyframe buffer; 5 -> 1: push 5 into keyframe buffer; 1 -> 1: push 1 into diff frame buffer and increment counter; 1 -> 7 (or 1 -> 5): push 1 into the last diff frame, set the counter to its final value, and put the GOP into the ring buffer. Start a new GOP. Repeat. Get it?

On Mon, Nov 3, 2014 at 6:52 PM, Mark Bondurant wrote: > As you surely must know, I'm a noob thrust unwillingly by circumstances > into this. > > > > This is helpful. I don't need frames, just ten seconds of stream. But > doesn't H264 have a definite beginning, with the following NAL packets > updating the initial packet? Isn't that what all those predictive slices and > three-dimensional compression and such do?
I don't think I can just jump > into the middle of the stream, start grabbing packets, and still have a > usable stream, which is why I was thinking about converting it to MPEG or > something. > > > > > > *From:* live-devel [mailto:live-devel-bounces at ns.live555.com] *On Behalf > Of *Ross Finlayson > *Sent:* Monday, November 03, 2014 3:35 PM > *To:* LIVE555 Streaming Media - development & use > *Subject:* Re: [Live-devel] Buffering an H264 Video Stream > > > > Also, part of the problem here, I think, is that you seem to be confused > by what the class "H264VideoStreamFramer" does. This class takes as input > an unstructured H.264 byte stream, and parses it into discrete H.264 NAL > units. It *does not* combine multiple H.264 NAL units into a single > "access unit" (i.e., picture). If you want to do this (or any other > processing on the incoming H.264 NAL units), then you'll need to do this > yourself. The LIVE555 libraries do not contain any video "codec" (i.e., > decoding or encoding) functionality. > > Ross Finlayson > Live Networks, Inc. > http://www.live555.com/ > > From markb at virtualguard.com Tue Nov 4 11:04:23 2014 From: markb at virtualguard.com (Mark Bondurant) Date: Tue, 4 Nov 2014 19:04:23 +0000 Subject: [Live-devel] Buffering an H264 Video Stream In-Reply-To: References: <6555ad5ce84f48d189928894094ae3e1@VGI-EX1.vg.local> <708eef3d62f641b7ac0cbbbd8a3a74b6@VGI-EX1.vg.local> <782e1b3f8cbe4209ae3831669d1f7fe5@VGI-EX1.vg.local> <1bb41c8a63c14a2d968f0c98e74f0519@VGI-EX1.vg.local> <00115A6A-D5E6-43BB-94B4-1523AD4046D1@live555.com> <89fcf4d5f501486385c4e0399b0b969b@VGI-EX1.vg.local> Message-ID: <7b04046c6c6245e38792926b52f0a591@VGI-EX1.vg.local> Sorry for the absence, but I have about a dozen hats to wear.
From: live-devel [mailto:live-devel-bounces at ns.live555.com] On Behalf Of Jeff Shanab Sent: Monday, November 03, 2014 4:46 PM To: LIVE555 Streaming Media - development & use Subject: Re: [Live-devel] Buffering an H264 Video Stream Security cameras keep it simple. They will not usually have bidirectionally predictive frames, as that generally takes a two-pass encoder and adds latency. Either way, that is not the concern at the streaming level, although I think live555 will reorder frames if need be. Remember that this is an RTSP stream, probably unlike the MJPEG you are used to. Wikipedia has a good explanation of the basic RTSP conversation. Live555 implements this protocol (both as server and as client). Read that (the sound of the breeze blowing by over my head), bought a book and read it too. Not helpful in regard to live555 though, which is more a matter of implementation details. Did read the FAQs, although apparently it did me no good. Look at RTSPClient.cpp and playCommon.cpp to understand the client. The necessary steps start up the conversation, with the response from each step responsible for sending the next one: OPTIONS DESCRIBE SETUP PLAY. You will call getNextFrame() and live555 will listen on the socket, collect the data into frames, handle all the lower-level protocol assembly, and pass only complete NAL units to your afterGetting function, complete with presentation timestamps. So you're saying that, because of the brain-dead nature of security cameras, I can gather NAL packets into a clip starting at any point in the stream and expect that clip to play? I don't have to worry about boundaries beyond that? At that point you need to collect the frames into your buffer for decoding and display, and call getNextFrame() again to keep everything moving. Oversimplified...
class GOP {
public:
  uint8* m_keyFrame;
  uint8** m_diffFrames;
  int framecount;
  ~GOP() { /* delete all allocated buffers */ }
};

So let's say you have a simple producer/consumer ring buffer holding pointers to GOP instances. A 10-second linked list perhaps, dropping from the end, adding to the beginning? You maintain a state machine watching the transitions of NAL unit types and fill in the GOPs with the bytes for each frame. Each GOP now represents 1 second of video. Ah, so there is a boundary in the NALs that I need to watch for: that initial frame that the following packets are modifying. You mentioned this earlier: [7][8][5][1][1][1][1][1][1][1][1].... When the current GOP is full you (1) put a pointer to it into the ring buffer, (2) get the pointer to the next space in the buffer and reallocate and fill it in, and so on. You always have 10 seconds' worth of video available; you just copy them out or write them out, etc. NAL transitions: 7 -> 8: start frame, push 7 into keyframe buffer; 8 -> 5: push 8 into keyframe buffer; 5 -> 5: push 5 into keyframe buffer; 5 -> 1: push 5 into keyframe buffer; 1 -> 1: push 1 into diff frame buffer and increment counter; 1 -> 7 (or 1 -> 5): push 1 into the last diff frame, set the counter to its final value, and put the GOP into the ring buffer. Start a new GOP. Repeat. Get it? 7, 8, 5, 1: are these actual identifiers, like a NAL packet type? On Mon, Nov 3, 2014 at 6:52 PM, Mark Bondurant > wrote: As you surely must know, I'm a noob thrust unwillingly by circumstances into this. This is helpful. I don't need frames, just ten seconds of stream. But doesn't H264 have a definite beginning, with the following NAL packets updating the initial packet? Isn't that what all those predictive slices and three-dimensional compression and such do? I don't think I can just jump into the middle of the stream, start grabbing packets, and still have a usable stream, which is why I was thinking about converting it to MPEG or something.
From: live-devel [mailto:live-devel-bounces at ns.live555.com] On Behalf Of Ross Finlayson Sent: Monday, November 03, 2014 3:35 PM To: LIVE555 Streaming Media - development & use Subject: Re: [Live-devel] Buffering an H264 Video Stream Also, part of the problem here, I think, is that you seem to be confused by what the class "H264VideoStreamFramer" does. This class takes as input an unstructured H.264 byte stream, and parses it into discrete H.264 NAL units. It *does not* combine multiple H.264 NAL units into a single "access unit" (i.e., picture). If you want to do this (or any other processing on the incoming H.264 NAL units), then you'll need to do this yourself. The LIVE555 libraries do not contain any video "codec" (i.e., decoding or encoding) functionality. Ross Finlayson Live Networks, Inc. http://www.live555.com/ From markb at virtualguard.com Tue Nov 4 11:16:23 2014 From: markb at virtualguard.com (Mark Bondurant) Date: Tue, 4 Nov 2014 19:16:23 +0000 Subject: [Live-devel] Buffering an H264 Video Stream In-Reply-To: <00115A6A-D5E6-43BB-94B4-1523AD4046D1@live555.com> References: <6555ad5ce84f48d189928894094ae3e1@VGI-EX1.vg.local> <708eef3d62f641b7ac0cbbbd8a3a74b6@VGI-EX1.vg.local> <782e1b3f8cbe4209ae3831669d1f7fe5@VGI-EX1.vg.local> <1bb41c8a63c14a2d968f0c98e74f0519@VGI-EX1.vg.local> <00115A6A-D5E6-43BB-94B4-1523AD4046D1@live555.com> Message-ID: <0ad348a0997b4056a2b3ee87a9c12656@VGI-EX1.vg.local> I get this part now, but what I still don't get is how to pass incoming data from RTSPClient to H264VideoStreamFramer. The factory function for it is: H264VideoStreamDiscreteFramer::createNew(UsageEnvironment& env, FramedSource* inputSource) { A FramedSource input, but RTSPClient is not a FramedSource. I don't see any framed source in it.
From: live-devel [mailto:live-devel-bounces at ns.live555.com] On Behalf Of Ross Finlayson Sent: Monday, November 03, 2014 3:35 PM To: LIVE555 Streaming Media - development & use Subject: Re: [Live-devel] Buffering an H264 Video Stream Also, part of the problem here, I think, is that you seem to be confused by what the class "H264VideoStreamFramer" does. This class takes as input an unstructured H.264 byte stream, and parses it into discrete H.264 NAL units. It *does not* combine multiple H.264 NAL units into a single "access unit" (i.e., picture). If you want to do this (or any other processing on the incoming H.264 NAL units), then you'll need to do this yourself. The LIVE555 libraries do not contain any video "codec" (i.e., decoding or encoding) functionality. Ross Finlayson Live Networks, Inc. http://www.live555.com/ From markb at virtualguard.com Tue Nov 4 12:00:35 2014 From: markb at virtualguard.com (Mark Bondurant) Date: Tue, 4 Nov 2014 20:00:35 +0000 Subject: [Live-devel] Buffering an H264 Video Stream In-Reply-To: <0ad348a0997b4056a2b3ee87a9c12656@VGI-EX1.vg.local> References: <6555ad5ce84f48d189928894094ae3e1@VGI-EX1.vg.local> <708eef3d62f641b7ac0cbbbd8a3a74b6@VGI-EX1.vg.local> <782e1b3f8cbe4209ae3831669d1f7fe5@VGI-EX1.vg.local> <1bb41c8a63c14a2d968f0c98e74f0519@VGI-EX1.vg.local> <00115A6A-D5E6-43BB-94B4-1523AD4046D1@live555.com> <0ad348a0997b4056a2b3ee87a9c12656@VGI-EX1.vg.local> Message-ID: I'm sure this is a lame question, as you are all familiar with live555 and this seems everyday-simple, but to me it seems somewhat insurmountable. What makes a framed source a framed source? If it's a FramedSource, then why does it need a framer? Perhaps it's more of a "frame-able source", and an H264 RTSP stream isn't one? But if it isn't, why does everyone say it's simple to do?
From: live-devel [mailto:live-devel-bounces at ns.live555.com] On Behalf Of Mark Bondurant Sent: Tuesday, November 04, 2014 11:16 AM To: 'LIVE555 Streaming Media - development & use' Subject: Re: [Live-devel] Buffering an H264 Video Stream I get this part now, but what I still don't get is how to pass incoming data from RTSPClient to H264VideoStreamFramer. The factory function for it is: H264VideoStreamDiscreteFramer::createNew(UsageEnvironment& env, FramedSource* inputSource) { A FramedSource input, but RTSPClient is not a FramedSource. I don't see any framed source in it. From: live-devel [mailto:live-devel-bounces at ns.live555.com] On Behalf Of Ross Finlayson Sent: Monday, November 03, 2014 3:35 PM To: LIVE555 Streaming Media - development & use Subject: Re: [Live-devel] Buffering an H264 Video Stream Also, part of the problem here, I think, is that you seem to be confused by what the class "H264VideoStreamFramer" does. This class takes as input an unstructured H.264 byte stream, and parses it into discrete H.264 NAL units. It *does not* combine multiple H.264 NAL units into a single "access unit" (i.e., picture). If you want to do this (or any other processing on the incoming H.264 NAL units), then you'll need to do this yourself. The LIVE555 libraries do not contain any video "codec" (i.e., decoding or encoding) functionality. Ross Finlayson Live Networks, Inc. http://www.live555.com/
From chris at gotowti.com Tue Nov 4 12:14:21 2014 From: chris at gotowti.com (Chris Richardson (WTI)) Date: Tue, 4 Nov 2014 12:14:21 -0800 Subject: [Live-devel] Buffering an H264 Video Stream In-Reply-To: References: <6555ad5ce84f48d189928894094ae3e1@VGI-EX1.vg.local> <708eef3d62f641b7ac0cbbbd8a3a74b6@VGI-EX1.vg.local> <782e1b3f8cbe4209ae3831669d1f7fe5@VGI-EX1.vg.local> <1bb41c8a63c14a2d968f0c98e74f0519@VGI-EX1.vg.local> <00115A6A-D5E6-43BB-94B4-1523AD4046D1@live555.com> <0ad348a0997b4056a2b3ee87a9c12656@VGI-EX1.vg.local> Message-ID: <00c101cff86b$eca18070$c5e48150$@com> Hey Mark, I think the main point of confusion here is the difference between LIVE555 server-side concepts and client-side concepts. You are trying to write a client, and normally on the client side you wouldn't be using H264VideoStreamDiscreteFramer, because that is a *server-side* concept. What you want to do, as Ross suggested, is discard your current thoughts about using H264VideoStreamDiscreteFramer. You want to look at making a subclass of MediaSink (perhaps call it "MemorySink" or something). This class could be based on the DummySink class Ross mentioned. The job of this class would be to take incoming H.264 NAL units (given to you straight from LIVE555) and buffer them, as suggested by Jeff. Once you have something running that receives data from the LIVE555 client-side stack, then you can look into saving the most recent IDR NALs and all the other features you want to implement. Chris Richardson WTI
From finlayson at live555.com Tue Nov 4 12:19:05 2014 From: finlayson at live555.com (Ross Finlayson) Date: Tue, 4 Nov 2014 12:19:05 -0800 Subject: [Live-devel] Buffering an H264 Video Stream In-Reply-To: <0ad348a0997b4056a2b3ee87a9c12656@VGI-EX1.vg.local> References: <6555ad5ce84f48d189928894094ae3e1@VGI-EX1.vg.local> <708eef3d62f641b7ac0cbbbd8a3a74b6@VGI-EX1.vg.local> <782e1b3f8cbe4209ae3831669d1f7fe5@VGI-EX1.vg.local> <1bb41c8a63c14a2d968f0c98e74f0519@VGI-EX1.vg.local> <00115A6A-D5E6-43BB-94B4-1523AD4046D1@live555.com> <0ad348a0997b4056a2b3ee87a9c12656@VGI-EX1.vg.local> Message-ID: <13908EFD-B83D-448E-AB35-935FE9BD2860@live555.com> > I get this part now, but what I still don't get is how to pass incoming data from RTSPClient to H264VideoStreamFramer. The factory function for it is: > > H264VideoStreamDiscreteFramer::createNew(UsageEnvironment& env, FramedSource* inputSource) { > > A FramedSource input, but RTSPClient is not a FramedSource. I don't see any framed source in it. Several times now, I have pointed you at code ("testRTSPClient.cpp") that illustrates how to do this. If you look at this code, you'll see that you don't feed a "RTSPClient" into anything. Instead, you use a "RTSPClient" to create a "MediaSession" object (which in turn will contain "MediaSubsession" objects - one for each audio or video track in the stream). Once you've used the "RTSPClient" to create these objects (using the RTSP protocol, as illustrated, yet again :-(, by the "testRTSPClient" application code), you then feed each "MediaSubsession::readSource()" object into whatever you want. So, you *could*, in principle, feed the "MediaSubsession::readSource()" object into a "H264VideoStreamDiscreteFramer" (*not* into a "H264VideoStreamFramer", because that's for unstructured byte-stream input). I'm not sure why you'd want to do this, though. You'd need to do this *only* if you plan to re-transmit the H.264 video (by feeding it into a "H264VideoRTPSink").
> I'm sure this is a lame question, as you are all familiar with live555 and this seems everyday-simple, but to me it seems somewhat insurmountable. What makes a framed source a framed source? A "FramedSource" is simply an object that delivers chunks of data (which we call "frames" in the code). > If it's a FramedSource, then why does it need a framer? It usually doesn't. I don't know where you got the idea that you need to feed your incoming H.264 video data into a "Framer". As I noted above, you would do this only if you plan to re-transmit the incoming H.264 video NAL units. But in any case, this thread has gone on long enough. (Note that this mailing list has more than 2000 members.) So this will be my - and your - last posting on this subject. Ross Finlayson Live Networks, Inc. http://www.live555.com/ From grom86 at mail.ru Tue Nov 4 04:02:21 2014 From: grom86 at mail.ru (=?UTF-8?B?bWludXM=?=) Date: Tue, 04 Nov 2014 15:02:21 +0300 Subject: [Live-devel] Video delay on the new connection to the streamer Message-ID: <1415102541.405766293@f258.i.mail.ru> Using the LIVE555 library as a streamer in unicast UDP mode, I found that if I'm using video and audio tracks and somebody else makes a new connection during playback, the client with the previous connection receives video frames with about a 500 ms delay. Opening new connections several times, the first client can build up to 4 seconds of video delay if the media is played strictly by the received timestamp values.
From K.Vikhorev at liverpool.ac.uk Wed Nov 5 01:31:11 2014 From: K.Vikhorev at liverpool.ac.uk (Vikhorev, Konstantin) Date: Wed, 5 Nov 2014 09:31:11 +0000 Subject: [Live-devel] Concurrent OnDemandServerMediaSubsession to stream MPEG In-Reply-To: <77B97C90-7449-4559-A27E-1F189C9E2FDA@live555.com> References: <9C37CF9A21989A4BAB49C7BC56CF3E382D04D289@CHEXMBX1.livad.liv.ac.uk> <1107BDA2-D6CF-46C9-A6EC-8426C87C858E@live555.com> <9C37CF9A21989A4BAB49C7BC56CF3E382D04FCB5@CHEXMBX1.livad.liv.ac.uk> <77B97C90-7449-4559-A27E-1F189C9E2FDA@live555.com> Message-ID: <1029B14B-534B-4CA8-84B1-187E0D4A57A6@liverpool.ac.uk> Still can't find the problem. Everything seems to be set correctly, and the Live555 event loop is running on a separate thread. A quick debug showed me that everything works perfectly for one session until another client joins a concurrent media session. At that point the trigger-event handler for the first FrameSource stops being triggered when triggerEvent() is called; instead, the second FrameSource's trigger-event handler starts being triggered. Could it be a result of using Live555 static libraries? -- Best Regards, Konstantin On 27 Oct 2014, at 22:31, Ross Finlayson > wrote: I tried your suggestion today and the result is the same. I can only stream data to one session at a time. Could it be some pre-processor definition that I am missing? Also, sometimes when I try to play two streams at the same time I get the following exception: "RTCPInstance:: RTCPInstance error: totSessionBW parameter should not be zero!" This suggests that, somehow, "fEstimatedKbps" is getting set to zero. You might try running a debugger (such as "gdb") to test whether that member variable ever gets overwritten. You might have a "memory smash" (e.g., a buffer-overflow bug) somewhere in your code that's causing this to happen. Also, you should make sure that you never have more than one thread calling LIVE555 library code (except for the thread that calls "triggerEvent()"
- the only LIVE555 function that you may call from a separate thread). Ross Finlayson Live Networks, Inc. http://www.live555.com/ From finlayson at live555.com Wed Nov 5 05:13:54 2014 From: finlayson at live555.com (Ross Finlayson) Date: Wed, 5 Nov 2014 05:13:54 -0800 Subject: [Live-devel] Concurrent OnDemandServerMediaSubsession to stream MPEG In-Reply-To: <1029B14B-534B-4CA8-84B1-187E0D4A57A6@liverpool.ac.uk> References: <9C37CF9A21989A4BAB49C7BC56CF3E382D04D289@CHEXMBX1.livad.liv.ac.uk> <1107BDA2-D6CF-46C9-A6EC-8426C87C858E@live555.com> <9C37CF9A21989A4BAB49C7BC56CF3E382D04FCB5@CHEXMBX1.livad.liv.ac.uk> <77B97C90-7449-4559-A27E-1F189C9E2FDA@live555.com> <1029B14B-534B-4CA8-84B1-187E0D4A57A6@liverpool.ac.uk> Message-ID: <2BDB1DCB-43A2-4045-90EC-708E0C259137@live555.com> > Still can't find the problem. Everything seems to be set correctly, and the Live555 event loop is running on a separate thread. > > A quick debug showed me that everything works perfectly for one session until another client joins a concurrent media session. At that point the trigger-event handler for the first FrameSource stops being triggered when triggerEvent() is called; instead, the second FrameSource's trigger-event handler starts being triggered. Check the part of your code where you call "triggerEvent()" (from a separate thread). Make sure that the "clientData" parameter is a pointer to the correct "AnalyserSource" object (so that your "DeliverFrame0()" gets called with the correct "clientData" parameter, and thus "DeliverFrame()" gets called on the correct "AnalyserSource" object). Ross Finlayson Live Networks, Inc. http://www.live555.com/
From finlayson at live555.com Wed Nov 5 20:12:11 2014 From: finlayson at live555.com (Ross Finlayson) Date: Wed, 5 Nov 2014 20:12:11 -0800 Subject: [Live-devel] Video delay on the new connection to the streamer In-Reply-To: <1415102541.405766293@f258.i.mail.ru> References: <1415102541.405766293@f258.i.mail.ru> Message-ID: <6A76559F-1FC5-44C0-AD6E-4E9BF1A6AE40@live555.com> There's no known issue with our library code that could be causing the delays that you are seeing. In my experience, delays like this are almost always caused by resource limitations (e.g., network bandwidth and/or CPU) somewhere. Ross Finlayson Live Networks, Inc. http://www.live555.com/ From gotar at polanet.pl Wed Nov 5 01:41:43 2014 From: gotar at polanet.pl (Tomasz Pala) Date: Wed, 5 Nov 2014 10:41:43 +0100 Subject: [Live-devel] [patch] Authentication hiccups In-Reply-To: <83905EF8-E05C-4763-8284-A9C2959D80E8@live555.com> References: <20141101103953.GA24173@polanet.pl> <83905EF8-E05C-4763-8284-A9C2959D80E8@live555.com> Message-ID: <20141105094143.GA3467@polanet.pl> On Sat, Nov 01, 2014 at 14:15:20 -0700, Ross Finlayson wrote: > It seems, however, that there may be a bug in your RTSP server, > because I don't think this should be happening: > >> -> Sending request: PLAY >> <- RTSP/1.0 401 Unauthorized WWW-Authenticate: Digest >> -> Sending request: PLAY Authorization: Digest >> <- RTSP/1.0 455 Method Not Valid In This State > > I don't understand why your server would first send back a "401 > Unauthorized" response, but then, after the client followed up with > a valid "Authorization:" header, respond "455 Method Not Valid In > This State". I suggest filing a bug report for your server. I agree, this should not happen, as there is no reason to invalidate SETUP with a 401 in between.
My guess is that there is some error in the state-machine flow in the server, especially since it keeps my params (at least the start-time value I've set) but holds back playback. This is an HQVision HQ-DVR0401HD960, an importer-branded Hikvision OEM platform, so it's not officially supported and I don't even know the original device model to reference when filing a bug report. Well, I'm not even sure whether its firmware originates from Hikvision or maybe some other third party. Anyway, these devices are as popular as they are cheap, so this bug is probably widespread. It didn't pop up before, probably because people use the web interface (with its plugin - consider the number of Linux-only users) or some dedicated PSIA software, or are not aware that PSIA allows them to use RTSP directly. But even if this bug were fixed, the firmware upgrade procedure is not as straightforward as usual due to the huge number (dozens) of importer brands on the market, without their own web pages, without any support (warranty limited to replacing the device or a refund), so after all it's best to have a robust client implementation. >> One more suggestion if I might, that it would be nice to be able to >> forcefully disable Basic auth. > > No, I don't want to do this. If a server is foolish enough to > request Basic Authentication, then the client should use it. (Few I was thinking about preventing a MITM attacker from degrading auth to Basic. Currently any RTSP client is vulnerable to exposing full credentials in (almost) plain text, as there is no way to authenticate the server first. The only solution is to use a firewall with deep packet inspection to block Basic responses, which is way too hard to be widely usable. And if not available at runtime (which in turn requires client apps to follow), maybe an option to disable Basic entirely at build time?
-- Tomasz Pala

From finlayson at live555.com Thu Nov 6 17:35:24 2014 From: finlayson at live555.com (Ross Finlayson) Date: Thu, 6 Nov 2014 15:35:24 -1000 Subject: [Live-devel] [patch] Authentication hiccups In-Reply-To: <20141105094143.GA3467@polanet.pl> References: <20141101103953.GA24173@polanet.pl> <83905EF8-E05C-4763-8284-A9C2959D80E8@live555.com> <20141105094143.GA3467@polanet.pl> Message-ID: <66B50757-C698-4E31-910E-A75A87A772EF@live555.com>

> I was thinking about preventing MITM attacker degrading auth to Basic.
> Currently any RTSP client is vulnerable to exposing full credentials in
> plain-text (almost), as there is no way to authenticate server first.

That's a good point. I've just installed a new version (2014.11.07) of the "LIVE555 Streaming Media" software that adds a new method RTSPClient::disallowBasicAuthentication() that you can call on a "RTSPClient" object to disallow "basic" authentication if the server requests it.

Ross Finlayson Live Networks, Inc. http://www.live555.com/

-------------- next part -------------- An HTML attachment was scrubbed...

URL: From rajeshgupta1253 at gmail.com Wed Nov 5 23:16:40 2014 From: rajeshgupta1253 at gmail.com (rajesh gupta) Date: Thu, 6 Nov 2014 12:46:40 +0530 Subject: [Live-devel] streaming from webcam Message-ID:

I am new to the live555 API. I downloaded the live555MediaServer executable, but it only streams the files that are in the current directory; there is no way to stream data from a live source, i.e. the web camera on my laptop. Is there any compiled binary or VC++ source code so that I can stream the webcam video and audio using the H.264 and AAC codecs?

Regards Rajesh

-------------- next part -------------- An HTML attachment was scrubbed...
URL: From eric at sparkalley.com Fri Nov 7 14:02:51 2014 From: eric at sparkalley.com (Eric Blanpied) Date: Fri, 7 Nov 2014 14:02:51 -0800 Subject: [Live-devel] Building Mac RTSP Client Application Message-ID:

Hello,

We're building a Mac-only application to capture and view synchronized videos, with all media handling done via AVFoundation classes. Except for RTSP, of course! In order to get things prototyped, our app currently saves streams to .mov files by tasking the openRTSP program. This temporary solution has gotten our app as a whole up and running, and has validated the live555 library as a stream-storage solution. Accordingly, we are now working on a proper implementation.

Basic Summary: We are looking to use the live555 library to save H264+AAC streams in the .mov format, along with timing information useful for synchronization (storage format TBD). Capture will start and stop based on user action in the GUI. There will be no real-time viewing of the incoming streams.

At present I have a working test application that wraps the testRTSPClient code with a bit of Objective-C and runs that on a second thread, calling the library from a shared lib within the application bundle. This delivers the same debug messages as the command-line version, so that seems like a decent basic reference platform to continue to experiment with.

My questions now are about wider application architecture, and while some are not directly about the live555 library itself, I'm sure that many of my questions have recommended practices for answers, and some folks may have direct experience to share. In any case, I'd be grateful to get the initial project going with an appropriate structure.

Generally, I envision a singleton "RTSPStreamService" class (Objective-C++) that wraps and runs the UsageEnvironment (etc.) on its own thread, and an "RTSPStreamClient" class (also Objective-C++) which the app would instantiate per-stream and pass to the RTSPStreamService to connect to and start storing a stream.
This would also provide a reference to instruct the RTSPStreamService to stop capture & finish the file storage. Ideally the interfaces between the larger app and the live555-handling code would be concentrated in a very small number of classes.

Sounds good when described in the abstract, but I'm unsure about a number of things. It sounds like the C++ work will involve subclassing the following classes:

- UsageEnvironment, clearly.

- TaskScheduler? From what I see in the example and the library code, it's not clear to me why BasicTaskScheduler wouldn't be appropriate to use. I guess some info on when TaskScheduler needs subclassing would be helpful.

- RTSPClient Following the example code, since we want to handle multiple streams. Not sure if this is the RTSPStreamClient class mentioned above, or if there is some kind of reference stored to link the two.

- MediaSink (or FileSink?) The example makes it clear that instead of DummySink we'll be creating our own. The plan is to implement an AVAssetWriter-based solution. We have other parts of our app that use this, and the work in this case will involve properly describing the NAL units (and audio samples) to pass the data to those classes. Such details are clearly outside of this list's scope, but my question is whether it would be more appropriate to subclass MediaSink or FileSink. It looks as though much of what makes up FileSink has to do with actual file handling, while we'll probably want something much thinner, using afterGettingFrame to pass the data to a mainly Objective-C class that handles the AVAssetWriter stuff. To me, that makes it sound like subclassing MediaSink is the way to go.

- Other. Anything obviously missing? I'm unsure about how we'd signal from outside the event loop that it's time to stop a particular stream and close up its file. Would that be best done via the "EventTrigger" mechanism the FAQ mentions? Is there some more info, or example code, on that?
No doubt the above is full of naive assumptions and incorrect understanding, but one has to start somewhere!

thanks in advance, -Eric

From finlayson at live555.com Fri Nov 7 21:51:34 2014 From: finlayson at live555.com (Ross Finlayson) Date: Fri, 7 Nov 2014 19:51:34 -1000 Subject: [Live-devel] Building Mac RTSP Client Application In-Reply-To: References: Message-ID:

> - TaskScheduler?
> From what I see in the example and the library code, it's not clear to me why BasicTaskScheduler wouldn't be appropriate to use. I guess some info on when TaskScheduler needs subclassing would be helpful.

"BasicTaskScheduler" turns out to be good enough for most people, so most people don't need to subclass it. Defining your own subclass of "TaskScheduler" is usually something that you'd do only if you wanted to embed LIVE555-based code in some other existing event-loop framework.

> - RTSPClient
> Following the example code, since we want to handle multiple streams.

You should use the "testRTSPClient" code as a model for how to do this.

> - MediaSink (or FileSink?)
> The example makes it clear that instead of DummySink we'll be creating our own. The plan is to implement an AVAssetWriter-based solution. We have other parts of our app that use this, and the work in this case will involve properly describing the NAL units (and audio samples) to pass the data to those classes. Such details are clearly outside of this list's scope, but my question is whether it would be more appropriate to subclass MediaSink or FileSink.

Because you're planning on recording MP4-format files, you should use "QuickTimeFileSink", as is done by "openRTSP" ("testProgs/playCommon.cpp").

> I'm unsure about how we'd signal from outside the event loop that it's time to stop a particular stream and close up its file. Would that be best done via the "EventTrigger" mechanism the FAQ mentions? Is there some more info, or example code, on that?

See "liveMedia/DeviceSource.cpp".

Ross Finlayson Live Networks, Inc.
http://www.live555.com/

-------------- next part -------------- An HTML attachment was scrubbed...

URL: From Anton.Chmelev at transas.com Sat Nov 8 04:47:44 2014 From: Anton.Chmelev at transas.com (Chmelev, Anton) Date: Sat, 8 Nov 2014 12:47:44 +0000 Subject: [Live-devel] Problem with fMaxSize after signalNewFrame() for live source Message-ID: <1415450864099.63858@transas.com>

Hello Ross,

I'm writing an application that encodes screen capture (using the MS Media Foundation H.264 encoder) and transmits it via RTSP using liveMedia. I've written a wrapper for interfacing with the encoder (based on DeviceSource.cpp). When the encoding starts, maxFrameSize eventually grows to BANK_SIZE and all works well, but eventually input frames run out (it is supposed to be a live stream with no preroll). So, when I get a new encoded frame, I trigger signalNewFrame(), at which point maxFrameSize is less than the encoder's output, hence the frames get clipped. I've implemented a "kludge" to keep a record of the maximum frame size and adjust it if it drops. I have doubts about the viability of such a solution; what do you think? Thanks!

BR, Anton

-------------- next part -------------- An HTML attachment was scrubbed...

URL: From finlayson at live555.com Sat Nov 8 05:02:15 2014 From: finlayson at live555.com (Ross Finlayson) Date: Sat, 8 Nov 2014 03:02:15 -1000 Subject: [Live-devel] Problem with fMaxSize after signalNewFrame() for live source In-Reply-To: <1415450864099.63858@transas.com> References: <1415450864099.63858@transas.com> Message-ID: <732D30C5-36FE-42C8-AFC2-F9D5BBE8C12C@live555.com>

> I'm writing an application that encodes screen capture (using MS media foundation h264 encoder) and transmits it via RTSP using liveMedia.
>
> I've written a wrapper for interfacing with the encoder (based on DeviceSource.cpp).
> When the encoding starts, maxFrameSize eventually grows to BANK_SIZE

This implies that (in your "createNewStreamSource()"
implementation) you are feeding incoming H.264 NAL units to an "H264VideoStreamFramer". That is wrong; "H264VideoStreamFramer" is used only when reading an H.264 file as a byte stream. If your encoder "FramedSource" subclass is delivering discrete H.264 NAL units (i.e., one NAL unit at a time), then you must feed it into an "H264VideoStreamDiscreteFramer", *not* an "H264VideoStreamFramer". (Also, each NAL unit that you feed into the "H264VideoStreamDiscreteFramer" must *not* begin with a 'start code' (0x00 0x00 0x00 0x01).)

Ross Finlayson Live Networks, Inc. http://www.live555.com/

-------------- next part -------------- An HTML attachment was scrubbed...
URL: From Anton.Chmelev at transas.com Sun Nov 9 01:13:32 2014 From: Anton.Chmelev at transas.com (Chmelev, Anton) Date: Sun, 9 Nov 2014 09:13:32 +0000 Subject: [Live-devel] Problem with fMaxSize after signalNewFrame() for live source In-Reply-To: <732D30C5-36FE-42C8-AFC2-F9D5BBE8C12C@live555.com> References: <1415450864099.63858@transas.com>, <732D30C5-36FE-42C8-AFC2-F9D5BBE8C12C@live555.com> Message-ID: <1415524412549.51295@transas.com>

Hi,

>> If your encoder "FramedSource" subclass is delivering discrete H.264 NAL units

I was just confused by the fact that MS's encoder outputs multiple NALs with start codes. I separated them, stripped off the start codes, fed them into the H264VideoStreamDiscreteFramer, and it works! Thanks!

BR, Anton

-------------- next part -------------- An HTML attachment was scrubbed...
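[Archive editor's note: the start-code splitting Anton describes can be sketched as below. This is a self-contained illustration, not live555 code; it assumes 4-byte start codes only (real Annex-B streams may also use 3-byte 0x00 0x00 0x01 start codes, which this sketch does not handle), and the helper name is invented for the example.]

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Split an Annex-B buffer into discrete NAL units, stripping the
// 0x00 0x00 0x00 0x01 start codes - the form a
// "H264VideoStreamDiscreteFramer" expects its input NOT to carry.
std::vector<std::vector<uint8_t>> splitNALUnits(const uint8_t* buf, size_t len) {
    std::vector<std::vector<uint8_t>> units;
    size_t i = 0, start = 0;
    bool inUnit = false;
    while (i + 4 <= len) {
        if (buf[i] == 0 && buf[i+1] == 0 && buf[i+2] == 0 && buf[i+3] == 1) {
            if (inUnit) units.emplace_back(buf + start, buf + i);  // close previous unit
            i += 4;          // skip the start code itself
            start = i;       // next unit begins here
            inUnit = true;
        } else {
            ++i;
        }
    }
    if (inUnit && start < len) units.emplace_back(buf + start, buf + len);  // trailing unit
    return units;
}
```

Each element of the returned vector can then be handed to the framer one at a time, matching the "one NAL unit per delivery" requirement above.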
URL: From tencio2001 at skku.edu Tue Nov 11 03:43:34 2014 From: tencio2001 at skku.edu (=?ks_c_5601-1987?B?wMy/67/s?=) Date: Tue, 11 Nov 2014 20:43:34 +0900 Subject: [Live-devel] network coding implementation Message-ID: <033a01cffda4$b9e1ca80$2da55f80$@skku.edu>

Dear sir/madam,

Hi. I would like to ask a question here. I am trying to implement network coding in LIVE555. Network coding in this context means something similar to an error-correction code. For example, I collect a certain number of packets and combine them into a network-coding packet. A sender will send the packet to a receiver, and the receiver decodes the packet to recover the original packets in a lossy network. I have no idea which code I should look at. I have set up a video streaming player (MPlayer) using LIVE555, and so far it was OK. I have been successful with that process; unfortunately, implementing network coding is like a gigantic wall to me. Could you be a dear and answer this question?

Best regards, Yong.

-------------- next part -------------- An HTML attachment was scrubbed...

URL: From finlayson at live555.com Tue Nov 11 06:33:21 2014 From: finlayson at live555.com (Ross Finlayson) Date: Tue, 11 Nov 2014 04:33:21 -1000 Subject: [Live-devel] network coding implementation In-Reply-To: <033a01cffda4$b9e1ca80$2da55f80$@skku.edu> References: <033a01cffda4$b9e1ca80$2da55f80$@skku.edu> Message-ID:

Whenever possible, the "LIVE555 Streaming Media" code follows IETF network protocol standards. The IETF has standardized several RTP payload formats that use FEC (i.e., "Forward Error Correction", which I think is what you refer to as "Network Coding"); however, we currently do not implement any of them. Note that - to implement FEC - it is not sufficient simply to modify ("encode") network packets when transmitting, and modify them again ("decode") when receiving. You also have to indicate in the stream's SDP description that (and how) this modification has taken place.
Thus, the various IETF standard RTP payload formats for FEC don't just define how RTP packets are encoded/decoded; they also define how this FEC payload format is described in SDP. Any implementation of this FEC scheme would therefore need to include this modification to SDP (at both the sending and receiving end). If you have a particular IETF RTP payload format (for FEC) that you are interested in, please let us know.

Ross Finlayson Live Networks, Inc. http://www.live555.com/

-------------- next part -------------- An HTML attachment was scrubbed...

URL: From eric at sparkalley.com Tue Nov 11 10:03:02 2014 From: eric at sparkalley.com (Eric Blanpied) Date: Tue, 11 Nov 2014 10:03:02 -0800 Subject: [Live-devel] Adding streams to already-running event loop Message-ID: <3A4D66FE-9592-4490-BF24-3F03FAC89337@sparkalley.com>

Hello,

Our client app needs to receive multiple streams at once, adding them as required by various parts of the application logic. Both playCommon.cpp and testRTSPClient.cpp appear to require any client->sendDescribeCommand() calls before the env->scheduler().doEventLoop() call. With a single event loop, how does one add new client requests to the already running loop?

thanks, -eric

From finlayson at live555.com Tue Nov 11 11:07:40 2014 From: finlayson at live555.com (Ross Finlayson) Date: Tue, 11 Nov 2014 09:07:40 -1000 Subject: [Live-devel] Adding streams to already-running event loop In-Reply-To: <3A4D66FE-9592-4490-BF24-3F03FAC89337@sparkalley.com> References: <3A4D66FE-9592-4490-BF24-3F03FAC89337@sparkalley.com> Message-ID: <20990371-A1CB-4722-905F-C6990F0C9272@live555.com>

> Our client app needs to receive multiple streams at once, adding them as required by various parts of the application logic.
>
> Both playCommon.cpp and testRTSPClient.cpp appear to require any client->sendDescribeCommand() calls before the env->scheduler().doEventLoop() call.

No - although both of these demo applications happen to do this, it is not "required".
> With a single event loop, how does one add new client requests to the already running loop?

You can create new "RTSPClient" objects and call operations on them (including "sendDescribeCommand()") from within the event loop, at any time. I.e., this could be done from code that's called to handle an event. Your next question is probably: How do I get the event loop to handle an event? See http://www.live555.com/liveMedia/faq.html#other-kinds-of-event

Ross Finlayson Live Networks, Inc. http://www.live555.com/

-------------- next part -------------- An HTML attachment was scrubbed...

URL: From eric at sparkalley.com Tue Nov 11 12:23:13 2014 From: eric at sparkalley.com (Eric Blanpied) Date: Tue, 11 Nov 2014 12:23:13 -0800 Subject: [Live-devel] Adding streams to already-running event loop In-Reply-To: <20990371-A1CB-4722-905F-C6990F0C9272@live555.com> References: <3A4D66FE-9592-4490-BF24-3F03FAC89337@sparkalley.com> <20990371-A1CB-4722-905F-C6990F0C9272@live555.com> Message-ID:

>> With a single event loop, how does one add new client requests to the already running loop?
>
> You can create new "RTSPClient" objects and call operations on them (including "sendDescribeCommand()") from within the event loop, at any time. I.e., this could be done from code that's called to handle an event.
>
> Your next question is probably: How do I get the event loop to handle an event? See http://www.live555.com/liveMedia/faq.html#other-kinds-of-event

OK, having studied DeviceSource.cpp, I have some idea about how this works. It seems like you register with the scheduler for a callback; then, once called, you're safe to make calls that affect the loop. But there are still questions:

1. In this case (vs. DeviceSource) I'm unclear what class I'd set up the eventTrigger on. On my RTSPClient subclass? Perhaps a new class that encapsulates the whole RTSP "service" I'm putting together, or on my subclass of UsageEnvironment?

2. Any restrictions on multiple eventTriggerIds per class?
Guessing that the RTSPClient subclass might be a good place, here's some pseudocode:

class myRTSPClient: public RTSPClient {
public:
  EventTriggerId addStreamEventTriggerId;
  EventTriggerId stopStreamEventTriggerId;
private:
  void addStreamToLoop();
  void stopStream();
}

myRTSPClient::createNew(env, url, etc.) {
  ...
  addStreamEventTriggerId = envir().taskScheduler().createEventTrigger(addStreamToLoop);
}

myRTSPClient::addStreamToLoop() {
  this->sendDescribeCommand(continueAfterDESCRIBE);
}

myClient = myRTSPClient::createNew(env, url, etc.);
ourScheduler->triggerEvent(myRTSPClient::addStreamEventTriggerId, myClient);

On the right track?

thanks! -eric

From finlayson at live555.com Tue Nov 11 13:07:04 2014 From: finlayson at live555.com (Ross Finlayson) Date: Tue, 11 Nov 2014 11:07:04 -1000 Subject: [Live-devel] Adding streams to already-running event loop In-Reply-To: References: <3A4D66FE-9592-4490-BF24-3F03FAC89337@sparkalley.com> <20990371-A1CB-4722-905F-C6990F0C9272@live555.com> Message-ID: <01FE53A7-D9DB-4A43-8D56-1E5898AB4FD9@live555.com>

First you need to figure out exactly when/how you're going to decide when to add a new stream. I.e., you need to figure out what event will cause this to happen. If it's simply a delayed task - within the event loop (i.e., set up using "TaskScheduler::scheduleDelayedTask()") - then you don't need "event triggers" at all. Instead, simply add the new stream from the "delayed task". Ditto if the event occurs as a result of reading data from a socket. If, however, you decide to do this from some external thread (i.e., from a thread that is *not* running the LIVE555 event loop), then you can call "triggerEvent()" from this external thread. Note that "triggerEvent()" is the *only* LIVE555 code that you may call from this external thread; all other LIVE555 calls *must* be done from within the "LIVE555 event loop" thread. If you decide to use "event triggers"
(triggered from an external thread), then you are sort of on the right track, with a couple of corrections:

> myRTSPClient::createNew(env,url,etc.){
>   ...
>   addStreamEventTriggerId = envir().taskScheduler().createEventTrigger(addStreamToLoop);

Note that the function parameter to "createEventTrigger()" has to have the proper "TaskFunc" signature:

typedef void TaskFunc(void* clientData);

> myClient = myRTSPClient::createNew(env, url, etc.);
> ourScheduler->triggerEvent(myRTSPClient::addStreamEventTriggerId, myClient);

Because you are calling "triggerEvent()" from an external thread (again, if you're in the LIVE555 event loop thread when you decide to create a new stream, then you don't need event triggers at all), you can't call "myRTSPClient::createNew()" from within that thread. Instead, the "myRTSPClient::createNew()" call should occur (along with a subsequent "sendDescribeCommand()") within the event handler function - i.e., within the LIVE555 event loop. So, to get this, you can do something like:

///// Within the LIVE555 thread:

void newStreamHandler(void* clientData) {
  char* url = (char*)clientData;
  myRTSPClient* myClient = myRTSPClient::createNew(env, url, etc.);
  myClient->addStreamToLoop();
}

TaskScheduler* live555TaskScheduler = &(envir().taskScheduler()); // A global variable

EventTriggerId newStreamEventTrigger = envir().taskScheduler().createEventTrigger(newStreamHandler); // A global variable
// Note that you need to create only one event trigger

envir().taskScheduler().doEventLoop(); // does not return

///// Then later, within a separate thread:

live555TaskScheduler->triggerEvent(newStreamEventTrigger, url_for_the_new_stream);

Ross Finlayson Live Networks, Inc. http://www.live555.com/

-------------- next part -------------- An HTML attachment was scrubbed...
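[Archive editor's note: to make the "proper TaskFunc signature" point concrete, here is a self-contained toy analogue of the trigger bookkeeping. The MiniScheduler class is an illustration invented for this note, not the live555 TaskScheduler API; in the real library, triggerEvent() queues the event and the event-loop thread runs the handler, whereas this toy invokes it synchronously just to show the data flow.]

```cpp
#include <cassert>
#include <map>
#include <string>

typedef void TaskFunc(void* clientData);  // the same signature live555 requires
typedef unsigned EventTriggerId;

// Toy stand-in for the trigger bookkeeping (illustration only): handlers are
// stored as plain void(void*) function pointers at createEventTrigger() time;
// the clientData pointer is supplied later, at triggerEvent() time.
class MiniScheduler {
public:
    EventTriggerId createEventTrigger(TaskFunc* handler) {
        EventTriggerId id = ++fNextId;
        fHandlers[id] = handler;
        return id;
    }
    // Real live555 queues this and runs it from the event-loop thread;
    // here we call the handler directly.
    void triggerEvent(EventTriggerId id, void* clientData) {
        fHandlers[id](clientData);
    }
private:
    EventTriggerId fNextId = 0;
    std::map<EventTriggerId, TaskFunc*> fHandlers;
};

std::string lastUrl;  // set by the handler, for demonstration

void newStreamHandler(void* clientData) {
    // In a real client this is where the RTSPClient would be created.
    lastUrl = static_cast<char*>(clientData);
}
```

The point of the shape: because the handler is registered once with a fixed signature, all per-event information must travel through the single clientData pointer, which is why Ross's example passes the URL for the new stream there.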
URL: From Ivan.Roubicek at zld.cz Tue Nov 11 13:44:36 2014 From: Ivan.Roubicek at zld.cz (=?iso-8859-2?Q?Ivan_Roub=ED=E8ek?=) Date: Tue, 11 Nov 2014 21:44:36 +0000 Subject: [Live-devel] Live555 RTP/RTCP Proxy Message-ID: <399343558EDA8C46B59B6AD69334A9DB3FDCA264@SERVER1.zld.local>

Hello guys,

I would like to ask you what would be the best way to create an RTP/RTCP proxy server. I have a customized live555 proxy server which works great. The problem I have is with the RTP/RTCP server ports. I have a Silverlight RTSP player which is running without elevated trust (meaning it can only use the port range 4502-4534). So without proxying the RTP/RTCP server ports I could only have 15 connections (32 - 1 port for RTSP).

Thank you very much for any kind of advice.

Best regards, Ivan Roubicek

-------------- next part -------------- An HTML attachment was scrubbed...

URL: From finlayson at live555.com Tue Nov 11 14:02:41 2014 From: finlayson at live555.com (Ross Finlayson) Date: Tue, 11 Nov 2014 12:02:41 -1000 Subject: [Live-devel] Live555 RTP/RTCP Proxy In-Reply-To: <399343558EDA8C46B59B6AD69334A9DB3FDCA264@SERVER1.zld.local> References: <399343558EDA8C46B59B6AD69334A9DB3FDCA264@SERVER1.zld.local> Message-ID: <0746BDF7-FE4D-4459-899C-40ED25CF4A2A@live555.com>

> The problem I have is with RTP/RTCP server ports. I have a silverlight RTSP player which is running without elevated trust (meaning it can only use port range 4502-4534). So without proxying RTP/RTCP server ports I could only have 15 connections (32 - 1 port for RTSP).

Note that you can specify the initial port number (to be used by RTP and RTCP) as the "initialPortNum" parameter to the "OnDemandServerMediaSubsession" constructor. But if the number of available port numbers on your system isn't sufficient, then you should get a better system! Either "elevate your trust" on your system, or use a Unix system (e.g., Linux, FreeBSD, Mac OS X), which doesn't have such a draconian restriction.

Ross Finlayson Live Networks, Inc.
http://www.live555.com/

-------------- next part -------------- An HTML attachment was scrubbed...

URL: From Ivan.Roubicek at zld.cz Tue Nov 11 14:22:15 2014 From: Ivan.Roubicek at zld.cz (=?utf-8?B?SXZhbiBSb3Viw63EjWVr?=) Date: Tue, 11 Nov 2014 22:22:15 +0000 Subject: [Live-devel] Live555 RTP/RTCP Proxy In-Reply-To: <0746BDF7-FE4D-4459-899C-40ED25CF4A2A@live555.com> References: <399343558EDA8C46B59B6AD69334A9DB3FDCA264@SERVER1.zld.local> <0746BDF7-FE4D-4459-899C-40ED25CF4A2A@live555.com> Message-ID: <399343558EDA8C46B59B6AD69334A9DB3FDCA292@SERVER1.zld.local>

Hello,

thank you for your answer. I understand that I can specify the server ports, and I'm already doing that. The problem is with Microsoft Silverlight Elevated Trust. Because this is a web application which will be used by lots of clients, I just have to deal with the restrictions I have. Elevated trust is possible, but then my application would run out of browser (which is not acceptable for us), or the user would have to change a registry key (on MS Windows, for example), which is also not acceptable for us. I believe I can write an RTP/RTCP proxy myself which would forward RTP/RTCP based on session ID or something. I just wanted to discuss the best possible way to do that with you guys, because I believe you have better knowledge about the concepts and best practices with live555.

Best regards, Ivan Roubíček

From: live-devel [mailto:live-devel-bounces at ns.live555.com] On Behalf Of Ross Finlayson Sent: Tuesday, November 11, 2014 11:14 PM To: LIVE555 Streaming Media - development & use Subject: Re: [Live-devel] Live555 RTP/RTCP Proxy

The problem I have is with RTP/RTCP server ports. I have a silverlight RTSP player which is running without elevated trust (meaning it can only use port range 4502-4534). So without proxying RTP/RTCP server ports I could only have 15 connections (32 - 1 port for RTSP).

Note that you can specify the initial port number (to be used by RTP and RTCP) as the "initialPortNum"
parameter to the "OnDemandServerMediaSubsession" constructor. But if the number of available port numbers on your system isn't sufficient, then you should get a better system! Either "elevate your trust" on your system, or use a Unix system (e.g., Linux, FreeBSD, Mac OS X), which doesn't have such a draconian restriction.

Ross Finlayson Live Networks, Inc. http://www.live555.com/

-------------- next part -------------- An HTML attachment was scrubbed...

URL: From finlayson at live555.com Tue Nov 11 14:38:22 2014 From: finlayson at live555.com (Ross Finlayson) Date: Tue, 11 Nov 2014 12:38:22 -1000 Subject: [Live-devel] Live555 RTP/RTCP Proxy In-Reply-To: <399343558EDA8C46B59B6AD69334A9DB3FDCA292@SERVER1.zld.local> References: <399343558EDA8C46B59B6AD69334A9DB3FDCA264@SERVER1.zld.local> <0746BDF7-FE4D-4459-899C-40ED25CF4A2A@live555.com> <399343558EDA8C46B59B6AD69334A9DB3FDCA292@SERVER1.zld.local> Message-ID:

You can reduce the number of sockets used by each RTP/RTCP stream from 2 to 1 by multiplexing RTP and RTCP on the same port. For information on how to do this, see http://lists.live555.com/pipermail/live-devel/2014-March/018179.html Note, though, that you can do this only if you know that your clients will accept/understand this. If your clients don't understand RTP/RTCP multiplexing, then you have no choice but to (somehow) increase the number of available sockets on your system. Sorry.

Ross Finlayson Live Networks, Inc. http://www.live555.com/

-------------- next part -------------- An HTML attachment was scrubbed...
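[Archive editor's note: for background on how a receiver copes with RTP and RTCP arriving on the same port: RFC 5761 demultiplexes on the second octet of each packet, since the RTCP packet types SR (200) through APP (204) occupy values that conforming RTP payload-type octets avoid. A minimal classifier sketch; the function name is invented for this example, and the check covers only the 200-204 range defined in RFC 3550:]

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// RFC 5761 heuristic for a muxed RTP/RTCP port: the second octet of an
// RTCP packet is its packet type - SR=200, RR=201, SDES=202, BYE=203,
// APP=204 - while an RTP packet's second octet (marker bit + payload
// type) stays outside that range when payload types are assigned per
// RFC 5761's guidance.
bool isRTCP(const uint8_t* pkt, size_t len) {
    if (len < 2) return false;     // too short to classify
    uint8_t secondOctet = pkt[1];
    return secondOctet >= 200 && secondOctet <= 204;
}
```

A proxy forwarding a muxed port could use such a check to decide whether a datagram feeds the RTP path or the RTCP report handling.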
URL: From francisco at eyelynx.com Wed Nov 12 03:26:40 2014 From: francisco at eyelynx.com (Francisco Feijoo) Date: Wed, 12 Nov 2014 12:26:40 +0100 Subject: [Live-devel] Possible authentication bug opening several RTSPClient from the same event loop Message-ID:

Hello,

After updating live555 to the latest live.2014.11.07, I'm seeing some issues where some connections are failing with: "Failed to get a SDP description: 401 Unauthorized"

I open several RTSP streams from the same event loop, using one RTSPClient for each stream. Some of those streams connect to the same device, requesting different video channels. For example:

rtsp://admin:12345 at 192.168.1.222:554/h.264/ch1/main/av_stream
rtsp://admin:12345 at 192.168.1.222:554/h.264/ch2/main/av_stream

What I'm seeing is that 'randomly' some are correctly authorized and some are not. Could this be related to the latest changes in 2014.11.01 and 2014.11.07? Is there any way to open several streams with openRTSP or any other live555 tool? This is working OK in live.2014.07.25.

Best Regards.

-- *Francisco Feijoo - MSc (Eng)* Chief Technical Officer EyeLynx a subsidiary of Zaun Limited +44 020 8133 9388 | www.eyelynx.com | @EyeLynxLtd | Youtube

-------------- next part -------------- An HTML attachment was scrubbed...

URL: From finlayson at live555.com Wed Nov 12 04:14:14 2014 From: finlayson at live555.com (Ross Finlayson) Date: Wed, 12 Nov 2014 02:14:14 -1000 Subject: [Live-devel] Possible authentication bug opening several RTSPClient from the same event loop In-Reply-To: References: Message-ID: <265E0883-983E-42C4-A01F-5FBBE6F92A32@live555.com>

Can you reproduce the problem using the *unmodified* "openRTSP" command - or does the problem occur only with your custom RTSP client application (that is opening multiple RTSP streams)? If it's the latter, then I suspect that your problem may be that you are using the same "Authenticator" object for more than one RTSP stream.
You can't do this (especially not with the latest LIVE555 version); you have to use a different "Authenticator" object for each "RTSPClient". But if you can reproduce the problem with "openRTSP" (i.e., on a single RTSP stream), then please post the *complete* RTSP protocol exchange.

Ross Finlayson Live Networks, Inc. http://www.live555.com/

-------------- next part -------------- An HTML attachment was scrubbed...

URL: From Admin at sorinetgroup.net Wed Nov 12 03:13:31 2014 From: Admin at sorinetgroup.net (Administrator) Date: Wed, 12 Nov 2014 15:43:31 +0430 Subject: [Live-devel] current position, time problem Message-ID: <702697081-5304@mail.sorinetgroup.net>

Thanks in advance for this great job you have done. We have an "Amino A139 STB", and the live555 server running on "Ubuntu 14.04.1 LTS (GNU/Linux 3.13.0-32-generic x86_64)". When playing TS streams or audio TS files, the "current position" or "current time" is "-1", which means the Amino cannot receive the correct time, or receives no time at all. But for testing, I tried the free trial version of the Elecard Streamer VOD server, and the time is OK. Is there any way I can fix this problem? Here are the packets captured by Wireshark,
from the live555 server with the "-1" error for current position, and from the Elecard VOD server with an OK current position:

============================================================
live555 streaming server to Amino STB: (no position, hence -1 as position)
============================================================
RTSP/1.0 200 OK
CSeq: 13
Date: Wed, Oct 29 2014 18:28:52 GMT
Session: 2E23B3A9
Content-Length: 10

============================================================
Elecard VOD server packets to Amino STB: (correct position value)
============================================================
RTSP/1.0 200 OK
CSeq: 13
Session: 0B79DEDA10ED9DA693BAEC052DC8E0E86CEEFB07
Server: Elecard Video On Demand Server 3.0.37159.131022
Elecard-SessionID: 000000000000022E
Range: npt=0.000000-7983.040000
Content-Type: text/parameters
Content-Length: 22
Allow: SETUP, TEARDOWN, PLAY, PAUSE, DESCRIBE, GET_PARAMETER, OPTIONS

position: 9.760000

-------------- next part -------------- An HTML attachment was scrubbed...

URL: From taha.ansari at objectsynergy.com Wed Nov 12 02:48:06 2014 From: taha.ansari at objectsynergy.com (Taha Ansari) Date: Wed, 12 Nov 2014 15:48:06 +0500 Subject: [Live-devel] Streaming to RTSP clients with different ports Message-ID: <000c01cffe66$264e51e0$72eaf5a0$@objectsynergy.com>

Hi,

I have a test scenario where I am streaming webcam input + mic over to a remote server using the RTP protocol, which re-streams this to another client (the other client pulls data through RTSP):

FFmpeg (RTP AAC audio + RTP video) -> Server -> (RTSP AAC audio + RTSP video) FFmpeg client

In a local development setup, everything works fine. But in the live environment, neither the audio nor the video gets pulled by the other client. I tried to use RTSP tunneled over HTTP, and that got my video content to the other side, but audio still fails.
If I enable debug level log in FFmpeg, it gives me this error: method PLAY failed: 500 SERVER ERROR and later: x-Error: Failed to create audio This error is given for both local and live server environment, so I believe this could be a server related fault (remember this error is there only for AAC, video contents are delivered/fetched just fine using HTTP tunneling). Being very new to live555, I am not sure how to trigger a command line based live555 instance that will read an input URL (through SDP maybe), and broadcast arriving data over RTSP on some port; if anyone here can guide me, I will really appreciate it. For reference, this is complete console output when fetching stream from live server: ------------------------------------------------------------------ ffplay -loglevel debug -rtsp_transport http rtsp://171.215.211.115:3648/a.aac ffplay version N-63439-g96470ca Copyright (c) 2003-2014 the FFmpeg developers built on May 25 2014 22:05:32 with gcc 4.8.2 (GCC) configuration: --disable-static --enable-shared --enable-gpl --enable-version3 --disable-w32threads --enable-avisynth --enable-bzlib --enable-fontconfig --ena ble-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --e nable-libcaca --enable-libfreetype --enable-libgme --enable-libgsm --enable-libi lbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable- libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-librtmp --enabl e-libschroedinger --enable-libsoxr --enable-libspeex --enable-libtheora --enable -libtwolame --enable-libvidstab --enable-libvo-aacenc --enable-libvo-amrwbenc -- enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-l ibx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-decklink --en able-zlib libavutil 52. 86.100 / 52. 86.100 libavcodec 55. 65.100 / 55. 65.100 libavformat 55. 41.100 / 55. 41.100 libavdevice 55. 13.101 / 55. 13.101 libavfilter 4. 5.100 / 4. 5.100 libswscale 2. 6.100 / 2. 
6.100 libswresample 0. 19.100 / 0. 19.100 libpostproc 52. 3.100 / 52. 3.100 [http @ 022a46a0] request: GET /a.aac HTTP/1.10KB sq= 0B f=0/0 User-Agent: Lavf/55.41.100 Range: bytes=0- Connection: close Host: 171.215.211.115:3648 x-sessioncookie: bceb6136e1a59ada Accept: application/x-rtsp-tunnelled Pragma: no-cache Cache-Control: no-cache [http @ 022a46a0] header='HTTP/1.1 200 OK' 0KB sq= 0B f=0/0 [http @ 022a46a0] http_code=200 [http @ 022a46a0] header='Date: Wed, 12 Nov 2014 04:59:44 GMT' [http @ 022a46a0] header='Server: [server name here]' [http @ 022a46a0] header='Connection: Close' [http @ 022a46a0] header='Content-Type: application/x-rtsp-tunnelled' [http @ 022a46a0] header='Expires: -1' [http @ 022a46a0] header='Cache-Control: private, max-age=0' [http @ 022a46a0] header='' [http @ 022a4780] request: POST /a.aac HTTP/1.1KB sq= 0B f=0/0 User-Agent: Lavf/55.41.100 Accept: */* Connection: close Host: 171.215.211.115:3648 x-sessioncookie: bceb6136e1a59ada Content-Type: application/x-rtsp-tunnelled Pragma: no-cache Cache-Control: no-cache Content-Length: 32767 Expires: Sun, 9 Jan 1972 00:00:00 GMT [rtsp @ 022a4ae0] SDP:= 0 aq= 0KB vq= 0KB sq= 0B f=0/0 v=0 o=- 400522656 1415768385 IN IP4 171.215.211.115 s= c=IN IP4 0.0.0.0 t=0 0 m=audio 0 RTP/AVP 96 a=rtpmap:96 MPEG4-GENERIC/22050/2 a=fmtp:96 profile-level-id=1;mode=AAC-hbr;sizelength=13;indexlength=3;indexdelta length=3;config=1390; a=control:trackID=2 [rtsp @ 022a4ae0] audio codec set to: aac [rtsp @ 022a4ae0] audio samplerate set to: 22050 [rtsp @ 022a4ae0] audio channels set to: 2 [rtsp @ 022a4ae0] hello state=0 0KB vq= 0KB sq= 0B f=0/0 [rtsp @ 022a4ae0] method PLAY failed: 500 SERVER ERROR 0B f=0/0 [rtsp @ 022a4ae0] Server: [server name here] CSeq: 4 Cache-Control: no-cache Date: Wed, 12 Nov 2014 04:59:46 GMT Expires: Wed, 12 Nov 2014 04:59:46 GMT Session: 57538100428063;timeout=30 x-Error: Failed to create audio rtsp://171.215.211.115:3648/a.aac: Invalid data found when processing input 
------------------------------------------------------------------
Note: some info changed above to protect privacy. Thanks in advance for all guidance! Regards,
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From finlayson at live555.com Wed Nov 12 04:54:15 2014
From: finlayson at live555.com (Ross Finlayson)
Date: Wed, 12 Nov 2014 02:54:15 -1000
Subject: [Live-devel] Streaming to RTSP clients with different ports
In-Reply-To: <000c01cffe66$264e51e0$72eaf5a0$@objectsynergy.com>
References: <000c01cffe66$264e51e0$72eaf5a0$@objectsynergy.com>
Message-ID: <4D4BA832-F4D3-4634-9A6E-C36B0E040574@live555.com>

> I have a test scenario, where I am streaming webcam input + mic over to a remote server using RTP protocol, which re-streams this to another client (other client pulls data through RTSP):
>
> FFmpeg (RTP AAC audio + RTP video) -> Server -> (RTSP AAC audio + RTSP video) FFmpeg client

You didn't say specifically which parts of these are using the "LIVE555 Streaming Media" software; I presume it's just the "Server". You should first use "testRTSPClient" and "openRTSP" (our command-line RTSP client applications) as clients to test your server, before using something like "FFmpeg" or "VLC". See http://live555.com/openRTSP/

Ross Finlayson
Live Networks, Inc.
http://www.live555.com/

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From rglobisch at csir.co.za Wed Nov 12 04:49:51 2014
From: rglobisch at csir.co.za (Ralf Globisch)
Date: Wed, 12 Nov 2014 14:49:51 +0200
Subject: [Live-devel] Opus test file
Message-ID: <5463738F0200004D000B693E@pta-emo.csir.co.za>

Hi Ross, Would it be possible to provide the Opus test file you use for testing Opus RTP streaming on live555.com or in the tar file? Thanks, Ralf

-- This message is subject to the CSIR's copyright terms and conditions, e-mail legal notice, and implemented Open Document Format (ODF) standard.
The full disclaimer details can be found at http://www.csir.co.za/disclaimer.html. This message has been scanned for viruses and dangerous content by MailScanner, and is believed to be clean. Please consider the environment before printing this email.

From finlayson at live555.com Wed Nov 12 05:28:57 2014
From: finlayson at live555.com (Ross Finlayson)
Date: Wed, 12 Nov 2014 03:28:57 -1000
Subject: [Live-devel] current position, time problem
In-Reply-To: <702697081-5304@mail.sorinetgroup.net>
References: <702697081-5304@mail.sorinetgroup.net>
Message-ID: <00690BDC-F603-464B-87B7-318E5D0EF87D@live555.com>

> We have "Amino A139 STB", and live555 server running on "Ubuntu 14.04.1 LTS (GNU/Linux 3.13.0-32-generic x86_64)".

Linux "distributions" (such as "Ubuntu") usually contain old, out-of-date versions of the LIVE555 Streaming Media software. Please make sure that you are running an up-to-date version of the software. I suggest using the pre-built Linux binary "live555MediaServer" application, available from http://www.live555.com/mediaServer/

Note also that to perform "trick play" operations when streaming a MPEG Transport Stream file, you will need a corresponding "index file". This is described here: http://www.live555.com/liveMedia/transport-stream-trick-play.html

> ============================================================
> Elecard VOD server packets to Amino STB: (correct position value)
> ============================================================
> RTSP/1.0 200 OK
> CSeq: 13
> Session: 0B79DEDA10ED9DA693BAEC052DC8E0E86CEEFB07
> Server: Elecard Video On Demand Server 3.0.37159.131022
> Elecard-SessionID: 000000000000022E
> Range: npt=0.000000-7983.040000
> Content-Type: text/parameters
> Content-Length: 22
> Allow: SETUP, TEARDOWN, PLAY, PAUSE, DESCRIBE, GET_PARAMETER, OPTIONS
>
> position: 9.760000

Amino apparently uses a RTSP "GET_PARAMETER" command - with a parameter name of "position" - to request that a server return the "Normal Play Time" (i.e., current position in the stream). This is completely non-standard, and I will not consider adding *any* such support to the "LIVE555 Streaming Media" software unless I am contacted *directly* by "Amino Corporation" (not just by some intermediary). See also

> I recommend that you instead use some other - more standards-compliant - set-top box client, other than "Amino".

Ross Finlayson
Live Networks, Inc.
http://www.live555.com/

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From finlayson at live555.com Wed Nov 12 06:00:22 2014
From: finlayson at live555.com (Ross Finlayson)
Date: Wed, 12 Nov 2014 04:00:22 -1000
Subject: [Live-devel] Opus test file
In-Reply-To: <5463738F0200004D000B693E@pta-emo.csir.co.za>
References: <5463738F0200004D000B693E@pta-emo.csir.co.za>
Message-ID: <74382572-EB8D-49D2-9094-04686885A9FB@live555.com>

> Would it be possible to provide the opus test file you use for testing opus RTP streaming on live555.com or in the tar file?

I've left an example ".opus" file in http://www.live555.com/liveMedia/public/opus/ This can be streamed by our RTSP server (including the "LIVE555 Media Server" application). Note, however, that I don't know of any media player yet that can *play* an Opus RTSP/RTP stream. (It should be trivial to update VLC to do this, however.)

Ross Finlayson
Live Networks, Inc.
http://www.live555.com/

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From francisco at eyelynx.com Wed Nov 12 10:01:24 2014
From: francisco at eyelynx.com (Francisco Feijoo)
Date: Wed, 12 Nov 2014 19:01:24 +0100
Subject: [Live-devel] Possible authentication bug opening several RTSPClient from the same event loop
In-Reply-To: <265E0883-983E-42C4-A01F-5FBBE6F92A32@live555.com>
References: <265E0883-983E-42C4-A01F-5FBBE6F92A32@live555.com>
Message-ID:

Hi Ross, Thanks for your support. Looks like I can reproduce the problem with the unmodified openRTSP command. It works on VLC.
./openRTSP rtsp://admin:12345 at 192.168.1.222:554/h.264/ch7/main/av_stream Opening connection to 192.168.1.222, port 554... ...remote connection opened Sending request: OPTIONS rtsp:// admin:12345 at 192.168.1.222:554/h.264/ch7/main/av_stream RTSP/1.0 CSeq: 2 User-Agent: ./openRTSP (LIVE555 Streaming Media v2014.11.07) Received 100 new bytes of response data. Received a complete OPTIONS response: RTSP/1.0 200 OK CSeq: 2 Public: OPTIONS, DESCRIBE, SETUP, TEARDOWN, PLAY, PAUSE, SET_PARAMETER Sending request: DESCRIBE rtsp:// admin:12345 at 192.168.1.222:554/h.264/ch7/main/av_stream RTSP/1.0 CSeq: 3 User-Agent: ./openRTSP (LIVE555 Streaming Media v2014.11.07) Accept: application/sdp Received 73 new bytes of response data. Received a complete DESCRIBE response: RTSP/1.0 401 Unauthorized CSeq: 3 WWW-Authenticate: Basic realm="/" Failed to get a SDP description for the URL "rtsp:// admin:12345 at 192.168.1.222:554/h.264/ch7/main/av_stream": 401 Unauthorized 2014-11-12 13:14 GMT+01:00 Ross Finlayson : > Can you reproduce the problem using the *unmodified* ?openRTSP? command - > or does the problem occur only with your custom RTSP client application > (that is opening multiple RTSP streams)? > > If it?s the latter, then I suspect that your problem may be that you are > using the same ?Authenticator? object for more than one RTSP stream. You > can?t do this (especially not with the latest LIVE555 version); you have to > use a different ?Authenticator? object for each ?RTSPClient?. > > But if you can reproduce the problem with ?openRTSP? (i.e., on a single > RTSP stream), then please post the *complete* RTSP protocol exchange. > > > Ross Finlayson > Live Networks, Inc. > http://www.live555.com/ > > > _______________________________________________ > live-devel mailing list > live-devel at lists.live555.com > http://lists.live555.com/mailman/listinfo/live-devel > > -- *Francisco Feijoo * -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From francisco at eyelynx.com Wed Nov 12 10:40:05 2014 From: francisco at eyelynx.com (Francisco Feijoo) Date: Wed, 12 Nov 2014 19:40:05 +0100 Subject: [Live-devel] Possible authentication bug opening several RTSPClient from the same event loop In-Reply-To: References: <265E0883-983E-42C4-A01F-5FBBE6F92A32@live555.com> Message-ID: Hi Ross, Looking at the code fAllowBasicAuthentication is never initialized. 2014-11-12 19:01 GMT+01:00 Francisco Feijoo : > Hi Ross, > > Thanks for your support. > > Looks like I can reproduce the problem with the unmodified openRTSP > command. It works on VLC. > > ./openRTSP rtsp://admin:12345 at 192.168.1.222:554/h.264/ch7/main/av_stream > Opening connection to 192.168.1.222, port 554... > ...remote connection opened > Sending request: OPTIONS rtsp:// > admin:12345 at 192.168.1.222:554/h.264/ch7/main/av_stream RTSP/1.0 > CSeq: 2 > User-Agent: ./openRTSP (LIVE555 Streaming Media v2014.11.07) > > > Received 100 new bytes of response data. > Received a complete OPTIONS response: > RTSP/1.0 200 OK > CSeq: 2 > Public: OPTIONS, DESCRIBE, SETUP, TEARDOWN, PLAY, PAUSE, SET_PARAMETER > > > Sending request: DESCRIBE rtsp:// > admin:12345 at 192.168.1.222:554/h.264/ch7/main/av_stream RTSP/1.0 > CSeq: 3 > User-Agent: ./openRTSP (LIVE555 Streaming Media v2014.11.07) > Accept: application/sdp > > > Received 73 new bytes of response data. > Received a complete DESCRIBE response: > RTSP/1.0 401 Unauthorized > CSeq: 3 > WWW-Authenticate: Basic realm="/" > > > Failed to get a SDP description for the URL "rtsp:// > admin:12345 at 192.168.1.222:554/h.264/ch7/main/av_stream": 401 Unauthorized > > > > 2014-11-12 13:14 GMT+01:00 Ross Finlayson : > >> Can you reproduce the problem using the *unmodified* ?openRTSP? command - >> or does the problem occur only with your custom RTSP client application >> (that is opening multiple RTSP streams)? >> >> If it?s the latter, then I suspect that your problem may be that you are >> using the same ?Authenticator? 
object for more than one RTSP stream. You
>> can't do this (especially not with the latest LIVE555 version); you have to
>> use a different "Authenticator" object for each "RTSPClient".
>>
>> But if you can reproduce the problem with "openRTSP" (i.e., on a single
>> RTSP stream), then please post the *complete* RTSP protocol exchange.
>>
>> Ross Finlayson
>> Live Networks, Inc.
>> http://www.live555.com/
>>
>> _______________________________________________
>> live-devel mailing list
>> live-devel at lists.live555.com
>> http://lists.live555.com/mailman/listinfo/live-devel

--
*Francisco Feijoo - MSc (Eng)*
Chief Technical Officer
EyeLynx a subsidiary of Zaun Limited
+44 020 8133 9388 | www.eyelynx.com | @EyeLynxLtd | Youtube

EyeLynx Limited, the creators of SharpView, are defining Video Intelligence, full HD and megapixel CCTV to provide state-of-the-art electronic security solutions for critical needs. EyeLynx develops and manufactures total perimeter protection solutions using RADAR, Video Analytics, PIDs, Thermal Imaging and PIRs, which fully integrate into fencing and barrier systems using SharpView. For more information, contact us via sales at eyelynx.com

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From marcin at speed666.info Wed Nov 12 10:47:14 2014
From: marcin at speed666.info (Marcin)
Date: Wed, 12 Nov 2014 19:47:14 +0100
Subject: [Live-devel] Possible authentication bug opening several RTSPClient from the same event loop
In-Reply-To:
References: <265E0883-983E-42C4-A01F-5FBBE6F92A32@live555.com>
Message-ID: <5463AB32.80500@speed666.info>

Hello Ross, To be honest, I sometimes observe this kind of situation in VLC: even with a good user:pass combination, VLC was giving me a 401 auth problem. But I never filed the bug here, as you have many times stated that VLC is not your software. Looks like it is not related to VLC directly.
Marcin W dniu 2014-11-12 19:01, Francisco Feijoo pisze: > Hi Ross, > > Thanks for your support. > > Looks like I can reproduce the problem with the unmodified openRTSP > command. It works on VLC. > > ./openRTSP > rtsp://admin:12345 at 192.168.1.222:554/h.264/ch7/main/av_stream > > Opening connection to 192.168.1.222, port 554... > ...remote connection opened > Sending request: OPTIONS > rtsp://admin:12345 at 192.168.1.222:554/h.264/ch7/main/av_stream > RTSP/1.0 > CSeq: 2 > User-Agent: ./openRTSP (LIVE555 Streaming Media v2014.11.07) > > > Received 100 new bytes of response data. > Received a complete OPTIONS response: > RTSP/1.0 200 OK > CSeq: 2 > Public: OPTIONS, DESCRIBE, SETUP, TEARDOWN, PLAY, PAUSE, SET_PARAMETER > > > Sending request: DESCRIBE > rtsp://admin:12345 at 192.168.1.222:554/h.264/ch7/main/av_stream > RTSP/1.0 > CSeq: 3 > User-Agent: ./openRTSP (LIVE555 Streaming Media v2014.11.07) > Accept: application/sdp > > > Received 73 new bytes of response data. > Received a complete DESCRIBE response: > RTSP/1.0 401 Unauthorized > CSeq: 3 > WWW-Authenticate: Basic realm="/" > > > Failed to get a SDP description for the URL > "rtsp://admin:12345 at 192.168.1.222:554/h.264/ch7/main/av_stream > ": 401 > Unauthorized > > > > 2014-11-12 13:14 GMT+01:00 Ross Finlayson >: > > Can you reproduce the problem using the *unmodified* "openRTSP" > command - or does the problem occur only with your custom RTSP > client application (that is opening multiple RTSP streams)? > > If it's the latter, then I suspect that your problem may be that > you are using the same "Authenticator" object for more than one > RTSP stream. You can't do this (especially not with the latest > LIVE555 version); you have to use a different "Authenticator" > object for each "RTSPClient". > > But if you can reproduce the problem with "openRTSP" (i.e., on a > single RTSP stream), then please post the *complete* RTSP protocol > exchange. > > > Ross Finlayson > Live Networks, Inc. 
> http://www.live555.com/
>
> _______________________________________________
> live-devel mailing list
> live-devel at lists.live555.com
> http://lists.live555.com/mailman/listinfo/live-devel

--
*Francisco Feijoo*

_______________________________________________
live-devel mailing list
live-devel at lists.live555.com
http://lists.live555.com/mailman/listinfo/live-devel

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From finlayson at live555.com Wed Nov 12 10:50:07 2014
From: finlayson at live555.com (Ross Finlayson)
Date: Wed, 12 Nov 2014 08:50:07 -1000
Subject: [Live-devel] Possible authentication bug opening several RTSPClient from the same event loop
In-Reply-To:
References: <265E0883-983E-42C4-A01F-5FBBE6F92A32@live555.com>
Message-ID: <7B6E9E15-633E-45CD-AACF-02ABEA9FCC7E@live555.com>

Yes - my fault! I've just installed a new version (2014.11.12) that fixes this. Thanks for the report.

Ross Finlayson
Live Networks, Inc.
http://www.live555.com/

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From finlayson at live555.com Wed Nov 12 11:00:30 2014
From: finlayson at live555.com (Ross Finlayson)
Date: Wed, 12 Nov 2014 09:00:30 -1000
Subject: [Live-devel] Possible authentication bug opening several RTSPClient from the same event loop
In-Reply-To: <5463AB32.80500@speed666.info>
References: <265E0883-983E-42C4-A01F-5FBBE6F92A32@live555.com> <5463AB32.80500@speed666.info>
Message-ID: <6DDE8130-3EB6-41AC-B625-E316772DA6A7@live555.com>

> To be honest, I sometimes observe this kind of situation in VLC: even with a good user:pass combination, VLC was giving me a 401 auth problem. But I never filed the bug here, as you have many times stated that VLC is not your software.

That's correct. However, if you see such an error with VLC, then you may also be able to reproduce it by running:

    openRTSP -u <username> <password> <rtsp-url>

Should you see such an error (running "openRTSP"), then please let us know.
Ross Finlayson
Live Networks, Inc.
http://www.live555.com/

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From eric at sparkalley.com Wed Nov 12 11:52:28 2014
From: eric at sparkalley.com (Eric Blanpied)
Date: Wed, 12 Nov 2014 11:52:28 -0800
Subject: [Live-devel] Adding streams to already-running event loop
In-Reply-To: <01FE53A7-D9DB-4A43-8D56-1E5898AB4FD9@live555.com>
References: <3A4D66FE-9592-4490-BF24-3F03FAC89337@sparkalley.com> <20990371-A1CB-4722-905F-C6990F0C9272@live555.com> <01FE53A7-D9DB-4A43-8D56-1E5898AB4FD9@live555.com>
Message-ID:

Hi Ross, Thanks for your help so far. I've got this working along the lines you've described, but the problem I'm running into is a need for some kind of external (to the live555 thread) reference to each RTSPClient, in order to trigger other events (stopping one, for example). That's why I thought about creating the clients outside of the loop, but I can understand why that's not a good idea. Creating the RTSPClient instance inside the eventTrigger handler doesn't provide a way to pass a reference back, either (as far as I can tell).

Do you have any recommendation on how to give the main application thread a useful reference to each stream for use with later event triggers? It seems to me that this would be a common desire for an app that handles multiple streams.

thanks, -e

> On Nov 11, 2014, at 1:07 PM, Ross Finlayson wrote:
>
>> myClient = myRTSPClient::createNew(env, url, etc.);
>> ourScheduler->triggerEvent(myRTSPClient::addStreamEventTriggerId, myClient);
>
> Because you are calling "triggerEvent()" from an external thread (again, if you're in the LIVE555 event loop thread when you decide to create a new stream, then you don't need event triggers at all), you can't call "myRTSPClient::createNew()" from within that thread. Instead, the "myRTSPClient::createNew()" call should occur (along with a subsequent "sendDescribeCommand()")
within the event handler function - i.e., within the LIVE555 event loop.
>
> So, to get this, you can do something like:
>
> ///// Within the LIVE555 thread:
>
> void newStreamHandler(void* clientData) {
>   char* url = (char*)clientData;
>   myRTSPClient* myClient = myRTSPClient::createNew(env, url, etc.);
>   myClient->addStreamToLoop();
> }
>
> TaskScheduler* live555TaskScheduler = &(envir().taskScheduler()); // A global variable
> EventTriggerId newStreamEventTrigger = envir().taskScheduler().createEventTrigger(newStreamHandler); // A global variable
> // Note that you need to create only one event trigger
>
> envir().taskScheduler().doEventLoop(); // does not return
>
> ///// Then later, within a separate thread:
>
> live555TaskScheduler->triggerEvent(newStreamEventTrigger, url_for_the_new_stream);

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From finlayson at live555.com Wed Nov 12 12:31:04 2014
From: finlayson at live555.com (Ross Finlayson)
Date: Wed, 12 Nov 2014 10:31:04 -1000
Subject: [Live-devel] Adding streams to already-running event loop
In-Reply-To:
References: <3A4D66FE-9592-4490-BF24-3F03FAC89337@sparkalley.com> <20990371-A1CB-4722-905F-C6990F0C9272@live555.com> <01FE53A7-D9DB-4A43-8D56-1E5898AB4FD9@live555.com>
Message-ID: <77B73F25-CF17-4962-9AE0-263E6A2A2970@live555.com>

> I've got this working along the lines you've described, but the problem I'm running into is a need for some kind of external (to the live555 thread) reference to each RTSPClient, in order to trigger other events (stopping one, for example). That's why I thought about creating the clients outside of the loop, but I can understand why that's not a good idea. Creating the RTSPClient instance inside the eventTrigger handler doesn't provide a way to pass a reference back, either (as far as I can tell).
>
> Do you have any recommendation on how to give the main application thread a useful reference to each stream for use with later event triggers?
It seems to me that this would be a common desire for an app that handles multiple streams.

I suggest creating - in the external thread - some object (struct) that represents the stream. This struct could contain some fields (such as the RTSP URL) that would be set by the external thread, and other LIVE555-specific fields (such as a pointer to a "RTSPClient" object) that would be set and accessed only by the LIVE555-event-loop thread. Then, pass a pointer to this struct (rather than just the RTSP URL) as the "clientData" parameter to "triggerEvent()".

Ross Finlayson
Live Networks, Inc.
http://www.live555.com/

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From eric at sparkalley.com Wed Nov 12 12:58:12 2014
From: eric at sparkalley.com (Eric Blanpied)
Date: Wed, 12 Nov 2014 12:58:12 -0800
Subject: [Live-devel] Adding streams to already-running event loop
In-Reply-To: <77B73F25-CF17-4962-9AE0-263E6A2A2970@live555.com>
References: <3A4D66FE-9592-4490-BF24-3F03FAC89337@sparkalley.com> <20990371-A1CB-4722-905F-C6990F0C9272@live555.com> <01FE53A7-D9DB-4A43-8D56-1E5898AB4FD9@live555.com> <77B73F25-CF17-4962-9AE0-263E6A2A2970@live555.com>
Message-ID: <6E4C656A-6E32-4CE9-A79B-6741BCE406A9@sparkalley.com>

> On Nov 12, 2014, at 12:31 PM, Ross Finlayson wrote:
>
>> I've got this working along the lines you've described, but the problem I'm running into is a need for some kind of external (to the live555 thread) reference to each RTSPClient, in order to trigger other events (stopping one, for example). That's why I thought about creating the clients outside of the loop, but I can understand why that's not a good idea. Creating the RTSPClient instance inside the eventTrigger handler doesn't provide a way to pass a reference back, either (as far as I can tell).
>>
>> Do you have any recommendation on how to give the main application thread a useful reference to each stream for use with later event triggers?
>> It seems to me that this would be a common desire for an app that handles multiple streams.
>
> I suggest creating - in the external thread - some object (struct) that represents the stream. This struct could contain some fields (such as the RTSP URL) that would be set by the external thread, and other LIVE555-specific fields (such as a pointer to a "RTSPClient" object) that would be set and accessed only by the LIVE555-event-loop thread.
>
> Then, pass a pointer to this struct (rather than just the RTSP URL) as the "clientData" parameter to "triggerEvent()".

OK, so the "addNewStream" triggered event, where the RTSPClient is created, would add the pointer to it then. I'm a bit concerned about thread-safety here, since the struct is created on the GUI thread, then manipulated on the live555 thread. I suppose you're implying that safety would come from being careful about each field only being set from the appropriate thread.

-e

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From finlayson at live555.com Wed Nov 12 13:14:13 2014
From: finlayson at live555.com (Ross Finlayson)
Date: Wed, 12 Nov 2014 11:14:13 -1000
Subject: [Live-devel] Adding streams to already-running event loop
In-Reply-To: <6E4C656A-6E32-4CE9-A79B-6741BCE406A9@sparkalley.com>
References: <3A4D66FE-9592-4490-BF24-3F03FAC89337@sparkalley.com> <20990371-A1CB-4722-905F-C6990F0C9272@live555.com> <01FE53A7-D9DB-4A43-8D56-1E5898AB4FD9@live555.com> <77B73F25-CF17-4962-9AE0-263E6A2A2970@live555.com> <6E4C656A-6E32-4CE9-A79B-6741BCE406A9@sparkalley.com>
Message-ID: <3F13B713-491B-452A-82BB-31A58C050FF9@live555.com>

> OK, so the "addNewStream" triggered event, where the RTSPClient is created, would add the pointer to it then. I'm a bit concerned about thread-safety here, since the struct is created on the GUI thread, then manipulated on the live555 thread.
> I suppose you're implying that safety would come from being careful about each field only being set from the appropriate thread.

Yes - if the LIVE555-related fields in the struct are set or accessed only within the LIVE555-event-loop thread (and, of course, the struct itself never gets deleted while the LIVE555-event-loop thread can be working with it), then you should be OK.

(In any case, we've probably now reached the limit of how much assistance I can provide your project "for free" on this mailing list. If your company is interested in having me consult further on your project, then please let me know (via separate email).)

Ross Finlayson
Live Networks, Inc.
http://www.live555.com/

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From taha.ansari at objectsynergy.com Wed Nov 12 21:35:51 2014
From: taha.ansari at objectsynergy.com (Taha Ansari)
Date: Thu, 13 Nov 2014 10:35:51 +0500
Subject: [Live-devel] Streaming to RTSP clients with different ports
Message-ID: <000d01cfff03$b33d7d90$19b878b0$@objectsynergy.com>

> I have a test scenario, where I am streaming webcam input + mic over to a remote server using RTP protocol, which re-streams this to another client (other client pulls data through RTSP):
>
> FFmpeg (RTP AAC audio + RTP video) -> Server -> (RTSP AAC audio + RTSP video) FFmpeg client

>> You didn't say specifically which parts of these are using the "LIVE555 Streaming Media" software; I presume it's just the "Server".

Sorry, my bad. The scenario I mentioned at the start was only the background of the problem; in that scenario I was not using live555, but was using some other server (called NGMS lite, for Linux). This particular server does not support audio-only RTSP streams tunneled over HTTP.
So, similar to this theme, I would rather use live555 to re-stream (if it is easily possible using just the binaries) in a similar pattern:

[FFmpeg (RTP (AAC) audio)] --> [live555 server module] --> [FFmpeg (RTSP (AAC) audio)]

In the above case, I will eventually be using many different RTSP ports (unlike suggested default ports like 554), to support different source(s) and client(s). My understanding is that I can only get this running on different RTSP ports if I am successful in using RTSP tunneled over HTTP... So basically this was the reason for this question.

>> You should first use "testRTSPClient" and "openRTSP" (our command-line RTSP client applications) as clients to test your server, before using something like "FFmpeg" or "VLC".

While I do understand this, the thing is: I have already had good success with FFmpeg clients (on both the transmission and reception end - if only fully on the local network, and partially on live), so I would like to continue in this direction (if possible). So, as a rephrased question, which live555 server module can give me this functionality, i.e.:

[FFmpeg (RTP (AAC) audio)] --> [live555 server module] --> [FFmpeg (RTSP (AAC) audio)]

>> See http://live555.com/openRTSP/
>> Ross Finlayson
>> Live Networks, Inc.
>> http://www.live555.com/

Thanks, Taha

From finlayson at live555.com Wed Nov 12 22:47:33 2014
From: finlayson at live555.com (Ross Finlayson)
Date: Wed, 12 Nov 2014 20:47:33 -1000
Subject: [Live-devel] Streaming to RTSP clients with different ports
In-Reply-To: <000d01cfff03$b33d7d90$19b878b0$@objectsynergy.com>
References: <000d01cfff03$b33d7d90$19b878b0$@objectsynergy.com>
Message-ID: <64B04BD1-6BCC-4F34-860D-9A9792D1C1B6@live555.com>

> So in rephrased question, which live555 server module can give me this
> functionality, i.e.:
>
> [FFmpeg (RTP (AAC) audio)] --> [live555 server module] --> [FFmpeg (RTSP
> (AAC) audio)]

If the first RTP AAC audio source is accessible via RTSP (i.e., if it has a RTSP server), then you should be able to use the "LIVE555 Proxy Server" http://www.live555.com/proxyServer/

Ross Finlayson
Live Networks, Inc.
http://www.live555.com/

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From rglobisch at csir.co.za Thu Nov 13 01:27:19 2014
From: rglobisch at csir.co.za (Ralf Globisch)
Date: Thu, 13 Nov 2014 11:27:19 +0200
Subject: [Live-devel] Opus test file
In-Reply-To: <74382572-EB8D-49D2-9094-04686885A9FB@live555.com>
References: <5463738F0200004D000B693E@pta-emo.csir.co.za> <74382572-EB8D-49D2-9094-04686885A9FB@live555.com>
Message-ID: <546495970200004D000B6B34@pta-emo.csir.co.za>

>>> I've left an example ".opus" file in http://www.live555.com/liveMedia/public/opus/

Perfect, thanks Ross!

>>> This can be streamed by our RTSP server (including the "LIVE555 Media Server" application).
>>> Note, however, that I don't know of any media player yet that can *play* an Opus RTSP/RTP stream. (It should be trivial to update VLC to do this, however.)

Thanks, just added preliminary Opus support to our open-source DirectShow RTSP source filter (https://sourceforge.net/projects/videoprocessing/) which uses live555. FYI, streaming works fine (i.e.
audio plays correctly) with testOnDemandRTSPServer and the test file you provided.

Regards, Ralf

-- This message is subject to the CSIR's copyright terms and conditions, e-mail legal notice, and implemented Open Document Format (ODF) standard. The full disclaimer details can be found at http://www.csir.co.za/disclaimer.html. This message has been scanned for viruses and dangerous content by MailScanner, and is believed to be clean. Please consider the environment before printing this email.

From giovanni.iamonte at quintetto.it Thu Nov 13 05:50:08 2014
From: giovanni.iamonte at quintetto.it (Giovanni Iamonte)
Date: Thu, 13 Nov 2014 14:50:08 +0100
Subject: [Live-devel] RSTP Live streaming from USB camera
Message-ID: <70CD532BCEB06C4ABC4756BED41FC7A698287C@caravaggio.Quintetto.local>

Hello,

Our goal was to generate an H.264 + AAC live stream starting from a USB camera and a microphone. To reach this goal we have used the following chain: USB camera -> ffmpeg -> RTSP live555. Finally, the live stream works, and when we connect with a client like VLC or ffplay, we can see what the camera is shooting.

The only problem we have is that we can only support a limited number of connections (VLC clients), and this number is related to the source's resolution. If you exceed this number, all the VLC clients begin to display artifacts.

Source's resolution 320 x 240 allows just 6 VLC connections.
Source's resolution 640 x 480 allows just 3 VLC connections.
Source's resolution 1920 x 1080 allows just 1 VLC connection.

We have already checked the CPU usage and the bandwidth: CPU usage is around 40% and the average bandwidth is 1 Mbit/s. The OS is Windows.
Below is what we did:

1) We used ffmpeg to capture the images from the camera and convert them to H.264 + AAC frames (avcodec).
2) These frames were pushed into a circular queue.
3) In a thread we created an RTSP server, the media session and two subsessions, one for video and the other for audio (see the code below).
4) Starting from DeviceSource.cpp we created a source that reads the frames from the circular queue.
5) When a client connects to the RTSP server, we create a new StreamSource and a new RTPSink. As you can see in the code below, for the video StreamSource we create an H264VideoStreamDiscreteFramer; the audio source we leave as it is. Regarding the RTPSink, for video we create an H264VideoRTPSink and for audio we create an MPEG4GenericRTPSink.

I will appreciate any help. Thanks, bye.

**********************************************************************

unsigned long WINAPI Live555Thread(void *param)
{
    OutPacketBuffer::maxSize = MAX_FRAME_SIZE;

    TaskScheduler *serverTsk = BasicTaskScheduler::createNew();
    UsageEnvironment *serverEnv = BasicUsageEnvironment::createNew(*serverTsk);
    RTSPServer *rtspServer = RTSPServer::createNew(*serverEnv, g_nRTSPServerPort, NULL);
    ServerMediaSession *sms;

    if (rtspServer == NULL) {
        *serverEnv << "LIVE555: Failed to create RTSP server: " << serverEnv->getResultMsg() << "\n";
        return 0;
    } else {
        char const* descriptionString = "Session streamed by \"QMServer\"";
        char RTSP_Address[1024];
        RTSP_Address[0] = 0x00;
        sms = ServerMediaSession::createNew(*serverEnv, RTSP_Address, RTSP_Address, descriptionString);
        sms->addSubsession(Live555ServerMediaSubsession::createNew(VIDEO_TYPE, *serverEnv, ESTIMATED_VIDEO_BITRATE));
        sms->addSubsession(Live555ServerMediaSubsession::createNew(AUDIO_TYPE, *serverEnv, ESTIMATED_AUDIO_BITRATE));
        rtspServer->addServerMediaSession(sms);
    }

    char* url = rtspServer->rtspURL(sms);
    *serverEnv << "Play this stream using the URL \"" << url << "\"\n";

    for (;;) {
        serverEnv->taskScheduler().doEventLoop(&g_cExitThread); // returns when g_cExitThread is set
        if (g_cExitThread) break;
    }

    Medium::close(rtspServer);
    return 0;
}

**********************************************************************

FramedSource* Live555ServerMediaSubsession::createNewStreamSource(unsigned /*clientSessionId*/, unsigned& estBitrate)
{
    estBitrate = fEstimatedKbps;
    m_source = Live555Source::createNew(envir(), m_type, false);
    if (m_type == VIDEO_TYPE) {
        return H264VideoStreamDiscreteFramer::createNew(envir(), m_source);
    } else {
        return m_source;
    }
}

RTPSink* Live555ServerMediaSubsession::createNewRTPSink(Groupsock* rtpGroupsock, unsigned char /*rtpPayloadTypeIfDynamic*/, FramedSource* inputSource)
{
    OutPacketBuffer::maxSize = MAX_FRAME_SIZE;
    if (m_type == VIDEO_TYPE) {
        return H264VideoRTPSink::createNew(envir(), rtpGroupsock, 96);
    } else {
        unsigned char audioSpecificConfig[2];
        char fConfigStr[10];
        audioSpecificConfig[0] = (AUDIO_AAC_TYPE << 3) | (AUDIO_SRATE_INDEX >> 1);
        audioSpecificConfig[1] = (AUDIO_SRATE_INDEX << 7) | (AUDIO_CHANNELS << 3);
        sprintf(fConfigStr, "%02X%02X", audioSpecificConfig[0], audioSpecificConfig[1]);
        return MPEG4GenericRTPSink::createNew(envir(), rtpGroupsock, 96, AUDIO_SRATE, "audio", "AAC-hbr", fConfigStr, AUDIO_CHANNELS);
    }
}

________________________________________________________________
Ing. Giovanni Iamonte
Area Tecnologie e sviluppi
Quintetto Srl - Pont Saint Martin (AO)
mobile: +39 393 9196310 | tel: +39 0165 1845290 | e-mail: iamonte at quintetto.it | web: www.quintetto.it

-------------- next part -------------- An HTML attachment was scrubbed...
URL:

From finlayson at live555.com Thu Nov 13 08:42:45 2014
From: finlayson at live555.com (Ross Finlayson)
Date: Thu, 13 Nov 2014 06:42:45 -1000
Subject: [Live-devel] RSTP Live streaming from USB camera
In-Reply-To: <70CD532BCEB06C4ABC4756BED41FC7A698287C@caravaggio.Quintetto.local>
References: <70CD532BCEB06C4ABC4756BED41FC7A698287C@caravaggio.Quintetto.local>
Message-ID: <61AC82EC-4D8F-42C6-8945-32781EEA2891@live555.com>

> The only problem that we have is due to the fact that we can only have a limited number of connections (VLC clients) and this
> number is related to the source's resolution.
> If you exceed this number all the VLC clients begin to display artifacts.
>
> Source's resolution 320 x 240 allows just 6 VLC connections.
> Source's resolution 640 x 480 allows just 3 VLC connections.
> Source's resolution 1920 x 1080 allows just 1 VLC connection.

Issues like this are almost always caused by running into resource limitations (CPU and/or network), rather than any inherent problem with the LIVE555 software. Note also that (based on the experience of others) running more than one copy of VLC on the same computer tends to perform very poorly, so if you're testing multiple VLC clients, you should do so on separate computers (separate *physical* computers, not separate "virtual machines").

(Also, a reminder (yet again) that VLC is not our software. The best way to test RTSP client connections is to begin with our "openRTSP" software. Then, and only then, should you use a media player (such as VLC).)

> The OS is windows.

That may well be (at least part of) your problem :-( Windows is simply not a serious operating system for running server software. (It's 2014; no one should be doing this anymore.)
> Below is what we did:
>
> 1) We used ffmpeg to capture the images from the camera and convert them to H.264 + AAC frames (avcodec)
> 2) These frames were pushed into a circular queue
> 3) In a thread we created an RTSP server, the media session and two subsessions, one for video and the other for audio (see the code below)
> 4) Starting from DeviceSource.cpp we created a source that reads the frames from the circular queue.
>
> 5) When a client connects to the RTSP server, we create a new StreamSource and a new RTPSink. As you can see in the code below,
> for the video StreamSource we create an H264VideoStreamDiscreteFramer; the audio source we leave as it is.
>
> Regarding the RTPSink, for the video, we create an H264VideoRTPSink and for the audio we create an MPEG4GenericRTPSink.

This all looks good. One more thing. Because you're streaming from a "live source", make sure that your "Live555ServerMediaSubsession" constructor - when it calls the "OnDemandServerMediaSubsession" constructor - sets the "reuseFirstSource" parameter to True. (That way, your input source will never be instantiated more than once concurrently, regardless of how many RTSP clients you have.)

Ross Finlayson
Live Networks, Inc.
http://www.live555.com/

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From finlayson at live555.com Fri Nov 14 22:28:07 2014
From: finlayson at live555.com (Ross Finlayson)
Date: Fri, 14 Nov 2014 20:28:07 -1000
Subject: [Live-devel] Network video cameras that use the "LIVE555 Streaming Media" software?
Message-ID:

Several network video cameras use the "LIVE555 Streaming Media" software to implement their internal RTSP server. I am putting together a list of these cameras.
If you make use of a network video camera that uses our software, could you please send an email (privately to me, if you'd prefer not to send it to the whole mailing list), mentioning:
1/ The make and model of the network camera
2/ If known, which version of the "LIVE555 Streaming Media" software that it uses. (Often, you can see this in the SDP description that the server returns in response to a RTSP "DESCRIBE" command.)

Thanks in advance,

Ross Finlayson
Live Networks, Inc.
http://www.live555.com/

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From marcin at speed666.info Sat Nov 15 03:20:19 2014
From: marcin at speed666.info (Marcin)
Date: Sat, 15 Nov 2014 12:20:19 +0100
Subject: [Live-devel] Network video cameras that use the "LIVE555 Streaming Media" software?
In-Reply-To: References: Message-ID: <546736F3.4060209@speed666.info>

Hi Ross,

I deal a lot with different IP cams - I am not a manufacturer, but I have plenty of different models online, so I can check if you are interested.

Marcin

On 2014-11-15 07:28, Ross Finlayson wrote:
> Several network video cameras use the "LIVE555 Streaming Media"
> software to implement their internal RTSP server.
>
> I am putting together a list of these cameras.
>
> If you make use of a network video camera that uses our software,
> could you please send an email (privately to me, if you'd prefer not
> to send it to the whole mailing list), mentioning:
> 1/ The make and model of the network camera
> 2/ If known, which version of the "LIVE555 Streaming Media" software
> that it uses. (Often, you can see this in the SDP description that
> the server returns in response to a RTSP "DESCRIBE" command.)
>
> Thanks in advance,
>
> Ross Finlayson
> Live Networks, Inc.
> http://www.live555.com/
>
> _______________________________________________
> live-devel mailing list
> live-devel at lists.live555.com
> http://lists.live555.com/mailman/listinfo/live-devel

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From michael.j.chapman at lmco.com Sat Nov 15 09:57:46 2014
From: michael.j.chapman at lmco.com (Chapman, Michael J)
Date: Sat, 15 Nov 2014 17:57:46 +0000
Subject: [Live-devel] Creating live tsx file or Updating existing TSX file
Message-ID: <3E0D9B1C8DF091429909BD2FD6C86C800DCFF0BF@HDXDSP35.us.lmco.com>

Hello,

I am recording a transport stream file that needs to support RTSP trick play as it is being recorded (with minimal delay with respect to the live video source). I already have the logic for recording the stream, and it will save the .ts file to the filesystem in segments (so the file will continually be growing until the recording is stopped). We are using the Live555 Media Server to allow RTSP playback of our recorded video files. We are also generating .tsx files so the RTSP playback can support trick play. However, in order to support trick play while we are recording, we are continually re-creating the .tsx file. We are using the newest released version of MPEG2TransportStreamIndexer to generate the .tsx files and, as noted on your site, .tsx file generation can take quite a while for larger video files.

Is it possible to create the .tsx file as I am recording and continue indexing as new data is received (so having the indexer block and wait for new data), while also having that partially complete .tsx file be used with the Live555MediaServer for trick play? Or can I create a .tsx file (when the file has 1 min of video) and then later update the .tsx file (at 2 min of video) without having to re-index that first 1 minute of video?

From what I can tell, the functionality that I need does not exist with the current MPEG2TransportStreamIndexer file. However, could it be updated to support this?
Or could you explain how I could update the indexer via subclassing (assuming the .tsx file structure and MediaServer logic supports this case)?

Thank you,

Michael Chapman

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From sungjoo.byun at gmail.com Sat Nov 15 20:21:48 2014
From: sungjoo.byun at gmail.com (SungJoo Byun)
Date: Sun, 16 Nov 2014 13:21:48 +0900
Subject: [Live-devel] Does live555 support matroska streaming?
Message-ID:

I have tested the live555 server with the Matroska format (.mkv). But video does not work. (Audio does work.)

* TEST CASE
SERVER: DynamicRTSPServer.
Client: ffplay, VLC.
TEST MEDIA: test1.mkv ( http://www.matroska.org/downloads/test_w1.html )

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From finlayson at live555.com Sun Nov 16 12:07:25 2014
From: finlayson at live555.com (Ross Finlayson)
Date: Mon, 17 Nov 2014 07:07:25 +1100
Subject: [Live-devel] Creating live tsx file or Updating existing TSX file
In-Reply-To: <3E0D9B1C8DF091429909BD2FD6C86C800DCFF0BF@HDXDSP35.us.lmco.com>
References: <3E0D9B1C8DF091429909BD2FD6C86C800DCFF0BF@HDXDSP35.us.lmco.com>
Message-ID: <7B58CD12-FA57-41D6-9556-A4CA7333A04B@live555.com>

> I am recording a transport stream file that needs to support rtsp trick play as it is being recorded

People often ask for this feature, and I'm a bit puzzled by it. What specific 'trick play' operations do you want the server to be able to perform on the Transport Stream file while it is still growing? (I'm trying to understand whether/when this actually makes sense.)

> Is it possible to create the tsx file as I am recording and continue indexing as new data is received (so having the indexer block and wait for new data) while also having that partially complete tsx file be used with the Live555MediaServer for trick play?

I may well end up adding this feature - but first, I'd need to get a feel for how much it makes sense (thus my question above).
Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Sun Nov 16 13:31:08 2014 From: finlayson at live555.com (Ross Finlayson) Date: Mon, 17 Nov 2014 07:31:08 +1000 Subject: [Live-devel] Does live555 support matroska streaming? In-Reply-To: References: Message-ID: <5BFE8EAE-3A99-44E0-ADDE-6167F3FAA5DE@live555.com> > I have tested live555 server with matroska foramt.(mkv) > But, Video does not work. (Audio does work.) > > * TEST CASE > SERVER: DynamicRTSPServer. > Client : ffplay, VLC. > TEST MEDIA : test1.mkv ( http://www.matroska.org/downloads/test_w1.html ) The reason why video from this particular file can't be streamed is that it uses a proprietary video codec (Matroska codec id: V_MS/VFW/FOURCC) for which there is no standard RTP payload format. Note that the file "test2.mkv" (from the same web site) can be streamed OK (it uses H.264 for video). However, I don't suggest using the Matroska files from that web site, because many of them were specifically designed to illustrate various errors or anomalies in Matroska files. Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From dee.earley at icode.co.uk Mon Nov 17 01:56:22 2014 From: dee.earley at icode.co.uk (Deanna Earley) Date: Mon, 17 Nov 2014 09:56:22 +0000 Subject: [Live-devel] Network video cameras that use the "LIVE555 Streaming Media" software? 
In-Reply-To: References: Message-ID: <431a17ebb2c144069a69b5275d5106ec@SRV-EXCHANGE-2.icode.co.uk>

A positive and a few negatives for you:

Brickcom WCB-100Ap: LIVE555 Streaming Media v2009.01.26
Axis M1011, 5.20.1: Unknown
Axis P1343, 5.40.9.6: Unknown
Vivotek IP8332, 0301c: Unknown
Etrovision EV3151: RTSP Server (v1.2.5)
Dahua DH-SD42212S-HN: Rtsp Server/2.0
Sony SNC-WR630: Linux/2.6.35.14_nl-xarina+ Ze-PRO
Sony SNC-XP600: Linux/2.6.35.14_nl-xarina+ Ze-PRO

I know Axis swapped to a new RTSP server for 5.60, but I don't have one to test at the moment.

--
Deanna Earley | Lead developer | icatchercctv
w: www.icode.co.uk/icatcher | t: 01329 835335 | f: 01329 835338
Registered Office: 71 The Hundred, Romsey, SO51 8BZ. Company Number: 03428325

From: live-devel [mailto:live-devel-bounces at ns.live555.com] On Behalf Of Ross Finlayson
Sent: 15 November 2014 06:28
To: LIVE555 Streaming Media - development & use
Subject: [Live-devel] Network video cameras that use the "LIVE555 Streaming Media" software?

Several network video cameras use the "LIVE555 Streaming Media" software to implement their internal RTSP server. I am putting together a list of these cameras. If you make use of a network video camera that uses our software, could you please send an email (privately to me, if you'd prefer not to send it to the whole mailing list), mentioning:
1/ The make and model of the network camera
2/ If known, which version of the "LIVE555 Streaming Media" software that it uses. (Often, you can see this in the SDP description that the server returns in response to a RTSP "DESCRIBE" command.)

Thanks in advance,

Ross Finlayson
Live Networks, Inc.
http://www.live555.com/

-------------- next part -------------- An HTML attachment was scrubbed...
URL:

From michael.j.chapman at lmco.com Mon Nov 17 07:44:02 2014
From: michael.j.chapman at lmco.com (Chapman, Michael J)
Date: Mon, 17 Nov 2014 15:44:02 +0000
Subject: [Live-devel] EXTERNAL: Re: Creating live tsx file or Updating existing TSX file
In-Reply-To: <7B58CD12-FA57-41D6-9556-A4CA7333A04B@live555.com>
References: <3E0D9B1C8DF091429909BD2FD6C86C800DCFF0BF@HDXDSP35.us.lmco.com> <7B58CD12-FA57-41D6-9556-A4CA7333A04B@live555.com>
Message-ID: <3E0D9B1C8DF091429909BD2FD6C86C800DD00188@HDXDSP35.us.lmco.com>

Ross,

The video streams we are dealing with for our project mainly involve Unmanned Aerial Vehicle (UAV) video footage, which comes to us as a video stream over the network. Due to the nature of the video, we need to be able to view it as soon as possible. We have an option to view it live from the source (a non-RTSP approach with no trick play features), and we also want to be able to view the video footage in near-real time via RTSP. The benefit of the RTSP approach is that each user can have a unique RTSP session of the video with the intent of near-real time trick play. If an analyst sees something of interest in the video, they need the ability to rewind (or at least seek backwards) to a few seconds before the video segment of interest to verify the footage. The ability to re-verify a video segment in the recent past, while watching the near-real time video, is why we would like to have this Live555 indexing feature.

Also, these videos will remain archived and available via RTSP for future analysis, so we will be using the RTSP and trick play options after the video is completely received. We are just hoping that we could use a live indexing RTSP feature for the near-real time analysis use case, instead of creating a custom cache of the video until the video is fully received and indexed. As the video stream we are receiving may be very long (up to or more than an hour in some cases), we cannot realistically index the file as we are receiving it without the requested feature.

Thank you,

Michael Chapman

From: live-devel [mailto:live-devel-bounces at ns.live555.com] On Behalf Of Ross Finlayson
Sent: Sunday, November 16, 2014 1:07 PM
To: LIVE555 Streaming Media - development & use
Subject: EXTERNAL: Re: [Live-devel] Creating live tsx file or Updating existing TSX file

I am recording a transport stream file that needs to support rtsp trick play as it is being recorded

People often ask for this feature, and I'm a bit puzzled by it. What specific 'trick play' operations do you want the server to be able to perform on the Transport Stream file while it is still growing? (I'm trying to understand whether/when this actually makes sense.)

Is it possible to create the tsx file as I am recording and continue indexing as new data is received (so having the indexer block and wait for new data) while also having that partially complete tsx file be used with the Live555MediaServer for trick play?

I may well end up adding this feature - but first, I'd need to get a feel for how much it makes sense (thus my question above).

Ross Finlayson
Live Networks, Inc.
http://www.live555.com/

-------------- next part -------------- An HTML attachment was scrubbed...
URL:

From jshanab at jfs-tech.com Mon Nov 17 08:09:00 2014
From: jshanab at jfs-tech.com (Jeff Shanab)
Date: Mon, 17 Nov 2014 11:09:00 -0500
Subject: [Live-devel] EXTERNAL: Re: Creating live tsx file or Updating existing TSX file
In-Reply-To: <3E0D9B1C8DF091429909BD2FD6C86C800DD00188@HDXDSP35.us.lmco.com>
References: <3E0D9B1C8DF091429909BD2FD6C86C800DCFF0BF@HDXDSP35.us.lmco.com> <7B58CD12-FA57-41D6-9556-A4CA7333A04B@live555.com> <3E0D9B1C8DF091429909BD2FD6C86C800DD00188@HDXDSP35.us.lmco.com>
Message-ID:

That seems similar to security video and, if it is really just the recent past, it can be done entirely in the client. That may even be preferred, for the smooth transition from now to the recent past and back to real time. Note: you would only cache uncompressed video. I maintained 2 GOPs' worth decoded, and the rest for however many seconds were configured. (I had my own HTTP-based server that served out GOPs of video, so I could play in both directions and have random access to the video. I only put things in containers on "export". I streamed to a player of my own design, so I was not streaming RTSP; I streamed multiple protocols, including my own HTTP one, from the same RTSP input. A bit odd, but it worked.)

On Mon, Nov 17, 2014 at 10:44 AM, Chapman, Michael J < michael.j.chapman at lmco.com> wrote:

> Ross,
>
> The video streams we are dealing with for our project mainly involve
> Unmanned Aerial Vehicle (UAV) video footage which comes to us as a video
> stream over the network. Due to the nature of the video, we need to be able
> to view it as soon as possible. We have an option to view it live from the
> source (non-rtsp approach with no trick play features) and we also want to
> be able to view the video footage in near-real time via rtsp. The benefit
> of the rtsp approach is that each user can have a unique rtsp session of
> the video with the intent of near-real time trick play.
If an analyst sees > something of interest in the video, they need to ability to rewind (or at > least seek backwards) a few seconds before the video segment of interest to > verify the footage. The ability to re-verify a video segment in the recent > past, while watching the near-real time video, is why we would like to have > this Live555 Indexing feature. > > > > Also, these videos will remain archived and available via rtsp for future > analysis, so we will be using the rtsp and trick play options after the > video is completely received. We are just hoping that we could use a live > indexing rtsp feature for the near-real time analysis use case instead of > creating a custom caching of the video until the video is fully received > and indexed. As the video stream we are receiving may be very long (may be > up to or more than an hour is some cases), we cannot realistically index > the file as we are receiving it without the requested feature. > > > > Thank you, > > > > Michael Chapman > > > > > > *From:* live-devel [mailto:live-devel-bounces at ns.live555.com] *On Behalf > Of *Ross Finlayson > *Sent:* Sunday, November 16, 2014 1:07 PM > *To:* LIVE555 Streaming Media - development & use > *Subject:* EXTERNAL: Re: [Live-devel] Creating live tsx file or Updating > existing TSX file > > > > I am recording a transport stream file that needs to support rtsp trick > play as it is being recorded > > > > People often ask for this feature, and I'm a bit puzzled by it. What > specific 'trick play' operations do you want the server to be able to > perform on the Transport Stream file while it is still growing? (I'm > trying to understand whether/when this actually makes sense.) > > > > > > Is it possible to create the tsx file as I am recording and continue > indexing as new data if received (so having the indexer block and wait for > new data) while also having that partially complete tsx file be used with > the Live555MediaServer for trick play? 
>
> I may well end up adding this feature - but first, I'd need to get a feel
> for how much it makes sense (thus my question above).
>
> Ross Finlayson
> Live Networks, Inc.
> http://www.live555.com/
>
> _______________________________________________
> live-devel mailing list
> live-devel at lists.live555.com
> http://lists.live555.com/mailman/listinfo/live-devel
>
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From michael.j.chapman at lmco.com Tue Nov 18 07:42:50 2014
From: michael.j.chapman at lmco.com (Chapman, Michael J)
Date: Tue, 18 Nov 2014 15:42:50 +0000
Subject: [Live-devel] Creating live tsx file or Updating existing TSX file
Message-ID: <3E0D9B1C8DF091429909BD2FD6C86C800DD012CB@HDXDSP35.us.lmco.com>

Sorry I didn't clarify this in an earlier post, but one of the goals we were hoping to achieve is the ability to have any RTSP client able to connect to our Media Server and perform these commands. So, we were hoping not to have to develop and use a custom client if possible. For instance, having VLC player (merely as an example) connect to our Media Server and perform the near-real time trick play. I hope that makes sense and better clarifies our goal. That is why we are looking into the .tsx file generation possibilities.

Thank you,

Michael Chapman

From: live-devel [mailto:live-devel-bounces at ns.live555.com] On Behalf Of Jeff Shanab
Sent: Monday, November 17, 2014 9:09 AM
To: LIVE555 Streaming Media - development & use
Subject: Re: [Live-devel] EXTERNAL: Re: Creating live tsx file or Updating existing TSX file

That seems similar to security video and if it is really just the recent past, can be done entirely in the client. It may be preferred for the smooth transition from now to recent-past and back to realtime. Note: You would only cache uncompressed video. I maintained 2 gops worth decoded and the rest however many seconds configured.
( I had my own http based server that served out gops of video so I could play both directions and have random access to the video. I only put things in containers on "export" I streamed to a player of my own design so I was not streaming rtsp, I streamed multiple protocols including my own http one from the same rtsp input. A bit odd, but it worked) On Mon, Nov 17, 2014 at 10:44 AM, Chapman, Michael J > wrote: Ross, The video streams we are dealing with for our project mainly involve Unmanned Aerial Vehicle (UAV) video footage which comes to us as a video stream over the network. Due to the nature of the video, we need to be able to view it as soon as possible. We have an option to view it live from the source (non-rtsp approach with no trick play features) and we also want to be able to view the video footage in near-real time via rtsp. The benefit of the rtsp approach is that each user can have a unique rtsp session of the video with the intent of near-real time trick play. If an analyst sees something of interest in the video, they need to ability to rewind (or at least seek backwards) a few seconds before the video segment of interest to verify the footage. The ability to re-verify a video segment in the recent past, while watching the near-real time video, is why we would like to have this Live555 Indexing feature. Also, these videos will remain archived and available via rtsp for future analysis, so we will be using the rtsp and trick play options after the video is completely received. We are just hoping that we could use a live indexing rtsp feature for the near-real time analysis use case instead of creating a custom caching of the video until the video is fully received and indexed. As the video stream we are receiving may be very long (may be up to or more than an hour is some cases), we cannot realistically index the file as we are receiving it without the requested feature. 
Thank you,

Michael Chapman

From: live-devel [mailto:live-devel-bounces at ns.live555.com] On Behalf Of Ross Finlayson
Sent: Sunday, November 16, 2014 1:07 PM
To: LIVE555 Streaming Media - development & use
Subject: EXTERNAL: Re: [Live-devel] Creating live tsx file or Updating existing TSX file

I am recording a transport stream file that needs to support rtsp trick play as it is being recorded

People often ask for this feature, and I'm a bit puzzled by it. What specific 'trick play' operations do you want the server to be able to perform on the Transport Stream file while it is still growing? (I'm trying to understand whether/when this actually makes sense.)

Is it possible to create the tsx file as I am recording and continue indexing as new data is received (so having the indexer block and wait for new data) while also having that partially complete tsx file be used with the Live555MediaServer for trick play?

I may well end up adding this feature - but first, I'd need to get a feel for how much it makes sense (thus my question above).

Ross Finlayson
Live Networks, Inc.
http://www.live555.com/

_______________________________________________
live-devel mailing list
live-devel at lists.live555.com
http://lists.live555.com/mailman/listinfo/live-devel

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From marcin at speed666.info Tue Nov 18 12:52:55 2014
From: marcin at speed666.info (Marcin)
Date: Tue, 18 Nov 2014 21:52:55 +0100
Subject: [Live-devel] Creating live tsx file or Updating existing TSX file
In-Reply-To: <3E0D9B1C8DF091429909BD2FD6C86C800DD012CB@HDXDSP35.us.lmco.com>
References: <3E0D9B1C8DF091429909BD2FD6C86C800DD012CB@HDXDSP35.us.lmco.com>
Message-ID: <546BB1A7.7010305@speed666.info>

Hi,

I think Michael wants to have some kind of DVR functionality in RTSP, which I don't think is supported.
Marcin

On 2014-11-18 16:42, Chapman, Michael J wrote:
>
> Sorry I didn't clarify this in an earlier post, but one of the goals
> we were hoping to achieve is the ability to have any rtsp client able
> to connect to our Media Server and perform these commands. So, we were
> hoping to not have to develop and use a custom client if possible. For
> instance, having VLC player (merely as an example) connect to our
> Media Server and perform the near-real time trick play. I hope that
> makes sense and better clarifies our goal. That is why we are looking
> into the tsx file generation possibilities.
>
> Thank you,
>
> Michael Chapman
>
> *From:*live-devel [mailto:live-devel-bounces at ns.live555.com] *On
> Behalf Of *Jeff Shanab
> *Sent:* Monday, November 17, 2014 9:09 AM
> *To:* LIVE555 Streaming Media - development & use
> *Subject:* Re: [Live-devel] EXTERNAL: Re: Creating live tsx file or
> Updating existing TSX file
>
> That seems similar to security video and if it is really just the
> recent past, can be done entirely in the client. It may be preferred
> for the smooth transition from now to recent-past and back to realtime.
>
> Note: You would only cache uncompressed video. I maintained 2 gops
> worth decoded and the rest however many seconds configured.
> ( I had my own http based server that served out gops of video so I
> could play both directions and have random access to the video. I only
> put things in containers on "export" I streamed to a player of my own
> design so I was not streaming rtsp, I streamed multiple protocols
> including my own http one from the same rtsp input. A bit odd, but it
> worked)
>
> On Mon, Nov 17, 2014 at 10:44 AM, Chapman, Michael J
> > wrote:
>
> Ross,
>
> The video streams we are dealing with for our project mainly involve
> Unmanned Aerial Vehicle (UAV) video footage which comes to us as a
> video stream over the network. Due to the nature of the video, we need
> to be able to view it as soon as possible.
We have an option to view > it live from the source (non-rtsp approach with no trick play > features) and we also want to be able to view the video footage in > near-real time via rtsp. The benefit of the rtsp approach is that each > user can have a unique rtsp session of the video with the intent of > near-real time trick play. If an analyst sees something of interest in > the video, they need to ability to rewind (or at least seek backwards) > a few seconds before the video segment of interest to verify the > footage. The ability to re-verify a video segment in the recent past, > while watching the near-real time video, is why we would like to have > this Live555 Indexing feature. > > Also, these videos will remain archived and available via rtsp for > future analysis, so we will be using the rtsp and trick play options > after the video is completely received. We are just hoping that we > could use a live indexing rtsp feature for the near-real time analysis > use case instead of creating a custom caching of the video until the > video is fully received and indexed. As the video stream we are > receiving may be very long (may be up to or more than an hour is some > cases), we cannot realistically index the file as we are receiving it > without the requested feature. > > Thank you, > > Michael Chapman > > *From:*live-devel [mailto:live-devel-bounces at ns.live555.com > ] *On Behalf Of *Ross Finlayson > *Sent:* Sunday, November 16, 2014 1:07 PM > *To:* LIVE555 Streaming Media - development & use > *Subject:* EXTERNAL: Re: [Live-devel] Creating live tsx file or > Updating existing TSX file > > I am recording a transport stream file that needs to support rtsp > trick play as it is being recorded > > People often ask for this feature, and I'm a bit puzzled by it. What > specific 'trick play' operations do you want the server to be able to > perform on the Transport Stream file while it is still growing? 
(I'm > trying to understand whether/when this actually makes sense.) > > Is it possible to create the tsx file as I am recording and > continue indexing as new data if received (so having the indexer > block and wait for new data) while also having that partially > complete tsx file be used with the Live555MediaServer for trick play? > > I may well end up adding this feature - but first, I'd need to get a > feel for how much it makes sense (thus my question above). > > Ross Finlayson > Live Networks, Inc. > http://www.live555.com/ > > > _______________________________________________ > live-devel mailing list > live-devel at lists.live555.com > http://lists.live555.com/mailman/listinfo/live-devel -------------- next part -------------- An HTML attachment was scrubbed... URL: From sam.hudson at thefoundry.co.uk Tue Nov 18 03:41:09 2014 From: sam.hudson at thefoundry.co.uk (Sam Hudson) Date: Tue, 18 Nov 2014 11:41:09 +0000 Subject: [Live-devel] RTSP Stream from remote machine Message-ID: Hi, I have compiled the './testH264VideoStreamer' program, and it seems to work fine. I've used openRTSP from the local machine, and it received the stream to file as it should. However, the machine is an Amazon EC2 instance, which has a public and a private IP address. When trying to access the stream using the public IP address, it does not work. When using openRTSP from a remote machine, this is the output. It appears it's trying to use the internal IP address when receiving the stream data. Could anyone give me a heads up on the correct way to set this up? $ ./openRTSP rtsp://ec2-54-171-254-16.eu-west-1.compute.amazonaws.com:8554/testStream Opening connection to 54.171.254.16, port 8554...
...remote connection opened Sending request: OPTIONS rtsp:// ec2-54-171-254-16.eu-west-1.compute.amazonaws.com:8554/testStream RTSP/1.0 CSeq: 2 User-Agent: ./openRTSP (LIVE555 Streaming Media v2014.11.12) Received 152 new bytes of response data. Received a complete OPTIONS response: RTSP/1.0 200 OK CSeq: 2 Date: Tue, Nov 18 2014 11:38:54 GMT Public: OPTIONS, DESCRIBE, SETUP, TEARDOWN, PLAY, PAUSE, GET_PARAMETER, SET_PARAMETER Sending request: DESCRIBE rtsp:// ec2-54-171-254-16.eu-west-1.compute.amazonaws.com:8554/testStream RTSP/1.0 CSeq: 3 User-Agent: ./openRTSP (LIVE555 Streaming Media v2014.11.12) Accept: application/sdp Received 760 new bytes of response data. Received a complete DESCRIBE response: RTSP/1.0 200 OK CSeq: 3 Date: Tue, Nov 18 2014 11:38:54 GMT Content-Base: rtsp://172.31.9.199:8554/testStream/ Content-Type: application/sdp Content-Length: 591 v=0 o=- 1416309499604805 1 IN IP4 172.31.9.199 s=Session streamed by "testH264VideoStreamer" i=test.264 t=0 0 a=tool:LIVE555 Streaming Media v2014.11.12 a=type:broadcast a=control:* a=source-filter: incl IN IP4 * 172.31.9.199 a=rtcp-unicast: reflection a=range:npt=0- a=x-qt-text-nam:Session streamed by "testH264VideoStreamer" a=x-qt-text-inf:test.264 m=video 18888 RTP/AVP 96 c=IN IP4 232.76.230.47/255 b=AS:500 a=rtpmap:96 H264/90000 a=fmtp:96 packetization-mode=1;profile-level-id=64001F;sprop-parameter-sets=Z2QAH6wrQEAINgIgAAB9AAA6mBHjhlQ=,aO48sA== a=control:track1 Opened URL "rtsp:// ec2-54-171-254-16.eu-west-1.compute.amazonaws.com:8554/testStream", returning a SDP description: v=0 o=- 1416309499604805 1 IN IP4 172.31.9.199 s=Session streamed by "testH264VideoStreamer" i=test.264 t=0 0 a=tool:LIVE555 Streaming Media v2014.11.12 a=type:broadcast a=control:* a=source-filter: incl IN IP4 * 172.31.9.199 a=rtcp-unicast: reflection a=range:npt=0- a=x-qt-text-nam:Session streamed by "testH264VideoStreamer" a=x-qt-text-inf:test.264 m=video 18888 RTP/AVP 96 c=IN IP4 232.76.230.47/255 b=AS:500 a=rtpmap:96 H264/90000 
a=fmtp:96 packetization-mode=1;profile-level-id=64001F;sprop-parameter-sets=Z2QAH6wrQEAINgIgAAB9AAA6mBHjhlQ=,aO48sA== a=control:track1 Created receiver for "video/H264" subsession (client ports 18888-18889) Sending request: SETUP rtsp://172.31.9.199:8554/testStream/track1 RTSP/1.0 CSeq: 4 User-Agent: ./openRTSP (LIVE555 Streaming Media v2014.11.12) Transport: RTP/AVP;multicast;client_port=18888-18889 Received 196 new bytes of response data. Received a complete SETUP response: RTSP/1.0 200 OK CSeq: 4 Date: Tue, Nov 18 2014 11:38:54 GMT Transport: RTP/AVP;multicast;destination=232.76.230.47;source=172.31.9.199;port=18888-18889;ttl=255 Session: ADC6AB32;timeout=65 -- -- *Sam Hudson* *Software Engineer* The Foundry Web: www.thefoundry.co.uk Email: sam.hudson at thefoundry.co.uk The Foundry Visionmongers Ltd. Registered in England and Wales No: 4642027 -------------- next part -------------- An HTML attachment was scrubbed... URL: From jshanab at jfs-tech.com Tue Nov 18 13:31:07 2014 From: jshanab at jfs-tech.com (Jeff Shanab) Date: Tue, 18 Nov 2014 16:31:07 -0500 Subject: [Live-devel] Creating live tsx file or Updating existing TSX file In-Reply-To: <546BB1A7.7010305@speed666.info> References: <3E0D9B1C8DF091429909BD2FD6C86C800DD012CB@HDXDSP35.us.lmco.com> <546BB1A7.7010305@speed666.info> Message-ID: Rtsp with trick play should have no difficulty it is the container chosen. A lot of containers need to close to create an index. Every system I have ever worked with that needed this type of functionality has two files, One for the index and one for the video. You can read and write in parallel, easy peasy. It is why I did not tie myself to a container until it was an "export" operation. Then I could export any subset of the video since the index can then be created for the specific container and clip length on demand. be it AVI or whatever. 
On Tue, Nov 18, 2014 at 3:52 PM, Marcin wrote: > Hi, > I think that Michael wants to have somekind of DVR functionality in RTSP > which is not supported i think. > Marcin > > W dniu 2014-11-18 16:42, Chapman, Michael J pisze: > > Sorry I didn?t clarify this in an earlier post, but one of the goals we > were hoping to achieve is the ability to have any rtsp client able to > connect to our Media Server and perform these commands. So, we were hoping > to not have to develop and use a custom client if possible. For instance, > having VLC player (merely as an example) connect to our Media Server and > perform the near-real time trick play. I hope that makes sense and better > clarifies our goal. That is why we are looking into the tsx file generation > possibilities. > > > > Thank you, > > > > Michael Chapman > > > > *From:* live-devel [mailto:live-devel-bounces at ns.live555.com > ] *On Behalf Of *Jeff Shanab > *Sent:* Monday, November 17, 2014 9:09 AM > *To:* LIVE555 Streaming Media - development & use > *Subject:* Re: [Live-devel] EXTERNAL: Re: Creating live tsx file or > Updating existing TSX file > > > > That seems similar to security video and if it is really just the recent > past, can be done entirely in the client. It may be preferred for the > smooth transition from now to recent-past and back to realtime. > > Note: You would only cache uncompressed video. I maintained 2 gops worth > decoded and the rest however many seconds configured. > ( I had my own http based server that served out gops of video so I could > play both directions and have random access to the video. I only put things > in containers on "export" I streamed to a player of my own design so I was > not streaming rtsp, I streamed multiple protocols including my own http one > from the same rtsp input. 
A bit odd, but it worked) > > > > On Mon, Nov 17, 2014 at 10:44 AM, Chapman, Michael J < > michael.j.chapman at lmco.com> wrote: > > Ross, > > > > The video streams we are dealing with for our project mainly involve > Unmanned Aerial Vehicle (UAV) video footage which comes to us as a video > stream over the network. Due to the nature of the video, we need to be able > to view it as soon as possible. We have an option to view it live from the > source (non-rtsp approach with no trick play features) and we also want to > be able to view the video footage in near-real time via rtsp. The benefit > of the rtsp approach is that each user can have a unique rtsp session of > the video with the intent of near-real time trick play. If an analyst sees > something of interest in the video, they need to ability to rewind (or at > least seek backwards) a few seconds before the video segment of interest to > verify the footage. The ability to re-verify a video segment in the recent > past, while watching the near-real time video, is why we would like to have > this Live555 Indexing feature. > > > > Also, these videos will remain archived and available via rtsp for future > analysis, so we will be using the rtsp and trick play options after the > video is completely received. We are just hoping that we could use a live > indexing rtsp feature for the near-real time analysis use case instead of > creating a custom caching of the video until the video is fully received > and indexed. As the video stream we are receiving may be very long (may be > up to or more than an hour is some cases), we cannot realistically index > the file as we are receiving it without the requested feature. 
> > > > Thank you, > > > > Michael Chapman > > > > > > *From:* live-devel [mailto:live-devel-bounces at ns.live555.com] *On Behalf > Of *Ross Finlayson > *Sent:* Sunday, November 16, 2014 1:07 PM > *To:* LIVE555 Streaming Media - development & use > *Subject:* EXTERNAL: Re: [Live-devel] Creating live tsx file or Updating > existing TSX file > > > > I am recording a transport stream file that needs to support rtsp trick > play as it is being recorded > > > > People often ask for this feature, and I'm a bit puzzled by it. What > specific 'trick play' operations do you want the server to be able to > perform on the Transport Stream file while it is still growing? (I'm > trying to understand whether/when this actually makes sense.) > > > > > > Is it possible to create the tsx file as I am recording and continue > indexing as new data if received (so having the indexer block and wait for > new data) while also having that partially complete tsx file be used with > the Live555MediaServer for trick play? > > > > I may well end up adding this feature - but first, I'd need to get a feel > for how much it makes sense (thus my question above). > > Ross Finlayson > Live Networks, Inc. > http://www.live555.com/ > > > > > _______________________________________________ > live-devel mailing list > live-devel at lists.live555.com > http://lists.live555.com/mailman/listinfo/live-devel > > > > > _______________________________________________ > live-devel mailing listlive-devel at lists.live555.comhttp://lists.live555.com/mailman/listinfo/live-devel > > > > _______________________________________________ > live-devel mailing list > live-devel at lists.live555.com > http://lists.live555.com/mailman/listinfo/live-devel > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From giovanni.iamonte at quintetto.it Wed Nov 19 08:30:02 2014 From: giovanni.iamonte at quintetto.it (Giovanni Iamonte) Date: Wed, 19 Nov 2014 17:30:02 +0100 Subject: [Live-devel] RSTP Live streaming from USB camera (Ross Finlayson) Message-ID: <70CD532BCEB06C4ABC4756BED41FC7A6A3C241@caravaggio.Quintetto.local> Hi Ross. Thanks for your suggestion; after we set reuseFirstSource to true, everything works as we expect. Anyway, before this change, when more clients were connected to the server, the initial sequence they received was SPS, PPS, keyframe, slice, ..., slice. Now, when several clients are connected to the server, only the first one receives the sequence SPS, PPS, keyframe; all the others receive slice, slice, ..., SPS, PPS, keyframe. While maintaining a single live source, is there a way in which each new client can always receive a sequence beginning with SPS, PPS and key frames? This is not a blocking issue, because normally the client discards the first few slices and snaps to the correct sequence. Anyway, we would appreciate any suggestions. Bye Message: 1 Date: Thu, 13 Nov 2014 06:42:45 -1000 From: Ross Finlayson To: LIVE555 Streaming Media - development & use Subject: Re: [Live-devel] RSTP Live streaming from USB camera Message-ID: <61AC82EC-4D8F-42C6-8945-32781EEA2891 at live555.com> Content-Type: text/plain; charset="utf-8" > The only problem that we have is due to the fact that we can only have a > limited number of connections (client vlc) and this number is related to the source's resolution. > If you exceed this number all the VLC clients begin to display artifacts. > > Source's resolution 320 x 240 allows just 6 VLC connections. > Source's resolution 640 x 480 allows just 3 VLC connections. > Source's resolution 1920 x 1080 allows just 1 VLC connection. Issues like this are almost always caused by running into resource limitations (CPU and/or network), rather than any inherent problem with the LIVE555 software.
Note also that (based on the experience of others) running more than one copy of VLC on the same computer tends to perform very poorly, so if you're testing multiple VLC clients, you should do so on separate computers (separate *physical* computers, not separate "virtual machines"). (Also, a reminder (yet again) that VLC is not our software. The best way to test RTSP client connections is to begin with our "openRTSP" software: >. Then, and only then, should you use a media player (such as VLC).) > The OS is windows. That may well be (at least part of) your problem :-( Windows is simply not a serious operating system for running server software. (It's 2014; no one should be doing this anymore.) > Below, what we did: > > 1) We used ffmpeg to capture the images from the camera and convert > them to H264 + AAC frames (avcoded) > 2) These frames were pushed in a circular queue > 3) In a thread we created an RTP server, the media session and two > subsessions, one for the video and the other for audio (see the code > below) > 4) Starting from DeviceSource.cpp we created a source that reads the frames from the circular queue. > > 5) When a client connects to the RTP server, we create a > NewStreamSource and a NewRTPSink. As you can see in the code below, for the video StreamSource we create an H264VideoStreamDiscreteFramer; the audio we leave as it is. > > Regarding the RTPSink, for the video we create an H264VideoRTPSink and for the audio we create an MPEG4GenericRTPSink. This all looks good. One more thing. Because you're streaming from a 'live source', make sure that your "Live555ServerMediaSubsession" constructor - when it calls the "OnDemandServerMediaSubsession" constructor - sets the "reuseFirstSource" parameter to True. (That way, your input source will never be instantiated more than once concurrently, regardless of how many RTSP clients you have.) Ross Finlayson Live Networks, Inc.
http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Wed Nov 19 12:56:09 2014 From: finlayson at live555.com (Ross Finlayson) Date: Thu, 20 Nov 2014 06:56:09 +1000 Subject: [Live-devel] RTSP Stream from remote machine In-Reply-To: References: Message-ID: <89FE1500-6503-4605-AFFA-35B792E022BE@live555.com> > When trying to access the stream using the public ip address it does not work. The reason for this is that "testH264VideoStreamer" (like all of the "*Streamer" demo applications) transmits the RTP stream over IP multicast, but you apparently don't have IP multicast connectivity between the server and your client - perhaps just because the server's public network interface doesn't have a route for 224.0.0.0/4. If you're sure that you have IP multicast routing between your server and client, then you can probably fix this by adding a route for 224.0.0.0/4 to the server's public interface. Alternatively, you can use the "LIVE555 Media Server" application, or the "testOnDemandRTSPServer" demo application, both of which stream via unicast rather than multicast. Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Wed Nov 19 13:33:23 2014 From: finlayson at live555.com (Ross Finlayson) Date: Thu, 20 Nov 2014 07:33:23 +1000 Subject: [Live-devel] Creating live tsx file or Updating existing TSX file In-Reply-To: References: <3E0D9B1C8DF091429909BD2FD6C86C800DD012CB@HDXDSP35.us.lmco.com> <546BB1A7.7010305@speed666.info> Message-ID: <2CC11FEF-5355-4DB4-B903-007CBDB6ECEE@live555.com> (Some misinformation has crept into this thread - in part because I've taken a while to respond.
Sorry :-) As some people noted, it would be possible to implement the mechanism that Michael Chapman's asking for (rewind/seek backwards) in the client - but it shouldn't be necessary, because the RTSP protocol already has a mechanism for this - and, as Michael noted, RTSP is already used for 'off line' playback, so it makes sense to try to use the same RTSP client for both mechanisms. So yes, it should be possible for me to update the "MPEG2TransportStreamIndexer" application to continually extend the index file if the underlying Transport Stream file keeps growing. (This will also require a small change to the RTSP server's Transport Stream 'trick play' code to allow for the possibility of the index file growing after it was first read.) I have only limited Internet access over the next few weeks, but I'll see if I can make this change sometime before mid-December. (If you are interested in paying money to expedite this, let me know by separate email.) Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Thu Nov 20 13:59:09 2014 From: finlayson at live555.com (Ross Finlayson) Date: Fri, 21 Nov 2014 07:59:09 +1000 Subject: [Live-devel] RSTP Live streaming from USB camera (Ross Finlayson) In-Reply-To: <70CD532BCEB06C4ABC4756BED41FC7A6A3C241@caravaggio.Quintetto.local> References: <70CD532BCEB06C4ABC4756BED41FC7A6A3C241@caravaggio.Quintetto.local> Message-ID: <6365A305-15A4-40F6-8962-199D37558436@live555.com> > While maintaining a single live source, is there a way for which, > each new client can always receive a sequence beginning with SPS, PPS > and key frames? No. However, each RTSP client will already have the SPS and PPS NAL units - because they will have been present in the SDP description that the client received from the server (as a response to the RTSP "DESCRIBE" command).
Therefore, the client can insert these SPS and PPS NAL units at the front of the stream, before it starts to decode them. To illustrate this, look at the code for the "openRTSP" client application - see "testProgs/playCommon.cpp", lines 863-865. Note how the code calls "MediaSubsession::fmtp_spropparametersets()" to get a SDP format string that encodes the SPS and PPS NAL units. Then, note the code in "liveMedia/H264or5VideoFileSink.cpp", lines 48-55. Note how the code calls the function "parseSPropParameterSets()" to decode the SDP format string into a sequence of (binary) NAL units. (For H.264 video, this will usually be 2 NAL units: the SPS and the PPS.) You can then feed each of these binary NAL unit to your decoder, before it receives the rest of the NAL units (in the network stream). (Depending on your decoder, you may need to prepend each one with a 4-byte 0x00 0x00 0x00 0x01 'start code'.) Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben.willcox at willcoxonline.com Thu Nov 20 08:14:48 2014 From: ben.willcox at willcoxonline.com (Ben Willcox) Date: Thu, 20 Nov 2014 16:14:48 +0000 Subject: [Live-devel] Problem with receiving RTSP from Antrica ANT-35000 Message-ID: <546E1378.30505@willcoxonline.com> Hi, I've been using openRTSP for several years in my application, receiving an rtsp stream from Axis video encoders and Axis IP cameras and saving to disk. I'm doing some trials with a new video encoder, an Antrica ANT-35000 which is an HD encoder (unlike the Axis ones which are 640x480 max). I've come across a strange problem when capturing the rtsp stream from this using openRTSP (v2014.11.12) whereby every time a keyframe is encountered the video freezes for a couple of frames (playing back in VLC). 
If I play back frame by frame in Avidemux I get an error in the console every 30 frames (the I-frame interval set in the encoder) and the picture does not update for 2 frames: ------------ [lavc] error in FFMP43/mpeg4! [lavc] Err: -1, size:9 Editor: Last Decoding2 failed for frame 63 [lavc] no frame! ------------ The command I'm running is: openRTSP -4 -n -t -Q -w 1920 -h 1080 -b300000 rtsp:///video1 > testfile.mp4 (basically the same I've been using with the Axis cams but a larger buffer size). If I use VLC to capture the rtsp stream and save to disk, this then plays back without any problems and is smooth so I'm not quite sure what is going on. Any ideas on where I should look to solve the problem? Thanks, Ben From peter at schlaile.de Thu Nov 20 14:23:58 2014 From: peter at schlaile.de (Peter Schlaile) Date: Thu, 20 Nov 2014 23:23:58 +0100 Subject: [Live-devel] [Patch] timeout patch for openRTSP to prevent stream cut off on broken servers Message-ID: <1416522238.10169.10.camel@peters.schlaile.de> Hi, I ran into the problem mentioned in 2010 on the development list, since we own several security DVRs which technically can stream RTSP videos but don't handle RTCP RRs properly. http://lists.live555.com/pipermail/live-devel/2010-July/012383.html Find a patch attached, that adds a small change to openRTSP (activated only on user's request), that sends OPTIONS request to the server in regular intervals to keep it alive and happy. Regards, Peter -- ---- Peter Schlaile -------------- next part -------------- A non-text attachment was scrubbed... 
Name: session_timeout.patch Type: text/x-patch Size: 4659 bytes Desc: not available URL: From finlayson at live555.com Fri Nov 21 09:40:04 2014 From: finlayson at live555.com (Ross Finlayson) Date: Sat, 22 Nov 2014 03:40:04 +1000 Subject: [Live-devel] [Patch] timeout patch for openRTSP to prevent stream cut off on broken servers In-Reply-To: <1416522238.10169.10.camel@peters.schlaile.de> References: <1416522238.10169.10.camel@peters.schlaile.de> Message-ID: <3EC08629-EE26-425F-BDE7-E4176E056B9A@live555.com> Thanks for the note. I'll likely add such an option to "openRTSP" sometime in the future. (It's not a high priority, though; I'd prefer that people fix their servers.) Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Fri Nov 21 16:02:01 2014 From: finlayson at live555.com (Ross Finlayson) Date: Sat, 22 Nov 2014 10:02:01 +1000 Subject: [Live-devel] Problem with receiving RTSP from Antrica ANT-35000 In-Reply-To: <546E1378.30505@willcoxonline.com> References: <546E1378.30505@willcoxonline.com> Message-ID: > If I use VLC to capture the rtsp stream and save to disk, this then plays back without any problems and is smooth so I'm not quite sure what is going on. > Any ideas on where I should look to solve the problem? The code for the "QuickTimeFileSink" class - which is what we use to record ".mp4"-format files. (The reason why VLC behaves differently is that it decodes the media that it receives, then re-encodes it for storing into a ".mp4"-format file. "openRTSP", however, just records the data that it receives, 'as is' (because we don't do any decoding/encoding).) Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From chieppa at elmaxsrl.it Fri Nov 21 05:26:00 2014 From: chieppa at elmaxsrl.it (chieppa at elmaxsrl.it) Date: Fri, 21 Nov 2014 14:26:00 +0100 Subject: [Live-devel] Struggling with a stange behavior Message-ID: Dear All,

I'm using Live555 to implement a C++ RTSP client for IP cameras, based mostly on the testRTSPClient code. I also use the Poco library and the Poco::Thread class. In other words, the client for each camera runs in a separate thread that owns its own instance of the Live555 objects (each thread uses its own UsageEnvironment and TaskScheduler). This avoids shared variables and synchronization stuff. It seems to work well and fast.

My runnable object IPCamera (following the Poco library requirements) has a run method as simple as:

    void IPCamera::run() {
        openURL(_myEnv, "", _myRtspCommand.c_str(), *this); // taken from the testRTSPClient example
        _myEnv->TaskScheduler().doEventLoop(&_watchEventLoopVariable); // runs until _watchEventLoopVariable changes to a value != 0
        // exit from run
    }

When run is finished I also call join() to close the thread (by the way, I found out that if I don't call myThread->join() the memory is not freed totally). When I shut down, following the requirements in Live555-devel, I put in my code:

    void IPCamera::shutdown() {
        ...
        _myEnv->reclaim();
        delete _myScheduler;
    }

Using Valgrind to detect memory leaks I saw a strange behaviour.

Case 1: run the program, then close it with all the IPCameras running in the proper manner.
a) At the end of the program all the destructors are invoked.
b) Exit from doEventLoop().
c) Join the thread (it is actually terminated, because it exits from the run method).
d) Destroy _myEnv and _myScheduler as shown.
e) Destroy all the other objects, including each IPCamera and its associated Thread.
-> No memory leaks are found by Valgrind. OK.

Now comes the problem.

Case 2: I'm implementing a use case where a Poco::Timer checks every X seconds whether the camera is alive, using ICMP ping. It raises an event (using Poco events) if the camera doesn't answer because the network is down, and then I do the following when the IPCamera or the network is down:
a) Set _watchEventLoopVariable = 1 to exit from the run method.
b) Shut down the client associated with the IPCamera as shown.
c) Join the thread.
At this point I don't destroy the thread, because I would like to reuse it when the network comes up again and the camera works again. In that case:
a) I set the _eventWatchVariable = 0.
b) I start the thread again with: myThread->run()

Valgrind tells me that memory leaks (or something else that loses bytes) are found: 60 bytes direct, 20.000 bytes indirect are lost in the thread, in H264BufferdPackedFactory::createNewPacket(...), called by ReorederingPacketBuffer::getFreePacket(), by SocketDescriptor::tcpReadHandler(), by BasicTaskScheduler::singleStep().

I'm going crazy. Why this behaviour in this case? Has someone solved this case? How can I avoid it? Three days of debugging and alternative solutions, but... nothing. If the network is always on, the memory is OK. If the network goes down, the memory leak is there. And the same logic is applied to my software.

Thank you. Cristiano Chieppa, Elmax -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Fri Nov 21 19:50:55 2014 From: finlayson at live555.com (Ross Finlayson) Date: Sat, 22 Nov 2014 13:50:55 +1000 Subject: [Live-devel] Network video cameras that use the "LIVE555 Streaming Media" software? In-Reply-To: <431a17ebb2c144069a69b5275d5106ec@SRV-EXCHANGE-2.icode.co.uk> References: <431a17ebb2c144069a69b5275d5106ec@SRV-EXCHANGE-2.icode.co.uk> Message-ID: > Brickcom WCB-100Ap LIVE555 Streaming Media v2009.01.26 Is this running the most up-to-date version of Brickcom's firmware?
One of the implications of the LGPL is that Brickcom must, upon request, upgrade their firmware to use the latest version of the "LIVE555 Streaming Media" software, or else provide a way for you to perform this update yourself. (This applies to *any* product that uses our software.) Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Fri Nov 21 21:11:31 2014 From: finlayson at live555.com (Ross Finlayson) Date: Sat, 22 Nov 2014 15:11:31 +1000 Subject: [Live-devel] Struggling with a stange behavior In-Reply-To: References: Message-ID: <566F60CD-1614-4B74-887F-530BA4630D97@live555.com> > any client for each camera runs in a separate thread > Note that you don't need to do this. Instead, it is possible (and, in fact, much easier) to have multiple RTSP clients running in a single thread, using a single event loop. Note how the "testRTSPClient" code does this. > Valgrind tells me that memory leaks (or something else that lost bytes) are found: 60 bytes direct, 20.000 indirect bytes are lost in the thread, > > in the H264BufferdPackedFactory::createNewPacket(...) by ReorederingPacketBuffer::getFreePacket() by SocketDescriptor::tcpReadHandler() by BasicTaskScheduler::singleStep(). > > I'm going crazy. Why this behaviour in this case? > I don't know, but I wouldn't pay too much attention to the details in "valgrind" reports; they're notoriously unreliable. But in any case I suggest reimplementing your application as a single-threaded application (as noted above). Not only will it be simpler, but it'll be easier to debug. (Alternatively, if your clients are truly independent, then you should also consider running each client as a separate *process* (i.e., as a separate running instance of an application).) Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From grom86 at mail.ru Thu Nov 20 21:39:38 2014 From: grom86 at mail.ru (minus) Date: Fri, 21 Nov 2014 08:39:38 +0300 Subject: [Live-devel] Video delay on the new connection to the streamer In-Reply-To: <6A76559F-1FC5-44C0-AD6E-4E9BF1A6AE40@live555.com> References: <1415102541.405766293@f258.i.mail.ru> <6A76559F-1FC5-44C0-AD6E-4E9BF1A6AE40@live555.com> Message-ID: <1416548378.14200096@f428.i.mail.ru> It seems it is a CPU resource limitation. Is it possible to synchronize the audio and video tracks after such CPU resource limitations, during the record process? I mean, how can I determine that desynchronisation happened during playing? Maybe some RTCP packet data information can help, or something else? Wed, 5 Nov 2014 20:12:11 -0800, from Ross Finlayson: >There's no known issue with our library code that can be causing the delays that you are seeing. In my experience, delays like this are almost always caused by resource limitations (e.g., network bandwidth and/or CPU) somewhere. > >Ross Finlayson >Live Networks, Inc. >http://www.live555.com/ >_______________________________________________ >live-devel mailing list >live-devel at lists.live555.com >http://lists.live555.com/mailman/listinfo/live-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From peter at schlaile.de Sat Nov 22 01:43:14 2014 From: peter at schlaile.de (Peter Schlaile) Date: Sat, 22 Nov 2014 10:43:14 +0100 Subject: [Live-devel] [Patch] timeout patch for openRTSP to prevent stream cut off on broken servers In-Reply-To: References: Message-ID: <1416649394.9480.9.camel@peters.schlaile.de> Hi Ross, * there was a unified diff attached, which means the only thing you had to do was run the "patch" command after reviewing it, *not* code it all over again.
(Or: alternatively tell me my coding style is bad and demand changes, whatever :) ) * hardware vendors seldom *just fix their servers*, so: most people that are affected are just stuck with the current *non-working* situation... * as mentioned in the original posting from 2010, there is reason to believe that the standard isn't even very specific regarding the *correct* handling of timeouts, so: maybe *you* got it wrong, and *they* were right. BTW: I even made it an *option*, so people have to explicitly *enable* the workaround on the command line. Just my 2 (oh wait 3) cents. Regards, Peter On Friday, 21.11.2014, at 16:02 -0800, live-devel-request at ns.live555.com wrote: > Thanks for the note. I'll likely add such an option to "openRTSP" > sometime in the future. (It's not a high priority, though; I'd prefer > that people fix their servers.) From finlayson at live555.com Sat Nov 22 13:10:31 2014 From: finlayson at live555.com (Ross Finlayson) Date: Sun, 23 Nov 2014 07:10:31 +1000 Subject: [Live-devel] Video delay on the new connection to the streamer In-Reply-To: <1416548378.14200096@f428.i.mail.ru> References: <1415102541.405766293@f258.i.mail.ru> <6A76559F-1FC5-44C0-AD6E-4E9BF1A6AE40@live555.com> <1416548378.14200096@f428.i.mail.ru> Message-ID: <45AD6595-779A-4006-93F8-B28D3C9AF88C@live555.com> > Is it possible to synchronize audio and video tracks [...] during record process. For audio and video to be properly synchronized, the source media must be given proper "presentation time" values ("fPresentationTime"), which must be aligned with "wall clock" time (i.e., the time that you'd get by calling "gettimeofday()"). > Maybe some RTCP packet data information can help Yes, our code does this automatically, provided that "presentation time" values are set properly in the server (see above). Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed...
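The "fPresentationTime" bookkeeping described above is ordinary timeval arithmetic: seed the first timestamp from gettimeofday() so it is wall-clock aligned, then advance it by each frame's duration. A minimal POSIX sketch; the helper name is ours, not live555's:

```cpp
#include <cassert>
#include <sys/time.h>

// Advance a presentation time by a frame duration given in microseconds,
// carrying overflow from tv_usec into tv_sec. A live source would seed
// the very first timestamp with gettimeofday() so that presentation
// times align with wall-clock time, which RTCP-based sync relies on.
static struct timeval advancePresentationTime(struct timeval pt,
                                              long durationUs) {
    pt.tv_usec += durationUs;
    pt.tv_sec += pt.tv_usec / 1000000;
    pt.tv_usec %= 1000000;
    return pt;
}
```

For a 25 fps source the per-frame duration would be 40000 microseconds; the carry keeps tv_usec in the valid [0, 999999] range across second boundaries.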
URL: From ben.willcox at willcoxonline.com Sat Nov 22 13:44:36 2014 From: ben.willcox at willcoxonline.com (Ben Willcox) Date: Sat, 22 Nov 2014 21:44:36 +0000 Subject: [Live-devel] Problem with receiving RTSP from Antrica ANT-35000 In-Reply-To: References: <546E1378.30505@willcoxonline.com> Message-ID: <547103C4.4020500@willcoxonline.com> On 22/11/2014 00:02, Ross Finlayson wrote: > The code for the "QuickTimeFileSink" class - which is what we use to > record ".mp4"-format files. > > (The reason why VLC behaves differently is that it decodes the media > that it receives, then re-encodes it for storing into a ".mp4"-format > file. "openRTSP", however, just records the data that it receives, > 'as is' (because we don't do any decoding/encoding).) > Thanks Ross. I was kind of thinking more of some higher-level testing rather than at that level, as unfortunately diagnosing the code is beyond my capabilities. I'm interested to know whether the problem is due to the encoder itself perhaps not adhering to standards, or a problem within openRTSP. If the problem is within the encoder then I can pass that information back to the hardware developers (whether they'd actually fix it is another matter), but I don't know how to determine where the problem lies. If I were to make a sample MP4 file available for analysis, is this likely to give any clues? Thanks, Ben -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Sat Nov 22 14:33:31 2014 From: finlayson at live555.com (Ross Finlayson) Date: Sun, 23 Nov 2014 08:33:31 +1000 Subject: [Live-devel] [Patch] timeout patch for openRTSP to prevent stream cut off on broken servers In-Reply-To: <1416649394.9480.9.camel@peters.schlaile.de> References: <1416649394.9480.9.camel@peters.schlaile.de> Message-ID: <81BA4398-F68F-428A-B863-F30769CCAF9F@live555.com> Yes, reviewing and applying the patch is trivial.
However, I'll decide when to make the next release of the software, thank you :-) Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From dee.earley at icode.co.uk Mon Nov 24 00:54:42 2014 From: dee.earley at icode.co.uk (Deanna Earley) Date: Mon, 24 Nov 2014 08:54:42 +0000 Subject: [Live-devel] Network video cameras that use the "LIVE555 Streaming Media" software? In-Reply-To: References: <431a17ebb2c144069a69b5275d5106ec@SRV-EXCHANGE-2.icode.co.uk> Message-ID: <57543cd07f924d6c86af2dfb1174c343@SRV-EXCHANGE-2.icode.co.uk> It is not, and any attempts I've made to upgrade it have failed. (It has various other issues that we have to live with too) -- Deanna Earley | Lead developer | icatchercctv w: www.icode.co.uk/icatcher | t: 01329 835335 | f: 01329 835338 Registered Office : 71 The Hundred, Romsey, SO51 8BZ. Company Number : 03428325 From: live-devel [mailto:live-devel-bounces at ns.live555.com] On Behalf Of Ross Finlayson Sent: 22 November 2014 03:51 To: LIVE555 Streaming Media - development & use Subject: Re: [Live-devel] Network video cameras that use the "LIVE555 Streaming Media" software? Brickcom WCB-100Ap LIVE555 Streaming Media v2009.01.26 Is this running the most up-to-date version of Brickcom's firmware? One of the implications of the LGPL is that Brickcom must, upon request, upgrade their firmware to use the latest version of the "LIVE555 Streaming Media" software, or else provide a way for you to perform this update yourself. (This applies to *any* product that uses our software.) Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed...
URL: From ulrich.teichert at vs-n.de Sun Nov 23 23:55:24 2014 From: ulrich.teichert at vs-n.de (Ulrich Teichert) Date: Mon, 24 Nov 2014 07:55:24 +0000 Subject: [Live-devel] Struggling with a strange behavior In-Reply-To: <566F60CD-1614-4B74-887F-530BA4630D97@live555.com> References: <566F60CD-1614-4B74-887F-530BA4630D97@live555.com> Message-ID: <3EBF04C724020C4FA1FB878F9BC3D93B101FFE@VSVR-EX10-MB1.pocatec.com> Hi, >From: live-devel [mailto:live-devel-bounces at ns.live555.com] On Behalf Of Ross >Finlayson >>Valgrind tells me that memory leaks (or something else that lost bytes) are found: >>60 bytes direct, 20,000 indirect bytes are lost in the thread, >>in the H264BufferedPacketFactory::createNewPacket(...) by >>ReorderingPacketBuffer::getFreePacket() by SocketDescriptor::tcpReadHandler() by >>BasicTaskScheduler::singleStep(). >>I'm going crazy. Why this behaviour in this case? >I don't know, but I wouldn't pay too much attention to the details in "valgrind" >reports; they're notoriously unreliable. I recently suspected that an application of ours was leaking memory and ran valgrind against it. After fixing the obvious leaks (singletons, laziness...), I only had false positives left in ALSA - this was on an ARM-Linux platform. All other reports were IMHO correct. So I tend to disagree here ;-) The problem with less obvious memory leaks is that valgrind can only report where the memory gets allocated. Finding the responsible object in which to delete/free it can be tricky, but only when it's not clear which object owns the memory. HTH, Uli From jewen at dnake.com Mon Nov 24 18:19:45 2014 From: jewen at dnake.com (jewen at dnake.com) Date: Tue, 25 Nov 2014 10:19:45 +0800 Subject: [Live-devel] live555 err terminate called after throwing an instance of 'int' terminate called recursively Message-ID: <201411251019446409039@dnake.com> hi, version: "live555 0.84" config: config.armlinux problem: when running 'wis-streamer' and connecting to it with VLC, an error occurs.
terminate called after throwing an instance of 'int' terminate called recursively note: I found that the error occurs when getAuxSDPLine is called, in OnDemandServerMediaSubsession::setSDPLinesFromRTPSink(): char const* auxSDPLine = getAuxSDPLine(rtpSink, inputSource); jewen at dnake.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Tue Nov 25 02:21:08 2014 From: finlayson at live555.com (Ross Finlayson) Date: Tue, 25 Nov 2014 18:21:08 +0800 Subject: [Live-devel] Network video cameras that use the "LIVE555 Streaming Media" software? In-Reply-To: <57543cd07f924d6c86af2dfb1174c343@SRV-EXCHANGE-2.icode.co.uk> References: <431a17ebb2c144069a69b5275d5106ec@SRV-EXCHANGE-2.icode.co.uk> <57543cd07f924d6c86af2dfb1174c343@SRV-EXCHANGE-2.icode.co.uk> Message-ID: <2B5FF94B-C0A0-483E-871C-2E6D62D514E1@live555.com> > It is not, and any attempts I've made to upgrade it have failed. > (It has various other issues that we have to live with too) FYI, I have just sent a note to Brickcom, reminding them of their obligations under the LGPL, and asking them to release a new version of their firmware that uses the latest version of the "LIVE555 Streaming Media" software. Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From peter at schlaile.de Tue Nov 25 04:19:44 2014 From: peter at schlaile.de (Peter Schlaile) Date: Tue, 25 Nov 2014 13:19:44 +0100 Subject: [Live-devel] [Patch] timeout patch for openRTSP to prevent stream cut off on broken servers In-Reply-To: References: Message-ID: <1416917984.24383.67.camel@peterbuero.schlaile.de> Hi Ross, > Yes, reviewing and applying the patch is trivial. However, I'll decide > when to make the next release of the software, thank you :-) no offense, I never doubted that :) Happy patching! Kind regards & keep up the good work!
Peter -- Peter Schlaile -------------- next part -------------- An HTML attachment was scrubbed... URL: From markb at virtualguard.com Tue Nov 25 09:22:53 2014 From: markb at virtualguard.com (Mark Bondurant) Date: Tue, 25 Nov 2014 17:22:53 +0000 Subject: [Live-devel] Playback Speed In-Reply-To: References: <6555ad5ce84f48d189928894094ae3e1@VGI-EX1.vg.local> <708eef3d62f641b7ac0cbbbd8a3a74b6@VGI-EX1.vg.local> <782e1b3f8cbe4209ae3831669d1f7fe5@VGI-EX1.vg.local> <1bb41c8a63c14a2d968f0c98e74f0519@VGI-EX1.vg.local> <00115A6A-D5E6-43BB-94B4-1523AD4046D1@live555.com> <0ad348a0997b4056a2b3ee87a9c12656@VGI-EX1.vg.local> Message-ID: I'm encountering a curious problem with playback speed. I'm pulling an RTSP play stream from my cameras, copying the NALs, prepending them with the start code, as contiguous GOP units straight to disk, headers and all. And they play fine. Just at double speed. The camera is set to NTSC/H264/CIF/12 fps/Constant Bit Rate/0.125 Mbps. Any ideas what I'm doing wrong? -------------- next part -------------- An HTML attachment was scrubbed... URL: From gappa88 at gmail.com Wed Nov 26 08:44:31 2014 From: gappa88 at gmail.com (Marco Gasparini) Date: Wed, 26 Nov 2014 17:44:31 +0100 Subject: [Live-devel] live555 compiled windows7 Message-ID: hello, I need to communicate with a device using the SIP protocol, with full-duplex audio and video streaming via RTP. I have compiled live555 with NMake following the instructions. Now I would like to use those libraries in a Qt project. Before doing that, I tried to compile the playSIP tutorial with Qt Creator.
I have linked all the libraries this way, without success:

unix|win32: LIBS += -L"$$PWD/../../Downloads/live/liveMedia/" -llibliveMedia
unix|win32: LIBS += -L"$$PWD/../../Downloads/live/groupsock/" -llibgroupsock
unix|win32: LIBS += -L"$$PWD/../../Downloads/live/BasicUsageEnvironment/" -llibBasicUsageEnvironment
unix|win32: LIBS += -L"$$PWD/../../Downloads/live/UsageEnvironment/" -llibUsageEnvironment
INCLUDEPATH += "$$PWD/../../Downloads/live/liveMedia/include"
DEPENDPATH += "$$PWD/../../Downloads/live/liveMedia/include"
INCLUDEPATH += "$$PWD/../../Downloads/live/BasicUsageEnvironment/include"
DEPENDPATH += "$$PWD/../../Downloads/live/BasicUsageEnvironment/include"
INCLUDEPATH += "$$PWD/../../Downloads/live/groupsock/include"
DEPENDPATH += "$$PWD/../../Downloads/live/groupsock/include"
INCLUDEPATH += "$$PWD/../../Downloads/live/UsageEnvironment/include"
DEPENDPATH += "$$PWD/../../Downloads/live/UsageEnvironment/include"

I get "error LNK2019" and "LNK2001" if I compile the program with Qt Creator using the VS compiler. I get "undefined reference to SIPClient::createNew ..." if I compile the program with Qt Creator using the MinGW compiler. Do you have any suggestions, please? thanks Marco -------------- next part -------------- An HTML attachment was scrubbed... URL: From jshanab at jfs-tech.com Wed Nov 26 17:57:16 2014 From: jshanab at jfs-tech.com (Jeff Shanab) Date: Wed, 26 Nov 2014 20:57:16 -0500 Subject: [Live-devel] Playback Speed In-Reply-To: References: <6555ad5ce84f48d189928894094ae3e1@VGI-EX1.vg.local> <708eef3d62f641b7ac0cbbbd8a3a74b6@VGI-EX1.vg.local> <782e1b3f8cbe4209ae3831669d1f7fe5@VGI-EX1.vg.local> <1bb41c8a63c14a2d968f0c98e74f0519@VGI-EX1.vg.local> <00115A6A-D5E6-43BB-94B4-1523AD4046D1@live555.com> <0ad348a0997b4056a2b3ee87a9c12656@VGI-EX1.vg.local> Message-ID: How are you playing the file? The container format you use is in charge of maintaining the timing information, and the player you use or write uses the stored timestamps to gate out the frames.
Double speed is ominous though. Could it be that you have interleaved frames? If it is AVI, then the creation code was passed the wrong numerator or denominator? On Tue, Nov 25, 2014 at 12:22 PM, Mark Bondurant wrote: > I'm encountering a curious problem with playback speed. I'm pulling an > RTSP play stream from my cameras, copying the NALs, prepending them with > the start code, as contiguous GOP units straight to disk, headers and all. > And they play fine. *Just at double speed.* The camera is set to > NTSC/H264/CIF/12 fps/Constant Bit Rate/0.125 Mbps. Any ideas what I'm doing > wrong? > > > > > > _______________________________________________ > live-devel mailing list > live-devel at lists.live555.com > http://lists.live555.com/mailman/listinfo/live-devel > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dee.earley at icode.co.uk Thu Nov 27 01:07:01 2014 From: dee.earley at icode.co.uk (Deanna Earley) Date: Thu, 27 Nov 2014 09:07:01 +0000 Subject: [Live-devel] Playback Speed In-Reply-To: References: <6555ad5ce84f48d189928894094ae3e1@VGI-EX1.vg.local> <708eef3d62f641b7ac0cbbbd8a3a74b6@VGI-EX1.vg.local> <782e1b3f8cbe4209ae3831669d1f7fe5@VGI-EX1.vg.local> <1bb41c8a63c14a2d968f0c98e74f0519@VGI-EX1.vg.local> <00115A6A-D5E6-43BB-94B4-1523AD4046D1@live555.com> <0ad348a0997b4056a2b3ee87a9c12656@VGI-EX1.vg.local> Message-ID: <1b073030c41c429d82d55d79cd3864b7@SRV-EXCHANGE-2.icode.co.uk> Play in what? Raw H.264 streams have no "speed" encoded with them, so you'd need to find a way to specify that in your player, or use a container (like AVI, MP4, etc.) that stores the extra metadata. The player is probably assuming 25 fps. -- Deanna Earley | Lead developer | icatchercctv w: www.icode.co.uk/icatcher | t: 01329 835335 | f: 01329 835338 Registered Office : 71 The Hundred, Romsey, SO51 8BZ.
Company Number : 03428325 From: live-devel [mailto:live-devel-bounces at ns.live555.com] On Behalf Of Mark Bondurant Sent: 25 November 2014 17:23 To: 'LIVE555 Streaming Media - development & use' Subject: [Live-devel] Playback Speed I'm encountering a curious problem with playback speed. I'm pulling an RTSP play stream from my cameras, copying the NALs, prepending them with the start code, as contiguous GOP units straight to disk, headers and all. And they play fine. Just at double speed. The camera is set to NTSC/H264/CIF/12 fps/Constant Bit Rate/0.125 Mbps. Any ideas what I'm doing wrong? -------------- next part -------------- An HTML attachment was scrubbed... URL: From tencio2001 at skku.edu Thu Nov 27 04:20:16 2014 From: tencio2001 at skku.edu (Yongwoo Lee) Date: Thu, 27 Nov 2014 21:20:16 +0900 Subject: [Live-devel] a Question about live555MediaServer Message-ID: <04ed01d00a3c$80aea8b0$820bfa10$@skku.edu> Hi. I have a question to ask about live555MediaServer. I would like to make a slight modification to the RTP packets. (except for the header) Should I take a look at all the code in the liveMedia folder? I think InputFile.cpp handles reading a file, and MultiFramedRTPSink.cpp packs the packets. Is that right? And also, in the code, what does a "frame" mean? Is it the same as a "video frame"? Best regards. Yongwoo Lee -------------- next part -------------- An HTML attachment was scrubbed... URL: From finlayson at live555.com Thu Nov 27 08:39:09 2014 From: finlayson at live555.com (Ross Finlayson) Date: Fri, 28 Nov 2014 00:39:09 +0800 Subject: [Live-devel] a Question about live555MediaServer In-Reply-To: <04ed01d00a3c$80aea8b0$820bfa10$@skku.edu> References: <04ed01d00a3c$80aea8b0$820bfa10$@skku.edu> Message-ID: <084AC989-1DDE-4CA9-83C3-33E6EE79A317@live555.com> > I have a question to ask about live555MediaServer. > I would like to make a slight modification to the RTP packets.
(except for the header) What specific change(s) do you want to make to RTP packets? (Note once again that we support only protocols that are described by IETF standard RFCs (or standards-track Internet-Drafts).) > And also, in the code, what does a "frame" mean? Is it the same as a "video frame"? Often, but not always. In the code, "frame" refers to a discrete unit of data that can be packed into RTP packets. (In particular, for H.264 or H.265 video, a "frame" - as used by our code - will actually be a NAL unit, not (necessarily) an entire "picture".) Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From ilwoonam75 at paran.com Fri Nov 28 01:17:24 2014 From: ilwoonam75 at paran.com (Ilwoo Nam) Date: Fri, 28 Nov 2014 18:17:24 +0900 (KST) Subject: [Live-devel] Connection problem to some rtsp server Message-ID: <20141128181724.HM.U000000000I54zC@bosinnam.wwl1726.hanmail.net> An HTML attachment was scrubbed... URL: From finlayson at live555.com Fri Nov 28 03:01:30 2014 From: finlayson at live555.com (Ross Finlayson) Date: Fri, 28 Nov 2014 19:01:30 +0800 Subject: [Live-devel] Connection problem to some rtsp server In-Reply-To: <20141128181724.HM.U000000000I54zC@bosinnam.wwl1726.hanmail.net> References: <20141128181724.HM.U000000000I54zC@bosinnam.wwl1726.hanmail.net> Message-ID: <6C900F81-5098-42A9-9533-DF7EAE5DCE49@live555.com> Thanks for the report. I have reproduced the problem (with your server), and hope to have a fix available soon. (Please leave your server online for another day or so, so I can use it for testing. Thanks.) Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed...
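Ross's point earlier in this digest, that for H.264/H.265 a "frame" in the code is a NAL unit rather than a whole picture, can be made concrete. The helper below is purely illustrative (not live555 code): it counts NAL units in an Annex-B byte stream by scanning for start codes.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Count NAL units in an Annex-B H.264/H.265 byte stream by scanning for
// the 3-byte start code 00 00 01. (A 4-byte 00 00 00 01 start code ends
// with the same 3-byte pattern, so it is counted exactly once as well.)
static int countNalUnits(const std::vector<unsigned char>& bs) {
    int count = 0;
    for (std::size_t i = 0; i + 2 < bs.size(); ++i) {
        if (bs[i] == 0 && bs[i + 1] == 0 && bs[i + 2] == 1) {
            ++count;
            i += 2; // skip past this start code
        }
    }
    return count;
}
```

This also explains why a single "picture" can produce several "frames" in the code: an access unit commonly carries SPS, PPS, and slice NAL units, each delivered separately.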
URL: From finlayson at live555.com Fri Nov 28 15:37:55 2014 From: finlayson at live555.com (Ross Finlayson) Date: Sat, 29 Nov 2014 07:37:55 +0800 Subject: [Live-devel] Connection problem to some rtsp server In-Reply-To: <20141128181724.HM.U000000000I54zC@bosinnam.wwl1726.hanmail.net> References: <20141128181724.HM.U000000000I54zC@bosinnam.wwl1726.hanmail.net> Message-ID: <24D925D8-C1A9-48D7-B03A-CC3342A2E145@live555.com> The problem with your server is that it has a bug that causes it to add an extra line of "whitespace" at the end of its "DESCRIBE" response. This extra line of "whitespace" was confusing our RTSP client code. (The old RTSP client was not affected by this because it was not set up to handle "pipelined" RTSP responses.) I have now installed a new version - 2014.11.28 - of the "LIVE555 Streaming Media" code that fixes this problem. Thanks again for the report. Ross Finlayson Live Networks, Inc. http://www.live555.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL:
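The tolerant behavior described above, skipping a buggy server's stray blank line before the next pipelined response, can be sketched as follows. The helper is hypothetical, written for illustration; it is not the actual fix that shipped in version 2014.11.28.

```cpp
#include <cassert>
#include <cstddef>
#include <string>

// Skip stray CR/LF bytes that a buggy server may emit between pipelined
// RTSP responses, so that parsing resumes at the next "RTSP/..." status
// line instead of choking on an unexpected blank line.
static std::string skipInterResponseWhitespace(const std::string& buf) {
    std::size_t i = 0;
    while (i < buf.size() && (buf[i] == '\r' || buf[i] == '\n')) ++i;
    return buf.substr(i);
}
```

A well-formed response already ends with CRLF CRLF, so a parser that consumes exactly that terminator and then applies a skip like this before the next status line tolerates the extra "whitespace" line without misparsing conforming servers.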