[Live-devel] Correct use of 'select' to avoid packet loss in Linux+BSD; correct use of WSAGetLastError and codes
Aurélien Nephtali
aurelien at sitadelle.com
Wed Apr 15 09:12:13 PDT 2009
On Wed, Apr 15, 2009 at 4:45 PM, Ross Finlayson <finlayson at live555.com> wrote:
>> On Thu, Apr 9, 2009 at 9:52 AM, Ross Finlayson <finlayson at live555.com>
>> wrote:
>>>
>>> Thanks for the suggestion; I like this.
>>>
>>> Unless anyone sees a problem with this, I will include it in the next
>>> release of the software.
>>
>> Hello,
>>
>> On Linux, select() is implemented on top of poll().
>
> What do you mean by this?
>
I mean that on Linux (and on FreeBSD, at least), select() is just a
wrapper around poll(): both end up calling the same per-socket poll
routine, such as the datagram_poll() quoted below.
>
>> In net/core/datagram.c
>> (linux 2.6.26 sources), in datagram_poll() you can see the lines that
>> check for readable event on the socket :
>>
>> /* readable? */
>> if (!skb_queue_empty(&sk->sk_receive_queue) ||
>>     (sk->sk_shutdown & RCV_SHUTDOWN))
>>         mask |= POLLIN | POLLRDNORM;
>>
>> The event is set if the socket input buffer is not empty.
>
> So what is your conclusion? Are you implying that we should (on Unix
> systems) be using "poll()" instead of "select()", and that if we use
> "poll()", we won't need the optimization that Bryan Moore proposed? Or are
> you implying that we don't need this optimization even if we use
> "select()"??
I always use poll() on Linux because select() is just a wrapper around
poll() there, and because poll() does not have select()'s FD_SETSIZE
limit on the number of sockets it can watch. The only problem is that
poll() is not available on Windows.
In conclusion you should use poll() for Unix and select() for Windows.
The FIONREAD method is useless with poll()/select() under Unix because
there is no internal flag recording that the socket has events; the
socket's status is re-checked at each call.
I don't know how select() is implemented on Windows.
--
Aurélien Nephtali