Major Vista Overlapped I/O change


I'm still reading the Richter book, highly recommended even if you've read one of the earlier versions. In fact it's possibly MORE highly recommended IF you've read one of the earlier versions... It seems that lots of things have changed a little, and some things have changed a lot. Unfortunately the book doesn't detail the actual changes. Note to publishers: I'd PAY for a slim book that DOES detail the changes between the APIs that are being discussed...

Take this throwaway line in the Cancelling Queued Device I/O Requests section of the Asynchronous Device I/O chapter of the latest book: "When a thread dies, the system automatically cancels all I/O requests issued by the thread, except for requests made to handles that have been associated with an I/O completion port." This is then clarified later in the chapter in a note which points out that, prior to Windows Vista, if you associated a device with an I/O completion port and then issued overlapped I/O requests on it, you had to make sure that the thread that issued the requests remained alive until the I/O requests had completed. Not anymore! Vista now allows a thread to issue overlapped I/O requests and exit, and the system will still process the requests and queue the completions to the completion port. This makes perfect sense and will simplify writing general purpose I/O completion port code.

When I designed The Server Framework I decided that I couldn't require users of the framework to use my own brand of thread start and thread termination functions so that I could keep track of device I/O requests that their threads may have issued. What's more, the first server that I designed with the framework had a flexible thread pool for database access and could easily create a thread, issue an I/O request and then shut the thread down before the request completed. To avoid these issues I added code that "marshalled" all I/O requests into the I/O thread pool (the pool of threads that serviced the I/O completion port, which was fixed in size and existed for as long as the I/O completion port existed). Thus I/O requests were passed across to the I/O threads and issued from there, which avoided the "thread termination cancels pending I/O" issue. It seems that I can improve performance again by removing this indirection if we're running on Vista or above...

However, it seems that the MSDN documentation hasn't caught up yet; see the documentation for WSASend(), which says this in a note in the Overlapped Socket I/O section: "Note All I/O initiated by a given thread is canceled when that thread exits. For overlapped sockets, pending asynchronous operations can fail if the thread is closed before the operations complete. For more information, see ExitThread."

If someone locates some information on an MSDN page that confirms Richter's position then please let me know. In the meantime I'll put some changes into the 5.3 release of The Server Framework and start running some tests. This should result in some quite significant performance improvements for some server designs.


Mark Russinovich seems to say the same thing at [link]. Great blog btw!

Thanks for the link, and glad you enjoy the blog.

Hi, I'm interested in overlapped I/O for network operations. Was it improved in .NET 3.5?
Also what do you think about xf.server component for .net network programming?


I'm sure I recognise your email address from some previous xf.server comments that were little more than spammed links to their site... But maybe not.

Anyway, there are articles on MSDN about I/O changes in .Net 3.5; I suggest you look there. I tend to focus on unmanaged socket I/O.

As for the xf.server thing, well, there's hardly any documentation, so it's quite hard to know exactly what it does. The website seems mostly empty hype and puff and doesn't give me confidence in the company behind the project; there's no blog, as such, so I have no idea who the guys who wrote it are or how they think. In summary, I wouldn't buy that kind of component. Still, I guess it's early days for them. They've only been spamming me for a month or so.
