I've been noticing a strange thing for a while on Windows 8/8.1 and the equivalent server versions. The issue occurs when I'm using a Slim Reader/Writer Lock (SRWL) exclusively in exclusive mode (as a replacement for critical sections). What happens is, when a thread that has just unlocked a SRWL exits cleanly, immediately after unlocking the lock, sometimes threads that are waiting on the lock do not get woken and none of them acquire the lock.
At first I spent ages thinking that this was some kind of subtle bug in my LockExplorer tool as initially the problem only manifested itself during test runs that were using LockExplorer to detect potential deadlocks. Recently, however, I've been seeing the identical problem in normal programs being run with no clever lock instrumentation going on.
I still think it's more likely my bug than the operating system's but I recently found this Knowledge Base article, #2582203, "A process that is being terminated stops responding in Windows 7 or in Windows Server 2008 R2" which says "This issue occurs because the main thread that terminates the process tries to reactivate another thread that was terminated when the thread released an SRW lock. This race condition causes the process to stop responding.". This sounds suspiciously like the problem that I'm seeing, though it doesn't match exactly.
The problem I see is that other threads waiting on the SRWL don't get notified that the lock is now unlocked. This locks my program up during shutdown purely because I have code that waits for these threads to exit cleanly and they can't as they're waiting on an unlocked SRWL. It's not a deadlocked attempt at recursive SRWL acquisition as I have debug code in my wrapper class which detects such behaviour. It's not an orphaned locked SRWL as breaking the hung process into the debugger and immediately continuing it causes one of the threads that's blocking on the lock to immediately acquire it and continue.
I haven't applied the hotfix yet, partly because my systems are Win 8 rather than Win 7 and partly because I'm not yet convinced that this is the issue.
As a workaround I've added a call to Sleep() at the point where my thread class allows a thread to exit. This seems to reduce the instances of the problem, which leads me to believe it's a race condition of some kind.
I've written an article for Overload, one of the ACCU's journals. It's based on my Efficient Multi-Threading blog post from a few weeks ago. Chris Oldwood pointed out to me that the object described in Efficient Multi-Threading was similar to an Active Object which steals a calling thread to do its work rather than using one of its own; I agreed, and he then suggested that I write it up for the ACCU journal. You can read the article here.
- Bug fixes and potential fixes to all code that used function level non-trivial static objects. These have all been replaced with alternative designs as they were not thread safe and could, in very unlikely scenarios, cause problems. See here for more details.
- Bug fix for JetByteTools::Win32::IQueueTimers::SetTimerWithRefCountedUserData(). We now wrap the SetTimer() call in a try/catch block and Release() the reference we created with AddRef() if there's an exception.
- Bug fix to JetByteTools::IO::CBufferChain::AddDataToBufferChain() to allow for passing in an empty list of buffers to add.
- Bug fix to JetByteTools::IO::CBufferChain::ConsoliateData() to ensure that the buffers that we attempt to consolidate are writeable.
- Bug fix in JetByteTools::Socket::TConnectionManagerBase::ReleaseSocket() to ensure that exceptions are trapped appropriately.
- Bug fix in JetByteTools::Service::CShutdownHandler::Run(). Changed the way we respond to shutdown and pause events (if enabled) so that we only respond to each event once rather than repeatedly routing shutdown events to the service whilst the event is set. This was causing non-paged pool exhaustion in some situations when the service failed to shut down.
- Added JetByteTools::Win32::CActivatableObject. See here for details.
- Added JetByteTools::Win32::CalculateRequiredPrecision() which takes a double value and returns the optimum precision required to format the value for display.
- Added Intrusive containers. A set, map and multi-map which can be used to replace STL containers with ones which do not need to do memory operations during insertion and removal.
- Added JetByteTools::IO::IBufferBase::GetSpace() which returns the available space in a buffer (size - used).
- Added JetByteTools::IO::IBuffer::GetTotalSpace() which returns all of the available space in a buffer; this is GetSpace() plus any space that is currently unused at the front of the buffer.
- Added JetByteTools::IO::IBuffer::RemoveSpaceAtFront() which compacts a buffer by removing any unused space at the front and copying any data back towards the front of the buffer.
- Added JetByteTools::IO::CSequencedBufferCollection, a collection based on JetByteTools::Win32::TIntrusiveRedBlackTree which stores the buffers in sequence number order.
- Added JetByteTools::Socket::CFilterDataBase, a base class for filter data, and JetByteTools::Socket::CReadOnlyFilterData, a base class for filters which only work on one side of the connection.
- Added JetByteTools::Socket::CStreamSocketConnectionFilterBase, a base class for filters which use the new filter data classes.
- Added lots of functionality to JetByteTools::Socket::CFilteringStreamSocketConnectionManagerBase to allow for more complete filtering.
- Added a datagram flow control filter, JetByteTools::Socket::CFlowControlDatagramSocketConnectionFilter; see here for more details.
- Completely redesigned the OpenSSL and SChannel filters so that they use the new JetByteTools::Socket::CFilterDataBase filter data base classes.
- Changed all versions of JetByteTools::Win32::ToString() which take a double or long double so that they also accept a precision value, which defaults to 6.
- Changed JetByteTools::Win32::CreateDirectoryIfRequired() to return a bool which indicates if the directory was created.
- Changed JetByteTools::Win32::CreateDirectoriesIfRequired() to return a count of the number of directories created.
- Changed JetByteTools::Win32::CThreadPool::OnThreadPoolThreadStopped() so that it calls JetByteTools::Win32::IMonitorThreadPool::OnThreadPoolThreadStopped() before deleting the thread and decrementing the counter. This makes it possible to clean up the monitor when the final thread is deleted (and the wait on the counter returns).
- Changed JetByteTools::Win32::TLockableObject so that it only uses an SRWL if we're building for Windows 7 or later. This is so that the TryEnter() API is available for the SRWL.
- We now use JetByteTools::Win32::CombinePath() where possible rather than combining paths by hand.
- Changed JetByteTools::Win32::CCallbackTimerWheel to use the new intrusive containers.
- Changed JetByteTools::IO::IBuffer::MakeSpaceAtFront() so that it takes an optional value which indicates how much space must be available at the rear of the buffer. This allows us to create space at the front only if the buffer has enough room for the space we need at both the front and the rear.
- Changed JetByteTools::IO::CBufferChainStoresNulls so that you can pass an instance of JetByteTools::IO::IAllocateBuffer to its constructor; it can then use that to allocate a new buffer if the number of 'null buffers' stored becomes too great. This effectively removes the limit on the number of 'null buffers' that can be stored between non-null buffers.
- Changed JetByteTools::IO::CSortedBufferChain so that it caches whether the next buffer can be retrieved, which reduces the work needed to determine if the next buffer is available.
- Changed the implementation of JetByteTools::Socket::CCompressingStreamSocketConnectionFilter so that it uses the new filter base classes.
- Changed the implementation of JetByteTools::Socket::CFlowControlStreamSocketConnectionFilter so that it uses the new filter base classes.
I don't believe that UDP should require any flow control in the sending application. After all, it's unreliable and it should be quite OK for any stage of the route from one peer to another to decide to drop a datagram for any reason. However, it seems that, on Windows at least, no datagrams will be dropped between the application and the network interface card (NIC) driver, no matter how heavily you load the system.
Unfortunately most NIC drivers also prefer not to drop datagrams, even if they're overloaded (see here for details of how UDP checksum offloading can cause a NIC to make non-paged pool usage considerably higher). This can lead to situations where a user mode application can bring a box down due to non-paged pool exhaustion simply by sending as many datagrams as it can as fast as it can. It's likely that it's actually poorly implemented device drivers that are at fault here, by failing to gracefully handle situations where non-paged pool allocations fail, but it's the application that is putting these drivers into a situation where they could fail in such a catastrophic manner.

Since the NIC driver and the operating system will not drop datagrams it's down to the application itself to do so if it senses that it's overloading the NIC. I've recently added code to The Server Framework to allow you to configure behaviour like this so that an application can prevent itself from exhausting non-paged pool due to pending datagram writes.
Performance is always important for users of The Server Framework and I often spend time profiling the code and thinking about ways to improve performance. Hardware has changed considerably since I first designed The Server Framework back in 2001 and some things that worked well enough back then are now serious impediments to further performance gains. That's not to say that the performance of The Server Framework today is bad, it's not; it's just that in some situations and on some hardware it could be even better.
One of the things that I changed last year was how we dispatch operations on a connection. Before the changes multiple I/O threads could block each other whilst operations were dispatched for a single connection. This was unnecessary and could reduce the throughput on all connections when one connection had several operation completions pending. I fixed this by adding a per-connection operation queue and ensuring that only one I/O thread at a time was ever processing operations for a given connection; other I/O threads would simply add operations for that connection to its operation queue and only process them if no other thread was already processing for that connection.
Whilst those changes deal with serialising multiple concurrent I/O completion events on a connection there's still the potential for multiple threads accessing a given connection's state if non-I/O threads are issuing read and write operations on the connection. Not all connections need to worry about the interaction of a read request and a completion from a previous request, but things like SSL engines may care. We have several areas in The Server Framework, and more in complex servers built on top of the framework, where it's important that only a single operation is being processed at a time. Locks are currently used to ensure that only a single operation is processed at a time, but this blocks other threads. What's more, most of the code that cares about this also cares about calling out to user callback interfaces without holding locks. Holding locks when calling back into user code is an easy way to create unexpected lock inversions which can deadlock a system.
In the diagram above, given the threads 1, 2, 3 & 4 with four work items A, B, C & D for a single object, the threads are serialised and block until each can process its own operation on the object. This is bad for performance, bad for contention and bad for locality of reference as the data object can be bounced back and forth between different CPU caches.

The code that I describe in the rest of this article forms the basis of a new "single threaded processor" object which enables efficient command queuing from multiple threads, where commands are only ever processed on one thread at a time, minimal thread blocking occurs, no locks are held whilst processing and an object tends to stay on the same thread whilst there's work to be done.