Rants

MiniDumpWriteDump now mostly useless for in-process use

I’ve been using the MiniDumpWriteDump() API from DbgHelp.dll for 20 years or so. It has proven to be a useful diagnostic tool and I use it in all manner of places, including many where others might simply use an assert(). It’s a heavy-weight debugging tool, but rather than just throwing an exception because things that shouldn’t happen have happened, I often also generate a dump file so that I can get far more data than I could ever log or report in another way.
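
As a reminder of the kind of usage I’m talking about, here’s a minimal sketch of generating a dump from inside the running process. It isn’t code from any of my systems; the function name and output path are hypothetical and error handling is kept to a bare minimum.

    // A minimal, hypothetical sketch of in-process dump generation.
    #include <windows.h>
    #include <dbghelp.h>

    #pragma comment(lib, "dbghelp.lib")

    bool WriteInProcessDump(EXCEPTION_POINTERS *pExceptionPointers)
    {
       HANDLE hFile = ::CreateFile(
          L"crash.dmp",                          // hypothetical output path
          GENERIC_WRITE,
          0,
          nullptr,
          CREATE_ALWAYS,
          FILE_ATTRIBUTE_NORMAL,
          nullptr);

       if (hFile == INVALID_HANDLE_VALUE)
       {
          return false;
       }

       MINIDUMP_EXCEPTION_INFORMATION exceptionInfo {};

       exceptionInfo.ThreadId = ::GetCurrentThreadId();
       exceptionInfo.ExceptionPointers = pExceptionPointers;
       exceptionInfo.ClientPointers = FALSE;     // the pointers are valid in this process

       // Write a dump of the current process from within the current process...

       const BOOL ok = ::MiniDumpWriteDump(
          ::GetCurrentProcess(),
          ::GetCurrentProcessId(),
          hFile,
          MiniDumpWithDataSegs,                  // dump type; pick to taste
          pExceptionPointers ? &exceptionInfo : nullptr,
          nullptr,
          nullptr);

       ::CloseHandle(hFile);

       return ok != FALSE;
    }

Everything here runs inside the process that’s misbehaving, which is exactly the in-process usage that the title is complaining about.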

Old and cranky me from 20 years ago

Back in 2004 I wrote this; I wonder why I used to make some people unhappy… I’m starting to believe that, at 37, I must now be old and cranky because, to be quite honest with you, if you’re not defining concepts and abstractions so that the actual main-line business logic code that you write is clear and easy to understand, then you’re just not doing it right.

How little things kick you out of the zone

I had an internet outage this morning. This shouldn’t have been much of a problem for me as all of the code that I needed to be working on is on my local git server and all of the dependencies are local. Or so I thought. The client has a C# shim layer that builds as part of the C++ server build. It relies on NuGet to grab some components from somewhere for some reason.

Setting the preferred NUMA node for a Windows Service (and making it work after a reboot)

When your machine has multiple NUMA nodes it’s often useful to restrict a process to using just one for performance reasons. It’s sometimes hard to fully utilize multiple NUMA nodes and, if you get it wrong, it can cost you in performance: the nodes need to keep their caches consistent and may have to access memory over a slower link than the memory that’s closer to the node, and these things can be relatively expensive.
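
If the service configuration route is what you’re after, here’s a minimal sketch of one way to set a preferred NUMA node for a service programmatically, using ChangeServiceConfig2() with SERVICE_CONFIG_PREFERRED_NODE; since this changes the service’s stored configuration it should survive a reboot. The function name, service name and node number are hypothetical and error reporting is omitted.

    // A minimal, hypothetical sketch: store a preferred NUMA node in a
    // service's configuration so that it persists across reboots.
    #include <windows.h>
    #include <winsvc.h>

    #pragma comment(lib, "advapi32.lib")

    bool SetServicePreferredNode(const wchar_t *pServiceName, USHORT node)
    {
       bool ok = false;

       SC_HANDLE hSCM = ::OpenSCManager(nullptr, nullptr, SC_MANAGER_CONNECT);

       if (hSCM)
       {
          SC_HANDLE hService = ::OpenService(hSCM, pServiceName, SERVICE_CHANGE_CONFIG);

          if (hService)
          {
             SERVICE_PREFERRED_NODE_INFO info {};

             info.usPreferredNode = node;      // the NUMA node to prefer
             info.fDelete = FALSE;             // TRUE would remove the setting

             ok = (::ChangeServiceConfig2(hService, SERVICE_CONFIG_PREFERRED_NODE, &info) != FALSE);

             ::CloseServiceHandle(hService);
          }

          ::CloseServiceHandle(hSCM);
       }

       return ok;
    }

You’d call something like SetServicePreferredNode(L"MyService", 0) from an installer or an admin tool that runs with enough rights to change the service configuration.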

VS2022 Version 17.6.0 Preview 3.0 - Standard Library Modules warnings (std.ixx)

So, this morning I’m back from my Easter break and working on some code for a client; the first thing I do is kick off my CI build, and things start failing. It seems that my “cunning plan” to have my CI build use the preview version of Visual Studio 2022 whilst the client uses earlier versions has paid off again… We build with all warnings enabled and treat warnings as errors.

Unsupported protocol - and the geeks score another own goal...

As of the latest Chrome, Edge, Opera, and Firefox updates, all of my ‘obsolete’ hardware (routers, NAS drives, network switches, etc.) is inaccessible as it doesn’t support TLS 1.2. I’m unlikely to be alone in this. I can understand the technical decision but IMHO it’s wrong and, actually, pretty stupid to make it more than a click-through warning to access these obsolete devices on my local subnet. Sure, ban connections to other subnets (that would cause me pain too as I manage some stuff via a VPN), but 90% of users would be fine.

Are all fully patched Windows boxes really vulnerable to this easy UDP DDOS attack?

UPDATED: 23 August 2021, see here. As I mentioned a while ago, it seems there’s a strangely fatal bug in the Windows networking stack at present. This manifests as massive non-paged pool memory usage if a process creates a UDP socket, binds it to an address and fails to read from it faster than other people are writing to it. The issue appears to be present on all current Windows operating systems. It is NOT present on Windows Server 2012 R2 if recent patches have NOT been applied, but IS present as soon as the box is patched up to date… My test box was patched up until March 2020 and ran fine; as soon as it was patched to June 2020 it started to behave badly.
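
The scenario really is as simple as it sounds. Something like the following sketch, which is not my actual test code and uses a hypothetical port number, is enough to set it up: bind a UDP socket and never read from it, then have another machine send datagrams at the port as fast as possible and watch non-paged pool usage in something like Task Manager or poolmon.

    // A minimal, hypothetical sketch of the problem scenario: a UDP socket
    // that is bound but never read from while remote peers keep sending to it.
    #include <winsock2.h>
    #include <ws2tcpip.h>

    #pragma comment(lib, "ws2_32.lib")

    int main()
    {
       WSADATA data {};

       if (::WSAStartup(MAKEWORD(2, 2), &data) != 0)
       {
          return 1;
       }

       SOCKET s = ::socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);

       if (s == INVALID_SOCKET)
       {
          return 1;
       }

       sockaddr_in addr {};

       addr.sin_family = AF_INET;
       addr.sin_port = htons(5050);            // hypothetical port
       addr.sin_addr.s_addr = INADDR_ANY;

       if (::bind(s, reinterpret_cast<sockaddr *>(&addr), sizeof(addr)) == SOCKET_ERROR)
       {
          return 1;
       }

       // Deliberately never call recvfrom(); datagrams queue up inside the
       // stack while senders keep writing to the port.
       ::Sleep(INFINITE);

       return 0;
    }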

Strangely fatal UDP issue on Windows...

UPDATED: 23 August 2021, see here. One of my clients runs game servers in the cloud. They have an AWFUL lot of them and have run them for a long time. Every so often they have problems with DDOS attacks on their servers. They have upstream DDOS protection from their hosting providers, but this takes a while to recognise the attacks and so there’s usually a period when the servers are vulnerable.

It must be about time for a 'the state of C++ tooling' rant...

I tend to develop code with JITT (Just In Time Testing); it’s like TDD when I’m doing it, but it doesn’t always get done. What does get done, generally, is the “first test” for each class. This ensures that subsequent testing is possible, as the code doesn’t have hidden dependencies, and it gives me a test harness that’s ready to go when I find that I need it. More complex code ends up being TDD; easier bits end up as JITT, where the tests are only written when I’ve wasted time banging my head on the desk trying to debug problems the “old fashioned way”.