Assert is evil

| 13 Comments | 1 TrackBack

Christopher Baus has been away, and whilst he was away his HTTP proxy shut down due to a bug which triggered an assert, leaving his site unavailable.

Personally I abhor assert; but I completely agree with what Chris is talking about.

This CodeProject link explains what most people mean by assert. My problem with this view of the world is that, in the majority of cases, these tests go away in production code. In my opinion that's simply madness; and before you tell me that they have to be compiled out of release builds for performance reasons I'd like to see the reports from all the profiling that you did to come to that conclusion.
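The difference can be sketched like this; both function names are mine, purely for illustration, not from the CodeProject article:

```cpp
#include <cassert>
#include <stdexcept>
#include <string>

// Standard C assert: with NDEBUG defined (the usual release-build
// setting) the check expands to nothing and simply isn't there.
int DebugOnlyDouble(int value)
{
   assert(value >= 0);   // compiled out of release builds
   return value * 2;
}

// The alternative argued for here: a check that survives every build
// and reports the failure rather than silently disappearing.
int AlwaysCheckedDouble(int value)
{
   if (value < 0)
   {
      throw std::invalid_argument(
         "AlwaysCheckedDouble() - value must be non-negative: " +
         std::to_string(value));
   }
   return value * 2;
}
```

In a release build the first function happily doubles a negative number; the second tells you, loudly, in every build.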

Assertions are often used for function argument validation and for object/system state validation. You've been passed x, is the caller conforming to your expectations of what x should be able to do? The system is about to do y and it can only do it when x and z are true, are they? Etc.
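Both kinds of validation can be written as checks that throw instead of asserting; a minimal sketch, where CConnection and all its members are hypothetical names invented for this example:

```cpp
#include <cstddef>
#include <stdexcept>

class CConnection
{
public:
   void Open() { m_open = true; }
   void Close() { m_open = false; }

   void Send(const char *pData, size_t length)
   {
      // State validation: sending is only legal once the connection is open.
      if (!m_open)
      {
         throw std::logic_error("CConnection::Send() - connection is not open");
      }
      // Argument validation: the caller must supply a buffer.
      if (!pData)
      {
         throw std::invalid_argument("CConnection::Send() - pData cannot be null");
      }
      // ... transmit length bytes ...
      m_bytesSent += length;
   }

   size_t BytesSent() const { return m_bytesSent; }

private:
   bool m_open = false;
   size_t m_bytesSent = 0;
};
```

The checks run in every build, and a test harness can deliberately violate them and verify the exception.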

It seems to me that often asserts are used to patch over poor design. Often they're thrown into a system haphazardly, much like debug trace statements; we're debugging y because it's failing due to x being wrong, let's assert that x is how we want it in these places so we can work out where it's wrong... Or, this pointer that we're passed can never be null so let's assert it isn't... Or, this string should only ever contain numeric characters; let's assert that it does... Etc.

My first problem is that these checks, if they're important, should be just as important in release code. Even if you have 100% code coverage from your unit tests and the tests actually test 100% of the possible permutations of how your code can be executed you still want to know why that pointer is null if it is null in your release build on one Sunday evening, in July. In fact you're more likely to want to know as much detail as you can get if the "assertion" fails when the code is live... So, to me, assertions that compile away to nothing in release builds are a waste of time; they're premature optimisation of the worst kind.

My second problem is that use of asserts tends to encourage poor design; rather than designing an object that can't be put into inappropriate states the designer simply adds asserts to detect the inappropriate states during debugging. In these cases you don't need asserts, you need a better design. Likewise with parameter validation: if you're using C++ then there's no excuse for a function that takes a pointer and then immediately asserts that the pointer is not null; pass a reference! In C, if it's an error to pass a null pointer then it's always an error, and decent error handling that's present all the time is better than code that dissolves away in release builds.
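The pointer-versus-reference point can be sketched as follows; both function names are mine, for illustration only:

```cpp
#include <cstddef>
#include <string>

// The pointer version invites a null check (or an assert) that can
// only ever fail at runtime.
size_t LengthViaPointer(const std::string *pName)
{
   // assert(pName);  // the "traditional" approach: gone in release builds
   if (!pName)
   {
      return 0;       // or throw; either way, a runtime decision
   }
   return pName->length();
}

// The reference version makes the "cannot be null" rule part of the
// signature; the caller has to produce a valid object just to call it.
size_t LengthViaReference(const std::string &name)
{
   return name.length();
}
```

With the reference version there is simply nothing left to assert.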

My third problem is that assertions are often hard to test for; try and write a unit test that proves that a "traditional" assert works... It's complicated because assert ends the program...
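By contrast, a check that throws is trivial to unit test: call the function with bad input and require the exception. A hedged sketch, where Percentage is a made-up example function:

```cpp
#include <stdexcept>

// A hypothetical precondition check that throws rather than asserts.
int Percentage(int value)
{
   if (value < 0 || value > 100)
   {
      throw std::out_of_range("Percentage() - value must be 0..100");
   }
   return value;
}

// A unit test can prove the check fires: supply bad input and require
// the exception. The equivalent call through a traditional assert
// would terminate the whole test run instead.
bool PercentageRejectsBadInput()
{
   try
   {
      Percentage(101);
   }
   catch (const std::out_of_range &)
   {
      return true;    // the check fired, as expected
   }
   return false;      // the check was missing or compiled away
}
```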

So, "traditional" asserts are evil, but what Chris's system does is good. If your internal validation code fails then pretty much all you can do is shut down. It may be appropriate to try and produce a lot of information about why you're shutting down, it may not be. In C++ I find that exceptions are very useful here. Most of the time you will want to try to shut down cleanly, if you can. I agree that there are situations where attempting a clean shutdown is worse than just falling over at the point of failure but, in my opinion, they're rarer than some people would have you believe.

Exceptions can be used in almost all places where, after designing your way out of many of the problems, a traditional assert might still seem appropriate. Using exceptions like this means that all return values are checked, always, but that you, the programmer, don't usually need to worry about the stuff that "can't fail". Code like this:

void CEvent::Set()
{
   if (!::SetEvent(m_hEvent))
   {
      throw CWin32Exception(_T("CEvent::Set()"), ::GetLastError());
   }
}

means that failure of SetEvent() is one less thing to worry about. It's not going to fail and there's nothing you can do if it does but you DO need to know about it if it DOES fail because it probably means something is seriously wrong somewhere.

[Updated 3rd October]
See the following follow-up postings:

1 TrackBack

Asserting Oneself from Games from Within on October 3, 2005 4:28 AM

I've been following the discussion on the evils of assert started by Len Holgate. Poor old assert was getting beaten up from every side, so I'm going to have to step forward and defend it. Read More


I think you live just down the road from me, so I had better not disagree too much. But I couldn't resist the temptation completely...


I don't think we disagree at all really. As you point out at the start of your article, everyone has a slightly different definition of what an assert is. It's the traditional C style asserts that I object to; the .Net stuff that you describe sounds very similar to how I believe C++ 'assertions' should work...

I'll write a more detailed response a little later; I guess I'll have to do it here as I couldn't work out how to leave a comment on your site.

Sorry about that. Adding comments to my blog is something I started to do, but never finished. Citydesk doesn't have any direct facility for this. I've updated my article to make your point about C-style asserts versus .NET-style assertions.

Oh, not again.

"Your comment could not be submitted due to questionable content"

I disagree with most of this entry. It mostly criticises the implementation of assert and not the idea.

If you are talking about C then why do you suggest C++ exceptions as a solution?

Who catches the CWin32Exception?

What is a CWin32Exception?
(My MSDN Library does not know)

Some food for thought

Why "just" throwing an exception is not a magic solution

"That's only part of the problem. If a class' invariants no longer hold,
it is quite possible that its destructor simply cannot be safely called.
(Java avoids this by not having destructors:-). Seriously, what can you
guarantee about code in a finally block if the internal state of one or
more classes is corrupted?) "

C's assert is available and is used as the 'default model' by lots of C++ code.

Most of these kind of exceptions would only be caught at process or thread boundaries. You'd catch, log, and shut down.
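That catch, log, and shut down pattern might look something like this sketch; RunServer is a hypothetical boundary function, not code from the post:

```cpp
#include <exception>
#include <iostream>
#include <stdexcept>

// Hypothetical thread/process boundary handler: the exceptions
// discussed here propagate all the way up, get logged, and the
// program shuts down as cleanly as it can.
int RunServer(void (*serverLoop)())
{
   try
   {
      serverLoop();   // normal operation; the "can't fail" checks live inside
      return 0;
   }
   catch (const std::exception &e)
   {
      std::cerr << "Fatal: " << e.what() << " - shutting down" << std::endl;
      return 1;       // report failure to the host environment
   }
}
```

Nothing below the boundary needs a catch block; the checks stay in, and any failure surfaces in one place.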

I could have used std::exception but it's not the most suitable in this situation (though it can be found on MSDN...) If you were using the code that was shown in the example then you would have the documentation for the CWin32Exception class. The point is not that the example is throwing a particular kind of exception, it's that it's bothering to always test for failure, not just during debug builds, and it's doing more than just immediately terminating the program when the test fails.

Interesting links. Like most of these things it depends on the kind of code you're writing and your target audience. If you're writing the STL or other library code that will be used in lots of varied circumstances that are not under your control then you need to work in a different way to when you're building applications where you have control over how the code is used. If you're responsible for building assertions into STL containers then I can see why your performance requirements will dictate that they go away completely for some release builds. Most code isn't like that.

As for calling destructors when the class invariants no longer hold; I think we're getting off topic a little. I tend to aim to keep a class's invariants correct by using exceptions to prevent the class getting into an invalid state in the first place.

The standard C assert is inappropriate for testing and the checks disappear in production code. A non-standard assert requires that you couple all of your code to your non-standard assertion code. Throwing exceptions is usually a better solution; the tests stay in production code and there's no coupling except to a language feature. If your system has got to the point where you've trashed it so much that throwing an exception is no longer safe then you have more problems than deciding on how to assert; your code isn't checking enough things in the first place...

>> If your system has got to the point where you've trashed it so much that throwing an exception is no longer safe then you have more problems than deciding on how to assert; your code isn't checking enough things in the first place...

Very bad point.

I should use asserts only when my program is bug-free?!

The "your code" is also dubious. If I am writing a library function and the caller calls with a null pointer (and the docs say it must be non-null) then I CANNOT do any recovery. The calling code has a bug and can do crazy things with the pointer (even use it in destructor .... not unlikely).

> As for calling destructors when the class
> invariants no longer hold; I think we're getting
> off topic a little.

What is your topic? :-) Maybe we are confusing things by not being clear about the topic: checking invariants/preconditions or throwing documented exceptions.

There is an interesting thread in comp.lang.c++.moderated about this (including the confusion).

I am on the David Abrahams / Peter Dimov side.

Regarding Mark's blog entry: I consider it very scary that an assertion failure is signalled by throwing something derived from ApplicationException. Lots of code can catch this and hide the programming error.

I'm probably odd because I tend to work in such a way that I don't accumulate bugs as I work. As soon as I write a bug all other work is suspended until I've fixed the bug (there is, obviously, some leeway on this sometimes, but we're talking hours rather than days). So code that I'm working on very rarely gets to the point where it has so many bugs that it's staggering around in an unstable manner. Therefore, enforcing preconditions using exceptions that are always present in all builds works very well.

I don't write large amounts of code and then debug it; I tend to try and write tests for the code as I go. Using exceptions for things that others might use asserts for works well in that situation too, as I can write a test that violates the "rules" and ensure that an exception is thrown. The test can be used to show that the code operates safely in all builds in the presence of "bad" data, etc.

Because of this, classes don't often get into a mess internally, and so I've not often had a situation where the stack has been trashed so much that throwing an exception is an issue. Of course it does happen during development and when it does I simply run the test that shows the problem, or write one to show it, and then toggle the debugger to 'break on exceptions' and I'm there in the debugger at the point where the problem is; much like I would be with a traditional assert.

If you're writing a library function and someone calls with a null pointer that shouldn't be null and you're using C++ then you have an API bug to fix, as the API should take a reference... However, if the first thing that the library does is validate that the pointer is not null, and the first thing that the validation does when it finds that it IS null is throw an exception, then the library code is protected and safe; it can't be driven into a bad state. It's true that the world outside of your library code may be about to do strange things with the pointer that they passed you if you throw an exception but, in my opinion, that's because there's a bug in that code.

If you're in development then, even if the app tanks due to destructor badness or a trashed stack, you can usually debug that situation as quickly with a 'break on exceptions' option. The advantage is that the library that you have control over cannot get into an incorrect state in any build. The fact that the user cannot ever get your code to work with their buggy code means that they're likely to fix their problem rather than just switch to your release build library/dll/whatever.

I now understand your "but what about the destructors?" question in the context above. I was viewing it in the context of the destructors for objects that are protected using the scheme that I propose, and for those objects it simply isn't an issue, ever, because the objects can never, in any build, get into a corrupted state. I'm assuming here that we wouldn't put the exception or assert in at the same point in development and that, prior to either protection, the object could get into an invalid state; my style of development means that the validation checks tend to go in first, so it doesn't happen.

Again, interesting link but, to be honest, it's a) focused on the semantics of the argument rather than what's best to produce working code with no bugs and b) gives equal weight to likely problems and highly unlikely problems.

All due respect to Dave Abrahams, but saying that if you document that you'll check a precondition then it's no longer a precondition is mad; or it simply means that "preconditions", as defined by the current version of the wikipedia page that is referenced, are a waste of time. And, I'm sorry, but insisting that in the presence of any precondition violation you have to assume that the stack is hosed is not how things are in the real world. I'm quite happy to take all that Dave says under advisement, but here he seems to be off on a frolic of his own. Whilst he may be right, and I may come to regret the way I've been using C++ for the past 7 or 8 years, I doubt that his view of the world, or even the world that he inhabits, is that similar to that of most C++ programmers writing code for general consumption on a day to day basis.

Take this, from Bob, for example:

"Testing preconditions isn't about assigning blame or deciding on cause. It's just about testing a condition that must be true, or else the function can't do what it's supposed to do. If the precondition is "index < size()", and this condition evaluates to false, the function cannot know or care why. It could be because the caller just passed in a bad value. It could be because the stack is corrupted, causing garbage to be read from index. It could be because the size() function returned garbage because its invariants are broken. It could be because of a buggy compiler that emitted garbage code. It could be because of a hardware fault.

It doesn't matter which of these is the case; all that matters is that the function cannot continue if "index < size()" is false. "

I'm sorry, but if we're assuming all of these are possible and equally likely every time we get an index that's out of range then I'd suggest that you're in a different business to me. And surely, if we live in such a world, then the asserts should stay in during production, because the compiler could be just as buggy there and we're just as susceptible to hardware failure in production...

And this, also from Bob;
"Suppose you have a function F() that is documented to throw X when condition Y fails. Suppose you call F(), and Y legitimately fails. You don't have undefined behavior, an X is thrown, and life goes on.

If, however, F() is called and Y appears to fail because some undefined behavior has occured (such as stack corruption or whatnot), well now you're in undefined behavior land, and anything could happen. The program could crash, wipe your hard disk, or throw an X. If it manages to actually throw an X, it doesn't change the fact that your program is now exhibiting undefined behavior.

The key point is that your condition Y did not help you detect and avoid undefined behavior. "

It's academic drivel.

As for Mark's "problem", well, sure things can catch these exceptions if they want to; in fact it's one of the selling points, test harnesses can catch them and validate that they happened. However, if you have a programmer who is catching these and ignoring them then that's a slightly different issue. The key point is that the programmer is NOT using your API in the wrong way because he's being stopped as soon as he enters it. He may be catching and ignoring the exceptions but he's not doing any useful work and he's not causing any data corruption within your objects. I'd contrast this with the situation you'd be in with the same programmer using code with traditional assertions; he simply switches to the release build which doesn't give all these damn error boxes, assumes it was a 'compiler bug' and moves to using your API to trash its internals. I know which I prefer.

Vagn, other developers might make the mistake of catching ApplicationException and not dealing with that properly - but this is an entirely separate issue. I can't stop other developers doing silly things, but that's not going to prevent me doing the "right thing" in throwing an exception to roll back everything in my code.

Having said that, the latest "best practice" is to derive from Exception rather than ApplicationException. Apparently the branching idea (Exception and ApplicationException) was not well thought-through and is now considered to have been a mistake.

I'd like to ask one thing. You mentioned above something like:

" If you're writing a library function and someone calls with a null pointer, then the first thing you prefer is to validate that the pointer is not null and if the first thing that the validation does when it finds that it IS null is throw an exception"

Suppose your function exposes 3 or more such input pointers; how do you structure your code then? Do you check each one for a NULL value and throw an exception? How do you prevent your code from looking ugly?


It depends. ;) Firstly I don't find that it happens very often. I don't tend to have functions which take 3 or more required values by pointer; I often pass them in as references even if that means the caller needs to deref a pointer and pass it to me only for me to take its address... The issue of the pointer being null is then something for the caller to deal with...

This kind of thing crops up a little more regularly in COM objects where you can't just switch from pointers to references so assuming the function must exist in the form that you specify...

If I care about which pointer was null then I'd check each individually and name the parameter in the exception. Something like: throw myExceptionType("Name cannot be null"); perhaps. If I didn't I might check them all at once; if (!pName || !pValue1 || !pValue2). I usually find that I do care...

When the input parameter checking takes up a lot of space I sometimes decide that the input function is just a parameter-checking shim and move the main meat of the function off into a separate function that is private, called from one place, and doesn't need to validate its arguments; or takes references...
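That shim arrangement might be sketched like this; CNameStore and all its members are invented for illustration, not code from the discussion:

```cpp
#include <cstddef>
#include <stdexcept>
#include <string>

class CNameStore
{
public:
   // Public entry point: nothing but a parameter-checking shim.
   void Add(const char *pName, const char *pValue)
   {
      if (!pName)
      {
         throw std::invalid_argument("CNameStore::Add() - pName cannot be null");
      }
      if (!pValue)
      {
         throw std::invalid_argument("CNameStore::Add() - pValue cannot be null");
      }

      AddInternal(std::string(pName), std::string(pValue));
   }

   size_t Count() const { return m_count; }

private:
   // The meat of the function; takes references, called from one place,
   // and never needs to re-validate its arguments.
   void AddInternal(const std::string &name, const std::string &value)
   {
      // ... store the name/value pair ...
      (void)name;
      (void)value;
      ++m_count;
   }

   size_t m_count = 0;
};
```

The ugliness stays in one thin function per entry point, named exceptions tell the caller which parameter was wrong, and the internals stay clean.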
