Practical Testing: 4 - Taking control of time

I'm writing some blog entries that show how to write tests for non-trivial pieces of code. This is part 4.

We have a test for SetTimer() but it's not as robust as we'd like. The problem is that the class under test runs its own thread which reacts based on time, and this makes our testing harder and less predictable. What's more, it makes testing for our known bug practically impossible; to test for our bug we'd have to have a test which called GetTickCount() to determine the current value and which then slept so that it could execute the test at the point when the counter rolled over to 0. That would mean that, on a bad day, the test could take 49.7 days to run...

I've written about the problems in testing time related code before. The solution we need now is the same one that I recommended then. We need to apply a level of indirection. Rather than having the object go off to a well known place to get an indication of the current time we need it to go to a place that we've provided. Once we are in control of where the object goes to get the resource it needs we can substitute the resource with one that we can manipulate in the ways that we require for testing. So, in this case, rather than embedding calls to GetTickCount() directly into the code we should pass the object an interface that allows it to obtain the time values that it needs. This is classic parameterize from above; the object doesn't decide, its creator does. If we create an interface like this:

class IProvideTickCount
{
   public : 
  
      virtual DWORD GetTickCount() const = 0;
  
   protected :
  
      ~IProvideTickCount() {}
};

We can pass something that implements it to the timer queue in its constructor and it can call through the interface when it requires the current tick count. Once this indirection is in place we can pass in a mock object for testing and that mock object can define time however we decide is appropriate for the test in hand.

Unfortunately parameterize from above tends to make all of the object wiring explicit and, generally, somewhat more ugly and complicated. In this particular situation, where there's really only one 'real' implementation of the interface and the parameterization is purely for testing I find that it's often useful to make the parameterization optional. Rather than having a single constructor which takes an IProvideTickCount implementation we can have two, one that does and one that doesn't. The object can provide the default, real, implementation when none is supplied and when we need to test we can plug in our own version. Of course, should the code in question turn out to be a performance hot spot we could go one further and #define the indirection just for testing (but I prefer to let profiling lead the way in this kind of optimisation).

So, our object under test ends up looking a bit like this:

static const CTickCountProvider s_tickProvider;
  
CCallbackTimer::CCallbackTimer(
   const IProvideTickCount &tickProvider)
   :  m_shutdown(false),
      m_tickProvider(tickProvider)
{
   Start();
}
  
CCallbackTimer::CCallbackTimer()
   :  m_shutdown(false),
      m_tickProvider(s_tickProvider)
{
   Start();
}

We implement a mock tick count provider for testing and needn't worry about the exposed wiring when using the code for real.

The next issue is how to implement the mock version of the interface; something like this springs to mind:

class CMockTickCountProvider : 
   public IProvideTickCount,
   public JetByteTools::Test::CTestLog
{
   public : 
  
      CMockTickCountProvider();
  
      void SetTickCount(
         const DWORD tickCount);
  
      // Implement IProvideTickCount
  
      virtual DWORD GetTickCount() const;
  
   private :
  
      volatile DWORD m_tickCount;
  
      // No copies - do not implement
      CMockTickCountProvider(const CMockTickCountProvider &rhs);
      CMockTickCountProvider &operator=(const CMockTickCountProvider &rhs);
};

It simply returns the tick count that we tell it to. Our test can then become this:

void CCallbackTimerTest::TestTimer()
{
   const _tstring functionName = _T("CCallbackTimerTest::TestTimer");
  
   Output(functionName + _T(" - start"));
  
   CMockTickCountProvider tickProvider;
  
   CCallbackTimer timer(tickProvider);
  
   CLoggingCallbackTimerHandleCallback callback;
  
   CCallbackTimer::Handle handle(callback);
  
   tickProvider.SetTickCount(1000);
  
   timer.SetTimer(handle, 100, 1);
  
   // Prove that time is standing still
   THROW_ON_FAILURE(functionName, false == callback.WaitForTimer(1000));
  
   callback.CheckResult(_T("|"));
  
   tickProvider.SetTickCount(1100);
  
   THROW_ON_FAILURE(functionName, true == callback.WaitForTimer(100));
  
   callback.CheckResult(_T("|OnTimer: 1|"));
  
   Output(functionName + _T(" - stop"));
}

And at last we have control of time. Our test for tick count rollover is now within our grasp. We could set our tick count to just before the rollover, set a timer that expires after the rollover and, well, observe and then fix the problems.

This particular approach can be applied to all services that an object uses and it tends to result in a flexible, decoupled and far more granular design for code. Once your services are provided via interfaces you can mock them up for testing or demo purposes. When working with services that provide changing data, such as live, "ticking", financial market data, it's so much easier to prove that the code works as expected if you can mock up a data source and have that source provide just the data you require as and when you require it. Gathering the data to supply is made easy by the same indirection that makes the mock provider possible; rather than mocking up a provider, you simply instrument a real service provider and have it save down live data that you can then edit and use as test data. In summary; cool. ;)

The CCallbackTimer can now be controlled completely by the test; well, almost... Since we're unit testing we know how the object is implemented; these are white box tests. Looking at that implementation, or at the test log from the mock time source, shows that our current time source causes the object to spin in a busy loop, waiting around 100ms per iteration. Time is standing still, but our object expects it to be moving forward at its usual rate. We have control, but not quite enough; we can't determine how many times our mock is accessed, we just know that when we change the time to after the timeout the timer will go off. I think it would be better to have a little more control...

Code is here. Same rules as before.

5 Comments

I've decided that I prefer ASSERT_TRUE rather than THROW_ON_FAILURE. Personal preference I guess...

Does the assert cause an assertion failure or throw an exception?

I prefer exceptions over assertions as, in this case, for example, I could decide that I will allow other tests to continue even if one fails and handle the exception and allow the other tests to continue running.

If yours throws an exception then it's just confusingly named ;)

The "ASSERT" causes an assertion failure. ASSERT_TRUE (and ASSERT_FALSE) throw an exception. I find THROW_ON_FAILURE a little confusing sometimes. It's only in my test code, and they are clearly tests ;).

The whole assert thing is a thorny subject anyway. It is essentially an untested assumption in code, isn't it?


assert(cond); // typical comment for this: "should always be true"

> I find THROW_ON_FAILURE a little confusing sometimes

So do I ;)

> The whole assert thing is a thorny subject anyway. It is essentially an untested assumption in code isn't it?

My issue with it is that it goes away in production code and it sometimes acts as a crutch; it can be used instead of a better design. For example, there is absolutely NO justification for ever asserting a pointer is non-null in C++. If it can't be null it should be a reference; anything else is pure laziness.

> My issue with it is that it goes away in production code and it sometimes acts as a crutch

The way I've seen it used is interesting - quite often I've seen some developers litter debug, or pre-production code, if you prefer, with assertions. It then all gets removed when they're happy with the code. Since an assertion is simply testing whether a condition is true or not, and they're using it in 'pre-production' code, they might as well hoist it out and into a test!!
