Just In Time Testing


Once we'd integrated the new data provider we were in a position to do some more testing. We configured the calling code to request the same data from both the new component and the old code and to save each set of results to a file. Then we wrote some code to compare the files and highlight any differences. Once that was done we tracked down the source of each discrepancy, wrote tests that failed because of the problems, and then started to fix the bugs.
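The dual-write comparison above can be sketched in a few lines. This is only an illustration, not the actual code from the project; the `get_data` call and the provider objects are hypothetical stand-ins for whatever interface the real components exposed.

```python
import difflib
import json

def dump(provider, keys, path):
    """Request data from one provider and save it in a stable, diffable form."""
    with open(path, "w") as f:
        for key in keys:
            # sort_keys gives a canonical line per result, so identical data
            # produces identical text regardless of dict ordering
            f.write(json.dumps(provider.get_data(key), sort_keys=True) + "\n")

def highlight_changes(old_path, new_path):
    """Compare the two dumps and return only the lines that differ."""
    with open(old_path) as f:
        old_lines = f.readlines()
    with open(new_path) as f:
        new_lines = f.readlines()
    return list(difflib.unified_diff(
        old_lines, new_lines, fromfile=old_path, tofile=new_path))
```

Each diff hunk then points straight at a key whose old and new results disagree, which is where the bug hunting starts.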

A while ago I called this process Just In Time Testing (JITT); it's like TDD-lite. You start the code in a TDD fashion and write that first test, but as soon as you feel the baby steps and tests aren't giving you the best bang for your buck you switch out of TDD mode, knowing that you can drop back into it at any point. Find a bug? Write a test. Fix the bug. Write no bugs, write no tests. Just In Time Testing...
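The "find a bug, write a test, fix the bug" loop might look like this in miniature. The function and the bug are invented for illustration; the point is that the test is written to fail against the buggy code first, then the fix makes it pass and the test stays as a regression guard.

```python
import unittest

def average(values):
    # Fixed version; the original blew up with ZeroDivisionError when the
    # comparison run fed it an empty batch
    if not values:
        return 0.0
    return sum(values) / len(values)

class AverageRegressionTest(unittest.TestCase):
    """Written to fail against the buggy code before the fix went in."""

    def test_empty_batch_does_not_blow_up(self):
        self.assertEqual(average([]), 0.0)

    def test_normal_batch(self):
        self.assertAlmostEqual(average([1, 2, 3]), 2.0)
```

No test existed for the empty-batch case until the bug surfaced; it was written just in time.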

Writing the tests we needed to fix the bugs behind our data discrepancies was surprisingly easy, because we'd already written the first test for most of the code we needed to fix. Adding new tests to code that already has tests is easy; writing the first test is harder, because code without tests tends to be code that's hard to test due to accidental coupling.

By starting all code in a TDD way we knew we could test the code and we were forced to address the coupling issues, stub out the dependencies and get the thing to build and run in a test harness; this is the stuff that's hard to retro-fit into code. From that point on we wrote tests only when we found it useful. Some code is hard to write correctly; sometimes it's just the day on which you're writing it, sometimes it's the code itself. Either way, when the code is hard to get right we can write tests and move forward secure in the knowledge that the code works. Some code just flows and works and it seems unnecessary to write tests for it straight away; so we don't... We accept that we're acquiring a technical debt and that one day we'll need to write tests for the code, but we also know that since we have the first test we're halfway there. Writing the subsequent tests will be possible...
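The decoupling that the first test forces can be as simple as taking the dependency as a constructor argument instead of constructing the real component internally. A minimal sketch, with hypothetical names throughout:

```python
class PriceReporter:
    """The code under test; it never constructs its provider itself."""

    def __init__(self, provider):
        # Any object with get_price(symbol) will do: the real data
        # provider in production, a stub in the test harness
        self._provider = provider

    def report(self, symbol):
        price = self._provider.get_price(symbol)
        return f"{symbol}: {price:.2f}"

class StubProvider:
    """Stands in for the real component so the first test can run at all."""

    def __init__(self, prices):
        self._prices = prices

    def get_price(self, symbol):
        return self._prices[symbol]
```

Once `PriceReporter(StubProvider({"ACME": 1.5}))` builds and runs in a harness, the seam exists; every later test is just another stub and another assertion, which is why the first test is the expensive one.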

It's one of those scary trade-off situations where you know that it's probably better to do the entire development test first but you feel certain that you can move faster without the tests. The key thing about JITT is that you write the first test so that you know you can write others as and when you need to; the first test forces the coupling issues into the open and the fact that you need to keep these first tests compiling and running means that you're kept aware of any accidental coupling that starts to creep into the design.

Even when I'm religiously following the TDD way I know that I'm not clever enough to write tests that prove the code is correct. I just know that the tests I write will help me design the code better and support me when I need to change it. I still expect to find bugs that aren't covered by tests and I still expect to write new tests for these bugs before fixing them. JITT is about taking bigger steps because you feel that they're more appropriate. It's a calculated risk, but it's less of a risk because you know you can write the tests when you need to and you know you can slip back into baby steps as soon as you realise that you're not as clever as you think you are...
