In our dev meeting, we discussed Test-Driven Development (TDD). So, of course, we did it TDD-style: we started off by asking whether everyone knew what TDD is; then testing that; then asking more questions about TDD until we got a failing test – i.e. several people who didn’t know. Then we updated our knowledge of TDD until we no longer had a failing test, and iterated…
We agreed that there are many benefits to using TDD, of which “ending up with a working test suite” is only the most obvious:
- Allows incremental development, so you are able to run the code as you’re going along
- Provides documentation for how the code works
- Helps you to get a feel for how the code will work as you’re writing the test
- Gives you safety, confidence, and a greater ability to refactor
- Writing the test beforehand forces you to see it fail first, which confirms the test actually exercises the new behaviour (a minimal sketch of what that looks like follows this list)
- Forces a different mindset – because you’re thinking about your design and API as you write
- Helps you concentrate on what’s relevant to the task at hand, rather than doing unnecessary work
- Encapsulates features nicely – encourages better organization
- Helps you stay focussed on the problem and keep the solution simple, rather than solving unimportant problems
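
To make the “documentation” and “failing test first” points concrete, here’s a minimal sketch in Python’s built-in unittest – our own toy example, not code from the meeting, and `apply_discount` is a made-up function chosen purely for illustration:

```python
import unittest


def apply_discount(price, percent):
    """Deliberately unimplemented: in TDD the tests below are written (and
    seen to fail) before this body is filled in."""
    raise NotImplementedError


class TestApplyDiscount(unittest.TestCase):
    # The tests double as documentation of the intended behaviour.
    def test_ten_percent_discount(self):
        self.assertAlmostEqual(apply_discount(100.0, percent=10), 90.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertAlmostEqual(apply_discount(100.0, percent=0), 100.0)


if __name__ == "__main__":
    unittest.main()
```

Running the suite at this point shows two failing tests, which is exactly the “red” state the approach starts from.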
But we concluded that while many of us used TDD some of the time, it wasn’t true that all of us were using it all of the time. So, given all these great things about TDD, why aren’t we using it for everything?
- We’re lazy
- It takes longer to get to a result – longer-term benefits but a short-term cost
- Not as good a fit when you’re exploring and don’t yet know what the code should do
- Some types of test are hard to write
- Slow feedback loop for some types of test – e.g. some integration tests
- Limited framework support
- Working in an existing codebase that has been developed without TDD
- Working with frameworks that have been written without testing in mind – particularly older frameworks
- Working in very data-heavy projects, based on mutable source data
- TDD is a bit less useful when the language is focussed more on the problem domain (e.g. C#) than on technical issues (e.g. C++)
We discussed how “pure” we were in our application of TDD – whether we were always following the strict, Kent Beck-approved cycle of “write the most minimal test, write a dumb implementation that just passes that test, then add more tests and refactor”. We concluded that we mostly weren’t – the range of TDD-like approaches we use (a toy sketch of the pure cycle follows the list) is:
- Pure
- Use a hybrid approach – write some minimal tests, write a real implementation, and extend the tests to cover edge cases
- Write TDD-style when it will actively speed up implementing a feature due to better turnaround time
- Write out the specs first, and then implement the code based on that
- Write tests to help the code reviewer understand what’s been written: sometimes before writing the code, and sometimes after
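
For comparison, here is the “pure” cycle in miniature – again a made-up toy example (a trivial `add` function), not code from the meeting:

```python
import unittest

# The cycle: write the most minimal test ("red"), write the dumbest
# implementation that passes it ("green"), then add a test the dumb
# version can't pass, generalise, and refactor.

# After the first test alone, this deliberately dumb version was enough:
#
#     def add(a, b):
#         return 3
#
# The second test forces the general implementation:
def add(a, b):
    return a + b


class TestAdd(unittest.TestCase):
    def test_one_plus_two(self):
        # The first, most minimal test.
        self.assertEqual(add(1, 2), 3)

    def test_two_plus_five(self):
        # The test that kills "return 3" and forces the real implementation.
        self.assertEqual(add(2, 5), 7)


if __name__ == "__main__":
    unittest.main()
```

In the pure style you only move from one step to the next once the whole suite is green again.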
We discussed a potential pitfall we had sometimes hit with TDD – that it can be tempting to keep hammering at the code until the tests pass, rather than stepping back to reconsider the overall strategy.
In conclusion – we use TDD a lot of the time, but we tend to use hybrid approaches rather than a strict pure approach.