TDD - The larger issues

Test Driven Development (TDD) is a programming methodology that has seen a great deal of success but has also caused an equally great deal of friction in the development community. In [a previous article](GHOST_URL/common-mistakes-in-tdd/), I described the common mistakes that developers tend to make when practising TDD. Today, I will be talking about the wider set of problems surrounding TDD. This post is written on the back of the infamous ["Is TDD dead?"](http://martinfowler.com/articles/is-tdd-dead/) discussion series between David Heinemeier Hansson (DHH), Martin Fowler and Kent Beck, and is based on a talk I recently delivered at SwanseaCon, Wales.

The issues I highlight below are some of the main reasons developers and teams feel apprehensive or outright negative about TDD.

1. False confidence

Developers should be confident about their code. This confidence derives from effective testing and reasonable code coverage. TDD should not be the only path to code confidence. The TDD mandate should not dictate whether we can feel good or bad about our code. It’s the overall testing that should add confidence to the code. Testing is more than TDD or unit testing. It's a whole suite of different tests:

  • UI
  • UAT
  • Integration
  • Component
  • Unit
  • Manual

I've noticed that when people commit to TDD, they sometimes feel guilty if they diverge from it. I had a chat with a developer recently who said that he tries to do TDD but only manages it 50% of the time due to time pressure and deadlines. For the other 50% he can only add tests after the fact, and he worries that this isn't "true" TDD. He feels guilty because he's "failing" TDD. But is he? Remember that green tests are the end goal; how you get there matters less. Is a test written after the fact any less effective than a test written before the code?
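As a small illustration (the `PriceCalculator` below is hypothetical, in Java with JUnit 5), the finished artifact is identical either way: nothing in this test records whether it was written before or after the code it checks.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Hypothetical class under test: nothing about the finished test
// reveals whether it existed before or after this code was written.
class PriceCalculator {
    double applyDiscount(double price, double percent) {
        return price - (price * percent / 100);
    }
}

class PriceCalculatorTest {
    @Test
    void appliesPercentageDiscount() {
        PriceCalculator calculator = new PriceCalculator();
        assertEquals(90.0, calculator.applyDiscount(100.0, 10.0), 0.001);
    }
}
```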

2. Testing too much

Testing is great. Discovering TDD for the first time is like getting access to a whole new world. You become obsessed with green. So much so that you can go overboard. This is particularly prevalent among less experienced developers. Add on top of that tools like NCover and NCrunch that give you instant feedback on code coverage, and it's easy to get carried away. Your goal becomes to test everything under the sun. This may be a noble goal, but tests aren't free. Every line of code has a cost: it takes time to write, update, read and understand. To make sense, each line of code needs to give back more than it costs. Over-testing does the opposite and can be detrimental.
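As a sketch of what low-value testing looks like (the `Customer` class here is hypothetical), consider a test that pins down a trivial getter: it adds code that must be written, read and maintained, yet it guards almost nothing.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class Customer {
    private final String name;
    Customer(String name) { this.name = name; }
    String getName() { return name; }
}

// This test costs maintenance effort but guards no real behaviour:
// it can only fail if the constructor or getter is rewritten, at
// which point the test gets rewritten with it. Coverage goes up;
// confidence doesn't.
class CustomerTest {
    @Test
    void getNameReturnsName() {
        assertEquals("Ada", new Customer("Ada").getName());
    }
}
```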

3. Test-induced architecture damage

The maxim of TDD is that tests come first, and that they should be small, fast and run in isolation. The adoption of the TDD mantra "Do the simplest thing that could possibly work" can easily lead to bad architecture through needless indirection or conceptual overhead. The code ends up bent out of shape solely to accommodate testing objectives. Some TDD advocates may suggest that testing can only be good for the code, but this claim creates unnecessary friction. On one hand, you have managers and companies shoving TDD down developers' throats. On the other, you have the push-back from developers who declare that "TDD is dead" and that it doesn't work for them. If we accept the premise that TDD and its "red-green-refactor" cycle are the one true guide to software design, then we should be able to sacrifice everything on its altar without feeling guilty: introducing complexity and additional layers in order to make the code testable is totally justified. But is it really?

Imagine that you hire a famous architect and builder to construct a skyscraper. After a month, they come back with a progress update:

“The first floor is done. It looks gorgeous; all the apartments are in perfect, liveable condition. The bathrooms have marble floors and beautiful mirrors, the hallways are carpeted and decorated with the best art. However, I just realized that the walls I built won’t be able to support a second floor, so I need to take everything down and rebuild with stronger walls. Once I’m done, I guarantee that the first two floors will look great.”
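Back in code, here is a minimal sketch of what that damage often looks like (all names here are hypothetical): a method that reads the clock directly, versus the same logic after an interface and a provider have been threaded through it purely so a test can control the time.

```java
import java.time.LocalTime;

// Before: one readable line, but hard to unit test in isolation
// because it reads the real system clock.
class GreeterBefore {
    String greeting() {
        return LocalTime.now().getHour() < 12 ? "Good morning" : "Good afternoon";
    }
}

// After: an interface and an injected provider exist only so a test
// can substitute a fake clock. The production behaviour is identical;
// the extra indirection is pure testing overhead.
interface TimeProvider {
    LocalTime now();
}

class SystemTimeProvider implements TimeProvider {
    public LocalTime now() { return LocalTime.now(); }
}

class GreeterAfter {
    private final TimeProvider time;
    GreeterAfter(TimeProvider time) { this.time = time; }

    String greeting() {
        return time.now().getHour() < 12 ? "Good morning" : "Good afternoon";
    }
}
```

Whether this particular trade-off is worth it is debatable (Java even ships `java.time.Clock` for exactly this purpose); the point is that the shape of the code changed to serve the tests, not the domain.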

4. Heavy mocking

This point is about more than mocking too much or not using a mocking framework. If we go down the extreme programming route and stick with TDD, then every unit should be tested in isolation. To do so, we need infrastructure that supports it. Since code rarely works in isolation (controllers use services, services use some kind of DAL, and so on) and isolation is the goal, we need to mock things out. Unless we are implementing our own mocking framework, most existing ones require the code to be written in a certain way to satisfy particular criteria: interfaces, Dependency Injection (DI), Inversion of Control (IoC) and so on. Testing code this way leads to bloat; libraries and code end up written to satisfy the needs of TDD and extreme unit testing.
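Here is a minimal sketch of that "certain way" using Java and Mockito (the `OrderService` and `OrderRepository` names are hypothetical): the repository is an interface and is injected through the constructor not because the domain demands it, but because the test does.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

// The interface and constructor injection exist so the test below
// can swap in a mock: the shape the mocking framework requires.
interface OrderRepository {
    double totalFor(String customerId);
}

class OrderService {
    private final OrderRepository repository;
    OrderService(OrderRepository repository) { this.repository = repository; }

    double totalWithTax(String customerId) {
        return repository.totalFor(customerId) * 1.2;
    }
}

class OrderServiceTest {
    @Test
    void addsTaxToRepositoryTotal() {
        // Mock out the DAL so the unit runs in complete isolation.
        OrderRepository repository = mock(OrderRepository.class);
        when(repository.totalFor("c-42")).thenReturn(100.0);

        assertEquals(120.0, new OrderService(repository).totalWithTax("c-42"), 0.001);
    }
}
```

Multiply this pattern across every controller-service-DAL boundary and the bloat described above starts to add up.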

5. TDD in the real world

It's all nice and rosy when we join a green-field project and start with a fresh, new codebase. TDD in this context makes sense. However, most developers join existing projects or mature products with a few thousand (or million) lines of code. In the real world, we often work on tasks that involve adding a feature or fixing a bug. Before we even start tackling the problem, we think: "I know, I'll go full TDD on this one because it's the right way". Then we open the code that needs to change and are presented with one of 3 possible scenarios:

  1. No unit tests or any other tests whatsoever (worst nightmare)
  2. Some tests with some code coverage (OK, workable situation)
  3. Full TDD (ideal)

Let's assume we're presented with #2. Depending on the design of the application, we may be able to easily add a mocking framework and start testing. In other cases, the code may be too coupled and too dependent on other services to allow TDD; for example, everything is a concrete object and there are no interfaces. Surprise! Here we can either do our bit, write or extend a few tests around our code, and move on to the next feature/ticket/work item. Alternatively, we can start ripping things apart and go down the refactoring-hell route, just to get to a point where we can start doing some form of TDD. By then we are too far down the rabbit hole and can either commit to the task at hand or do a "git checkout/rebase" to wipe out our changes and climb back out. In all my years, I've only been lucky enough to join a new team on a green-field project once! That was the only time we had a chance to practise proper TDD. In most cases, money and time are too restrictive to allow developers to perform extensive refactoring just to make the code testable through TDD. A fair compromise is that anything new is done using TDD, while old, strongly coupled code continues living in its current form.
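When we do choose the lighter route, one common compromise, sketched hypothetically below, is a hand-rolled "extract and override" seam: subclass the concrete dependency in the test instead of refactoring the whole codebase towards interfaces and a mocking framework.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertTrue;

// Legacy-style code: a concrete dependency, no interface in sight.
class LegacyEmailGateway {
    String domainFor(String user) {
        // Imagine a real network or database call here.
        throw new UnsupportedOperationException("talks to a live system");
    }
}

class SignupChecker {
    boolean isInternal(String user) {
        return "example.com".equals(gateway().domainFor(user));
    }

    // Protected factory method: the seam. Tests override it; the
    // production code is otherwise unchanged.
    protected LegacyEmailGateway gateway() {
        return new LegacyEmailGateway();
    }
}

class SignupCheckerTest {
    @Test
    void flagsInternalUsers() {
        // Override the seam to return a fake: no interfaces, no
        // mocking framework, no big-bang refactor.
        SignupChecker checker = new SignupChecker() {
            @Override
            protected LegacyEmailGateway gateway() {
                return new LegacyEmailGateway() {
                    @Override
                    String domainFor(String user) { return "example.com"; }
                };
            }
        };
        assertTrue(checker.isInternal("alice"));
    }
}
```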

6. Not everyone can think in TDD terms

This is my final point. Certain developers are adept at thinking in a specific way: they are comfortable going from the specific to the general, which makes them well suited to working in a TDD way. Unfortunately, not everyone is made the same. Some people feel more comfortable going from the general to the specific. For them, it's easier to take a big problem domain and break it down into small, manageable units (see what I did there?) before writing any code or tests. They iterate through the problem and break it down into smaller pieces because that's what makes sense to them. These are the people who struggle with TDD, so forcing them down that route can only have a negative impact on a project.

Make TDD work better for you

These are my suggestions for making TDD work better for you and your team. I am a strong proponent of using the right tool for the right job; TDD has become the hammer that makes everything look like a nail. Unit testing everything doesn't even make sense. If we take MVC as an example, of its 3 components only 1 really qualifies for unit testing: the Model. The purpose of the controller is to relay requests from the client to the application and return an appropriate response. A whole heap of components needs to be mocked out in order to unit test a controller in isolation, and in the end it adds little. Controllers are better suited to integration testing because they are inherently tightly coupled with other services. The View sits at the top of the MVC stack and, as such, is much better suited to end-to-end or UI testing.
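To make the controller point concrete, here is a hypothetical sketch in the same Java style as the earlier examples. A unit test for `get` would have to mock both collaborators only to assert that the delegation happens, which is exactly what the code already says; an integration test exercising the real service is far more likely to catch a genuine failure.

```java
// Hypothetical, minimal types to keep the sketch self-contained.
record Product(String id, String name) {}
record Response(int status, String body) {}

interface ProductService { Product findProduct(String id); }
interface ResponseMapper { Response toResponse(Product product); }

// A typical controller: it translates a request into a service call
// and the result into a response. Unit testing this in isolation
// means mocking both collaborators just to verify that two calls
// happen, which the one line in get() already states plainly.
class ProductController {
    private final ProductService service;
    private final ResponseMapper mapper;

    ProductController(ProductService service, ResponseMapper mapper) {
        this.service = service;
        this.mapper = mapper;
    }

    Response get(String id) {
        return mapper.toResponse(service.findProduct(id));
    }
}
```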

TDD is still valid and important, but it should be used where applicable. Going overboard or becoming a TDD fanatic helps no one, and following the methodology and its rules blindly is not recommended. Apply common sense to make TDD useful:

  • Keep the code-to-test ratio below 1:2 (more than that is a code smell)
  • Find the right coverage ratio, don't aim for 110%!
  • If writing the tests takes longer than writing the code itself, you're probably doing it wrong
  • Try to keep a clear separation between unit and integration testing
  • Don't force yourself to test everything first. Find your own personal ratio that promotes your performance and confidence in the code. Something like 30-70 (test-first to test-after) works best for me, but I'm not afraid to tweak this to better satisfy my needs.

How do you find TDD and does it really work for you and your team? Feel free to let me know in the comments.

