The post below is an excerpt from the talk I recently delivered at [SwanseaCon](http://swancon.co.uk/) (Wales). This is part 1 of a series of TDD posts I'm planning to write. The main focus of the series is to highlight common issues in Test Driven Development (TDD) and analyse the bigger problems surrounding the use of TDD. First I'll give a brief definition of TDD, then mention some of the biggest benefits of using TDD in your development process. Finally, I'll try to highlight some of the usual pitfalls developers tend to fall into.
A TDD definition
Some of us already practice TDD and many of us have heard the term. However, for those of us who have been living under a rock or developing in a basement with no human contact for the past 12 years, it may be useful to explain what TDD is. TDD is a development approach that combines test-first development and continuous refactoring. Test-first is self-explanatory: you start by writing the test before you write any production code. Refactoring refers to the continuous short cycle of red-green-refactor, in which you write just enough code to satisfy the requirement of the feature or bug you're working on and then clean it up. You can find the rules of TDD and a good analysis of each one on [Uncle Bob's blog](http://blog.cleancoder.com/uncle-bob/2014/12/17/TheCyclesOfTDD.html).
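To make the red-green-refactor cycle concrete, here's a minimal sketch of a single iteration using NUnit; the `Calculator` class and its `Add` method are hypothetical names chosen purely for illustration.

```csharp
using NUnit.Framework;

[TestFixture]
public class CalculatorTests
{
    // RED: this test is written first and fails, because Calculator.Add doesn't exist yet.
    [Test]
    public void Add_TwoNumbers_ReturnsTheirSum()
    {
        var calculator = new Calculator();

        var result = calculator.Add(2, 3);

        Assert.AreEqual(5, result);
    }
}

// GREEN: the simplest production code that makes the test pass - no more, no less.
public class Calculator
{
    public int Add(int a, int b)
    {
        return a + b;
    }
}
```

Once the test passes, you refactor the test and production code while keeping everything green, then move on to the next failing test.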
What TDD isn't
This part is, in my opinion, very important. People sometimes confuse TDD tests with unit testing. TDD tests are often used to drive unit testing, but they aren't the same as unit tests. Unit tests validate our code and ensure that, for each tested unit, a specific input produces the expected output. One reason developers tend to confuse TDD and unit tests is that TDD tests rely on the same test frameworks and tooling as unit tests.
TDD is not User Acceptance testing (UAT) either. User Acceptance tests are used to test behaviour: they verify that the application behaves in a specific way given a set of user actions or external input. A set of passing TDD tests should not be used as the ultimate criterion for marking a piece of code as production-ready. Nonetheless, I've seen companies make this mistake before. Oh my...
The benefits of TDD
TDD has many great benefits, which explains why it has seen such wide adoption in the industry. It also explains why you can't find a single advertised developer job that doesn't list TDD as part of the requirements, regardless of whether the hiring company understands how TDD truly works (they usually mean unit tests, but let's ignore this for now). So, what are the benefits of adopting TDD in your development process?
Small pieces of code with only the necessary functionality to satisfy the requirement criteria
You start with a failing test, you add minimal code, you test again, you add a bit more code, you test again, you implement the functionality you need, you test it and you’re done. No more, no less.
Short feedback loop
Test, code, test, code, red, green, done. With small iterations you know you've introduced a bug as soon as the test(s) fail. You refactor your code to extend the current feature or fix a bug, and you know straight away whether the change works, using your tests for feedback.
Helps create design specification
When you write your unit tests first, you are forced to think about what you need to test before you write any production code. This, in effect, forces you to think about the design of your code sooner rather than later. It encourages you to "solve the problem" going from the specific to the abstract.
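As a rough illustration (the `DiscountService` name and its API are hypothetical), writing the test first forces you to decide the class name, the method signature and the return type before any production code exists, which is exactly how the test doubles as a design specification.

```csharp
using NUnit.Framework;

[TestFixture]
public class DiscountServiceTests
{
    // The test is written first, so the shape of the API is decided here,
    // working from the specific example towards the eventual abstraction.
    [Test]
    public void ApplyDiscount_TenPercentOnOneHundred_ReturnsNinety()
    {
        var service = new DiscountService();

        decimal discounted = service.ApplyDiscount(100m, 10m);

        Assert.AreEqual(90m, discounted);
    }
}

// Written afterwards, just enough to satisfy the specification expressed by the test.
public class DiscountService
{
    public decimal ApplyDiscount(decimal price, decimal percentage)
    {
        return price - (price * percentage / 100m);
    }
}
```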
Reduces the time it takes to refactor your code or fix bugs
This is largely covered by the short feedback loop above. In comparisons carried out in real teams, those that didn't use TDD generally had a harder time closing their tickets as quickly.
Test-driven not debugger-driven development
Using an automated test suite to run small, concise and targeted tests, you spend less time in the debugger, because the failing test(s) will tell you which part of your code broke. You still have to use the debugger, but now, instead of stepping through the full failure path, you know with better precision where the fault lies.
Makes your code more adaptable to change
Having confidence in your code makes it easier to respond to ever-evolving business requirements. Your product or service becomes more innovative and competitive when you can release features and bug fixes faster.
Code confidence - a real life example
I recently worked on an ASP.NET Web Forms project. One of the main .aspx pages in the application had no HTML or ASP.NET tags. Instead, it contained over 1800 lines of uncompiled, tangled JavaScript code. This code, written inline, drove the whole page, making AJAX calls to pull data from the API and build the DOM. The performance of this page was appalling, but that's not what matters here. The biggest problem was that there wasn't a single line of unit tests for the JavaScript. Imagine my horror when I was asked to add a new field to the form. This field had to be added from the UI all the way down to the database. How confident would you feel in this case?
Common Pitfalls in TDD
So far, we've seen how TDD can help improve the development process. Unfortunately, as with everything else in programming, there isn't one right solution to a problem, nor a single true path to follow. With so many things open to interpretation, it's hard to judge whether something you implemented is right or wrong. The issues I'll mention next are traps that many TDD practitioners, myself included, have fallen into before.
1. Not using a Mocking Framework
TDD requires that things are tested in isolation. To achieve this, we need to ensure that dependencies on external code (services, DAL, helpers etc.) are isolated using stubs, mocks or fakes. A common mistake is not using an established mocking framework; instead, developers manually mock, stub or fake these dependencies, in effect hand-rolling their own framework to support their testing. As an example, in a project I worked on we chose to use IoC and inject concrete test implementations of all our interfaces to allow the tests to run in a controlled environment. With large interdependencies, it soon became a maintenance nightmare having to provide custom implementations for every interface and method we had in the system. A mocking framework would have saved me a lot of time that I could have invested in watching cat videos on the internet. If only I knew back then!
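As a rough sketch of the difference, assuming a hypothetical `IStudentRepository` interface, here is a hand-rolled fake next to the equivalent setup with a mocking framework such as Moq:

```csharp
using Moq;

// Minimal hypothetical types so the sketch stands on its own.
public class Student
{
    public string Id { get; set; }
    public string Name { get; set; }
}

public interface IStudentRepository
{
    Student GetByID(string id);
}

// Hand-rolled fake: every member of every interface has to be implemented and maintained by hand.
public class FakeStudentRepository : IStudentRepository
{
    public Student GetByID(string id)
    {
        return new Student { Id = id, Name = "Test Student" };
    }
}

public static class MockingExample
{
    // The same isolation with Moq: one setup line per behaviour the test actually cares about.
    public static IStudentRepository CreateRepositoryMock()
    {
        var mock = new Mock<IStudentRepository>();
        mock.Setup(r => r.GetByID(It.IsAny<string>()))
            .Returns(new Student { Id = "1", Name = "Test Student" });
        return mock.Object;
    }
}
```

With a handful of interfaces the hand-rolled approach is bearable; with dozens of interdependent ones, the framework version is the only one that stays maintainable.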
2. Too much setup
From not using a mocking framework, to overusing mocking frameworks. It's not uncommon to overdo it when setting up tests. In most production code, objects have many interdependencies; in some cases these can number as many as 9-10. Even with mocking frameworks at hand to help, it's hard to decide how far to go. How many methods do we need to set up, and how many dependencies need to be satisfied, before our TDD tests can work? In some cases, we may configure dependencies and methods that are not needed at all. It's definitely a tricky situation.
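A contrived sketch of the problem, with hypothetical interface names: the constructor demands five dependencies, so the test has to create and wire up five mocks even though only one of them is exercised by the behaviour under test.

```csharp
using Moq;
using NUnit.Framework;

public interface IOrderRepository { Order GetById(int id); }
public interface ILogger { void Log(string message); }
public interface IMailService { void Send(string to, string body); }
public interface IAuditService { void Record(string action); }
public interface ICacheProvider { T Get<T>(string key); }
public class Order { public decimal Total { get; set; } }

public class OrderService
{
    private readonly IOrderRepository repository;

    public OrderService(IOrderRepository repository, ILogger logger,
                        IMailService mailer, IAuditService audit, ICacheProvider cache)
    {
        this.repository = repository;
    }

    public decimal GetOrderTotal(int id)
    {
        return repository.GetById(id).Total;
    }
}

[TestFixture]
public class OrderServiceTests
{
    [Test]
    public void GetOrderTotal_ExistingOrder_ReturnsItsTotal()
    {
        // All of these are needed just to construct the object under test...
        var repository = new Mock<IOrderRepository>();
        var logger = new Mock<ILogger>();
        var mailer = new Mock<IMailService>();
        var audit = new Mock<IAuditService>();
        var cache = new Mock<ICacheProvider>();

        // ...but only this single setup is exercised by the behaviour we are testing.
        repository.Setup(r => r.GetById(42)).Returns(new Order { Total = 99.99m });

        var service = new OrderService(repository.Object, logger.Object,
                                       mailer.Object, audit.Object, cache.Object);

        Assert.AreEqual(99.99m, service.GetOrderTotal(42));
    }
}
```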
3. Asserting everything
When looking at a test method, you should be able to describe the assertions without using AND. If you need to add AND in your description you’re doing it wrong and your test is covering too much. Example:
GetByIDReturnsRightObjectNoNullNameAndNoNullAge();
The more assertions you add, the more chances there are for your tests to fail. Consequently, you'll have to fire up the debugger to figure out which of the assertions failed. In an ideal world, each test should test one thing only. This approach may add more tests, but you gain clarity and readability.
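A sketch of the split, using a hypothetical `Student` model: the fat test at the top becomes several narrowly scoped tests whose names alone tell you what broke.

```csharp
using NUnit.Framework;

// Minimal hypothetical model so the sketch compiles on its own.
public class Student
{
    public string Id { get; set; }
    public string Name { get; set; }
    public int? Age { get; set; }
}

[TestFixture]
public class StudentQueryTests
{
    // Stand-in for the real lookup; in a real suite this would call the code under test.
    private static Student GetByID(string id)
    {
        return new Student { Id = id, Name = "Jane", Age = 21 };
    }

    // Too broad: when this fails you have to dig in to find out which expectation broke.
    [Test]
    public void GetByIDReturnsRightObjectNoNullNameAndNoNullAge()
    {
        var student = GetByID("1");
        Assert.AreEqual("1", student.Id);
        Assert.IsNotNull(student.Name);
        Assert.IsNotNull(student.Age);
    }

    // Focused alternatives: each test asserts one thing and its name says exactly what it checks.
    [Test]
    public void GetByID_KnownId_ReturnsStudentWithMatchingId()
    {
        Assert.AreEqual("1", GetByID("1").Id);
    }

    [Test]
    public void GetByID_KnownId_PopulatesName()
    {
        Assert.IsNotNull(GetByID("1").Name);
    }

    [Test]
    public void GetByID_KnownId_PopulatesAge()
    {
        Assert.IsNotNull(GetByID("1").Age);
    }
}
```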
4. Unnecessary testing
Testing every single line of code can prove counterproductive and can add duplication. Consider the following example:
```csharp
public Student GetByID(string id)
{
    return studentRepository.GetByID(id);
}
```
Does this method warrant a test? The problem here lies with automated test tools and test coverage reports. If we leave this method untested, the tools will report it as an omission. A big red mark! But there's no value in testing it, and doing so wastes precious time. Yet many developers will disregard common sense to satisfy an arbitrary metric.
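For completeness, this is roughly what such a coverage-driven test would look like (hypothetical names, Moq as in the earlier sketch): it merely re-asserts the mock's own setup and tells us nothing about logic we actually wrote.

```csharp
using Moq;
using NUnit.Framework;

public class Student { public string Id { get; set; } }
public interface IStudentRepository { Student GetByID(string id); }

// The thin pass-through wrapper from the example above.
public class StudentService
{
    private readonly IStudentRepository studentRepository;

    public StudentService(IStudentRepository studentRepository)
    {
        this.studentRepository = studentRepository;
    }

    public Student GetByID(string id)
    {
        return studentRepository.GetByID(id);
    }
}

[TestFixture]
public class StudentServiceTests
{
    // This test exists only to turn the coverage report green:
    // it verifies that the mock returns what the mock was told to return.
    [Test]
    public void GetByID_DelegatesToRepository()
    {
        var expected = new Student { Id = "1" };
        var repository = new Mock<IStudentRepository>();
        repository.Setup(r => r.GetByID("1")).Returns(expected);

        var service = new StudentService(repository.Object);

        Assert.AreSame(expected, service.GetByID("1"));
    }
}
```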
How do you feel about TDD? Do you practice TDD and if you do, what is it about the methodology that you like or dislike? What are your mistakes (if any)? Let me know in the comments.