Test-Driven-Development is a cool concept, but it has rarely been applied in the real world.

What is the great benefit of TDD? Is it the reliability given by the various unit tests? Well, if it were, there would be no reason to start with a test. Is it the additional confidence that the test itself works? If it were, there would be no need to write the code and check it extensively.

The concept of Test-Driven-Development or TDD is smart and beautiful. We start with a test to see that it indeed fails when our code does not do the right thing. This is a first indicator that the unit test actually works. It is not a guarantee, but a good sign. Then we do everything to make this unit test (and, at the same time, all the other already existing unit tests) pass. Finally, we might improve our method.
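The cycle could look as follows. This is a minimal sketch with a hypothetical `slugify` helper (not code from this article), using Python's `unittest` module.

```python
import unittest

# Hypothetical example: a slugify helper developed test-first.
# Step 1: write the test below; with the body of slugify replaced by
# `raise NotImplementedError`, the test fails (red).
# Step 2: write just enough code to make it pass (green).
# Step 3: improve the method while all tests stay green.
def slugify(title):
    return title.strip().lower().replace(" ", "-")

class SlugifyTest(unittest.TestCase):
    def test_basic_title(self):
        self.assertEqual(slugify("  Hello World  "), "hello-world")
```

Seeing the test fail first is exactly the "first indicator" mentioned above: it shows the test is able to fail at all.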

Where is the flaw in this method? The whole process seems so over-engineered that barely anyone actually follows it. To be honest, I like to call it an ideal process. I talk a lot about it and usually state that TDD is a good thing. But I rarely use real TDD. Instead, my process is most of the time a derivative of the following:

  1. Write some code (probably after some specification)
  2. Write a bunch of unit tests to test the code
  3. Improve the code and / or the unit tests (more or less strictly, depending on the problem or the assertions)
  4. Do some refactorings
  5. Also write more tests for that specific part of the code
  6. Maybe adjust the unit tests to the refactoring
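In practice the steps above might look like this; `word_count` is a hypothetical example, not code from this article.

```python
import unittest

# Steps 1-2: write the code first (after some specification), then a
# bunch of unit tests covering it after the fact.
def word_count(text):
    # Step 4's refactoring already happened here: split() without
    # arguments handles leading, trailing, and repeated whitespace.
    return len(text.split())

class WordCountTest(unittest.TestCase):
    # Steps 3 and 5: improve the tests and add more for edge cases.
    def test_simple(self):
        self.assertEqual(word_count("one two three"), 3)

    def test_empty(self):
        self.assertEqual(word_count(""), 0)

    def test_extra_whitespace(self):
        self.assertEqual(word_count("  a   b  "), 2)
```

Step 6 would only come into play if a refactoring changed the API, e.g. renaming the function, in which case the tests are adjusted but never removed.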

The adjustment process is necessary due to API changes or other previously unknown issues. No unit test will be removed in this process, and no assertion will be deleted just because it makes a test fail. There are boundaries: the basic idea is to improve the API while keeping the existing unit tests (with everything working, always).

Additionally, I usually follow my concept of Bug-Driven-Development or BDD (this is not the same as the anti-pattern described in the Wikipedia article given in the references). This concept has basically three parts:

  • Write the code
  • Test the code
  • Maintain the code

The third part consists mainly of refactoring steps. But there is also the bug report step, where reported issues are basically treated in the following order:

  1. Trace the reported issue back to the underlying problem
  2. Write code that illustrates the problem (a failing test)
  3. Make the test pass without making other tests fail
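A regression test following these three steps might look like this. Both the `average` function and the bug report are hypothetical examples, assumed for illustration.

```python
import unittest

# Hypothetical bug report: average([]) crashed with a ZeroDivisionError.
def average(values):
    # Step 3: the fix, guarding against the empty input from the report.
    if not values:
        return 0.0
    return sum(values) / len(values)

class EmptyInputRegression(unittest.TestCase):
    def test_empty_input(self):
        # Step 2: code that illustrates the problem. Before the fix this
        # raised ZeroDivisionError; it stays in the suite permanently,
        # so the bug cannot silently return.
        self.assertEqual(average([]), 0.0)

    def test_normal_input(self):
        # Guards step 3: the fix must not break the ordinary case.
        self.assertEqual(average([1, 2, 3]), 2.0)
```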

The biggest problem here is certainly the first step. We need to boil the original bug report down to the method that is responsible for the misbehavior. Then we can write a unit test that makes the error reproducible. This is not a guaranteed process, as some problems may only occur depending on the environment (hardware, software, date, ...). This is the point where concepts like mocking become crucial.
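For instance, a date-dependent bug can be made reproducible by mocking out the clock. This is a sketch with a hypothetical `greeting` function, using Python's `unittest.mock`.

```python
import datetime
import unittest
from unittest import mock

# Hypothetical function whose result depends on the environment (the date).
def greeting(clock=datetime.date.today):
    today = clock()
    if today.month == 1 and today.day == 1:
        return "Happy New Year!"
    return "Hello!"

class GreetingTest(unittest.TestCase):
    def test_new_year(self):
        # Replace the environment dependency with a fixed, reproducible
        # date, so the test no longer depends on when it is run.
        fixed = mock.Mock(return_value=datetime.date(2020, 1, 1))
        self.assertEqual(greeting(clock=fixed), "Happy New Year!")
```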

The philosophy of BDD gives me great freedom while increasing the stability of the program with every user report. I can be quite sure that fixing a new bug does not (re-)open old issues. It is nevertheless worth noting that BDD has nothing to do with treating your customers like beta testers. Most issues should be reported and listed while actually writing the program; the usual way is to play around with the source code. This then results in more test scenarios and the discovery of possible problems.

In my opinion, BDD is just applied TDD with fewer boundaries and more flexibility. In the end, finding the right mixture of TDD and BDD sounds quite reasonable to me.


