Test Driven Development (TDD)
TDD, as usually practiced, focuses on writing automated Unit Tests. The developer writes a test that fails until the code performs the functionality the test expects: first the coder writes the test, then the coder writes the functionality that makes it pass. Refactoring the code to improve structure and readability is commonly done as a final tidy-up step. The main focus of TDD is on testing low-level functionality in single modules, classes or methods.
Kent Beck wrote Test-Driven Development: By Example (ISBN 978-0321146533) in 2002, and is credited with developing and promoting TDD as a practice. Beck said "Test-Driven Development is a great excuse to think about the problem before you think about the solution".
TDD is one of the core XP (Extreme Programming) technical practices in Agile and is frequently credited as one of the reasons for the low defect density associated with Agile projects.
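As a minimal sketch of the red-green-refactor cycle described above (the module, function and test names are hypothetical, and pytest is assumed as the test runner): the test is written first and fails, then just enough code is written to make it pass, then the code is tidied up.

```python
# test_price.py -- written FIRST; it fails until price.discount() exists and behaves as expected.
import pytest
from price import discount

def test_ten_percent_discount_is_applied():
    assert discount(100.0, 0.10) == pytest.approx(90.0)

def test_negative_rate_is_rejected():
    with pytest.raises(ValueError):
        discount(100.0, -0.10)
```

```python
# price.py -- written SECOND: the simplest code that makes the tests pass,
# refactored for structure and readability once the tests are green.
def discount(amount: float, rate: float) -> float:
    if rate < 0:
        raise ValueError("rate must not be negative")
    return amount * (1 - rate)
```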
Criticism:
1. David Heinemeier Hansson (Agile developer and creator of Ruby on Rails and the Instiki wiki) compares Agile's inflexible requirement for test-first development to religious fundamentalism. Hansson is a strong supporter of Automated Regression Testing, but questions the value of always writing the unit test first. He asserts that test-first development is leading to inelegant software design.
2. James O. Coplien (often known as "Cope") is one of the founders of Agile and a leading software theoretician who offers a similar view, suggesting that "Most Unit Testing Is Waste". He points out that the massive computational complexity of testing every unit through every pathway renders the idea of 100% unit test coverage unachievable, thus calling into question the value of attempting such broad testing at the unit level. He is not opposed to the concept of writing tests to prove that the functionality of code matches the requirements; he simply questions the value of broad testing at the unit level. His advice is to focus most attention on Functional and Integration Tests. He provides this guidance:
- "Low-Risk Tests Have Low (potentially negative) Payoff "
- "So one question to ask about every test is: If this test fails, what business requirement is compromised? Most of the time, the answer is, "I don't know." If you don't know the value of the test, then the test theoretically could have zero business value. The test does have a cost: maintenance, computing time, administration, and so forth. That means the test could have net negative value."
- "If you cannot tell how a unit test failure contributes to product risk, you should evaluate whether to throw the test away. There are better techniques to attack quality lapses in the absence of formal correctness criteria, such as exploratory testing and Monte Carlo techniques. (Those are great and I view them as being in a category separate from what I am addressing here.) Don’t use unit tests for such validation."
- "there are some units and some tests for which there is a clear answer to the business value question. One such set of tests is regression tests; however, those rarely are written at the unit level but rather at the system level."
- "if X has business value and you can text X with either a system test or a unit test, use a system test — context is everything."
- "Design a test with more care than you design the code."
- "Turn most unit tests into assertions."
Response to criticism:
Kent Beck has mounted a spirited defence against these criticisms. Among other things, he responds that TDD is NOT intended to be universally applicable. In conversations with Martin Fowler (http://martinfowler.com/articles/is-tdd-dead/) he further points out that much of the criticism is about the granularity and extent of testing, but this is a decision that should come down to the preference of individual developers:
- "TDD puts an evolutionary pressure on a design, people have different preferences for the grain-size of how much is covered by their tests."
Beck also makes the comment (with regard to the level at which tests are written and the utility of maintaining tests) that he would:
- "often write a system-y test, write some code to implement it, refactor a bit, and end up throwing away the initial test. Many people freak out at throwing away tests, but you should if they don't buy you anything"
This certainly suggests that Kent Beck takes a more pragmatic approach to TDD than some of the Agile evangelists who promote the practice. His guidance can perhaps be used, in conjunction with the other comments above, as a guide when deciding what tests to write and at what level they should be written.
Acceptance Test Driven Development (ATDD)
ATDD is a collaborative exercise that involves product owners, business analysts, testers, and developers. By defining the tests that need to pass before the functionality will be accepted, ATDD helps to ensure that all project members understand precisely what needs to be implemented. Once implemented in an automated testing suite, the acceptance tests continue to confirm that the functionality works, providing regression testing after code or configuration changes.
So, in theory, TDD focuses on the low level and ATDD focuses on the high level, suggesting a clear separation. But in practice it simply isn't that clean. Many coders struggle with questions like what to test, where (at what level) tests should "live", and how to demonstrate to customers that a code-based Acceptance Test actually proves that the code is performing as defined in the requirements.
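To make the low-level/high-level contrast concrete, here is a hedged sketch of an acceptance-style test (the ShopApplication facade, its methods and the free-shipping rule are all hypothetical): it exercises a user-visible behaviour through the system's public interface rather than a single class or method.

```python
import unittest

from shop import ShopApplication  # hypothetical facade over the system under test

class CheckoutAcceptanceTest(unittest.TestCase):
    """Acceptance-level test: drives the whole system through its public
    interface to check the behaviour the customer agreed to."""

    def test_order_over_100_qualifies_for_free_shipping(self):
        app = ShopApplication.start_in_memory()   # whole system in a test configuration
        app.add_to_cart("customer-42", item="chair", price=120.00)
        order = app.checkout("customer-42")
        self.assertEqual(order.shipping_cost, 0.00)

if __name__ == "__main__":
    unittest.main()
```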
What is the Difference - TDD Vs ATDD
The concepts that drive Test-Driven Development are related to those that drive Acceptance Test-Driven Development (ATDD). Consequently, tests used for TDD can sometimes be compiled into ATDD tests, since each code unit implements a portion of the required functionality. However, TDD is primarily a developer technique intended to create a robust unit of code (module, class or method). ATDD has a different goal: it is a communication tool between the customer, developer, and tester intended to ensure that the requirements are well-defined. TDD requires test automation. Although ATDD is usually automated, this is not strictly required.
Limitations of Conventional ATDD as a Communication Tool
ATDD is envisaged as a collaborative exercise that involves product owners, business analysts, testers, and developers. In theory, these stakeholders work together to design ATDD tests that provide automated, code-based traceability from requirements to functionality. But this is difficult to do in practice because the tests are specified and written in a programming language that is usually incomprehensible to non-technical stakeholders.
This opaqueness acts as a blocker to communication and makes the required collaboration difficult for the developer and frustrating for the PO and other non-technical stakeholders. Can the PO agree that an automated test effectively addresses the required functionality if that test is implemented in code that is opaque to the non-technical PO? How can the PO understand what is being tested if there is no shared and mutually comprehensible language for specifying the coded tests? Or to put it another way: how can we specify code and test behaviour in a way that is comprehensible enough to achieve customer sign-off, yet precise enough to be directly mapped to code? These questions led to a new approach, known as Behaviour Driven Development (BDD).
Behaviour Driven Development (BDD)
BDD was an effort pioneered by Dan North (http://dannorth.net/introducing-bdd/) to clarify questions like:
• What should I test?
• How do I derive tests from Acceptance Criteria?
• How can I make automated tests that are easy for my customer to understand?
• How will the customer confirm traceability from requirements to tests?
It provides a philosophical framework for approaching everything from eliciting requirements to defining tests.
BDD provides a shared and mutually comprehensible technique for specifying the tests, by linking the tests back to the Acceptance Criteria, and specifying that the Acceptance Criteria should be defined using scenarios in a "Given-When-Then" format (see the section on How To Write Conditions of Satisfaction and Acceptance Criteria). The Given-When-Then format supports the definition of scenarios in terms of:
• preconditions/context (Given),
• input (When), and
• expected output (Then).
This style of scenario construction lends itself well to defining tests. The tests are easy to specify and after they are coded the test behaviour is easy to demonstrate to a non-technical PO. Since the test implements a scenario, it can be instantly grasped by the PO.
BDD links Acceptance Criteria to tests in a way that is both customer-focussed and customer-friendly. BDD is usually done using natural language, and often uses additional tools to make it easy for non-technical clients to understand the tests, or even write them. "Slim" is an example of such a tool (or FitSharp in the .NET world). BDD is thus intended to allow much easier collaboration with non-technical stakeholders than TDD.
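As an illustration of how a Given-When-Then scenario can map directly onto executable code, here is a hedged sketch using behave, a Cucumber-style Python tool that plays a similar role to the Slim/FitSharp tools mentioned above (the scenario, the Account class and the step names are all hypothetical). The PO reads and agrees to the plain-language scenario; the developer binds each line of it to a step definition.

```python
# features/withdrawal.feature -- plain language the PO can read and sign off:
#
#   Scenario: Withdrawal within the available balance
#     Given the account balance is 100
#     When the customer withdraws 30
#     Then the account balance should be 70

# features/steps/withdrawal_steps.py -- developer-written bindings:
from behave import given, when, then

class Account:
    """Defined inline to keep the sketch self-contained;
    normally this is the production code under test."""
    def __init__(self, balance: int):
        self.balance = balance

    def withdraw(self, amount: int) -> None:
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

@given("the account balance is {balance:d}")
def step_given_balance(context, balance):
    context.account = Account(balance)

@when("the customer withdraws {amount:d}")
def step_when_withdraw(context, amount):
    context.account.withdraw(amount)

@then("the account balance should be {expected:d}")
def step_then_balance(context, expected):
    assert context.account.balance == expected
```

When behave runs, each scenario line is matched to its step definition and the assertions in the Then steps determine pass or fail, so the same plain-language text serves as both the agreed Acceptance Criterion and the automated test.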
In line with its benefit-driven approach, BDD frequently changes the order of the usual formula for story writing. Instead of the usual formula:
As A… <role>
I Want… <goal/desire>
So That… <receive benefit>
The new BDD order puts the benefit first:
In Order To… <receive benefit>
As A… <role>
I Want… <goal/desire>
But all this doesn't mean that BDD should replace TDD or ATDD. BDD can be used as a philosophy behind both TDD and ATDD, helping the coder decide what to test and which tests should live where.
Even more importantly, the BDD "Given, When, Then" formula allows the conversation with the customer (PO or BA) to continue into the domain of acceptance tests, further validating the coder's understanding of the product while simultaneously strengthening the customer's confidence that the code does what is expected.
Experiment Driven Development
Experiment Driven Development (EDD) is a relatively new concept in Agile. The theory is that implementing TDD, ATDD or BDD only works if the assumptions about business benefits (usually made by the PO) are correct! EDD suggests that we test those assumptions with lightweight experiments before we commit a major coding effort to the concept.
EDD suggests that if the Product Owner fails to grasp the big picture, or makes a wrong assumption, then TDD and BDD will simply drive optimizations in one area that negatively impact the business as a whole (in line with the predictions of Systems Thinking and Complexity Theory that unexpected outcomes are the norm in Complex Adaptive Systems). The EDD recommendations are in line with the concept of Agile Experiments.
To support the experimental nature of this type of development, EDD introduces a User Story formula for experimental stories:
We believe <this capability>
Will result in <this outcome>
We will know we have succeeded when <we see this measurable sign>
To implement this story formula, the first step is to develop a hypothesis concerning features that might provide business benefits, then construct a story around the hypothesis:
We believe <this capability>
- What functionality could you develop to test the hypothesis? Define a test for the product or service that you are intending to build.
Will result in <this outcome>
- What is the expected outcome of your experiment? What specific business benefit do you expect to achieve by building this test functionality?
We will know we have succeeded when <we see this measurable sign>
- Based on the possible business benefits that the test functionality might achieve (remember that there are likely to be several), what key metrics (qualitative or quantitative) will you measure?
Example
An example of an EDD User Story might be:
We Believe That increasing the size, prominence and graphic nature of images of fire damage and casualties on our page
Will Result In earlier civilian evacuation prior to a fire
We Will Know We Have Succeeded When we see a 5% decrease in unprepared civilians staying in their homes.
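As a hedged sketch of how the measurable sign in this example might be evaluated (the figures and names are illustrative only, and the 5% is interpreted here as a relative decrease against a baseline period):

```python
def experiment_succeeded(baseline_stay_rate: float,
                         experiment_stay_rate: float,
                         required_relative_drop: float = 0.05) -> bool:
    """Success criterion: at least a 5% relative decrease in the rate of
    unprepared civilians staying in their homes."""
    relative_drop = (baseline_stay_rate - experiment_stay_rate) / baseline_stay_rate
    return relative_drop >= required_relative_drop

# Illustrative figures only: 40% stayed during the baseline period, 37% after the change.
print(experiment_succeeded(0.40, 0.37))  # True: roughly a 7.5% relative decrease
```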