Thursday, 30 October 2014

Agile Experiments - Mitigating Risk, Maximising Benefits, and Guiding The Way

Overview

Complexity Theory tells us that the systems that we deal with (corporations, business units, development teams, and the interactions between them) are Complex Adaptive Systems.

In Complex Adaptive Systems, using experiments to guide decisions will frequently outperform using expert opinion.


A CAS is a system in which the components interact dynamically, frequently producing unexpected outcomes. (How often have you checked in and run new code, or sent off an email, or just asked a question, and then been surprised by the explosion that followed?)

These unexpected outcomes make predictions (and thus plans) unreliable, so Complexity Theory suggests that when dealing with a CAS and a substantial number of unknown factors, decisions guided by experiments will frequently outperform decisions guided by expert opinion alone.

Unexpected interactions are not always negative. In addition to reducing our risk profile, experiments allow us to identify unexpected opportunities.

The uncertainties that we deal with exist at many levels, ranging from code behaviour, through requirements elicitation, to organisational uncertainty:

  • How will this new code module interact with the old code?
  • Is there really a business case underlying this unclear user story?
  • How will overseas users respond to this UX change?
  • How will the governance process owner respond to this Agile planning approach?
  • How will the Operations team respond when they are told that they need to support these new products with Automated Regression Testing frameworks built into the code?



Examples of experiments that might guide decision making include:

  • Rapid Prototyping 
  • Wireframes
  • Walkthroughs
  • Reconnaissance by coding 
  • Buy the head of the Project Management Office (PMO) a coffee and run him through a new process over Afternoon Tea
  • Implement a new process incrementally in one area of the organisation

Why Experiments?

About Complex Systems:
  • A complex system has no repeating relationships between cause and effect. For example: offering the iMac in citrus colours was a big hit for Apple in the late 1990s - it is frequently credited with saving the company - but if they did it again today, would it work again? In complex systems, reliable repeatability simply cannot be expected - the same answer rarely works twice. To quote a lecturer in economics who was asked why his exam papers were identical every year: "Sure, the questions are always the same, but the correct answers change every year!"
  • A complex system is highly sensitive to small interventions - the famous "Butterfly Effect". For example, it is common for a tiny change to a piece of code or configuration (such as a slight change to an IP address) to bring down a large software system completely.
  • The success of an action in a complex system cannot be assumed even if it achieves the intended goal, because unexpected side-effects are common, and the downside of those side-effects may outweigh the upside of the intended goal. (Example: changing a web server's IP address to move it into the same domain as the database server may speed up access to the database - the intended goal - but moving the web server behind a firewall may render it unreachable - an unintended but lethal side-effect.)
Hence, when dealing with complex systems there are benefits in experimentation. "Safe-fail probes" are small-scale experiments that approach an issue from different angles in small, contained, safe-to-fail ways, allowing emergent possibilities to become visible. The emphasis is not on avoiding failure, but on monitoring the outcome: ideas that are not useful are allowed to fail in small, contained ways, while ideas that show desirable outcomes can be adopted and used more widely.
See more on Safe-fail probes at: http://cognitive-edge.com/library/methods/safe-to-fail-probes/

The take-away lesson is this:

  • The outcome of acting on a complex system is rarely predictable. There are no repeating relationships between cause and effect, and the emergent behaviour of a complex system is frequently unrelated to the actions of its component parts.
  • A complex system is input-sensitive - small changes may produce large payoffs.
  • Consequently, complex systems sometimes offer big payoffs for small experiments.
  • The precise outcome of any given experiment is hard to predict, so managing this process requires that the outcome of each experiment be monitored. Successful experiments are expanded, while unsuccessful experiments are terminated (see the sketch below).
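
To make this concrete, here is a minimal sketch in Python of the monitor-expand-terminate cycle. The probe names, the scoring function, and the threshold are all hypothetical illustrations, not from the original post:

```python
import random

EXPAND_THRESHOLD = 0.6  # hypothetical cut-off for "worth expanding"

def run_probe(name: str) -> float:
    """Run one small, contained experiment and return an outcome score.

    The score is simulated here; in practice it would come from monitoring
    the real outcome and side-effects of the probe.
    """
    return random.random()

# Several safe-to-fail probes approaching the issue from different angles.
probes = ["pair_programming_pilot", "new_design_pattern", "kanban_trial"]

for probe in probes:
    score = run_probe(probe)  # monitor the outcome of each experiment
    if score >= EXPAND_THRESHOLD:
        print(f"{probe}: expand (score={score:.2f})")     # adopt more widely
    else:
        print(f"{probe}: terminate (score={score:.2f})")  # fail small, keep the lessons
```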

Specific examples of safe-to-fail experiments might be:

  • At the developer level - adopting a new design pattern in a contained area of the project
  • On a larger scale - introducing a new methodology in a small Business Unit

Monitor the outcome and consider what effects the intervention had (both the outcome and any side-effects). If the undesirable impacts outweigh the desirable effects, the experiment is not a "failure": the observed outcome and side-effects provide a guide towards evolving a path to a more desirable outcome.
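
At the developer level, one common way to keep such an experiment contained is a feature flag that routes only one small area of the code through the new pattern. A minimal sketch, assuming a hypothetical order-processing function and an environment-variable toggle:

```python
import os

# Hypothetical toggle: the experiment runs only where this flag is set.
EXPERIMENT_ENABLED = os.environ.get("NEW_PATTERN_ENABLED") == "1"

def process_order_classic(order: dict) -> dict:
    """The existing, proven implementation."""
    return {"id": order["id"], "total": sum(order["items"])}

def process_order_experimental(order: dict) -> dict:
    """The new pattern under trial: same contract, different internals."""
    total = 0
    for item in order["items"]:
        total += item
    return {"id": order["id"], "total": total}

def process_order(order: dict) -> dict:
    """Route one contained area of the code through the experiment."""
    if EXPERIMENT_ENABLED:
        return process_order_experimental(order)
    return process_order_classic(order)

print(process_order({"id": 1, "items": [3, 4]}))
```

If the experimental path misbehaves, the flag is simply switched off: the failure stays small and contained.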
Note that the requirement for safe-to-fail has an important implication in Software Engineering: we need the capability to create test environments that are sufficiently "production-like" to yield useful lessons, but sufficiently disconnected from the main codebase that side-effects of experiments have no impact. This leads to Agile practices such as "mocking" (http://en.wikipedia.org/wiki/Mock_object).
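
As a minimal illustration of the idea, here is a sketch using Python's standard unittest.mock module. The gateway dependency and its charge() method are hypothetical names invented for this example:

```python
from unittest.mock import Mock

def checkout(gateway, amount: float) -> bool:
    """Code under experiment: charge a payment gateway and report success."""
    receipt = gateway.charge(amount)
    return receipt["status"] == "ok"

# A mock stands in for the production-like dependency, so side-effects of
# the experiment never touch the real system.
gateway = Mock()
gateway.charge.return_value = {"status": "ok"}

assert checkout(gateway, 42.0) is True
gateway.charge.assert_called_once_with(42.0)  # verify the interaction occurred
```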

When To Use Experiments?


As a guide, if either of the following is true, then you are looking at a candidate for experiments:
  • You have identified an area that you want to exploit, but don't know how to exploit it. Keep the investment small until you have demonstrated that the upside of success is large, then focus on the identified upside. Automation and the use of coding patterns are common examples of areas where experiments might yield large benefits.
  • You have a job to do, but you don't know how to do it, and the environment is large and complex. If the area is large and volatile, with a number of unknowns or risks, then the focus is on a broader picture: minimising risk and maximising opportunity by iteratively refining your strategy for reaching your intended destination. Experiments that seek to navigate the unknown should be small probes, run iteratively, with the learning from each probe guiding the direction of the next. For example, when seeking to implement an Organisational Transition, such as an Agile Transformation, run your experiments in cycles:

When navigating the unknown (e.g. performing an Organisational Transition), run experiments in cycles, implementing what works and discarding what does not.

Suiting The Experiment Type To The Situation

It is often said that Fleming's discovery of penicillin was an accident. This is not entirely true. Fortuitous perhaps, but not a complete accident.

How it happened: Fleming was experimenting with growing bacteria in culture plates. Returning from holidays, he discovered that a number of uncovered culture plates had been contaminated by a mould that had blown in through a carelessly-opened window. Wherever the mould grew, the bacteria were absent. Fleming recognised the significance at once, collected and labelled the plates, and so began the study that would result in antibiotics - one of the greatest advances in health in the history of humanity.

Accident? Fleming had been looking for agents that would kill bacteria. He was working in a lab that was equipped to detect these agents and, most importantly, he was working with the right mindset to recognise such an agent when he saw one, even if it wasn't where and when he expected to see it. The contamination of the plates was fortuitous, but the right environment and mindset had been created and the experiments were running.

In deciding what type of experiment to run, ask yourself: what upside am I trying to achieve? Define areas of focus where the cost can be small and the upside large.

Then make sure that your experimenters are equipped with not just the right equipment and information, but the right mindset. They should be looking for opportunities to achieve upside, not just attempting to achieve a particular outcome. 

Remember that you need to choose between two types of experiment:
1. Experiments that focus on knowns - seeking to achieve an upside in a known high-value area (in Fleming's case, an agent that would kill bacteria was known to be a likely high-value discovery).
2. Experiments that seek to navigate the unknown - reducing risk and identifying areas that might yield high-value results. This type of experiment yields two things: clarity of direction, and opportunities for future exploration.
 
The approach to each is similar, but with some notable differences. 
If you are seeking to navigate the unknown:
1. Gather domain experts.
2. Clarify what areas might be of interest. Where are your unknowns?
3. Once you have identified your areas, gather your testing/UX/experimentation experts. What is the best tool to use in each area? (Wireframes to clarify thinking? Rapid prototyping to prove which options might work? Reconnaissance by coding to find out which approach demonstrates acceptable performance? Combinations of some or all? Or something else entirely?)
4. Put together a team and set them up to do the experiments.
5. Make the timebox for each experiment clear. This is not open-ended investment. (A sketch of this timeboxed cycle appears after this list.)
6. Make it clear that they are not just there to achieve a particular outcome; they are trying to learn lessons and identify risks and opportunities.
7. Design your first experiment.
8. Implement the experiment.
9. Once the experiment is done, identify the lessons. Have we achieved clarity of direction? Have risks, issues, and opportunities for future exploration been identified?
10. Retain the positive and implement it. Discard the negative and learn from it.
11. Based on what you have learned, is there a business case for continuing to run experiments?
12. If there is a business case, design your next experiment and go to step 8. Repeat as required.
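
As an illustrative sketch of steps 5 and 8 to 12, the cycle can be pictured as a timeboxed loop that continues only while a business case remains. The lesson structure, the stopping rule, and all names below are assumptions for illustration:

```python
import time

TIMEBOX_SECONDS = 5 * 60  # step 5: a clear, hypothetical timebox per experiment
MAX_CYCLES = 10           # a hard cap - this is not open-ended investment

def run_experiment(cycle: int, deadline: float) -> dict:
    """Steps 8 and 9: implement the experiment and identify the lessons."""
    lessons = {"clarity": cycle * 0.2, "risks": [], "opportunities": []}
    if time.monotonic() > deadline:
        lessons["timed_out"] = True
    return lessons

def business_case_remains(lessons: dict) -> bool:
    """Step 11: continue only while there is still something worth learning."""
    return lessons["clarity"] < 1.0

for cycle in range(1, MAX_CYCLES + 1):
    deadline = time.monotonic() + TIMEBOX_SECONDS  # start the timebox
    lessons = run_experiment(cycle, deadline)
    print(f"cycle {cycle}: {lessons}")
    if not business_case_remains(lessons):         # step 12: go again, or stop
        break
```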

The Differences

  • Experiments in an area known to be likely to yield upside will tend to focus on identifying rewards in the specific area.
  • Experiments for navigating the unknown will tend to focus on identifying positive directions for further development or investigation and (equally importantly) directions that are unlikely to reward further investigation.

Further References:

EDD: Experiment Driven Development
David Clarke at LAST Conference 2014: Managing Risk Exploiting Chaos
