Two related bodies of knowledge, Systems Thinking and Complexity Theory, help us understand the theoretical basis behind the success of Agile. Consequently, understanding these theories can help us apply Agile practices and behaviours more effectively.
I have been considering these areas recently, and I open up some of my early thoughts here for review:
Policy Paralysis: Unanticipated Interactions.
I recently worked for a very large IT company that was going through hard times. It came up with a raft of seemingly reasonable policies that proved insane when applied in combination. Here are two seemingly sensible policies in response to economic hard times:
1. The company did not want to fire anybody unnecessarily, so it put in place a policy that said that if a local customer needed someone that we did not have locally, before we could hire new staff we had to check to see if the company had someone at some other location who was “on the bench” and in danger of being fired. If so, we would offer that person to the customer.
2. The company needed to reduce costs, so it had a policy that said that we were not allowed to fly to a meeting – we should use telephones or video-conferencing instead. The only travel that would be approved by upper management was travel that the customer approved and paid for.
These seemingly innocent policies, when taken in combination with the existing policies and environment, drive the system into suicidal dysfunction. It works like this:
The salesperson (I’ll call him John) sees an opportunity to supply four Scrum Masters and a Project Manager to the customer for a 9-month contract in my city. John brings the opportunity to me. We don’t have any spare Scrum Masters or PMs in my city, so I implement Policy 1 above and discover we have some spare people interstate. However, the company has an older policy that states that, for any interstate relocation of longer than 6 months but less than a year, the employee has the right to choose between a range of options, including being paid relocation costs for his whole family.
All very enlightened. But in view of the requirement in Policy 2, that the customer must explicitly approve and pay for all travel costs, this meant that we needed to directly pass that cost to the customer, rather than just building travel costs into our day rate.
So I told John that the customer would have to pay all travel and/or relocation costs as a defined line item, rather than just wrapping it up in an overhead built into the day rate.
Naturally, we have a policy that says that salespeople who don’t sell anything get fired. John knew that he couldn’t make the sale if he told the customer that they had to pay the relocation costs to move our employee’s families to this city for 9 months, particularly since there are plenty of very capable Scrum Masters and PMs in this city already.
So John did what he needed to do to avoid the sack. He decided that how the staff would get here was a Delivery issue, and therefore owned by the Delivery Team. He made the sale without discussing travel costs, and left it to Delivery to discuss relocation costs with the customer.
This broke the customer’s trust in us of course, and it will be some time before we do any business with them again.
Customers in general report that we are “too hard to do business with”, and this invariably stems from the requirement to satisfy multiple contradictory internal policies – each of which seems perfectly sensible on its own.
Did we try to change these policies? Of course we did. But each policy was issued by a different authority, and each authority was charged with meeting a different agenda. Each policy addressed the authority’s required agenda, so we had no hope of getting the issuing authority to change it.
Individually, every policy was perfectly sensible. It was only in combination that they were toxic.
Which brings me to Systems Thinking…
Systems Thinking Overview
Systems thinking is not one thing, but a set of habits and practices within a
framework. It is based on the belief that, rather than study a system in
isolation, a system is best understood by examining the context of the system
and its interactions with other systems. Systems thinking focuses on the impacts
of the interactions.
Systems Thinking seeks to understand how small catalytic events in one part
of a system can trigger significant and unexpected changes in other parts
of complex systems. An improvement in one area of a system can have a
catastrophic impact on another area of the system - a phenomenon very familiar
to Software Developers.
Russell Ackoff is usually considered one of the first to apply Systems Thinking to management.
Ackoff is credited with a number of insights into Systems Thinking as it applies to management including:
· A system is not the sum of the behaviour of its parts; it is the product of their interactions.
· If a programme of improvement is directed at improving the parts individually and separately, you can be absolutely sure that the performance of the whole will not be improved.
Classifying Systems
Many different methods for classifying systems exist, each allowing us to consider a system from a different perspective. Russell Ackoff proposed that the following four classifications be applied in Systems Thinking, based on the degree to which the system is able to exercise choice (or purpose):
1. Deterministic (or Mechanical) Systems. If the individual parts of the system display no choices, and the system as a whole does not display choices, it is a deterministic system: a wind-up watch, for example.
2. Animated (or Organic) Systems. If the parts do not display choices while the whole does, it is an animated system: a person, for example. These systems are also often called organic systems.
3. Social Systems. If both the parts and the whole display the ability to make choices, it is a social system: an organisation, for example.
4. Ecological Systems. If some parts of the system display choices but the system as a whole does not, it is an ecological system: an island ecology or a business ecology, for example.
(Classification adapted from Ackoff on Leaders: http://frank543.homestead.com/Ackoff_on_Leaders.pdf )
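Ackoff's classification is essentially a two-question lookup: do the parts display choice, and does the whole? A minimal sketch in Python (the function name and labels are mine, not Ackoff's):

```python
def classify_system(parts_choose: bool, whole_chooses: bool) -> str:
    """Ackoff's four system types, keyed on where choice (purpose) resides."""
    if not parts_choose and not whole_chooses:
        return "deterministic"   # e.g. a wind-up watch
    if not parts_choose and whole_chooses:
        return "animated"        # e.g. a person
    if parts_choose and whole_chooses:
        return "social"          # e.g. an organisation
    return "ecological"          # e.g. an island ecology

# A corporation: both the people and the organisation make choices.
print(classify_system(parts_choose=True, whole_chooses=True))  # social
```

The point of the lookup is the argument that follows: organisations land in the "social" cell, yet are often managed as if they were in the "deterministic" one.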
Historically, Governments, Corporations, Business Units and Project Teams
have been studied and run like mechanical systems - machines for delivering
services (Government) or generating profit (Corporations).
More recently
Governments and Corporations have been viewed as organic systems - an organism
aiming for growth, with profit being one measure of growth. But neither of these
models fits the reality. The reality is that Governments, Corporations, Business
Units and Project Teams are examples of Social Systems - able to make decisions
at each level of abstraction - and treating the people in these organisations as
if they were the mechanised parts of a mechanistic system can lead to as many
problems as treating the organisation itself as if it will display simple,
mechanistic and predictable behaviour.
Because complex organisations display decision making at multiple levels, and
because these decisions can interact with each other and with the
environment, large organisations display a very high level of complexity, with
unexpected paths and interactions constantly emerging and changing. Because of
these complex interactions, large organisations may behave in unpredictable -
sometimes even irrational - ways.
Amoral Companies: The Behaviour of the System is Not The Sum of its Parts
In such a complex environment, the interactions between the parts of a system influence behaviour more than the actions of its individual parts. This is because the emergent properties of the system derive from the interactions between the parts, not from the individual actions - or even the individual intentions - of the people who make up those parts.
Emergent properties are defined as properties of the system but not of its individual parts. For instance, a corporation may be composed of people who do not display amoral or predatory behaviour, yet the corporation may display both amoral and predatory behaviour as an emergent property of the interaction between its individual parts within the context of its environment.
Systems need to adapt to their environment to survive. Any system (in our area of interest, a system might be a corporation, a Business Unit, or even a project team) that freezes, or that chooses to go in the wrong direction in a changing environment, will suffer harm or even die. Yet for seemingly incomprehensible reasons, systems frequently take such actions despite the good intentions of the people who make them up.
(Have you ever asked one of these questions: “How come we have so many smart people in this nation, yet we have a foreign policy that seems to be mostly a collection of pointless wars, while our trade policy revolves around sending everything we have overseas, and then buying it back by going into debt?” Or: “How come my company is full of smart people, yet we are being slowly strangled by misguided policies while being driven headfirst towards certain destruction by a strategy that looks like it was produced as an April Fools’ joke by a departing work-experience student?”)
Ackoff believed that the cause of these irrational and potentially lethal actions is generally not one problem or influence, but a set of interacting problems, goals and influences. Systems Thinking does not offer a “fix”; it tells you what to avoid. Once the interacting “mess” is identified, Systems Thinking tells you what not to do if you want to avoid further irrational actions.
Unfortunately, just identifying what to avoid is rarely an effective management technique.
Management of a system requires directing the system towards a goal, not just avoiding undesirable outcomes. We therefore need strategies for developing solutions in an environment of constant change and unpredictable interacting factors.
The Argument So Far: Unexpected interactions sometimes drive irrational and unwanted emergent behaviours in complex organisations. These interactions are often unpredicted, but can be understood and explained after they emerge. The Agile Methodology’s emphasis on flexibility and adaptation is well suited to this environment, and offers us an approach for dealing with it.
Complexity Theory
Complexity Theory helps us:
· classify the nature of the problems we face,
· recognise which of these problems are of the complex type described above, and
· define approaches that will help us direct our projects towards a goal, despite the constant, unexpected turbulence around us.
My Take on Complexity
In the Agile Community, Dave Snowden is probably one of the most influential shapers of our understanding of Complexity Theory.
The decision-making implications of Snowden's complex systems framework ( http://en.wikipedia.org/wiki/Cynefin ), and his emphasis on running experiments to inform decisions and strategies in complex environments, have shaped the actions of many Agile practitioners in some of the most high-profile and difficult projects and environments.
In 2007 Snowden's conceptual framework around Complexity Theory was the cover feature in the Harvard Business Review (November 2007 issue - http://hbr.org/2007/11/a-leaders-framework-for-decision-making/). The paper was a Citations of Excellence winner as one of the 50 best papers published in 2007. It was also designated the 2007 Best Practitioner-Oriented Paper in Organizational Behavior by the Organizational Behavior Division of the Academy of Management, with the following comment:
"This paper introduces an important new perspective that has enormous future value, and does so in a clear way that shows it can be used. The article makes several significant contributions. First, and most importantly, it introduces complexity science to guide managers' thoughts and actions. Second, it applies this perspective to [] help leaders to sort out the wide variety of situations in which they must lead decisions. Third, it advises leaders concerning what actions they should take in response."
Snowden's Framework
The concepts behind Snowden's framework are deceptively simple. Snowden said that problems can be divided into five broad categories:
1. Simple problems. Simple is the domain of Best Practice. The problems are recurring and the Best Practice response is known.
Characteristics: Problems are well understood and solutions are evident. Solving problems requires minimal expertise. Many issues addressed by help desks fall into this category. They are handled via pre-written scripts.
Approach: Problems here are well known. The correct approach is to sense the situation, categorize it into a known bucket, and apply a well-known, and potentially scripted, solution.
2. Complicated problems. Complicated is the domain of "good practice". Problems are rarely repeated, so Best Practice has not been defined.
Characteristics: You have a general idea of the known unknowns - you likely know the questions you need to answer and how to obtain the answers. Assessing the situation requires expert knowledge to determine the appropriate course of action. Given enough time, you could reasonably identify the known risks and devise a relatively accurate plan. Expertise is required, but the work is evolutionary, not revolutionary.
Approach: Sense the problem and analyse. Apply expert knowledge to assess the situation and determine a course of action. Execute the plan.
3. Complex Problems. Complex is the domain of emergent solutions. The cause-effect relationship is broken, with a given action rarely achieving precisely the same outcome twice.
Characteristics: There are unknown unknowns — you don’t even know the right questions to ask. Even beginning to understand the problem requires experimentation. The final solution is only apparent once discovered. In hindsight it seems obvious, but it was not apparent at the outset. No matter how much time you spend in analysis, it is not possible to identify the risks or accurately predict the solution or effort required to solve the problem.
Approach: Develop and experiment to gather more knowledge. Execute and evaluate. As you gather more knowledge, determine your next steps. Repeat as necessary, with the goal of moving your problem into the “Complicated” domain.
4. Chaotic Problems. Chaotic is the domain of novel solutions.
Characteristics: As the name implies, this is where things get a bit crazy. Things have gone off the rails and the immediate priority is containment. Example: Production defects. Your initial focus is to correct the problem and contain the issue. Your initial solution may not be the best, but as long as it works, it’s good enough. Once you’ve stopped the bleeding, you can take a breath and determine a real solution.
Approach: Triage. Once you have a measure of control, assess the situation and determine next steps. Take action to remediate or move your problem to another domain.
5. Disorder. Disorder is the space of flux between problem types.
Characteristics: If you don’t know where you are, you’re in “Disorder.” Your first priority should be to identify your problem type and move to a known domain.
Approach: Gather information on what you know and identify what you don’t know. Get enough information to generate a plan to move to a more defined domain.
Where to turn:
· If it’s simple, trust the manual
· If it’s complicated, trust the experts
· If it’s complex, trust the team’s experiments
· If it’s chaotic, trust the leader
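The guidance above can be collapsed into a small lookup table. A sketch in Python (the sense/analyse/probe/act wording follows Snowden's published decision models; the table and function themselves are my own illustration):

```python
# Decision approach per Cynefin domain, as described above.
CYNEFIN = {
    "simple":      ("sense, categorise, respond", "trust the manual"),
    "complicated": ("sense, analyse, respond",    "trust the experts"),
    "complex":     ("probe, sense, respond",      "trust the team's experiments"),
    "chaotic":     ("act, sense, respond",        "trust the leader"),
    "disorder":    ("gather information",         "identify your domain first"),
}

def where_to_turn(domain: str) -> str:
    """Return the decision model and guidance for a given Cynefin domain."""
    approach, guidance = CYNEFIN[domain.lower()]
    return f"{approach} -- {guidance}"

print(where_to_turn("Complicated"))  # sense, analyse, respond -- trust the experts
```

The value of making the table explicit is the same as the value of the framework itself: it forces you to name which domain you believe you are in before you pick a decision-making style.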
The Cynefin Framework
Moving Between Domains
The boundaries of these domains
are not hard. Based on activity, situations can bounce between domains or live
on the borderlands between two domains. A commonly observed pattern is that a
failure to make correct decisions in the Simple domain can lead to unexpected
interactions and tip a system very quickly into the Chaos domain. The key to
recovery is to recognise that, if the situation has deteriorated into an area
where the outcomes are unknown, then rapid experimentation may be needed to
establish a path back to stability. Managers must be quick to recognise the
point at which expert opinion is no longer applicable (we've never been here
before). At that point the correct plan is to move from "Analyse and Decide" to
"Execute and Evaluate" - take an action, then check to see if it worked and
shape the recovery plan iteratively, rather than
trying to define the entire recovery plan in advance.
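The shift from "Analyse and Decide" to "Execute and Evaluate" is essentially an iterative probe loop. A minimal sketch in Python - the function, its parameters and the stopping rule are my own illustrative placeholders, not part of Snowden's framework:

```python
def execute_and_evaluate(candidate_actions, evaluate, stable, max_probes=10):
    """Run small safe-to-fail experiments until enough is known to plan normally.

    candidate_actions: iterable of probes to try, in order.
    evaluate: callable(action) -> observed outcome of running the probe.
    stable: callable(history) -> True once the situation is understood.
    """
    history = []
    for action in candidate_actions:
        if len(history) >= max_probes:
            break                           # bound the cost of probing
        outcome = evaluate(action)          # execute: run the experiment
        history.append((action, outcome))   # evaluate: record what happened
        if stable(history):                 # enough knowledge to change domains?
            break
    return history
```

Each pass amplifies probes that work and abandons those that don't, shaping the recovery plan iteratively instead of defining it all in advance - the same loop an Agile team runs each sprint.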
As an example, below is the framework as it applied to the various decisions leading up to a potential air disaster. It demonstrates that when Simple problems are handled incorrectly, there is frequently a transition from Simple to Complicated problems, followed by a rapid descent into Chaos. In this case an incorrect fuel calculation by the flight crew interacted unexpectedly with a technician's mistaken attempt to address a technical fault by disabling the fuel gauges…
When, without warning, the fuel ran out and the engines flamed out at 41,000
ft, the pilots discovered that most of their control systems were inoperable
and, on consulting the expert guides, they discovered that there was no
documented procedure for attempting to fly a 767 with no engines, electrics,
instruments or hydraulic controls. They were in the realm of Chaos, with no
knowledge of what actions might produce their required outcomes. In order to
succeed, they had to find a way to transition from Chaos to the Complex realm.
Pilots usually work in the Complicated realm, so their framework for making
decisions needed to change from analysis of the situation and reference to
Operating Manuals ("Analyse and Decide") to experiment-guided decision making
("Execute and Evaluate"). In this case, the pilots successfully transitioned
from Chaos into the Complex domain by attempting a series of experiments and
evaluating the outcomes in order to generate the knowledge they needed. Using
these experiments as a guide, they identified a chain of novel actions that
worked and achieved a successful (survivable) landing of the aircraft.
![](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhRIRwdOccmv1q8MEUHALCyTQh44Fqt1pNgYaU7ej5-0vXqPI9vo8LVR8N28gCBvdXrR4yvN_kSnw1wqGHoasW7UKQJMQVlxWOSlTJFSlY4rrBdX19tFvDsta2zUDKH_Jh2qdILyyBdzH4/s1600/cynefinExample.jpg)
The Cynefin Framework as applied to the "Gimli Glider" - an Air Canada Boeing 767 that ran out of fuel halfway through its flight.
(From http://emergingoptions.wordpress.com/2011/11/17/the-cynefin-framework-in-action-the-gimli-glider/ )
Conclusion
The net takeaway is that because Agile requires a great deal of just-in-time decision making, you need to understand the type of problem you face so that you can pair your decision-making approach with the problem type. Cynefin provides a simple framework for matching your decision-making to the situation.
When complex interactions make it difficult to identify a successful path forward for your project, use Agile Experiments to guide your journey.
More on Snowden's conceptual framework can be found here:
http://cognitive-edge.com/library/more/video/introduction-to-the-cynefin-framework/
So how do you ensure that your team doesn't form a system that is dumber than its parts?
Policies and procedures are a solution appropriate to the Simple realm - they
assume that the correct action is simply to read the script and carry out the
action. Yet our work is mostly conducted within Complex, not Simple, problem domains. So is it any surprise that the
emergent interactions of numerous simple Policy documents within a complex
environment can generate unexpected and frequently unwelcome outcomes?
It seems obvious to me that we need to depend more on sensible, guided decisions, and to rely less on inflexible, unpredictably-interacting processes and policies. Encouraging decisions guided by evidence gained empirically within your specific problem space, rather than simply implementing blind policy, is one way to ensure that you manage a team that is at least as smart as its parts.