
A new approach to management control: Dynamic Management

by Matthew Leitch, 13 November 2002, version 1.2

Why?

Have you noticed that things at work don't often work out the way you expected or wanted originally, and the goalposts keep moving? These days most people have experienced this. Budgetary control seems increasingly inadequate. Scorecards and targets are the new hope but most organisations should already be noticing that they have many of the same flaws as bad old budgeting. Is it hopeless? Should we embrace "complexity theory" and accept chaotic muddle as natural and healthy?

I don't think so. There is a way to manage under great uncertainty and complexity. It breaks rules you may be so familiar with you don't even know you believe them, but there is a way.

As you read on there will be times when problems come up in your mind - objections - reasons why what I'm explaining couldn't work. Trust me and read on. I've spent years working out solutions to those problems and I've solved many problems most people haven't even thought of. If I've missed something you won't find it that easily. And even if you don't agree with everything I'm certain you'll find something here that is new and useful to you.

Here's the index. Please, read on and enjoy!

Definition and introduction
This is different
The main advantages of Dynamic Management
Key concepts of Dynamic Management
Five examples of Dynamic Management
How to manage Dynamically
- Better, quicker understanding of goal systems
- Faster uncertainty management extended to include goal systems
- Faster teamwork that gets more people involved in rethinking
- Incentives that point in the right direction
- Building flexibility into contracts
- Planning and forecasting
- Management information focussed on learning and rethinking
- A new management process
- Easy adaptation to different domains and scales of organisation
- Creating pockets of Dynamic Management in static organisations
- Summary of differences between Dynamic Management and common practice
Conclusion
Further reading

Definition and introduction

Dynamic Management is simply management that expects the goal system (i.e. goals and the way alternative futures are valued) to change, though not necessarily in a predictable way. It "expects" change in both senses:

Dynamic Management is applicable to both operations and projects, since changing goal systems occur in both. It is also applicable at every level, from a large organisation down to individuals within it, and individuals in their private lives.

Here are some examples of changing goal systems:

Example: Changed reason for existence. Franklin D Roosevelt suffered a crippling condition as a result of polio. In 1938, at the height of both his own popularity and the seriousness of the polio problem, he founded the National Foundation for Infantile Paralysis to fight polio. The organisation quickly grew into a successful fundraiser. In less than 20 years polio had largely been defeated thanks to the Salk and Sabin vaccines. The Foundation was left with a choice: find a new goal or close down. They decided to find a new goal and concentrate on "other crippling diseases", with a particular emphasis on birth defects.

Example: Project level change. In the late 1990s British Telecommunications (BT) began a project to create an internal market by which its divisions could trade with each other. The idea was to give senior managers responsibility for profit-making organisations and give them more meaningful management accounts, as well as encourage more parts of the vast company to behave in a commercial way and be competitive. As the telecom gold rush reached its height, a reorganisation was announced which involved taking this idea much more seriously. Now the intention was to create separable businesses that could be floated separately though still as part of the group, making the true value of the BT group clearer to investors and analysts in the City. As separation gathered momentum it became clear that just offering a minority of the shares in its most exciting divisions was not going to be enough. Investors wanted completely separate businesses to be created. The mobile communications division was floated and demerged, with other divisions making preparations. Then the telecom bubble burst and BT's top team changed. Further flotations were abandoned and divisions were encouraged to act together instead of straining to go their separate ways.

Example: A life change. Most people find that becoming a parent is a life changing event, upturning priorities and plans dramatically. Some people adapt faster than others. In my own case, I went from working to have a career for myself to working to get money for my family in about a month.

This is different

Of course in most late twentieth century management thinking goals do change, but only as a result of strategy formation or some other kind of cyclical planning process, not as a routine part of day to day, month to month management. Changing goals is thought of as an upheaval: a disruptive, emotional, heavyweight activity restricted to a senior elite in the corporate hierarchy. In contrast, Dynamic Management makes learning and changing goals a frequent occurrence, carried out at any level in an organisation, on receipt of relevant news rather than because another year has ended.

The main concerns of managers using Dynamic Management include updating goals and forecasts to reflect what is being learned, and keeping plans up to date with the latest goals and forecasts.

While this is common sense, the vast body of management literature and stated practice (though less often actual behaviour) makes the assumption that, once determined, objectives remain fixed. The main concern is to adjust plans to reduce the difference between actual outcomes and original plans and expectations. This idea is particularly strong in project management, and nearly all advice on how to maintain "control" of projects and operations is based on comparing actual results with expectations or targets which reflect the original view rather than the latest and most informed view. This is true regardless of whether or not there is some contract or agreement in place that gives special weight to goals agreed at a particular point in the past.

In recent years, dissent and dissatisfaction with fixed goals has grown. Nearly everyone dislikes their budget process and some companies have already rejected budgeting altogether while others place increasing reliance on rolling forecasts. In the face of consistent failure, "performance management systems" (i.e. the practice of getting individuals to set goals annually and be judged by them) have also come under fire.

At the same time, "risk management" has become increasingly important in many fields and a new view of it is just emerging in which uncertainty is replacing risk as the focus of management. Whereas risk management has tended to be seen as a way to achieve your original objectives come what may, uncertainty management includes managing events that turn out unexpectedly favourably, and it's obvious that in these situations you want to change your goals to take advantage of new opportunities presented. So there are signs of the beginning of a cultural shift towards Dynamic Management, in principle.

Dynamic Management is a natural evolution of what is normally called "risk management". The progression from assuming a stable, known world to Dynamic Management goes something like this:

The main advantages of Dynamic Management

Uncertainty exists, including uncertainty about goals, in all but the most trivial ventures. The question is whether it is better to ignore it, or to deliberately manage it. Personally, I'd rather expect the unexpected than be shocked by it, particularly when it comes to having the goalposts moved. The main advantages of Dynamic Management are:

I can't find any empirical research that specifically assesses the effects of attempting to practice management methods that assume fixed goals in conditions where goals do or should change. Therefore I can only speculate based on personal experiences as an employee in various organisations.

I think that most of the time people don't follow the management methods and principles they say they follow. I think we actually do spend quite a lot of time trying to second guess how objectives given to us might change, though we might not do this very efficiently or systematically.

However, at times when we are disappointed by the results we are getting someone will usually suggest it's time to get things "under control" and manage "properly", by which they mean against the original goals. The more energetically and systematically this is done the worse the results will become, unless there are some really good things happening elsewhere to offset the effect of this ineffective management method. It would be much more effective to practice Dynamic Management, knowingly and systematically, when better results are desired.

Two famous pieces of management research lend a little support to this.

A famous study of manager behaviour is Henry Mintzberg's comparison of actual behaviour with the theoretical notion of the scientific, systematic manager. Mintzberg found managers more interested in the latest gossip than in formal management reports from the company's information systems. They made a myriad of decisions every day and their direction gradually changed over time, rather than making big strategic decisions occasionally and then systematically rolling out the implications in detail.

Another famous study is the research at the end of the 1950s by Charles Kepner and Benjamin Tregoe. They observed managers actually doing their jobs and concluded that there were three basic mental activities that occupied most of their thinking time: problem analysis, decision analysis, and potential problem analysis (which is risk management). Problem analysis includes comparisons of behaviour expected or desired with actual behaviour, but all their examples are for things like problems with manufacturing machinery, where what should be happening is much clearer and with no uncertainty, so this is not comparable with control against a plan/budget/forecast. Also, the decision analysis technique they came to recommend recognises that there may be many objectives to meet simultaneously and resembles the technique I suggest later. Finally, in recent years Kepner-Tregoe has renamed "Potential Problem Analysis" as "Potential Problem and Opportunity Analysis", to recognise that things can turn out unexpectedly better as well as worse.

Key concepts of Dynamic Management

Dynamic Management uses some simple concepts:

Five examples of Dynamic Management

Examples of recognised management methods that resemble Dynamic Management are surprisingly rare. However, I have found five examples, most of which happen to be from the world of IT.

Active Benefits Realisation

For many years a controversy has raged between those who believe computers have been a great benefit to the world and to businesses and those who point to the actual statistics which usually show none. Faced with the problem that an IT project is as likely to have a negative effect on an organisation as a positive effect it was only a matter of time before someone coined the phrase "benefits management" and offered it to the world (for a price) as the answer. "You want more benefits from your IT? Then you need Benefits Management."

The obvious approach to this was to set goals, measures, targets, and so on at the outset, monitor actual results, and "manage" them to somehow force into being the benefits wished for at the start. Case studies of actual IT projects consistently show that these wished-for benefits are rarely realised other than by sheer fluke. More recently, a survey of IT practices in Australia by Chad Lin and Graham Pervan showed that 83% of their respondents did not think it was possible to anticipate all potential benefits at the project approval stage. The fact is that the benefits emerge over time. [Note that it is these benefits that should be the goals of the project, though this is rarely how people see it.]

In response to this problem of evolving benefits, Dr Dan Remenyi and a colleague have proposed what they call "Active Benefits Realisation" in which representatives of all stakeholders confer repeatedly during the project to update their ideas about the benefits of the project as learning proceeds, as well as perform formative evaluations (i.e. evaluations of potential benefits that help to shape the project).

Dynamic Systems Development Method

In the 1980s many software developers realised that many projects were failures because the system delivered was no longer the system that was actually needed, even if it met the original requirements perfectly. "Prototyping" became fashionable as a way to show customers/users what they were going to get at an early stage of development so that their reactions and realisations about their requirements could be captured and incorporated.

It was also realised that doing a series of incremental developments, each of which provided something useful, even if it was not the full and final answer, was more useful and less risky than a single, longer development project.

The "Dynamic Systems Development Method" (DSDM) is a written down method for doing this kind of development.

The main objective of the method is different from the "waterfall" approach more common at the time. Instead of attempting to deliver a system that meets the original, given requirements, DSDM aims to deliver a system that meets the actual requirements at the time the system comes into operation.

Dramatic improvements in productivity and success rate are claimed for DSDM (which is promoted by a non-profit organisation), though these are not available in all types of project as DSDM concentrates on systems where the user interface is important.

Evolutionary Development/Project Management

An approach to project management that is very similar to DSDM in its principles but has been used on some much bigger projects and for longer is called Evolutionary Project Management. According to Tom Gilb, Hewlett-Packard has used it in at least eight divisions since 1985, with the main benefit being the ability to get early, well-informed feedback from users at an early stage and respond to it. Other users include NASA, Loral, Lockheed Martin, and IBM Federal Systems Division.

The idea is to deliver project results early, through delivering frequent, useful increments - typically 50 micro-projects, each representing about 2% of what a traditional project would be. The aim is to deliver the most useful increments first, where possible.

However, this is not the same as Incremental Delivery, which means delivering small slices of the original requirements. Evolutionary Delivery allows for requirements to change as a result of changes and discoveries during the project. Detailed plans are drawn up for the next increment only, but there are still outline plans and architectures for other increments (even though these will probably change).

According to Gilb, the main difficulty for organisations adopting this is getting used to thinking of incremental ways to deliver. However, once people get used to thinking about the value their "customer" might get from the project they can see how deliveries other than what might have been asked for initially would be useful. Gilb gives a number of guidelines for identifying suitable increments, including:

In those rare cases where tiny increments cannot be found, Evolutionary Management resorts to the more familiar risk management techniques of insurance, contracting risk to others, sticking with established technology, and so on.

Dynamic Strategic Planning

Another method that has been applied on large scale engineering projects is Dynamic Strategic Planning, as described by Professor Richard de Neufville of MIT. This is long term planning that recognises the difficulties of forecasting and tries to build in flexibility in the plan, and adjust the plan according to events that occur. It is like playing chess, in that the planner thinks many moves ahead, but only commits to one at a time, and adjusts the game plan to events as they unfold.

An important aspect of Dynamic Strategic Planning is the attention paid to the interests and powers of major stakeholders, though this does not appear to be explicitly linked with the problem of anticipating possible changes to goals as the project unfolds.

Beyond Budgeting

A very exciting development is probably best known as "Beyond Budgeting". The story starts in Sweden in 1979 where Svenska Handelsbanken abandoned budgetary control in favour of new methods. The man behind this, Dr Jan Wallander, later wrote about what he did and the book was published in Swedish. Some other Swedish companies followed Dr Wallander's lead and companies that have abandoned budgeting now include IKEA and Volvo. However, the rest of the world paid little attention.

That all changed when an organisation called the Beyond Budgeting Round Table (BBRT), under the wing of CAM-I, discovered what was happening. The BBRT, led by Robin Fraser, Jeremy Hope, and Dr Peter Bunce, was set up in January 1998 initially to research alternatives to budgeting by visiting companies who had successfully replaced budgets with something better. The research was sponsored by companies interested in improving their own management methods and the results were shared among BBRT members.

The BBRT is now moving into a new phase of helping companies implement the Beyond Budgeting management model that resulted from their research.

The Beyond Budgeting management model is still being refined as new cases are examined and new thinking emerges, but the model is already well developed with various published articles, papers, and case studies available. In February 2003 a book is due to be published that promises to be a big step forward.

One way the Beyond Budgeting model has been refined is that it has moved from apparently just recognising the flaws of budgetary control to recognising the limitations of all systems of management control that work by negative feedback loops, i.e. setting fixed targets and then trying to manage variances.

The Beyond Budgeting management model now has two major areas: adaptive planning, and decentralisation. The current principles of the model include the following:

How to manage Dynamically

I have defined Dynamic Management as management that expects goals to change and given the basic concepts. Within this framework there are many ways one could go about actually managing dynamically so the following proposals are just one version - a version that will surely change in future years.

The demands of Dynamic Management create some important general constraints on any techniques used:

More specifically, Dynamic Management requires the following:

Techniques to achieve all of these are described in the following sections.

Better, quicker understanding of goal systems

In one sense, the ultimate goal is unchanging - maximise utility. However, this on its own is no guide to action and we have to think about how future outcomes have value for us, and try out specific goals that might be worth planning to achieve. In practice, our objective functions and goals look more like shopping lists than mathematical expressions. Why?

Multiple dimensions of value

The objective function is a function that places a value on an imagined future. What is valued is not the state at a single point in future time but an entire future path.

A common approach to evaluating the value of some proposed business venture is to convert its impact into cash flows and calculate a single monetary value using Discounted Cash Flow techniques, ideally using a risk-adjusted rate of return reflecting the company's weighted average cost of capital. This is called the Net Present Value, and is often considered to be the same as the Shareholder Value created by the venture. Impressive as this usually appears, it has very serious drawbacks, of which the most serious have been pointed out by Igor Ansoff in his classic book, "Corporate Strategy".

Looking ahead to the short term, identifying cash flows usually looks tough but possible. However, the further ahead you look the more difficult it is to convert every impact into a cash flow. Many important positive and negative effects of a strategy, such as a strengthened balance sheet, or increased intellectual capital, do not convert readily into cash flows. The uncertainty is simply too great.

Ansoff suggests a different approach which is also much more practical, being quicker and more robust, and better at suggesting specific actions than "maximise shareholder value". Ansoff suggests a system of points for valuing various outcomes, including things that do not convert to cash flows. This may be used in combination with the Net Present Value of relatively short term and predictable cash flows to provide an overall score for desirability. This is, in effect, a heuristic scoring method similar in principle to those used in chess playing computer programs.
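A heuristic score of this kind is easy to sketch in code. The following is a minimal illustration, not Ansoff's actual scheme: the figures, outcome names, and weights are all invented, and the only idea taken from the text is combining a short-term NPV with weighted points for outcomes that do not convert readily into cash flows.

```python
# Hypothetical Ansoff-style desirability score: NPV of the predictable,
# short-term cash flows plus weighted points for non-cash outcomes.

def npv(cash_flows, discount_rate):
    """Net Present Value of yearly cash flows, starting at year 0."""
    return sum(cf / (1 + discount_rate) ** year
               for year, cf in enumerate(cash_flows))

def desirability_score(cash_flows, discount_rate, qualitative_points, weights):
    """Overall score = NPV + weighted points for outcomes with no cash flow."""
    points = sum(weights[name] * score
                 for name, score in qualitative_points.items())
    return npv(cash_flows, discount_rate) + points

# Illustrative venture: modest cash flows plus a stronger balance sheet
# and increased intellectual capital (all points and weights invented).
score = desirability_score(
    cash_flows=[-100, 40, 60, 70],        # year 0 outlay, then inflows
    discount_rate=0.10,
    qualitative_points={"balance_sheet": 3, "intellectual_capital": 4},
    weights={"balance_sheet": 5.0, "intellectual_capital": 2.5},
)
```

The point of the sketch is only that, like a chess program's evaluation function, the score can be computed quickly for many candidate strategies and compared, even though each component is a rough judgment rather than a forecast.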

While the specific set of measures suggested by Ansoff may not be useful in every case, the simple idea of listing all the desirable aspects of future outcomes that would influence the choice of strategy, and giving some indication of their relative importance, is easy and useful. It can be used easily in the vast number of situations where quantification is not worthwhile.

With a bit more work one can start to structure the objectives to show how they are linked causally. One can also think about the relationship between degrees on each dimension and the actual value. For example, for some things one either hits the target and gets the value, or doesn't and gets nothing. More often, the utility of an outcome varies more smoothly. For example, typically, reducing costs a certain amount is more desirable than reducing them a little less than that, for all degrees of cost reduction, other things being equal.
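The two shapes of value curve just described can be written down as tiny functions. This is an invented illustration rather than a prescribed technique: one dimension pays off only if a target is hit, while another pays off smoothly in proportion to the result.

```python
# Two hypothetical shapes for the value attached to one dimension of an
# outcome, as described above.

def all_or_nothing_value(outcome, target, prize):
    """Hit the target and get the value; miss it and get nothing."""
    return prize if outcome >= target else 0.0

def smooth_value(cost_reduction, value_per_unit):
    """Each extra unit of cost reduction adds value, other things equal."""
    return cost_reduction * value_per_unit
```

Writing the curve down, even roughly, forces you to say whether near misses are worth anything, which is exactly the question a bare target leaves unanswered.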

The next step is to select a set of goals, which usually means specific values for aspects of the future situations.

Example: A better garden - part 1. What is a "better" garden? Imagine you're just starting out on the property ladder but lucky enough to be able to afford a property with a garden. Your first attempt to write down something about your objective function might look like this:

Imagine you don't define a formal objective function, but your judgment is that a very messy garden is unacceptable but there's no need to create perfection. Similarly, low maintenance is important but you are willing to make some small concessions if that makes a significantly more attractive garden. Planting annuals each year is too much effort, but there are other smaller efforts you are willing to make.

As for goals, you might decide to replace certain plants at a particular time of year, redesign the edges of the lawn in spring, and put down weed suppressing plastic sheeting under bark chips. All these things have value to you, as the objective function shows. However, on investigating further you decide to change your goals to replace the bark chips with gravel for certain parts of the garden near the house.

A better garden - part 2: Later, you learn more about the environmental impact of gardens and, being concerned for our environment generally, you decide you would like to change your objective function as follows:

Again, your objective function rules out certain things as unacceptably damaging to the environment, but also makes some concessions to attractiveness and easy maintenance.

A better garden - part 3: Suddenly everything changes as you discover you are to become a parent. Now the objective function changes a lot:

The weighting that feeds into the objective function now gives a high priority to safety and low maintenance, at the expense of attractiveness.

Explicit objective functions and quantification

There are two strong reasons for making objective functions more explicit and trying to quantify them. Firstly, humans are very bad at judging the value of combinations - much worse than most people think. Secondly, quantification is much easier than you might think, as the following techniques will show. The quantification might just be putting numbers on subjective feelings. You don't always need evidence and often can't get it anyway.

Example: Negotiations. The value of quantifying objective functions (even subjective ones) in a negotiation has been shown by Howard Raiffa in a series of simulated negotiations. The scenario was a negotiation between a city council and a union over 11 issues including pay, hours, vacations, and the fate of a hated council official. A large number of pairs of people played the roles of union and city negotiators in various conditions. Some were given confidential instructions that showed their valuation of various outcomes on each issue using a points system. Others had similar confidential instructions but the numbers were replaced by words and phrases intended to convey about the same values. Negotiations where both negotiators lacked numbers were highly variable in outcome and the most inefficient in the sense that joint gains were left on the table. Both teams were better off with numbers regardless of what the other team were using.

Making our objective functions more explicit is a fascinating subject and there are a number of useful techniques. Multi-Attribute Utility Theory (MAU) is the name for models that use "utility" as a kind of common currency for human value. The idea is that the utility of some object or future situation depends on more than one attribute.

"Conjoint analysis" is the name for a group of techniques for making MAU models of the way people value different attributes of things. It arose from mathematical psychology and is most often used in market research to find out what combination of features is most attractive to potential customers.

The practical attraction is that with some simple software it is possible to find out how a person values different aspects of the future or of things, and so create an approximate model of their objective function in just a few minutes. This can be done for an individual, or for a group of individuals. If a leader wanted to know what people thought they should be aiming for in the organisation this is a simple and unusually precise way to do it.

Example: Project trade offs. A rather generic way to assess a person's priorities in a project is to use three attributes of the project's outcome: cost, completion date, and quality of result. A number of levels for each of these must be defined specific to the project. The conjoint analysis program then asks people, individually, questions about which trade offs they prefer. For example, "What is your preference between a plan that means finishing the project in December for a cost of 2m, and finishing in October for 2.5m, all other things being equal?" Differences between individual views will quickly become apparent.

To apply conjoint analysis you have to define the object or situation in terms of a number of attributes, each of which has two or more possible levels. (If an attribute varies continuously, so that there are infinitely many possible levels, it is still possible to choose a small sample of these and interpolate approximately for levels in between.) The software then poses a series of questions about how much you prefer some combinations of attribute levels to others, and uses your answers to build up a multi-attribute utility function.

In the simplest models each attribute level has a utility worked out for it, and the total utility of a situation is simply the sum of the utilities attached to each attribute value. (In other models the value of an attribute level depends on the levels of other attributes, but these require asking more questions and are rarely much more accurate so are less commonly used.)

The model can even give an idea of the relative importance of different attributes, though only for the given set of possible levels of each attribute; the comparison is meaningless otherwise. For example, if the difference in utility between the best and worst levels of attribute A is 10 and the difference between best and worst for attribute B is 5, then A is twice as important as B for the attribute levels considered.

So, in selecting levels for attributes in your testing it is a good idea to choose them to span the full feasible range for every attribute so that the relative importance of attributes is not misleading.
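The simplest additive model described above, and the range-based measure of attribute importance, can be sketched in a few lines. The attributes echo the project trade-off example (cost, completion date, quality), but every part-worth utility below is an invented number standing in for what a conjoint analysis program would estimate from someone's answers.

```python
# Sketch of the simplest additive multi-attribute utility model: each
# attribute level has a part-worth utility, total utility is the sum,
# and relative importance is each attribute's best-to-worst range.
# All part-worths are invented for illustration.

part_worths = {
    "cost":       {"2.0m": 10, "2.5m": 4, "3.0m": 0},
    "completion": {"October": 8, "December": 3},
    "quality":    {"high": 5, "acceptable": 0},
}

def total_utility(option):
    """Additive model: sum the utilities of the chosen attribute levels."""
    return sum(part_worths[attr][level] for attr, level in option.items())

def relative_importance(part_worths):
    """Importance of each attribute = its best-to-worst utility range,
    shown as a share of the total range across all attributes."""
    ranges = {attr: max(levels.values()) - min(levels.values())
              for attr, levels in part_worths.items()}
    total = sum(ranges.values())
    return {attr: r / total for attr, r in ranges.items()}

plan_a = {"cost": "2.0m", "completion": "December", "quality": "high"}
plan_b = {"cost": "2.5m", "completion": "October", "quality": "high"}
```

With these invented numbers the cost range (10) is twice the completion range (5), so cost comes out twice as important, exactly as in the A-versus-B example above, and only for these particular levels.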

Putting numbers into a simple formula improves decision making. Humans have so much difficulty combining several factors together in any decision that even apparently crude models perform better, e.g. a simple linear model such as: overall score = L1 x w1 + L2 x w2 + L3 x w3, with three attribute levels (L1, L2, L3) multiplied by three weights (w1, w2, w3). Numerous experiments have shown that making probability judgments (a similar task involving combining factors) is better done with a linear function than with unaided judgment. If the level values are "normalised" so that they have the same distribution as each other, even choosing the weights at random gives a judgment more accurate and consistent than unaided human judgment. Conjoint analysis means we can assign meaningful weights that mimic human judgements, but perform more consistently.
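The linear formula above is trivial to apply once the raw levels are normalised onto a common scale. In this sketch the attribute ranges, weights, and options are all invented; the weights stand in for values obtained from conjoint analysis.

```python
# Minimal sketch of the linear scoring formula in the text:
# overall score = L1*w1 + L2*w2 + L3*w3, with attribute levels
# normalised to a common 0-1 scale before weighting.

def normalise(level, worst, best):
    """Rescale a raw attribute level onto a common 0-1 scale."""
    return (level - worst) / (best - worst)

def overall_score(levels, weights):
    """Weighted sum of already-normalised attribute levels."""
    return sum(l * w for l, w in zip(levels, weights))

# Comparing two options on three attributes (raw ranges are invented):
raw_ranges = [(0, 100), (0, 10), (1, 5)]   # (worst, best) per attribute
weights = [0.5, 0.3, 0.2]                  # e.g. from conjoint analysis

option_x = [80, 6, 4]
option_y = [60, 9, 5]

score_x = overall_score(
    [normalise(l, w, b) for l, (w, b) in zip(option_x, raw_ranges)], weights)
score_y = overall_score(
    [normalise(l, w, b) for l, (w, b) in zip(option_y, raw_ranges)], weights)
```

The normalisation step matters: without it an attribute measured in large units would dominate the sum regardless of the weights chosen.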

Faster uncertainty management extended to include goal systems

The techniques of most value are those that allow us to make powerful inferences about the actions we should take, quickly and easily, from limited and poor quality information. They should help us pick out the relevant information from a confused situation, quickly and reliably, and allow us to make the main decisions about future action with ease.

With the ability to get close to good answers quickly, it only remains to add more sophisticated refinements where necessary for precision in big decisions.

Inferences from degrees of uncertainty

A basic technique is to list out areas of relevant uncertainty (or update your existing list), consider the amount of uncertainty faced, and consider the extent to which you could reduce that uncertainty and how you would do it.

Knowing the common reasons for uncertainty can make it easier to spot:

First cut actions flow as follows:

If this leads to too many actions or to actions whose cost-benefit is in doubt one can always prioritise the actions and decide which are worthwhile.
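One simple way to do that prioritisation is to rank candidate actions by expected benefit net of cost and drop those that are not worthwhile. The following is a hypothetical sketch only; the action names, costs, and benefit estimates are invented, and in practice both figures would themselves be rough judgments.

```python
# Hypothetical prioritisation of uncertainty-management actions:
# keep only actions whose expected benefit exceeds their cost,
# ranked best first. All figures are invented.

actions = [
    {"name": "commission research",     "cost": 2.0, "expected_benefit": 9.0},
    {"name": "add contingency",         "cost": 6.0, "expected_benefit": 7.0},
    {"name": "monitor early signals",   "cost": 0.5, "expected_benefit": 4.0},
]

def prioritise(actions):
    """Keep actions whose expected benefit exceeds cost, best first."""
    worthwhile = [a for a in actions if a["expected_benefit"] > a["cost"]]
    return sorted(worthwhile,
                  key=lambda a: a["expected_benefit"] - a["cost"],
                  reverse=True)

shortlist = prioritise(actions)
```

Even this crude ranking makes the cost-benefit question explicit for each action instead of leaving it to be argued case by case.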

Example: Running conferences. Imagine you have a job in the Conferences department of a thriving society for people who study lichens. Your role is to gather ideas for conferences, arrange them, sell them, run them, and report on the results obtained. You've been doing the job for a year and it's turned out to be a lot more difficult than you expected. On top of all the stress of getting everything ready for each conference there is the problem of deciding whether to go ahead with a conference idea. It all depends on how many people attend, which is proving very difficult to predict and nobody in your department seems to have more than a vague idea, even though they still seem confident they can predict demand and always act surprised when things turn out differently from their expectation.

One of the most awkward situations is to have to abandon a conference because of poor take-up after it has been advertised and some people have signed up. This has happened three times in your first year and it is more costly than you expected because the venue is usually paid for in advance and only a small part of the fee is refundable.

Thinking about the uncertainty you face, it is obvious that it is mainly uncertainty about demand. This is generated by the intangible nature of the interests of the scientists who might come, and is worse for conferences on unusual topics not previously explored. By contrast, the annual conference on Scandinavian tree lichens has been running for 12 years and attendance is pretty steady.

However, this uncertainty can be reduced by appropriate research. You decide to try a survey using a market research method called "conjoint analysis" at the next big conference and also on the society's web site to find out what the members value in a conference (including rating specific topics), and exactly how much. For example, what is the impact of the venue, the time of the year, the narrowness/breadth of the topic area, the style of the presentations, and the reputation of the main speakers? Using the analysis will give you a much better chance of proposing conferences with a good chance of at least breaking even.

The uncertainty can also be reduced by waiting longer to see how many people actually sign up. However, the problem of non-refundable venue fees at least has to be reduced. You survey suitable conference venues to find out their cancellation fees and discover that some will refund closer to the date of the conference than others. By favouring these venues for novel conferences, encouraging members to sign up early, and monitoring the growth of committed attendees, you can avoid the venue fee problem much more often than in the past.

Inferences from specific uncertainties

Another basic technique is to list out specific uncertainties in a relevant area (or update your existing list) and consider the extent to which you could reduce that uncertainty, influence the probabilities involved, or change the impact of alternative outcomes.

Decisions about how to reduce the amount of uncertainty by research and monitoring flow as previously described. For influencing probabilities and impacts, first cut actions flow as follows:

Once again, having quickly roughed out some actions it may be worthwhile going into more detail to get precision or a more comprehensive and knowledgeable analysis by using a more sophisticated approach as well. There are lots of different styles to choose from but actual practice tends to be let down by a range of reasoning errors. My paper on "Straight and crooked thinking about uncertainty" mentions several.

Considering the uncertainty around objectives

Having identified a goal system, it is helpful to have a quick look to see which elements have the most uncertainty attached, and which are most likely to change in future. Knowing that a particular element is likely to change, you can make it easier to handle those changes if they arise.

Often, the goals are the result of a customer's requirements. The customer might be internal or external to the organisation concerned. To identify uncertainty around these you can talk to the customer, and also consider the conditions they face. This helps identify potential changes of requirements - perhaps even before the customer recognises them.

Quantification

Quantification is important in thinking about risks and uncertainty, but need not imply expensive research or computation. Putting numbers on subjective certainty is a vital step forward from ignoring uncertainty or assuming you have no probabilistic information at all. Supporting judgement with data makes those judgements more reliable.

Individual risks (i.e. potential outcomes) can be rated for their probability of occurrence and their impact should they occur. However, this technique cannot logically be applied to sets of risks, so in most practical situations a more sophisticated quantification method is needed.

Example: Corporate risk registers. Company risk registers tend to cover very many risks in a summarised way so it is not surprising that many of the "risks" that appear on them are actually sets of risks. This happens in various ways:

If sets of risks are rated for probability and impact in the usual way, this is a logical error that invalidates the ratings: a set of risks has no single probability or single impact.

Sets of risks can be rated by estimating the probability distribution of impact. If the number of risks in the set is small the distribution can be a discrete distribution with every risk considered individually. For example, if there are only five outcomes their probability and impact ratings can be combined on one table or graph e.g.

Discrete probability distribution of impact

Outcome   Probability   Impact
A         0.2            10,000
B         0.3            -2,000
C         0.1           300,000
D         0.3          -200,000
E         0.1             2,500
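
Summary statistics can be computed directly from a table like this. As a minimal sketch (the figures are simply those tabulated above), the probability-weighted expected impact is:

```python
# Expected impact of the discrete distribution tabulated above.
outcomes = {  # outcome: (probability, impact)
    "A": (0.2, 10_000),
    "B": (0.3, -2_000),
    "C": (0.1, 300_000),
    "D": (0.3, -200_000),
    "E": (0.1, 2_500),
}

total_p = sum(p for p, _ in outcomes.values())
assert abs(total_p - 1.0) < 1e-9  # probabilities must sum to 1

expected = sum(p * impact for p, impact in outcomes.values())
print(round(expected, 2))  # -28350.0
```

The large negative expectation is driven almost entirely by outcome D, which is exactly the kind of insight a flat probability-and-impact list would hide.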

If the number of risks in the set is high or infinite a continuous probability density function is more appropriate. This can be tabulated or graphed, and various summary statistics can be computed for more compact summaries. For example, estimates might be made of the probability that the impact will fall between a set of ranges:

Continuous probability density function for impact

Impact range          Probability
0 to 20,000           0.1
20,000+ to 40,000     0.3
40,000+ to 60,000     0.3
60,000+ to 80,000     0.2
80,000+ to 100,000    0.1
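
A binned distribution like this can be summarised by treating each range's midpoint as representative - a rough approximation, but adequate for first-cut work. A minimal sketch using the figures above:

```python
# Approximate expected impact from the binned distribution above,
# using each range's midpoint as its representative value.
bins = [  # (low, high, probability)
    (0, 20_000, 0.1),
    (20_000, 40_000, 0.3),
    (40_000, 60_000, 0.3),
    (60_000, 80_000, 0.2),
    (80_000, 100_000, 0.1),
]

assert abs(sum(p for _, _, p in bins) - 1.0) < 1e-9  # probabilities sum to 1

expected = sum(p * (low + high) / 2 for low, high, p in bins)
print(round(expected))  # 48000
```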

Using robust project patterns

Projects can be designed in various ways and it helps to use one or more of the robust patterns in your designs. The main robust patterns are:

While trying lots of things to see what works is essential in many common areas and should be happening in all organisations, trial and error alone is not as efficient as trial and error plus expertise. For example, people who design user interfaces for computer systems vary in their skill. Really skilful ergonomists know that only usability tests will drive out the usability bugs in their designs, because of the difficulty of predicting human responses. However, experiments have shown that the initial design by an expert ergonomist is generally better than a non-expert's design after several rounds of usability testing.

Faster teamwork that gets more people involved in rethinking

Dynamic Management is not a mandate for unconstrained, uncoordinated improvisation. It is not a mandate for undirected, haphazard experimentation. On the other hand, Dynamic Management does work much better if people from all levels and areas of an organisation can feed into decisions about goals systems, plans, uncertainty management, and so on. Uncertainty and the risk of failure are much reduced when plans are made rapidly right down to the crucial details on which performance will depend. People need to be able to see the whole picture and make their plans accordingly.

Example: Sam Walton. The man who created Wal-Mart and made himself a billionaire in the process was relentless in his search for grass-roots ideas to improve his business. Many of his clothes were bought from his own stores. He visited competitors' shops almost compulsively. On one occasion he flagged down a Wal-Mart 18 wheeler to ride for 100 miles, pumping the driver for ideas all the way. The driver was not expecting his passenger but knew, like other employees, that Sam would take his ideas seriously.

Involving team members at various levels in thinking about changes to goals and the objective function has nothing to do with achieving "buy in". The reason for doing it is to maximise learning and get the best information, experience, and thinking applied to management. This contrasts with typical management thinking from the late twentieth century, where a senior elite decides the goals then tries to get them supported by "involving" people at lower levels to get their "buy in". This has always seemed somewhat phoney to me, as so much has already been decided and will not change even if it is discussed. Often, the discussion is really a negotiation over performance-related pay or evaluation targets, in which underlings argue for the easiest targets possible while their bosses demand more, all the while trying not to appear to be imposing targets, even though eventually they do.

In Dynamic Management, the involvement of people at "lower" levels in the corporate hierarchy is to get the benefit of their knowledge and take account of their interests to some extent, and will normally lead to some modifications to goals and the objective function. This is because so much depends on the operational detail of performance (particularly in contacts with "customers").

In "Bottom Up Marketing", advertising experts Jack Trout and Al Ries, famous for inventing the modern marketing concept of "positioning", point out that so much depends on having a tactic that wins in the marketplace, and such tactics are so hard to find, that making a strategy without having one identified is likely to lead to failure. They give numerous examples of companies where a senior elite set out a vision of rising revenues and profits, handed targets down to lower levels of management for them to find tactics that would deliver those financial results, and watched while no such tactics could be found and the company lost money.

Ries and Trout liken a winning tactic to a nail, and strategy to the hammer. The purpose of the strategy is to drive home the nail as strongly as possible. But you have to have a nail, and you should make sure you have a good one before making commitments to a strategy.

(This is a general rule of uncertainty management. If there's a big uncertainty in a venture, it is usually preferable to try to resolve it early if possible rather than waiting until you have committed more resources.)

Goals should reflect what you and your organisation are capable of doing. To find out what you're capable of doing you need to get down to the detail that determines the results of trading or service delivery. Therefore, what happens at the detailed, everyday level familiar to those at the bottom of the hierarchy is vital to the goal system and must be brought into the thinking. The input of people who actually do the work of the organisation rather than just managing it must be able to influence all the goals and objective function.

Efficient involvement in changes

Getting lots of information from different parts and levels of an organisation pulled together and considered, repeatedly as conditions change, demands extremely efficient techniques for communication and decision making. If every piece of news meant getting away for an "off site" with the whole organisation to debate it nothing else would get done.

There are a number of techniques that can be efficient enough provided they are done well and the appropriate techniques are chosen to suit circumstances.

Who should be involved in decisions to change goals, world models, etc? It cannot be everyone, every time as this would mean nothing could be changed quickly.

The group's leaders will have to make some decisions about who needs to agree to each change, who needs to give their views, who will just be told about the change afterwards, and who won't find out unless they take the trouble to look at the group's documentation. The group's leaders will have to make some decisions about who will talk to whom, what groups ought to meet, and so on.

Here are some techniques that can be used:

Spreading knowledge

It is helpful if members of a team working with Dynamic Management know the goals and objective function, and know the world models being used. They also need to know about changes in the factors that drive the goal system. This helps them fill in the detail for themselves and organise and act without waiting for centrally issued commands. However, it is unrealistic to expect that everyone will know and understand all of this material. Inevitably, some people will have a much better grasp than others.

Dynamic Management tends to help with this because it requires frequent reviews of goal systems and their related thinking. By going through that thinking repeatedly group members improve their memory of it, their understanding of it, and their ability to act in accordance with it.

This contrasts with the common experience of finding that a strategic review has become dust-gathering shelfware.

Shared views?

It is also unrealistic to expect a commonly held view of what the goals etc should be. Some people in a group will believe that the goals etc in use are the right ones, but most will see them as just "the ones we're working with at the moment" and disagree with some or all of them. Provided people work along with the formally adopted group position there is no harm in dissent. On the contrary, it is from the dissenters that improved thinking is likely to arise.

Personally, I dislike being in a group where holding a different view triggers strong social pressure to conform both in action and belief. In addition, I believe it inhibits thinking and slows adaptation. The technical term for this is "groupthink".

Skilled uncertainty management tends to reduce disagreement - or at least makes it possible to live more harmoniously with it. Much of the difference in view between people is in their subjective estimates of the likelihood of various events. Often, the heat of an argument exaggerates the magnitude of the difference. When this is found to be the case it is often possible to agree that the future is not known by anyone and we can at least agree that the outcome is uncertain. That may be all the common ground needed to agree an appropriate action, such as monitoring the risk.

For example, suppose a plan is put forward and adopted, but soon a member of the group realises that the plan is a poor one because it is based on a bad forecast and will fail because of a factor that has not been taken into consideration. While proponents of the original plan may not accept that their plan is flawed, they might still be persuaded that there is a risk that threatens their plan, and that a sensible step can be taken to manage that risk. Taking that step may well reveal whether or not the plan was flawed and so provide the facts needed to revise it without further argument.

Empowerment

Empowering people in an organisation is also helpful in creating the ability to adapt to a changing, unpredictable, complex, even chaotic environment. This does not help directly with the problem of getting low level input into high level goals, but it does allow low level decision making to be better aligned to overall strategy.

Empowerment has been well explained by a number of authors and it is generally accepted that it involves:

Good reasons

The various elements of the world view and goal system are likely to be more acceptable and memorable if they are explained with good reasons. For example:

Which is more memorable and more inspiring?
Without good reasons:
"Christian values are part of our culture."

With good reasons:
"This organisation was founded by a group of Christians five years ago, so Christian values are part of our culture."

Without good reasons:
"We aim to introduce a range of 3 to 6 new products aimed at the 5 - 9 age group this Christmas. A key objective for the design team will be to come up with something really exciting and attractive."

With good reasons:
"Our competitors have been bringing out some very appealing products aimed at our younger customers. We think we can also do better in that area and defend our share, so we aim to introduce a range of 3 to 6 new products aimed at the 5 - 9 age group this Christmas. Obviously, a key objective for the design team will be to come up with something really exciting and attractive."

Without good reasons:
"At next month's exhibition we must make a strong impression as well as learning as much as we can about how people react to the advantages of our product."

With good reasons:
"At next month's exhibition our stand will be right next to the market leader's. Because we're new we're likely to get a lot of interest from people who have never heard of us before. It's vital we make a strong impression as well as learning as much as we can about how people react to the advantages of our product."

I hope you can see that the reasons provide a context that makes the goals more important and compelling. They convey a sense of the leader's awareness of what is going on and sensible leadership decisions. By contrast, goals without reasons just sound like hot air from optimistic managers with little original or insightful to say.

Control

Dynamic Management provides management control by incentivising people to plan and act towards the latest, best informed goals. These goals are agreed goals, not just any goals people fancy pursuing. The above techniques are designed to make it possible to bring together the thinking of lots of people quickly so that this is possible.

Incentives that point in the right direction

In much of the Western world it is now generally accepted that people work better if they have incentives, which normally means pay that depends to some extent on results. However, there are a number of ways this can go seriously wrong. The wrong incentives, strongly applied, can create innumerable problems and even block Dynamic Management.

Distorted objective functions

It is better to base incentives on the objective function than on goals. However, goals are the more usual basis. This tends to lead people to make decisions on the basis of a distorted view of the objective function as well as encouraging fraud and false accounting.

Example: Incentives for salesmen. Before becoming a billionaire by creating EDS from nothing, Ross Perot worked as a computer salesman for IBM. One year IBM introduced a commission scheme where salespeople were paid a good bonus for reaching a certain, challenging figure for total sales in a year. To earn a second bonus they had to sell twice that amount, which was almost impossible. That year Perot sold IBM's top of the range system to a major customer on the first day of the year and by so doing reached the target for a bonus. With no realistic hope of doing it again that year and no incentive to sell anything less, he was disillusioned by the experience, and it helped push him towards going his own way to escape the culture of IBM.

In this example, IBM's objective function for sales should have shown that value was smoothly related to the sales made rather than being stepped as in the commission scheme. While this is an extreme example, all incentives based on goals tend to have thresholds that trigger a reward and these thresholds distort the objective function worked to.

Incentives with trigger points tend to create weak motivation away from the trigger point, but intense motivation when results are heading towards being just below the trigger point. In this situation people can see that just a little bit of effort or deception can produce a big pay off for them. Suddenly the risk of getting caught is overcome by the big pay off and the temptation to manipulate the figures often proves irresistible.
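
The distortion is easy to make concrete. In the sketch below (all figures are invented, not taken from the IBM example), a stepped scheme pays a lump-sum bonus at a sales threshold, while a smooth scheme pays commission in proportion to sales. Away from the threshold, extra effort earns nothing under the stepped scheme:

```python
# Compare a stepped bonus with a smooth commission (all figures invented).
TARGET = 1_000_000   # annual sales threshold for the bonus
BONUS = 50_000       # lump sum paid on reaching the threshold
RATE = 0.05          # smooth commission rate paying the same amount at the target

def stepped(sales):
    return BONUS * (sales // TARGET)   # one bonus per multiple of the target

def smooth(sales):
    return RATE * sales

# Just below the threshold, one tiny extra sale is worth the whole bonus...
print(stepped(1_000_000) - stepped(999_999))         # 50000
# ...while elsewhere extra sales are worth nothing under the stepped scheme.
print(stepped(1_500_000) - stepped(1_200_000))       # 0
# The smooth scheme rewards every extra sale equally.
print(round(smooth(1_500_000) - smooth(1_200_000)))  # 15000
```

The stepped scheme concentrates all its motivating power at the trigger point, which is precisely where the temptation to manipulate figures is greatest.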

Rewarding the wrong behaviour

Two reasons why incentives tend to reward the wrong behaviour are that: (1) the targets used tend to have been set too long ago (e.g. at the start of a year), and (2) the target is based on something that is a bad proxy for the true contribution of the behaviour it is trying to encourage. The consequences of both have already been illustrated above.

Incentives should be based on the viewpoint at the time the incentive is calculated, not some earlier time. This means people can expect the basis to change through the year and have to consider what it might be at the end of the year. Rather than think of ways to exploit the weaknesses of a scheme, they now have to anticipate how it might be improved to reflect current goals and better proxies of contribution, and work towards those.

Blocking learning

If people in an organisation are to manage uncertainty well they need to be able to take risks, and to experiment. In particular, they need to feel safe to try new things and evolve solutions to difficult, complex problems. They also need to be safe to report failures as well as successes. If they do not feel safe the organisation will not learn quickly and will be unable to take necessary risks intelligently.

People will not feel safe if they are held closely responsible for results achieved. There is also experimental evidence that people motivated by money tend to get attached to solutions that have worked in the past and fail to recognise the need to try something different when conditions change and results suffer. Instead of experimenting, people incentivised with money tend to keep repeating old behaviours, but with more intensity and effort.

Also, people do not feel safe to take risks when they feel that "errors" are punished much more severely than missed opportunities and lack of improvement.

Sadly, most evaluation and incentive schemes try to push risk onto employees and hold them closely responsible for results, while human nature seems fixated on criticising errors while ignoring missed opportunities.

While some connection with results must remain, a better philosophy is to reward excellent Dynamic Management, and uncertainty management in particular, rather than results. The leader's message should be "I don't mind if you fail from time to time - your bets won't always be the right ones. What I want is smart betting. I'll only reprimand you if you are negligent in your management of uncertainty."

Leaders can also show by personal example that they are sincere by admitting easily that they themselves are not in complete control of events and by not being punished when things turn out badly.

Haphazard rewards and judgments about people

Agency Theory is the branch of thinking that studies incentives, their weaknesses, and ways of reducing those weaknesses. Typically, it is assumed that the Principal pays the Agent for their results and cannot see what actions the Agent takes to achieve those results. Normally, it is also assumed that the results are at least partly dependent on factors outside the control of the Agent and these are usually seen as random or uncertain. Therefore, there is the risk of rewarding someone when their good results are because of luck rather than skill or effort, or failing to reward someone who was competent and hard working but not lucky.

Some jobs have a close link between behaviour and results, while others are less deterministic. Also, people at different levels in a large organisation face different levels of risk. Typically, people at a senior level are held responsible for results that are the outcome of many events, while people lower down are held responsible for results that are the outcome of just a few events. The statistical law of large numbers says that the senior person's results are more likely to show the influence of his/her skill and effort than the junior person's results, which will be more random.
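
This effect can be illustrated with a small, seeded simulation. The model below is an assumption for illustration only: an observed result is a person's fixed skill plus the average of some number of random events, so aggregating more events leaves less room for luck:

```python
import random
import statistics

# Assumed model: observed result = fixed skill + average of n random events.
# Aggregating more events leaves less room for luck in the observed result.
random.seed(0)

def observed_result(skill, n_events):
    noise = statistics.mean(random.gauss(0, 10) for _ in range(n_events))
    return skill + noise

def noise_spread(n_events, trials=2000):
    # Spread of observed results for a person of fixed skill 0.
    return statistics.stdev(observed_result(0, n_events) for _ in range(trials))

few = noise_spread(4)     # "junior": results depend on only a few events
many = noise_spread(100)  # "senior": results aggregate many events
print(few > many)  # True: fewer events means results are more random
```

The spread of the junior person's results is several times that of the senior person's, even though both have identical skill, which is the point about payment for results below.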

So, as we go down the organisational pecking order, other things being equal, payment for results gets less and less effective. (Some roles may be more deterministic than others, and it also depends on what you measure.) However, you can compensate to some extent by:

Both these techniques have costs, but these must be set against gains in fairness and morale, more reliable identification of strong and weak performers, and reduced scope and drive for gaming.

When allowing for the conditions that affected performance it is essential to be aware of hindsight bias. Hindsight bias is a powerful mental bias to which everyone is susceptible. Suppose something happens that surprises you. Looking back you realise that the clues were there all the time and you feel you should have seen them and realised what was going to happen. The problem is that it is only with the benefit of hindsight that you know which clues were relevant. Even the way you interpret events is affected by hindsight. Also, one tends to forget about all the other clues that were present to obscure the relevant clues and even point in other directions.

To reduce the effects of hindsight bias consider all the cues that were present, not just the ones that seem relevant in hindsight.

Building flexibility into contracts

Past promises can easily get in the way of guiding action using the best, most informed, most up to date thinking. From pointless arguments about missed budgets to bonus payments to chief executives for completing deals that later turn out to be gigantic mistakes, this problem is ever present in today's organisations.

Contracts and commitments - the impact of uncertainty

No approach to management can avoid the extra complexities of contracts and other forms of commitment between parties. A traditional view of these is that "good management" (always seen from the buyer/manager's perspective) means getting people tied down to specific delivery criteria with rewards and penalties attached, then using this as a lever to influence behaviour during the project or operation. The emphasis is usually on delivering to the originally agreed criteria.

This is unrealistic and can lead to missed opportunities. Most projects and operations change over time and more is learned from experience. Goals should, and do, change, even if managers fail to recognise formally what has happened. In major IT projects, for example, it is common to find the goal system changing radically over a period of a year or more. The consequences can be bad for both sides.

Example: U turn. I once saw a project (not an IT project) for a customer which started out with everyone enthusiastic about the "deliverable" promised. The customer thought it would be helpful in a negotiation they were having with a third party. The supplier thought the deliverable was feasible. As the first version of the deliverable became available and negotiations continued with the third party, the customer realised that the deliverable was anything but advantageous. In fact using it in the negotiation as originally envisaged would be disastrous! They changed their strategy and started to suppress the deliverable. Furthermore, the individuals who had bought the deliverable were then faced with explaining to their bosses why they were paying a large amount of money for something they wished did not exist. They managed to get some money off as the price had not been properly agreed at the outset, so both customer and supplier shared in the pain.

What is the use of holding a supplier to the terms of the original contract if the deliverables are no longer of value, or a different deliverable (perhaps even one that would also be preferred by the supplier) is more desirable and also achievable?

Ideally, we would like a method of structuring and managing contracts that allows for revisions or re-negotiations so Dynamic Management is possible.

This is possible using a contract that contains a "same or better" clause. The contract shows the deliverables that will be provided and perhaps a plan of action, and is structured into stages, each with its own deliverables and perhaps plans. At the end of each stage, the remaining stages can be re-negotiated with new deliverables and, perhaps, plans and prices being agreed. However, an umbrella agreement states that the revised terms must preserve certain things for each party. For example, this might be:

At each revision point, the buyer declares the value they see in suggested revisions, while the supplier declares the effort they think would be required. If they cannot agree on a revised set of requirements/plan then they must continue with the one originally set, even though its value and effort may be much less attractive than when it was first agreed. Both parties have an incentive to be at least somewhat truthful in their declarations since, if they mis-state value or effort too much, they could find they miss out on a better deal.

This should be attractive to both parties because, at any stage, you would normally expect to be able to come up with a better plan than had been envisaged at some earlier point in time. This is true even if the plan is constrained by having to maintain or better some aspect of the original agreement.
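
The umbrella rule can be expressed as a simple acceptance test: a proposed revision is adopted only if it leaves each party at least as well off as the originally agreed terms; otherwise the original terms stand. The field names and figures below are purely illustrative, not part of any real contract standard:

```python
# "Same or better" revision rule: adopt a proposed revision only if it leaves
# both parties at least as well off as the originally agreed terms.
# Field names and figures are illustrative only.

def revise_stage(original, proposal):
    buyer_ok = proposal["buyer_value"] >= original["buyer_value"]
    supplier_ok = proposal["supplier_effort"] <= original["supplier_effort"]
    return proposal if (buyer_ok and supplier_ok) else original

original = {"buyer_value": 100, "supplier_effort": 60}

# A genuinely better plan is adopted...
better = {"buyer_value": 120, "supplier_effort": 55}
print(revise_stage(original, better) is better)  # True

# ...but one that simply shifts effort onto the supplier falls back
# to the originally agreed terms.
one_sided = {"buyer_value": 130, "supplier_effort": 80}
print(revise_stage(original, one_sided) is original)  # True
```

Because falling back to the original terms is the default, neither party can be made worse off by agreeing to talk, which is what makes the clause attractive to both sides.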

Dynamic Management also helps with contracts and commitments by reducing the risk of gaming. A constant factor in human behaviour is deceit. Sadly, playing games to get an advantage is a fact of life and even buyers and suppliers who value their open and honest relationship have to have controls in place to prevent gaming by the other side. If they don't that open and honest relationship soon breaks down. Here's a summary of the main gaming behaviours and beliefs.

Real behaviours are often driven by uncertainty
Buyer/Principal/BossContractor/Agent/Subordinate

View of self:

Honest, fair, acting in good faith, organised, in control of own operations, prepared to pay a fair price.

View of self:

Honest, fair, acting in good faith, competent, hard working, competitively priced.

View of Contractor:

Lazy, incompetent, dishonest, desperate, disloyal, unscrupulous, exploitative, inflexible, tricksters.

View of Buyer:

Greedy, stupid, unscrupulous, desperate, disloyal, exploitative, dishonest, shambolic, inflexible, time wasters.

Nightmare scenario:

As work under the contract proceeds progress continually disappoints, while Contractor provides false reports of progress and prospects, repeatedly breaking promises and giving worthless assurances. Buyer becomes dependent on the project and the Contractor and cannot back out or negotiate better terms. Provides money in advance to keep things going. Ultimately, the whole thing is an expensive failure with nothing useful delivered and lots of money lost.

Nightmare scenario:

Work turns out to be much harder than expected, with the customer continually failing to do their part of the work, not making necessary decisions, and generally causing the work to fall behind. The customer has a strong negotiating position and will not accept responsibility for what is happening, while withholding agreed payments and giving threats. Cash flow position worsens but as the Contractor depends on the buyer he can't pull out or negotiate better terms. Ultimately, the whole thing is an expensive failure with substantial losses incurred because of work done but not paid for, with litigation pending.

Tactics up to signing the deal:

Challenge value asserted by the seller.

Get the most valuable work agreed at the lowest price.

Put the onus of proof of performance on the Contractor, otherwise, get the Contractor to make specific, measurable, testable commitments, with penalties on the Contractor for failure.

Be agreed as the party that will do the measurements of performance.

Avoid getting locked in by dependency.

Tactics up to signing the deal:

Persuade the Buyer the project/deal is valuable.

Sign up for the easiest work, at the highest price.

Put the onus of proof of non-performance on the Buyer, otherwise, avoid making risky commitments. Get caveats and excuses built in.

Be agreed as the party that will do the measurements of performance.

Put requirements onto the Buyer.

Lock the buyer in. Get them to go down a path that will lead to further purchases from a weak negotiating position.

Tactics during performance:

Show the Contractor's work is easy.

"Prove" failure/deny success.

Distort measurements if necessary, otherwise deny validity of Contractor's measurements and claim validity of own measurements/judgments.

Deny circumstances beyond the Contractor's control and assert Contractor failings.

Deny own failings.

Add requirements for no extra price.

Withhold or delay payments. Invoke fixed penalties and retentions.

Avoid getting locked in.

Lock in Contractor. Dangle extra contracts as bait.

Interpret contract in ways advantageous to self.

Tactics during performance:

Show work is hard - therefore price is fair.

Prevent proof of failure. "Prove" success.

Distort measures of performance, or undermine their credibility if unfavourable. Deny validity of measurements or judgments by the Buyer.

Point to circumstances beyond Contractor's control including failings by the Buyer.

Claim all failings were caused by the Buyer.

Claim extra payments.

Sell extra work to locked-in Buyer.

Lock in buyer.

Get out of requirements for no pay cut.

Interpret contract in ways advantageous to self.

Buyer's tactics afterwards:

"Prove" failure/deny success.

Distort measurements if necessary; otherwise deny the validity of the Contractor's measurements and claim validity for the Buyer's own measurements/judgments.

Deny circumstances beyond the Contractor's control and assert Contractor failings.

Deny own failings.

Withhold or delay payments. Invoke fixed penalties and retentions.

Interpret contract in ways advantageous to self.

Litigation.

Contractor's tactics afterwards:

Show work was hard - therefore price is fair.

Prevent proof of failure. "Prove" success.

Distort measures of performance, or undermine their credibility if unfavourable. Deny validity of measurements or judgments by the Buyer.

Point to circumstances beyond Contractor's control including failings by the Buyer.

Claim all failings were caused by the Buyer.

Claim extra payments.

Get out of requirements for no pay cut.

Interpret contract in ways advantageous to self.

Litigation.

The severity of this gaming depends on several factors, most of which are closely related to uncertainty, and so reducing these factors can reduce the severity and risk of gaming. The factors could be described as:

The major contributions of Dynamic Management here are to emphasise the management of risk and uncertainty, including project patterns that reduce the risks taken by both sides, and to avoid incentives that create unnecessary payoff trigger points. This reduces the likelihood that a strong motive to deceive will be created.

Planning and forecasting

Various theories are held about planning, forecasting, and their contribution to project over-runs in particular. One school of thought holds that estimates are usually underestimates. Another says that the estimates are fine, but people will over-run whatever deadline they are given, so allowing extra time still produces an over-run, plus a longer project. It has also been observed that even honest estimates are beaten down by higher layers of management, who assume deliberate padding and respond by demanding that the project be done more quickly.

There are also opposing theories of how to state and use deadlines and cost budgets. Most believe that the best approach is to get people to commit firmly to specific targets and to incentivise performance using them. Others argue that setting specific dates and costs far in advance is unrealistic and leads to people "suppressing" uncertainty (i.e. pretending it doesn't exist) and failing to manage it. They argue that a range of outcomes should be quoted for later phases, with specific commitments only for the next phase, or next three months.

All of these theories seem to be true to some extent:

Dynamic Management offers a better way to approach the three fundamental problems, which are:

Firstly, as already explained, thinking through the objective function and project goals initially and at later stages will normally show that project outcomes and value are smoothly linked. For example, consider the end date. There is rarely a special date after which the project is worthless, but before which it is valuable. Incentives should reflect this so that, regardless of the outcome expected, there is an incentive to get the best possible. The question is not "Will we hit our dates?", but "When will we finish, how valuable will that be, and can we do better?"

This is slightly different if there are some genuine deadlines. For example, Millennium Bug projects had a real deadline of 31 December 1999. However, even here a sensible project plan was to do the most risky things first, so the question was not "Will we finish on time?" but "What won't we have done by the end of 31/12/1999 and how much risk do those items carry?" In this way the relationship between value and end date was smoothed out by most projects.
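The smooth link between end date and value can be made concrete. The Python sketch below (with entirely hypothetical numbers: a fixed benefit horizon and a flat weekly benefit) expresses project value as a function of the finish week. Every week of slippage costs the same amount, so whatever outcome is currently expected, there is still an incentive to get the best possible.

```python
# Sketch: project value as a smooth function of the finish date, rather
# than a cliff at a single "deadline". All numbers are hypothetical.

def project_value(finish_week: int) -> float:
    """Value delivered if the project finishes in the given week.

    Benefits are assumed to accrue at 10 units per week from completion
    until a fixed horizon at week 104, so finishing earlier is smoothly
    more valuable -- there is no special date at which value suddenly
    drops to zero.
    """
    horizon = 104          # hypothetical end of the benefit period
    weekly_benefit = 10.0  # hypothetical benefit per week of operation
    return max(0, horizon - finish_week) * weekly_benefit

# Every week of slippage costs the same 10 units, whether the project is
# "on time" or not.
print(project_value(40))  # 640.0
print(project_value(41))  # 630.0
```

A genuine deadline would appear as a kink or step in this function, but as the Millennium Bug example shows, sensible sequencing of the work usually smooths even that out.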

Management will be repeatedly revising the targets and forecasts to reflect the latest news and thinking, including news about the difficulty of the task and not just actual progress so far. Sometimes the end date will slip, and sometimes it will tighten. Project team members must anticipate their efforts being judged against what, in retrospect, seems to management to have been achievable. This is less demoralising than being compared to management's initial fantasies, but still puts the pressure on since, if conditions turn out to be unexpectedly favourable, there is still no room for slacking.

Secondly, as already explained, Dynamic Management puts great emphasis on uncertainty management.

Thirdly, there should be a "most likely" date for completion, based on the latest plans, that guides project teams from day to day (though it will be changed from time to time), and a more sophisticated forecast (also revised frequently) that shows the probability distribution of project outcomes and is used by anyone in the organisation that wants to plan other things around the completion of the project. The "most likely" plan is the plan against which uncertainty is managed. Outcomes better than the "most likely" are upside uncertainties, while outcomes worse than the "most likely" are the downside.

Similar principles apply to resources used and quality of deliverables.
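The two kinds of forecast can be illustrated with a minimal Monte Carlo sketch in Python. The three tasks and their triangular duration distributions are invented for the example: the sum of the most likely task durations gives the "most likely" plan that guides day-to-day work, while percentiles of the simulated distribution give the range others can plan around.

```python
import random

# Sketch of the two schedule forecasts described above: a single "most
# likely" completion week, and a probability distribution of completion
# weeks. Task durations are hypothetical (low, most likely, high)
# triangular distributions, in weeks.

def simulate_completion(rng: random.Random) -> float:
    """One simulated completion time: three sequential tasks."""
    tasks = [(2, 4, 9), (3, 5, 12), (1, 2, 6)]
    return sum(rng.triangular(low, high, mode) for low, mode, high in tasks)

def forecast(n_runs: int = 20_000, seed: int = 1):
    rng = random.Random(seed)
    outcomes = sorted(simulate_completion(rng) for _ in range(n_runs))
    most_likely = 4 + 5 + 2              # the plan managed against (modes)
    p10 = outcomes[int(0.10 * n_runs)]   # 10% chance of finishing sooner
    p90 = outcomes[int(0.90 * n_runs)]   # 10% chance of finishing later
    return most_likely, p10, p90

most_likely, p10, p90 = forecast()
print(most_likely)  # 11 -- the "most likely" plan, in weeks
# Because each task's duration distribution is skewed to the right, the
# simulated distribution sits mostly above the 11-week plan: over-runs
# are more likely than under-runs, which the percentiles make visible.
```

Outcomes quicker than the "most likely" plan are the upside uncertainties; slower ones are the downside.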

Example: Costs. Imagine you are running a project for which you/your employer will receive a large but fixed sum of money on "successful completion". You could try to control your costs by drawing up a budget at the outset, showing how much you expect to have spent at each stage, and comparing actual costs each month with your budget. Unless you are very lucky indeed this is not going to help much: very soon your budget will be wrong, in both amount and timing. A far better approach is to record the costs to date and project forward the costs to "completion" (as currently defined, rather than as initially defined), assuming you will follow the most recently updated plan. This is much more effective, and provided the people in charge of the work plan are sufficiently involved in producing the estimate to completion, the thinking about costs and the approach to completion will itself be useful.

Do not leave your plan unchanged just because the cost estimates look better than, or the same as, your original estimate. If you can complete more cheaply, that is even more profit than expected. Also, do not use your forecasts merely to predict the financial end result. Use them to predict the implications of what you now expect, some of which may show that your plan is not viable and needs to be revised. For example, the latest plan may call for increased purchases of a particular raw material, and predictions of the amount required may show it is more than your supplier can provide. Obviously time for a rethink, but you would not have noticed the problem if your forecasting model had been built only to predict how angry or happy the project sponsor will be.
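The rolling estimate-to-completion, and the check on a non-financial implication of the latest plan, can be sketched as follows (all figures hypothetical):

```python
# Sketch of a rolling estimate-to-completion: record actual costs to
# date and project forward using the latest plan, instead of comparing
# against a fixed original budget. All figures are hypothetical.

def forecast_at_completion(actual_costs, remaining_plan):
    """Actual costs so far plus the estimated cost of the remaining
    work under the most recently updated plan."""
    return sum(actual_costs) + sum(remaining_plan)

def excess_material_demand(planned_demand, supplier_capacity):
    """A non-financial implication of the latest plan: flag any period
    that needs more raw material than the supplier can provide."""
    return [demand for demand in planned_demand
            if demand > supplier_capacity]

# Months 1-3: actual spend. Months 4-6: estimates under the latest plan.
actuals = [120, 140, 155]
remaining = [160, 150, 90]
print(forecast_at_completion(actuals, remaining))   # 815

# The same plan implies monthly raw-material purchases; any month over
# the supplier's capacity means the plan needs a rethink.
print(excess_material_demand([30, 55, 20], supplier_capacity=40))  # [55]
```

The point of the second function is that a forecasting model built only to predict the sponsor's reaction would never surface the supply problem.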

Lockheed Martin's "Skunk Works" is famous for creating awesome military aircraft, such as the SR-71 Blackbird (a superfast spyplane) and the F-117 Nighthawk (a stealth attack aircraft), in record time through being small, secretive, and anti-bureaucratic. However, repeated reforecasts to completion are one piece of bureaucracy the Skunk Works has written into its basic rules of operation. "No surprises" is the aim, but of course this does not mean that costs will always equal the original estimate.

Fourthly, the question of whether it is worth continuing will have to be considered repeatedly, taking into account only costs and benefits in the future. [Costs and benefits incurred/enjoyed in the past are "sunk" and irrelevant.]

Ideally, project estimates should be done with simulations or decision trees that incorporate decisions that will be made in the future of the project. There is a subtle but important effect that needs to be included in any forecasts. Imagine a project whose initial evaluation shows it is a good idea that will have big net benefits. Suppose the project runs into trouble and its value is reconsidered after 6 months. At that point there is less work remaining, and the benefits are still to be enjoyed. It may well look as though the best course is to continue with the project, even if it would have been a loser if evaluated 6 months earlier using the actual costs now known. A good way to tackle this is to simulate the future evolution of the project, from the start, using Monte Carlo simulation and including stop/continue decision points which correctly reflect the decisions as they might appear in the future.
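The effect described above can be sketched with a small Monte Carlo simulation in Python. All the distributions and amounts are hypothetical; the essential feature is the mid-project review that looks only at future costs and benefits, ignoring the sunk phase 1 cost, exactly as a rational future decision-maker would.

```python
import random

# Sketch: simulate the project from the start, including a stop/continue
# decision point that reflects the decision as it would appear in the
# future. All distributions and amounts are hypothetical.

def simulate_project(rng: random.Random) -> float:
    """Net value of one simulated run of the project."""
    phase1_cost = rng.uniform(50, 150)      # cost incurred before the review
    phase2_estimate = rng.uniform(100, 600)  # remaining cost, seen at review
    benefit = 500.0                          # benefit if the project completes

    # Review point: phase 1 costs are sunk, so continue only if the
    # FUTURE benefit exceeds the FUTURE cost. Note this can mean
    # continuing a project that, judged on total costs, is a loser.
    if phase2_estimate >= benefit:
        return -phase1_cost                  # stop: phase 1 cost is lost
    return benefit - phase1_cost - phase2_estimate

def expected_value(n_runs: int = 100_000, seed: int = 1) -> float:
    rng = random.Random(seed)
    return sum(simulate_project(rng) for _ in range(n_runs)) / n_runs

print(round(expected_value(), 1))  # expected net value across all runs
```

An initial evaluation that omitted the stop/continue decision point would misstate the project's expected value, because it would include the losses of runs that a sensible future manager would cut short.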

Stable updating

There are many ways in which human thinking about uncertainty is flawed and a sample of these is presented in "Straight and crooked thinking about uncertainty", another of my web publications. One particular error is of great importance to Dynamic Management.

We tend to be bad at combining evidence. We are too influenced by the latest evidence on something (if we believe it), and forget to combine it with other evidence we have. In Dynamic Management this can lead to dramatic changes of direction and plan as managers over-react to every new piece of news.

What we should do is combine evidence in a Bayesian fashion (with appropriate adjustments if evidence suggests a change has occurred so that to some extent we have to restart). The Bayes rule and Bayesian statistics generally capture important principles about evidence and how to combine it:

You don't need to use numbers and the Bayes formula to practise a Bayesian style of evidence combination. Just remember that new evidence needs to be considered along with existing evidence to arrive at your best view. It is not correct to ignore all previous evidence and simply go with the most recently received evidence.

One caveat here is that if there has been a change you need to restart because the question you have been trying to settle has just changed.

Example: Process reliability. Imagine you are trying to measure the reliability of a process in a large company: people typing information into a computer system. Your research might proceed as follows. Your set of hypotheses is the infinite set of error rates from 0 to 100% wrong. Based on book research you start with a probability density function that shows the relative likelihood of each error rate being the actual error rate of the process you will study. You then perform a detailed check on a sample of data entered, in which you find a certain error rate. This was only a sample, so you can't take the error rate in your sample and assume it applies to the whole population; instead you revise your probability density function to combine the evidence of the book research with the figures from your sample. You carry out another sample check and find a different error rate, which you again use to revise your probability density function. At this point the software is changed and a more user-friendly screen design is introduced. Now it is time to start again: you revise your estimate from book research and begin checking further samples.
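The updating in this example can be sketched with a Beta distribution, the standard conjugate prior for an unknown proportion. The prior and the sample counts below are hypothetical:

```python
# Sketch of the Bayesian updating in the example above, using a Beta
# distribution over the unknown error rate. All counts are hypothetical.

def update(alpha, beta, errors, total):
    """Combine the existing evidence (alpha, beta) with a new sample of
    `total` entries containing `errors` mistakes."""
    return alpha + errors, beta + (total - errors)

def mean_error_rate(alpha, beta):
    """Current best single-number estimate of the error rate."""
    return alpha / (alpha + beta)

# Prior from book research: "errors are rare", expressed as Beta(2, 38),
# which has a mean error rate of 5%.
alpha, beta = 2.0, 38.0

# First sample check: 3 errors in 100 entries.
alpha, beta = update(alpha, beta, errors=3, total=100)
# Second sample check: 8 errors in 100 entries -- combined with, not
# substituted for, the earlier evidence.
alpha, beta = update(alpha, beta, errors=8, total=100)
print(round(mean_error_rate(alpha, beta), 3))  # 0.054

# The screen design then changes: the process itself has changed, so
# restart from a fresh prior rather than carry the old evidence forward.
alpha, beta = 2.0, 38.0
```

Note that the combined estimate sits between the two sample results, weighted by the amount of evidence behind each, rather than jumping to whichever figure arrived last.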

Failing to observe this principle leads many managers to tinker unnecessarily with processes that are doing fine, as they react to every random fluctuation in results. This is destabilising and inefficient.

Management information focussed on learning and rethinking

Reports of "management information" look different in Dynamic Management. Instead of the typical, late twentieth century comparisons of annual budgets/targets to actuals there are more forecasts, more graphs of changes month by month, more ratios, and more attempts to search for correlations and causal links. There are also more non-numeric facts. Why?

The allocation of space in reports reflects the main concerns of management in Dynamic Management, which are to manage uncertainty, learn, and adapt:

A new management process

Dynamic Management requires a lot of rethinking. It would be wasteful to have to repeat a complex process of analysis every time a revision was needed. Instead, revisions need to be selective, and flexibility is helpful in achieving this. Another consideration is that Dynamic Management will often be done cooperatively by a group as well as by individual group members, so the thinking goes on in parallel.

For these reasons, the "process" of Dynamic Management should be seen as a collection of processes that communicate. There is no "methodology" in the sense of a multi-step recipe. This diagram shows what needs to happen. Revisions could start just about anywhere.

[Diagram: Process]

Documentation of the thinking should be done and be orderly, reflecting logical relationships between items in the documentation. However, it is not necessary for the documents to be completed in any set order, provided the end result flows logically.

Easy adaptation to different domains and scales of organisation

Application to particular domains

The techniques presented above do not rely on special characteristics of the domain of application, e.g. medicine, finance, construction projects. However, most methods of management become vastly more efficient once domain-specific knowledge has been built up. For example, identifying likely future problems of a proposed chemical manufacturing plant is much more difficult the first time you do it than the tenth.

The relative importance of different techniques also varies between domains.

Getting better at Dynamic Management involves learning to apply it efficiently and effectively in specific domains.

Scaling up the techniques to really big organisations and projects

Many of the examples so far have been simple, accessible ones. However, Dynamic Management is also necessary with very large organisations and projects. The techniques scale up as follows:

However, whatever the scale of the organisation it is still made up of ordinary people whose thinking capacity is strictly limited. Whatever techniques are used the results have to be simplified down to decisions a human being can take, and that means making quick decisions using rules of thumb most of the time, as illustrated above.

Creating pockets of Dynamic Management in static organisations

Most people wishing to practise Dynamic Management will have to do so surrounded by people who are not doing so. Is this a problem?

Advantages and disadvantages of Dynamic Management within static organisations
Advantages:

- Better results, on average.
- Greater ability to manage expectations.
- Lower stress.

Disadvantages:

- Sometimes less than optimal match to the obsolete targets used, though this is offset by the fact that most people think more flexibly than the formal management systems allow for and do recognise that things have changed since the target was set.
- Frustration of others at getting probabilistic answers to questions that expect a simple (if unrealistically certain) answer.
- Some Dynamic Management techniques are hampered by poor incentives, inflexible contracts, budget fixation, lack of information from leaders, inefficient communication, weak strategy, and other symptoms of static management. However, this affects everyone and not just Dynamic Managers.

Summary of differences between Dynamic Management and common practice

Dynamic Management differs in a number of ways from common practice, as typically stated in advice on management and control. Here is a summary of the differences.

Summary of differences
Common practice: Focus on goals only.
Dynamic Management: Understanding of the goal system, with emphasis on the objective function.

Common practice: Control attempted by incentivising people to plan to move towards the original goals.
Dynamic Management: Control achieved by incentivising people to plan to maximise the current and/or future objective function.

Common practice: Goals only change because of strategy changes, and are revised infrequently according to a regular calendar (usually annually) or in crisis.
Dynamic Management: The goal system adapts frequently in response to new learning and significant developments. The average frequency varies according to the size of the unit managed and other factors, but is never less than 4 per year.

Common practice: Plans made annually.
Dynamic Management: Plans made at least quarterly and often much more often in order to respond to conditions rather than fulfil a planning calendar.

Common practice: Plans communicated with the help of goals.
Dynamic Management: Plans communicated with the help of the full goal system.

Common practice: Incentives based on results compared with original goals.
Dynamic Management: Incentives based on results compared with the objective function in use at the time of evaluation, and modified to some extent to allow for the conditions actually experienced over the period being evaluated, as well as to recognise actions taken.

Common practice: Contracts not expected to be revised.
Dynamic Management: Contract revision is expected, planned, and provided for in contracts.

Common practice: Limited risk management, defending against hazards only.
Dynamic Management: Strong emphasis on uncertainty management (i.e. covering hazards and opportunities), extending even to the goal system. Emphasis on anticipation and making robust and flexible plans.

Common practice: Goals set by a senior elite.
Dynamic Management: Goals influenced by everyone.

Common practice: "High level first, detail later" approach to planning.
Dynamic Management: "Whole picture together" approach to planning, accomplished by rapid assembly of prefabricated solutions drawn from a large store of such solutions.

Common practice: Communication is up and down a hierarchy, or constrained by computers.
Dynamic Management: Variety of accelerated communication techniques used, with conversations preferred to databases.

Conclusion

Management thinking in the late twentieth century has become increasingly concerned with "control" exercised by comparing actual results with original plans or expectations. The emphasis is on achieving original goals, which themselves are set by a strategy formation activity that occurs infrequently and is done by a senior elite in the organisation.

Yet this is neither sensible nor close to actual, natural behaviour.

In contrast, Dynamic Management is management that confidently and skilfully learns and adapts to changing circumstances, refining and revising goals as often as needed so that action is always guided by the latest, best, and most informed thinking.

Managing in today's organisations would be more effective, and more enjoyable, if we rejected the focus on goals fixed in the past and instead recognised and welcomed the changes we face, using Dynamic Management.

Further reading



Acknowledgments: I would like to thank all those who have read this page and commented. I consider every point carefully and often make improvements as a result.

About the author: Matthew Leitch has been studying the applied psychology of learning and memory since about 1979 and holds a BSc in psychology from University College London. Until very recently he worked as a consultant in risk management and systems for a leading professional services firm. Working with internationally known organisations, he pioneered new methods of designing and evaluating internal control systems for large scale systems and business processes and contributed to thinking on corporate governance. However, this web site is not connected in any way with his former employer nor are the views expressed here connected with the views of his former employer.

Contact the author at: matthew@dynamicmanagement.me.uk

Words © 2002 Matthew Leitch
