Andrew Graham | February 13, 2019
As Canada’s federal government starts looking for a replacement for its failed payroll system and the Ontario provincial government launches yet another major shake-up of its health-care system, it’s useful to remind decision-makers of a long history of failures in major public sector implementations.
Research from around the world shows a consistent pattern of failures in public sector policy and project implementation. Yet we continue to embark upon implementation built on bias and faulty logic.
So maybe it’s time to better understand the architecture of failure and what can be done to overcome it.
Recent publications from Australia, Canada, the United Kingdom and the United States deliver some consistent messages. The Blunders of Our Governments delves into the many restarts of the UK National Health Service. The Learning from Failure report details major project failures in Australia. In the U.S., A Cascade of Failures: Why Government Fails, and How to Stop It reports similar themes. In Canada, the auditor general’s latest reports on the Phoenix pay system echo the common basis for implementation failure. It’s not often an auditor uses the phrase “incomprehensible,” but there it is.
Distilling all this research and all these investigations reveals certain common themes.
First and foremost, in the public sector, announcement is equated with accomplishment. This is the equivalent of thinking that just cutting the ribbon is enough.
A corollary of this is that most projects get lots of attention from both political and bureaucratic leaders at first, but that attention fades as the boring, detail-oriented work begins and the next issue, crisis or bright shiny object comes along.
‘We design it. You make it work.’
In many cases, there is a cultural disconnect in the project design that prevents bad news from making it to those at the top of the chain of command, minimizes problems that are often warning signs and deliberately downplays operational issues as minor.
What can be called the “handover mentality” often takes over between a project’s designers and the people who have to actually implement it and get it up and running. It’s best characterized by the phrase: “We design it. You make it work.”
The next element is that when things go wrong, those who speak up about the problems are dismissed, discounted or just plain punished. This leads to groupthink: assumptions go unchallenged and people simply go along, even when danger signs are in plain sight.
Policy designers and those who must implement government projects or infrastructure are often guilty of what’s known as optimism bias (“What could possibly go wrong?”) when, in fact, they should be looking at the end goal. They should be working backwards to identify not only what could go wrong, but how the whole process will roll out.
Instead, they focus on the beginning — the announcement, the first stages.
We hear the word complexity a lot when examining government project failures. Indeed, most of the problems examined in the aforementioned research pointed to increasing complexity in failed implementations that went well beyond IT, and to the failure to map those complexities out.
That complexity increases the risk of some moving part of a government project malfunctioning and shutting down the entire system.
Gears start slipping
People get busy and distracted. If a policy is just the flavour of the week and something else becomes popular next week, the project starts to lose momentum and the attention it needs to react and adapt to inevitable challenges. The gears start to slip.
Then there is the churn of officials. At both the political and bureaucratic levels, this is a consistent theme in projects that fail and in governments that respond poorly to crises as they arise.
The champions for a policy simply move on, and their successors are left to decide how much energy to put into someone else’s pet project. Similarly, the rapid turnover of senior managers in government often leaves well-intentioned people to respond to emergencies in areas where they have little experience.
An interesting element in all of this research is the confirmation that cognitive biases play a significant role in how risks are assessed in policy implementation, often in the face of a mountain of contrary evidence.
Cognitive biases tend to confirm the beliefs we already hold and to block new information. While we need biases to short-hand our interpretation of events, they often filter out and discount evidence that challenges those beliefs. Our experience is both our greatest asset and our greatest liability in this process.
The bottom line on the causes of major implementation failure really rests with a culture focused on blame avoidance and getting along. We now know enough to avoid failure: ample evidence confirms common sense about how to better structure policies, their implementation and our major projects.
Can we do it?
Andrew Graham, Professor, Queen’s University, Ontario
This article is republished from The Conversation under a Creative Commons license. Read the original article.