In the Risk Reassessment section of the Global Risks Report, we invite selected risk experts to share their insights about risk and risk management. The aim is to encourage fresh thinking about how to navigate a rapidly evolving risk landscape. In this year’s report, John D. Graham discusses the importance of considering trade-offs between risks—because efforts to mitigate one risk can often exacerbate others. And András Tilcsik and Chris Clearfield highlight a number of steps that can be taken to protect organizations from systemic risks.
By John D. Graham
Corporate executives, regulators, physicians and security officials often face a shared dilemma in decision-making: deciding which risks to accept, at least for now. The stark reality is that few decision options in these fields are without any risk. The executive may decide in favour of a promising acquisition, despite knowing that merging with an unfamiliar company is fraught with downside risks. Heart patients often trust cardiologists to help them decide whether the longevity gains from coronary artery bypass surgery are worth its additional surgical dangers compared with the simpler angioplasty procedure. The bold German phase-out of nuclear power is indirectly forcing Germany to incur greater risks from coal-fired electricity, at least until the ambitious path to renewables is accomplished. And measures to counteract terrorism at airports may not reduce overall societal risk if terrorists simply respond by shifting to new vulnerable targets such as sporting events, concerts and subways.
Trade-offs between risks
What might be called the “target risk” is the one of primary concern to decision-makers. The Trump administration sees imports from China as an immediate threat to American businesses because there are plenty of US businesses that have been damaged by government-subsidized Chinese products. The “countervailing risk” is the unintended risk triggered by interventions to reduce the target risk. Slapping tariffs on Chinese imports may bring the Chinese to the negotiating table but, in the interim, the tariffs make some US goods more expensive in global markets, especially those that rely on Chinese inputs. US tariffs also invite a trade war with the Chinese that will create some countervailing risks for US exporters that do business in China.
The challenge of resolving trade-offs between target and countervailing risks is particularly perplexing in the short run. Technological options are fairly fixed, research and development (R&D) solutions are beyond the relevant time horizon, and current legal and organizational arrangements in both government and business are difficult to reform quickly. In the long run, there are more “risk-superior” solutions because the extra time for risk management allows R&D, innovation and organizational change to work against both the target and countervailing risks.
The most promising short-run solution to risk trade-offs is as simple in theory as it is devilishly difficult in practice: identify and carefully weigh the competing risks of decision alternatives. For example, with the global economy in an encouraging recovery, it is tempting for policy-makers to enforce monetary discipline—but that discipline might cause interest rates to rise above the surprisingly low levels that have become familiar throughout much of the world. If interest rates rise too much or too fast, the adverse effects on business activity are predictable. Weighing the risks and benefits of monetary discipline is a crucial responsibility of monetary policy-makers.
Geography and culture
Risk trade-offs are particularly sensitive for decision-makers when the parties suffering from the target risk are different from the parties likely to experience the countervailing risk. In China, electric cars look promising to families in polluted Eastern cities who breathe motor vehicle exhaust on a daily basis, especially those families living close to congested roads and highways. But, when electric cars are recharged by drawing electricity from the Chinese electrical grid, more pollution is generated at the electric power plants. Those facilities may be located on the perimeter of Chinese cities or in the less prosperous, inner regions of China where electricity plants are easier to site. It requires careful air quality modelling, informed by state-of-the-art atmospheric chemistry and high-resolution geographic information systems, to know precisely who will incur the indirect public health risks of plug-in electric cars. If the countervailing risks are not given the same analytic attention as the target risks, it is impossible for a thoughtful regulator to weigh the ethical aspects of shifting pollution from one population to another. In this setting, making the countervailing risks as transparent as the target risks is easier said than done.
When decisions about risk trade-offs are made in different cultures, it should be expected that some stark differences will result. In the United States, the national energy policies of both George W. Bush and Barack Obama facilitated a surge of unconventional oil and gas development through innovations such as multi-stage hydraulic fracturing and horizontal drilling. The diffusion of innovation occurred so rapidly in the states of Pennsylvania, North Dakota, Oklahoma and Texas that state regulators are only beginning to fully understand and regulate the resulting risks of earthquakes and water pollution. The same unconventional technologies used in the United States are seen as unacceptable in Germany, where bans on “fracking” were imposed before the new industry could get off the ground. Businesses and households in Germany are incurring high natural gas prices as well as greater dependence on Russian gas as a result of the ban on fracking, but German policy-makers are entitled to make those trade-offs.
Stark international differences in regulatory risk management are less acceptable when the alleged risks relate not to production activity, which is confined to a particular country, but to consumption of goods that are traded across borders in a global economy. The World Trade Organization (WTO) has already exposed several instances where countries have tried to use health-risk concerns to conceal protectionist motivations for product bans and restrictions. The Chinese are concerned that the United States and the European Union behave in this fashion; the United States has already won cases against the European Union at the WTO related to hormone-treated beef and genetically modified seeds.
One of the advantages of evidence-based approaches to resolving trade disputes is that all countries, regardless of cultural norms, have access to scientific evidence. Understanding cultural norms is a more subjective exercise. Scientific knowledge about risk and safety does not stop at an international border, though genuine uncertainty about the severity of established risks might justify differences in the precautionary regulations of different countries. The WTO is far from a perfect organization, but it has potential to promote an evidence-based approach to risk management and foster more international learning about risk trade-offs.
Investing to ease risk trade-offs
Fortunately, the long run opens up more promising opportunities for superior management of risk. New surgical techniques have made coronary artery bypass surgery much safer and more effective today than it was 20 years ago. The fracking techniques used today in the United States and Canada are much more sustainable and cost-effective than the techniques used only five years ago. And progress in battery technology is making electrification of the transport sector a more plausible, sustainable and affordable option than most experts believed possible a decade ago.
The hard question is how to foster productive R&D investments to ease difficult risk trade-offs. When will innovation occur productively through market competition, and when does an industry require incentives, nudging or even compulsion in order to innovate? Should governmental subsidies focus on basic research, or is there also a need for government to pick some promising technologies and subsidize real-world demonstrations? There are plenty of cases where government R&D policy has produced “duds” in the commercial marketplace, but there are also cases, such as fracking and plug-in electric vehicles, where government R&D policy has played a constructive role in fostering exciting and transformative innovations.
John D. Graham is Dean of Indiana University School of Public and Environmental Affairs.
Managing in the Age of Meltdowns
By András Tilcsik and Chris Clearfield
While we are right to worry about major events—such as natural disasters, extreme weather and coordinated cyber-attacks—it is often the cascading impact of small failures that brings down our systems. The sociologist Charles Perrow identified two aspects of systems that make them vulnerable to these kinds of unexpected failures: complexity and tight coupling.1 A complex system is like an elaborate web with many intricately connected parts, and much of what goes on in it is invisible to the naked eye. A tightly coupled system is unforgiving: there is little slack in it, and the margin for error is slim.
When something goes wrong in a complex system, problems start popping up everywhere, and it is hard to figure out what’s happening. And tight coupling means that the emerging problems quickly spiral out of control and even small errors can cascade into massive meltdowns.
When Perrow developed his framework in the early 1980s, few systems were both highly complex and tightly coupled; the ones that were tended to be in exotic, high-tech domains such as nuclear power plants, missile warning systems and space-exploration missions. Since then, however, we have added an enormous amount of complexity to our world. From connected devices and global supply chains to the financial system and new intricate organizational structures, the potential for small problems to trigger unexpected cascading failures is now all around us.
The good news is that there are solutions. Though we often cannot simplify our systems, we can change how we manage them. Research shows that small changes in how we organize our teams and approach problems can make a big difference.
In complex and tightly coupled systems—from massive information technology (IT) projects to business expansion initiatives—it is not possible to identify in advance all the ways that small failures might lead to catastrophic meltdowns. We have to gather information about close calls and little things that are not working to understand how our systems might fail. Small errors give us great data about system vulnerabilities and can help us discover where more serious threats are brewing. But many organizations fail to learn from such near misses. It is an all-too-human tendency familiar from everyday life: we treat a toilet that occasionally clogs as a minor inconvenience rather than a warning sign—until it overflows. Or we ignore subtle warning signs about our car rather than taking it into the repair shop. In a complex system, minor glitches and other anomalies serve as powerful warning signs—but only if we treat them as such.
Leaders can build organizational capabilities that attend to weak signals of failure. The pharmaceutical giant Novo Nordisk started developing such capabilities after senior executives were shocked by a manufacturing quality breakdown that cost more than US$100 million. In the wake of the failure, Novo Nordisk did not blame individuals or encourage managers to be more vigilant. Instead, it created a new group of facilitators tasked with interviewing people in every unit and at all levels to make sure important issues don’t get lost at the bottom of the hierarchy. The group follows up on small issues before they become big problems.
When success depends on avoiding small failures, we need to build scepticism into our organizations so that we consider our decisions from multiple angles and avoid groupthink. One approach, pioneered by NASA’s Jet Propulsion Laboratory (JPL), is to embed a sceptic in every project team—specifically, an engineer from JPL’s Engineering Technical Authority (ETA).
ETA engineers are ideal sceptics. They are skilled enough to understand the technology and the mission but detached enough to bring a distinct perspective. And the fact that they are embedded in the organization, but with their own reporting lines, means that project managers cannot just dismiss their concerns. If an ETA engineer and a project manager cannot agree about a particular risk, they take their issue to the ETA manager, who tries to broker a technical solution, gets additional resources for the mission, or escalates the issue to JPL’s Chief Engineer.
Another effective way to cultivate scepticism is through diversity. Surface-level diversity (differences of race and gender, for example) fosters healthy dissent in organizations. Research shows that diverse groups ask tougher questions, share more information and discuss a broader range of relevant factors before making a decision. Diversity in professional backgrounds matters, too. In one study that tracked over a thousand small banks for nearly two decades, researchers found that banks with fewer bankers on their boards were less likely to fail.2 The explanation: non-bankers were more likely to disrupt groupthink by challenging seemingly obvious assumptions. As one bank CEO with a professionally diverse board put it: “When we see something we don’t like, no one is afraid to bring it up.”
Learn to stop
When faced with a problem or surprising event, our instinct is often to push forward. But sticking to a plan in the face of an emerging problem can easily lead to a disaster. Stopping gives us a chance to assess unexpected threats and figure out what to do before things get out of hand. It sounds simple, but in practice it can be nerve-wracking for team members to trigger delays and disruption for something that might turn out to be a false alarm. This is something leaders need to actively encourage.
In some cases, stopping may not be an option. In those situations, effective crisis management requires rapidly cycling between doing, monitoring, and diagnosing. We do something to try to fix the system. We monitor what happens in response, checking to see whether our actions had the intended effect. If they didn’t, we use the information from our monitoring to make a new diagnosis and move to the next phase of doing. Research shows that teams that cycle rapidly in this way are more likely to solve complex, evolving problems.
Cognitive biases are often the source of the small errors that trigger major failures in complex, tightly coupled systems. Fortunately, there are some simple techniques we can use to make better decisions. One is the “premortem”.3 Imagine that it’s six months from now and that the ambitious project you’re about to undertake has failed. The premortem involves working backward to come up with reasons for the failure and ideas for what could have been done to prevent it. The process is distinct from brainstorming about risks that might emerge: by asserting that failure has already happened, we tap into what psychologists call “prospective hindsight”, letting us anticipate a broader and more vivid set of problems.
Similarly, the use of predetermined criteria to make decisions can prevent us from relying on our (often incorrect) gut reactions. Too often, we base decisions on predictions that are overly simplistic, missing important possible outcomes. For example, we might anticipate that a project will take between one and three months to complete. One way of being more structured about this kind of forecast is to use Subjective Probability Interval Estimates (SPIES), which entails dividing the entire possible range of outcomes into intervals and then estimating the probability of each. In our example, we might consider six intervals for the project’s duration: zero to one month, one to two months, two to three months, three to four months, four to five months, and more than five months.4
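The mechanics of a SPIES forecast are simple enough to sketch in a few lines of code. The intervals and probability estimates below are hypothetical, chosen only to mirror the project-duration example above; the point is that once the full range of outcomes carries explicit probabilities, questions such as "how likely is an overrun?" become straightforward arithmetic rather than gut calls.

```python
# Minimal sketch of a SPIES-style forecast (hypothetical numbers).
# Instead of a single point estimate ("one to three months"), the
# forecaster divides the entire possible range of outcomes into
# intervals and assigns a probability to each.

intervals = [  # project duration in months: (low, high); None = open-ended
    (0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, None),
]
raw_estimates = [0.05, 0.20, 0.35, 0.20, 0.10, 0.10]  # forecaster's guesses

# Normalize so the probabilities sum to 1 (raw guesses often do not).
total = sum(raw_estimates)
probs = [p / total for p in raw_estimates]

# The chance of overrunning a 3-month deadline is simply the
# probability mass sitting in the intervals beyond 3 months.
p_overrun = sum(p for (low, _), p in zip(intervals, probs) if low >= 3)
print(f"P(duration > 3 months) = {p_overrun:.2f}")
```

The exercise forces attention onto the tails of the distribution—here, the 40% of probability mass beyond three months that a simple "one to three months" estimate would have hidden entirely.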
Even with all these techniques, things will go wrong. When they do, we need to do a better job of learning lessons. Too often there is practically a script: a superficial post-mortem is conducted, an individual or a specific technical problem is found to be at fault, and a narrow fix is implemented. Then it’s back to business as usual. That is not good enough anymore. We need to face reality with a blameless process that not only identifies specific issues but also looks at broader organizational and systemic causes. Only by doing this—and by recognizing early warning signs, building scepticism into organizations, using structured decision tools and managing our crises better—will we be able to prevent the “unprecedented errors” that seem to be a defining feature of the modern world.
Chris Clearfield and András Tilcsik are the co-authors of Meltdown: Why Our Systems Fail and What We Can Do About It (Penguin Press, 2018).