Part 3: Emerging Technologies
3.1 Understanding the Risk Landscape
The emerging technologies of the Fourth Industrial Revolution (4IR) will inevitably transform the world in many ways – some that are desirable and others that are not. The extent to which the benefits are maximized and the risks mitigated will depend on the quality of governance – the rules, norms, standards, incentives, institutions, and other mechanisms that shape the development and deployment of each particular technology.
Too often the debate about emerging technologies takes place at the extremes of possible responses: among those who focus intently on the potential gains and others who dwell on the potential dangers. The real challenge lies in navigating between these two poles: building understanding and awareness of the trade-offs and tensions we face, and making informed decisions about how to proceed. This task is becoming more pressing as technological change deepens and accelerates, and as we become more aware of the lagged societal, political and even geopolitical impact of earlier waves of innovation.
Over the years The Global Risks Report has repeatedly highlighted technological risks. In the second edition of the Report, as far back as 2006, echoes of current concerns were noted in one of the technology scenarios we considered, in which the “elimination of privacy reduces social cohesion”. This was classified as a worst-case scenario, with a likelihood of below 1%. In 2013, the Report discussed the risk of “the rapid spread of misinformation”, observing that trust was being eroded and that incentives were insufficiently aligned to ensure the maintenance of robust systems of quality control or fact-checking. Four years later, this is a growing concern; in Chapter 2.1, the Report considers the potential impact of similar trends on the very fabric of democracy.
In 2015, emerging technology was one of the Report’s “risks in focus”, highlighting, among other things, the ethical dilemmas that exist in areas such as artificial intelligence (AI) and biotechnology.
This year, the Global Risks Perception Survey (GRPS) included a special module on 12 emerging technologies (see Table 3.1.1). The results suggest that respondents are broadly optimistic about the balance of technological risks and benefits. Figure 3.1.1 shows that the average score is much higher for perceived benefits than it is for negative consequences. However, as Figure 3.1.2 makes clear, respondents still identify clear priorities for better governance of emerging technologies.
Table 3.1.1: Twelve Key Emerging Technologies
| Technology | Description |
| --- | --- |
| 3D printing | Advances in additive manufacturing, using a widening range of materials and methods; innovations include 3D bioprinting of organic tissues. |
| Advanced materials and nanomaterials | Creation of new materials and nanostructures for the development of beneficial material properties, such as thermoelectric efficiency, shape retention and new functionality. |
| Artificial intelligence and robotics | Development of machines that can substitute for humans, increasingly in tasks associated with thinking, multitasking, and fine motor skills. |
| Biotechnologies | Innovations in genetic engineering, sequencing and therapeutics, as well as biological-computational interfaces and synthetic biology. |
| Energy capture, storage and transmission | Breakthroughs in battery and fuel cell efficiency; renewable energy through solar, wind, and tidal technologies; energy distribution through smart grid systems, wireless energy transfer and more. |
| Blockchain and distributed ledger | Distributed ledger technology based on cryptographic systems that manage, verify and publicly record transaction data; the basis of “cryptocurrencies” such as bitcoin. |
| Geoengineering | Technological intervention in planetary systems, typically to mitigate effects of climate change by removing carbon dioxide or managing solar radiation. |
| Ubiquitous linked sensors | Also known as the “Internet of Things”. The use of networked sensors to remotely connect, track and manage products, systems, and grids. |
| Neurotechnologies | Innovations such as smart drugs, neuroimaging, and bioelectronic interfaces that allow for reading, communicating and influencing human brain activity. |
| New computing technologies | New architectures for computing hardware, such as quantum computing, biological computing or neural network processing, as well as innovative expansion of current computing technologies. |
| Space technologies | Developments allowing for greater access to and exploration of space, including microsatellites, advanced telescopes, reusable rockets and integrated rocket-jet engines. |
| Virtual and augmented realities | Next-step interfaces between humans and computers, involving immersive environments, holographic readouts and digitally produced overlays for mixed-reality experiences. |
Source: The 12 emerging technologies listed here and included in the GRPS are drawn from the World Economic Forum’s Handbook on the Fourth Industrial Revolution (forthcoming, 2017).
The remainder of this chapter highlights the particular challenges involved in creating governance regimes for fast-moving technologies, and then summarizes the key results of this year’s GRPS special module on emerging technology. The chapter concludes with a discussion of the profound changes that new technologies will entail for businesses and of the cascading effects these changes may have on the global risk landscape.
How to govern emerging technologies is a complex question. Imposing overly strict restrictions on the development of a technology can delay or prevent potential benefits. But so can continued regulatory uncertainty: investors will be reluctant to back the development of technologies that they fear may later be banned or shunned if the absence of effective governance leads to irresponsible use and a loss of public confidence.
Ideally, governance regimes should be stable, predictable and transparent enough to build confidence among investors, companies and scientists, and should generate a sufficient level of trust and awareness among the general public to enable users to evaluate the significance of early reports of negative consequences. For example, autonomous vehicles will inevitably cause some accidents; whether this leads to calls for bans will depend on whether people trust the mechanisms that have been set up to govern their development.
But governance regimes also need to be agile and adaptive enough to remain relevant in the face of rapid changes in technologies and how they are used. Unexpected new capabilities can rapidly emerge where technologies intersect, or where one technology provides a platform to advance technologies in other areas.1
Currently, the governance of emerging technologies is patchy: some are regulated heavily, and others hardly at all because they do not fit under the remit of any existing regulatory body. Mechanisms often do not exist for those responsible for governance to interact with people at the cutting edge of research. Even where insights from the relevant fields can be combined, it can be hard to anticipate what second- or third-order effects might need to be safeguarded against: history shows that the eventual benefits and risks of a new technology can differ widely from expert opinion at the outset.2
To the extent that potential trade-offs of a new technology can be anticipated, there is scope for debate about how to approach them. There may be arguments for allowing a technology to advance even if it is expected to create some negative consequences at first, if there is also a reasonable expectation that other innovations will create new ways to mitigate those consequences. Even if there is widespread desire to restrict the progress of a particular technology – such as lethal autonomous weapons systems – there may be practical difficulties in getting effective governance mechanisms in place before the genie is out of the bottle.
The growing popular awareness of the dilemmas associated with governing new technologies is revealed by media analysis: relevant mentions of such quandaries in major news sources doubled between 2013 and 2016. But which technologies should we be focusing on? In the latest GRPS, we asked respondents to assess 12 technologies on their potential benefits and adverse consequences, public understanding and need for better governance.
Technologies that Need Better Governance
Figure 3.1.1 plots respondents’ perceptions of the potential benefits and negative consequences of the 12 technologies included in the GRPS. As noted above, the average score for benefits is much higher than it is for adverse consequences,3 suggesting that respondents are optimistic about the net impact of emerging technologies as a whole.4 Technologies considered to have above-average risks and below-average benefits, in the upper left quadrant of the figure, tended to be those where respondents felt least confident of their own assessments and also least confident of the public’s understanding.
Three technologies occupy the upper-right quadrant of Figure 3.1.1, indicating an above-average score for both potential benefits and risks: artificial intelligence (AI) and robotics, biotechnologies, and new computing technologies. Analysis of media coverage resonates with respondents’ high ranking for the risk associated with AI: from 2013 to 2016 there was a steady rise in reporting on whether we should fear AI technologies.5
Respondents also cited artificial intelligence (AI) and robotics most frequently when asked how the 12 emerging technologies exacerbate the five categories of global risk covered by The Global Risks Report. As Figure 3.1.2 illustrates, this was seen as the most important driver of risks in the economic, geopolitical and technological categories.
In Figure 3.1.3, two technologies stand out as requiring better governance in the view of GRPS respondents: both artificial intelligence (AI) and robotics and biotechnologies were cited by more than 40% of respondents. These two technologies differ greatly in terms of the current state of their governance.
Biotechnologies, which involve the modification of living organisms for medicinal, agricultural or industrial uses, tend to be highly regulated.6 Biotech became a global governance issue in 1992 with the Convention on Biological Diversity, now ratified by 196 countries.7 AI and robotics, meanwhile, are only lightly governed in most parts of the world. As “general purpose technologies”, in the words of economic historian Gavin Wright,8 they have applications in many fields that already have their own governance regimes. For example, where machine learning is used in areas such as online translation, internet search and speech recognition, it comes under governance related to the use of data. Industrial robots are governed by International Organization for Standardization (ISO) standards,9 while domestic robots are primarily governed by existing product certification regulations. There is increasing debate about the governance of AI given the risks involved, which are further discussed in Chapter 3.2.
The Disruptive Impact of Emerging Technologies
The potential of emerging technologies to disrupt established business models is large and growing. It is tempting to think of technological disruption as involving dramatic moments of transformation, but in many areas disruption due to emerging technologies is already quietly under way, the result of gradual evolution rather than radical change. Consider autonomous vehicles: we are not yet in a world of vehicles that require little or no human intervention, but the technologies that underpin autonomy are increasingly present in our “ordinary” cars.
As the technological changes entailed by the 4IR deepen, so will the strain on many business models. The automotive sector remains a good example. It has been clear for some time that car manufacturers need to plan ahead for a world in which many of the factors that determine current levels of car ownership may no longer be present. Increasing evidence of this planning is now starting to shape commercial decision-making. For example, in December 2016, Volkswagen launched a new “mobility services” venture, MOIA, in recognition of “an ever-stronger trend away from owning a vehicle towards shared mobility as well as mobility on demand”.10
The deep interconnectedness of global risks means that technological transitions can exert a multiplier effect on the risk landscape. This does not apply only to newly emerging technologies: arguably much of the recent social and political volatility that is discussed in Parts 1 and 2 of this year’s Global Risks Report reflects, in part at least, the lagged impact of earlier periods of technological change. One obvious channel through which technological change can lead to wider disruption is the labour market, with incomes pushed down and unemployment pushed up in affected sectors and geographical regions. This in turn can lead to disruptive social instability, in line with the GRPS finding this year that the most important interconnection of global risks is the pairing of unemployment and social instability.
Another prism through which to look at the interaction of risks and emerging technologies is that of liability – or, to put it another way, the question of who is left bearing which risks as a result of technological change. There are multiple potential sources of disruption here. The insurance sector offers an obvious example: just as car manufacturers must prepare for a future of driverless vehicles, so insurance companies must prepare for the plummeting demand for car insurance that the resulting reduction in accidents would entail.11 But the idea of liability can also be understood more broadly, to include the kind of social structures and institutions discussed in Chapter 2.3 on social protection. Already there are signs of strain in these institutions, such as mounting uncertainty about the rights and responsibilities of workers and employers in the “gig economy”. One of the challenges of responding to accelerating technological change in the 4IR will be ensuring that the evolution of our critical social infrastructure keeps pace.
Chapter 3.1 was contributed by Nicholas Davis, World Economic Forum, and Thomas Philbeck, World Economic Forum.
Alford, K., S. Keenihan, and S. McGrail. 2012. “The complex futures of emerging technologies: challenges and opportunities for science foresight and governance in Australia”. Journal of Futures Studies 16 (4): 67–86.
Juma, C. 2016. Innovation and Its Enemies: Why People Resist New Technologies. New York: Oxford University Press.
Karembu, M., D. Otunge, and D. Wafula. 2010. Developing a Biosafety Law: Lessons from the Kenyan Experience. Nairobi: ISAAA AfriCenter.
KPMG. 2015. “Marketplace of change: Automobile insurance in the era of autonomous vehicles”. White Paper, October 2015. https://home.kpmg.com/content/dam/kpmg/pdf/2016/05/marketplace-change.pdf
Nuffield Council on Bioethics. 2016. Genome Editing: An Ethical Review. London: Nuffield Council on Bioethics.
Volkswagen. 2016. “MOIA: The Volkswagen Group’s new mobility services company”. Press release, 5 December 2016. https://www.volkswagen-media-services.com/documents/10541/4e91af8e-0b11-477c-a6fb-7ee089f1cc4d
Wright, G. 2000. “Review of Helpman (1998)”. Journal of Economic Literature 38 (March 2000): 161–62. Cited in Brynjolfsson, E. and A. McAfee. 2014. The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. New York and London: W. W. Norton & Company.