Part 2: Risks in Focus
2.4 Engineering the Future: How Can the Risks and Rewards of Emerging Technologies Be Balanced?
From networked medical devices to the Internet of Things, from drought-resistant crops to bionic prosthetics, emerging technologies promise to revolutionize a wide range of sectors and transform traditional relationships.39 Their impacts will range from the economic to the societal, cultural, environmental and geopolitical.
Emerging technologies hold great and unprecedented opportunities. Some examples are explored in detail in three boxes presented in this section:
- Synthetic biology could create bacteria that turn biomass into diesel (Box 2.6).
- Gene drives could assist in the eradication of insect-borne diseases such as malaria (Box 2.7).
- Artificial intelligence is behind advances from self-driving cars to personal care robots (Box 2.8).
Discoveries are proceeding quickly in the laboratory, and once technologies demonstrate their usefulness in the real world, they attract significantly more investment and develop at an even faster pace.
However, how emerging technologies evolve is highly uncertain. Their potential second- or third-order effects cannot easily be anticipated, such that designing safeguards against them is difficult. Even if the ramifications of technologies could be foreseen as they emerge, the trade-offs would still need to be considered. Would the large-scale use of fossil fuels for industrial development have proceeded had it been clear in advance that it would lift many out of poverty but introduce the legacy of climate change? Would the Haber-Bosch process have been sanctioned had it been evident it would dramatically increase agricultural food production but adversely impact biodiversity?40 A range of currently emerging technologies could have similar or even more profound implications for mankind’s future. Survey respondents highlighted technological risks as highly connected to man-made environmental catastrophes.
Emerging technology is a broad and loose term (see Box 2.5), and debate about potential risks and benefits is more vigorous in some areas than in others. In the examples that follow, the focus is on technologies that are considered to have wide benefits and for which there is strong pressure for development, as well as high levels of concern about potential risks and safeguards.
Causes for Concern
Risks of undesirable impacts of emerging technologies can be divided into two categories: the foreseen and the unforeseen. Foreseen risks include the leakage of dangerous substances, whether through failures of containment (as sometimes occurs in trials of genetically-modified crops) or storage errors (as with the 2014 security failures in US disease-control labs handling lethal viruses);41 the theft or illegal sale of emerging technologies; computer viruses; hacker attacks on medical implants;42 and chemical or biological warfare. The establishment of new fundamental capabilities, as is happening, for example, with synthetic biology and artificial intelligence, is especially associated with risks that cannot be fully assessed in the laboratory. Once the genie is out of the bottle, undesirable applications or effects may emerge that could not be anticipated at the time of invention. Some of these risks could be existential – that is, endangering the future of human life (see Boxes 2.6 to 2.8).43
Both foreseen and unforeseen risks are amplified by the accelerating speed and complexity of technological development. Exponential growth in computing power implies the potential for a tipping point that could significantly amplify risks, while hyperconnectivity allows new ideas and capabilities to be distributed more quickly around the world. The growing complexity of new technologies, combined with a lack of scientific knowledge about their future evolution and often a lack of transparency, makes them harder for both individuals and regulatory bodies to understand.
Safeguards and Challenges
As illustrated by the boxes on synthetic biology, gene drives and artificial intelligence, governance regimes that could mitigate the risks associated with the abuse of emerging technologies – from formal regulations through private codes of practice to cultural norms – present a fundamental challenge that has the following main aspects.44
The current regulatory framework is insufficient. Regulations are comprehensive in some specific areas of emerging technology, while weak or non-existent in others, even if conceptually the areas are similar. Consider the example of two kinds of self-flying aeroplane: the use of autopilot on commercial aeroplanes has long been tightly regulated, whereas no satisfactory national and international policies have yet been defined for the use of drones.
Spatial issues include where to regulate, whether at the national or international level. The latter is further complicated by the need to translate regulations into rules that can be implemented nationally to be fully enforceable. Undesirable consequences have the scope to cross borders, but cultural attitudes differ widely. For example, public attitudes are more accepting of genetically-modified produce in the United States than the European Union; consequently the EU has institutionalized the precautionary principle, while there is more faith in the US that a “technological fix” will be available for most challenges.45 Safeguards, regulations and governance need to combine consistency across countries with the strength to address the worldwide impacts of potential risks and the flexibility to deal with different cultural preferences.
The timing issue is that decisions need to be taken today about technologies whose future path is highly uncertain and whose consequences will become visible only in the long term. Regulate too heavily at an early stage and a technology may fail to develop; adopt a laissez-faire approach for too long, and rapid developments may have irrevocable consequences. Different kinds of regulatory oversight may be needed at different stages: when the scientific research is being conducted, when the technology is being developed, and when the technology is being applied. At the same time, the natural tendency towards short-term thinking in policy-making needs to be overcome. The physical and life sciences, notably, have longer development cycles than Internet technology and need governance regimes that take a correspondingly long-term approach. History shows that it can take a long time to reach international agreements on emerging threats – 60 years for bioweapons, 80 years for chemical weapons – so it is never too early to start discussions.46
The question of who regulates becomes significant when it is unclear where a new device fits into the allocation of responsibility across existing regulatory bodies. This is an increasingly difficult issue as innovations become more interdisciplinary and technologies converge. Examples include Google Glass, autonomous cars and M-healthcare: while all rely on Internet standards, they also have ramifications in other spheres. Often no mechanism exists for deciding which existing regulatory body, if any, should take responsibility for an emerging technology.
Striking a balance between precaution and innovation is an overall dilemma. Often potentially-beneficial innovations cannot be tested without some degree of risk. For example, a new organism may escape into the environment and cause damage. Weighing risks against benefits involves attempting to anticipate the issues of tomorrow and deciding how to allocate scarce regulatory resources among highly technical fields.
When a gap in governance exists, it may create a power vacuum that religious movements and action groups could fill, exerting more influence and potentially stifling innovation. With that risk in mind, industry players in emerging technologies where institutions are weak or non-existent may seek to respond to a governance gap by demonstrating their responsibility through self-regulation – as the “biohacker” community is attempting in synthetic biology. Another example of a private player highlighting a governance gap is the way Facebook effectively exerts regulatory power in online identity management and censorship, through policies such as requiring users to display their real names and removing images that it believes the majority of users might find offensive.
A fundamental question pertains to societal, economic and ethical implications. While emerging technologies imply the long-term possibility of a world of abundance, many countries are struggling with unemployment and underemployment, and even a temporary adjustment due to technological advancement could undermine social stability. In ethical terms, advances in transhumanism – using technology to enhance human physiology and intelligence – will require defining what people mean by human dignity: are enhanced human capabilities a basic human right, or a privilege for those who can pay, even if that exacerbates and entrenches inequalities? At the same time, governance regimes for emerging technologies are strongly influenced by the perceptions, opinions and values of society – whether people are more enthusiastic about a technology’s potential benefits than fearful of its risks. These perceptions vary greatly by domain and are not always rational or proportional, which can lead to some technologies being over-regulated and others under-regulated. Many biological technologies that touch on beliefs about religion and human life, for example, are regulated relatively stringently, as evidenced by the worldwide prohibition on human cloning.47 On the other hand, the human propensity to anthropomorphize means that robotic prototypes in some empathic form of assistive technology (such as Paro, a baby harp seal lookalike robot that assists in the care of people with dementia and other health problems) easily capture public sympathy, which may ease safety, ethical or legal concerns.48, 49 In other areas, such as lethal autonomous weapons, public support for prohibition would probably be close to unanimous, as it has been for landmines. These societal responses therefore constitute an important risk in themselves, as it is difficult to anticipate their impact on the use and path of emerging technologies.
Thoughts for the Future
Emerging technologies are developing rapidly. Their far-reaching societal, economic, environmental and geopolitical implications necessitate a debate today to chart the course for the future, reaping their many benefits while avoiding their risks. This is not a trivial task, given the many interdependencies and uncertainties and the fact that many challenges transcend the spheres of individual decision-makers, cutting across both technologies and borders. Regulators face the dilemma of designing regulatory systems that are predictable enough for companies, investors and scientists to make rational decisions, yet unambiguous enough to avoid a governance gap that could jeopardize public consent or give too much room to non-state actors. Against this backdrop, regulatory systems should be designed to evolve and adapt, flexibly taking into account changing socio-economic conditions, new scientific insights and the discovery of unknown interdependencies.
In light of the complexities and rapidly changing nature of emerging technologies, governance should be designed to facilitate dialogue among all stakeholders. For regulators, dialogue with researchers at the cutting edge of these technologies is the only way to understand the potential future implications of new and highly technical capabilities. The scientific community, within and across fields, needs a safe space in which to coalesce around a common language and discuss both benefits and risks openly. At the same time, because risks tend to cross borders, so must the dialogue on how to respond. And given the power of public opinion to shape regulatory responses, the general public must also be included in an open dialogue about the risks and opportunities of emerging technologies, through carefully-managed communication strategies. Governance will be more stable, and less likely either to overlook emerging threats or to stifle innovation unnecessarily, if the various stakeholders likely to be affected are involved in thinking about potential regulatory regimes and given the knowledge to make informed decisions.
Box 2.5: Classifying emerging technologies
In general, three broad categories of emerging technologies can be distinguished: first, those to do with information, the Internet and data transfer, which include artificial intelligence, the Internet of Things and big data; second, biological technologies, such as the genetic engineering of drought-resistant crops and biofuels, lab-grown meat, and new therapeutic techniques based on RNA,1 genomics and microbiomes; and third, chemical technologies, including those involved in making stronger materials (such as nanostructure carbon-fibre composites) and better batteries (through germanium nanowires, for example), recycling nuclear waste and mining metals from the by-products of water desalination plants.
However, any attempt to categorize emerging technologies is difficult because many new advances are interdisciplinary in nature. In particular, information technology underlies many, if not all, advances in emerging technology. A final category of cross-over technologies would include smart grids in the electricity supply industry, brain-computer interfaces and bioinformatics – the growing capacity to use technology to model and understand biology.
Box 2.6: Synthetic biology – protecting mother nature
For thousands of years, humans have been selectively breeding crops and animals. With the discovery of DNA hybridization in the early 1970s, it became possible to genetically modify existing organisms. Synthetic biology goes further: it refers to the creation of entirely new living organisms from standardized building blocks of DNA. The technology has been in development since the early 2000s, as knowledge and methods for reading, editing and designing genetics have improved, costs of DNA sequencing and synthesis have decreased, and computer modelling of proposed designs has become more sophisticated.
(see Figure 2.6.1)
In 2010 Craig Venter and his team demonstrated that a simple bacterium could be run on entirely artificially-made DNA.1 Applications of synthetic biology that are currently being developed include producing biofuel from E. coli bacteria; designer organisms that act as sensors for pollutants or explosives; optogenetics, in which nerve cells are made light-sensitive and neural signals are controlled using lasers, potentially revolutionizing the treatment of neurological disorders; 3D-printed viruses that can attack cancer;2 and gene drives as a possible solution to insect-borne diseases (as discussed in Box 2.7).
Alongside these vast potential benefits are a range of risks. Yeast has already been used to make morphine;3 it is not hard to imagine that synthetic biology may allow entirely new pathways for producing illicit drugs. The invention of cheap, synthetic alternatives to high-value agricultural exports such as vetiver could suddenly destabilize vulnerable economies by removing a source of income on which farmers rely.4 As technology to read DNA becomes more affordable and widely available, privacy concerns are raised by the possibility that someone stealing a strand of hair or other genetic material could glean medically-sensitive information or determine paternity.
The risk that most concerns analysts, however, is the possibility of a synthesized organism causing harm in nature, whether by error or terror. Living organisms are self-replicating and can be robust and invasive. The terror possibility is especially pertinent because synthetic biology is “small tech” – it does not require large, expensive facilities or easily-tracked resources. Much of its power comes from sharing information and, once a sequence has been published online, it is nearly impossible to suppress it: a “DIYbio” or “biohacker” community exists that shares inventions in synthetic biology, while the International Genetically Engineered Machine (iGEM) competition is a large international student competition in designing organisms, with a commitment to open-sourcing the biological inventions.
Conceivably, a single rogue individual might one day be able to devise a weapon of mass destruction – a virus as deadly as Ebola and as contagious as flu. What mechanisms could safeguard against such a possibility? Synthetic biology and affordable DNA-sequencing also open up the possibility of designing bespoke viruses as murder weapons: imagine a virus that spreads by causing flu-like symptoms and is programmed to cause fatal brain damage if it encounters a particular stretch of DNA found only in one individual.5
Synthetic biology is currently governed largely as just another form of genetic engineering. Regulations tend to assume large institutional stakeholders such as industries and universities, not small and medium-sized enterprises or amateurs. The governance gap is illustrated by the controversy surrounding the highly successful 2013 crowdfunding of bioluminescent plants, which exploited a legal loophole dependent on the method used to insert genes.6 The Glowing Plants project, which aims ultimately to make trees function as street lights, was able to promise to distribute 600,000 seeds with no oversight from any regulatory body, subject only to the discretion of Kickstarter. The project caused concern not only among activists against genetically-modified organisms, but also among synthetic biology enthusiasts who feared it might cause a backlash against the technology.7
Differences can already be observed between the focus of DIYbio groups in Europe and in the United States, owing to their regions’ differing regulations on genetically-modified organisms, with European enthusiasts focusing more on “bio-art”.8 The amateur synthetic biology community is acutely aware of safety issues and is pursuing bottom-up options for self-regulation in various ways, such as developing voluntary codes of practice.9 However, self-regulation has been criticized as inadequate, including by a coalition of civil society groups campaigning for strong oversight mechanisms.10 Such mechanisms would need to account for the cross-border nature of the technology and the inherent uncertainty over its future direction.11
Box 2.7: Gene drives – promises and regulatory challenges
In sexually reproducing organisms, most genes have a 50% chance of being inherited by offspring. However, natural selection has in some cases favoured certain genes that are inherited more often. For the past decade or so, researchers have been exploring how this effect could be triggered deliberately.12 The “gene drives” method “drives” a gene through a population by causing it to be preferentially inherited. The gene can then spread through a given population, whose characteristics could thus be modified by the addition, deletion, editing or even suppression of certain genes.
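A minimal numerical sketch (our illustration, not drawn from the sources for this box) shows why preferential inheritance matters so much: under ordinary Mendelian inheritance a rare, fitness-neutral gene stays rare, whereas a drive that biases transmission in heterozygotes can sweep through a population within tens of generations. The homing-efficiency parameter is a simplifying assumption; real drives also face fitness costs and resistance.

```python
# Toy deterministic allele-frequency model (illustrative assumption, not from
# the report): contrast Mendelian inheritance with a homing gene drive.

def next_frequency(p, homing=0.0):
    """One generation of random mating.

    Drive/wild heterozygotes pass the drive allele on with probability
    (1 + homing) / 2 instead of the Mendelian 1/2; homing=0 recovers
    ordinary 50% inheritance, homing=1 is a perfectly efficient drive.
    Fitness costs and resistance are ignored for simplicity.
    """
    hom = p * p            # drive homozygotes always transmit the drive
    het = 2 * p * (1 - p)  # heterozygotes transmit it preferentially
    return hom + het * (1 + homing) / 2

p_mendel = p_drive = 0.01  # release drive carriers at 1% of the population
for gen in range(1, 16):
    p_mendel = next_frequency(p_mendel, homing=0.0)
    p_drive = next_frequency(p_drive, homing=0.9)
    print(f"gen {gen:2d}  Mendelian {p_mendel:.3f}  gene drive {p_drive:.3f}")
```

Even this toy model makes the governance point: once transmission is biased, spread through the population is the default outcome, not something that requires continual assistance.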
Gene drives present an unprecedented opportunity to address some of the most devastating threats to health and the environment. Applications are foreseen in the fight against malaria and other insect-borne diseases, which the reprogramming of mosquito genomes could potentially eliminate from entire regions; in combating herbicide and pesticide resistance; and in eradicating invasive species that threaten the biodiversity of ecosystems.
Technical challenges remain, relating mainly to the difficulty of editing genomes for programming drives in a way that is precise (with only the targeted gene affected) and reversible (to prevent and overwrite possible unwanted changes). A team at Harvard University, MIT and the University of California at Berkeley is making huge progress, such that the development of purpose-built, engineered gene drives is expected in the next few years.13
However, gene drives carry potential risks to wild organisms, crops and livestock: unintentional damage could be triggered and cascade through connected ecosystems. No clear regulatory framework for gene drives currently exists. The US Food and Drug Administration would consider them veterinary medicines, requiring developers to demonstrate that they are safe for the animals to be protected. How, then, should gene drives be classified? Both the US policy on Dual Use Research of Concern, which oversees research with clear security implications, and the Australia Group Guidelines, informal multilateral guidelines on transfers of biological material, rely on lists of infectious bacterial and viral agents.14 They lack the functional approach that would be needed, for example, to regulate genetic modifications to sexually reproducing plants and animals.
Scientists and regulators need to work together from an early stage to understand the challenges, opportunities and risks associated with gene drives, and agree in advance to a governance regime that would govern research, testing and release. Acting now would allow time for research into areas of uncertainty, public discussion of security and environmental concerns, and the development and testing of safety features. Governance standards or regulatory regimes need to be developed proactively and flexibly to adapt to the fast-moving development of the science.15
Sources: Esvelt et al. 2014 and Oye et al. 2014.
Box 2.8: Artificial intelligence – rise of the machines
Artificial Intelligence (AI) is the discipline that studies how to create software and systems that behave intelligently. AI scientists build systems that can solve reasoning tasks, learn from data, make decisions and plans, play games, perceive their environments, move autonomously, manipulate objects, respond to queries expressed in human languages, translate between languages, and more.
AI has captured the public imagination for decades, especially in the form of anthropomorphized robots, and recent advances have pushed AI into popular awareness and use: IBM’s “Watson” computer beat the best human Jeopardy! players; statistical approaches have significantly improved Google’s automatic translation services and digital personal assistants such as Apple’s Siri; semi-autonomous drones monitor and strike military targets around the world; and Google’s self-driving car has driven hundreds of thousands of miles on public roads.
This represents substantial progress since the 1950s, and yet the original dream of a machine that could substitute for arbitrary human labour remains elusive. One important lesson has been that, as Hans Moravec wrote in the 1980s, “It is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility”.16
These and other challenges to AI progress are by now well known within the field, but a recent survey shows that the most-cited living AI scientists still expect human-level AI to be produced in the latter half of this century, if not sooner, followed (in a few years or decades) by substantially smarter-than-human AI.17 If they are right, such an advance would likely transform nearly every sector of human activity.
If this technological transition is handled well, it could lead to enormously higher productivity and standards of living. On the other hand, if the transition is mishandled, the consequences could be catastrophic.18 How might the transition be mishandled? Contrary to public perception and Hollywood screenplays, it does not seem likely that advanced AI will suddenly become conscious and malicious. Instead, according to a co-author of the world’s leading AI textbook, Stuart Russell of the University of California, Berkeley, the core problem is one of aligning AI goals with human goals. If smarter-than-human AIs are built with goal specifications that subtly differ from what their inventors intended, it is not clear that it will be possible to stop those AIs from using all available resources to pursue those goals, any more than chimpanzees can stop humans from doing what they want.19
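A toy illustration (our own sketch, not from the report or its sources) of Russell’s point: when the reward a system optimizes is only a proxy for what its designers intend, a competent optimizer will exploit the gap between the two. Here the intended goal is a clean room, but the written reward only pays for a clean sensor reading; all names and numbers are hypothetical.

```python
# Illustrative sketch (hypothetical example): an optimizer pursues the literal
# reward specification rather than the designers' intent.

# Each action: (dust_remaining, sensor_reads_clean, effort_cost)
actions = {
    "vacuum_room":  (False, True,  5.0),  # actually cleans, at some effort
    "cover_sensor": (True,  True,  1.0),  # fools the sensor cheaply
    "do_nothing":   (True,  False, 0.0),
}

def specified_reward(dust_remaining, sensor_reads_clean, effort_cost):
    """Reward as actually written: pay for a clean sensor reading, minus effort.
    Note it never looks at dust_remaining - that is the specification gap."""
    return (10.0 if sensor_reads_clean else 0.0) - effort_cost

def intended_reward(dust_remaining, sensor_reads_clean, effort_cost):
    """Reward the designers meant: pay for an actually clean room, minus effort."""
    return (10.0 if not dust_remaining else 0.0) - effort_cost

best_specified = max(actions, key=lambda a: specified_reward(*actions[a]))
best_intended = max(actions, key=lambda a: intended_reward(*actions[a]))
print("optimizing the written reward chooses:", best_specified)  # cover_sensor
print("optimizing the intended reward chooses:", best_intended)  # vacuum_room
```

The failure mode here is not malice but literal-minded optimization, and the more capable the optimizer, the more reliably it finds such gaps.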
In the nearer term, however, numerous other social challenges need to be addressed. In the next few decades, AI is anticipated to partially or fully substitute for human labour in many occupations, and it is not clear whether human workers can be retrained quickly enough to maintain high levels of employment.20 What is more, while previous waves of technology have also created new kinds of jobs, this time structural unemployment may be permanent, as AI could be better than humans at performing the new jobs it creates. This may require a complete restructuring of the economy, raising fundamental questions about the nature of economic transactions and what it is that humans can do for each other.

Autonomous vehicles and other cases of human-robot interaction demand legal solutions fit for the novel combination of automatic decision-making with a capacity for physical harm.21 Autonomous vehicles will encounter situations in which they must weigh the risks of injury to passengers against the risks to pedestrians; what will the legal redress be for parties who believe the vehicle decided wrongly?

Several nations are working towards the development of lethal autonomous weapons systems that can assess information, choose targets and open fire without human intervention. Such developments raise new challenges for international law and the protection of non-combatants.22 Who will be accountable if they violate international law? The Geneva Conventions are unclear. It is also unclear where human intervention ends: humans will be involved in programming autonomous weapons, but does human control of a weapon cease at the moment of deployment?

Finally, AI in finance and other domains has introduced risks arising from the fact that AI programmes can make millions of economically significant decisions before a human can notice and react – leading, for example, to the August 2012 trading event that nearly bankrupted Knight Capital.23, 24
In short, proactive and future-oriented work in many fields is needed to counteract “the tendency of technological advance to outpace the social control of technology”.25