Part 2: Risks in Focus

2.4 Engineering the Future: How Can the Risks and Rewards of Emerging Technologies Be Balanced?


From networked medical devices to the Internet of Things, from drought-resistant crops to bionic prosthetics, emerging technologies promise to revolutionize a wide range of sectors and transform traditional relationships.39 Their impacts will range from the economic to the societal, cultural, environmental and geopolitical.

Emerging technologies hold unprecedented opportunities. Three examples are explored in detail in boxes in this section:

  • Synthetic biology could create bacteria that turn biomass into diesel (Box 2.6).
  • Gene drives could assist in the eradication of insect-borne diseases such as malaria (Box 2.7).
  • Artificial intelligence is behind advances from self-driving cars to personal care robots (Box 2.8).

Discoveries are proceeding quickly in the laboratory, and once technologies demonstrate their usefulness in the real world, they attract significantly more investment and develop at an even greater pace.

However, how emerging technologies evolve is highly uncertain. Their potential second- or third-order effects cannot easily be anticipated, such that designing safeguards against them is difficult. Even if the ramifications of technologies could be foreseen as they emerge, the trade-offs would still need to be considered. Would the large-scale use of fossil fuels for industrial development have proceeded had it been clear in advance that it would lift many out of poverty but introduce the legacy of climate change? Would the Haber-Bosch process have been sanctioned had it been evident it would dramatically increase agricultural food production but adversely impact biodiversity?40 A range of currently emerging technologies could have similar or even more profound implications for mankind’s future. Survey respondents highlighted technological risks as highly connected to man-made environmental catastrophes.

Emerging technology is a broad and loose term (see Box 2.5), and debate about potential risks and benefits is more vigorous in some areas than in others. In the examples that follow, the focus is on technologies that are considered to have wide benefits and for which there is strong pressure for development, as well as high levels of concern about potential risks and safeguards.

Causes for Concern

Risks of undesirable impacts of emerging technologies can be divided into two categories: the foreseen and the unforeseen. Examples of foreseen risks include the leakage of dangerous substances through failures of containment (as sometimes occurs with trials of genetically-modified crops) or storage errors (as with the 2014 security failures in US disease-control labs handling lethal viruses);41 the theft or illegal sale of emerging technologies; computer viruses; hacker attacks on human implants;42 and chemical or biological warfare. The establishment of new fundamental capabilities, as is happening for example with synthetic biology and artificial intelligence, is especially associated with risks that cannot be fully assessed in the laboratory. Once the genie is out of the bottle, undesirable applications or effects may emerge that could not be anticipated at the time of invention. Some of these risks could be existential – that is, endangering the future of human life (see Boxes 2.6 to 2.8).43

Both foreseen and unforeseen risks are amplified by the accelerating speed and complexity of technological development. Exponential growth in computing power implies the potential for a tipping point that could significantly amplify risks, while hyperconnectivity allows new ideas and capabilities to be distributed more quickly around the world. The growing complexity of new technologies, combined with a lack of scientific knowledge about their future evolution and often a lack of transparency, makes them harder for both individuals and regulatory bodies to understand.

Safeguards and Challenges

As illustrated by the boxes on synthetic biology, gene drives and artificial intelligence, designing governance regimes that could mitigate the risks of abuse of emerging technologies – from formal regulations through private codes of practice to cultural norms – is a fundamental challenge, with the following main aspects.44

The current regulatory framework is insufficient. Regulations are comprehensive in some areas of emerging technology yet weak or non-existent in others, even when the areas are conceptually similar. Consider two kinds of self-flying aircraft: the use of autopilot on commercial aeroplanes has long been tightly regulated, whereas no satisfactory national or international policies have yet been defined for the use of drones.

The spatial issue is where to regulate: at the national or the international level. The latter is further complicated by the need to translate international regulations into nationally implementable rules before they can be fully enforced. Undesirable consequences can cross borders, but cultural attitudes differ widely. For example, public attitudes are more accepting of genetically-modified produce in the United States than in the European Union; consequently the EU has institutionalized the precautionary principle, while there is more faith in the US that a “technological fix” will be available for most challenges.45 Safeguards, regulations and governance therefore need to combine consistency across countries with the strength to address the worldwide impacts of potential risks and the flexibility to accommodate different cultural preferences.

The timing issue is that decisions must be taken today about technologies whose future path is highly uncertain and whose consequences will be visible only in the long term. Regulate too heavily at an early stage and a technology may fail to develop; adopt a laissez-faire approach for too long, and rapid developments may have irrevocable consequences. Different kinds of regulatory oversight may be needed at different stages: when the scientific research is being conducted, when the technology is being developed, and when the technology is being applied. At the same time, the natural tendency towards short-term thinking in policy-making needs to be overcome: the physical and life sciences, notably, have longer cycles of development than Internet technology and need governance regimes that take a long-term approach. History shows that it can take a long time to reach international agreements on emerging threats – 60 years for bioweapons, 80 years for chemical weapons – so it is never too early to start discussions.46

The question of who regulates becomes significant when it is unclear where a new device fits into the allocation of responsibility across existing regulatory bodies. This is an increasingly difficult issue as innovations become more interdisciplinary and technologies converge. Examples include Google Glass, autonomous cars and mobile healthcare (m-health): while all rely on Internet standards, they also have ramifications in other spheres. Often no mechanism exists for deciding which existing regulatory body, if any, should take responsibility for an emerging technology.

Striking a balance between precaution and innovation is the overall dilemma. Potentially beneficial innovations often cannot be tested without some degree of risk – a new organism may escape into the environment and cause damage, for example. Weighing risks against benefits involves attempting to anticipate the issues of tomorrow and deciding how to allocate scarce regulatory resources among highly technical fields.

When a gap in governance exists, it may create a vacuum of power that religious movements and action groups could fill, exerting more influence and potentially stifling innovation. With that risk in mind, industry players in emerging technologies where institutions are weak or non-existent may respond to a governance gap by demonstrating their responsibility through self-regulation – as the “biohacker” community is attempting in synthetic biology. Another example of a private player filling a governance gap is the way Facebook effectively exerts regulatory power in online identity management and censorship, through policies such as requiring users to display their real names and removing images that it believes the majority of users might find offensive.

A fundamental question pertains to societal, economic and ethical implications. While emerging technologies imply the long-term possibility of a world of abundance, many countries are struggling with unemployment and underemployment, and even a temporary adjustment to technological advancement could undermine social stability. In ethical terms, advances in transhumanism – the use of technology to enhance human physiology and intelligence – will require defining what people mean by human dignity: are enhanced human capabilities a basic human right, or a privilege for those who can pay, even if that exacerbates and entrenches inequalities?

At the same time, governance regimes for emerging technologies are strongly influenced by the perceptions, opinions and values of society – whether people are more enthusiastic about a technology’s potential benefits than fearful of its risks. These attitudes are highly domain-specific, and not always rational or proportional, which can lead to some technologies being over-regulated and others under-regulated. Many biological technologies that touch on beliefs about religion and human life, for example, are regulated relatively stringently, as evidenced by the worldwide prohibition on human cloning.47 On the other hand, the human propensity to anthropomorphize means that robotic prototypes in some empathic form of assistive technology (such as Paro, a baby harp seal lookalike robot assisting in the care of people with dementia and other health problems) easily capture public sympathy, which may ease safety, ethical or legal concerns.48,49 In other areas, such as lethal autonomous weapons, it would probably be easier to reach near-unanimous public support for prohibition, as has been the case for landmines. These societal implications thus constitute an important risk in themselves, as their impact on the use and path of emerging technologies is difficult to anticipate.

Thoughts for the Future 

Emerging technologies are developing rapidly. Their far-reaching societal, economic, environmental and geopolitical implications necessitate a debate today to chart the course for the future, reaping the many benefits of emerging technologies while avoiding their risks. This is not a trivial task, given the many interdependencies and uncertainties and the fact that many challenges cut across technologies and borders, transcending the spheres of individual decision-makers. Regulators face a dilemma: regulatory systems must be predictable enough for companies, investors and scientists to make rational decisions, yet unambiguous enough to avoid a governance gap that could jeopardize public consent or cede too much ground to non-state actors. Against this backdrop, regulatory systems should be designed to evolve and adapt, taking into account changing socio-economic conditions, new scientific insights and the discovery of unknown interdependencies.

In light of the complexity and rapidly changing nature of emerging technologies, governance should be designed to facilitate dialogue among all stakeholders. For regulators, dialogue with researchers at the cutting edge of these technologies is the only way to understand the potential future implications of new and highly technical capabilities. The scientific community, within and across fields, needs a safe space in which to coalesce around a common language and discuss openly both benefits and risks. Given that risks tend to cross borders, so must the dialogue on how to respond. And given the power of public opinion to shape regulatory responses, the general public must also be included, through carefully managed communication strategies, in an open dialogue about the risks and opportunities of emerging technologies. Governance will be more stable, and less likely either to overlook emerging threats or to stifle innovation unnecessarily, if the various stakeholders likely to be affected are involved in the thinking about potential regulatory regimes and given the knowledge to make informed decisions.


Box 2.5: Classifying emerging technologies

In general, three broad categories of emerging technologies can be distinguished. The first comprises technologies to do with information, the Internet and data transfer, including artificial intelligence, the Internet of Things and big data. The second comprises biological technologies, such as the genetic engineering of drought-resistant crops and biofuels, lab-grown meat, and new therapeutic techniques based on RNAi, genomics and microbiomes. The third comprises chemical technologies: those involved in making stronger materials (such as nanostructured carbon-fibre composites) and better batteries (through germanium nanowires, for example), recycling nuclear waste, and mining metals from the by-products of water desalination plants.

However, any attempt to categorize emerging technologies is difficult because many new advances are interdisciplinary in nature. In particular, information technology underlies many, if not all, advances in emerging technology. A final category of cross-over technologies would include smart grids in the electricity supply industry, brain-computer interfaces and bioinformatics – the growing capacity to use technology to model and understand biology.



Box 2.6: Synthetic biology – protecting mother nature

For thousands of years, humans have been selectively breeding crops and animals. With the discovery of DNA hybridization in the early 1970s, it became possible to genetically modify existing organisms. Synthetic biology goes further: it refers to the creation of entirely new living organisms from standardized building blocks of DNA. The technology has been in development since the early 2000s, as knowledge and methods for reading, editing and designing genetic material have improved, costs of DNA sequencing and synthesis have decreased, and computer modelling of proposed designs has become more sophisticated (see Figure 2.6.1).
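
To make the idea of building from standardized DNA parts concrete, the sketch below assembles a toy construct from named components and screens it for the four restriction sites that the BioBrick assembly standard reserves. It is a minimal illustration only: the part names and sequences are invented placeholders, not entries from any real parts registry.

```python
# Minimal sketch of design-by-composition in synthetic biology. All part
# names and sequences below are invented placeholders; real parts come
# from collections such as the iGEM parts registry.
FORBIDDEN_SITES = {   # recognition sites reserved by BioBrick assembly
    "EcoRI": "GAATTC",
    "XbaI":  "TCTAGA",
    "SpeI":  "ACTAGT",
    "PstI":  "CTGCAG",
}

PARTS = {             # toy "standardized building blocks" of DNA
    "promoter":   "TTGACAATTAATCATCGGCTCG",
    "rbs":        "AAAGAGGAGAAA",
    "cds":        "ATGGCTAGCAAAGGAGAAGAA",
    "terminator": "CCAGGCATCAAATAAAACGAAAGG",
}

def assemble(order):
    """Concatenate named parts, in order, into one candidate construct."""
    return "".join(PARTS[name] for name in order)

def screen(seq):
    """List any reserved restriction sites found in a design."""
    return [enzyme for enzyme, site in FORBIDDEN_SITES.items() if site in seq]

design = assemble(["promoter", "rbs", "cds", "terminator"])
gc = 100 * sum(base in "GC" for base in design) / len(design)
print(f"{len(design)} bp, GC {gc:.0f}%, reserved sites: {screen(design) or 'none'}")
```

Real design tools perform far richer checks (codon optimization, secondary structure, biosafety screening), but the compose-then-screen loop sketched here is the basic pattern.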

In 2010 Craig Venter and his team demonstrated that a simple bacterium could be run on entirely artificially-made DNA.1 Applications of synthetic biology that are currently being developed include producing biofuel from E. coli bacteria; designer organisms that act as sensors for pollutants or explosives; optogenetics, in which nerve cells are made light-sensitive and neural signals are controlled using lasers, potentially revolutionizing the treatment of neurological disorders; 3D-printed viruses that can attack cancer;2 and gene drives as a possible solution to insect-borne diseases (as discussed in Box 2.7).

Alongside these vast potential benefits are a range of risks. Yeast has already been used to make morphine;3 it is not hard to imagine that synthetic biology may allow entirely new pathways for producing illicit drugs. The invention of cheap, synthetic alternatives to high-value agricultural exports such as vetiver could suddenly destabilize vulnerable economies by removing a source of income on which farmers rely.4 As technology to read DNA becomes more affordable and widely available, privacy concerns are raised by the possibility that someone stealing a strand of hair or other genetic material could glean medically-sensitive information or determine paternity.

The risk that most concerns analysts, however, is the possibility of a synthesized organism causing harm in nature, whether by error or by terror. Living organisms are self-replicating and can be robust and invasive. The terror possibility is especially pertinent because synthetic biology is “small tech” – it does not require large, expensive facilities or easily tracked resources. Much of its power comes from sharing information and, once a sequence has been published online, it is nearly impossible to suppress: a “DIYbio” or “biohacker” community exists that shares inventions in synthetic biology, while the International Genetically Engineered Machine (iGEM) competition is a large international student competition in designing organisms, with a commitment to open-sourcing the resulting biological inventions.

Conceivably, a single rogue individual might one day be able to devise a weapon of mass destruction – a virus as deadly as Ebola and as contagious as flu. Synthetic biology and affordable DNA sequencing also open up the possibility of designing bespoke viruses as murder weapons: imagine a virus that spreads by causing flu-like symptoms and is programmed to cause fatal brain damage if it encounters a particular stretch of DNA found only in one individual.5 What mechanisms could safeguard against such possibilities?

Synthetic biology is currently governed largely as just another form of genetic engineering. Regulations tend to assume large institutional stakeholders such as industries and universities, not small and medium-sized enterprises or amateurs. The governance gap is illustrated by the controversy surrounding the highly successful 2013 crowdfunding of bioluminescent plants, which exploited a legal loophole dependent on the method used to insert genes.6 The Glowing Plants project, which aims ultimately to make trees function as street lights, was able to promise to distribute 600,000 seeds with no oversight from any regulatory body beyond the discretion of Kickstarter. The project caused concern not only among activists against genetically-modified organisms, but also among synthetic biology enthusiasts who feared it might cause a backlash against the technology.7

Differences can already be observed between the focus of DIYbio groups in Europe and the United States, owing to the differing regulations on genetically-modified organisms in the two regions, with European enthusiasts focusing more on “bio-art”.8 The amateur synthetic biology community is very aware of safety issues and is pursuing bottom-up options for self-regulation in various ways, such as developing voluntary codes of practice.9 However, self-regulation has been criticized as inadequate, including by a coalition of civil society groups campaigning for strong oversight mechanisms.10 Such mechanisms would need to account for the cross-border nature of the technology and the inherent uncertainty over its future direction.11



Box 2.7: Gene drives – promises and regulatory challenges

In sexually reproducing organisms, most genes have a 50% chance of being inherited by offspring. In some cases, however, natural selection has favoured genes that are inherited more often. For the past decade or so, research has explored how this bias could be triggered deliberately.12 A “gene drive” drives a gene through a population by causing it to be preferentially inherited. The gene can then spread through a given population, whose characteristics could thus be modified by the addition, deletion, editing or even suppression of certain genes.
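
The effect of biased inheritance can be shown with a minimal population-genetics sketch (assuming random mating and a single transmission-bias parameter, not any particular drive design): at the Mendelian transmission rate of 50% an allele’s frequency does not move, whereas an allele transmitted by 95% of heterozygote carriers sweeps towards fixation within a few generations.

```python
def next_freq(p, t):
    """Drive-allele frequency after one generation of random mating.

    p: current frequency of the drive allele in the population
    t: probability that a heterozygote transmits the drive allele
       (0.5 = ordinary Mendelian inheritance; ~1.0 = an efficient drive)
    """
    # Homozygous carriers (frequency p^2) always transmit the allele;
    # heterozygotes (frequency 2p(1-p)) transmit it with probability t.
    return p**2 + 2 * p * (1 - p) * t

for t in (0.5, 0.95):
    p = 0.01                           # release carriers at 1% frequency
    trajectory = [p]
    for _ in range(15):                # follow 15 generations
        p = next_freq(p, t)
        trajectory.append(p)
    print(f"t={t}: " + " ".join(f"{x:.2f}" for x in trajectory))
# t=0.5 leaves the frequency unchanged at 0.01; t=0.95 sweeps towards 1.
```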

Gene drives present an unprecedented opportunity to counter some of the most devastating risks to health and the environment. Applications are foreseen in the fight against malaria and other insect-borne diseases, which the reprogramming of mosquito genomes could potentially eliminate from entire regions; in combating herbicide and pesticide resistance; and in eradicating invasive species that threaten the biodiversity of ecosystems.

Technical challenges remain, relating mainly to the difficulty of editing genomes to programme drives in a way that is precise (affecting only the targeted gene) and reversible (to prevent and overwrite possible unwanted changes). A team spanning Harvard University, MIT and the University of California, Berkeley is making rapid progress, such that purpose-built, engineered gene drives are expected within the next few years.13

However, gene drives carry potential risks to wild organisms, crops and livestock: unintended changes could be triggered and cascade through connected ecosystems. No clear regulatory framework for gene drives currently exists, and how they are defined matters. The US Food and Drug Administration would treat them as veterinary medicines, requiring developers to demonstrate they are safe for the animals to be protected. Both the US policy on Dual Use Research of Concern, which oversees research with clear security implications, and the Australia Group Guidelines, a form of private regulation of transfers of biological material, rely on lists of infectious bacterial and viral agents.14 Neither takes the functional approach that would be needed, for example, to regulate genetic modifications to sexually reproducing plants and animals.

Scientists and regulators need to work together from an early stage to understand the challenges, opportunities and risks associated with gene drives, and to agree in advance on a regime to govern research, testing and release. Acting now would allow time for research into areas of uncertainty, public discussion of security and environmental concerns, and the development and testing of safety features. Governance standards or regulatory regimes need to be developed proactively, and flexibly enough to adapt to the fast-moving development of the science.15

Sources: Esvelt et al. 2014 and Oye et al. 2014.



Box 2.8: Artificial intelligence – rise of the machines

Artificial Intelligence (AI) is the discipline that studies how to create software and systems that behave intelligently. AI scientists build systems that can solve reasoning tasks, learn from data, make decisions and plans, play games, perceive their environments, move autonomously, manipulate objects, respond to queries expressed in human languages, translate between languages, and more.

AI has captured the public imagination for decades, especially in the form of anthropomorphized robots, and recent advances have pushed AI into popular awareness and use: IBM’s “Watson” computer beat the best human Jeopardy! players; statistical approaches have significantly improved Google’s automatic translation services and digital personal assistants such as Apple’s Siri; semi-autonomous drones monitor and strike military targets around the world; and Google’s self-driving car has driven hundreds of thousands of miles on public roads.

This represents substantial progress since the 1950s, and yet the original dream of a machine that could substitute for arbitrary human labour remains elusive. One important lesson has been that, as Hans Moravec wrote in the 1980s, “It is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility”.16

These and other challenges to AI progress are by now well known within the field, but a recent survey shows that the most-cited living AI scientists still expect human-level AI to be produced in the latter half of this century, if not sooner, followed (in a few years or decades) by substantially smarter-than-human AI.17 If they are right, such an advance would likely transform nearly every sector of human activity.

If this technological transition is handled well, it could lead to enormously higher productivity and standards of living. On the other hand, if the transition is mishandled, the consequences could be catastrophic.18 How might the transition be mishandled? Contrary to public perception and Hollywood screenplays, it does not seem likely that advanced AI will suddenly become conscious and malicious. Instead, according to a co-author of the world’s leading AI textbook, Stuart Russell of the University of California, Berkeley, the core problem is one of aligning AI goals with human goals. If smarter-than-human AIs are built with goal specifications that subtly differ from what their inventors intended, it is not clear that it will be possible to stop those AIs from using all available resources to pursue those goals, any more than chimpanzees can stop humans from doing what they want.19
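
A toy sketch (an invented example, not drawn from Russell’s work) makes the alignment point concrete: the agent below optimizes its stated reward perfectly, yet “misbehaves” because the designer’s specification omitted part of what was actually wanted. The bug is in the objective, not in the optimizer.

```python
# Hypothetical toy example (not from the report): a cleaning robot picks
# whichever action maximizes its reward signal. The designer *meant*
# "clean the room without breaking anything" but *specified* only
# "maximize dust removed".
OUTCOMES = {
    # action                    (dust removed, damage caused)
    "vacuum carefully":         (0.9, 0.0),
    "vacuum over the shelves":  (1.0, 0.6),   # knocks ornaments off
    "do nothing":               (0.0, 0.0),
}

def proxy_reward(action):
    """The reward signal the agent was actually given."""
    dust, _damage = OUTCOMES[action]
    return dust

def true_utility(action):
    """What the designer really wanted maximized."""
    dust, damage = OUTCOMES[action]
    return dust - damage

print("agent chooses:  ", max(OUTCOMES, key=proxy_reward))   # vacuum over the shelves
print("designer wanted:", max(OUTCOMES, key=true_utility))   # vacuum carefully
```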

In the nearer term, however, numerous other social challenges need to be addressed. In the next few decades, AI is anticipated to substitute partially or fully for human labour in many occupations, and it is not clear whether human workers can be retrained quickly enough to maintain high levels of employment.20 What is more, while previous waves of technology also created new kinds of jobs, this time structural unemployment may be permanent, as AI could outperform humans at the new jobs it creates. This may require a complete restructuring of the economy, raising fundamental questions about the nature of economic transactions and what it is that humans can do for each other.

Autonomous vehicles and other cases of human-robot interaction demand legal solutions fit for the novel combination of automatic decision-making with a capacity for physical harm.21 Autonomous vehicles will encounter situations where they must weigh the risk of injury to passengers against the risk to pedestrians; what will the legal redress be for parties who believe the vehicle decided wrongly? Several nations are working towards lethal autonomous weapons systems that can assess information, choose targets and open fire without human intervention. Such developments raise new challenges for international law and the protection of non-combatants.22 Who will be accountable if these systems violate international law? The Geneva Conventions are unclear. Nor is it clear where human intervention ends: humans will be involved in programming autonomous weapons, and the question is whether human control ceases at the moment of deployment. Finally, AI in finance and other domains has introduced risks stemming from the fact that AI programmes can make millions of economically significant decisions before a human can notice and react – leading, for example, to the August 2012 trading error that nearly bankrupted Knight Capital.23,24

In short, proactive and future-oriented work in many fields is needed to counteract “the tendency of technological advance to outpace the social control of technology”.25


39 Emerging technologies are defined as contemporary advances and innovation in various fields of technology. See “Emerging Technologies: From Hindsight to Foresight” (page 3); http://www.ubcpress.ca/books/pdf/chapters/2009/emergingtechnologies.pdf.
40 Synthetic nitrogenous fertilizers now provide over half of the nutrients received by the world’s crops: see http://www.engineeringchallenges.org/cms/8996/9132.aspx. See also the Nature Education article “The Nitrogen Cycle: Processes, Players, and Human Impact”; http://www.nature.com/scitable/knowledge/library/the-nitrogen-cycle-processes-players-and-human-15644632.
41 See The Guardian article “From anthrax to bird flu – the dangers of lax security in disease-control labs”; http://www.theguardian.com/world/2014/jul/18/anthrax-bird-flu-dangers-lax-security-disease-control-labs.
42 See the University of Reading archive article “Attacking human implants: a new generation of cybercrime”; http://centaur.reading.ac.uk/35672/.
43 See Nick Bostrom’s article “Superintelligence: Answer to the 2009 EDGE QUESTION; WHAT WILL CHANGE EVERYTHING?”; http://www.nickbostrom.com/views/superintelligence.pdf.
44 Governance regime is defined here as the set of actors and processes that together determine how rules are made and applied. This includes regulation (such as top-down laws and regulatory frameworks made by governments or regulatory authorities) but also other approaches (such as private regulation, codes, standards and even practices determined by culture and history).
45 Wallach, 2011.
46 Based on an interview with University of California, Berkeley professor Stuart Russell, conducted by the Risks team on 12 November 2014.
47 Wallach, 2011.
48 See The Economist article “Seal of approval: A robot around the house doesn’t just have to be handy. It has to be likeable too”; http://www.economist.com/news/special-report/21599528-robot-around-house-doesnt-just-have-be-handy-it-has-be-likeable-too-seal.
49 Wallach, 2011.

Box notes:
1 Gibson, D. et al., 2010. “Creation of a bacterial cell controlled by a chemically synthesized genome”. Science 329 (5987): 52–56.
2 See the 3dprint.com article “Autodesk Genetic Engineer is Able to 3D Print Viruses, Soon to Attack Cancer Cells”; http://3dprint.com/19594/3d-printed-virus-fights-cancer/.
3 See the Scientific American article “Yeast Coaxed to Make Morphine”; http://www.scientificamerican.com/podcast/episode/yeast-coaxed-to-make-morphine/.
4 See the Inter Press Service News Agency article “Synthetic Biology Could Open a Whole New Can of Worms”; http://www.ipsnews.net/2014/10/synthetic-biology-could-open-a-whole-new-can-of-worms/.
5 See The Atlantic article “Hacking the President’s DNA”; http://www.theatlantic.com/magazine/archive/2012/11/hacking-the-presidents-dna/309147/.
6 See A. Evans’ Kickstarter project “Glowing Plants: Natural Lighting with no Electricity”; https://www.kickstarter.com/projects/antonyevans/glowing-plants-natural-lighting-with-no-electricit, and the Scientific American article “Glowing Plants: Crowdsourced Genetic Engineering Project Ignites Controversy”; http://www.scientificamerican.com/article/glowing-plants-controversy-questions-and-answers/.
7 See the Crowdfund Insider article “Kickstarter Bans GMOs In Wake Of Glowing Plant Campaign”; http://www.crowdfundinsider.com/2013/08/20031-kickstarter-bans-gmos-in-wake-of-glowing-plant-fiasco/.
8 See the NCBI article “European do-it-yourself (DIY) biology: Beyond the hope, hype and horror”; http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4158858/.
9 See for example the BioScience article “Biosafety Considerations of Synthetic Biology in the International Genetically Engineered Machine (iGEM) Competition”; http://www.biofaction.com/wp-content/uploads/2012/04/igem-biosafety-2013.pdf; “A Biopunk Manifesto”; https://maradydd.livejournal.com/496085.html; and the DIYbio codes; http://diybio.org/codes/.
10 See “The Principles for the Oversight of Synthetic Biology”; http://www.biosafety-info.net/file_dir/15148916274f6071c0e12ea.pdf.
11 Zhang, J.Y. et al., 2011.
12 Begun in particular by Prof. Austin Burt, Imperial College London.
13 CRISPR-Cas9 is a genome-editing tool that enables an organism’s DNA to be rewritten, accelerating the development of gene drives.
14 See Australia Group, “Guidelines for Transfers of Sensitive Chemical or Biological Items” (June 2012); www.australia-group.net/en/guidelines.html.
15 Adapted from an interview with Kenneth Oye, MIT: “3 Questions: Kenneth Oye on the regulation of genetic engineering”; http://newsoffice.mit.edu/2014/3-questions-kenneth-oye-regulation-genetic-engineering-0717.
16 Moravec, 1988, p. 15.
17 Müller and Bostrom, 2014.
18 Bostrom, 2014.
19 Omohundro, 2008.
20 Brynjolfsson and McAfee, 2014.
21 Calo, 2014; http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2402972.
22 Human Rights Watch, 2012.
23 Johnson et al., 2013.
24 See the Reuters article “Error by Knight Capital rips through stock market”; http://www.reuters.com/article/2012/08/01/us-usa-nyse-tradinghalts-idUSBRE8701BN20120801.
25 Posner, 2004, p. 20.