Digital Wildfires in a Hyperconnected World
The global risk of massive digital misinformation sits at the centre of a constellation of technological and geopolitical risks ranging from terrorism to cyber attacks and the failure of global governance (see Figure 11). This risk case examines how hyperconnectivity could enable “digital wildfires” to wreak havoc in the real world. It considers the challenge presented by the misuse of an open and easily accessible system, and the greater danger of misguided attempts to prevent such outcomes.
In 1938, when radio had become widespread, thousands of Americans confused an adaptation of the H.G. Wells novel The War of the Worlds with a news broadcast and jammed police station telephone lines in the panicked belief that the United States had been invaded by Martians.
It is difficult to imagine a radio broadcast causing comparably widespread misunderstanding today. In part this is because broadcasters have learned to be more cautious and responsible, in part because the media is a regulated industry, and in part because listeners have learned to be more savvy and sceptical. Moreover, the news industry itself is undergoing a transformation as the Internet offers multiple options to confirm or refute a breaking news story. But the Internet, like radio in 1938, is a relatively young medium. The notion that a tweet, blog or video posting could drive a similar public panic today is not at all far-fetched.
The Internet remains an uncharted, fast-evolving territory. Current generations are able to communicate and share information instantaneously and on a larger scale than ever before. Social media increasingly allows information to spread around the world at breakneck speed. While the benefits of this are obvious and well documented, our hyperconnected world could also enable the rapid viral spread of information that is either intentionally or unintentionally misleading or provocative, with serious consequences. Despite our greater media sophistication, the chances of this happening are far greater today than when radio emerged as a disruptive technology: radio was a “one to many” communication channel, whereas the Internet is “many to many”.
The Internet does have self-correcting mechanisms, as Wikipedia demonstrates. While anyone can upload false information, a community of Wikipedia volunteers usually finds and corrects errors speedily. The short-lived existence of false information on its site is generally unlikely to result in severe real-world consequences; however, it is conceivable that a false rumour spreading virally through social networks could have a devastating impact before being effectively corrected. It is just as conceivable that the offending content’s original author might not even be aware of its misuse or misrepresentation by others on the Internet, or that the damage was triggered by an error in translation from one language to another. We can think of such a scenario as an example of a digital wildfire.
How might digital wildfires be prevented? Legal restrictions on online anonymity and freedom of speech are a possible route, but one which may also have undesirable consequences. And what if the source of a digital wildfire is a nation state or an international institution? Ultimately, generators and consumers of social media will need to evolve an ethos of responsibility and healthy scepticism similar to that which evolved among radio broadcasters and listeners since the infamous War of the Worlds broadcast in 1938. This risk case asks if explicitly recognizing the potential problem and drawing attention to possible solutions could facilitate and expedite the evolution of such an ethos.
Benefits and Risks of Social Media
From cuneiform to the printing press, it has always been hard to predict the ways in which new communication technologies will shape society. The scale and speed of information creation and transfer in today’s hyperconnected world are, however, historically unparalleled. Facebook has reached more than 1 billion active users in less than a decade of existence, while Twitter has attracted over 500 million active users in seven years. Sina-Weibo, China’s dominant micro-blogging platform, passed 400 million active accounts in summer 2012.1 Every minute, 48 hours’ worth of content is uploaded to YouTube. The world of social media is multicultural and young. Figure 12 shows the preferences across the world for different social networking platforms, and Figure 13 illustrates the trends of social media use by age group in the United States.
This phenomenon has many transformative effects. Studies of Twitter and Facebook activity in Egypt and Tunisia leave no doubt about the role social media played in facilitating the Arab Spring.23 The social networking site Patientslikeme.com connects individuals with others who have the same conditions and is helping to expedite the development of new treatments. Analysis of Twitter messages and networks has successfully predicted election results,4 movie box office success5 and consumer reactions to specific brands, among other things.67
However, some individuals and organizations have suffered losses due to the capacity for information to spread virally and globally through social media. Some examples:
- When a musician travelling on United Airlines had his claim for damages denied on a guitar that baggage handlers had allegedly broken, he wrote and performed a song – “United Breaks Guitars” – and uploaded it to YouTube, where it has been viewed more than 12 million times. As the video went viral, United Airlines stock dropped by about 10%, costing shareholders about US$ 180 million.89
- In November 2012, the BBC broadcast an allegation that a senior politician had been involved in child abuse, which turned out to be a case of mistaken identity on the part of the victim. Although the BBC did not name the politician, his identity was easily discovered on Twitter, where he was named in about 10,000 tweets or re-tweets.10 In addition to pursuing legal action against the people who spread the false allegation on Twitter, the injured politician settled with the BBC for £185,000 in damages.11
- A video entitled “Innocence of Muslims”, uploaded to YouTube by a private individual in the United States, sparked riots across the Middle East that are estimated to have claimed more than 50 lives.12
These are very different cases – a humorous response from a disgruntled customer, a defamation of character and an affront to religious sensitivities. What unites them is that hyperconnectivity amplified their impacts to a degree that would have been unthinkable in a pre-Internet age, when only a small number of large organizations had the capacity to broadcast information widely. This new reality has some challenging implications.
When Digital Wildfires Are Most Dangerous
As Hurricane Sandy battered New York in October 2012, an anonymous Twitter user tweeted that the New York Stock Exchange trading floor was flooded under three feet of water. Other Twitter users quickly corrected the false rumour, though not before it was reported on CNN.13 In Mexico, there have been cases of mothers needlessly keeping their children from school and shops closing because of false rumours of shootouts spreading through social networks.14 In the UK, video imagery of a low-level tactical incident involving the British Army in Basra, spread through the Reuters agency feed, YouTube and Blinkx, left a misleading impression of a significant military failure among the British public that was never fully dispelled.15
These cases indicate one of the two situations in which digital wildfires are most dangerous: in situations of high tension, when false information or inaccurately presented imagery can cause damage before it is possible to propagate accurate information. The real-world equivalent is shouting “fire!” in a crowded theatre – even if it takes only a minute or two for realization to spread that there is no fire, in that time people may already have been crushed to death in a scramble for the exit.
The other dangerous situation is when information circulates within a bubble of likeminded people who may be resistant to attempts to correct it. In the case of the Sandy NYSE tweet, other Twitter users rapidly posted accurate information and nobody had a vested interest in continuing to believe the original, false information.16 Cases in which false information feeds into an existing worldview, making it harder to dislodge, are far from unimaginable. This may be more of a problem with social networks where information is less publicly visible, for example, through friend networks on Facebook or more “opaque” social networks such as e-mail or text messaging.17 The spread of misinformation in such “trusted networks” can be especially difficult to detect and correct since recipients are more likely to trust any information originating from within the network.
We should, therefore, not underestimate the risk of conflicting false rumours, circulating within two online bubbles of likeminded individuals, creating an explosive situation. The extensive use of Twitter by both sides during the November 2012 clashes between Israel and Hamas in Gaza18 points to the possibility of situations in which competing versions of events are propagated in self-reinforcing loops among groups of people who are predisposed to believe one side or the other and do not share a common information source that might help to dissipate some of the self-amplified information loops.
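To make the dynamics of such self-reinforcing loops concrete, the following sketch simulates a rumour spreading across two loosely connected bubbles of like-minded users, with a shared, trusted correction source that reaches believers at random. It is illustrative only: the network sizes, sharing probability and correction reach are arbitrary assumptions, not empirical estimates.

```python
import random

random.seed(42)


def make_two_bubbles(n_per_side=200, p_in=0.05, p_out=0.002):
    """Toy 'echo chamber' network: two communities with dense internal
    links and only sparse links between them."""
    nodes = list(range(2 * n_per_side))
    side = {v: 0 if v < n_per_side else 1 for v in nodes}
    edges = {v: set() for v in nodes}
    for u in nodes:
        for v in range(u + 1, len(nodes)):
            p = p_in if side[u] == side[v] else p_out
            if random.random() < p:
                edges[u].add(v)
                edges[v].add(u)
    return nodes, edges, side


def spread_rumour(edges, seed_node=0, p_share=0.4, correction_reach=0.0, steps=15):
    """Simple cascade: each step, newly convinced believers pass the rumour to
    neighbours with probability p_share; a shared correction source reaches
    each believer with probability correction_reach and immunizes them."""
    believers, corrected, frontier = {seed_node}, set(), {seed_node}
    for _ in range(steps):
        nxt = set()
        for u in frontier:
            for v in edges[u]:
                if v not in believers and v not in corrected and random.random() < p_share:
                    nxt.add(v)
        believers |= nxt
        hit = {v for v in believers if random.random() < correction_reach}
        believers -= hit
        corrected |= hit
        frontier = nxt - corrected
    return believers


if __name__ == "__main__":
    nodes, edges, side = make_two_bubbles()
    for reach in (0.0, 0.2):
        believers = spread_rumour(edges, correction_reach=reach)
        in_bubble = sum(1 for v in believers if side[v] == side[0])
        print(f"correction reach {reach:.1f}: {len(believers):4d} believers "
              f"({in_bubble} inside the originating bubble)")
```

The point is not the numbers but the structure: the fewer the links out of a bubble and the weaker the shared correction source, the longer a false rumour can circulate unchallenged within it.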
“Astroturfing”, Satire, “Trolling” and Attribution Difficulties
While it is certainly possible for a digital wildfire to start accidentally, it is also possible for misinformation to be deliberately propagated by those who stand to reap some kind of benefit. Some examples:
- In politics, the practice of creating the false impression of a grassroots movement reaching a group consensus on an issue is called “astroturfing”. During the 2009 Massachusetts special election for the US Senate, a network of fake Twitter accounts successfully spread links to a website smearing one of the candidates.19
- Fake tweets have moved markets, offering the potential to profit from digital wildfires. A Twitter user impersonating the Russian Interior Minister Vladimir Kolokoltsev in July 2012 tweeted that Syria’s President Bashar al-Assad “has been killed or injured”, causing crude oil prices to rise by over US$ 1 before traders realized the news was false.20
- Thirty thousand people of Assam origin fled the tech centre Bangalore in panic in 2012 after receiving text messages warning that they would be attacked in retaliation for communal violence in their home state.2122
Executives interviewed by Forbes and Deloitte placed social media among the greatest risks that their corporations face.23 For example, after the BP oil spill in the Gulf of Mexico, a parody Twitter account quoting the chief executive Tony Hayward as saying such things as “Black sand beaches are very trendy in some places” attracted 12 times more followers than BP’s corporate Twitter account.24 While this example might have been intended to be humorous, it is possible for satire to be mistaken for fact. In October 2012, Iran’s official news agency ran a story that originated on the satirical website The Onion, claiming that opinion polls showed Mahmoud Ahmadinejad was more popular than Barack Obama among rural white Americans.25
More worrying for businesses may be misinformation that circulates at a time when markets are already anticipating an important announcement. On 18 October 2012, NASDAQ halted trading in Google shares after a leaked earnings report, coupled with the weak results it contained, triggered a US$ 22 billion plunge in Google’s market capitalization.26 In this case the information came from a credible source, but it demonstrates the impact that unfortunately timed misinformation or rumours could also achieve.
It is not always easy to trace the source of a digital wildfire. It would be possible for careful cyber attackers to cover their tracks, raising the possibility of an organization or country being falsely blamed for propagating inaccurate or provocative information. Depending on existing tensions, the consequences of the false attribution could be exponentially worse than if no attribution had been made.
Towards a Global Digital Ethos
Around the world, governments are grappling with the question of how existing laws which limit freedom of speech, for reasons such as incitement of violence or panic, might also be applied to online activities. Such issues can be highly controversial: in the United Kingdom, courts initially convicted a man for making a joke on Twitter in which he threatened to blow up an airport in frustration at the cancellation of his flight – a conviction later overturned on appeal.27
Establishing reasonable limits to legal freedoms of online speech is difficult because social media is a recent phenomenon, and digital social norms are not yet well established. The question raises thorny issues of the extent to which it would be possible to impose limits on the ability to maintain online anonymity, without seriously compromising the usefulness of the Internet as a tool for whistle-blowers and political dissidents in repressive regimes.
Even if the imposition of such limits were enforceable, what authority would we trust to do it? The World Conference on International Telecommunications in Dubai, which aimed to revise a 1988 treaty administered by the International Telecommunication Union,28 sparked controversy in December 2012 when critics argued that seemingly innocuous technical regulations could have unintended negative consequences. Rules “ostensibly designed to do everything from fight spam to ensure ‘quality of service’ of Internet traffic could be used by individual governments to either throttle back incoming communications or weed out specific content they want to block.”29 As some revised treaty provisions were believed to “give a UN stamp of approval to state censorship and regulation of the Internet and private networks”,30 the United States refused to sign the amended treaty, a decision seconded by Canada and several European countries.31
When the incentives behind installing such “quality” checks are questionable, who can be trusted? And how could an established and recognized authority be created with the legitimacy to intervene and disrupt misinformation flows when they occur?
There are also profound questions of education and incentives. Users of social media are typically much less knowledgeable than editors of traditional media outlets about laws relating to issues such as libel and defamation. Many also have less to lose than traditional media outlets from spreading information that has not been properly fact-checked. But there are signs that new norms may be emerging. Figure 14 plots misinformation and correction tweets during Hurricane Sandy in October 2012. @ComfortablySmug’s misinforming tweet about the NYSE floor flooding received substantially fewer re-tweets than the tweets circulating fake photos of sharks swimming in New Jersey streets and of the Statue of Liberty beneath monstrous, looming storm clouds. Social media analysts say this is not surprising, as visual content tends to spread further than text alone. In addition, the misinforming tweets posted by @ComfortablySmug and @CNNweather peaked at significantly fewer re-tweets than the correction posted by @BreakingNews, even though the correction was posted within an hour of the misinforming tweet.32
One can speculate that people may have been more willing to re-tweet the photos of sharks and the Statue of Liberty because they were harmless and surprising and, most important, had significant entertainment value. The entertainment value may also explain the lack of interest in circulating the correction tweets from @BreakingNews. People may have been less prepared to re-tweet information that could be tied to serious consequences, such as NYSE flooding, before verifying. This suggests that norms may be emerging, and also re-emphasizes the fact-checking responsibility of trusted sources of information such as CNN. Slips like this could one day be a litigation risk for media corporations.
In addition to seeking ways to inculcate an ethos of responsibility among social-media users, it will be necessary for consumers of social media to become more literate in assessing the reliability and bias of sources. Technical solutions could help here. Researchers and developers are working on programmes and browser extensions, such as LazyTruth,33 Truthy34 or TEASE,35 that aim to help people assess the credibility of information and sources circulating online. It is possible to imagine the development of broader and more sophisticated automated flags for disputed information, which could become as ubiquitous as the programs that protect Internet users against spam and malware.
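As a purely illustrative sketch of how such an automated flag might work, the snippet below checks a post against hand-maintained lists of disputed domains and claims, much as a spam filter checks against known signatures. It does not use the LazyTruth, Truthy or TEASE tools mentioned above; the domain names and phrases are invented placeholders, and a real service would draw on fact-checker and community feeds.

```python
import re

# Invented placeholder lists; a real service would pull these from
# fact-checking organizations or crowd-sourced feeds.
DISPUTED_DOMAINS = {"example-rumour-site.com", "fakestormphotos.example.org"}
DISPUTED_PHRASES = {"nyse trading floor is flooded", "sharks swimming in the streets"}


def extract_domains(text):
    """Pull the bare domain out of any URL in the post."""
    return {m.group(1).lower() for m in re.finditer(r"https?://([^/\s]+)", text)}


def flag_post(text):
    """Return human-readable warnings if the post links to, or repeats,
    content already covered by the disputed lists."""
    warnings = []
    for domain in extract_domains(text) & DISPUTED_DOMAINS:
        warnings.append(f"links to a source flagged as disputed: {domain}")
    lowered = text.lower()
    for phrase in DISPUTED_PHRASES:
        if phrase in lowered:
            warnings.append(f"repeats a claim marked as disputed: '{phrase}'")
    return warnings


if __name__ == "__main__":
    post = ("BREAKING: NYSE trading floor is flooded under three feet of water! "
            "http://example-rumour-site.com/nyse")
    flags = flag_post(post)
    if flags:
        for warning in flags:
            print("Disputed-content flag:", warning)
    else:
        print("No known disputed content detected.")
```

Much as with spam filtering, the hard part is not the lookup but maintaining trustworthy, up-to-date lists, which is where the incentive and governance questions raised above re-enter.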
Feedback ratings on eBay, which enable users to assess the reliability of vendors, offer a potential template for the development of such a service. So far, most rating systems have been limited to specific websites: users do not carry their rating with them as a record of credibility wherever they go online, and it remains to be seen whether that would be a desirable or feasible model. Information disputed for ideological reasons, or deliberately misattributed, will continue to pose challenges; however, a system could be developed that would trace information to its source and indicate whether a broader community considered the source to be official. The system could also reveal how widely the source was trusted by a spectrum of other Internet users, all while protecting the identity of the source.
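A minimal sketch of the kind of portable, pseudonymous reputation record imagined above follows, assuming nothing more sophisticated than an average of community ratings. The class name, identifiers and the use of a truncated hash as a pseudonym are illustrative assumptions, not a proposal for how such a system should actually protect identities.

```python
import hashlib
from collections import defaultdict


def pseudonym(source_id):
    """Stable pseudonym so a source can carry a reputation record across
    sites without its identity being exposed (illustrative only; real
    anonymity would require far more than a bare hash)."""
    return hashlib.sha256(source_id.encode()).hexdigest()[:12]


class ReputationRegistry:
    """Toy cross-site reputation record: communities submit ratings (0-1)
    for a pseudonymous source; consumers query an aggregate trust score."""

    def __init__(self):
        self._ratings = defaultdict(list)   # pseudonym -> list of (community, score)
        self._official = set()              # pseudonyms vouched for as official sources

    def rate(self, source_id, community, score):
        self._ratings[pseudonym(source_id)].append((community, score))

    def mark_official(self, source_id):
        self._official.add(pseudonym(source_id))

    def report(self, source_id):
        p = pseudonym(source_id)
        ratings = self._ratings[p]
        avg = sum(s for _, s in ratings) / len(ratings) if ratings else None
        return {
            "pseudonym": p,
            "ratings": len(ratings),
            "communities": len({c for c, _ in ratings}),
            "average_trust": avg,
            "considered_official": p in self._official,
        }


if __name__ == "__main__":
    registry = ReputationRegistry()
    registry.mark_official("nyse-press-office@example.com")
    registry.rate("nyse-press-office@example.com", "finance-forum", 0.95)
    registry.rate("anon-rumour-account-123", "storm-watchers", 0.20)
    registry.rate("anon-rumour-account-123", "finance-forum", 0.10)
    print(registry.report("anon-rumour-account-123"))
```

A production system would also need to resist manipulation of the ratings themselves, a new flavour of astroturfing, and far stronger anonymity guarantees than a bare hash provides.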
It is not yet clear what a global digital ethos would look like, or how it could best be helped to develop. But given the risks posed by digital wildfires in our hyperconnected world, leadership is needed to pose these difficult questions and start the discussion.
Questions for Stakeholders
- Controlling the spread of false information online, either through national laws or sophisticated technologies, raises sensitive questions on the limits to the freedom of speech – a human value that is not regarded or celebrated equally across different societies. How can constructive international discussions be started to define a global digital ethos without further polarizing societies on issues of civil liberties?
- What actions can be taken to promote a new, critical media and information literacy among the general public, raising individuals’ capacity to assess the credibility of information and its sources?
- Where should different groups of stakeholders look to verify the source of information online? How can different markers of trust and information quality be promulgated to facilitate greater user clarity?
Hyperconnected World
Shaping Culture and Governance in Digital Media
Across the globe, the rules of digital content are being formed: laws and policies written, cultural norms emerging, industry coalitions forming. In this dynamic environment, the disparate expectations and interests of the primary stakeholder groups – government, industry, and citizens – are intertwined, and often at odds. Any government policy or business strategy will need to take into account numerous interlinked factors to achieve desired outcomes and avoid unintended consequences.
In a series of workshops held in Mexico City, Istanbul, Brussels, New York and New Delhi, and supported by a survey on Internet usage in 15 countries conducted in collaboration with comScore and Oxford University, the project aims to achieve the following over 2012 and 2013:
- Develop an alternative framework for thinking about digital media issues that starts with stakeholders’ intentions (e.g. rewarding innovation and making content accessible) rather than the actions taken (e.g. protecting intellectual property), in order to arrive at a shared understanding of issues such as freedom of expression, intellectual property and privacy in the digital universe.
- Account for differences in regional values and cultures and how they are reflected in the digital world, which is borderless.
- Explore the context and conditions needed for any government or business intervention to be effective and sustainable, showcasing some regulatory policies on intellectual property that may have seemed effective in the short term but proved too costly in the long term.
- Highlight cases in which collaborative efforts among stakeholders, or leadership by a specific group of organizations, have proved most successful, especially in relation to technological innovation.
The project is being led by media, entertainment and information industry partners from the publishing, social media and advertising industries, joined by regulatory bodies such as the Federal Communications Commission and the European Commission.

Figure 11 source: World Economic Forum.
Figure 12 source: Adapted from “Search Engine Journal”, http://www.searchenginejournal.com/wp-content/uploads/2011/09/social-media-black.jpeg, 2012.
Figure 13 source: Adapted from “Search Engine Journal”, http://www.searchenginejournal.com/wp-content/uploads/2011/09/social-media-black.jpeg, 2012.
Figure 14 source: “#Sandy: Social Media Mapping”. Social Flow, http://blog.socialflow.com/post/7120245759/sandy-social-media-mapping, 2012.