Digital Wildfires
Three years before the Brexit referendum and US presidential elections that brought the terms “fake news” and “post-truth” into broad circulation, a chapter in the 2013 Global Risks Report entitled “Digital Wildfires in a Hyperconnected World” warned of the increasing danger of misinformation being spread by social media. Among the key issues raised were the intentional use of social media to spread misinformation (for example, through the use of fake accounts to smear or impersonate political opponents); the difficulty of correcting misinformation when it spreads within trusted networks; global governance challenges; and the danger that some governments might use well-intentioned constraints on misinformation to limit freedom of speech.
The prevalence and impact of digital wildfires have surged in the five years since we first discussed them. Yet even as the potential social, political and geopolitical risks intensify, the ways in which widely shared misinformation can influence human behaviour remain far from fully understood. As social media becomes ever more deeply ingrained in daily life, mitigating adverse impacts will require sustained effort from both policy-makers and technology leaders, and a careful balance will have to be struck between regulation and the protection of individual liberties.
The prevalence of online misinformation has surged…
Digital misinformation is not a new phenomenon; Freedom House has been tracking the use of paid pro-government commentators to mimic grassroots supporters since 2009. Nor is it confined to the United States: Freedom House’s Freedom on the Net report found 30% more countries using fake online grassroots activity in 2017 than in 2016.37
However, it was during the 2016 US presidential election that “fake news” acquired global prominence, and much of the wave of research now underway has focused on that campaign. According to one study, in the three months immediately prior to the election the top 20 false news stories outperformed the top 20 stories from major news sources in terms of shares, reactions and comments.38 Engagement with fake news stories increased by 53% compared with the previous three-month period.39 Another study found that social media platforms supplied 40% of the web traffic reaching fake news websites, compared with only 10% of the traffic to the top mainstream news websites.40
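The comparisons behind these figures reduce to simple arithmetic: each story’s engagement is the sum of its shares, reactions and comments, and the change between periods is a percentage difference between totals. The sketch below illustrates that calculation in Python; the function name and all figures are placeholders for illustration, not data from the cited studies.

```python
# A sketch of the engagement arithmetic described above.
# All figures are illustrative placeholders, not data from the cited research.

def engagement(story: dict) -> int:
    """Total engagement for one story: shares + reactions + comments."""
    return story["shares"] + story["reactions"] + story["comments"]

# Hypothetical top-story tallies for two consecutive three-month periods.
previous_period = [
    {"shares": 40_000, "reactions": 55_000, "comments": 10_000},
    {"shares": 30_000, "reactions": 45_000, "comments": 8_000},
]
election_period = [
    {"shares": 70_000, "reactions": 80_000, "comments": 15_000},
    {"shares": 55_000, "reactions": 60_000, "comments": 12_000},
]

prev_total = sum(engagement(s) for s in previous_period)  # 188,000
curr_total = sum(engagement(s) for s in election_period)  # 292,000

# Period-over-period change, the same form as the reported 53% increase.
pct_change = 100 * (curr_total - prev_total) / prev_total
print(f"Engagement change: {pct_change:+.1f}%")  # prints +55.3% for these placeholders
```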
…but its impact is difficult to gauge
Studies have found that people have a hard time distinguishing accurate headlines from fake ones. One survey in late 2016 presented respondents with a random selection of six headlines (three accurate and three false) and asked them to rate the accuracy of the headlines they could recall having seen before.41 Respondents judged the false headlines to be “somewhat accurate” or “very accurate” 75% of the time, only slightly below the 83% figure for the accurate headlines.42 However, another study, conducted in 2017, suggests a greater level of user scepticism about news consumed via social media: it found that while 55% of respondents said they consumed news from Facebook, only 18% said they trusted news from Facebook most or all of the time.43
Efforts are underway to bolster safeguards
Numerous efforts are now underway to limit the prevalence and potential disruptiveness of online misinformation by helping the public to evaluate news sources critically. Since early 2016, Facebook has launched a number of initiatives to address false news, clickbait and sensationalism, including a partnership with fact-checking organizations that flags disputed stories and a network of researchers called the News Integrity Initiative.44 An early study by Yale researchers suggests that warnings of this kind reduce the likelihood of stories being shared but have only a limited effect on users’ perceptions of accuracy when stories are shown repeatedly.45 In 2017, the OECD announced plans to add critical thinking about information sources to its Global Competency tests,46 and programmes that teach students to evaluate online sources critically are a growing trend around the world.47
Amid increasing pressure from governments and users, technology companies have also been taking steps to reduce the financial incentives for the creators of fake news and to enhance the transparency of material on their platforms. For example, Google announced in November 2016 that it would restrict its AdSense ads on sites that “misrepresent, misstate, or conceal information about the publisher, the publisher’s content, or the primary purpose of the web property.”48 Facebook has taken action against ads on its platform that are “illegal, misleading or deceptive, which includes fake news”;49 however, these restrictions notably do not prevent users from writing or sharing inaccurate content.50
In September 2017, Facebook disclosed that a Russia-based organization had spent US$100,000 on advertisements promoting divisive political issues during the US presidential campaign; the company said it would provide the ads to congressional investigators,51 and it has launched tools to make all ads it runs publicly accessible in the future. In October, Twitter announced that it would ban two major Russian media organizations, RT (formerly Russia Today) and Sputnik, from advertising on its platform, following an internal investigation and the US intelligence community’s identification of the two companies as vehicles of Russian government interference in the 2016 presidential election.52 Twitter also announced an “Advertising Transparency Center” and new policies that will (1) provide details about all ads carried on its platform, (2) place clear visual markers on political advertisements, (3) disclose how political ads are targeted and (4) tighten its rules on political advertising.53