Why trust in the digital economy is under threat
Declining trust, prompted by unease at the way some organizations are using digital technology, could undermine the societal benefits of digitalization.
Trust is one of three areas we focus on as part of the societal implications cross-industry theme. The other two areas are skills and employment, and a sustainable economy.
Digital technology has played a key role in driving transparency and trust. For example:
- Organizations are using remote sensors and RFID technology to track their end-to-end supply chain, enabling more transparent practices and enhancing product quality and safety.
- Social media has made it easier and quicker to share news of poor customer service or unpopular business practices with a global audience, putting more control over brand reputation in the hands of individuals.
- The rise of the sharing economy and peer-to-peer websites (such as Airbnb and Uber) has enabled people to develop trust-based relationships with strangers from different backgrounds and locations, reinforcing positive behavior.
- Public institutions have been able to open up traditionally closed processes (e.g., through participative budgeting and crowdsourcing policy recommendations), improving citizen engagement and transparency.
However, while technology remains the most trusted of all industry sectors, trust declined across all technology-based industries in 2015, according to Edelman. Privacy and security breaches have weakened trust in both technology products and the sector as a whole. McAfee estimates that cybersecurity incidents cost the global economy approximately $575 billion in 2014. Reflecting this challenge, many companies now recognize the need to build digital trust (see Figure 1).
Looking to the future, a broader set of issues is playing out that threatens to undermine trust further and limit companies' ability to innovate quickly: ethical questions surrounding organizations' use of data; the challenge of maintaining accountability in a world of algorithmic decision-making; and the impact on human capabilities of an increased reliance on machines.
How can the digital transformation of industries make a positive contribution to this challenge? Our value-at-stake analysis is providing new evidence on the potential benefits that digital initiatives can deliver. For example, by 2025, digital initiatives in the automotive industry could cut the projected annual death toll from road-traffic accidents, which exceeds 2 million, by 10%, primarily through assisted driving and usage-based insurance. This could generate an estimated $1.8 trillion gain for the economy through reduced crash costs and insurance premiums. However, these benefits must be set against the perceived risks and harms that attend such innovation; in this example, there are concerns about the privacy of personal data, as well as the vulnerability of assisted-driving technologies to cyberattacks.
Challenges
While digital initiatives may be able to deliver significant value to society, these benefits must be set against a wider understanding of their potential impact on consumers. If organizations want digital transformation to deliver its potential benefits, they will need to address the following concerns.
1. Institutional accountability
- Concerns over privacy and security: Data sharing can deliver many benefits to consumers and society more broadly, as highlighted above. Yet trust in companies to keep consumers’ personal data private is low: a 2014 survey by the Pew Research Center found that only 11% of Americans were at least ‘somewhat confident’ that online video and social media sites would keep their personal data private. Part of the challenge is the difficulty of segmenting consumers by their attitudes to privacy, which are context-specific and defy generalization. But it is a challenge businesses need to address: nine in ten internet users in the United Kingdom and the United States would avoid doing business with companies that do not protect their privacy.¹
- Algorithmic governance: Algorithms have been instrumental in delivering more personalized customer experiences and enhancing operational efficiency. But as we move to a world in which algorithms nudge individuals toward specific behaviors, concerns have grown about whether clear accountability structures can be maintained. Researchers at Georgetown University highlight that if a company uses an algorithm to identify potential recruits but only selects young people, the algorithm will learn to screen out older applicants next time. Unless programmers recognize the potential for algorithms to absorb human biases, such practices will continue.² Understanding where accountability lies for such decisions becomes a challenge, as the sketch below illustrates.
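The following is a minimal, hypothetical sketch in Python of the recruiting feedback loop described above. The synthetic data, the choice of a logistic-regression screener and all magnitudes are illustrative assumptions, not a description of any real hiring system; the point is simply that a model trained on biased historical decisions learns the bias.

```python
# Hypothetical sketch: a screening model trained on synthetic, biased
# historical hiring decisions absorbs the bias against older applicants.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
age = rng.uniform(20, 60, n)    # applicant ages
skill = rng.normal(0, 1, n)     # true job-relevant ability

# Past recruiters hired mostly on skill, but also quietly penalized age.
hired = (skill - 0.05 * (age - 40) + rng.normal(0, 0.5, n)) > 0

model = LogisticRegression().fit(np.column_stack([age, skill]), hired)
print("learned weight on age:  ", model.coef_[0][0])   # negative: bias absorbed
print("learned weight on skill:", model.coef_[0][1])   # positive, as expected
```

If the model's own selections were later fed back in as training data, the penalty on age would compound, which is precisely why accountability for such decisions is hard to locate.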
2. Information asymmetries
- Misunderstanding the ‘freemium’ economy: Search engines and social media sites have connected populations, democratized knowledge and improved access to many essential services. However, they are also often criticized for the amount of customer data they collect and monetize. Part of the challenge here is a disconnect, or lack of transparency, around the value exchange between customers and service providers. Reflecting this, a study by telecommunications company Orange found that 67% of European consumers surveyed believe that organizations benefit the most from using their personal data; only 6% identify themselves as the main beneficiary.³
- A lack of transparency and control over personal data: Currently, businesses manage their use of customers’ personal data through terms and conditions or an ‘end user license agreement’ (EULA). In many cases, however, these documents are difficult for the average individual to understand and invariably give the user few options for the ownership or management of their personal data.⁴ One international study found that 63% of people do not fully read such documents before accepting them.⁵
- Online profiling: Personalization and customer segmentation have been among the key benefits that digital business models deliver to customers. Yet there is a risk that profiling can have negative consequences, both intended and unintended. Researchers at Northeastern University found evidence of online price discrimination on several top e-commerce platforms, with significant differences in price depending on variations in a shopper’s personal profile (such as zip code, the means of accessing the website and shopping habits).⁶ Beyond this, the increasing use of big data and algorithmic decision-making could lead to more instances of individuals being deprived of basic needs (such as housing or access to government services) on the basis of their online data profile. The sketch below gives a simplified illustration of profile-keyed pricing.
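To illustrate the mechanics, the sketch below quotes different prices for the same product depending on profile attributes. The pricing rules are invented purely for illustration and are not drawn from the Northeastern study.

```python
# Hypothetical profile-keyed pricing rules, invented purely for illustration.
BASE_PRICE = 100.00

def quote(profile: dict) -> float:
    """Return a price for the same product, varied by the shopper's profile."""
    price = BASE_PRICE
    if profile.get("device") == "mobile":              # steer mobile users higher
        price *= 1.05
    if profile.get("zip_code") in {"10021", "94301"}:  # affluent-area markup
        price *= 1.10
    if profile.get("frequent_buyer"):                  # loyalty discount
        price *= 0.95
    return round(price, 2)

print(quote({"device": "mobile", "zip_code": "10021"}))      # 115.5
print(quote({"device": "desktop", "frequent_buyer": True}))  # 95.0
```

From the shopper's side, only the final quote is visible, which is why such differences are hard to detect without the kind of controlled measurement the researchers performed.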
A number of privacy-enhancing technologies are emerging that could take case-by-case human decision-making and ethical judgment calls out of data-sharing processes.
Case studies
Ethereum is a cryptocurrency platform that aims to allow a network of peers to administer user-created smart contracts without any central authority.⁷ This early attempt to build a digital governance system combines blockchain technology with services such as smart contracts and autonomous banking.
Wickr, established in 2012, is an instant messaging service that includes peer-to-peer encryption designed to secure all communications. Wickr already had 1 million users across 196 countries by 2014, and its technologies are enabling a range of new networks for both businesses and individuals. In 2014, it secured $30 million in funding.
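The following is a minimal sketch of the end-to-end encryption idea behind services like Wickr, written in Python with the PyNaCl library. It is not Wickr's actual protocol: the point is only that the keys live solely on the two endpoints, so an intermediary relaying the ciphertext cannot read the message.

```python
# Minimal end-to-end encryption sketch using PyNaCl (pip install pynacl).
# Illustrative only; this is not Wickr's protocol.
from nacl.public import PrivateKey, Box

alice_key = PrivateKey.generate()   # Alice's keypair, kept on her device
bob_key = PrivateKey.generate()     # Bob's keypair, kept on his device

# Alice encrypts with her private key and Bob's public key.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"meet at noon")

# The relaying service sees only ciphertext; Bob alone can decrypt it.
plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
assert plaintext == b"meet at noon"
```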
DuckDuckGo is a search engine that users can access anonymously, without their profile being shared. In early 2013, it handled about 1.7 million queries per day. Following the Snowden revelations of government surveillance, daily queries doubled in the second half of 2013; by early 2015, they had reached more than 7 million.
3. Questions of digital ethics
- Opinion-based value judgements: Technology companies are being asked with increasing frequency to provide opinion-based value judgements in response to customer queries – for example, when a search engine must decide which answers to surface for a sensitive health or financial question. While this kind of joint decision-making may serve customers well, it also raises questions of trust and accountability: where do the lines of liability and responsibility rest if consumers rely on decisions that cause them harm? The example of search engines points to a broader set of ethical questions that will emerge in a world of machine-based decisions. For example, should an autonomous vehicle prioritize the safety of its occupants over that of other road users?
- Human-digital integration: A key feature of the future employment landscape is the prospect of humans and machines working in ever-closer harmony. This innovation promises to unlock new skills and capabilities within the workforce, driving productivity and efficiency. However, there are also ethical questions raised by this increasing interdependence. Will the use of robots lead to the enfeeblement of some human capabilities? Part of the challenge here is that a suitable framework or taxonomy for understanding the ethical issues raised by digital technology does not exist.
Five ways to get started
1. Measure the value of digital trust. Businesses should develop valuation models to understand the risks and benefits attached to the use of new digital technologies, bringing fresh insight into how changes in stakeholder attitudes could affect financial metrics (a toy sketch of such a model appears after this list). Companies have already made progress in valuing trust and societal impact, providing a foundation on which new models can be built. Multidisciplinary expertise will be essential here – for example, bringing together data scientists with marketers, ethicists and sociologists who understand local norms, cultures and contexts.
2. Improve transparency around data-driven business models. Businesses should take steps to help consumers understand the dynamics of data transactions that may currently be opaque. Finding ways to develop a two-way ‘fair’ value exchange of data could help restore trust. For example, taking a cue from startups such as Datacoup or Handshake, active engagement with consumers on data monetization can help bring greater transparency and more trusted relationships.
3. Collaborate with industry peers to develop best practice. The responsible use of data and digital ethics are highly emergent areas, and it is often unclear what best practice looks like, or whether it even exists yet. Businesses need to take steps to create a safe forum in which challenges and success stories can be shared and replicated.
4. Develop ‘data for good’ strategies. Proactively demonstrating the positive impact of digital technology will help respond to concerns over how organizations are using it. The example of usage-based insurance and its ability to reduce road deaths underlines how data can be used to make a tangible improvement to individual well-being. Beyond this, new partnerships can be established with researchers, NGOs and government agencies to explore the potential applications of data. More information about the World Economic Forum’s work on this topic is available here.
5. Form a multi-stakeholder consortium to collaborate on new norms and a code of digital ethics. Businesses should collaborate with government and civil society to articulate a new code or taxonomy of digital ethics, creating a framework for organizations to understand the potential risks and harms associated with digital technology and data use. This should be made open-source, to encourage feedback and keep it relevant for different regions, sectors and industries. It should also be multidisciplinary, drawing on lessons and parallels from the fields of science and medicine.
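As a toy illustration of the valuation models in point 1, the sketch below frames digital trust as a risk-adjusted value calculation. Every figure is an invented placeholder, not an estimate from any real analysis; the point is only that the upside of a data initiative can be weighed explicitly against breach risk and trust erosion.

```python
# Toy value-at-stake model for digital trust. All figures are invented
# placeholders for illustration, not estimates from any real analysis.
upside_value = 50_000_000        # projected gain from a data initiative ($)
breach_probability = 0.08        # estimated likelihood of a privacy incident
breach_cost = 120_000_000        # direct costs plus lost customer lifetime value
trust_erosion_cost = 15_000_000  # projected revenue lost to declining trust

risk_adjusted_value = (
    upside_value
    - breach_probability * breach_cost   # expected breach loss
    - trust_erosion_cost
)
print(f"Risk-adjusted value at stake: ${risk_adjusted_value:,.0f}")
# -> Risk-adjusted value at stake: $25,400,000
```

In practice, the behavioral inputs (how stakeholder attitudes shift after an incident) would come from the marketers, ethicists and sociologists mentioned above, with data scientists quantifying them.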
Footnotes:
1. Deasy, D., “A Booming Market for Mobile Commerce Where Trust Is Essential,” TRUSTe blog, January 28, 2013.
2. “Can Computers Be Racist? Big Data, Inequality, and Discrimination,” Ford Foundation blog, http://www.fordfoundation.org/ideas/equals-change-blog/posts/can-computers-be-racist-big-data-inequality-and-discrimination/
3. “The Future of Digital Trust,” Orange, September 2014.
4. Naughton, J., “Fightback against Internet Giants’ Stranglehold on Personal Data Starts Here,” The Guardian, 2015.
5. Ipsos MORI, “Personalisation versus Privacy,” accessed November 15, 2015.
6. Hannak, A., Soeller, G., Lazer, D., Mislove, A. and Wilson, C., “Measuring Price Discrimination and Steering on E-commerce Web Sites,” Proceedings of the ACM Internet Measurement Conference, 2014.
7. Ethereum, 2015, https://www.ethereum.org/
Societal implications is one of four cross-industry themes (along with digital consumption, digital enterprise, and platform governance) that have been the focus of the World Economic Forum’s Digital Transformation of Industries (DTI) 2016 project. An overview of the DTI program can be found here.
Our in-depth analysis of the societal implications cross-industry theme is available in a white paper, which can be downloaded here.