Issue Overview

Predictably, 2011 was marked by increased threats of disruption to online services, including breaches and denial-of-service attacks. It was also a year peppered with heated discussion of the global ramifications of Internet policy and governance.

Given the constantly changing nature of these threats and policy responses, the Global Agenda Council on Internet Security decided to thoroughly document the complete risk profile of the Internet, addressing the needs of different stakeholders and identifying the critical areas that would help strengthen Internet policies and build a bridge between technology and governance.

Through the Risk Response Network’s annual Global Risks Report and the “Partnering for Cyber Resilience” initiative, the World Economic Forum’s work on risk over the past year has helped to put Internet security into the broader risk context.

The Global Risks Report has identified threats to critical infrastructure as one of the greatest concerns of the contributors surveyed. Critical infrastructure comprises the assets essential to the functioning of society and the economy; with respect to cyber risk, the Internet is arguably a core element, alongside telecommunication networks and electric grids.

Due to its inherent openness and distributed structure, the Internet is probably the most complex and hardest to pin down of these networks, making security and resilience efforts challenging. The Forum’s “Principles for Cyber Resilience” offers a framework for addressing the larger global risk profile arising from the increased connectivity of people, processes and objects. The efforts of the Forum’s risk and technology communities have effectively moved the agenda to focus on interdependence and resilience.

Technology and Community-Driven Governance

One of the greatest limitations of policy-driven Internet security is the problem of reach. Local regulatory environments have ripple effects in the greater network, but ultimately do not cover the entire system. Given the fragmented, decentralized and openly distributed nature of the Internet, the challenge of Internet security governance is akin to that of global governance: security efforts come up against the limits of resources and jurisdictional reach.

The Internet in aggregate operates without one central governing body, and mapping its architecture makes clear how distributed ownership of the Internet really is; each layer maps to different ownership and responsibility. This is at once the Internet’s greatest strength and its greatest weakness, and it makes the system complex to secure or to make resilient. Individually, stakeholders have limited influence on the larger architecture; in aggregate, the distributed network of stakeholders is where Internet security decisions are made.

In technology-mediated networks, at least two forms of governance are emerging: technology-embedded governance and community-emergent governance. The former refers to the embedding of rules surrounding use of a piece of physical or digital technology directly into the technology itself. For example, DVDs contain special codes that ensure that they can be viewed only in certain regions, while Digital Rights Management (DRM) attaches copyright information to software, music and other digital files to regulate their use. In short, the conduct of users is determined by the electronic data that is embedded into a piece of technology.
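
Technology-embedded governance can be made concrete with a small sketch. The following is a minimal, hypothetical illustration (not an actual DVD-player implementation): the rule that a disc may be played only in certain regions lives in the device’s own code, rather than in any external policy document.

```python
# Minimal sketch of technology-embedded governance. The governance
# rule ("play only discs coded for this device's region") is enforced
# by the technology itself. All names are illustrative.

REGION_FREE = 0  # discs coded 0 play everywhere

class Player:
    def __init__(self, region: int):
        self.region = region  # region fixed in the device at manufacture

    def can_play(self, disc_region: int) -> bool:
        # The embedded rule: no court or regulator intervenes here.
        return disc_region in (REGION_FREE, self.region)

player = Player(region=2)   # a European player
print(player.can_play(2))   # True: matching region
print(player.can_play(1))   # False: a North American disc is refused
```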

Community-emergent governance is the collective practice of communities of users agreeing on and enforcing a set of rules that govern a network. The laws of the network are decided and policed by all members of the network from the bottom up, rather than by any higher authority establishing and administering legislation from the top down. Wikipedia, the online encyclopedia, is a useful example. We are also beginning to see cyber threats addressed through a combined technological and community-driven approach. For example, in the case of the Kelihos.b botnet, university researchers, Microsoft and the US federal government simultaneously worked technical (reverse engineering) and legal (court orders) angles to take down the botnet. The same approach can be applied to information and communication technologies at all levels, even embedding permissions around individual pieces of information or data.
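
The closing idea, embedding permissions around individual pieces of information, can likewise be sketched. The example below is hypothetical (the class and field names are invented), showing rules that travel with the data itself rather than being enforced by a central gatekeeper.

```python
# Hypothetical sketch: permissions attached directly to a piece of
# data, so the rule travels with the information itself. Class and
# field names are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class GovernedRecord:
    payload: str
    allowed_readers: set = field(default_factory=set)

    def read(self, requester: str) -> str:
        # The embedded rule is checked at every access.
        if requester not in self.allowed_readers:
            raise PermissionError(f"{requester} may not read this record")
        return self.payload

record = GovernedRecord("sensitive notes", allowed_readers={"alice", "bob"})
print(record.read("alice"))   # permitted
# record.read("mallory")      # would raise PermissionError
```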

From top to bottom, the Internet stack is held together by technical, or rather techno-social, standards. These agreements are forged as the result of extended collaborative discussion in multistakeholder forums, specifically the Internet Engineering Task Force (IETF) and the World Wide Web Consortium (W3C).

These standards, the global and general adherence to which brings the Internet into being and defines its overall behaviour, are agreements defining protocols, the languages the machines speak to one another, and what those languages mean. The protocols embody agreements expressed in electronic messages on the Internet, and the standards encompass policy within their design. For example, quality-of-service mechanisms layered over the Internet’s default “best effort” delivery give priority to traffic such as videoconferencing, which requires guaranteed bandwidth and low latency. The Transmission Control Protocol’s (TCP’s) “fairness” principle keeps competing flows to comparable shares of the network and helps avoid congestion, a discipline that new transport protocols are expected to respect.
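
TCP’s fairness emerges from its additive-increase/multiplicative-decrease (AIMD) congestion control. The simulation below is a deliberately simplified sketch (it assumes both flows detect loss in the same round, which real networks do not guarantee), but it shows the policy embedded in the protocol: flows that start with wildly unequal shares converge toward equal ones.

```python
# Simplified AIMD sketch: two TCP-like flows sharing one link.
# Assumption (not real TCP): both flows see a loss in any round
# where their combined rate exceeds the link capacity.

CAPACITY = 100.0          # link capacity, arbitrary units
ALPHA, BETA = 1.0, 0.5    # additive increase, multiplicative decrease

def aimd(rates, rounds=200):
    for _ in range(rounds):
        if sum(rates) > CAPACITY:
            rates = [r * BETA for r in rates]   # back off on congestion
        else:
            rates = [r + ALPHA for r in rates]  # probe for spare capacity
    return rates

# Starting at 90 vs 5, the two rates end up nearly equal: the
# "fairness" policy is a consequence of the protocol's design.
print(aimd([90.0, 5.0]))
```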

In some protocols the policy is inherent and implicit; in others it is explicit: the W3C’s current work on the Do Not Track (DNT) privacy standard deliberately addresses behavioural targeting and privacy concerns. These global standards provide consistency across jurisdictions, but in many cases they rely on local laws, such as those on fraud, misrepresentation and privacy, for enforcement.
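
On the wire, DNT is nothing more than a one-character HTTP request header. The sketch below (the URL is a placeholder) shows how little technology is involved, and therefore how much of the standard’s weight is policy.

```python
# Minimal sketch: the proposed Do Not Track preference travels as a
# single HTTP request header, "DNT: 1". The URL is a placeholder.
import urllib.request

req = urllib.request.Request("http://example.com/", headers={"DNT": "1"})
print(req.header_items())   # [('Dnt', '1')]: the user's opt-out signal
# Whether a recipient honours the header is a policy question, not a
# technical guarantee; the standard embeds policy in its design but
# relies on local law or goodwill for enforcement.
```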

Consolidating governance over Internet security is not necessarily a viable solution. For example, some fear that a UN resolution expanding the authority of the International Telecommunication Union (ITU) to include greater regulatory power over the Internet could threaten the Internet’s openness through increased state censorship and activity monitoring. Despite the ITU’s efforts to engage the private sector, some still feel that an ITU governing body would represent only governments rather than a truly representative sample of industry and private sector stakeholders.

In order to preserve the openness of the Internet and support the resilience of a robust and reliable system, the Council looks to technical and policy-based solutions that draw on distributed stakeholders. It continues to support a commons approach to Internet resilience, guided by principles of mutual aid. A mutual aid framework for the resilience of the Internet draws conceptually on mutual aid treaties among states in the physical world, and the Council has proposed forging anew the relationships that sites and services have with each other on the Web. Recognizing that the Internet’s systemic risk is best tackled with openness and collaboration rather than localized regulation and legislation, the framework supports cooperation and distributed collective action to maintain the core functions of the Internet in the face of a loss of connectivity or of access to applications and information.

Local Policy, Global Effects

Council members discussed the challenges of addressing even well-known threats. Differences in local government policies mean, for example, that companies working with Internet service providers (ISPs) to root out botnets on their networks in the United Kingdom are not able to do so in the United States. This is just one example of how preventive action can be limited by, or subject to, variation in policy. Council members operating on a global scale wish to see greater information sharing across industries and governments to navigate these nuances and disparities in local processes and policies.

In the past year, the Council has observed how policy designed to address one problem can raise other concerns and complications for the global Internet. A hotly contested debate surged around the proposed Stop Online Piracy Act (SOPA) and Protect IP Act (PIPA) in the United States, bills intended to combat online trafficking in copyrighted material. Both included technical provisions, such as mandated Domain Name System (DNS) filtering, that, while aiming to protect intellectual property, inadvertently threatened fundamental tenets of Internet openness.

While SOPA and PIPA were stopped in the United States, the Spanish Government enacted similar legislation, the so-called Sinde Law (part of its Sustainable Economy Law), which allows copyright holders to have allegedly infringing websites shut down within days of a complaint. Legislative policy-makers must often navigate complex stakeholder requirements with a less-than-adequate understanding of the technical impact their decisions have on the architecture of the Internet.

The effects of changes made by local operators or jurisdictions may be primarily local, or they may have ripple effects on a wider, often global community. Architectural diagrams, such as the one produced by the Council below, should help to clarify the technical implications of particular policies. Stakeholders can benefit from comparing their own architectural diagrams of risks to this macro view of the full range of Internet security risks. Doing so may pinpoint more explicitly where their influence lies and where internal decisions may have ripple effects on the macro system.

Individual states have made some headway in advancing Internet security policy and recommendations over the past year; the United Kingdom, for example, published its Cyber Security Strategy at the end of 2011. But even within nation states, responsibility for cyber security is still up for debate, as shown by competing bills in the US legislature that would locate responsibility either with the civilian Department of Homeland Security or with the military’s National Security Agency.

The limits of established Internet policy bodies’ influence and efficacy were tested in 2011. To all intents and purposes, the Internet Corporation for Assigned Names and Numbers (ICANN) is the organization solely responsible for coordinating domain names and Internet Protocol (IP) addresses. However, its recent decision to expand top-level domain registration has prompted international calls to separate ICANN from its historical US influence. And within the United States, the Department of Commerce’s preliminary rejection of ICANN’s bid to continue running the Internet Assigned Numbers Authority (IANA) signalled global pressure to separate policy-making from implementation. Other multistakeholder bodies, such as the Internet Society (ISOC) and the Internet Governance Forum (IGF), are working concurrently on Internet policy in a larger sense, and these efforts will continue around the world.

Mapping Responsibilities

Looking at the incidents of the past year, the Council is reminded that vulnerabilities lie at every technical and architectural layer of the Internet, and that even the most basic trust mechanisms are at risk. When a vulnerability at one of Comodo Group’s Registration Authority (RA) partners allowed hackers to forge certificates for popular e-mail services, it highlighted how easily trust mechanisms can be taken for granted. Related incidents demonstrated how Secure Sockets Layer (SSL) certificates could be compromised. When these core mechanisms supporting Internet security become targets, how can users be sure that the standards and trust mechanisms of the Internet remain reliable?
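
The fragility works as follows: a TLS client accepts any certificate chain that terminates in one of its trusted root certificate authorities (CAs), so a single compromised CA or RA can mint certificates the client cannot distinguish from genuine ones. Below is a minimal sketch using Python’s standard ssl module; the hostname is a placeholder, and the code opens a live connection.

```python
# Sketch of the trust mechanism at stake. The client checks only that
# the server's chain validates against some trusted root; it cannot
# tell a well-guarded CA from a breached one. Hostname is a
# placeholder; this opens a live network connection.
import socket
import ssl

hostname = "example.com"
context = ssl.create_default_context()   # loads the system's trusted CAs

with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        # If a compromised RA had issued a forged certificate for this
        # name, the very same check would have accepted it.
        print(tls.getpeercert()["issuer"])
```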

Chief officers of every kind are familiar with architectural stack diagrams used to simplify complex systems so that high-level discussions can take place. But often CISOs, CIOs and CEOs use these diagrams to map only their corners of corporate information systems, that is, the systems over which they have control.

In traditional security discussions, these views inform and guide perimeter-securing efforts around particular closed systems. The Internet’s open and distributed design inherently resists consolidated ownership in terms of control, responsibility, or jurisdiction. There is no CSO of the Internet. However, the Council believes that Internet security could benefit from the exercise of diagramming the architectural stack to present a common vocabulary and shared vision so that more holistic discussions about Internet security are possible.

Developing a clear mental model will help to map weaknesses, known incidents and solutions, and even to chart jurisdiction and responsibilities across layers of the distributed system. Getting back to basics allows a diverse group of stakeholders to anticipate future vulnerabilities and identify fortification and resilience-building efforts.

The Internet Security Architecture diagram (right) outlines the Internet’s basic architectural layers, along with familiar technologies at each layer, loosely based on the Open Systems Interconnection (OSI) seven-layer model and the TCP/IP model. Using this simplified representation, it is possible to map the entry points of known attacks and trace their effects. For example, the Stuxnet computer worm entered at the human layer, with socially engineered e-mail attachments, yet threatened the bottom-most layer of physical infrastructure. The architectural view can easily be revised to map past and future incidents, maintaining a constantly updated risk and vulnerability profile across the entire architecture.
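
Such a mapping can be maintained as a simple, living data structure. The sketch below uses illustrative layer names and layer assignments (the Council’s actual diagram may divide the stack differently), with entries paraphrasing incidents discussed in this section.

```python
# Sketch of the architectural mapping as a living data structure.
# Layer names and the layer assigned to each incident are
# illustrative; the Council's diagram may divide the stack differently.

LAYERS = ["human", "application", "transport", "network", "physical"]

INCIDENTS = [
    {"name": "Stuxnet",                  "entry": "human",       "impact": "physical"},
    {"name": "Operation Payback DDoS",   "entry": "application", "impact": "application"},
    {"name": "Rumoured DNS root attack", "entry": "application", "impact": "network"},
    {"name": "Comodo RA compromise",     "entry": "application", "impact": "application"},
]

def layer_profile(incidents):
    """Count how often each layer appears as an entry point or an impact."""
    counts = {layer: 0 for layer in LAYERS}
    for item in incidents:
        counts[item["entry"]] += 1
        counts[item["impact"]] += 1
    return counts

for layer, hits in layer_profile(INCIDENTS).items():
    print(f"{layer:12s} {hits}")
```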

The Internet security discourse has tended to focus on explaining specific incidents (for example, hashing out the details of a distributed denial of service, or DDoS, attack) and then extrapolating worst-case scenarios for the future. But when the different DDoS incidents of the past year are mapped onto this architectural model, it becomes evident that they targeted vulnerabilities at almost every layer of the stack, and that the nature of these attacks varies greatly from case to case. There is a significant difference, for example, between the tactic used in Operation Payback/Avenge Assange, which targeted specific e-commerce web servers, and the root-server-level target threatened in the recent rumours of a 1 April DNS attack. Without looking at the whole picture, there is a risk of confusing the issues for a less technical audience and of overlooking weaknesses in areas whose security was taken for granted, as happened with the Comodo SSL certificate attack.

Recommendations

Building on the Council’s Internet Security Architecture diagram, stakeholders need a greater understanding of the different Internet layers and of the governance mechanisms between those layers. This calls for continued dialogue, further research and better data about the Internet infrastructure, upon which better policy and governance decisions can be made.

A concerted effort is required to develop reliable, publicly available measurements of the Internet, the Web and their usage. For example, data across ISPs on the most basic questions of penetration, speed and price are limited and inconsistent, hindering any understanding of flows of traffic and information. The Council could also use global, impartial data from third parties. Better data would do more than help governments meet their regulatory obligations; it would also improve self-regulation by private sector players and empower individuals to make better decisions.
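
A shared record format is a first step toward such measurements. The sketch below is hypothetical; the field names are invented and the figures are placeholders rather than real data, but it illustrates how a common schema makes even simple cross-provider questions answerable.

```python
# Hypothetical sketch of a common schema for cross-ISP measurement.
# Field names are invented and the figures are placeholders, not data.
from dataclasses import dataclass
from statistics import median

@dataclass
class ISPMeasurement:
    isp: str
    country: str
    penetration_pct: float   # share of households connected
    measured_mbps: float     # measured (not advertised) download speed
    usd_per_month: float     # price of the baseline plan

SAMPLE = [
    ISPMeasurement("ISP-A", "UK", 82.0, 12.0, 30.0),
    ISPMeasurement("ISP-B", "UK", 79.0,  9.5, 27.0),
    ISPMeasurement("ISP-C", "US", 71.0, 15.0, 45.0),
]

# With a shared schema, a basic comparative question has an answer:
print("median speed:", median(m.measured_mbps for m in SAMPLE), "Mbps")
```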

In addition to greater coordination and cooperation between the public and private sectors on information sharing, the Council would like to see more specific threat sharing across geographies. For instance, sharing known attack signatures across government agencies through an independent clearing house would create a very useful signature database, helping to prevent further attacks from known entities. Today, the specifics of such threats are often deemed “classified” or hoarded as “trade secrets” by security industry players.
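
A clearing house of this kind could be very simple at its core. The sketch below is hypothetical: the class, the use of content hashes as “signatures” and all names are illustrative assumptions, not a real system. Agencies contribute signatures of known threats, and any member can test traffic against the pooled set.

```python
# Hypothetical sketch of an independent signature clearing house.
# Content hashes stand in for "signatures" here; a real system would
# use richer formats. All names are illustrative.
import hashlib

class ClearingHouse:
    def __init__(self):
        self.signatures = {}   # digest -> contributing agency

    def contribute(self, agency: str, payload: bytes) -> None:
        digest = hashlib.sha256(payload).hexdigest()
        self.signatures[digest] = agency

    def is_known_threat(self, payload: bytes) -> bool:
        return hashlib.sha256(payload).hexdigest() in self.signatures

house = ClearingHouse()
house.contribute("agency-a", b"EXAMPLE-MALICIOUS-BYTES")
print(house.is_known_threat(b"EXAMPLE-MALICIOUS-BYTES"))  # True
print(house.is_known_threat(b"ordinary traffic"))         # False
```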

The Council notes that in the past year many risk entry points for incidents were at the human layer, often through socially engineered attacks. This highlights the importance of promoting greater risk literacy among private and public sector leaders and among everyday Internet users. It goes beyond often-touted security education efforts; what is needed is the kind of security hygiene that instils practical, common-sense habits and behavioural change. The Council supports initiatives to reach larger audiences, and efforts like the Forum’s “Partnering for Cyber Resilience” are a step in the right direction.

Disclaimer

The opinions expressed here are those of the individual members of the Council, and not of the World Economic Forum or any institutions to which they are affiliated.