Cyber Resilience
  • Preface
  • Introduction
  • Using the Playbook for Public-Private Collaboration
  • Reference architecture for public-private collaboration
  • Zero-days
  • Vulnerability liability
  • Attribution
  • Research, data, and intelligence sharing
  • Botnet disruption
  • Monitoring
  • Assigning national information security roles
  • Encryption
  • Cross-border data flows
  • Notification requirements
  • Duty of assistance
  • Active defence
  • Liability thresholds
  • Cyberinsurance
  • The future of cyber resilience
  • Appendix: Normative trade-offs framework
  • Acknowledgements

    Monitoring

    Definition

    Metadata — basic information about data that makes categorizing, finding and working with particular instances of data easier. In the context of surveillance, especially by government agencies, metadata not only facilitates categorizing and retrieving content; it is informative in its own right and may also be used to legitimize collecting and examining content.

    Internet service providers (ISPs) — companies that provide access to the internet and other related services, such as website building and virtual hosting.33

    Technology platforms — companies that facilitate communication or messaging over a variety of protocols (e.g. mobile messaging, instant messaging, email)

    Policy model

    One way to frame this policy question is similar to the discussion on encryption; namely, at a fundamental level, who should be able to see what? While end users necessarily observe digital content, what content should others be able to monitor to promote security and other valid national interests (e.g. privacy)?

    This tension surfaces in at least three scenarios: between an employee and an employer, between a customer and an ISP, and between a user and a technology platform.

    Additionally, for purposes of analytical simplicity, it is helpful to separate internet traffic into two components. The first is metadata: the instructions that allow entities to understand to whom content is addressed and how to relay it. Metadata is intrinsically difficult to mask (e.g. through encryption); if the address of the recipient is masked, how will an ISP know where to relay that data? The second is content: the actual digital payload that a user sends or receives. Content may include malware and other malicious digital payloads.

    In the context of monitoring metadata and content, a wide variety of policy options can be undertaken, but it is helpful to think in terms of three statuses: approved (a priori legitimate), permissible (e.g. by court order) or forbidden (never permitted). To take a few examples:

    • Government may, by exception, be permitted to monitor metadata in the investigation of a crime (e.g. under subpoena). Alternatively, policy-makers may choose to bar the government from observing any digital data in transit by broadly restricting the gathering of metadata.
    • Employers may be presumptively allowed to observe the digital content accessed by employees, particularly if employees are utilizing employer-provided resources.
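The approved/permissible/forbidden framing above can be pictured as a lookup from an actor and a data type to a policy status. The sketch below is purely illustrative: the actors, data types and statuses come from the text, but the specific entries are hypothetical examples, not positions taken by the report.

```python
# Illustrative model of the three-status monitoring framework described above.
# The POLICY entries are invented examples echoing the bullets in the text,
# not an actual policy configuration.
from enum import Enum

class Status(Enum):
    APPROVED = "a priori legitimate"
    PERMISSIBLE = "allowed by exception, e.g. by court order"
    FORBIDDEN = "never permitted"

# (actor, data type) -> status
POLICY = {
    ("government", "metadata"): Status.PERMISSIBLE,  # e.g. under subpoena
    ("government", "content"): Status.FORBIDDEN,
    ("employer", "content"): Status.APPROVED,        # employer-provided resources
    ("isp", "metadata"): Status.APPROVED,            # needed to route traffic
}

def may_monitor(actor: str, data: str) -> Status:
    """Look up the status for an (actor, data type) pair.

    A configuration that is silent on a pair defaults to FORBIDDEN here;
    that default is itself a policy choice.
    """
    return POLICY.get((actor, data), Status.FORBIDDEN)
```

The point of the model is that each cell of the (actor, data type) matrix is an independent policy choice, and the default for unlisted cells is as consequential as the explicit entries.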

    Each policy configuration has its own unique risks and benefits, but a few are common:

    • The more that monitoring of content (which may include malicious payloads) is restricted to end users themselves, the more those users are responsible for their own security; that is to say, security measures that could otherwise be deployed by a government, a tech platform, an ISP or an employer cannot be utilized.
    • Placing greater capabilities and responsibilities to monitor content in the hands of ISPs, employers or tech platforms may create market-based incentives for security. For example, some users may prefer not to have their content monitored and are willing to assume the security risk that implies; others may willingly subject their content to inspection to minimize that risk. Yet these market-based incentives may be thwarted by market concentration (particularly in the case of ISPs and tech platforms) and the quasi-public nature of ISPs in some countries.
    • In general, organizations in a position to monitor traffic must balance the risk of overly intrusive monitoring that violates privacy with the potentially heightened security that could be guaranteed.

    Two important countervailing technological trends impact the extent to which different actors are able to monitor internet traffic and enforce security protocols:

    • Increasing amounts of data flows are being encrypted by default, thereby preventing government, ISPs, tech platforms and, to a lesser extent, employers from observing and filtering content (depending on product and context, e.g. data “in flow” vs data “at rest”) even where doing so would be a priori permissible.
    • However, in some contexts the increasing adoption and proliferation of traffic-analysis technology allow greater inference of content based solely on metadata. For example, if metadata reveals the video compression protocol, the sizes of the packets transmitted and the address accessed, the statistical deep-packet-inspection techniques adopted by some ISPs could show that the precise size pattern of the encrypted data bears the digital “fingerprint” of accessing a particular form of objectionable content provided by a known website.35
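The fingerprinting idea in the second bullet can be sketched in a few lines: even when payloads are encrypted, a characteristic sequence of packet sizes can identify a known resource. The fingerprints and traffic traces below are invented for illustration; real traffic-analysis systems use far richer statistical features.

```python
# Minimal sketch of metadata-based inference: match an observed sequence of
# encrypted packet sizes against known size "fingerprints". All fingerprints
# and traces here are hypothetical examples.
from typing import Optional

FINGERPRINTS = {
    "known-video-a": [1460, 1460, 980, 1460],
    "known-page-b": [512, 512, 1024],
}

def infer_resource(packet_sizes: list) -> Optional[str]:
    """Return the name of a known resource whose size pattern appears
    anywhere in the observed packet-size sequence, else None."""
    for name, pattern in FINGERPRINTS.items():
        n = len(pattern)
        # slide the fingerprint across the observed sequence
        for i in range(len(packet_sizes) - n + 1):
            if packet_sizes[i:i + n] == pattern:
                return name
    return None
```

Note what this sketch demonstrates: the inference never decrypts anything; the sizes alone, which encryption does not hide, carry the signal.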

    Policy model: Monitoring

    Key values trade-offs created by monitoring policy

    Connecting policy to values

    The value trade-offs implicated by enabling greater monitoring — whether by employers, government, ISPs or tech platforms — are to a large extent the same value trade-offs associated with weak encryption policy, whereby law enforcement has a mechanism to contravene encryption. However, one key difference, at least to date, has been that monitoring privileges are typically the province of the private sector. Consequently, whereas “backdoors” are likely to erode trust and create security issues whose combined effect might reduce commerce, monitoring privileges held by private-sector intermediaries have not yet eroded trust in ICT:

    • Greater monitoring privileges may improve security, provided those privileges are not compromised by bad actors who misuse the information gained. Monitoring content will allow different ecosystem intermediaries to filter and inspect traffic for malicious content.
    • Greater security will result in economic benefits (assuming trust is not compromised). Given that fewer individuals and organizations will fall victim to cyberattacks, increased monitoring should reduce the costs associated with cyberincidents.
    • Monitoring policy impacts the accountability of the public and private sectors. Given the ability to inspect content, both the public and private sectors will be enabled to use technology (of which deep-packet inspection is just one example) that will allow them greater capabilities to provide security for end users. Consequently, greater monitoring privileges will be associated with greater accountability.
    • Privacy is also impacted by choices in monitoring policy. Greater monitoring, particularly of content, will reduce privacy.
    33 TechTarget. (n.d.). “ISP (Internet service provider)”. Retrieved 21 December 2017 from http://searchwindevelopment.techtarget.com/definition/ISP
    35 Anderson, N. (2007, 26 July). “Deep packet inspection meets 'Net neutrality, CALEA”. Ars Technica. Retrieved 12 December 2017 from https://arstechnica.com/gadgets/2007/07/deep-packet-inspection-meets-net-neutrality/