Research, data, and intelligence sharing
Threat intelligence — insight into the capability and intent of an existing or emerging menace. In the context of cybersecurity, this could range from technical indicators (e.g. samples of suspected malware) to non-technical indicators (e.g. hacker forum discussions)
Personally identifiable information (PII) — any data that could potentially identify a specific individual; any information that can be used to distinguish one person from another and can be used to de-anonymize anonymous data can be considered PII. Breach notification laws typically focus on notifying the public when PII might have been exposed to unauthorized individuals, particularly in the context of financial or medical information
As many more organizations in the private and public sectors are subject to cyberattacks, both sectors have been seeking to develop structured collaboration to ensure that individual research, data and threat intelligence are pooled to create a collective immune system-like response.

The key policy question regarding threat intelligence is: what is the government's role in sharing it? Coordination and sharing are necessary because individual actors rarely see the full range of potential threats. Threat intelligence has historically existed within a fragmented landscape, with companies relying on a combination of private-sector feeds provided by security vendors and internal research, and with limited public-sector involvement. In the last few years, however, governments have increasingly attempted to formalize threat-intelligence-sharing relationships between the public and private sectors and to create scalable models for sharing data without compromising sources and methods (in the case of government-provided intelligence) or privacy (in the case of private-sector-provided intelligence).
Some currently proposed regulations intended to promote privacy and limit the sharing of PII may actually hinder information-sharing relationships. Companies may have legitimate concerns that collaboration will create more legal liability than the cyberdamages it averts.

Policy positions on threat intelligence must consider four major questions:
- Who is involved in an intelligence-sharing relationship? Different models have been pursued, and calls are increasing to establish broader direct-sharing relationships not only within the private sector but also from the public sector to the private sector.
- What will be shared? Everything from raw data (e.g. URLs) to analysed intelligence (e.g. URLs determined to originate suspicious traffic that uses a specific protocol to target a particular vulnerability with the aim of exfiltrating a particular type of data) can be shared. However, the more analysed intelligence becomes, the less automatable its sharing becomes. Automation matters for threat intelligence because cyberattacks operate at network speed: responding quickly and updating firewalls and malware signatures may be decisive in preventing an intrusion. To put this into context, a recent report measures the median "dwell time" of cyberattackers (the length of time an attacker remains within a network) as 99 days.26 While increased automation can reduce this time, the speed that automation provides is not a panacea: automating a response into one's security posture may impede the legitimate use of an application or access to particular data.
- Is sharing mandated? Governments can choose to allow threat intelligence sharing on a voluntary or mandatory basis.
- What safe harbour will be provided? The concern here is specific to the private sector; namely, companies would like to avoid incurring customer or regulatory liability for sharing threat intelligence. In the United States, for example, companies historically worried that threat intelligence sharing could be construed as anticompetitive collusion.27 As such, a safe harbour from liability is often attached to a mandate to share intelligence.
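The automation trade-off raised in the second question can be sketched in code. The following is a minimal, hypothetical ingestion loop in which raw indicators (such as malicious URLs) from a shared feed are applied directly to a blocklist with no analyst in the loop; all names here are illustrative assumptions, not any real product's API.

```python
from dataclasses import dataclass, field
from typing import Iterable, Set

@dataclass
class Blocklist:
    """In-memory stand-in for a firewall's deny rules (hypothetical)."""
    blocked: Set[str] = field(default_factory=set)

    def ingest(self, indicators: Iterable[str]) -> int:
        """Apply raw indicators automatically; return how many new rules were added."""
        new = set(indicators) - self.blocked
        self.blocked |= new
        return len(new)

# Raw indicators need no human analysis before they are applied,
# which is why their sharing automates easily; richer, analysed
# intelligence would require a person to interpret it first.
feed = ["http://malicious.example/payload", "http://c2.example/beacon"]
fw = Blocklist()
added = fw.ingest(feed)
```

This speed is also the hazard noted above: if a shared feed contains a false positive, the same loop blocks legitimate traffic just as quickly.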
The risks and benefits to the different arrangements for each of the aforementioned questions are significant:
- The greater the number of participants, the more threat intelligence can be generated, shared and validated.
- While greater data volumes are not necessarily correlated with greater capability to generate insight, in theory more data provides a richer sample for companies to analyse and draw inferences from. In some circumstances, increased data that is not properly curated can impair a security posture as some participants may contribute less-actionable or lower-quality intelligence.
- The richer the intelligence shared, the more actionable it is for practitioners to secure their own organization’s systems against a particular threat. Of course, for those generating such intelligence, documenting an adversary’s motivations and providing contextualized analysis that another company or the government can act upon requires significant resources.
- Mandates are likely to result in greater volumes of data being shared along with concomitant costs. In addition to the prior concerns about the diminishing and even negative returns to increasing volumes of data, mandated formalized sharing may reduce informal and productive arrangements developed by the security teams of larger, well-established institutions (particularly in the closely knit international financial sector).
- National policy on threat intelligence sharing must be sensitive to international concerns and the implications of potential reciprocity. For example, compelling a multinational company to share threat intelligence with the public sector could imperil a company’s international business if international customers feel that privacy or confidentiality may become compromised.
Policy model: Research, data and intelligence sharing
Key values trade-offs created by intelligence sharing policy
Case study: Department of Homeland Security, Automated Indicator Sharing (AIS)28
To promote the rapid sharing of threat intelligence indicators between the public and private sectors, the U.S. Department of Homeland Security (DHS) created a voluntary, automated cyberthreat-sharing programme. The DHS programme is a remarkable innovation in the following key respects:
- AIS facilitates sharing between the public and private sectors, addressing a common refrain from large companies that intelligence-sharing relationships often appear one-sided.
- AIS is automated, so threats operating at network speed can be addressed almost as quickly as they materialize.
- AIS has mitigated the principal confidentiality and privacy concerns surrounding automated threat intelligence sharing by providing a limited safe harbour and by scrubbing shared intelligence of PII.
One lesson from AIS, however, is the difficulty of gaining traction for any voluntary threat intelligence sharing programme, at least while such a programme is "sub-scale." Like any network-based model, AIS delivers little marginal value to its first few participants, even given the public sector's contribution. As AIS becomes broadly adopted, however, each marginal would-be participant derives greater value and thus more would join (a version of a "flywheel" effect).
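The PII-scrubbing step described in the case study can be illustrated with a short sketch. AIS exchanges indicators using the STIX/TAXII standards; the plain dictionary and field names below are simplifying assumptions for illustration, not the actual AIS schema.

```python
# Hypothetical PII-bearing fields an indicator record might carry.
PII_FIELDS = {"submitter_email", "victim_name", "internal_hostname"}

def scrub(indicator: dict) -> dict:
    """Return a copy of the indicator with PII fields removed,
    so the record can enter the shared feed safely."""
    return {k: v for k, v in indicator.items() if k not in PII_FIELDS}

record = {
    "type": "indicator",
    "pattern": "[url:value = 'http://c2.example/beacon']",
    "submitter_email": "analyst@bank.example",  # PII: stripped before sharing
}
shared = scrub(record)
```

Scrubbing at the point of submission, before the record ever enters the shared feed, is what lets the sharing itself remain automated without re-exposing PII downstream.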
Connecting policy to values
Research, data and intelligence-sharing policies implicate a number of values, most importantly security, economic value, accountability and fairness:
- Greater data sharing is likely to lead to greater security, particularly over the longer term. In the short run, greater data sharing may have an ambiguous impact as security practitioners learn to use and deploy analytical tools to ingest and process more data and draw insights. However, in the long run, as simpler forms of data analysis become automated and accessible tools augment human reasoning, greater data sharing should lead to greater collective security.
- Greater security will be realized over the longer term, meaning that the economic value of reduced costs from cyberincidents will also be realized over the longer term. That said, in the short term, greater data sharing is likely to impose significant capital and operating expenditures. Not only will sharing at machine speed require investing in new tools, but intelligently leveraging these tools and training new cyberprofessionals to use and embed them as part of the security workflow will be costly.
- Greater data sharing generally increases accountability for all ecosystem participants. Entities in the public and private sectors will need to take responsibility both for contributing actionable intelligence and for acting on the intelligence shared by ecosystem participants. Divergent sharing models, for example mandating the private sector to share intelligence with the public sector without a reciprocal demand, result in differential accountability. In that example, the private sector's accountability increases while the public sector's does not.