Vulnerability liability
Definition
Known vulnerability — in contrast to a zero-day software vulnerability, a known vulnerability has been announced to the security community, generally with publicly documented methods to prevent its exploitation (e.g. patches or simple avoidance)
End-of-life — a product no longer supported by its developer with ongoing patches and updates
Open source software — software whose source code is publicly accessible, subject to licensing requirements. Source code is essentially the set of instructions that dictates how the program executes. Linux is an example of open-source software
Closed source software — software with proprietary and limited access to the source code of the program. Microsoft Office is an example of closed-source software
Software-as-a-service — a software distribution model in which one party hosts and maintains applications and makes them available to users over the internet
Policy model
Historically, the terms of use for licensing software have included some version of caveat emptor: “buyer beware”. Software vendors have explicitly avoided accepting liability for the damages caused by vulnerability exploitation. As software has become embedded more deeply into processes integral to an individual, business or even a nation-state, the potential damages associated with exploiting software vulnerabilities have also grown.
Moreover, there are tremendous swaths of legacy end-of-life (EoL) software from which vendors have entirely stepped away as they have moved on to develop newer, better versions.18,19 In some cases, the vendors no longer even exist. Increasingly, in some software categories, growing market concentration means that market incentives and user choice alone cannot promote greater security.
Liability can be attached to actors throughout the software ecosystem to calibrate incentives for security. One way to conceptualize the distribution of liability is by risk: liability is calibrated to the potential consequences of a vulnerability being exploited (e.g. greater risk brings more stringent liability for a vendor). In thinking about assigning responsibility for securing vulnerabilities, two main questions emerge:
Who is liable or otherwise responsible for securing software? At least four broad sets of liability regimes exist, ranging from no liability to holding vendors liable for their software (a minimal code sketch of this taxonomy follows the list):
- No liability, code closed-sourced — this is the current norm, in which counterparties to software vendors may negotiate some level of accountability by exception. In this regime, if damages arise as a consequence of a vulnerability being exploited, the vendor is not held responsible.
- No liability, code open-sourced — in exchange for being released from liability, vendors could be required to open-source the underlying code. In theory, users and implementers of software would be more empowered to address vulnerabilities on their own. In this regime, if a vendor has open-sourced the code, the vendor is not responsible for the consequences of the software being exploited.
- User, implementer liable — users and implementers could be held liable for damages arising from software being exploited. In practice, such a regime would create heightened incentives for users to contract for secure software. Within this context, there is also the possibility of differentiated liability between enterprises with dedicated security teams and consumers (with more responsibility attached to entities “that should know better”).
- Vendor liable — vendors could be held liable for damages arising from software being exploited. For example, if a vendor did not issue a patch for a known software vulnerability, the vendor would be held liable if damages arose as a consequence (thereby heightening incentives for vendors to design and maintain secure software).
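As a minimal sketch, the four regimes can be expressed as a lookup from regime to the party that bears damages when a vulnerability is exploited. The enum names and descriptions below are illustrative labels, not terms drawn from any statute or from this report.

```python
from enum import Enum

class LiabilityRegime(Enum):
    """Illustrative labels for the four regimes discussed above."""
    NO_LIABILITY_CLOSED = "no liability, code closed-sourced"
    NO_LIABILITY_OPEN = "no liability, code open-sourced"
    USER_IMPLEMENTER_LIABLE = "user/implementer liable"
    VENDOR_LIABLE = "vendor liable"

# Hypothetical mapping: who bears damages when a vulnerability is exploited.
BEARS_DAMAGES = {
    LiabilityRegime.NO_LIABILITY_CLOSED: "the harmed party (losses lie where they fall)",
    LiabilityRegime.NO_LIABILITY_OPEN: "the harmed party, who at least has source access to self-mitigate",
    LiabilityRegime.USER_IMPLEMENTER_LIABLE: "the user or implementer",
    LiabilityRegime.VENDOR_LIABLE: "the vendor",
}

for regime in LiabilityRegime:
    print(f"{regime.value}: {BEARS_DAMAGES[regime]}")
```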
How should that liability shift when software transitions to EoL, or vendors go out of business?
- Given that most software vendors cannot afford to support software ad infinitum, there are justifiable concerns about attaching liability to software in perpetuity. To address those concerns, some commentators have proposed a sliding scale of liability, such that liability shifts as software enters EoL. For example, software may begin as a vendor’s responsibility, but as it reaches EoL, liability may be transferred to users/implementers. In such a regime, vendors would be held responsible for designing and maintaining secure software, and there would be commercial incentives for users and implementers to upgrade to newer versions of software, when available.
Within this simplified framework, a number of hybrid arrangements could be proposed for splitting liability. For example, vendors can be held liable for rapidly providing mitigation guidance or a patch for a known vulnerability while users and implementers can be held liable for timely patch deployment.
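To make such a hybrid arrangement concrete, the sketch below assigns responsibility for a given incident from three facts: whether the software had reached EoL, whether the vendor had shipped a patch, and whether the implementer deployed it within a grace period. The 30-day grace period, the function names and the rule ordering are all hypothetical choices for illustration, not a proposed legal standard.

```python
from datetime import date, timedelta
from typing import Optional

# Hypothetical grace period an implementer gets to deploy an available patch.
DEPLOY_GRACE = timedelta(days=30)

def liable_party(incident: date,
                 patch_released: Optional[date],
                 patch_deployed: Optional[date],
                 eol: date) -> str:
    """Toy hybrid-liability rule combining the sliding scale and the
    vendor-patches/implementer-deploys split described above."""
    if incident >= eol:
        # Sliding scale: past EoL, responsibility shifts away from the vendor.
        return "user/implementer (software past end-of-life)"
    if patch_released is None or incident < patch_released:
        # Vendor is on the hook until it mitigates a known vulnerability.
        return "vendor (no patch available for a known vulnerability)"
    deadline = patch_released + DEPLOY_GRACE
    if incident > deadline and (patch_deployed is None or patch_deployed > deadline):
        # Implementer had a patch and a reasonable window, but did not deploy.
        return "user/implementer (available patch not deployed in time)"
    return "vendor (patch recently released; deployment window still open)"

# Example: patch shipped 60 days before the incident and never deployed.
print(liable_party(incident=date(2018, 6, 1),
                   patch_released=date(2018, 4, 2),
                   patch_deployed=None,
                   eol=date(2020, 1, 1)))
# -> user/implementer (available patch not deployed in time)
```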
Significant trade-offs are associated with different liability regimes:
- No liability, code closed-sourced — this is likely to result in fast software releases with significant security risks, as in the current environment.
- No liability, code open-sourced — this is likely to result in fast software releases with perhaps slightly diminished security risks. Put differently, it is not necessarily the case that open-source software is more secure. For example, in 2014, independent researchers discovered “Heartbleed”, a cryptographic flaw in OpenSSL, a common open-source implementation of encryption running on two-thirds of the world’s servers.20 Furthermore, open-sourcing code may dilute commercial incentives to innovate in software, as a competing vendor may be able to engineer product changes more quickly by relying on the investment of a first-mover.
- User, implementer liable — this is likely to result in slower software releases as commercial incentives from some users push vendors to develop more secure software.
- Vendor liable — this is likely to result in even slower software releases as vendors will have every commercial incentive to provide security by design and deploy engineering resources behind maintaining secure software.
- Embedded software — bespoke software embedded within hardware that is not traditionally understood as a locus of computation (e.g. industrial control systems) — raises particularly difficult policy considerations. The depreciation horizon of the hardware often exceeds that of the software. Consequently, to continue realizing value from the hardware, organizations often run embedded systems whose software is no longer supported or well documented.
The increasing adoption of software-as-a-service addresses both the problem of EoL software and the incentives to keep software secure: vendors regularly update and provision software for customers (and generally maintain only a few versions).
[Figure: Breaches are the visible consequence of a very small share of the code base being exploited*]

*Wired. Gates, B. (2002, 17 January). “Bill Gates: Trustworthy Computing”. Retrieved 11 December 2017 from https://www.wired.com/2002/01/bill-gates-trustworthy-computing/

[Figure: Policy model: Vulnerability liability; key values trade-offs created by vulnerability liability policy]
Vulnerability disclosure
Case study: U.S. National Vulnerability Database, China National Vulnerability Database
To provide a structured repository that companies and researchers can use to mitigate and respond to security vulnerabilities, some countries have established national vulnerability databases. These databases serve as the source of record for vulnerability mitigation and form the basis of the automated systems that enterprise security teams use to prioritize patching. Of course, a database of vulnerabilities also presents an opportunity for adversaries to develop exploits that weaponize vulnerabilities — at least until companies and researchers develop patches.
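As a sketch of how such a database can feed automated patch prioritization, the snippet below queries the NVD's public CVE API and sorts matching entries by CVSS v3.1 base score. The endpoint reflects the publicly documented v2.0 API; the exact JSON field paths and the 7.0 severity threshold should be treated as assumptions to verify against the current documentation.

```python
import requests

# Public NVD CVE API 2.0 endpoint (no API key required for light use).
NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def high_priority_cves(keyword: str, min_score: float = 7.0):
    """Return (CVE id, CVSS v3.1 base score) pairs at or above min_score,
    highest first. The 7.0 cut-off ('high' severity) is illustrative."""
    resp = requests.get(NVD_API, params={"keywordSearch": keyword}, timeout=30)
    resp.raise_for_status()
    findings = []
    for item in resp.json().get("vulnerabilities", []):
        cve = item["cve"]
        metrics = cve.get("metrics", {}).get("cvssMetricV31", [])
        if not metrics:
            continue  # entry has no v3.1 score; skipped in this sketch
        score = metrics[0]["cvssData"]["baseScore"]
        if score >= min_score:
            findings.append((cve["id"], score))
    return sorted(findings, key=lambda pair: pair[1], reverse=True)

# Example triage: surface the five highest-scoring OpenSSL-related entries.
for cve_id, score in high_priority_cves("openssl")[:5]:
    print(cve_id, score)
```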
The United States has established a national vulnerability database, known as the NVD, which serves as the international database of record; its nomenclature and structure are widely adopted. The database is constructed from the voluntary submissions of vendors and researchers — a “push” model. Often, this voluntary submission occurs some time after the vendor or researcher publicly discloses a vulnerability.
In contrast, China’s national vulnerability database, known as the CNNVD, relies principally on a “pull” model: its researchers actively search for vulnerabilities surfaced by researchers, vendors and other sources, and document them in the CNNVD. As a consequence, for the very same vulnerability, the CNNVD is often more timely than the NVD. Recent research suggests that the average delay between first disclosure and availability on the CNNVD is 13 days, while on the NVD the average delay is 33 days.
The practical impact of this staggered release of vulnerability disclosures across national vulnerability databases is the opportunity for a form of vulnerability arbitrage: adversaries can learn about vulnerabilities on the CNNVD and develop exploits before companies, particularly in the US context, have even begun researching mitigations.
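In rough numbers, the averages above imply the following exposure window for a defender who relies solely on the NVD. The arithmetic below is a simple illustration; the actual window varies by vulnerability and by the defender's own monitoring.

```python
# Average days from first public disclosure to database availability,
# per the research cited above.
CNNVD_DELAY_DAYS = 13
NVD_DELAY_DAYS = 33

# An adversary watching the CNNVD learns of an average vulnerability this
# many days before a defender who watches only the NVD.
arbitrage_window = NVD_DELAY_DAYS - CNNVD_DELAY_DAYS
print(f"Average arbitrage window: {arbitrage_window} days")  # -> 20 days
```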
Connecting policy to values
Policy-making around attaching liability to those who build, implement and use software implicates three key values: security, economic value and accountability.
- Attaching greater liability to software will increase its security. Whether liability rests with vendors or users, any attachment of liability is likely to increase security as the liable party takes greater precautions. If liability for security rests with vendors, vendors will develop more secure products. Indeed, such a requirement might encourage greater research into patching mechanisms that are both less intrusive and more difficult for users to avoid. On the other hand, increased liability for users will increase security not only through users taking greater precautions (e.g. by more rigorously limiting access to and control of particularly critical systems) but also by creating greater market incentives for vendors whose products meet more exacting security standards.
- Requirements to open-source software may have an ambiguous impact on security. While, in theory, having a community curate and test a codebase would improve security, there is little empirical evidence that open-source software is inherently more secure than its closed-source equivalent. That said, requirements to open-source unsupported software that was previously closed source may improve security. For software vendors selling closed-source software, such a requirement would prevent those companies from monetizing products while avoiding the responsibility of ensuring security (open-source software is generally monetized differently from closed-source software). As a consequence, vendors may have a stronger incentive to provision and secure products for a longer period of time.
- Insofar as greater liability results in greater security, the economic value of greater liability is positive. However, there is not a simple linear relationship between greater liability and greater economic value; at some threshold, greater liability will impose significant costs on the software ecosystem, outweighing the mitigated security damages. Furthermore, assigning liability may decrease the speed of innovation as vendors now bear the equivalent of a “warranty cost”, either directly or indirectly, through the demands of their users.
- Greater liability is likely to lead to greater private-sector accountability, particularly if vendors are held liable for security directly.