The Value of Threat Modeling: Three Data Points to Consider

  • 27 February 2024

This article is co-authored by Nick Kirtley and Michael Bernhardt, who met at ThreatModCon 2023 and began this exercise of defining the value of threat modeling. This post marks the beginning of an iterative process aimed at refining the framework.

Nick and Michael would love to hear your thoughts! Your input is super important as they keep refining the framework. Whether you have questions, ideas, thoughts, or feedback, feel free to leave them in the comments below! 😊

It has now been more than 15 years since a cohort of Microsoft thought leaders popularized Threat Modeling, and it has since been applied by a number of experts in the field around the world. Yet, if we ask the thought-leading experts today about the current state of Threat Modeling, one of the themes would surely be ‘potential and anticipation’ rather than ‘implemented and justified’.

We, the authors, believe that the lack of Threat Modeling uptake can in part be explained by limited monetary value justification. Put simply, it is not clear how the Return on Investment (ROI) of Threat Modeling can actually be represented in a business context. This article aims to spark an exchange within the community about how that future value could be projected.

Threat Modeling should give security experts a methodology to assess the security aspects of complex software stacks. In collaboration with the development team, it supports the rationale for applying security controls as well as the identification of explicit architecture flaws. To be able to spend the effort on structured security assessments in the form of Threat Modeling, security teams need to justify their efforts and value to the business, and not being able to articulate a measurable ROI hinders broader adoption. The general discussion about the current shortcomings of Threat Modeling and the rationale for quantifying its value was collated in the talk “Hitchhiker’s Guide for Threat Modeling”, held in October 2023 at ThreatModCon in Washington DC, USA.

Not familiar with ThreatModCon? It was the first Threat Modeling convention organized by Threat Modeling Connect. Many of the world's leading threat modeling experts and enthusiasts attended the convention and provided talks on interesting topics and perspectives.

How does the application of Threat Modeling compare to Penetration Testing?

Quantifying Threat Modeling, and showing that it has positive value and ROI, can help us to justify implementing it within a security program.

However, identifying the value and ROI of Threat Modeling is not easy. One way of doing so is by comparing Threat Modeling to an established security practice: penetration testing. In most security-mature organizations, penetration testing is a known and mandatory security activity. Contrast that with Threat Modeling, where introducing it within an organization can be difficult without significant justification.

 

|  | Penetration Testing | Threat Modeling |
| --- | --- | --- |
| **Familiarity** | Very well known within the security organization, and also known within IT (and business) teams. The concept of breaking a running instance and being able to show the impact is well understood by security as well as IT folks. | A highly structured process that surfaces issues at any stage of the development lifecycle. It is, however, highly conceptual; it is often perceived as too conceptual and is barely understood by IT folks, and even by other security domains. |
| **Cost justification** | Because it immediately demonstrates the consequences of a finding, it requires limited justification, even though penetration testing faces the same constraint as Threat Modeling in showing actual coverage of all potential attack patterns, and comes with a higher cost of resolution. | Some justification is required for limited ad-hoc Threat Modeling. Significant justification is required to develop a program with mandatory Threat Modeling: building up a structured security assessment process first ramps up the cost factor and only shows its value at a later stage. |
| **Impact** | Identifies security gaps only once they have been introduced into the application or system in scope. Its ROI can be considered significantly lower: the effort of conducting the assessment is similar, but the cost of resolution is significantly higher. | Identifies security threats and requirements at any phase of the application or system lifecycle, but aims at early identification. It shows a better ROI when applied in a well-managed manner: the effort of conducting the assessment is similar, but the effort of resolution is significantly lower. |

How can we Define the ROI for Threat Modeling?

The Threat Modeling community has so far failed to express measurable criteria for defining the value of applying Threat Modeling as part of the product development lifecycle. This section outlines possible ways to do so, as seen by the authors.

 

Aspects to consider when aiming to define the value of Threat Modeling

Threat Modeling does not aim to prove its value by quantity, in the sense of identifying a high number of issues that could easily be found by other means, e.g. automated scans by security tools. Rather, it aims to uncover fundamental misconceptions in software that could lead to numerous attack paths, attack vectors, and (architectural) security weaknesses. Every Threat Modeling workshop should therefore reserve a slot for a retrospective with the development team in which the identified issues are reviewed and assigned a risk vector; if that was not part of your workshops, you may review the guidance from the “Hitchhiker’s Guide for Failing a Threat Modeling”.

The discussion about risk vectors can be explored through three crucial dimensions:

  • The savings from the organization applying a proactive security assessment;
  • The costs avoided by preventing a successful breach;
  • The reputational impact or regulatory claims avoided by preventing a successful breach.
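The three dimensions above can be combined into a simple back-of-the-envelope ROI estimate. The sketch below is purely illustrative: the function name and all figures are invented placeholders, not values or a formula from this article.

```python
# Illustrative sketch only: the dimension values below are hypothetical
# placeholders, not figures from the article or any real assessment.

def threat_modeling_roi(assessment_cost: float,
                        proactive_savings: float,
                        breach_costs_avoided: float,
                        reputational_costs_avoided: float) -> float:
    """ROI as (total value delivered - cost of the assessment) / cost."""
    total_value = (proactive_savings
                   + breach_costs_avoided
                   + reputational_costs_avoided)
    return (total_value - assessment_cost) / assessment_cost

# Example with invented numbers (in EUR):
roi = threat_modeling_roi(
    assessment_cost=10_000,            # workshops, preparation, follow-up
    proactive_savings=25_000,          # cheaper fixes at design time
    breach_costs_avoided=40_000,       # expected long-tail breach costs avoided
    reputational_costs_avoided=15_000, # avoided trust / regulatory impact
)
print(f"ROI: {roi:.1f}x")  # (80,000 - 10,000) / 10,000 = 7.0
```

The hard part, of course, is not the arithmetic but filling in the three "avoided cost" inputs, which is what the remainder of this section explores.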

Defining the value of proactively resolving issues

Identifying flaws before the application or product is deployed brings the highest certainty for an organization that it will not have to deal with the aftermath of a breach. Yet, as mentioned above, measuring the value of the anticipated risks is quite hard. Both Threat Modeling and penetration testing aim to provide an indication before production use. The value that Threat Modeling contributes may therefore be measured along the same lines as penetration testing is quantified.

Approach: While penetration testing is traditionally conducted within the realm of a single organization, Bug Bounty programs have nowadays gained popularity; they provide insight into vulnerability findings and allow for market-value estimation across a multitude of organizations and sectors, and corresponding reports and research papers exist. As part of the value definition, Bug Bounty datasets may therefore be pulled in for correlation (or even automatic mapping) of their detailed metrics to the flaws identified during the risk discussion. An alternative is a generic mapping, for example based on CVSS risk vector classification, as presented in the referenced research paper.
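As a rough illustration of such a generic CVSS-based mapping, the sketch below assigns each finding a median Bug Bounty payout by CVSS severity band. The payout figures are invented placeholders, not values from any real program; only the severity bands follow the CVSS v3 rating scale.

```python
# Hypothetical median Bug Bounty payouts per CVSS severity band (USD).
# These numbers are placeholders; a real mapping would pull them from
# published Bug Bounty program reports.
MEDIAN_PAYOUT_BY_SEVERITY = {
    "low": 150,
    "medium": 600,
    "high": 2_500,
    "critical": 10_000,
}

def cvss_band(score: float) -> str:
    """Map a CVSS v3 base score to its qualitative severity band."""
    if score >= 9.0:
        return "critical"
    if score >= 7.0:
        return "high"
    if score >= 4.0:
        return "medium"
    return "low"

def estimated_market_value(cvss_scores: list[float]) -> int:
    """Estimate a 'market value' for a list of findings scored with CVSS."""
    return sum(MEDIAN_PAYOUT_BY_SEVERITY[cvss_band(s)] for s in cvss_scores)

# Three findings from a fictional workshop, scored during the retrospective:
print(estimated_market_value([9.1, 6.5, 3.2]))  # 10_000 + 600 + 150 = 10750
```

A real correlation would replace the placeholder table with per-segment payout data, which is exactly the contextual information discussed next.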

Discussion: A proper correlation requires the datasets from Bug Bounty programs to contain enough contextual information about the identified flaw, e.g. the market segment of the organization, the exposure of the application, and the data classification affected by the vulnerability. Under the assumption that monetization of Bug Bounty findings would require this information anyway, corresponding metrics should be available. However, a restriction may be the limited number of providers with broad market coverage offering this kind of information.

Defining the Value of Preventing a Successful Breach

In the event of a successful breach, the long-tail costs of indirect implications often outweigh the initial costs of the breach itself, e.g. the cost of revamping a whole system landscape versus the value of the data disclosed from a database in the initial attack. The value that Threat Modeling contributes here is preventing these costs to the organization.

Approach: As part of the value definition, Threat Modeling findings may therefore be related to publicly available data about security breaches and research on their long-tail costs; examples are Akamai’s DDoS and Verizon’s ransomware reports. Using this data would require mapping the flaws identified during the risk-vector discussion to the public categories used by the reports. Another approach would be to quantify the value based on the market perception of corresponding flaws: an analysis of black-market vulnerability exchanges could be conducted and mapped to the identified flaws.
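One way such a category mapping could be operationalized is the classic annualized loss expectancy (ALE) calculation. The breach-cost and likelihood figures below are invented placeholders, not numbers from the Akamai or Verizon reports.

```python
# Hypothetical long-tail cost per breach category (EUR) and the annual
# likelihood that an unmitigated flaw of that category is exploited.
# Placeholder values; a real mapping would source these from public reports.
BREACH_COST = {"ransomware": 1_200_000, "ddos": 300_000, "data_theft": 800_000}
ANNUAL_LIKELIHOOD = {"ransomware": 0.05, "ddos": 0.10, "data_theft": 0.04}

def annualized_loss_expectancy(category: str) -> float:
    """Classic ALE: single-loss expectancy times annual rate of occurrence."""
    return BREACH_COST[category] * ANNUAL_LIKELIHOOD[category]

def value_of_findings(categories: list[str]) -> float:
    """Value of resolving findings = sum of the ALEs their fixes remove."""
    return sum(annualized_loss_expectancy(c) for c in categories)

# Two findings mapped to public breach categories during the risk discussion:
print(value_of_findings(["ransomware", "data_theft"]))  # 60,000 + 32,000 = 92,000.0
```

The limits of this approach, in particular the availability and granularity of the public data behind such a table, are discussed below.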

Discussion: The first approach depends heavily on central reports being made available by central knowledge providers. Often these providers can give the full picture, in all specifics of the breach, only for some of the affected organizations, and therefore lack dimensions that would allow a proper mapping. In addition, the challenge of giving a clear rationale for mapping proactively implemented security controls to particular attacks should not be underestimated.

Defining the Value of Preventing Reputational and Regulatory Impact

Under the assumption that your organization was able to cope with the attack with reasonable effort, there may still be reputational or regulatory impacts from the initial implications of the breach. An example is the disclosure of highly sensitive datasets as part of the initial attack, combined with external awareness of the shortcomings in your operational measures. This may lead to reduced trust in your organization, with reduced sales, but also to investigations by regulatory bodies.

Approach: As part of the value definition, Threat Modeling findings may therefore be related to publicly available figures assessing the reputational impact of flaws becoming public. This would require mapping the flaws identified during the risk-vector discussion to the public categories associated with reputational damage. For defining the value of preventing regulatory claims, the European GDPR Tracker, a publicly available database, shows how privacy by design is to be achieved through deterrence. In that case too, the flaws identified during the risk-vector discussion would be mapped to the categories defined in such a public database, if one were available for security breaches.

Discussion: For both of the outlined approaches, the authors question the availability of scalable and up-to-date data that allows the mapping of the corresponding flaws. For reputational impact, the referenced research paper also indicates that the impact is industry-specific and hard to relate to internal factors, e.g. how well the organization is prepared for a breach in terms of technical measures and public communication. In the case of the regulatory claims database, we are not even aware of a version that consolidates security cases. The effort of collecting relevant cases at the required level of detail would therefore be tedious, and detailed information would most likely be restricted.
 

Conclusion

The quantification of proactive security resolution as part of Threat Modeling is a challenge, because it requires projecting what could go wrong (and what the quantitative value of that is). Based on the approaches outlined and the perceived constraints, the authors consider value estimation based on available data from Bug Bounty programs the most promising approach. Because Bug Bounty programs are actively managed and represent a broad segment of the market, current data with the required metrics should be available for the mapping. Note, however, that publicly available information from Bug Bounty programs does not account for the value of Threat Modeling in a single organization; in that case, the specifics of its business, its risks, and its costs apply.


Comments, questions, or feedback?

As mentioned in the beginning, this framework remains a work in progress, and we're keen to further develop it with input from the community. Whether you've attempted the methods outlined above and still face challenges, or if you've experimented with alternative approaches we haven't considered, we're eager to hear from you! Please share your experiences and insights with us in the comments below.


2 replies


I like this paper. It does a great job of capturing the essence of the situation. Threat modeling is somewhat failing to provide the demonstrable value the business needs. My observation is that most security approaches suffer for similar reasons. Security in general is something you can measure after you implement it. You may calculate the costs before and after you implement your security program. But the truth is that we should be able to evaluate the effect of security a priori. And guess what? With threat modeling, we can! To do that, you must adopt a quantitative risk assessment methodology like Open FAIR (see https://aka.ms/tm-openfair).

That said, I am not entirely sure about the proposed approach. Yes, it may provide some indications, but it really depends on the context. For example, the Ponemon Institute estimates the average loss from a data breach to be around $4M. I argue that the actual number will depend on the affected application and data, on the size of the target organization, on who the attacker is, and on their intent. This loss may seem limited to a huge multinational corporation, and very high to a small organization. Would it allow decisions to be taken? Probably yes, but they would be based on wrong data and thus probably be wrong. The huge organization may accept a risk that in reality is much bigger than the average number, while the small one may dismiss the estimate as not credible (and they would be right), or in the worst case may desperately try to build mitigations that are unnecessary and too expensive.

Bug bounties are focused on identifying vulnerabilities, but do they provide per se enough information to estimate the potential losses due to a potential exploit? Not really. Usually, they provide an evaluation of the CVSS, which notoriously is a poor metric to evaluate the severity of the vulnerability. They cannot do much better, because they lack most of the required information to fully estimate the potential losses.

We may try to put some compensating factors to our evaluations done using those approaches, but the truth is that we cannot directly use the numbers they provide. Still, we may use them as a reference, to double check the results obtained from the quantitative risk analysis.


@simonec you are raising absolutely valid questions, and indeed these considerations were also part of the assessment in preparation of the article. The substantial question is who wants the value of a Threat Modeling program assessed. The answer differs fundamentally depending on whether it is an expert question requiring absolute accuracy for the specific case, or a business question that strives to decide whether the additional investment for the in-depth assessment justifies the ROI.

Based on the outcome of our investigation and the research groups standing behind the data, bug bounty programs provide the best data for representing the latter. The data from these programs best reflects the variety of organizations in terms of the maturity of their security programs, their size, and their business impact. It provides a generic value which allows the security team to rationalize the investment without the effort of a qualitative assessment for each individual case.


