How do you define the success criteria for threat modeling?

  • 29 November 2022
  • 6 replies
  • 212 views

Userlevel 2

How do you determine the “success” of a threat model program? Are there any key performance indicators (KPIs) you’re using?

It is not just the number of threat models created or the number of threats reported, but the impact they make. I’m curious how the community measures the impact of a threat model.


6 replies

Userlevel 2
Badge

Great question!

 

One possible KPI is the number of apps rejected at an Architecture Review Board review that become acceptable after a threat modeling and redesign effort completes. This speaks to reducing the rework associated with architecture reviews.

Another one is measuring the reuse of threat models: as application features are added, existing threat models can be revisited and updated with new or revised designs that can then serve as reusable patterns or reference architectures as the firm’s app development and app security mature.

One useful way to help make this happen is by developing and sharing templates that standardize how the firm conducts threat models and what elements are required for each new model. In the past we used Confluence for this: a template that anyone could add to their project or product documentation and keep alive by revisiting it with each Epic or new major feature.
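For illustration, such a template could be a skeleton like the hypothetical one below; the section names are assumptions drawn from common practice, not a prescribed standard:

```
Threat Model: <system / feature name>        Last revisited: <date / Epic>

1. System overview and scope
2. Data flow diagram and trust boundaries
3. Assets and entry points
4. Threats (e.g. enumerated via STRIDE), each with a severity score
5. Mitigations / countermeasures and their owners
6. Accepted residual risks, with sign-off
```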

 

 

Userlevel 3
Badge

I struggle with this regularly.

I encourage thinking about minimally invasive requirements, which means experimenting with tracking threat model outcomes where teams already track their work. There is no need for teams to do extra work; in my opinion, required extra work will only be a drag on adoption.

For many teams that is Jira, Rally, or another similar tool.

  • Tagging stories with a specific label can help us understand how much threat/mitigation/remediation work has come out of threat modeling, and track its completion at individual or aggregate levels (see the sketch after this list)
  • Creating a story for the threat modeling activity itself (again with a specific label) can help us identify how many threat models have occurred
  • An even more mature process would be to have threat modeling as an acceptance criterion for certain tasks or larger groupings of effort
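As a rough sketch of what label-based tracking could look like, the snippet below pages through Jira’s REST search API and counts labeled stories. The instance URL, label name, and credentials are placeholders; your team’s conventions will differ.

```python
# Hypothetical sketch: count threat-modeling stories in Jira via its REST
# search API. The label ("threat-model") and URL are assumed conventions.
import requests

JIRA_URL = "https://jira.example.com"  # placeholder instance
LABEL = "threat-model"                 # placeholder label

def fetch_labeled_issues(session: requests.Session, label: str) -> list:
    """Page through every issue carrying the given label."""
    issues, start = [], 0
    while True:
        resp = session.get(
            f"{JIRA_URL}/rest/api/2/search",
            params={
                "jql": f'labels = "{label}"',
                "fields": "status,created,resolutiondate",
                "startAt": start,
                "maxResults": 100,
            },
        )
        resp.raise_for_status()
        page = resp.json()
        issues.extend(page["issues"])
        start += len(page["issues"])
        if start >= page["total"] or not page["issues"]:
            return issues

def summarize(issues: list) -> None:
    """Print how many labeled stories exist and how many are done."""
    if not issues:
        print("no threat-modeling stories found")
        return
    done = sum(
        1 for i in issues
        if i["fields"]["status"]["statusCategory"]["key"] == "done"
    )
    print(f"{len(issues)} threat-modeling stories, {done} completed "
          f"({done / len(issues):.0%})")

if __name__ == "__main__":
    session = requests.Session()
    session.auth = ("user", "api-token")  # placeholder credentials
    summarize(fetch_labeled_issues(session, LABEL))
```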

Part of the issue I have with KPIs is that the value threat modeling brings is sometimes hard to quantify, especially as the decisions made can have significant, widespread impact and value to the enterprise.

Additionally, I think some data analysis on the results of threat models can bring a lot of issues to light, so that the threat modeling team can add data to risk and governance conversations. How to measure the impact of those conversations? I’m not sure!

Userlevel 3
Badge

Hi,

 

Adding a bit to the options above: I would focus on the stories created (as mentioned by @Hoss) and treat them as ordinary vulnerability findings. This way you can apply standard KPIs and metrics: arrival rate (how quickly new findings are created over time), survival rate (how long it takes to close them), and escape rate (how many make it from dev to prod before being fixed). You can look these up in the wonderful book “The Metrics Manifesto”.
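As a toy illustration of those three metrics, here is a minimal sketch over a handful of made-up finding records; the field names (created, resolved, found_in_prod) are assumptions of mine, not the book’s notation:

```python
# Toy sketch of arrival, survival, and escape rates over finding records.
from datetime import datetime, timedelta

findings = [  # made-up data
    {"created": datetime(2023, 1, 2),  "resolved": datetime(2023, 1, 9), "found_in_prod": False},
    {"created": datetime(2023, 1, 5),  "resolved": None,                 "found_in_prod": True},
    {"created": datetime(2023, 1, 20), "resolved": datetime(2023, 2, 1), "found_in_prod": False},
]

# Arrival rate: new findings per week across the observation window.
window = max(f["created"] for f in findings) - min(f["created"] for f in findings)
arrival_rate = len(findings) / max(window / timedelta(weeks=1), 1.0)

# Survival: mean time from creation to resolution, over closed findings only.
closed = [f for f in findings if f["resolved"] is not None]
survival = sum((f["resolved"] - f["created"] for f in closed), timedelta()) / len(closed)

# Escape rate: share of findings that reached production before being fixed.
escape_rate = sum(f["found_in_prod"] for f in findings) / len(findings)

print(f"arrival: {arrival_rate:.2f}/week, "
      f"mean survival: {survival.days} days, "
      f"escape: {escape_rate:.0%}")
```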

A remark to the above: I have found that the architecture board starts to reject more deliverables with the introduction of threat modeling, so a peak in rejections at the beginning is actually normal. This is when the org realizes the amount of architectural debt (well, the security-focused parts of it). That should stabilize over time.

 

 

Badge

Good question, and for me a constant struggle!

I would like to add to the excellent suggestions and remarks already made.

As with all measurements of success and their KPIs, it starts with why you want to use threat modelling. Maybe you are implementing threat modelling to promote security awareness. Or threat modelling can be used to prioritise which security features or controls you need to build or code into your systems and programs.

Both implementations would require different types of measurements and KPIs.

For the implementation where the core goal is to raise security awareness, I would suggest looking at various security maturity measurement models, like the SANS security awareness maturity model. You can also mix and match with a team competence model like DASA.

For the implementation of threat modelling whose goal is to create more secure applications, there is a pitfall that can be difficult to steer clear of: the preparedness paradox.

“I have created threat models to limit security incidents → I don’t have that many incidents, so why would I create a threat model for every application or solution change?”

A potentially good way is to measure the “before and after” of a threat model.

  • Before: on completion of the first version of the model, where you have identified threats and ways to counter them.
  • After: when you have implemented (some of) these countermeasures and evaluate how they impact the residual threat or risk.

This will work best if you have a severity score for each threat and a “mitigation score” for each countermeasure.

It could look something like this (a very, very simple visualisation just to... well... visualise):

Before: Threat = 5, Countermeasure = 0, residual risk = 5
After:  Threat = 5, Countermeasure = 3, residual risk = 2
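A minimal sketch of that scoring idea, assuming a simple clamped-subtraction rule (severity minus mitigation, floored at zero); real tools use their own formulas:

```python
# Toy sketch of the before/after residual-risk idea. The scoring rule
# (residual = max(severity - mitigation, 0)) is an assumption, not a standard.
from dataclasses import dataclass

@dataclass
class Threat:
    name: str
    severity: int        # e.g. 0-5 severity score of the threat
    mitigation: int = 0  # aggregate mitigation score of its countermeasures

    @property
    def residual(self) -> int:
        return max(self.severity - self.mitigation, 0)

t = Threat("spoofed API client", severity=5)
print("before:", t.residual)  # 5 (no countermeasures yet)
t.mitigation = 3              # e.g. mutual TLS implemented
print("after:", t.residual)   # 2
```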

You can use this either to represent the threats in the system or solution, or to estimate the impact of a countermeasure on a threat.

When you add a state or an update timestamp to the threat model, you can measure progress over time, or per state the solution or threat model was in, compared to the previous time and/or state.

These are just examples; you may think of other triggers more in line with the SDLC stage the solution you are analysing is in.

I do suggest using a tool that supports this: mitigation scoring of countermeasures, severity scoring of threats, and threat model update tracking (a state or date/time stamp for each change).

When you do this manually, a scoring poker method (like agile poker for user stories) can help, provided you have the right subject matter experts in the room during the poker session.

I hope this helps!

Userlevel 2
Badge

...I would focus on the stories created (as mentioned by @Hoss) and treat them as ordinary vulnerability findings. This way you can apply standard KPIs and metrics: arrival rate (how quickly new findings are created over time), survival rate (how long it takes to close them), and escape rate (how many make it from dev to prod before being fixed). <clipped>

I agree with this approach 100%: treating security bugs/defects/weaknesses just like any other bugs is the best way to make sure they actually get addressed. Security issues are not special snowflakes; they’re defects like any other.

 

Badge

...treating security bugs/defects/weaknesses just like any other bugs is the best way to make sure they actually get addressed. <clipped>

 

I also agree that you should treat security vulnerabilities as one of your development quality topics. It is good to ensure that the metrics you use measure the impact of the threat model, and not just the development team’s response time to identified threats.
