Threat modelling program and practice maturity dilemma

  • 25 November 2022
  • 6 replies
  • 184 views

Userlevel 2

There are a few maturity models out there that one could apply to threat modelling, but none that quite fits the mold. I am in search of a maturity model/framework to use as a yardstick for measuring progress in our threat modelling practice and our threat modelling program.

I wanted to seek the community's expertise, advice and guidance: what maturity frameworks are folks using, or have used, that seem to work?


6 replies

Userlevel 6

Great question @Robin! I know one of our founding members, @JamesR, is working on a maturity model with a few other colleagues. The framework is still a work in progress, but I believe there may be some takeaways that could be helpful for you. Can you provide more context about what you’re building, what you want to achieve, and (if any) the specific insights you’re looking to get from a maturity model?

Userlevel 3
Badge

That's a great question. We have been working on a multi-domain maturity model that could be used to describe best practices in a threat modeling program. The domains we have worked through so far include collaboration, cybersecurity maturity, automation, shared vision of success, process maturity, resource reliance, and threat modeling skill and knowledge. Each domain is then broken into discrete levels ranging from less preferred to more preferred states. Obviously this model can be adapted to several different use cases.
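To give a rough idea of the shape, here is a minimal sketch of how domains and ordered levels could be represented; the level names below are invented for illustration and are not the model's actual wording:

```python
# Illustrative sketch only: domain level names are invented for this example
# and are not the actual levels of the model described above.
from dataclasses import dataclass

@dataclass
class Domain:
    name: str
    levels: list[str]  # ordered from less preferred to more preferred

MODEL = [
    Domain("collaboration", ["ad hoc", "scheduled", "embedded in delivery"]),
    Domain("automation", ["manual only", "partially tooled", "integrated in CI"]),
    Domain("process maturity", ["undocumented", "documented", "measured and improved"]),
    # ...remaining domains (shared vision of success, resource reliance, etc.)
]

def assess(scores: dict[str, int]) -> dict[str, str]:
    """Map a per-domain level index (0 = least preferred) to its description."""
    return {d.name: d.levels[scores[d.name]] for d in MODEL if d.name in scores}

print(assess({"collaboration": 1, "automation": 0}))
```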

Are there any other areas you would consider including in this maturity model, or in any other?

Userlevel 2

@JamesR, I like the idea and the concept you are working on. We are working on a similar concept. Fundamentally, there are two key questions we are trying to answer:

  1. How mature is our threat modelling practice within the org?
  2. How mature is our threat modelling program? 

There are aspects of the practice and the program that may overlap, and aspects that may not. Equally, I’ve realized that it is possible to have a mature practice and an immature program, and likewise a mature program and immature practices, although in many cases the practice and the program are more or less in sync.

I look forward to seeing a visual draft of what your model would look like, James, and I am equally happy to share our visual draft as well.

Badge

Looking forward to seeing / applying this, @JamesR !

I like the domains you’ve chosen. I agree that this will be an important addition to current models and metrics, such as the descriptions of the OWASP SAMM levels (1-3).

Userlevel 4
Badge

I totally agree with @Zoe Braiterman - as a simplified (yet sufficient!) model, the SAMM levels work great.

 

Userlevel 4
Badge

In fact, let me follow @Zoe Braiterman’s lead and expand a bit on why I think it works:

Let’s look for a second at the OWASP SAMM (v2), under Model/Design/Threat Assessment:

  • Maturity level 1: best effort identification of high-level threats
  • Maturity level 2: standard and enterprise wide analysis of software-related threats
  • Maturity level 3: proactive improvement of threat analysis coverage

Those translate into two streams - Application Risk Profile and, of more interest to us, Threat Modeling:

  • level 1: perform best-effort threat modeling with brainstorming and existing docs
  • level 2: standardize threat model training and processes
  • level 3: continuous optimization and automation of threat modeling

That sounds pretty good to me as a basic program maturity model. Practice, however, is a completely separate thing: you can do all of that and still not produce valuable findings, or you can produce perfectly valuable findings while staying at level 1 of maturity.
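If it helps to make the program side concrete, a self-check against those three levels could be as simple as the sketch below; the questions are my own paraphrase of the stream, not SAMM’s verbatim assessment criteria:

```python
# Rough sketch of a program-level self-check loosely based on the SAMM
# Threat Modeling stream above. The questions are a paraphrase, not
# SAMM's verbatim criteria.
QUESTIONS = [
    (1, "Do teams threat model at least on a best-effort basis (brainstorming, existing docs)?"),
    (2, "Are threat modeling training and processes standardized across teams?"),
    (3, "Is threat modeling coverage continuously optimized and automated?"),
]

def program_level(answers: dict[int, bool]) -> int:
    """Return the highest level for which this and all lower levels are satisfied."""
    level = 0
    for lvl, _question in QUESTIONS:
        if answers.get(lvl, False):
            level = lvl
        else:
            break
    return level

print(program_level({1: True, 2: True, 3: False}))  # -> 2
```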

Which brings us to the success criteria: what would a formalized set of successes look like? We’re tempted to quantify and measure, which is quite different from formalizing. You can’t measure a percentage of a set of unknown size (the number of findings that exist in the thing being modeled), so we can’t use that. We also can’t measure the rate at which findings occur - too many independent variables affect it that are not a function of the “goodness” of the threat modeling process and exercise.

I think the ultimate success criterion of a threat modeling exercise is whether the team feels the time spent doing it was worth it. If they are not extracting any value from the time and effort of conducting the exercise, then something is definitely wrong and the exercise has been unsuccessful.
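If you do want to capture that signal over time without over-quantifying it, something as lightweight as a per-session pulse check would do; this is purely illustrative, not a recommended tool:

```python
# Purely illustrative: a lightweight way to record the "was it worth the time?"
# signal after each threat modeling session, without pretending it's a precise metric.
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class SessionFeedback:
    session: str
    worth_it_scores: list[int] = field(default_factory=list)  # e.g. 1 (no) to 5 (absolutely)

    def add(self, score: int) -> None:
        self.worth_it_scores.append(score)

    def summary(self) -> float:
        return mean(self.worth_it_scores) if self.worth_it_scores else 0.0

fb = SessionFeedback("payments-service Q4 review")
for score in (4, 5, 3, 4):
    fb.add(score)
print(f"{fb.session}: average 'worth it' score {fb.summary():.1f}/5")
```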

With all that said, I am looking forward to what @JamesR is cooking!
