Using LLMs to help lead threat modeling sessions

  • 6 July 2023

Hey all,


I haven't seen any posts or much else out there… but I'm sure some people are thinking about this, right? With the right DFD metadata and possibly a Gherkin-like way to describe scenarios, it feels like something could be done.
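To make it a bit more concrete, here's a rough sketch of the kind of input I'm imagining. This is purely illustrative: the DFD/Gherkin structure and the call_llm placeholder are things I made up, not any existing tool or API.

```python
# Hypothetical sketch: turn DFD metadata plus a Gherkin-style scenario into an LLM prompt
# for facilitating a threat modeling session. call_llm is a placeholder for whatever
# model/API you'd actually use.

import json

# Minimal DFD metadata: elements, data flows, and trust boundaries (example values only)
dfd = {
    "elements": [
        {"id": "web", "type": "process", "name": "Web frontend"},
        {"id": "api", "type": "process", "name": "Orders API"},
        {"id": "db", "type": "datastore", "name": "Orders DB"},
    ],
    "flows": [
        {"from": "web", "to": "api", "data": "order details", "protocol": "HTTPS"},
        {"from": "api", "to": "db", "data": "order records", "protocol": "TLS/SQL"},
    ],
    "trust_boundaries": [
        {"between": ["web", "api"], "name": "internet -> internal"},
    ],
}

# Gherkin-like description of the scenario under discussion
scenario = """
Feature: Place an order
  Scenario: Customer submits an order
    Given an authenticated customer on the web frontend
    When they submit an order with payment details
    Then the Orders API stores the order in the Orders DB
"""

# Build a session-facilitation prompt from the structured inputs
prompt = (
    "You are helping facilitate a threat modeling session.\n"
    "Using STRIDE, list plausible threats for each flow and trust boundary,\n"
    "and suggest questions the facilitator should ask the team.\n\n"
    f"DFD metadata:\n{json.dumps(dfd, indent=2)}\n\n"
    f"Scenario:\n{scenario}"
)

# response = call_llm(prompt)  # placeholder: swap in your model of choice
# print(response)
```

The idea being that the appsec team curates the DFD metadata and scenarios once, and the model helps drive the session questions at scale.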

What are the collective thoughts on using AI to help lead threat modeling sessions and scale appsec teams' efforts? Is there anything out there that's already mildly useful?

If I knew what I was doing, that's probably something I'd start thinking about building :-)

Cheers,



2 replies


It's definitely something that's been discussed. Back in April, there was a webinar between Adam Shostack and Gary McGraw on this very subject - When will Adam Shostack be replaced by ChatGPT? (https://www.youtube.com/watch?v=9k3scZFKYYA). We're discussing AI almost daily at work, along with the different potential use cases (and risks).

As for actual tools available, I'm not currently aware of any for LLM-assisted threat modeling… yet. Most of what I've seen lately is around attack tools (e.g. BurpGPT, PentestGPT, etc.).

Well,
It's possible to use foundational models for threat modelling purposes (I'm using them myself).
You just need to set expectations about which phases would require what kind of training and tuning.

I'll be happy to disclose more if my paper gets accepted.


