One of the issues I run into constantly with threat modeling is the noise level: that is, a large number of false positives.
One way to look at this, I suppose, is that these false positives are an indication that you’ve cast a wide enough net.
But this leads to hours spent tracking down the justification to write off these not-applicable threats.
(These false positives also tend to overwhelm developers who are new to threat modeling; it is difficult to convince them that threat modeling works when they have to spend so much time weeding out the junk.)
I tend to think the answer to this is "better data". However, given that threat modeling is *ideally* done as early in the SDLC as possible, when rich, complete design data may not yet be available, how can one mitigate these false positives and tune threat modeling to achieve higher-quality, less noisy results?
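To make "better data" concrete, here is a toy sketch of the kind of thing I have in mind (names like `Element` and `Rule` are invented for illustration, not any real tool's API): rules treat unknown attributes as "keep the threat, but flag it", so the wide net stays early on, while each attribute you do fill in later positively rules threats out instead of leaving them for manual write-off.

```python
# Hypothetical sketch: a toy rule engine showing how richer element metadata
# can suppress not-applicable threats. All names here are made up for
# illustration; this is not modeled on any specific threat modeling tool.
from dataclasses import dataclass


@dataclass
class Element:
    name: str
    # Attributes we may or may not know yet this early in the SDLC.
    # None means "unknown" -- the cautious default is to keep the threat.
    internet_facing: bool | None = None
    stores_pii: bool | None = None
    authenticates_users: bool | None = None


@dataclass
class Rule:
    threat: str
    # Predicate returns True (applies), False (ruled out), or None (unknown).
    applies: callable


def evaluate(element: Element, rules: list[Rule]) -> list[str]:
    findings = []
    for rule in rules:
        verdict = rule.applies(element)
        if verdict is False:
            continue  # positively ruled out by known metadata
        tag = "" if verdict else " [unverified: missing data]"
        findings.append(f"{rule.threat}{tag}")
    return findings


RULES = [
    Rule("Credential stuffing", lambda e: e.authenticates_users),
    Rule("PII exposure", lambda e: e.stores_pii),
    Rule("Direct internet attack surface", lambda e: e.internet_facing),
]

# Early in the SDLC we know almost nothing, so every threat fires (noisy),
# but at least the unverified ones are flagged for later triage:
early = Element("billing-service")
print(evaluate(early, RULES))

# With better data, non-applicable threats drop out automatically instead
# of requiring hours of manual write-offs:
later = Element("billing-service", internet_facing=False,
                stores_pii=True, authenticates_users=False)
print(evaluate(later, RULES))  # ['PII exposure']
```

The point of the three-valued verdict is that "unknown" and "not applicable" are different answers, and conflating them is exactly where the noise (or, worse, a silently dropped real threat) comes from.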
Looking for ideas here…