Secure coding guru extraordinaire Jim Manico has in the past opined, “Forget threat models.”
One of Jim’s many superpowers is focusing attention so that people will thoughtfully consider the issues he’s raising. Often, he points out to us that “the emperor has no clothes”, i.e., our practices are insufficient or downright dysfunctional. To be sure, some threat modelling approaches can, indeed, hurt more than help.
When I’ve discussed this with Jim, I agree with what I understand his chief criticism to be: if the threat model is built to make security people feel good, it’s useless; don’t bother.
Threat models have several important uses, none of which is to salve infosec anxiety.
Testing map: A catalog of possible attack vectors and techniques (a “threat library” correlated to the target system’s attack surfaces) provides a necessary prelude to testing. Typically, skilled penetration testers will build this type of model as they prepare to test.
Risk discovery: Once a threat library has been identified, adding a risk rating to each credible attack scenario provides decision makers with data that can be used for choosing security-related investments. A risk discovery model helps decide which items are the most important to address and which may be deprioritized or delayed; that is, it serves as a strategy input.
Development: This model is a superset of the model types above. Mitigations are derived from the attacks, attack surfaces, and risk ratings identified above. That is, build a model that is useful to developers so that they may discover existing weaknesses and anticipate those that have some likelihood of showing up in the future. The intention is to build software that is resilient.
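The three model types above can be thought of as layers over one shared structure: a threat library of attack scenarios, each tied to an attack surface, optionally scored for risk, and eventually paired with a mitigation. The following is a minimal sketch of that structure; all names, the example scenarios, and the simple likelihood-times-impact scoring are illustrative assumptions, not a prescribed method.

```python
from dataclasses import dataclass

@dataclass
class AttackScenario:
    """One entry in a hypothetical threat library."""
    attack_surface: str   # where the attack applies (e.g., a login API)
    technique: str        # the attack technique under consideration
    likelihood: int       # 1 (rare) .. 5 (expected)
    impact: int           # 1 (minor) .. 5 (severe)
    mitigation: str = ""  # defense added when building the development model

    @property
    def risk(self) -> int:
        # Deliberately simple rating; real programs use richer schemes.
        return self.likelihood * self.impact

# Testing map: catalog attacks correlated to the target's attack surfaces.
threat_library = [
    AttackScenario("login API", "credential stuffing", 4, 4, "rate limiting + MFA"),
    AttackScenario("file upload", "malware upload", 2, 5, "content scanning"),
    AttackScenario("search endpoint", "SQL injection", 3, 5, "parameterized queries"),
]

# Risk discovery: rank scenarios so decision makers can choose what to
# address first and what may be deprioritized or delayed.
for s in sorted(threat_library, key=lambda s: s.risk, reverse=True):
    print(f"risk={s.risk:2d}  {s.attack_surface}: {s.technique} -> {s.mitigation}")
```

The point of the sketch is only that the development model extends, rather than replaces, the testing and risk models: the same scenarios gain mitigations that developers can act on.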
The development model is what we use in software security. It gets the most attention in articles, panels, and so forth, and is the model to which various AppSec standards and directives refer.
AppSec focus does not diminish the usefulness of other types of models. The other two types may very well be generated by security experts (a penetration tester or a risk and compliance expert) and may require limited input from others.
On the other hand, development models, that is, models intended to drive secure design decision making and to identify the security that will be built as part of a system, really don’t work well when the following occur:
- The security expert parachutes into an ongoing development team exactly once
- The expert renders a model in isolation, producing a bunch of work (security features and requirements) with little or, worse, no reference to the development methodologies and cycles, or the business imperatives, that the developers must heed
- The expert drops the additional work on the development team, then
- The expert disappears but expects developers to accept the model and its decisions without question
Indeed, I’ve seen this type of “threat modelling” all too often. I concur with Jim that such a model tends to be so completely out of phase with developer needs that it too often causes more harm than good:
- The model arrives too late, causing expensive rework
- The model quickly becomes obsolete and out of sync due to the dynamic nature of development, especially in Agile shops (which are today’s norm)
- A parachute/drop/disappear, point-in-time process generates enormous amounts of ill will and interdepartmental friction
- The process misses the opportunity to foster security thinking throughout development
Threat modelling mustn’t be thought of as an out-of-band “thing” done to “please” security.
Threat modelling is an analysis technique. It’s no different in timing from considering performance, scale, usability, etc. To build appropriate security, we have to consider attacks and their defenses (at varying levels of specificity, true!), just as we consider all the other factors that running software must face. We ask, “How can we help this software survive its likely adversaries?” Hence, threat modelling is best performed as a natural and organic part of development.
Usually, threat modelling is an integral input to structure (architecture) and design (though certainly not the only input). Threat modelling must be among the analyses taken up in whatever manner, and at whatever time, design is arrived at.
Plus, in my very humble experience, threat modelling is very much a “team sport”; the model is improved greatly through inclusive, multi-discipline participation. Very much as Agile Scrum improves development through the active participation of every member of the Scrum team, so too threat modelling benefits from thoughtful input from almost everyone involved throughout the software creation, implementation, and operation process.
In that respect, Mr. Manico is exactly right. Software security (development) threat modelling done by and for security rarely provides enough value and too often leads to dysfunction.
If a model isn’t useful to its consumers, why build it?
- To be clear, Jim has contributed greatly to the OWASP Application Security Verification Standard in which the first step is “Architecture, Design, and Threat Modeling”.
- Several Threat Modeling Connect members have authored books on software security threat modelling, including this author. Please see @Adam Shostack, @izar, @Chris Romeo, etc. Readers may also want to look at the Threat Modeling Manifesto, which summarizes best practices and lessons learnt.
- Regarding the last point about the scenarios where the development model doesn’t work well: I have never met a competent engineer who hesitated to question engineering decisions when necessary. I encourage those discussions. Good engineering requires review and critique.