Exceptional Threat Modeling

  • 11 August 2023

(as seen in The Security Table’s first LinkedIn live podcast episode!)

Even as system builders at large have become more willing to accept Threat Modeling as a useful practice with clear positive results, it remains difficult to institute as part of the secure software development lifecycle in most organizations.

The reasons are many, but chief among them is the fact that eliciting threats from a design requires an amount of expertise, experience and savviness that is hard to quantify and harder to teach, even in security practitioners. We can train people to be aware of security weaknesses that may be introduced at the design stage, but it is harder to train them to identify the many ways those weaknesses can sneak in while a system is being designed, or implemented in an agile way where the design couples tightly with the implementation.

This author (and many others!) has tried to make that path shorter with Continuous Threat Modeling (https://github.com/izar/continuous-threat-modeling), which champions a reliance on first-order fundamentals and principles, if-this-then-that checklists and a “threat model every story” approach to surface those issues as close as possible to their design and implementation. But even approaches like it, which are by definition developer-centric, suffer from a missing piece of the puzzle: the developer has to be sufficiently security-aware to identify the weaknesses they may be introducing into the system.

Security Teams everywhere have supplemented threat modeling with implementation guidelines: usually extremely long, generic documents covering everything from the correct use of eval() in its many incarnations to the right library or way to allocate memory in C. Add to this the many hours of question-based CBTs we subject our security “customers” to, and the scenario is set for documentation fatigue: Security writes it and Development ignores it.

Lately, efforts like CISA’s “Shifting the Balance of Cybersecurity Risk: Principles and Approaches for Security-by-Design and -Default” have stressed the need for exactly that: by-default and by-design. It should be hard for an unaware developer or user to create a situation where the security, safety and privacy (SSP) of the system is compromised; there need to be enough guardrails in place by default to guarantee a minimum acceptable level of SSP, so that the functioning and use of the system is acceptable to all.

Looking at the picture that develops from these observations, an alternative methodology of Threat Modeling emerges: Exceptional Threat Modeling. 

“Exceptional” clearly means “unusually good; outstanding”, and that is the hope. But it also means “unusual; not typical”, and that is where the method focuses: the exception.

In most cases, and especially in the SaaS environment, once a rhythm is established and development hits its stride, we see more and more cases of “another one of those”: the new microservice is just like the other 999, differing only in its business case, or in the kind of data it handles and the way it manipulates it. Everything else is, well, just another one of those. It gets deployed in the same place, it uses the same basic infrastructure, it communicates with its peers or the Web in the same way, and it authenticates and authorizes its users or other services in the same way. It is a well-understood quantity, and as such it can be treated as a generic piece of the puzzle.

What we should be looking for is the exception.

Rather than long questionnaires of “did you do this? Did you do that?”, Security should focus on a single question: “given the kind of thing you are doing, and given this set of guidelines for how we do that thing around here, did you do anything different?”. It can then focus only on the answer and dig into why this one is different from the others: do we need a new set of guidelines? Will there be more like this? Is the present set of guidelines lacking and due for a refresh?
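
To make that single question concrete, here is a minimal sketch of what a structured exception attestation could look like, assuming teams declare which known archetype (“another one of those”) they are building and list only their deviations. Every name here (ServiceAttestation, the archetype label, the example service) is a hypothetical illustration, not a prescribed format.

    from dataclasses import dataclass, field

    @dataclass
    class GuidelineException:
        guideline: str    # which "what good looks like" item is deviated from
        description: str  # what the service does differently
        rationale: str    # why the deviation is needed

    @dataclass
    class ServiceAttestation:
        service: str
        archetype: str            # the known "thing", e.g. "internal-microservice"
        follows_guidelines: bool  # the security promise
        exceptions: list[GuidelineException] = field(default_factory=list)

        def needs_full_threat_model(self) -> bool:
            # Only services that deviate from their archetype get the full treatment.
            return not self.follows_guidelines or bool(self.exceptions)

    attestation = ServiceAttestation(
        service="billing-exporter",
        archetype="internal-microservice",
        follows_guidelines=True,
        exceptions=[GuidelineException(
            guideline="storage-encryption",
            description="writes CSV exports to a partner-managed bucket",
            rationale="the partner integration requires their bucket",
        )],
    )
    print(attestation.needs_full_threat_model())  # True -> route to Security for a closer look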

This also frees the Security Team to add the verification layer to the cake. Once the pieces of the puzzle are identified and their security guidelines written down, there can be a more focused effort on developing tests to verify, and falsify, the “security promise” the development team made when they attested that they were indeed following those guidelines. Dropping that piece into the overall puzzle should be seamless, its tabs and notches clicking into the neighboring pieces to complete the picture. If the seams are coarse, or if the piece has to be forced into place, chances are it is not “another one of those”, and it needs special care to see how it interacts with the rest of the puzzle. Analogy aside, this is when security exceptions surface, or the security promises show themselves not to have been fulfilled.

 


 

In this model, rather than constantly seeking to understand every piece of every service or every addition to the monolith, Security can free up time to be forward-looking: to focus on the next problem to have, or on what is exceptional at the business level of the system. If Security can safely rely on, and test, the security promise that every piece of storage that goes into a cloud bucket will indeed be encrypted, that the encryption uses the correct cipher, that the key is of the expected length, randomly generated as expected, stored where it should be, rotatable in case of need and accessed only by the roles that should have that right, and that all of this is constantly monitored and observed, THEN Security is free to look at the exceptions that Development creatively brings up.
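
As one hedged example of testing such a promise, the sketch below checks that cloud buckets default to SSE-KMS encryption with a key that has rotation enabled. It assumes an AWS environment, the boto3 library and read-only credentials, and the specific policy choices (SSE-KMS required, rotation required) are illustrative, not a recommendation.

    import boto3

    s3 = boto3.client("s3")
    kms = boto3.client("kms")

    def unfulfilled_promises(bucket: str) -> list[str]:
        """Return the broken promises for one bucket; an empty list means the promise holds."""
        try:
            enc = s3.get_bucket_encryption(Bucket=bucket)
        except s3.exceptions.ClientError:
            return [f"{bucket}: no default encryption configured"]

        rule = enc["ServerSideEncryptionConfiguration"]["Rules"][0]
        sse = rule["ApplyServerSideEncryptionByDefault"]
        if sse["SSEAlgorithm"] != "aws:kms":
            return [f"{bucket}: expected SSE-KMS, found {sse['SSEAlgorithm']}"]

        key_id = sse.get("KMSMasterKeyID")
        if not key_id:
            return [f"{bucket}: SSE-KMS without a customer-managed key"]

        if not kms.get_key_rotation_status(KeyId=key_id)["KeyRotationEnabled"]:
            return [f"{bucket}: key rotation is not enabled for {key_id}"]
        return []

    for b in s3.list_buckets()["Buckets"]:
        for finding in unfulfilled_promises(b["Name"]):
            print("UNFULFILLED PROMISE:", finding)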

Units like Network/Infrastructure, Compute, Storage and Databases contribute to Security by creating clear and verifiable guidelines, which Security expresses in “what good looks like” checklists that are easy for any developer or architect to follow. Automation then takes in Development’s attestation that their new thing looks like what good looks like, with the exception of this or that detail, which gets the full threat modeling treatment as necessary, reducing the need for the Security team to look at every single piece of design that comes across. Once implemented, the result is automatically verified against the guidelines, and any unfulfilled security promise gets an opportunity to be identified and, if necessary, corrected.
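
A minimal sketch of that intake gate follows, assuming the “what good looks like” checklist can be kept as structured data per archetype; the archetype name, checklist items and routing messages below are made up for illustration.

    # Hypothetical "what good looks like" baseline, one entry per archetype.
    WHAT_GOOD_LOOKS_LIKE = {
        "internal-microservice": {
            "transport": "mTLS via the service mesh",
            "authn": "platform OIDC tokens",
            "storage": "SSE-KMS encrypted buckets with platform-managed keys",
            "logging": "structured logs to the central pipeline",
        },
    }

    def route(service: str, archetype: str, attested: dict) -> str:
        """Compare an attestation against its archetype and decide where it goes next."""
        baseline = WHAT_GOOD_LOOKS_LIKE[archetype]
        deviations = [k for k, v in attested.items() if baseline.get(k) != v]
        unanswered = [k for k in baseline if k not in attested]
        if deviations or unanswered:
            # Not "another one of those": send it to the threat modeling queue.
            return f"{service}: full threat model (deviations={deviations}, unanswered={unanswered})"
        # Matches the archetype; the promise still gets verified after deployment.
        return f"{service}: auto-accept, schedule post-deployment verification"

    print(route("billing-exporter", "internal-microservice", {
        "transport": "mTLS via the service mesh",
        "authn": "platform OIDC tokens",
        "storage": "CSV exports to a partner-managed bucket",   # the exception
        "logging": "structured logs to the central pipeline",
    }))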

When exceptions are identified, the organization has an opportunity to accept the risk, mitigate it, or create a new “thing” with its own set of guidelines, so that it becomes the seed for a new category of “another one of those”.


1 reply


I convey this idea by stealing a paradigm from the devops world and tell developers “I want to threat model cattle, not pets”. They get the idea pretty quickly.
