Threat Models in SBOMs

  • 1 December 2022
  • 4 replies

Userlevel 2

SBOMs are a critical part of securing the software supply chain.  The catalogue of libraries and components in an SBOM is obviously the key element of this: it can be queried by security tools to identify known vulnerabilities.  So far so good. 

I can’t help thinking that including a threat model in the SBOM would help to provide some additional context about the security decisions made by the vendor.  What I mean is that, as a software vendor, I choose my third-party components and include them in my software.  When I do that, I may make some security decisions, such as including a library with a known vulnerability because I know that we don’t use that library in a way that exposes the vulnerability.  A threat model would be a way for me to communicate this to readers of my SBOM.
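As a rough sketch of what that communication might look like (the field names here are invented for illustration; no SBOM standard defines a “threat_model” property today, and the CVE ID is a placeholder):

```python
# Hypothetical sketch: an SBOM component entry annotated with vendor
# threat-model context. Field names are invented for illustration;
# the CVE identifier is a placeholder, not a real advisory.
component = {
    "name": "libexample",
    "version": "2.4.1",
    "purl": "pkg:generic/libexample@2.4.1",
    "threat_model": {
        "known_issues": ["CVE-2021-0000"],  # placeholder ID
        "rationale": (
            "We call only the parsing API; the vulnerable "
            "network listener is never initialised."
        ),
        "assumptions": ["component runs with no open sockets"],
    },
}

def rationale_for(comp, cve):
    """Return the vendor's stated rationale if the CVE is acknowledged."""
    tm = comp.get("threat_model", {})
    if cve in tm.get("known_issues", []):
        return tm.get("rationale")
    return None

print(rationale_for(component, "CVE-2021-0000"))
```

The point of the sketch is simply that a scanner hitting this SBOM could surface the vendor’s rationale alongside the match, instead of a bare “vulnerable component found”.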

What do the SBOMmers think of this?  Has any work been done in the area of marrying threat modeling and SBOMs?  

4 replies

Userlevel 4

Maybe a threat model? But is that much information necessary? Certainly, I would hope that every piece of software benefits from a living threat model that tracks and grows with the software it describes. (“Beautiful dreamer”? Indeed.)

I had hundreds of experiences at Intel where we saw a lot of components put together in unique ways, often by disparate teams who had little working relationship with each other. The assumption was, “threat model complete for each component”, one and done.

Sadly, this is a misconception: threat models are not additive.

You can’t just sum disparate models together and get the “right answer”, one and done. Every time we put two things together, we may impact each component’s model. Or, more properly, there is a revised model that must take the new interactions into account.

Separate teams working on parts of a whole may benefit from someone paying attention to the whole threat model. On the other hand, in situations like those at Intel, I propose that the solution is fairly simple and straightforward: each component’s documentation (oh, that horrible word!) sets out its security assumptions - essentially, the critical part of its threat model necessary for hooking things together. What does the component’s security protect against, and what does it not? What might other components need to know in order to protect themselves appropriately?

For example, a gateway might declare: “This gateway validates the protocol of incoming messages, but it does not examine or change in any way the content inside of messages. Validating content is the responsibility of downstream components that will process the content carried by the protocol.”
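One way to make that kind of declaration machine-checkable (a sketch only - the schema and function here are invented, not any standard’s):

```python
# Hypothetical "security contract" for the gateway example above.
# The schema is invented for illustration.
gateway_contract = {
    "component": "gateway",
    "guarantees": ["validates protocol of incoming messages"],
    "non_guarantees": ["does not inspect or sanitise message content"],
    "downstream_obligations": [
        "validate all content carried by the protocol"
    ],
}

def unmet_obligations(upstream, downstream_claims):
    """List upstream obligations the downstream component doesn't claim to meet."""
    return [
        o for o in upstream["downstream_obligations"]
        if o not in downstream_claims
    ]

# A downstream service that makes no validation claims gets flagged:
print(unmet_obligations(gateway_contract, []))
# One that claims to validate content passes cleanly:
print(unmet_obligations(
    gateway_contract,
    ["validate all content carried by the protocol"],
))
```

Even this crude matching would catch the “nobody validated the content” gap at integration time, rather than in production.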

We actually had components describe these sorts of assumptions as a part of their offering. Seemed to work pretty well.

As an aside, I proposed precisely this sort of enhancement to NIST during its SBOM comment period - what I call a component’s “security contract”. In its wisdom, NIST instead revised the SBOM documentation to explicitly note SBOMs’ security limitations. Haha. I had an effect, I guess (just not the one I wanted).

Userlevel 2

Your thought of including more information in the SBOM sounds a bit like combining VEX information with the SBOM, so that the customer can make informed decisions about the components and the potential vulnerabilities in those components (plus potential remediation or mitigation of those vulnerabilities).

I’m new to VEX myself, but after reading some of the content, it sounds like it could potentially answer questions 2 and 3 of the four threat modelling questions.

So is that, in reality, a way of including (at least part of) a Threat Model with an SBOM?
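If it helps to make that concrete: a VEX statement essentially pairs a vulnerability with a product status and a machine-readable justification. A minimal sketch (the status and justification values are real ones from CSAF VEX, but the dict shape is simplified and the IDs are placeholders):

```python
# Minimal sketch of the information a VEX statement conveys.
# "status" and "justification" values come from CSAF VEX;
# the dict shape itself is simplified, and the IDs are placeholders.
vex_statement = {
    "vulnerability": "CVE-2021-0000",   # placeholder ID
    "product": "example-app 3.1",       # hypothetical product
    "status": "known_not_affected",
    "justification": "vulnerable_code_not_in_execute_path",
}

# "What can go wrong?" -> the vulnerability;
# "What are we going to do about it?" -> the status plus justification.
print(f"{vex_statement['vulnerability']}: {vex_statement['status']} "
      f"({vex_statement['justification']})")
```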

My 2 cents 8-).

Nigel H.

Userlevel 2

Thank you for the pointer @nznigel - it looks like the CSAF standard used to create VEX documents already has a “threats” property that can be associated with a given vulnerability, as well as remediation information.  Very cool!
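For concreteness, here’s a simplified sketch of how a CSAF vulnerability entry can carry both threats and remediations (the property names and category values are from CSAF 2.0, but this is heavily trimmed - a real document needs the full document/product_tree structure and would be JSON, not Python - and the IDs are placeholders):

```python
# Trimmed sketch of a CSAF 2.0 vulnerability entry. Property names and
# category values follow CSAF 2.0; IDs are placeholders, and a real
# document carries much more surrounding structure.
vulnerability = {
    "cve": "CVE-2021-0000",  # placeholder ID
    "product_status": {
        "known_not_affected": ["CSAFPID-0001"],
    },
    "threats": [
        {
            "category": "impact",
            "details": "Vulnerable code is not in the execute path.",
            "product_ids": ["CSAFPID-0001"],
        }
    ],
    "remediations": [
        {
            "category": "mitigation",
            "details": "Keep the component's network listener disabled.",
            "product_ids": ["CSAFPID-0001"],
        }
    ],
}

# The "threats" entries are where vendor threat-model context could live:
for t in vulnerability["threats"]:
    print(f"{t['category']}: {t['details']}")
```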

VEX seems like a great way to shoehorn threat model data into an SBOM; more analysis required.


Userlevel 4

@nznigel thanks for the VEX pointer; I hadn’t seen that. I definitely need to dig into how “exploitability” is arrived at: that could be very useful for existing vulnerabilities(1).

As so often happens with threat models, practitioners get stuck in what exists. The power of threat modelling is anticipating what may arrive in the future (i.e., weaknesses that don’t necessarily exist today).

A threat model is not just another analysis like the scans and tests that hunt for existing weaknesses. We need not confine ourselves to “vulnerabilities” - in fact, we mustn’t! 

VEX analysis can be applied to any weakness (if I understand correctly?). That’s important to threat modelling, since exploitability is a key component of risk. Great! 

Still, and I try to make this point (over and over), threat models open an opportunity to gaze into an educated crystal ball so that we prepare for threats that will arise.

Take the classic example of a buffer overflow. Experience from the most mature software security practices demonstrates that no matter how hard we try with languages like C/C++, memory vulnerabilities will eventually leak into release. The less mature the practice, the more leaks there will be.

So even if, as far as we know, a piece of C++ contains no issues today, we’ve got to assume that eventually someone will make a mistake that is missed by our SAST+DAST+fuzzing regime. Threat modelling gives us a chance to prepare today for that which has some likelihood of showing up tomorrow, and which we believe will have impact. Anticipatory!

Coming back to VEX and SBOM: neither of them addresses the issue I wrote about above (and in my books - yada, yada, blah, blah - Brook’s once again blathering on). To wit: how components interact.

There might be no “vulnerability” in either component - each is acting as designed - but their interaction introduces a weakness. Thus I arrive at my “security contract”: the gateway’s security assumptions open an opportunity to exploit future issues in any component receiving from the gateway (as per my example in my first comment).

Maybe I’m just too poor a writer to explain this problem clearly? I have certainly done my best, both here and in Secrets and our DevOps-focused book. Sorry if I cannot convey the problem well enough.


  1. Vinay Bansal’s and my Just Good Enough Risk Rating (JGERR) can be used for an “exploitability” rating; that’s partially what it addresses. I’ve since revised it multiple times so that the exploitability attribute hopefully gets clearer with each revision. I will add that the most important factor, which we used very successfully at both Cisco and McAfee, is whether exploitation in context delivers anything useful to the attacker. That should (IMVHO) be the first assessment. We know that the vast majority of issues reported never get used “in the wild”. This might be due to the complexity of exploitation, but is often due to exploitation prerequisites that render exploitation moot - like requiring high privileges to exploit when exploitation delivers those same privileges. Attackers don’t sit around overflowing buffers at high privilege; security researchers do. After gaining high privileges, attackers are having their way with the OS, prosecuting their goals; game over, pw0n.
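To illustrate the order of assessment described in the footnote - attacker value first, then whether prerequisites render exploitation moot - here’s a hypothetical sketch (this is not JGERR itself; the function and its return strings are invented):

```python
# Hypothetical triage sketch illustrating "attacker value first".
# Not JGERR; the function and return strings are invented for
# illustration only.
def triage(delivers_attacker_value, prerequisites_already_grant_goal):
    """Assess attacker value first, then whether exploitation
    prerequisites render the issue moot, before anything else."""
    if not delivers_attacker_value:
        return "deprioritise: no useful gain for an attacker"
    if prerequisites_already_grant_goal:
        return "deprioritise: prerequisites render exploitation moot"
    return "assess further: complexity, impact, etc."

# The high-privilege example from the footnote: exploitation requires
# high privileges and delivers only those same privileges.
print(triage(True, True))
```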