Tom Pace’s Biggest Problem: Open Source Component Supply Chain Attacks

This post highlights the problem, revealed by firmware analysis, that many software products and components don’t have a CPE in the NVD. (Part 3/3)

I recently wrote two posts (the second one is here) about a chilling revelation that Tom Pace of NetRise made at an informal meeting I attended. NetRise specializes in firmware security, and Tom has noted that a huge percentage of software and firmware products aren’t registered at all in the National Vulnerability Database (NVD), meaning no CPE name has been created for them. That in turn means not a single vulnerability has ever been reported for those products.
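
If you want to see what “not registered in the NVD” looks like in practice, here’s a minimal sketch (in Python) of how you might check whether a product name matches any CPE record at all. It assumes the NVD 2.0 public REST API endpoint and parameters as I understand them, and the product name is just a placeholder; a real check would also handle API keys, paging and rate limits.

```python
import requests

# NVD 2.0 CPE lookup endpoint (public, but rate-limited; an API key raises the limit).
NVD_CPE_URL = "https://services.nvd.nist.gov/rest/json/cpes/2.0"

def product_has_cpe(keyword: str) -> bool:
    """Return True if the NVD has at least one CPE record matching the keyword."""
    resp = requests.get(NVD_CPE_URL, params={"keywordSearch": keyword}, timeout=30)
    resp.raise_for_status()
    return resp.json().get("totalResults", 0) > 0

if __name__ == "__main__":
    name = "Acme FirmwareOS"  # made-up product name, for illustration only
    if product_has_cpe(name):
        print(f"{name}: at least one CPE is registered in the NVD")
    else:
        print(f"{name}: no CPE found - no vulnerability has ever been reported for it")
```

If that query comes back empty, it doesn’t mean the product has no vulnerabilities; it just means nobody has ever reported one.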

Thus, if you’re comparing competing IoT devices and you discover that one device has a few serious software vulnerabilities listed in the NVD while the other doesn’t appear in the NVD at all (and thus has no identified vulnerabilities), you would be making a huge mistake if you ruled the first device out of consideration for that reason.

In fact, Tom produced software bills of materials for some firmware products (and no, an SBOM for firmware isn’t an FBOM, you naughty person!) that don’t have CPE names, and identified known vulnerabilities in components of those products. In one product (a member of a family of widely used ICS products, whose supplier doesn’t even mention security or vulnerabilities on its web site; if it’s a subject you’re embarrassed to talk about, better not to mention it at all, I guess...), he found 1,237 known vulnerabilities, and he could have found more if he’d had more time.[i]
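
To make that concrete, here’s a rough sketch of what “looking up the known vulnerabilities of the components in an SBOM” can look like. To be clear, this is not NetRise’s tooling, just an illustration: it assumes you’ve already mapped each component to a CPE name (often the hard part, as this series keeps pointing out), it uses the NVD 2.0 CVE API as I understand it, and the two CPE names are examples only.

```python
import requests

# NVD 2.0 CVE lookup endpoint; pass a CPE name to get the CVEs recorded against it.
NVD_CVE_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

# Example component list, as it might come out of an SBOM for a firmware image.
sbom_components = [
    "cpe:2.3:a:busybox:busybox:1.31.0:*:*:*:*:*:*:*",
    "cpe:2.3:a:openssl:openssl:1.0.2k:*:*:*:*:*:*:*",
]

def known_cve_count(cpe_name: str) -> int:
    """Return the number of CVEs the NVD associates with this CPE name."""
    resp = requests.get(NVD_CVE_URL, params={"cpeName": cpe_name}, timeout=30)
    resp.raise_for_status()
    return resp.json().get("totalResults", 0)

total = 0
for cpe in sbom_components:
    count = known_cve_count(cpe)
    total += count
    print(f"{cpe}: {count} known CVEs")
print(f"Total known component vulnerabilities: {total}")
```

Of course, as the footnote below points out, a raw count like this says nothing about whether those vulnerabilities are actually exploitable in the product; that’s what VEX documents are for.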

Since I addressed the implications of Tom’s statements in the two posts on that problem, I won’t repeat them now – except to say that any supplier of products that aren’t listed in the NVD should be viewed with some degree of suspicion. Frankly, they should be considered guilty of poor security practices until they prove themselves innocent.

But at the end of both posts, I pointed out that this was only the second-most-serious problem identified by Tom Pace during his presentation, and I’d discuss the most serious one soon. Well, soon is now!

The 1,237 vulnerabilities that Tom found in one ICS product (and the tens of millions of vulnerabilities that are likely to be found, by either the good guys or the bad guys, in all the other products that aren’t registered in the NVD) are all known, meaning they have previously been identified and have CVE numbers. These should be picked up by vulnerability scans, so if you run those regularly, you should at least be able to bug the supplier of any product that contains them to get them patched.

However, we all know there’s a much more dangerous class of vulnerabilities, called “zero-days,” which haven’t previously been identified and don’t have CVE numbers; scans therefore won’t pick them up at all. The holy grail for a software attacker is to implant a zero-day vulnerability in software or firmware during the development process. Every organization that installs the software will then potentially harbor the vulnerability without knowing it, allowing the attacker to penetrate their network and do bad things. This is exactly what happened with SolarWinds, where roughly 18,000 users downloaded (and presumably installed) the tainted files.

Of course, it’s not usually easy to plant malware (zero-day or otherwise) in software or firmware while it’s being developed. In fact, according to Microsoft, the Russians had 1,000 people working on the team that penetrated SolarWinds’ development environment and planted the malware. They were in the environment for 15 months and, like any well-planned business project, spent the first three of those months doing a proof of concept (I imagine they’ll write a Harvard case study on their success one of these days).

Perhaps you’re comforted by the fact that such a big effort was required to plant Sunburst. If that’s what it takes to plant a zero-day vulnerability in a software product during development, that’s not very likely to happen often in the future, right? However, Tom discovered there’s a much easier way for bad guys to plant zero-days, which will be hard to prevent without some self-regulation (or actual regulation, although God help anyone who tries to impose regulations on open source! In comparison, building a house out of Jell-O™ would be easy). In the hopes that people in charge of open source projects will start to address these issues, Tom presented the following scenario:

  1. Suppose a malicious party identifies an open source project that is regularly used as a component in other software or firmware products.
  2. That party implants a vulnerability in the code. But not just any vulnerability: a zero-day. Perhaps they work for a nation-state that hoards zero-days for a rainy day, rather than notifying the supplier of the vulnerability (as they should do, of course). Or perhaps they’re just talented and dreamed one up on their own.
  3. The tainted code becomes part of the next update of the open source product, so it’s downloaded and included as a component in some end-user products. Since the vulnerability is a zero-day, scanners don’t identify it.
  4. The party that planted the vulnerability reaps what they sowed: they can penetrate the environments of end users who have installed a software product that includes the now-compromised component.
  5. The malicious party penetrates multiple organizations, using the zero-day vulnerability. Of course, these penetrations will probably only be detected after the attackers have succeeded in achieving whatever their goal was: disruption of a manufacturing plant, spreading ransomware, exfiltrating PII, etc.
  6. If we’re lucky, the vulnerability will be identified after the first few penetrations (rather than after the first 18,000, as in the case of Sunburst) and will be assigned a CVE number and reported to the NVD. After that, it will usually be picked up by scans, so proactive suppliers (i.e., the ones who actively look for vulnerabilities in their products and report these to the NVD, and who of course patch the vulnerabilities when they meet whatever severity threshold they set for patching) will protect their products. Of course, suppliers who have never reported a vulnerability in their products, and who don’t even register those products in the NVD, won’t even find this new CVE because they don’t look for vulnerabilities at all. Their mottos are “Do not seek, and ye shall not find” and “Ignorance is bliss, as well as solid legal protection.”
  7. Even after this vulnerability is identified in the NVD and the scanners start picking it up, it won’t immediately be obvious that it’s found in a component of the affected product(s), and, more importantly, in which component (remember, there are plenty of software products and intelligent devices, including ones you may have on your desktop, that have thousands of components). If you trace a vulnerability to a product, how will you trace it to a particular component? The answer: it will be very hard to do with just a few known infections. As the number of known infections grows, it becomes possible to narrow the list of suspect components down to just one (see the sketch after this list). However, this is a hell of a way to identify a new vulnerability: wait until there have been a lot of infections, so that you can draw statistically valid conclusions about their source.
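
To illustrate that last step, here’s a toy sketch of the kind of correlation a defender or researcher would have to do: take the SBOMs of the products known to have been compromised and intersect their component lists, hoping the shared set shrinks to a single suspect as more infections are reported. The product and component names below are invented for the example; real SBOMs would be CycloneDX or SPDX documents, not Python sets.

```python
# Toy SBOMs for products confirmed to have been compromised; all names are made up.
affected_product_sboms = {
    "VendorA RouterOS 4.2": {"libfoo 1.8", "zlib 1.2.11", "openssl 1.1.1k", "busybox 1.31"},
    "VendorB PLC Firmware 9.0": {"libbar 2.0", "zlib 1.2.11", "openssl 1.1.1k"},
    "VendorC Camera FW 1.7": {"openssl 1.1.1k", "uclibc 0.9", "zlib 1.2.11"},
}

def suspect_components(sboms: dict[str, set[str]]) -> set[str]:
    """Intersect the component sets of all affected products."""
    remaining = iter(sboms.values())
    suspects = set(next(remaining))
    for components in remaining:
        suspects &= components
    return suspects

print(suspect_components(affected_product_sboms))
# With only three known infections, two components are still suspect
# ({'zlib 1.2.11', 'openssl 1.1.1k'}); each additional affected product's
# SBOM can shrink the set further.
```

The obvious catch, as item 7 says, is that this only works once there are enough known infections, each with a trustworthy SBOM, to shrink the intersection to a single component.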

To be honest, I think Tom’s second-most-serious problem (the one I described in the two previous posts) is the more likely of the two to cause damage, simply because of the huge number of software components (and software products themselves) that aren’t listed in the NVD at all, meaning that even already-known vulnerabilities won’t be identified in them. That said, Tom didn’t rank these two problems himself; in fact, I don’t think he even identified them as separate problems in his presentation.

Regardless, this is a serious issue. The second-most-serious issue could be addressed “simply” by having software suppliers (including open source communities) register all of their products with the NVD and start reporting all vulnerabilities for them, however unlikely that is to happen at any sort of scale in real life. This new issue, though, requires a lot more than that. For one thing, it might require controls on who can contribute to an open source project. And how would those ever be enforced?

This post is the final installment of a three-part series; you can view Part 2 or Part 1 here.

References

[i] Of course, just because a component of a product contains a vulnerability doesn’t automatically mean the vulnerability is exploitable in the product itself; that’s why we need VEX documents. In general, probably only 5-10% of component vulnerabilities are in fact exploitable in the product. But even that would amount to roughly 60 to 125 exploitable vulnerabilities in this case.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.