Millions of Vulnerabilities: One Checklist to Kill the Noise

One important subject in vulnerability management is the day you open the valve on a code scanning tool and it generates an enormous number of security findings. This has been a problem in information security since the early 2000s, it is still being worked on, and AI is now adding to the pile. When you connect a code scanner, or cloud misconfiguration and vulnerability scanning tools, one of the first consequences is an unmanageable number of findings, be it a few thousand for a small-to-medium-sized business (SMB) or multiple millions for a larger organization. In this article, we will explore an initial approach to reducing that number and improving your vulnerability management program.
When you first open the "floodgate," consider that number of vulnerabilities your backlog: you have only just started analyzing and remediating them.
What's the issue? Do you think all one million vulnerabilities are worth fixing? Definitely not! What should you do next?
One of today's information security challenges is determining which of these vulnerabilities are valid for your organization and actually affect your applications or systems. Two tasks can help you determine this: 1) triaging and 2) analyzing the reachability of each vulnerability.
However, before you can define the severity of millions of findings from your tools, you will need a few prerequisites, each of which we will cover in future articles (a small sketch of such an inventory follows the list):
- An inventory of all your applications, services, and cloud assets. For this article, let's call these "assets."
- A categorization of all your assets, from critical to low.
- A list of your "crown jewels," drawn from the critical assets.
- A risk matrix that defines your assets' risk from a business perspective, not only from an IT perspective.
- A rationale in that risk matrix explaining why an asset is or isn't critical, based on financial and reputational impacts, as well as the impact of a loss of availability, confidentiality, or integrity of your data and systems.
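To make this concrete, here is a minimal sketch of what one inventory entry might look like. All field names and sample values are illustrative assumptions, not a standard schema:

```python
from dataclasses import dataclass
from enum import Enum

class Criticality(Enum):
    CRITICAL = "critical"
    HIGH = "high"
    MEDIUM = "medium"
    LOW = "low"

@dataclass
class Asset:
    """One entry in the asset inventory; every field here is illustrative."""
    name: str
    kind: str                   # "application", "service", or "cloud"
    criticality: Criticality    # from the business risk matrix
    crown_jewel: bool           # on the short list of most valuable assets
    internet_facing: bool
    # Risk-matrix rationale: why the asset is (or isn't) critical
    financial_impact: str
    reputational_impact: str
    cia_impact: str             # availability/confidentiality/integrity notes

inventory = [
    Asset("payments-api", "service", Criticality.CRITICAL, True, True,
          "direct revenue loss", "loss of customer trust",
          "confidentiality of cardholder data"),
    Asset("internal-wiki", "application", Criticality.LOW, False, False,
          "negligible", "minimal", "low-sensitivity content"),
]
```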
With all of this information in place, you can start triaging the millions of CVEs and findings your tools have produced.
To help most organizations, here's a quick way to determine what to fix. In a future article, we will also discuss how to develop a vulnerability management program that includes this process.
Cut the noise
Start with a short list of data points to cut through the noise. The idea is the same as with CVSS, EPSS, or CARVER: scanners dump millions of findings with almost no context about your organization. Fixing root causes is definitely the best solution (see my article Beyond Patching: Eradicating Vulnerability Root Causes, which digs deeper into this subject). Nevertheless, audits, contracts, and SLAs demand patches now, and you can't wait until every root cause is fixed. Work on root-cause fixes and patch critical findings in parallel; if you don't, you will slip out of compliance, miss your SLAs, or breach some of your contracts.
Triaging
These questions will tell you whether an issue is critical and requires your immediate attention. They will help you triage your vulnerabilities down to a minimal list of impactful, critical issues against your systems. Note that this is still without any organizational context.
For each vulnerability, ask:
- Is it of critical severity? That is, whether a CVE rating or your CVSS/EPSS/etc. scores say critical.
- Does it have a public exploit, on GitHub or anywhere else?
- What is the exploit quality? From no exploit, to proof of concept (PoC), to fully weaponized.
- Is it being exploited in the wild right now?
- Is it on an Internet-facing or otherwise reachable service? (We will dig into reachability later!)
- What is the business impact if it's exploited? Is it critical? Or maybe high?
- Does it affect a critical asset for your organization?
If every answer points to the worst case (yes, weaponized, critical), you fix it. That's it! (A code sketch of this filter follows below.)
You should have cut down your list by around 80%.
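As a sketch of how this checklist could be automated. Every field name here is an assumption about how you would encode the answers, not a standard:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """One scanner finding enriched with the triage data points above (illustrative fields)."""
    cvss_critical: bool            # CVE/CVSS/EPSS rates it critical
    public_exploit: bool           # exploit published on GitHub or elsewhere
    exploit_weaponized: bool       # beyond a bare PoC
    exploited_in_wild: bool        # known active exploitation
    reachable_service: bool        # Internet-facing or otherwise reachable
    business_impact_critical: bool
    critical_asset: bool           # hits an asset marked critical in the inventory

def must_fix_now(f: Finding) -> bool:
    """Every answer points to the worst case -> fix immediately."""
    return all([
        f.cvss_critical, f.public_exploit, f.exploit_weaponized,
        f.exploited_in_wild, f.reachable_service,
        f.business_impact_critical, f.critical_asset,
    ])

backlog = [
    Finding(True, True, True, True, True, True, True),    # fix now
    Finding(True, True, False, False, True, False, True), # survives triage
]
print(sum(must_fix_now(f) for f in backlog), "of", len(backlog), "need an immediate fix")
```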
What about the rest?
If at least one question is answered with "no" but a public exploit still exists, lower the severity to high. Everything else stays at medium, or you accept the risk. The same process can be repeated for high-severity findings.
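Continuing the same sketch, the downgrade rule could be encoded like this (the severity labels are assumptions about your own scale):

```python
def triage_severity(f: Finding) -> str:
    """Reuses Finding and must_fix_now() from the sketch above."""
    if must_fix_now(f):
        return "critical"  # every answer was the worst case: fix immediately
    if f.public_exploit:
        return "high"      # at least one "no", but an exploit exists
    return "medium"        # or formally accept the risk
```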
The odds of being exploited with this approach? Very close to none! Even though it can be hard to answer all of these questions, this is a great target for your vulnerability management program.
This means that 90% or more of the findings generated by your cloud or code scanners are either not exploitable or not severe enough to be worth addressing. No fix is needed until:
- A new exploit shows up.
- The asset's value rises.
- Or, if you have more time, you remediate the next severity level (highs), and so on.
Reachability... what about it?
That's what the market is working on these days: building scanners and AI tools that detect whether a vulnerability is actually "reached" by your application. These tools validate whether your application's code ever calls the vulnerable function(s); if it does, the application is genuinely exposed to that specific vulnerability. If you have reachability data, the only questions left would be:
Is the vulnerability critical? Does the asset have a known vulnerability and is it exposed to the internet? Is it affecting a critical system? And is the vulnerability reachable?
If yes, you fix.
If no, you skip.
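Under the hood, static reachability analysis often boils down to a call-graph search. Here is a toy sketch, assuming you have already extracted a call graph and know which function the CVE affects, both of which are nontrivial in practice:

```python
from collections import deque

# Toy call graph: caller -> callees (in practice, produced by a static analyzer)
call_graph = {
    "main": ["parse_request", "render"],
    "parse_request": ["vulnerable_deserialize"],  # calls the flagged function
    "render": ["escape_html"],
}

def is_reachable(graph: dict, entry: str, vulnerable_fn: str) -> bool:
    """BFS from the application's entry point to the vulnerable function."""
    queue, seen = deque([entry]), {entry}
    while queue:
        fn = queue.popleft()
        if fn == vulnerable_fn:
            return True
        for callee in graph.get(fn, []):
            if callee not in seen:
                seen.add(callee)
                queue.append(callee)
    return False

print(is_reachable(call_graph, "main", "vulnerable_deserialize"))  # True
```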
Your backlog went from a million to a few thousand, or likely fewer. Now you can breathe a little easier. Easier said than done, right? Definitely... for now!