The recent discovery of the Log4j vulnerability and its widespread impact on businesses highlights the importance of scanning networks to identify and mitigate software vulnerabilities. But periodic scanning is the easy part of building an effective vulnerability management program. Since these programs aim to identify and fix software vulnerabilities, ongoing management must include a consistent patching schedule.
And this is the area where most organizations struggle. On average, each vulnerability scan generates 20-40 findings per asset scanned. For an organization with 1,000 assets, that means 20,000-40,000 findings with each scan. The large volume of vulnerability data, combined with the limited time teams have for patching, creates a challenge, often forcing patching teams to select what appear to be the riskiest vulnerabilities. But how do they know which vulnerabilities actually are the riskiest? The uncertainty stems from how vulnerability scanning tools rank findings: Critical, High, Medium, Low, and Informational. Teams invariably adopt a "traditional approach" that focuses on Critical and High vulnerabilities. Unfortunately, these two categories represent, on average, 25% of all findings. Using the example above, with 20,000-40,000 vulnerabilities per scan, a traditional patching strategy requires a team to mitigate 5,000-10,000 vulnerabilities. Since most companies scan monthly, the workload quickly becomes overwhelming.
In sharp contrast, risk-based vulnerability management is a process that assigns a unique risk score to each identified vulnerability and uses it to prioritize remediation. Risk scoring reduces the number of vulnerabilities ultimately targeted for patching, typically to between 3% and 5% of all findings.
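The workload difference is easy to quantify. The sketch below walks through the arithmetic using the averages cited above (1,000 assets, 20-40 findings per asset, 25% Critical/High, a 3-5% risk-based target); the numbers are illustrative figures from this article, not output from any specific scanner.

```python
# Patching workload per scan: traditional vs. risk-based selection.
ASSETS = 1000
PER_ASSET_LOW, PER_ASSET_HIGH = 20, 40   # average findings per asset

total_low = ASSETS * PER_ASSET_LOW       # 20,000 findings per scan
total_high = ASSETS * PER_ASSET_HIGH     # 40,000 findings per scan

# Traditional: patch all Critical/High findings (~25% of results).
trad_low, trad_high = int(total_low * 0.25), int(total_high * 0.25)

# Risk-based: patch the top 3-5% by risk score.
risk_low, risk_high = int(total_low * 0.03), int(total_high * 0.05)

print(f"Total findings:      {total_low:,}-{total_high:,}")
print(f"Traditional backlog: {trad_low:,}-{trad_high:,}")   # 5,000-10,000
print(f"Risk-based backlog:  {risk_low:,}-{risk_high:,}")   # 600-2,000
```

Even at the high end, the risk-based backlog is a fraction of the traditional one, which is what makes a monthly scan cadence sustainable.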
There are important factors to consider when evaluating risk-based vulnerability management solutions. Some offerings use historical information to assign risk scores, while others use artificial intelligence (AI) to predict the potential impact of each vulnerability. Solutions based on historical data are limited to known threats and can only score vulnerabilities associated with past attacks. Solutions that use AI, by contrast, draw on threat research to collect details on evolving attack techniques and feed them into the scoring model. Scores developed this way reflect each vulnerability's potential impact on a given environment: the model takes threat inputs and applies learning algorithms to predict which vulnerabilities represent the most significant risk to the organization. This approach provides broader risk insight and further streamlines vulnerability patching priorities.
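To make the scoring idea concrete, here is a minimal sketch of how severity and threat intelligence might be blended into a single risk score. Everything in it is hypothetical: the `Finding` fields, the 0.3/0.7 weights, and the CVE placeholders are illustrative assumptions, not any vendor's actual model.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    cvss: float                 # base severity, 0-10
    exploit_seen: bool          # known exploitation (historical intel)
    exploit_likelihood: float   # model-predicted probability, 0-1

def risk_score(f: Finding) -> float:
    """Blend severity with threat intelligence (illustrative weights).

    Historical exploitation dominates; a predicted likelihood fills the
    gap for vulnerabilities with no attack history yet.
    """
    threat = 1.0 if f.exploit_seen else f.exploit_likelihood
    return round(f.cvss * (0.3 + 0.7 * threat), 1)

findings = [
    Finding("CVE-A", cvss=9.8, exploit_seen=False, exploit_likelihood=0.05),
    Finding("CVE-B", cvss=7.5, exploit_seen=True,  exploit_likelihood=0.90),
    Finding("CVE-C", cvss=5.3, exploit_seen=False, exploit_likelihood=0.60),
]

# Prioritize by risk, not raw severity: the actively exploited High
# outranks the unexploited Critical.
for f in sorted(findings, key=risk_score, reverse=True):
    print(f.cve_id, risk_score(f))
```

Note the outcome: the High-severity finding with known exploitation rises above the Critical one that no attacker is targeting, which is exactly the reprioritization a severity-only workflow cannot make.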
THE BOTTOM LINE
Any organization looking to sustain an effective vulnerability management program will face resource constraints. A risk-based vulnerability management program identifies and prioritizes high-risk vulnerabilities, reducing mitigation effort by 80% or more compared to a traditional approach. It also has the added advantage of increasing risk coverage by 40% or more. For any organization struggling to drive results, a risk-based mitigation approach delivers clear, measurable benefits.