Managing the Unmanageable
- engineering
- cybersecurity
Modern IT environments are too complex to ever be vulnerability-free. Acknowledging that is not defeatist; it’s pragmatic. By aligning our frameworks and practices with the world as it is - noisy, imperfect, resource-constrained - we can create vulnerability management programs that are honest, effective, and compliant.
On paper, standards paint an idealized picture. For example, SOC 2’s Trust Services Criteria CC7.1 requires companies to “use detection and monitoring procedures to identify... susceptibilities to newly discovered vulnerabilities,” and to “take action to remediate identified deficiencies on a timely basis”. ISO 27001’s guidance is similar: organizations should obtain information on technical vulnerabilities, evaluate their exposure, and take appropriate measures. Many companies even codify strict patch SLAs - e.g. “critical vulnerabilities patched within 48 hours” - as if such timelines were consistently achievable. The intent is noble: find problems quickly, fix them quickly, and prove it.
The trouble is, none of this reflects how modern systems work. It’s generally not feasible to remediate all identified vulnerabilities quickly given the typical backlog, constant influx of new issues, and limited resources. Every organization operates under triage by necessity. Frameworks implicitly assume a static world where all gaps can be closed; reality is a firehose of new CVEs, legacy systems that break when patched, and cloud environments so ephemeral you might discover a vulnerability after the container has vanished.
Drowning in Volume and Noise
The sheer volume of vulnerabilities guarantees that comprehensive remediation is a fantasy. The numbers speak for themselves: over 40,000 new CVEs were published in 2024 alone, and 2025 set another record with ~48,000 more. Of these, a startling proportion are severe - over one-third of vulnerabilities discovered are rated high or critical. No enterprise can seriously investigate, let alone eliminate, every one of these findings. Indeed, data shows that nearly 45% of vulnerabilities in large companies remain unpatched even a year after discovery.
This deluge creates a brutal signal-to-noise problem. Scanners dutifully spit out thousands of findings - the majority of which will never be exploited in the wild. Separating the truly dangerous needles from the haystack is more art than science, and teams have to gamble their limited remediation hours on what seems most perilous. Meanwhile, the compliance story on paper remains one of total coverage. The disconnect is almost absurd: the National Vulnerability Database itself was so overwhelmed in 2024 that it fell behind on analysis and temporarily “deferred” processing older vulnerabilities. If even the keepers of the CVE catalog can’t keep up, how exactly do frameworks expect a lean corporate security team to?
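Separating the dangerous needles from the haystack can at least be made explicit rather than ad hoc. The sketch below is a minimal, illustrative triage model - the weights and the `Finding` fields (CVSS severity, an EPSS-style exploitation probability, exposure) are assumptions, not a standard formula - but it captures the core idea: spend limited remediation hours where severity, exploitability, and exposure intersect.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    id: str
    cvss: float          # base severity, 0.0-10.0
    epss: float          # estimated probability of exploitation, 0.0-1.0
    internet_facing: bool

def risk_score(f: Finding) -> float:
    """Blend severity with exploit likelihood and exposure.

    The weights are illustrative: a moderate bug that is actively
    exploitable on an internet-facing host can outrank a "critical"
    buried deep inside the network.
    """
    exposure = 2.0 if f.internet_facing else 1.0
    return f.cvss * f.epss * exposure

def triage(findings: list[Finding], budget: int) -> list[Finding]:
    """Return the `budget` findings most worth the team's hours."""
    return sorted(findings, key=risk_score, reverse=True)[:budget]
```

Any real scoring model would fold in asset criticality, compensating controls, and threat intelligence - the point is that the trade-off is written down and reviewable, not made silently per ticket.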
One particularly troublesome artifact of compliance language is the ubiquitous SLA: e.g. “Critical findings remediated within 14 days; highs in 30 days; mediums in 90” (pick your poison, the exact numbers vary). This sounds great - time-bound requirements are concrete and auditable. In reality it’s almost laughable. Many organizations set these timelines because frameworks, auditors, or customers expect to see them.
Consider what it would take to verify a 14-day fix SLA in substance: continuous tracking of every new critical vulnerability, ensuring work is scheduled and completed in that window, and - most challengingly - having a way to prove none fell through the cracks. In a large environment, new critical issues crop up daily. It’s rarely acknowledged publicly, but most security leaders will tell you privately that strict patch SLAs are more aspirational than achievable. They serve as a guideline for prioritization, not an actual promise. Yet compliance rubrics treat them as if carved in stone.
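The arithmetic of SLA verification is trivial - which is exactly the problem. A sketch like the one below (hypothetical record shape, hypothetical SLA table) computes breaches in a few lines; the hard, unautomatable part is having confidence that the `findings` list is actually complete.

```python
from datetime import date, timedelta

# Illustrative SLA windows, in days - pick your poison.
SLA_DAYS = {"critical": 14, "high": 30, "medium": 90}

def sla_breaches(findings, today: date) -> list[str]:
    """Return IDs of findings past their SLA window.

    Each record is (id, severity, discovered, remediated-or-None).
    A finding breaches if it was closed after its deadline, or is
    still open with the deadline already elapsed.
    """
    breaches = []
    for fid, severity, discovered, remediated in findings:
        deadline = discovered + timedelta(days=SLA_DAYS[severity])
        closed_late = remediated is not None and remediated > deadline
        open_late = remediated is None and today > deadline
        if closed_late or open_late:
            breaches.append(fid)
    return breaches
```

Note what this cannot prove: that every critical was discovered in the first place. An empty breach list from incomplete data is the SLA theater this post is describing.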
How do we begin to reconcile this structural disconnect? A good start is honesty in policy language. Instead of writing a policy that implies “we remediate everything, no exceptions,” organizations should explicitly define the scope and limits of their vulnerability management efforts. For example, if your practice is to focus on internet-facing systems and critical business applications, say so. If you decide that certain classes of low-risk vulnerabilities (e.g. informational findings, minor internal issues) will not be tracked beyond discovery, make that an official, approved stance. Good governance means transparently acknowledging what you will NOT fix just as much as declaring what you will.
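One way to make that stance concrete is to encode the scope policy as data instead of leaving it implied by omission. The categories and names below are purely illustrative - the value is that every finding class maps to a documented disposition, including an explicit "risk accepted" bucket.

```python
# A hypothetical, explicit scope policy. Finding classes and
# category names are illustrative, not a standard taxonomy.
SCOPE_POLICY = {
    "tracked": {"internet_facing", "critical_business_app"},
    "accepted_untracked": {"informational", "internal_low_risk"},
}

def disposition(finding_class: str) -> str:
    """Map a finding class to its documented handling."""
    if finding_class in SCOPE_POLICY["tracked"]:
        return "track_and_remediate"
    if finding_class in SCOPE_POLICY["accepted_untracked"]:
        return "risk_accepted"   # an approved stance, not a silent gap
    return "needs_review"        # anything unclassified gets a human decision
```

The catch-all `needs_review` branch matters most: it forces new or ambiguous finding classes through a deliberate decision rather than letting them default into untracked limbo.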
Gas on the Fire
In the face of this challenge, it’s tempting to reach for more automation. Indeed, a whole crop of “compliance platforms” and vulnerability management tools promise to streamline remediation tracking, integrate scanner results with ticketing systems, and even auto-generate evidence for auditors. While these tools can help with workflow, we should be brutally honest: no tool can magically eliminate the fundamental workload and decision-making burden of vulnerability management. Compliance software that auto-populates a dashboard of open vulnerabilities and sends reminder emails doesn’t actually fix anything - it just makes the noise more visible. In some cases it even amplifies the theater, producing slick reports that give leadership the sense everything is under control, when in fact the hard work of prioritizing and patching is still grinding along slowly beneath the surface.
Automation in this space often optimizes the appearance of compliance. It can churn out neat charts of SLA adherence, create tickets for every new CVE (which, let’s be honest, engineers may promptly ignore or bulk-close), and maintain a repository of “evidence” that patches were applied. But if the underlying process is broken then automating it just means you’ll generate a lot of data with little security value. As I stated in one of my other blogs… many compliance tools “offer the illusion of modernization without challenging the underlying assumptions… they streamline the production of paperwork, not the improvement of systems.” In vulnerability management, this is painfully true. A tool that tracks 10,000 findings with beautiful precision doesn’t help you one iota in actually reducing risk if you don’t have the people and strategy to address those findings. It might even lull companies into a false sense of accomplishment (“we have a single pane of glass for all vulns now, so we must be on top of things!”).
None of this is to say automation has no place. Some aspects of vulnerability management clearly lend themselves to software and automation. But we should stop pretending that buying a fancy GRC module will somehow close the remediation gap. It won’t. Compliance teams should use tools to aid understanding, not treat the tool’s outputs as proof that “we’re compliant, therefore we’re secure.” The map is not the terrain.
Embracing Transparency and Realism
Companies must be willing to admit imperfection in their controls, and build governance around that reality. That means explicitly deciding which risks to accept and documenting those decisions - not sweeping them under the rug. It means writing policies that set achievable expectations and reflect actual priorities, rather than copying boilerplate mandates that every single bug be squashed. It also means investing in the fundamentals (keeping an asset inventory will take you further than a compliance tool trying to be a “single pane” for all your vulnerabilities) and being candid when lower-priority items linger. Good governance is not about eliminating all risk - it’s about knowing what risks you’re accepting and why.
For compliance leaders and auditors, the charge is to move beyond checkbox ticking and look for genuine security posture. Reward organizations for frankness and for having a systematic process, even if that process includes saying “we’re not fixing X and here’s our rationale.” Push for evidence of continuous improvement (are they getting better at this over time?) rather than cursory evidence pretending to show you met an unrealistic SLA. The goal should be to ensure that an organization’s actual vulnerability management - with all its messy trade-offs - is sound and defensible, instead of ensuring their paperwork claims a flawless record that nobody truly has.
When compliance stops insisting on a fantasy and starts engaging with reality, security teams can stop performing compliance theater and focus on real risk reduction. Imagine audits where you could openly discuss your backlog with an auditor, showing which 10% of issues you’re targeting because they matter, and which 90% you’re watching or accepting because they don’t - and have that be seen as maturity rather than a confession. That’s where we need to head: frameworks and practices aligned with the world as it is, producing vulnerability management programs that are honest, effective, and yes, still compliant.
Compliance will only be credible when it relinquishes the checkbox illusion and embraces the messy, risk-based reality that front-line teams deal with every day.