Your vulnerability management program has a data quality problem, and most teams don’t realize it until something blows up.
Here’s the pattern: a CVE drops, your scanner picks it up, your triage process assigns it a priority based on the CVSS score and asset criticality, and your team either patches it this sprint or pushes it to the backlog. Reasonable workflow. One problem — the severity data you triaged against was wrong.
Not wrong in a philosophical “CVSS doesn’t capture real-world risk” sense (though that’s also true). Wrong in a concrete, factual sense: the vendor gave it the wrong score, or quietly upgraded it six months later, or marked it fixed when it wasn’t, or NVD hasn’t gotten around to scoring it at all.
This isn’t theoretical. In the past 90 days alone, we’ve watched a DoS get silently promoted to a pre-auth RCE, a zero-day sit exploited for three years before disclosure, a “patched” version remain fully exploitable, and a vulnerability database fall further behind the volume of CVEs it’s supposed to enrich. If your triage process doesn’t account for the fact that its inputs are frequently unreliable, you’re building your defense on a foundation that shifts.
The Silent Reclassification: F5 BIG-IP APM
In October 2025, F5 disclosed CVE-2025-53521 as a denial-of-service bug in the BIG-IP Access Policy Manager’s apmd process. CVSS v4: 8.7. For most teams running BIG-IP APM, a DoS in a load balancer is annoying but not hair-on-fire — especially when the patch lands alongside a dozen other quarterly fixes. Plenty of shops deprioritized it.
Five months later, in March 2026, F5 quietly reclassified the same CVE as a pre-authentication remote code execution vulnerability, bumping the CVSS v3.1 score to 9.8 and the v4 to 9.3. CISA added it to the Known Exploited Vulnerabilities catalog on March 27. By the time the reclassification happened, attackers were already exploiting it — dropping persistent implants, replacing system binaries, and establishing root-level access on devices that sit in front of entire application stacks.
If you patched in October, you were fine — the fix was the same binary. But if you skipped it because “it’s just a DoS,” you spent five months exposed to pre-auth RCE on a perimeter device, and nothing in your vuln management workflow told you to reconsider.
This is severity drift in its purest form. The vulnerability didn’t change. The vendor’s understanding of the vulnerability changed. And no mechanism exists to reliably push that updated understanding back into every organization’s triage queue.
The Three-Year Zero-Day: Cisco SD-WAN
CVE-2026-20127 makes the F5 case look quaint. Disclosed by Cisco in February 2026 as a maximum-severity (CVSS 10.0) authentication bypass in the Catalyst SD-WAN Controller, the flaw allows an unauthenticated remote attacker to gain admin access with a crafted request. That’s bad enough on its own.
What makes it worse: Cisco Talos, working with external researchers, found evidence that the vulnerability had been actively exploited since at least 2023 — three full years before public disclosure. A threat actor tracked as UAT-8616 had been using it to compromise SD-WAN controllers, downgrade software versions to introduce additional flaws, and escalate to root.
For three years, no CVE existed. No scanner flagged it. No CVSS score fed into anyone’s triage process. The vulnerability was real, the exploitation was active, and the entire vulnerability management ecosystem was blind to it because the vendor hadn’t disclosed it yet.
This isn’t an argument against vulnerability scanning — you can’t scan for what hasn’t been published. It’s an argument that vulnerability management programs that rely exclusively on CVE-based triage have a structural blind spot that threat actors actively exploit. The time between exploitation and disclosure is free real estate for attackers, and that window is measured in years, not days.
The Fake Patch: Langflow CVE-2026-33017
If severity drift is about scores being wrong, and disclosure lag is about CVEs being missing, the Langflow incident represents a third failure mode: patches being wrong.
CVE-2026-33017 is an unauthenticated RCE in Langflow, the open-source LLM pipeline platform. CVSS 9.3. The flaw is almost absurdly simple — user-controlled flow data gets passed directly to Python’s exec() with no sandboxing. One HTTP request, no credentials, full code execution.
When the advisory dropped on March 17, 2026, attackers had working exploits within 20 hours. CISA added it to the KEV catalog. Security teams scrambled to patch. Multiple sources — including vendor communications — pointed to Langflow 1.8.2 as the fixed version.
JFrog tested 1.8.2. Still exploitable. Both the PyPI package and the Docker image. The “patch” didn’t adequately address the root cause. The actual fix didn’t land until version 1.9.0.
Every team that upgraded to 1.8.2 and checked the box in their compliance tracker was still running a fully exploitable version of the software. Their vulnerability management data said “remediated.” The reality said “still vulnerable to active exploitation.”
The Enrichment Backlog: NVD Can’t Keep Up
These case studies describe individual failures — a vendor getting the score wrong, a disclosure taking years, a patch being incomplete. But there’s a systemic issue underneath all of them: the National Vulnerability Database, the single largest source of CVSS enrichment data for the global vulnerability management ecosystem, is drowning.
CVE submissions increased 32% in 2024 and the growth continues into 2025 and 2026. NVD’s processing capacity hasn’t scaled to match. As of early 2026, roughly 44% of CVEs added to the NVD in the past year carry an “awaiting analysis” status — meaning no CVSS score, no CWE classification, no CPE match data. For practical purposes, these vulnerabilities are invisible to any scanner or triage process that depends on NVD enrichment.
NIST acknowledged the problem and made a strategic decision: all CVEs published before January 1, 2018 that are still awaiting enrichment are now marked “Deferred” — they won’t be prioritized. NIST’s long-term plan is to shift enrichment responsibility to CVE Numbering Authorities (CNAs), the vendors themselves. Which brings us full circle: the same vendors who reclassify DoS to RCE five months late and ship incomplete patches would be responsible for the initial severity data.
The CVSS scoring discrepancies are already significant. VulnCheck’s analysis of 120,000 CVEs with CVSSv3 scores found that among the roughly 25,000 CVEs with both NIST and vendor scores, 56% had conflicting assessments. Not minor differences — cases where a vendor calls something “High” and NIST calls it “Critical,” or vice versa.
If your vulnerability management program treats CVSS scores as ground truth, more than half the time the two authoritative sources can’t agree on what that truth is.
What CVSS Scores Actually Tell You (and What They Don’t)
CVSS was designed to describe the intrinsic characteristics of a vulnerability — attack vector, complexity, privileges required, impact on confidentiality, integrity, and availability. It was never designed to answer the question your triage process is actually asking: “Should I patch this now?”
That question depends on exploitability (is someone actually attacking this?), exposure (is my instance reachable?), asset criticality (does this system matter?), and compensating controls (would exploitation succeed even if the vuln exists?). CVSS captures none of these. A CVSS 9.8 in a library you don’t use and a CVSS 9.8 in your internet-facing authentication gateway get the same score.
FIRST (the organization behind CVSS) knows this. CVSS v4, released in 2023, attempted to address some limitations by adding supplemental metrics and an “Environmental” score. But adoption has been glacial — only 25.9% of CVEs published in 2025 received a CVSS v4 score, and the two historically dominant enrichment sources (NIST NVD and CISA ADP) rarely publish v4 scores at all.
The result: most organizations are making triage decisions in 2026 based on CVSS v3.1 scores from a framework designed in 2019, enriched by a database that can’t keep up with volume, provided by vendors who get it wrong more than half the time.
Building a Triage Process That Assumes Bad Data
If you accept that your severity data is frequently wrong, late, or missing — and the evidence says you should — the question becomes: what do you do about it?
Layer EPSS into your prioritization. The Exploit Prediction Scoring System provides a daily probability estimate that a vulnerability will be exploited in the wild within the next 30 days. It’s not a replacement for CVSS but a powerful complement: only 2.3% of CVEs scoring 7.0 or higher on CVSS have been observed being exploited in the wild, and EPSS helps you focus on that small fraction. EPSS v4, released in March 2025, significantly improved prediction accuracy.
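A minimal sketch of what that layering can look like in code (Python here; the bucket names and thresholds are illustrative assumptions, not guidance from FIRST, so tune them to your own risk appetite):

```python
def triage_priority(cvss: float, epss: float) -> str:
    """Combine a CVSS base score with an EPSS exploitation probability
    into a coarse triage bucket.

    The cutoffs below are illustrative assumptions, not official
    thresholds from FIRST or CISA.
    """
    if epss >= 0.10:                      # ~10%+ chance of exploitation in 30 days
        return "patch-now"
    if cvss >= 9.0 and epss >= 0.01:      # critical score with non-trivial exploit odds
        return "patch-this-sprint"
    if cvss >= 7.0:
        return "backlog-high"
    return "backlog-normal"
```

The point of the two-signal approach: a CVSS 9.8 with a negligible EPSS probability ranks below a CVSS 7.5 that EPSS puts at a 40% chance of exploitation. EPSS scores can be pulled daily from FIRST’s public API and cached alongside your asset inventory.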
Subscribe to CISA KEV as a hard override. If a CVE lands on the Known Exploited Vulnerabilities catalog, it gets patched this sprint regardless of what CVSS says. The KEV catalog is the closest thing the industry has to a ground-truth signal that a vulnerability is being actively exploited. Treat it as an interrupt, not an input.
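A sketch of treating KEV as an interrupt rather than an input. CISA publishes the catalog as JSON at a stable URL; the `cveID` field name reflects the feed’s current schema and should be verified against it:

```python
import json
import urllib.request

# CISA's published KEV catalog feed (JSON).
KEV_URL = ("https://www.cisa.gov/sites/default/files/feeds/"
           "known_exploited_vulnerabilities.json")

def load_kev_ids(catalog: dict) -> set[str]:
    """Extract the set of CVE IDs from a parsed KEV catalog document."""
    return {v["cveID"] for v in catalog.get("vulnerabilities", [])}

def fetch_kev_ids() -> set[str]:
    """Download and parse the live KEV catalog."""
    with urllib.request.urlopen(KEV_URL) as resp:
        return load_kev_ids(json.load(resp))

def apply_kev_override(queue: list[dict], kev_ids: set[str]) -> list[dict]:
    """Force any queued vuln that appears in KEV to top priority,
    regardless of its CVSS-derived bucket. Hard override, not an input."""
    for item in queue:
        if item["cve"] in kev_ids:
            item["priority"] = "patch-now"
    return queue
```

The override runs last in the pipeline, after any CVSS- or EPSS-based sorting, so nothing downstream can demote a known-exploited vuln.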
Build reclassification checks into your workflow. Most vuln management tools don’t re-triage a CVE when its score changes. You need a process — even if it’s a cron job that diffs NVD data — that flags when a previously triaged vulnerability gets a severity bump. The F5 case is the poster child: if you triaged CVE-2025-53521 in October and never looked at it again, you missed the reclassification to RCE.
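Even a crude diff job covers this gap. A sketch, assuming you persist yesterday’s CVE-to-score snapshot somewhere (a JSON file written by the same cron job is enough):

```python
def find_severity_bumps(previous: dict[str, float],
                        current: dict[str, float],
                        threshold: float = 0.0) -> list[str]:
    """Compare two snapshots of CVE -> CVSS base score and return the
    CVEs whose score increased by more than `threshold`.

    `previous` is the persisted snapshot from the last run; `current`
    is today's pull from NVD or a commercial feed. Flagged CVEs go
    back into the triage queue for a fresh look.
    """
    return sorted(
        cve for cve, score in current.items()
        if cve in previous and score - previous[cve] > threshold
    )
```

Against the F5 case: a snapshot holding CVE-2025-53521 at 8.7 diffed against a fresh pull showing 9.3 would have surfaced the reclassification the day it happened, instead of five months later via a KEV entry.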
Verify patches, don’t just deploy them. Langflow 1.8.2 is a lesson every team should internalize: “vendor says it’s fixed” is not the same as “it’s fixed.” For critical CVEs on internet-facing systems, run the PoC against the patched system. If no public PoC exists, validate that the specific vulnerable code path actually changed in the new version. This is expensive, so reserve it for the highest-risk vulns, but do it.
Don’t ignore vulns that lack NVD enrichment. If 44% of recent CVEs are sitting in “awaiting analysis,” your scanner may not be flagging them. Supplement NVD with commercial feeds (VulnCheck, Rapid7, Qualys) or CISA’s ADP data. A CVE without a CVSS score isn’t a CVE without risk.
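A sketch of the routing check, with field names assumed from the NVD API 2.0 response shape (verify the exact keys against your own feed before relying on them):

```python
UNENRICHED_STATUSES = {"Received", "Awaiting Analysis", "Undergoing Analysis"}

def needs_secondary_enrichment(cve_record: dict) -> bool:
    """Return True when an NVD CVE object carries no usable CVSS data,
    so the CVE should be routed to a secondary feed or manual triage
    instead of silently dropping out of the scanner's queue.

    `vulnStatus` and `metrics` follow the NVD API 2.0 shape; treat the
    key names as an assumption to confirm against the live API.
    """
    status = cve_record.get("vulnStatus", "")
    has_score = bool(cve_record.get("metrics"))  # empty metrics => no CVSS yet
    return status in UNENRICHED_STATUSES or not has_score
```

Records that trip this check get looked up in a commercial feed or CISA ADP data; only if every source comes back empty do they fall to a manual queue, rather than to zero priority.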
Assume zero-days exist for everything internet-facing. The Cisco SD-WAN case proves that high-value network infrastructure can be exploited for years before a CVE is assigned. For internet-facing assets — especially network appliances, VPN concentrators, and file transfer tools — your security posture can’t depend entirely on patch-when-CVE-drops. Network segmentation, behavioral monitoring, and assume-breach detection strategies are the only things that cover the gap between exploitation and disclosure.
The Uncomfortable Conclusion
The vulnerability management industry has spent two decades building processes around a single assumption: that the severity data attached to a CVE is accurate, timely, and complete. In 2026, none of those properties holds reliably.
Vendors get scores wrong, then silently change them. Critical vulns sit exploited for years with no CVE. Patches ship broken. NVD can’t enrich CVEs fast enough to matter. And the industry’s answer is to hand more enrichment responsibility to the same vendors who created the data quality problem.
None of this means vulnerability management is useless. It means vulnerability management that treats CVSS as a sorting algorithm and patch deployment as the finish line is insufficient. The teams that will weather this landscape are the ones building layered triage processes, validating their assumptions, and planning for the case where the data they’re working with is wrong — because it frequently is.