The United States National Institute of Standards and Technology has more or less admitted it is drowning in vulnerabilities, and will be scaling back its work evaluating them in its National Vulnerability Database.
This could be considered somewhat… problematic, as managing cybersecurity vulnerabilities and exposures is one of NIST’s key responsibilities, and something that organisations – public and private – rely upon worldwide to manage and mitigate the most dangerous of flaws in the most vital hardware and applications.
“We are working faster than ever. We enriched nearly 42,000 CVEs in 2025 – 45 per cent more than any prior year,” NIST said in an April 15 update.
“But this increased productivity is not enough to keep up with growing submissions. Therefore, we are instituting a new approach.”
Previously, NIST had aimed to analyse every vulnerability, or CVE, fully. It would publish severity scores, affected products, and other data points for every CVE it published. That process was called ‘enrichment’, but now, it will only apply to certain CVEs.
“Going forward, NIST will add details, or ‘enrich,’ those CVEs that meet certain criteria, which are explained below,” NIST said.
“CVEs that do not meet those criteria will still be listed in the NVD but will not automatically be enriched by NIST.”
From today, here are the vulnerabilities that will fully catch NIST’s attention:
CVEs appearing in CISA’s Known Exploited Vulnerabilities (KEV) Catalog (NIST said “Our goal is to enrich these within one business day of receipt”)
CVEs for software used within the federal government
CVEs for critical software as defined by Executive Order 14028
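As a rough illustration of the first criterion, CISA publishes the KEV Catalog as a JSON feed whose top-level `vulnerabilities` array lists entries with a `cveID` field; a defender could check a CVE against it along these lines. This is a minimal sketch, not NIST's process, and the sample catalog entries below are hypothetical placeholders rather than real feed data:

```python
# Sketch: check whether CVEs appear in a KEV-style catalog.
# The structure mirrors CISA's KEV JSON feed (a top-level
# "vulnerabilities" list of entries with a "cveID" field).
# In practice the catalog would be fetched from CISA's published
# feed; it is inlined here so the example is self-contained, and
# the entries are hypothetical placeholders.
kev_catalog = {
    "vulnerabilities": [
        {"cveID": "CVE-2025-0001", "knownRansomwareCampaignUse": "Known"},
        {"cveID": "CVE-2025-0002", "knownRansomwareCampaignUse": "Unknown"},
    ]
}

def kev_cve_ids(catalog: dict) -> set:
    """Collect the CVE IDs listed in a KEV-style catalog."""
    return {entry["cveID"] for entry in catalog.get("vulnerabilities", [])}

def needs_priority_enrichment(cve_id: str, catalog: dict) -> bool:
    """Under the new criteria, KEV-listed CVEs are enriched first."""
    return cve_id in kev_cve_ids(catalog)
```

A check like this only covers the KEV criterion; the federal-use and EO 14028 critical-software criteria would need separate asset or inventory data that the feed does not carry.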
In other words, what once would have been a Critical Severity vulnerability, even a perfect 10, will not be fully analysed unless it meets the above criteria.
This move, according to Ian Gray, VP of Intelligence at cyber security firm Flashpoint, will make intelligence sources beyond NIST’s once-comprehensive vulnerability reporting vital.
“For years, security teams relied on NVD for vulnerability context to support prioritisation decisions. But that model is under real strain,” Gray told Cyber Daily.
“CVE submissions have grown 263 per cent between 2020 and 2025, and NIST can no longer keep pace by enriching everything.”
Gray said this will result in a wide gap between the disclosure of vulnerabilities and the context that network defenders have to adequately evaluate them.
“That gap doesn’t disappear just because enrichment becomes more selective,” Gray said. “Organisations will need additional intelligence to understand what actually matters most.”
Shane Fry, Chief Technology Officer at RunSafe Security, said that even while NIST must be “drowning in vulnerabilities to enrich,” the decision to effectively abandon across-the-board enrichment will “radically increase the difficulty for businesses and software developers to keep their software patched”.
“The announcement is a signal to the industry that the era of waiting for a CVE score before acting has come to an end. Vulnerability visibility is imperfect, but organisations that use a diverse set of vulnerability data sources will have more reliable insight into vulnerabilities and which ones they are affected by,” Fry said.
“More importantly, organisations need to assume unknown vulnerabilities already exist in their software and deploy protections that can prevent exploitation before a patch – or a CVE score – is ever available."
Counterpoint: Vulnerability hunting in the AI age
Jim Sherlock, Vice President, AI & Cybersecurity R&D, at ProCircular, believes artificial intelligence is part of the problem, particularly when it comes to finding and reporting vulnerabilities.
“Just as big of a problem is what this AI spam is doing to the actual humans on the hook for reviewing these bugs,” Sherlock said.
“Now that anyone with access to a frontier model can just point it at a codebase and say, ‘go hunt,’ they're turning around and slamming maintainers with endless reports, hoping to score some cash, a little clout, or just a pat on the back. No maintainer of open-source projects is immune to it either.”
Those in the community apparently call the process “Death by a Thousand Slops”, where the sheer volume of low-quality bug reports effectively becomes a denial-of-service attack on maintainers.
“It's causing them to either lose their minds or (like the guy maintaining the popular curl project) just shutter their bug bounty programs entirely. Burning out an open-source dev is one thing, but the impact this noise is having on national security infrastructure is a whole different ballgame,” Sherlock said.
“Between projects pulling the plug on bounties and federal databases giving up on comprehensive analysis just to survive the backlog, the entire cyber security industry is being forced to radically re-engineer how it handles vulnerability reporting in the AI era.
“All of this while the next class of hyper-capable frontier models are lining up on the tarmac, engines running, ready to flood the zone faster than any human could ever hope to patch.”
David Hollingworth
David Hollingworth has been writing about technology for over 20 years, and has worked for a range of print and online titles in his career. He is enjoying getting to grips with cyber security, especially when it lets him talk about Lego.