That assumption is now dangerously outdated. What was once a linear risk tied to third-party code has become a layered, compounding threat, and artificial intelligence is accelerating it.
The modern enterprise is built on three stacked supply chains: software, SaaS and AI. Each layer inherits the weaknesses of the one below it, then amplifies them. Together, they create an attack surface that is broader, faster-moving and harder to see than anything security teams have faced before.
Risk compounds; it doesn't reset
Software supply chain attacks were the first warning shot. Compromised libraries, poisoned updates and dependency confusion showed how attackers could weaponise trust at scale. However, just as organisations began to harden those pipelines, SaaS exploded.
Today’s businesses rely on hundreds of SaaS platforms connected through APIs, shared identities and continuous data flows. Shadow applications proliferate outside formal IT oversight.
Each integration promises productivity and quietly introduces another external dependency. When one SaaS provider is breached, attackers don’t just gain access to a single system; they inherit downstream access to customers, partners and data stores.
AI now sits on top of this already-fragile stack. Modern AI systems are not standalone tools. They depend on pre-trained models, external inference APIs, vast data pipelines and orchestration layers that sit far outside an organisation’s direct control.
A poisoned dataset, a manipulated model or a compromised inference endpoint can influence decisions, outputs and automated actions across the enterprise.
Crucially, AI supply chains contain every risk that existed before (insecure code, vulnerable APIs, misconfigured SaaS) and add new ones. This is not a shift from one threat to another. It is an escalation.
Attackers are already exploiting the gaps
From an adversary’s perspective, this environment is ideal. Attackers no longer need to breach hardened perimeter systems. They target the weakest supplier, the least-monitored integration, or the most opaque AI dependency, and let trust do the rest.
AI systems are particularly attractive targets: their behaviour is harder to predict, their internals are rarely transparent, and failures often look like “unexpected outputs” rather than clear security incidents. Prompt manipulation, model poisoning and data leakage can persist undetected, shaping outcomes long before alarms are raised.
This changes the nature of trust. In the traditional software era, verification focused on signatures and provenance: who wrote the code and was it altered? In the AI era, that is insufficient.
Organisations must also be able to answer how a model was trained, what data shaped it, which versions are running, and how it behaves under real-world conditions. Without that visibility, trust becomes an assumption, and assumptions are exactly what attackers exploit.
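What that looks like in practice can be as simple as refusing to load a model artefact whose digest does not match a pinned value, so "which version is running" is verified rather than assumed. A minimal sketch; the file path and digest below are hypothetical placeholders, not any particular product's mechanism:

```python
import hashlib
from pathlib import Path

def verify_model_artefact(path: str, expected_sha256: str) -> None:
    """Refuse to proceed if the model file's digest differs from the pinned value."""
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    if digest != expected_sha256:
        raise RuntimeError(
            f"{path}: provenance check failed "
            f"(got {digest[:12]}..., expected {expected_sha256[:12]}...)"
        )

# Hypothetical pinned digest, recorded when this model version was approved.
PINNED_SHA256 = "0" * 64  # placeholder, not a real digest
verify_model_artefact("models/fraud-classifier-v3.bin", PINNED_SHA256)
```

The same pattern extends to datasets and configuration: anything that shapes automated behaviour should be pinned and checked at load time, not merely referenced.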
Transparency is now table stakes
This is where many organisations are still dangerously complacent. Software Bills of Materials (SBOMs) and AI Bills of Materials (AIBOMs) are often discussed as emerging frameworks or future best practice. That framing is wrong.
SBOMs are no longer optional. They are the minimum requirement for understanding what actually runs inside modern software. AIBOMs extend that same principle to AI systems, documenting datasets, model versions, training origins and external dependencies. Without them, organisations are effectively blind to the components shaping automated decisions.
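To make the idea concrete, the sketch below emits a deliberately minimal AIBOM for a hypothetical model, loosely shaped after CycloneDX (whose 1.5 specification added machine-learning component types); every name, version and hash is an illustrative placeholder, not a prescribed schema:

```python
import json

# Illustrative AIBOM for a hypothetical model: list every component
# (model, dataset, library) with its type, version and provenance.
aibom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [
        {
            "type": "machine-learning-model",
            "name": "fraud-classifier",
            "version": "3.2.0",
            "hashes": [{"alg": "SHA-256", "content": "0" * 64}],  # placeholder
        },
        {
            "type": "data",
            "name": "transactions-training-set",
            "version": "2024-11",
            "description": "Internal dataset used for fine-tuning",
        },
        {"type": "library", "name": "torch", "version": "2.3.1"},
    ],
}

print(json.dumps(aibom, indent=2))
```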
These tools are not about compliance theatre: they are about survivability. If a vulnerability is disclosed in a library, model or dataset and you cannot quickly determine whether you are exposed, the damage is already done.
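With a machine-readable SBOM in hand, that question becomes a lookup rather than a scramble. A minimal sketch, assuming a hypothetical CycloneDX-style sbom.json on disk and an invented advisory:

```python
import json

def affected_components(sbom_path: str, name: str, bad_versions: set) -> list:
    """Return components in a CycloneDX-style SBOM matching a vulnerable name/version."""
    with open(sbom_path) as f:
        sbom = json.load(f)
    return [
        c for c in sbom.get("components", [])
        if c.get("name") == name and c.get("version") in bad_versions
    ]

# Hypothetical advisory: "lodash" below 4.17.21 is affected.
hits = affected_components("sbom.json", "lodash", {f"4.17.{p}" for p in range(21)})
print(f"exposed: {len(hits)} affected component(s)" if hits else "not exposed")
```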
Continuous assurance, not periodic comfort
Static compliance models cannot keep pace with living supply chains: software updates daily, SaaS configurations shift constantly, and AI models evolve, retrain and adapt. Annual audits and point-in-time certifications provide reassurance, but not protection.
Modern supply chain security requires continuous monitoring: real-time visibility into code changes, API behaviour, data flows and model outputs. Anomalies must be detected as they emerge, not months later during a review. This is as true for AI inference as it is for SaaS access patterns or software updates.
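As a toy illustration of the shift from periodic to continuous, the sketch below flags readings that deviate sharply from a rolling baseline. The monitored signal could be an API call rate, an inference endpoint's error rate or a drift metric over model outputs; the window size and threshold are arbitrary example choices:

```python
from collections import deque
from statistics import mean, stdev

class RollingAnomalyDetector:
    """Flag readings that deviate sharply from a rolling baseline."""

    def __init__(self, window=100, threshold=4.0, warmup=30):
        self.history = deque(maxlen=window)  # recent readings only
        self.threshold = threshold           # z-score cut-off
        self.warmup = warmup                 # minimum baseline size

    def observe(self, value):
        """Return True if the reading looks anomalous against recent history."""
        anomalous = False
        if len(self.history) >= self.warmup:
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                anomalous = True
        self.history.append(value)
        return anomalous

# Hypothetical signal: per-minute error rate of an inference endpoint.
detector = RollingAnomalyDetector()
for reading in [0.01] * 50 + [0.02, 0.40]:  # sudden spike at the end
    if detector.observe(reading):
        print(f"anomaly detected: {reading}")
```

A real deployment would feed detectors like this from telemetry pipelines and tune them per signal; the point is that the baseline is always current, not a snapshot from the last audit.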
The significant cost of inaction
The consequences of failing to act are not abstract. Organisations that cannot trace their supply chains will suffer longer dwell times, wider blast radii and higher regulatory exposure when (not if) incidents occur. In AI-driven environments, those failures can directly affect customers, financial decisions and operational outcomes.
Supply chain security has moved beyond the domain of technical hygiene. It is now a core element of business resilience and national security strategy. Enterprises that continue to treat it as a secondary concern are not taking a calculated risk; they are accepting unmanaged exposure.
The message is clear. The AI era does not reduce the need for supply chain security; it makes it existential. Organisations must act now, embed transparency by design, and assume that anything they cannot see will eventually be used against them.
This story was written by Pouya Ghotbi, Lead Technologist at Check Point Software Technologies.