Dr Carl Windsor
Chief information security officer at Fortinet
There have been many cases of disinformation being used to unduly influence people, most notably during the United Kingdom's Brexit debate. AI takes this to a new level: services such as OpenAI's DALL-E and Sora 2 make the creation of almost indistinguishable audio, images, and videos trivial.
Deepfake services are going to take business email compromise (BEC) and social engineering to a whole new level. In 2024 and 2025, we have already observed a marked shift in the quality of phishing emails, with AI generating highly targeted, well-constructed emails that make phishing content harder and harder to identify.
The use of AI-generated audio has already been observed in extortion attempts, but in 2026, we expect organisations to face an onslaught of AI-generated audio and video content used for BEC, phishing, and other targeted attacks. If people already fall for text-based attacks, resulting in billions of dollars of losses, imagine how many more will succeed once employees receive calls, or even video calls, from their chief executive officer telling them to transfer money.
Fortinet expects a large increase in the value of BEC and other scams, with multiple high-profile, high-value attacks in the coming year.
Michael Adjei
Director, systems engineering, at Illumio
Depending on how people use agents, they are, in a way, relinquishing part of their identity to autonomous AI. Agents will assume people’s identities, accessing usernames, passwords, and tokens to log in to systems for automated convenience.
In 2026, cyber criminals will target the autonomous capabilities of agentic AI and exploit them to commit cyber attacks by compromising agent-to-agent communication. This approach could make agents appear culpable in potential mass exploitation incidents, allowing the true attacker to remain concealed in the shadows. Agentic AI’s novelty, paired with overlooked security and continued mass adoption, will likely fuel this trend.
This will force organisations to rethink identity, access, and accountability in a world where machines act faster and more dangerously than humans ever could.
Karl Holmqvist
Founder and CEO of Lastwall
In 2025, the intensifying threat of ‘steal-now, decrypt-later’ attacks will force organisations to accelerate the adoption of post-quantum cryptography (PQC). With quantum computing advancements making traditional encryption methods increasingly vulnerable, adversaries are actively stockpiling encrypted data today to decrypt it with future quantum capabilities.
The recent standardisation of FIPS 203 in August 2024 enables organisations to legally deploy proven PQC algorithms such as ML-KEM, pushing CISOs to establish comprehensive cryptographic asset registers and proactively overhaul encryption strategies. Without immediate action to secure high-value assets, organisations face a growing risk of quantum-enabled breaches, threatening not just data but national security and global stability.
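A cryptographic asset register of the kind described above can begin as a simple inventory that flags quantum-vulnerable algorithms and ranks assets by exposure to 'steal-now, decrypt-later' collection. The sketch below is illustrative only; the asset names, thresholds, and priority labels are assumptions, not a standard:

```python
from dataclasses import dataclass

# Algorithms broken by a cryptographically relevant quantum computer
# (Shor's algorithm defeats RSA and elliptic-curve key exchange).
QUANTUM_VULNERABLE = {"RSA-2048", "RSA-4096", "ECDH-P256", "ECDSA-P256"}

@dataclass
class CryptoAsset:
    name: str                  # system or data store the key material protects
    algorithm: str             # e.g. "RSA-2048", "ML-KEM-768"
    data_lifetime_years: int   # how long the protected data must stay secret

def migration_priority(asset: CryptoAsset) -> str:
    """Long-lived data under quantum-vulnerable encryption is the most
    exposed to harvest-and-decrypt-later collection, so it migrates first."""
    if asset.algorithm not in QUANTUM_VULNERABLE:
        return "compliant"
    return "urgent" if asset.data_lifetime_years >= 10 else "planned"

# Hypothetical register entries for illustration.
register = [
    CryptoAsset("vpn-gateway", "RSA-2048", 1),
    CryptoAsset("health-records-db", "RSA-4096", 25),
    CryptoAsset("internal-api", "ML-KEM-768", 5),
]

for asset in register:
    print(asset.name, migration_priority(asset))
```

In practice the register would be populated from certificate inventories and key-management systems rather than by hand, but even a flat list like this makes the migration order explicit.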
Dr Darren Williams
Founder and CEO of BlackFog
Collateral damage of ransomware attacks on healthcare providers will extend beyond personal records. High-profile healthcare provider attacks in 2024, from Change Healthcare in the US to pathology services provider Synnovis in the UK, were notable not only for the significant data loss but also for their impact on services and, ultimately, patient wellbeing.
Ongoing issues with resources and legacy infrastructure, along with the wealth of valuable data across the healthcare sector, mean it is perceived as a ‘weak link’ by cyber attackers and will likely continue to bear the brunt of serious cyber attacks. As criminal gangs leverage patients’ privacy, safety, and health in ransom demands, it is vital for providers across the sector to protect their most vulnerable points to safeguard patients and staff.
Jake Williams
Faculty at IANS Research, and VP of R&D at Hunter Strategy
Advanced threat actors, primarily nation-states, are likely to focus more on targeting network devices, specifically routers and firewalls. Threat actors continue to struggle to stay ahead of endpoint detection and response (EDR) software on endpoints, but similar monitoring software generally can't be installed on network devices. We've already seen multiple threat actors targeting networking devices to gain access to networks. While this isn't unprecedented, we can expect the scope and scale of these efforts to increase as threat actors find it harder to maintain operations on EDR-protected endpoints.
It’s also worth noting that the number of compromised network devices is almost certainly under-reported today. The vast majority of organisations lack a dedicated threat-hunting programme for compromised network devices. Very few have the telemetry needed to perform such hunts, and even fewer know what to look for. All of this creates a perfect storm for threat actors targeting network devices. Finally, threat actors may target network devices for their lawful intercept capabilities or to disrupt operations in a destructive cyber attack. Some evidence of such prepositioning was seen with Salt Typhoon in 2024, doubtless a sign of more to come.
George Gerchow
Faculty at IANS Research, and interim CISO/head of trust at MongoDB
Nation-state actors will increasingly exploit AI-generated identities to infiltrate organisations. This insider threat has gained traction over the past six months: sophisticated operatives bypass traditional background checks, using stolen US credentials and fake LinkedIn profiles to secure multiple roles within targeted companies. Once inside, they deploy covert software and reroute hardware to siphon sensitive data directly to hostile nations.
The FBI confirmed that 300 companies unknowingly hired these impostors for over 60 positions, exposing critical flaws in hiring practices. Traditional background checks can’t catch this level of deception, and HR teams lack the tools to identify these threats. This escalating risk demands stronger identity verification and fraud detection – ignoring it leaves organisations vulnerable to catastrophic breaches. This isn’t just an attack trend; it’s a wake-up call.
Bruno Kurtic
Co-founder, president and CEO of Bedrock Security
By 2025, increasing security risks and AI regulations on data handling will push organisations to enhance data visibility, classification, and governance. With agentic AI systems becoming integral to operations, companies will need full insight into data assets to use them responsibly, emphasising data sensitivity classification to avoid exposing confidential or personal information during AI training.
A standard practice will emerge: creating a data bill of materials (DBOM) for AI datasets. DBOMs will detail the origin, lineage, composition, and sensitivity of data, ensuring only appropriate data trains AI models. Strict entitlements will limit access, allowing only authorised users to manage sensitive data, thereby reducing accidental or malicious exposures.
As data volumes surge, scalable solutions will be essential to handle diverse datasets. This focus on visibility, classification, and access control will drive new data platforms, advancing AI data governance and mitigating security risks.
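The DBOM idea above can be made concrete as a small record type that tracks origin, lineage, and sensitivity per entry and gates both training use and access. The field names, sensitivity tiers, and roles below are illustrative assumptions, not a standard schema:

```python
from dataclasses import dataclass, field

@dataclass
class DatasetEntry:
    source: str       # origin system of record
    lineage: list     # transformations applied before the data reached the set
    sensitivity: str  # e.g. "public", "internal", "confidential", "personal"

@dataclass
class DataBillOfMaterials:
    dataset_name: str
    entries: list = field(default_factory=list)
    entitled_roles: set = field(default_factory=set)

    def approved_for_training(self, allowed: set) -> bool:
        """A dataset reaches an AI training pipeline only if every entry
        falls within an allowed sensitivity tier."""
        return all(e.sensitivity in allowed for e in self.entries)

    def can_access(self, role: str) -> bool:
        """Strict entitlements: only listed roles may manage the dataset."""
        return role in self.entitled_roles

# Hypothetical DBOM for illustration.
dbom = DataBillOfMaterials(
    dataset_name="support-tickets-2025",
    entries=[
        DatasetEntry("crm", ["anonymised", "deduplicated"], "internal"),
        DatasetEntry("billing", ["exported"], "personal"),
    ],
    entitled_roles={"data-steward"},
)

print(dbom.approved_for_training({"public", "internal"}))  # one personal entry blocks training
print(dbom.can_access("data-steward"))
```

A single out-of-policy entry vetoes the whole dataset, which is the point of the bill-of-materials approach: sensitivity is decided at the level of composition, not of the dataset's label.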
George Moawad
Country Manager Oceania at Genetec
Physical and cyber security solutions have been on the fast track to convergence in recent years. That’s upping the appeal of cloud-based security management platforms, particularly those with open architecture. Businesses can deploy them in whichever model they prefer, bringing new monitoring equipment online as it’s acquired while using legacy programs to manage ageing camera and device arrays that have not yet reached end of life. Expect to see more local businesses take advantage of this flexibility by modernising their security infrastructure progressively rather than in one fell swoop.
Andre Durand
Founder and CEO at Ping Identity
Also, as AI agents become part of daily workflows, a new threat is emerging: the agent-in-the-middle. These agents can see screens, move cursors, and act on our behalf. It’s the next evolution of the man-in-the-middle attack, only now the intruder is software you invited. Detecting and governing those agents will be one of cyber security’s defining challenges. Knowing when AI is acting, and who it’s acting for, will separate the secure from the exposed.
Pierre Lamy
Principal Threat Intelligence Researcher at Anomali
Ransomware, supply chain breaches, and credential theft will remain rampant as public-sector leadership vacuums and underfunded defences erode response capabilities. Even cyber security vendors will become prime targets, proving no one is immune in an increasingly fragile ecosystem. In addition, a wave of mergers and acquisitions will reshape the security landscape, as smaller vendors get squeezed out by pricing pressure and ecosystem lock-in. The result: fewer, larger providers, and a growing risk of systemic outages as dependence on a handful of giants deepens.
David Hollingworth
David Hollingworth has been writing about technology for over 20 years, and has worked for a range of print and online titles in his career. He is enjoying getting to grips with cyber security, especially when it lets him talk about Lego.