Pedram Amini
Chief Scientist at OPSWAT
The drumbeat of threat evolution will continue, with nation-states increasing attacks on physical devices and appliances. ML-assisted scams will grow significantly in volume, quality and believability. As costs associated with ML compute decrease, we’ll see a transition from ML-assisted to fully autonomous operations.
Organisations should expect increased attacks on employees’ personal devices and should prioritise training and novel detection controls to prepare for AI-enhanced social engineering attacks. Production-grade zero-day vulnerabilities will likely be found – and perhaps even exploited – by AI. While we’re likely a few years out from the first fully agentic AI malware, the industry should brace for its emergence.
Ariel Parnes
Co-founder and Chief Operating Officer at Mitiga
The lethal combination of AI-powered attacks and SaaS vulnerabilities will redefine the threat landscape. In 2026, two critical trends will converge to create a perfect storm: the growing availability of generative AI to cyber criminals and the rapid adoption of SaaS applications.
Generative AI, with its ability to craft sophisticated, context-aware content, will empower threat actors to automatically scan SaaS environments, find vulnerabilities and launch precise, rapid attacks. The barriers to creating adaptive phishing campaigns or exploiting SaaS misconfigurations will drop, enabling even less-skilled hackers to conduct highly targeted attacks. AI will also help attackers evade detection by continually modifying their techniques.
Meanwhile, organisations are adopting more SaaS applications, creating sprawling, interconnected environments and introducing new security challenges. Many organisations lack visibility into their SaaS ecosystems, making it difficult to monitor user behaviour, detect threats and enforce security policies consistently across applications. Traditional security tools are ill-equipped to protect decentralised, dynamic SaaS platforms. As business functions shift to the cloud, this gap in SaaS visibility and detection will remain a significant weakness for cyber criminals to exploit.
James Fisher
Director of Security Operations at SecureCyber
With breaches continually on the rise, new credentials will become available for exploitation by threat actors. Security teams must stay vigilant, regularly checking environments for weak passwords and outdated credentials. User fatigue with passwords is real, but solutions like Single Sign-On with hardware tokens will ease this burden. Expect to see hardware devices gradually replacing passwords on more secure systems.
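Routine checks like the one described above can be scripted. The following minimal Python sketch flags credentials that haven't been rotated within a maximum age; the account records and the one-year threshold are illustrative assumptions, and in practice the data would come from a directory service audit rather than a hard-coded list.

```python
from datetime import datetime, timedelta

# Hypothetical export of account records: (username, password_last_set).
# In a real environment these would be pulled from a directory service.
ACCOUNTS = [
    ("alice", datetime(2025, 11, 2)),
    ("bob", datetime(2023, 1, 15)),
    ("svc-backup", datetime(2021, 6, 30)),
]

# Illustrative policy: rotate credentials at least once a year.
MAX_PASSWORD_AGE = timedelta(days=365)

def stale_accounts(accounts, now=None, max_age=MAX_PASSWORD_AGE):
    """Return usernames whose password is older than max_age."""
    now = now or datetime.now()
    return [user for user, last_set in accounts if now - last_set > max_age]

if __name__ == "__main__":
    for user in stale_accounts(ACCOUNTS, now=datetime(2026, 1, 1)):
        print(f"stale credential: {user}")
```

Scheduled as a recurring job and fed from a real account inventory, a check like this gives security teams a simple, repeatable signal for credentials that need attention.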
Neil Thacker
Global Privacy and Data Protection Officer at Netskope
By mid-2026, I predict that a landmark data breach will be traced not to a cyber criminal or nation-state, but to an autonomous, agentic AI system operating within an enterprise environment. The incident will redefine AI governance, risk management and compliance globally, exposing the danger of unmonitored AI autonomy and weak controls between interconnected AI services.
Every enterprise adopting LLMs, AI and agentic automation will need to implement an AI gateway. Much as CASB became essential for SaaS security in 2013, AI gateways will become essential for AI governance in 2026.
Jan Michael Alcantara
Threat Research Engineer at Netskope
Social engineering attacks have surged this year as AI has made it easier for attackers to create convincing phishing emails, deepfake videos and realistic phishing websites. In 2026, we may see adversarial agentic AI capable of running entire phishing campaigns autonomously. These agents could independently research and profile potential targets, conduct reconnaissance, craft personalised lures and payloads, and even deploy and manage C2 infrastructure.
This advancement would further lower the technical barriers for launching sophisticated attacks, allowing more threat actors to participate.
Yaz Bekkar
Certified Ethical Hacker and Principal Consulting Architect, XDR, at Barracuda Networks
By next year, attacks won’t just use AI; the AI will behave like an independent operator, making real-time choices to reach the attack goal. We’re already seeing AI automate chunks of the kill chain such as reconnaissance, phishing and basic defence evasion. I believe that the shift in 2026 will be towards systems that plan steps, learn from defences in real time and reroute without human steering.
The AI operator will run the show end to end, gathering what it needs, crafting convincing lures, trying a path, watching how your protection or defences react, then quietly shifting tactics and timing until it gets what it wants. These advanced hacking tools will feel like a coordinated brain that strings steps together, learns from each obstacle and blends into normal activity.
Defenders should expect new attack types and tactics that don’t look like anything they’ve seen and may be hard to explain after the fact. The attack surface keeps expanding, creating both known and unknown gaps, and zero-day exploitation will rise.
Gerry Sillars
Vice President, APJ, at Semperis
We are standing at the breaking point of a digital arms race. In 2026, the boundary between state-sponsored and financially motivated cyber activity will continue to blur, making it more difficult for incident response teams to definitively identify the perpetrators of an attack and build defences against them.
Traditionally, state-sponsored actors weaponise cyber avenues to achieve political objectives, such as espionage, the disruption of critical services and disinformation campaigns, including election interference. Unlike typical ransomware groups, they have not always been primarily motivated by financial gain.
But this is changing. A growing number of nation-states, particularly those facing international sanctions, are turning to cyber crime to raise money instead. The Australian government joined the US and the UK in imposing over 1,600 sanctions since Russia’s invasion of Ukraine, and while this may serve as a significant deterrent, these actions are also putting our nation at risk of retaliatory campaigns.
Adrian Covich
Vice President, Systems Engineering, for Proofpoint in APJ
In 2026, I expect espionage campaigns to grow stealthier, more personal and harder to detect. We’re already seeing some nation-state-aligned actors moving away from traditional phishing emails and towards encrypted messaging apps like Signal and WhatsApp, where they can build trust through casual, credible conversation before launching their attack.
We’re also seeing a growing focus from South Asian and Indian threat actors targeting Western organisations – particularly those involved in technology, defence and policy. These campaigns are increasingly sophisticated, often timed around key geopolitical events or trade negotiations.
At the same time, attackers are stealing non-traditional credentials through device code phishing campaigns and using legitimate remote management tools and cloud platforms to blend seamlessly into normal network traffic. In 2026, the most effective espionage won’t be loud or flashy – it’ll be invisible, hiding in plain sight behind the tools and platforms we trust every day.
David Hollingworth
David Hollingworth has been writing about technology for over 20 years, and has worked for a range of print and online titles in his career. He is enjoying getting to grips with cyber security, especially when it lets him talk about Lego.