Alex Papli
CEO of Hypergen
Vibe coding that leverages generative AI will change the economics of software creation. Natural-language prompting and fully fledged AI coding agents will enable organisations to build functional tools that rival many SaaS products – often for less than the price of a single year’s renewal. Apps that are little more than glorified forms or basic CRMs will feel the squeeze first.
To survive, SaaS apps will need strong differentiators, such as exclusive API integrations – bank feeds, police checks and the like – or deep domain expertise that can’t easily be replicated. The boundary between ‘build’ and ‘buy’ will blur, and organisations will have the upper hand at the negotiating table.
Dexter Loeble
VP of customer success at All Covered
Across nearly every conversation, customers are feeling the strain of convergence: AI is being woven into every SaaS workflow, commercial adoption of robotics is accelerating, and governance frameworks are trying to catch up.
The takeaway is that staying secure now requires speed and adaptability, not just compliance. The year 2026 will reward organisations that can evolve their security posture as fast as their technology stack changes.
Tom Scully
Principal architect for government and critical industries, Asia-Pacific and Japan, Palo Alto Networks
AI is not just changing the threat landscape; it is fundamentally breaking our assumptions about trust. A single deepfake CEO, poisoned dataset, or hijacked agent can trigger damage at machine speed. The year 2026 will be the Year of the Defender, with autonomous AI the only way to keep pace with an unprecedented threat environment.
As we enter the AI economy, the mandate is clear. We must build visibility before capability, constrain the power surface and enforce zero trust across every agent and interaction. In 2026, resilience will belong to organisations that can observe, govern and correct AI-driven risk in flight.
David Rajkovic
Regional vice president A/NZ at Rubrik
AI agents are a force multiplier, but that force cuts both ways. Our Rubrik Zero Labs research found that 98 per cent of Australian security leaders cite identity-driven attacks as their top concern. With 99 per cent already integrating or planning to integrate AI into identity systems, the stakes have never been higher.
A compromised agent can unleash 10 times the damage in one-tenth of the time. Securing AI agent identities and access controls is critical. We’ve already seen the impact compromised human identities can have, and it’s clear agentic identities will be the next battleground in 2026.
Assaf Keren
Chief security officer at Qualtrics
The winners of 2026 won’t be the organisations that moved fastest on AI, but those that moved fastest on governed AI adoption. Companies that attempt to strictly block or control AI will paradoxically find themselves less secure, as their teams turn to unauthorised ‘shadow’ systems to get work done, creating the exact vulnerabilities leadership feared. By contrast, market leaders will be those whose security teams enable risk-taking rather than simply prevent it, making the secure path to innovation the fastest path for their developers.
Ultimately, this strategy redefines the relationship between speed and safety. By embedding governance directly into the technology stack, organisations can ensure that their ‘governed velocity’ outpaces the reckless speed of their rivals. By the end of 2026, the greatest competitive advantage will belong to those who treat security as an accelerator of innovation rather than a blocker. In an AI-driven world, trust does more than drive customer loyalty; it drives revenue.
Husnain Bajwa
SVP of risk solutions at SEON
AI has become a permanent part of the fraud landscape, but not in the way many expected. It has transformed how we detect and prevent fraud, from adaptive risk scoring to real-time data enrichment, yet full autonomy remains out of reach.
Fraud prevention is a complex interplay of data, intent and context, and that’s where human reasoning continues to matter most. Analysts interpret ambiguity, weigh risk appetite and read social signals that no model can fully replicate. What AI can do is amplify that capability: it surfaces patterns, prioritises alerts and reduces manual work so teams can focus on what really matters.
In that sense, the future isn’t human or machine, but human plus machine. AI becomes an enabler, not a replacement. The organisations that thrive will be the ones that design systems where humans and machines enhance each other’s strengths, pairing computational scale with the intuition and ethical reasoning that only people can provide.
Daniel Garcia
Vice president and general manager, APAC, at Kaseya
While there is plenty of noise about AI taking jobs, the reality for SMBs in 2026 is that AI will be the ultimate force multiplier for leaner teams. Our 2025 Global IT Trends Report shows that 27 per cent of IT professionals now see AI as benefiting their business, a significant jump from 20 per cent in 2024.
However, the biggest threat to AI ROI isn’t the technology itself. It is the ‘trust gap’. Our research shows that only 12 per cent of businesses currently trust AI to act autonomously. This hesitation is a bottleneck. To get actual ROI, MSPs must bridge this gap by implementing AI where it delivers immediate, verifiable wins, specifically in end-user productivity and IT efficiency, which are now top priorities for nearly 30 per cent of IT teams.
In the APAC region, IT teams are running leaner than ever: 68 per cent of organisations now operate with fewer than 25 IT employees. AI is not about reducing headcount; it is about preventing burnout and increasing capacity. We are already seeing 45 per cent of respondents using AI to automate routine patching and scripting. The bottom line is that AI doesn’t replace your experts. By offloading repetitive maintenance to AI, you free your most valuable talent to focus on high-impact, revenue-generating initiatives that drive true business profitability.
Gareth Cox
VP Asia Pacific at Exabeam
The agentic era is here: IDC research shows that 40 per cent of Asia Pacific and Japan (APJ) organisations already use AI agents, with over 50 per cent planning to implement them within the next year. As organisations embrace this shift, they will need to rethink how they manage insider risk. Increasingly, insider risk isn’t emerging just from rogue employees or compromised accounts, but also from AI agents that operate autonomously with broad privileges, allowing them to bypass security oversight and amplify data exposure.
These synthetic identities are creating entirely new categories of insider threats, whether it is malfunctioning agents that behave unpredictably, misaligned agents that follow flawed prompts into compliance or privacy issues, or subverted agents that can be weaponised by bad actors against the business.
Marshall Erwin
CISO at Fastly
The rapid proliferation of AI agents and bots will redefine how we interact with the digital world. In 2026, bots from major LLM providers will not only consume vast amounts of content for training but will also mediate an increasing number of interactions with websites and services. This shift will blur the line between human and bot activity, creating significant repercussions for security and the internet as a whole.
When websites and systems can no longer accurately distinguish between human users and AI agents, traditional authentication and access control methods will falter, because you can’t secure what you can’t see. Malicious bot-driven attacks will become harder to identify, and it will become more difficult to respond to and filter malicious traffic without blocking the legitimate bot traffic that businesses rely on.
James Maude
Field chief technology officer at BeyondTrust
As AI becomes embedded in almost everything, ‘AI veganism’ will become a niche position that some brands align themselves with by fully abstaining from AI. Most brands will not commit to full AI abstinence, but they will have to factor AI use into their environmental, social, and governance (ESG) assessments. This will trigger a wave of AI greenwashing and raise concerns over the technology’s true environmental costs.
In cyber security, the use of AI will be less optional. This will create friction for users and customers who attempt to opt out, as non-AI tools hit the limits of their effectiveness. In the most extreme cases, opting out of AI may even shift liability away from the service provider and back onto the user.
David Hollingworth
David Hollingworth has been writing about technology for over 20 years, and has worked for a range of print and online titles in his career. He is enjoying getting to grips with cyber security, especially when it lets him talk about Lego.