
Industry predictions for 2026: Artificial intelligence – boom or bust?

Part 2: As the year draws to a close and the new year looms, the industry takes out its collective crystal ball for a look at what 2026 has in store for AI.

Wed, 24 Dec 2025

Yuval Fernbach
VP and CTO of JFrog MLOps

While today’s AI adoption often starts with generic LLMs and isolated prototypes, enterprises are realising that real value doesn’t come from the model alone – it comes from how well that model is connected to their internal systems. In 2026, the focus will move away from “building your own” models and towards deploying AI that natively integrates with internal assets: data sources, tools, APIs, operational workflows, and governance layers.

Models and agents will increasingly use MCP-like connectors to enrich prompts with internal organisational context, retrieve real-time business data, and perform actions across existing enterprise systems. This shift turns AI from a static text generator into an operational participant – one that queries, validates, updates, and orchestrates tasks based on live internal information.
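
To make the pattern concrete, here is a minimal sketch of the kind of connector Fernbach describes, written in Python and assuming the official `mcp` SDK’s FastMCP interface; the `get_order_status` tool and its in-memory data source are hypothetical stand-ins for a real internal system, not any vendor’s implementation:

```python
# Minimal MCP-style connector exposing live internal data to an agent.
# Assumes the official `mcp` Python SDK (FastMCP); the tool and its
# in-memory "database" are hypothetical stand-ins for a real system.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("internal-orders")

# Hypothetical stand-in for a live operational data source.
ORDERS = {"A-1001": {"status": "shipped", "eta": "2026-01-10"}}

@mcp.tool()
def get_order_status(order_id: str) -> dict:
    """Return the current status of an internal order, so the agent can
    ground its answer in live business data rather than training data."""
    return ORDERS.get(order_id, {"error": f"unknown order {order_id}"})

if __name__ == "__main__":
    mcp.run()  # serve the tool over stdio to an MCP-capable agent
```

An agent wired to a connector like this can query, validate and act on current business state rather than relying on what its model happened to memorise in training.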

As a result, companies will reduce drift, improve reliability, and unlock far faster time-to-value. Instead of experimenting in isolation, enterprises will rely on integrated, governed, production-ready AI systems that understand their business, operate within their environment, and continuously stay aligned with their internal truth.


Paul Davis
US field CISO at JFrog

The surge in advanced AI tools, such as those built on the Model Context Protocol (MCP), is raising urgent questions for security teams: How do we build trust in AI, govern its adoption, and ensure secure integration? Regulation will play a pivotal role here – the EU AI Act, the Cyber Resilience Act (CRA), DORA, and state-level rules such as California’s AI Transparency Act (SB 942) provide clear standards and accountability to help organisations manage AI risks and ensure secure, responsible deployment.

Ultimately, for agentic AI to be used productively in 2026, developers must remain actively engaged – never taking their hands off the wheel. AI cannot simply be set loose; it requires rigorous testing and continuous security oversight at every stage, from development through to production, to ensure these powerful tools remain safe and reliable. For example, teams must establish a collective evidence ecosystem, in which every model and its components are verified by leading organisations in the industry, creating a single source of truth and automated trust at every stage of the AI development life cycle.
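
As a rough illustration of that evidence idea – not any particular vendor’s scheme, and with a hypothetical manifest format and file names – a deployment gate might verify a model artifact’s digest against a trusted manifest before promotion:

```python
# Hedged sketch: gate promotion of a model artifact on a digest check
# against a trusted evidence manifest. The manifest format and file
# names are hypothetical illustrations.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_artifact(artifact: Path, manifest: Path) -> bool:
    """Check the artifact's digest against the trusted manifest entry,
    establishing a single source of truth before deployment."""
    trusted = json.loads(manifest.read_text())  # e.g. {"model.onnx": "<sha256>"}
    expected = trusted.get(artifact.name)
    return expected is not None and expected == sha256_of(artifact)

# Usage: block the deploy if verification fails.
# if not verify_artifact(Path("model.onnx"), Path("evidence-manifest.json")):
#     raise SystemExit("Artifact failed provenance check; blocking deploy.")
```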

Leaders who combine strong processes with intelligent technologies will build resilient, compliant ecosystems, where AI security and data protection are not just technical requirements but a strategic advantage driving sustainable growth.


George Harb
Vice president – Australia and New Zealand at OpenText

The next step in enterprise AI will be less about model size and more about whether the data feeding those models is clean, governed and fit for purpose.

You can have a high-performance vehicle tuned to perfection. If you put the wrong fuel in it, you will not get the result you expect. The same principle applies to AI. Large language models can be powerful, but if you put dirty or poorly governed data into them, they will produce outcomes that cannot be trusted.

In response, more Australian organisations are appointing chief data officers and chief AI officers whose focus is to engineer data, not only to clean up what exists but also to change how the organisation captures and manages new data so it stays fit for purpose over time. The right data is king. Tech leaders who fail to get this right face not only wasted AI spend but serious exposure under privacy and cyber regulation, including the risk of very large penalties if they mishandle sensitive information.


Shannon Davis
Principal AI security researcher, SURGe/Foundation AI, at Splunk

In 2026, machine data will step into the spotlight as companies accelerate their use of AI. With AI models, infrastructure and data centres all expanding at once, the volume of information that systems produce will rise. Making sense of this data – and, importantly, staying in control of it – will become essential for managing cyber risk, performance and resilience.

Machine data is the data generated by all the systems running in data centres and the new world of connected devices – everything that powers an organisation, from applications and servers to security and network devices.

As AI systems become more interconnected and complex, machine data becomes the single source of truth for both observability and security. Many early warning signs – a spike in errors, a slowdown, an unexpected process – can indicate a performance issue or an attack. Without unified machine data, teams risk chasing the wrong problem.
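
A toy sketch of that early-warning idea – with illustrative thresholds and event shapes, not any Splunk API – might watch a stream of machine-data events for an error-rate spike:

```python
# Hedged sketch: flag an error-rate spike in a stream of machine-data
# events. WINDOW, SPIKE_THRESHOLD and the event shape are illustrative
# assumptions.
from collections import deque

WINDOW = 100           # recent events to consider
SPIKE_THRESHOLD = 0.2  # alert when >20 per cent of recent events are errors

recent = deque(maxlen=WINDOW)

def alert(message: str) -> None:
    print(f"[ALERT] {message}")  # stand-in for paging/SIEM integration

def observe(event: dict) -> None:
    """Ingest one machine-data event and flag an error-rate spike –
    the same signal feeds both observability and security triage."""
    recent.append(event)
    errors = sum(1 for e in recent if e.get("level") == "ERROR")
    if len(recent) == WINDOW and errors / WINDOW > SPIKE_THRESHOLD:
        alert(f"error spike: {errors}/{WINDOW} recent events")
```

The point of the sketch is that the same unified feed serves both uses: the spike could equally be a failing deployment or an attacker probing the environment, and teams only find out which by investigating the one signal.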

That gap will only widen as cyber threats intensify. AI will increasingly support security operations, but the AI systems organisations rely on will also require safeguards to ensure they operate safely and can’t be exploited. Neither can happen without accurate, timely machine data feeding those decisions.


Kurt Semba
Senior Principal Software Systems Engineer at Extreme Networks

By the end of 2026, we’ll move from observability to operability, shifting from “agents that know” to “agents that act.” Agents won’t just summarise telemetry; they’ll pull context from clients and access points (APs), run targeted diagnostics, and create or annotate ServiceNow tickets with supporting evidence – then track issues through resolution. Expect closed-loop runbooks where agents handle the repetitive 80 per cent of tasks – for example, triage, enrichment, and first actions – while escalating edge cases with clear, actionable handoffs to human team members.

The result is faster mean time to resolution (MTTR), fewer handoffs, and safer change windows, as agent actions are policy-guarded, versioned, and fully auditable. Security follows the same pattern: agents detect misconfigurations or anomalous behaviour and execute pre-approved containment steps, requiring explicit human approval for high-risk changes.
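
One way to picture such a policy guard – a hedged sketch in which the action names, risk table and audit format are all hypothetical – is a dispatcher that runs low-risk actions autonomously with an audit record while holding high-risk ones for explicit approval:

```python
# Hedged sketch of a policy-guarded agent action dispatcher. The risk
# classification, action names and audit format are hypothetical.
import json
import time

RISK = {"annotate_ticket": "low", "restart_ap": "low", "change_acl": "high"}
AUDIT_LOG = "agent_audit.jsonl"

def record(entry: dict) -> None:
    entry["ts"] = time.time()
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(entry) + "\n")  # every decision leaves an audit trail

def execute(action: str, params: dict, approved_by: str | None = None) -> str:
    risk = RISK.get(action, "high")  # unknown actions default to high risk
    if risk == "high" and approved_by is None:
        record({"action": action, "params": params, "status": "pending_approval"})
        return "escalated to a human operator"
    record({"action": action, "params": params, "status": "executed",
            "approved_by": approved_by})
    return f"ran {action}"  # stand-in for the real runbook step

# e.g. execute("annotate_ticket", {"id": "INC123"}) runs autonomously,
# while execute("change_acl", {"vlan": 10}) waits for an approver.
```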


Matias Madou
Co-Founder and Chief Technology Officer at Secure Code Warrior

Agentic AI, coupled with MCP technology, will open up exciting new software development possibilities, but it won’t be safe to deploy in enterprise environments without security-proficient developers. Comprehensive data from the likes of BaxBench has shown, at least to date, that LLMs and AI agents cannot yet generate enterprise-ready code, with 62 per cent of the solutions offered by even the best-performing model containing an error or security vulnerability.

Agentic AI and Model Context Protocol (MCP) technology add a new dimension to frontier AI applications, allowing autonomous and often seamless integration into existing workflows to complete set tasks. 2026 is sure to bring further advancement in this area, with fascinating developments in MCP-powered scanning tools, access control tools and other security weapons to add to the arsenal, but it will inevitably be quite a while before true, safe, trusted autonomy can be realised. These will all need careful monitoring by skilled security personnel and developers, the latter of whom should be assessed and verified as security-proficient before using such potent tools.

David Hollingworth

David Hollingworth has been writing about technology for over 20 years, and has worked for a range of print and online titles in his career. He is enjoying getting to grips with cyber security, especially when it lets him talk about Lego.
