Op-Ed: AI won’t patch the holes in your SOC

The potential for artificial intelligence (AI) to improve cyber security outcomes is real.

Aaron Bugal – Field CISO, APJ, at Sophos | Tue, 21 Apr 2026

For security operations center (SOC) teams under pressure to investigate faster, manage growing alert volumes, and respond with limited people and time, the appeal is obvious.

AI’s ability to process large volumes of data quickly and identify patterns has made it increasingly attractive in SOC environments.

But the hype is also creating a problem.

Many organisations now feel pressure to adopt AI in cyber security before understanding where it fits, what risks it introduces, and how much trust it deserves. AI can strengthen security operations, but it cannot replace the judgement of an experienced analyst.

AI can miss critical context, surface incorrect recommendations, and lull teams into over-reliance on the tool. In cyber security, those failures have real operational consequences.

For Australian organisations, the challenge is separating genuine value from hype when adopting AI in security operations.

The promise of AI in security operations

Australian organisations are already well into their broader AI journey, and cyber security is no exception.

As businesses deal with persistent threats, lean SOC teams, and pressure to improve resilience without continually adding headcount, technology that promises speed and efficiency naturally attracts attention.

SOC teams stand to benefit when AI is applied appropriately. It can help enrich alerts, surface relevant information faster, summarise activity across tools, and reduce time spent on repetitive analysis. In environments suffering from alert fatigue, this can materially improve analyst effectiveness.
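To make the enrichment point concrete, here is a minimal sketch of the kind of look-up work automation can absorb; every name in it is hypothetical rather than any vendor's API:

    # Sketch: attach context an analyst would otherwise gather by hand.
    def enrich_alert(alert, asset_inventory, threat_intel):
        asset = asset_inventory.get(alert["host"], {})
        enriched = dict(alert)
        enriched["asset_owner"] = asset.get("owner", "unknown")
        enriched["asset_criticality"] = asset.get("criticality", "unknown")
        enriched["remote_ip_known_bad"] = alert.get("remote_ip") in threat_intel
        return enriched

    alert = {"host": "web-01", "remote_ip": "203.0.113.7", "rule": "suspicious_login"}
    inventory = {"web-01": {"owner": "platform-team", "criticality": "high"}}
    intel = {"203.0.113.7"}
    print(enrich_alert(alert, inventory, intel))

The shape matters more than the code: the machine gathers context, and the analyst still makes the call.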

At the same time, many organisations are still maturing their approach to AI governance. The Sophos Future of Cybersecurity in Asia Pacific and Japan 2025 report highlights that a significant proportion of Australian organisations are already seeing unsanctioned or poorly governed AI use, while many still lack a formal AI strategy. This gap between adoption and governance creates risk.

SOC teams cannot use AI safely if their organisation lacks clear rules, visibility, and accountability. As enthusiasm outpaces readiness, the risk shifts from external threat actors to internal operational exposure.

Speed cannot replace judgement

AI is effective at accelerating repetitive work, but it does not understand business context.

A model may flag an anomaly or recommend a response, but it does not know how an organisation actually operates or what the downstream impact of an action will be. That distinction matters.

In a live incident, context is everything. A system may appear safe to isolate from a purely technical standpoint, but it may be supporting a critical business process that cannot be disrupted. Unusual user behaviour may look malicious until business context reveals a legitimate role or workflow change. These decisions require human judgement, informed by both technical evidence and organisational knowledge. AI can support that process, but it cannot replace it.

There is also a longer-term skills risk. If analysts spend most of their time validating AI recommendations rather than conducting investigations themselves, core investigative capability can erode. Over time, SOC teams may become proficient at supervising tools but less capable of responding independently when those tools fail or are unavailable. Industry analysts have warned about this growing dependency risk as AI adoption accelerates.

Australian organisations should also recognise that AI can expand the attack surface within security operations. Poorly governed tools, excessive access, or autonomous actions on sensitive data introduce new operational and security risks if not tightly controlled.

These risks do not mean organisations should avoid AI. They mean AI adoption in cyber security must be deliberate, controlled, and accountable.

A safer path to AI in cyber security operations

A safer approach starts with controlled adoption.

AI should be used where it adds clear value without overriding critical decisions. Areas such as anomaly detection, summarisation, endpoint and log correlation, and data filtering are well-suited to AI support. When applied this way, AI reduces analyst fatigue while keeping humans firmly in control.
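As a toy illustration of that boundary, assuming a simple statistical baseline (the numbers and threshold below are invented), an anomaly check can score activity and queue it for review rather than act on its own:

    # Toy z-score check on hourly login counts: flag for review, never auto-block.
    import statistics

    def z_score(history, current):
        mean = statistics.mean(history)
        stdev = statistics.stdev(history)
        return 0.0 if stdev == 0 else abs(current - mean) / stdev

    hourly_logins = [40, 38, 45, 42, 39, 41, 44]  # recent baseline (invented)
    score = z_score(hourly_logins, 180)           # current hour spikes
    if score > 3:
        print(f"z={score:.1f}: anomalous login volume, queue for analyst review")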

Human oversight remains essential. The higher the consequence of an action – isolating systems, escalating incidents, responding to threats – the more important it is that a trained cyber security professional remains accountable throughout the process.
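One way teams encode that accountability is a hard gate inside the automation itself: low-consequence steps run unattended, while high-consequence actions refuse to execute without a named approver. A rough sketch, with invented action names:

    # High-consequence actions demand a named human approver; the rest run unattended.
    HIGH_CONSEQUENCE = {"isolate_host", "disable_account", "block_subnet"}

    def execute(action, target, approved_by=None):
        if action in HIGH_CONSEQUENCE and not approved_by:
            raise PermissionError(f"{action} on {target} needs analyst sign-off")
        print(f"{action} on {target} (approved by {approved_by or 'automation'})")

    execute("enrich_alert", "web-01")                                # runs unattended
    execute("isolate_host", "web-01", approved_by="oncall-analyst")  # human accountable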

Organisations must also continue to invest in SOC skills as AI adoption grows. Analysts should practise core investigative skills, not just tool operation, through regular training and exercises, so they can respond effectively when manual intervention is required.

Finally, AI governance must be addressed at an organisational level. Security and IT teams need visibility into which AI tools are in use and what data they can access. Clear policies, guardrails, and accountability are essential to reduce the risk of data exposure or misuse.
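That visibility can start small, for example as a sanctioned-tool register that automation consults before any data leaves the environment. A sketch, with invented tool names and data classes:

    # Sketch: consult a sanctioned-AI-tool register before sharing data.
    AI_TOOL_POLICY = {
        "soc-copilot":    {"sanctioned": True,  "allowed_data": {"alerts", "logs"}},
        "public-chatbot": {"sanctioned": False, "allowed_data": set()},
    }

    def may_send(tool, data_class):
        policy = AI_TOOL_POLICY.get(tool)
        if not policy or not policy["sanctioned"]:
            return False  # unknown or unsanctioned tool: block and log
        return data_class in policy["allowed_data"]

    print(may_send("soc-copilot", "alerts"))        # True
    print(may_send("soc-copilot", "customer_pii"))  # False
    print(may_send("public-chatbot", "logs"))       # False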

Speed will always matter in security operations, and AI delivers it. But efficiency without judgement increases risk. The safest path for Australian organisations is to use AI to reduce noise and support analysts, not replace them.

Used well, AI strengthens SOC teams.

Used carelessly, it makes them more fragile and more exposed.
