
Op-Ed: Building secure foundations for AI in the cloud

Bryce Boland, head of security solution architecture, APJ at AWS, discusses AI resilience and cloud security.

Fri, 20 Mar 2026

One Australian business adopts AI every three minutes.

According to AWS’s Unlocking Australia’s AI Potential report, 1.3 million businesses (50 per cent of all Australian businesses) now regularly use AI, with adoption growing 16 per cent year on year.

For Australian organisations, this acceleration represents a significant opportunity. As companies move from AI pilots to scale implementation, the security conversation naturally evolves too. It’s no longer about protecting isolated experiments; it’s about securing AI workloads that process large volumes of data, integrate across multiple systems, and operate as critical business functions.

The question facing Australian businesses is not whether to adopt AI, but how to build the secure foundations that allow them to move fast with confidence.

Conversations around AI security often focus on emerging threats such as deepfakes and AI-enhanced phishing. These risks are real, but they’re only part of the picture. Equally important is understanding that AI workloads behave differently from traditional applications, which means security needs to work differently too. AI systems learn from data, interact with users in new ways, connect to other systems through APIs, and increasingly take actions on behalf of people. Each of these capabilities is powerful, and each benefits from purpose-built security controls to ensure it operates as intended.

The good news is that the fundamental security principles that guide organisations still apply: visibility, access control, resilience, and continuous improvement. They just need to be extended to cover how AI systems are built, deployed, and operated.

Drawing from our work with organisations across the globe, from highly regulated financial institutions to fast-moving digital platforms, three architectural principles have emerged as essential for scaling AI securely.

1. Resilience and infrastructure integrity

AI workloads demand robust infrastructure capable of handling sustained compute for training and unpredictable scaling for inference, while securely managing sensitive data throughout pipelines. As Australian organisations scale AI initiatives, resilience becomes paramount, requiring infrastructure with security built in from the ground up, not bolted on afterwards.
Hardware-level isolation and purpose-built security foundations ensure AI systems remain secure, reliable, and available during load spikes and potential disruptions.

This approach gives organisations confidence that their compute environments provide a trusted foundation for mission-critical AI workloads, delivering the business continuity essential for long-term success.

2. Visibility and operational efficiency across AI environments

As AI adoption scales, security complexity compounds. Models deploy across multiple accounts and regions, data flows between diverse stores and endpoints, and identity management must accommodate both human users and AI agents requiring appropriately scoped permissions. Security teams need unified visibility across this entire landscape to identify vulnerabilities before they escalate. Consolidated security postures and automated validation of least-privilege access policies become essential, particularly as AI agents interact with APIs and data stores.

By automating routine checks and consolidating security signals, organisations free security professionals from manual monitoring to focus on strategic, higher-value work that genuinely strengthens their security posture.
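The automated least-privilege validation described above can be illustrated with a simple check. This is a minimal local sketch, not a real validation service: the policy document and role are invented for illustration, and production teams would use a purpose-built analyser that also reasons about conditions and resource types.

```python
import json

def find_overbroad_statements(policy: dict) -> list:
    """Flag Allow statements that grant all actions or cover all resources.

    A deliberately simplified local check over IAM-style policy JSON.
    """
    findings = []
    statements = policy.get("Statement", [])
    if isinstance(statements, dict):  # a single statement may be a bare object
        statements = [statements]
    for i, stmt in enumerate(statements):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = [resources] if isinstance(resources, str) else resources
        if "*" in actions:
            findings.append((i, "allows all actions"))
        if "*" in resources:
            findings.append((i, "applies to all resources"))
    return findings

# Illustrative policy for a hypothetical AI agent role: the second
# statement is scoped far too broadly and should be flagged.
agent_policy = json.loads("""
{
  "Version": "2012-10-17",
  "Statement": [
    {"Effect": "Allow", "Action": "s3:GetObject",
     "Resource": "arn:aws:s3:::model-artifacts/*"},
    {"Effect": "Allow", "Action": "*", "Resource": "*"}
  ]
}
""")

for index, reason in find_overbroad_statements(agent_policy):
    print(f"Statement {index}: {reason}")
```

Running a check like this on every policy change is one way routine validation can be automated so that over-broad grants are caught before an agent ever uses them.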

3. Continuous security innovation for AI workloads

Security cannot be “set and forget”, especially for AI. As models evolve, new data sources are integrated, and agentic capabilities expand, security must evolve in lockstep. Intelligent threat detection that monitors for unusual activity across accounts and workloads becomes essential, particularly for containerised inference environments. Identifying vulnerabilities early in the development process prevents issues from reaching production. Autonomous security agents represent a transformative opportunity: acting as persistent virtual engineers, they independently analyse code, detect risks, and flag vulnerabilities throughout the development life cycle.

By embedding continuous, automated security work from the outset, organisations can accelerate rather than constrain AI adoption, scaling production workloads with confidence.

Together, these capabilities create a security posture that evolves alongside AI workloads rather than lagging behind them.

These principles in practice

We are already seeing these principles deliver results across diverse industries locally and across the broader Asia-Pacific region.

In healthcare, Australia’s ASX-listed nib Group, a health insurer supporting almost 2 million customers, worked with AWS Professional Services to migrate 95 per cent of its regulated healthcare workloads to AWS with zero downtime and zero security incidents. nib established over 150 automated security checks and managed guardrails to safeguard sensitive health data and maintain full regulatory compliance.

In financial services, Singapore’s Singlife migrated its entire operation to the cloud with zero downtime or security incidents, implementing automated security checks and managed guardrails to ensure innovation stayed within the bounds of strict regulatory compliance.

In digital-native services, Grab has demonstrated the value of embedding safeguards directly into the AI life cycle. By deploying Amazon Bedrock Guardrails to standardise protections across the model, prompt, and application layers, Grab ensures that customer trust remains central to its generative AI journey. As of mid-2025, these guardrails are active across all of its critical production systems.
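Guardrails of this kind can be thought of as a shared policy check applied to both prompts and model outputs. The sketch below is a toy illustration only, not the Amazon Bedrock Guardrails API: the denied topics and the identifier pattern are invented for the example.

```python
import re

# Illustrative rules; a production guardrail service manages these centrally.
DENIED_TOPICS = ("medical advice", "legal advice")
ID_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # hypothetical sensitive ID format

def apply_guardrail(text: str) -> tuple:
    """Return (allowed, text): block denied topics, mask sensitive IDs.

    Applying the same function to the user prompt and to the model
    response gives one policy covering prompt and output alike.
    """
    lowered = text.lower()
    for topic in DENIED_TOPICS:
        if topic in lowered:
            return False, "Request blocked: topic not permitted."
    return True, ID_PATTERN.sub("[REDACTED]", text)

allowed, out = apply_guardrail("Book a ride for customer 123-45-6789")
print(allowed, out)  # allowed, with the identifier masked
```

Centralising the rules this way means one change to the policy propagates to every model and application that calls the check, which is the property that makes standardised guardrails attractive at scale.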

Moving forward with confidence

The rapid adoption of AI across APAC is an opportunity to build well. The organisations that will lead this next wave of innovation are those that treat security as a core component of their AI architecture, not something bolted on afterwards.

By prioritising infrastructure resilience, unified visibility, and continuous security innovation purpose-built for AI workloads, businesses can move beyond experimentation and scale with the confidence that their systems are built to last.
