
The Industry Speaks, Part One: World Privacy Day 2026

Leading experts from Kensington, Tanium, Qualys, CyberArk, and SailPoint weigh in on why the basics still matter when it comes to data privacy and protection.

Tue, 27 Jan 2026

Arivan Ahmad
Product Manager at Kensington Australia

Digitisation has increased the amount of sensitive information displayed on screens every day. As work becomes more mobile, the risks of ‘shoulder surfing’, where someone simply glances at your screen, have grown significantly. Airports, cafés, coworking spaces and even shared office setups create ideal conditions for visual hacking.

Many organisations still underestimate how easily visual data can be harvested. In highly regulated industries like government, healthcare and financial services, privacy screens are now becoming essential day-to-day security tools, not optional accessories.



Melissa Bischoping
Senior Director, Security and Product Design Research at Tanium

As AI agents and workflows become an undeniable part of the modern enterprise, data privacy expands into a complex ecosystem that many organisations are scrambling to understand and govern. The spirit of innovation drives technologists to build, adopt, and integrate agentic AI, but fear of the unknown can give pause. While AI has given us unprecedented ability to execute sophisticated workflows at speed and scale, we also understand that – if ungoverned and unchecked – it can introduce unprecedented risk and loss of data at that same scale.

To lead responsibly as an AI-forward technologist, build on a strong foundation of data governance and visibility first. Understanding the scope and permissions of agents, knowing what data resides on systems that interact with other AI tools and infrastructure, and keeping a human in the loop to validate actions will all reduce the risk of unexpected data loss through misconfiguration.

Data privacy in the era of AI requires a clear, accurate, real-time answer to the questions, “What AI agents exist in my environment? What data/systems can they access? Under what permissions can they access systems? And do I have governance and controls to ensure autonomous workflows and agentic actions can be traced and audited with confidence?”
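
As one concrete illustration of the inventory-and-audit posture Bischoping describes, the sketch below registers agents with explicit scopes, gates sensitive scopes behind a named human approver, and records every decision for later audit. It is a minimal Python sketch; the class names, scopes and fields are hypothetical illustrations, not Tanium functionality.

```python
# Minimal sketch: an agent register with scoped permissions, a
# human-in-the-loop gate for sensitive scopes, and an audit trail.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AgentRecord:
    agent_id: str
    owner: str                    # the accountable human
    allowed_scopes: frozenset     # everything this agent may do
    requires_approval: frozenset  # scopes that need a human sign-off

@dataclass
class AuditEvent:
    timestamp: str
    agent_id: str
    scope: str
    approved_by: str | None
    allowed: bool

class AgentRegistry:
    def __init__(self):
        self._agents: dict[str, AgentRecord] = {}
        self.audit_log: list[AuditEvent] = []

    def register(self, record: AgentRecord) -> None:
        """Answer 'what AI agents exist in my environment?' by construction."""
        self._agents[record.agent_id] = record

    def authorise(self, agent_id: str, scope: str,
                  approved_by: str | None = None) -> bool:
        """Check a requested action against the agent's registered scopes.
        Unknown agents are denied outright; sensitive scopes additionally
        require a named human approver. Every decision is logged."""
        record = self._agents.get(agent_id)
        allowed = (
            record is not None
            and scope in record.allowed_scopes
            and (scope not in record.requires_approval
                 or approved_by is not None)
        )
        self.audit_log.append(AuditEvent(
            timestamp=datetime.now(timezone.utc).isoformat(),
            agent_id=agent_id, scope=scope,
            approved_by=approved_by, allowed=allowed))
        return allowed

registry = AgentRegistry()
registry.register(AgentRecord(
    agent_id="ticket-triage-bot", owner="j.smith",
    allowed_scopes=frozenset({"read:tickets", "write:tickets"}),
    requires_approval=frozenset({"write:tickets"})))

registry.authorise("ticket-triage-bot", "read:tickets")   # True
registry.authorise("ticket-triage-bot", "write:tickets")  # False: no approver
registry.authorise("ticket-triage-bot", "write:tickets",
                   approved_by="j.smith")                  # True
```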


Sam Salehi
Managing Director ANZ at Qualys

On Data Privacy Day, enterprises should confront a modern reality: we’re handing over more data than we realise – not in a single breach moment, but through thousands of fast, everyday decisions.

LLMs have made “copy, paste, prompt” the new workflow. Teams drop documents, code, incident notes, customer details and internal strategy into tools that feel helpful – even when they sit outside approved environments. Shadow IT has upgraded to shadow AI, creating a risk surface security teams can’t properly see, govern, or audit.

At the same time, attackers are using AI to scale what already works. Phishing and deepfakes are more convincing, and the line between real and fake is blurring at speed – making privacy and security inseparable.

The response shouldn’t be a blanket ban. Enterprises need to treat AI like any material risk surface: know what’s being used, control what’s being shared, and enforce guardrails based on business context – with approved pathways, strong access controls, clear handling rules and continuous monitoring.

The fundamentals still apply. The attack surface is just more conversational now.

And here’s the catch: the next privacy incident won’t always look like a breach. Sometimes it’ll look like productivity.
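
To make the “approved pathways” idea concrete, here is a deliberately simplified Python sketch of a pre-submission guardrail: prompts are screened for obviously sensitive patterns before they leave an approved environment, and anything caught is logged for monitoring. The patterns and function names are illustrative assumptions; real deployments would sit this inside an AI gateway with trained DLP classifiers, not three regular expressions.

```python
# Simplified sketch of a prompt-screening guardrail. Patterns are
# crude illustrations; production DLP uses trained classifiers.
import re

SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
}

def screen_prompt(text: str) -> tuple[bool, list[str]]:
    """Return (allowed, findings); block the prompt if anything matches."""
    findings = [name for name, pattern in SENSITIVE_PATTERNS.items()
                if pattern.search(text)]
    return (not findings, findings)

allowed, findings = screen_prompt(
    "Summarise this incident. Customer contact: jane@example.com")
if not allowed:
    # Route to an approved internal pathway instead, and record the event
    # so security teams can see and govern the risk surface.
    print(f"Blocked: prompt contains {findings}")
```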


Thomas Fikentscher
Area Vice President ANZ at CyberArk

As AI systems move from analysis to autonomous decision-making, Data Privacy Day is no longer just about how data is collected or stored – it’s about accountability. Organisations are deploying AI into high-impact environments faster than governance frameworks can keep up, raising hard questions around liability, data quality and oversight when AI-driven systems produce unintended consequences. While the scale of what AI can enable is compelling, there is a growing responsibility gap as AI decisions increasingly affect people, outcomes and trust.

For organisations, the priority must be securing AI at the point where privacy risk is highest: the AI agent itself. These agents operate with speed, scale, and access that often exceed human users, making them a new class of highly privileged identity. Treating AI agents as trusted software rather than privileged identities is a very risky endeavour. In a hybrid world of human and machine collaboration, agentic AI security becomes a core privacy control – requiring least-privilege access, continuous monitoring and clear human accountability. With regulation still evolving, organisations must take the lead to protect privacy in the AI era.
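
One way to picture “AI agents as privileged identities” is a credential broker that issues short-lived, least-privilege tokens tied to an accountable human, as in the minimal Python sketch below. The broker, lifetimes and field names are hypothetical illustrations of the principle, not CyberArk’s product or API.

```python
# Sketch: treat an AI agent as a privileged identity. Credentials are
# scoped to one task, expire quickly, and name an accountable human.
import secrets
from datetime import datetime, timedelta, timezone

class AgentCredentialBroker:
    DEFAULT_TTL = timedelta(minutes=15)  # short-lived by default

    def __init__(self):
        self.issued = []  # audit trail of every credential grant

    def issue(self, agent_id: str, scopes: set[str],
              accountable_human: str) -> dict:
        """Issue a short-lived, least-privilege credential for one task."""
        credential = {
            "token": secrets.token_urlsafe(32),
            "agent_id": agent_id,
            "scopes": sorted(scopes),    # only what this task needs
            "owner": accountable_human,  # clear human accountability
            "expires_at": datetime.now(timezone.utc) + self.DEFAULT_TTL,
        }
        self.issued.append(credential)   # feed for continuous monitoring
        return credential

    def is_valid(self, credential: dict, scope: str) -> bool:
        """Continuously re-validate: right scope, not expired."""
        return (scope in credential["scopes"]
                and datetime.now(timezone.utc) < credential["expires_at"])

broker = AgentCredentialBroker()
cred = broker.issue("report-builder-agent", {"read:finance-db"},
                    accountable_human="a.chen")
assert broker.is_valid(cred, "read:finance-db")
assert not broker.is_valid(cred, "write:finance-db")  # least privilege holds
```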


Olly Stimpson
Senior Manager, PAM Transformation ANZ at CyberArk

In an increasingly chaotic world, the idea of ‘taking control’ is no longer just an aspiration but a business imperative, particularly when it comes to handling data.

While we struck an optimistic tone on Data Privacy Day last year, 2025 regrettably saw several high-profile incidents across Australia and New Zealand in which significant volumes of personal data were compromised. Many of these attacks didn’t rely on new or exotic techniques but on familiar weaknesses – social engineering, credential misuse, and gaps in identity and access controls – underscoring the lack of ‘control’ that too often exists in corporate technology platforms. The result has been the same: widespread data exposure and a sharp erosion of trust.

At the heart of many of these incidents is a growing disconnect between how organisations operate today and how access is governed. Privileged and sensitive access to data is no longer confined to a small group of administrators. It underpins cloud services, third-party access, automated workloads and increasingly, machine-driven processes. When privileges sprawl across users, sessions, tokens and systems without visibility or control, attackers don’t need to break in – they simply log in and move laterally through trusted pathways.

This matters even more as organisations accelerate AI adoption. The rush to implement AI – and reap the rewards it promises – is increasingly colliding with a lack of control that is already evident across many environments. AI initiatives don’t replace existing access models – they sit on top of them, inheriting the same privilege gaps and blind spots. Without strong, modern privileged access management in place, AI becomes a force multiplier for risk, increasing the speed, scale and potential impact of identity-driven attacks.

In this context, privacy, security and access can no longer be treated as separate concerns. Strengthening PAM foundations is not just about reducing today’s exposure – it’s about ensuring organisations can adopt AI and automation without amplifying the very risks they are already struggling to contain.


Gary Savarino
Identity Strategist for APAC at SailPoint

On Data Privacy Day, organisations across Australia and New Zealand face a simple but uncomfortable question: do we really understand who, or what, has access to our most sensitive data?

We’ve entered a new era of enterprise complexity. AI agents now act autonomously, machine identities are multiplying, and sensitive data is constantly moving between systems, people and services. The security perimeters organisations once relied on, including networks, departments and firewalls, no longer hold.

Attackers understand this shift. Increasingly, they are not exploiting new technical vulnerabilities, but walking straight through the front door using compromised, over-privileged or poorly governed identities.

Recent SailPoint research underscores the scale of the problem. While 82 per cent of organisations are already using AI agents, fewer than half have governance in place. These agents access sensitive data every day, often with limited oversight, and in many cases operate beyond their intended scope. The result is a widening gap between access and accountability, something traditional identity and access controls were never designed to close.

That risk is compounded by how identity is still managed in many organisations. Static access models, manual controls and infrequent reviews cannot keep pace with environments where identities are created dynamically and operate at machine speed. In this context, compliance alone is no longer enough. Organisations may meet regulatory requirements, yet still lack real control over who, or what, can access their most sensitive data at any given moment.

Data privacy in 2026 is no longer just about protecting information at rest. It is about understanding access in real time, reducing unnecessary privilege and adapting controls as risk changes. Without that shift, the rapid adoption of AI will accelerate exposure, not innovation.

It is time to move from static identity management to adaptive identity. That means treating identity security as the control layer for data privacy by unifying identity, data and security, continuously validating access, and delivering context-aware protection as risk evolves.

Adaptive Identity helps organisations lead with confidence, innovation, and trust by reducing standing privilege and providing visibility into relationships between human and non-human identities. It is the way forward that will define the next era of enterprise security: security that moves as fast as the enterprise it protects.
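
As a rough illustration of what an adaptive, context-aware access decision could look like, the Python sketch below risk-scores each request from current context and adapts the control (allow, step-up, deny) rather than relying on a static entitlement. The signals, weights and thresholds are invented for illustration and are not SailPoint’s model.

```python
# Sketch: adaptive access decisions. Each request is scored against
# current context and re-evaluated every time, not granted statically.
from dataclasses import dataclass

@dataclass
class AccessContext:
    identity_type: str      # "human" or "machine"
    data_sensitivity: int   # 0 (public) .. 3 (restricted)
    unusual_time: bool      # outside the identity's normal pattern
    recent_review_days: int # days since access was last certified

def risk_score(ctx: AccessContext) -> int:
    """Combine context signals into a simple additive risk score."""
    score = ctx.data_sensitivity
    score += 2 if ctx.unusual_time else 0
    score += 1 if ctx.recent_review_days > 90 else 0  # stale certification
    score += 1 if ctx.identity_type == "machine" else 0
    return score

def decide(ctx: AccessContext) -> str:
    """Adapt the control to the risk rather than a static yes/no."""
    score = risk_score(ctx)
    if score <= 2:
        return "allow"
    if score <= 4:
        return "step-up"  # require re-authentication or human approval
    return "deny"

# An AI agent touching restricted data off-pattern is challenged, not waved in.
print(decide(AccessContext("machine", data_sensitivity=3,
                           unusual_time=True, recent_review_days=120)))  # deny
```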

David Hollingworth

David Hollingworth has been writing about technology for over 20 years, and has worked for a range of print and online titles in his career. He is enjoying getting to grips with cyber security, especially when it lets him talk about Lego.
