Health check: Securing patient data in the age of rising cyber threats and AI integration

As healthcare organisations continue to be a prime target for hackers, OpenAI’s announcement of ChatGPT Health delivers another challenge in securing patient data and health outcomes. Here’s what healthcare providers need to know.

Wed, 21 Jan 2026

A recent audit of NSW Health found some worrying issues regarding “systemic non-compliance” with cyber security obligations.

Of the four local health districts studied in the audit – one metropolitan, one outer-metropolitan, and two regional – three did not have a cyber security plan in place, and none had implemented the NSW Cyber Security Policy.

“NSW Health is not effectively managing cyber security risks to clinical systems that support healthcare delivery in local health districts,” the Audit Office of NSW’s report said.

“Systemic non-compliance with NSW government cyber security requirements, including maintaining adequate cyber security response plans, business continuity planning and disaster recovery for cyber security incidents, means that local health districts could not demonstrate that they are prepared for, or resilient to, cyber threats.”

The report is particularly damning in the wake of a torrid year of breaches impacting Australian patient data. From the Spectrum Medical Imaging hack in January 2025, which saw patient scans and personal information compromised, to the ransomware attack on the Genea fertility clinic that saw patient diagnoses and treatments posted to the dark web, the facts back up the Audit Office of NSW’s findings.

Securing patient data is a challenge. Many healthcare organisations rely on legacy systems and lack dedicated security resources, while the sensitive nature of the data makes it a goldmine for financially motivated hackers looking for a payday.

“Unlike financial data, which has a limited shelf life because it is relatively easy to change, leaked medical records are permanent and therefore hold long-term value,” Matt Green, principal threat analyst at Rapid7, told Cyber Daily in the wake of a ransomware attack on a Victorian medical centre in early 2025.

“Medical records from specialised clinics, such as IVF, are highly prized by cyber criminals for their mix of medical and personal data. This data can fuel targeted scams, such as tailored phishing emails or identity theft, and supports direct extortion by threatening to expose sensitive conditions, exploiting victims’ emotions and finances.”

And the hacks are coming faster than ever. Specialist healthcare cyber security firm Fortified Health Security’s 2026 Horizon Report paints an alarming picture. In 2024, the US healthcare sector alone experienced 237 data breaches, but in 2025, that number rose by 112 per cent to 502.

The scale of individual breaches was smaller, however: while the number of incidents rose dramatically, fewer patients were impacted overall.

“This represents progress in limiting breach size, but also signals a new phase of cyber risk, where operational resilience, response capacity and workforce sustainability matter as much as traditional data protection measures,” the report said.

Meeting the healthcare challenge

Yossi Altevet, chief technical officer of AI security firm DeepKeep, told Cyber Daily that evolving data regulations and the very serious consequences of any breach of patient data, combined with the lack of proper security controls, create a perfect storm for healthcare providers.

“One of the biggest issues so far has been the complexity and fragmentation of healthcare’s existing IT systems. Many healthcare providers rely on outdated, insecure infrastructure, alongside unreliable cloud technologies, leaving wide gaps for vulnerabilities to be exploited,” Altevet said.

“The integration of AI-driven tools expands the threat surface even further, providing another entry point for adversarial attacks and opening the opportunity for data leakage, misinformation and hallucination, which is a major problem in the healthcare sector.”

But for all of that, there are several steps that can be taken to secure healthcare networks and protect the important data they store.

“The first step is ensuring comprehensive data protection across all digital systems, which includes strong access controls, encryption, and ensuring that only authorised users can interact with sensitive data,” Altevet said.

“Strong guardrails that can block malicious attackers are fundamental to keeping the network safe, as well as keeping data under encryption, so that even if data is accessed, it remains unreadable without proper access controls.”
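
To make that concrete, here is one minimal sketch of what that first step could look like in practice – patient records encrypted at rest, with a role check gating decryption. It is written in Python against the widely used cryptography package, and the role names and key handling are illustrative assumptions, not a recommended implementation.

    # Minimal sketch: encrypt patient records at rest and gate decryption
    # behind a role check. Roles and key handling are illustrative only.
    from cryptography.fernet import Fernet

    AUTHORISED_ROLES = {"clinician", "records_officer"}  # hypothetical roles

    key = Fernet.generate_key()  # in production, held in a managed key store
    fernet = Fernet(key)

    def store_record(plaintext: bytes) -> bytes:
        """Encrypt a record before it touches storage."""
        return fernet.encrypt(plaintext)

    def read_record(ciphertext: bytes, role: str) -> bytes:
        """Decrypt only for authorised roles."""
        if role not in AUTHORISED_ROLES:
            raise PermissionError(f"role {role!r} may not read patient data")
        return fernet.decrypt(ciphertext)

    token = store_record(b"diagnosis: ...")
    print(read_record(token, role="clinician"))

Even if an attacker exfiltrates the stored token, it stays unreadable without the key – the property Altevet describes.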

Continuous monitoring is also essential for understanding who is accessing patient data, alongside regular security assessments and reviews of data protection guidelines.
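
In its simplest form, that kind of monitoring rests on a structured audit trail of every access. The hypothetical Python sketch below uses only the standard library; the field names and log destination are assumptions for illustration.

    # Minimal sketch of an access audit trail, standard library only.
    # Field names and the log destination are illustrative assumptions.
    import datetime
    import json
    import logging

    audit_log = logging.getLogger("patient_access")
    audit_log.setLevel(logging.INFO)
    audit_log.addHandler(logging.FileHandler("patient_access_audit.jsonl"))

    def log_access(user_id: str, record_id: str, action: str) -> None:
        """Append one structured line per access, so reviews can replay who saw what."""
        audit_log.info(json.dumps({
            "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "user": user_id,
            "record": record_id,
            "action": action,  # e.g. "read", "update", "export"
        }))

    log_access("u-1042", "mrn-7781", "read")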

Enter ChatGPT Health

Earlier this month, OpenAI announced the introduction of ChatGPT Health, a “dedicated experience that securely brings your health information and ChatGPT’s intelligence together, to help you feel more informed, prepared, and confident navigating your health”.

OpenAI said in its announcement that more than 230 million people turn to ChatGPT for health advice every week, so there’s clearly a use case. But what about the risks of relying upon AI to drive positive healthcare outcomes?

“Healthcare organisations should welcome new advancements in AI, but not without scrutiny and proactive, dedicated AI security. As AI tools take on more decision-making roles and interact with sensitive medical information, they become attractive targets for adversarial attacks designed to manipulate models, extract data, or compromise outcomes,” Altevet said.

“At the same time, the risk of data leakage – whether through model training, inference, or third-party integrations – continues to grow. Without proper safeguards, these vulnerabilities can undermine trust, expose patient data, and introduce systemic risk into clinical and operational workflows.”

The speed of AI adoption – by patients and professionals alike – brings further challenges, as uptake outpaces the security controls meant to keep it in check.

“To mitigate these risks, healthcare providers must implement dedicated, robust security frameworks across their AI ecosystems,” Altevet said.

“The first step should be carefully selecting and testing any AI model before deploying it in day-to-day operations. This sets a standard of responsibility and accountability, which is essential when AI enters a highly sensitive environment.”
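
At its most basic, that selection and testing step is an acceptance gate: the model must clear a threshold on a vetted test set before it goes anywhere near day-to-day operations. In the hypothetical Python sketch below, the model, the test cases and the threshold are all placeholders – in reality they would be set by clinical governance.

    # Minimal sketch of a pre-deployment gate: the model must clear an
    # accuracy threshold on a vetted test set. Everything here is a placeholder.
    from typing import Callable

    ACCEPTANCE_THRESHOLD = 0.95  # illustrative; set by clinical governance

    def passes_acceptance(model: Callable[[str], str],
                          test_cases: list[tuple[str, str]]) -> bool:
        """Return True only if the model answers enough vetted cases correctly."""
        correct = sum(1 for question, expected in test_cases
                      if model(question) == expected)
        return correct / len(test_cases) >= ACCEPTANCE_THRESHOLD

    # Toy stand-in model and cases, for illustration only:
    toy_model = lambda question: "escalate to clinician"
    cases = [("chest pain, shortness of breath", "escalate to clinician"),
             ("repeat script request", "route to GP")]
    print(passes_acceptance(toy_model, cases))  # False: 1 of 2 is below threshold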

As with any security risk, real-time, continuous monitoring is essential to ensure compliance and accuracy – a process that also helps protect patient data.
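
One concrete form that monitoring can take is screening model output for anything shaped like a patient identifier before it leaves the system. The Python sketch below is illustrative only – the patterns are assumptions and fall well short of a complete filter for personal information.

    # Minimal sketch of output monitoring: redact anything that matches a
    # known identifier shape. Patterns are illustrative assumptions only.
    import re

    IDENTIFIER_PATTERNS = [
        re.compile(r"\bmrn-\d{4,}\b", re.IGNORECASE),  # hypothetical record numbers
        re.compile(r"\b\d{4} \d{5} \d\b"),             # Medicare-style number shape
    ]

    def screen_output(text: str) -> str:
        """Replace identifier-shaped substrings before the response is shown."""
        for pattern in IDENTIFIER_PATTERNS:
            text = pattern.sub("[REDACTED]", text)
        return text

    print(screen_output("Patient mrn-77812 is due for review."))
    # -> "Patient [REDACTED] is due for review."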

“Proactive security measures will be essential to fostering trust in AI’s role in healthcare, ensuring that AI adoption doesn’t come at the expense of patient safety, data protection, and healthcare organisations’ reputations,” Altevet said.

The last word

Finally, it’s impossible to talk about the intersection of artificial intelligence and healthcare without addressing the ethics of the matter.

Mercy Health – an Australian healthcare provider founded last century by the Sisters of Mercy, a Catholic religious institute – was the first Australian provider to endorse the Rome Call for AI Ethics, a document aimed at supporting an ethical approach to AI and responsibility for its proper use.

Dr Paul Jurman, chief information and digital transformation officer at Mercy Health, said AI comes with both real benefits and tangible ethical risks.

“AI has real potential to support people to better understand their health and navigate complex systems, particularly where access is limited,” Jurman said.

“But it must always be used ethically, transparently and with strong safeguards around privacy, accuracy and accountability.”

In Jurman’s opinion, AI can assist with triage and diagnosis, but a human must always be in the loop.

“Decisions about patient treatment carry moral weight and responsibility,” Jurman said.

“That must remain the purview of a highly trained, accountable human being.”

David Hollingworth

David Hollingworth has been writing about technology for over 20 years, and has worked for a range of print and online titles in his career. He is enjoying getting to grips with cyber security, especially when it lets him talk about Lego.
