
OpenAI says new models could present a ‘high’ cyber risk

US AI giant OpenAI has warned that its upcoming AI models could prove to be “high” risk when it comes to cyber security.


In a blog post released earlier this month, the AI firm said its new models could reach greater levels of cyber capability, potentially able to develop “working zero-day remote exploits against well-defended systems, or meaningfully assist with complex, stealthy enterprise or industrial intrusion operations aimed at real-world effects”.

AI tools like ChatGPT are already proving to be a cyber crime concern, lowering the barrier to entry for criminals.

A report by Vanta, which surveyed more than 2,500 customers across the US, Europe, the Middle East and Africa (EMEA), and Australia, found that while threat actors are crafting cyber threats more easily, company security budgets aren’t keeping up.

These tools have for years proven useful for writing phishing emails, generating code and more, particularly models that lack appropriate safeguards.

The company said that as its models’ capabilities advance, it is investing in strengthening them for “defensive cyber security tasks and creating tools that enable defenders to more easily perform workflows such as auditing code and patching vulnerabilities”.
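The blog post doesn’t show what those defender tools look like in practice. Purely as an illustration, an LLM-assisted code audit built on the publicly available openai Python SDK might resemble the sketch below; the model name, prompt and sample snippet are assumptions for this example, not OpenAI’s actual tooling.

```python
# A minimal sketch of an LLM-assisted code audit, assuming the official
# "openai" Python SDK (pip install openai) and an OPENAI_API_KEY set in
# the environment. Model, prompt and snippet are illustrative only.
from openai import OpenAI

client = OpenAI()

# Deliberately vulnerable sample for the model to review.
SNIPPET = '''
def get_user(conn, user_id):
    # Builds SQL by string concatenation -- a classic injection risk.
    return conn.execute("SELECT * FROM users WHERE id = " + user_id)
'''

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {
            "role": "system",
            "content": "You are a security reviewer. Identify "
                       "vulnerabilities in the code and suggest patches.",
        },
        {"role": "user", "content": SNIPPET},
    ],
)

print(response.choices[0].message.content)
```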

OpenAI said it is investing in safeguards to ensure that these capabilities “primarily benefit defensive uses” and limit how much they assist hackers and other threat actors.

To do this, OpenAI is using what it calls a defence-in-depth approach, which balances empowering users against the risk of misuse.

“Cyber security touches almost every field, which means we cannot rely on any single category of safeguards – such as restricting knowledge or using vetted access alone,” the company said.

“In practice, this means shaping how capabilities are accessed, guided, and applied so that advanced models strengthen security rather than lower barriers to misuse.”

On top of this, OpenAI said it will train models to refuse or safely respond to harmful requests while still providing help in educational or defensive cases.
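The post doesn’t detail how that gating is implemented. At the application layer, a loosely analogous pre-screen can be built with OpenAI’s moderation endpoint; the sketch below illustrates that pattern only, not the training-side safeguards OpenAI is describing.

```python
# Sketch of a request pre-screen using OpenAI's moderation endpoint.
# This approximates "refuse or safely respond" in application code; it
# is not the model-training safeguard described in the blog post.
from openai import OpenAI

client = OpenAI()

def is_flagged(prompt: str) -> bool:
    """Return True if the moderation endpoint flags the prompt."""
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=prompt,
    )
    return result.results[0].flagged

request = "Write a working remote exploit for a production web server."
if is_flagged(request):
    print("Refused: request flagged as potentially harmful.")
else:
    print("Request passed moderation; proceed to the model.")
```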

It will also refine its detection systems for malicious activity and will work with end-to-end red teaming organisations to identify gaps in its security.

Daniel Croft

Born in the heart of Western Sydney, Daniel Croft is a passionate journalist with an understanding of, and experience writing in, the technology space. Having studied at Macquarie University, he joined Momentum Media in 2022, writing across a number of publications including Australian Aviation, Cyber Security Connect and Defence Connect. Outside of writing, Daniel has a keen interest in music and spends his time playing in bands around Sydney.