
OpenAI combats risks of superintelligent AI with new framework

In an effort to curb the dangers presented by powerful and potentially threatening artificial intelligence (AI), ChatGPT maker OpenAI has announced the adoption of a “Preparedness Framework”.

Daniel Croft
Tue, 19 Dec 2023

In a blog post on its site, OpenAI said that the framework is designed to lay the foundations for safety when working on and with superintelligent AI models.

“The study of frontier AI risks has fallen far short of what is possible and where we need to be. To address this gap and systematise our safety thinking, we are adopting the initial version of our Preparedness Framework,” said OpenAI.

“It describes OpenAI’s processes to track, evaluate, forecast, and protect against catastrophic risks posed by increasingly powerful models.”


The framework lays out how the company will identify catastrophic risks and then decide how to proceed.

Catastrophic risks are those that could cause severe harm or death to many people, inflict billions of dollars in economic loss, or pose an existential threat.

The Preparedness Framework, which is currently in beta, introduces a number of measures with which OpenAI will evaluate the risk of AI models.

For instance, the company has introduced scorecards for its frontier models, which will be used throughout testing to analyse the potential dangers they create. These cover cyber security risks, the model’s autonomy, dangers relating to CBRN (chemical, biological, radiological and nuclear) threats, and persuasion.

The final score is the same as the highest risk score of any category.

There are four scores – low, medium, high and critical – each of which is carefully defined. Only models that score medium or lower can be deployed, while models that score high or lower can still be developed.
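As a rough illustration of this gating rule only (a minimal sketch; the category names, risk tiers and variable names below are hypothetical and do not reflect OpenAI’s actual tooling), the overall score is simply the maximum across the categories, and deployment and further development are gated on that maximum:

```python
from enum import IntEnum

# Hypothetical risk tiers mirroring the framework's four levels.
class Risk(IntEnum):
    LOW = 0
    MEDIUM = 1
    HIGH = 2
    CRITICAL = 3

# Hypothetical per-category scorecard for a frontier model.
scorecard = {
    "cyber_security": Risk.LOW,
    "model_autonomy": Risk.MEDIUM,
    "cbrn": Risk.MEDIUM,
    "persuasion": Risk.HIGH,
}

# The overall score equals the highest risk score of any category.
overall = max(scorecard.values())

# Gating rules as described in the article: deploy only at medium or
# lower; continue development only at high or lower.
can_deploy = overall <= Risk.MEDIUM
can_develop = overall <= Risk.HIGH

print(f"overall={overall.name}, deploy={can_deploy}, develop={can_develop}")
# -> overall=HIGH, deploy=False, develop=True
```

In this hypothetical example, a single high-risk category (persuasion) blocks deployment even though the other categories sit at medium or below, which is the point of taking the maximum rather than an average.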

OpenAI has also announced a preparedness team that, it said, will “drive technical work to examine the limits of frontier models’ capability, run evaluations, and synthesise reports”.

“This technical work is critical to inform OpenAI’s decision making for safe model development and deployment.

“We are creating a cross-functional Safety Advisory Group to review all reports and send them concurrently to leadership and the board of directors. While leadership is the decision-maker, the board of directors holds the right to reverse decisions,” it said.

The bolstering of the company’s safety measures around frontier AI models comes only weeks after a hiccup at the company resulted in chief executive Sam Altman being fired, only to be rehired days later.

The CEO was reportedly fired for not being candid about a potentially dangerous AI model known as Q* (pronounced Q Star), said to possess superintelligence, or intelligence greater than that of humans.

Daniel Croft

Born in the heart of Western Sydney, Daniel Croft is a passionate journalist with an understanding of, and experience writing in, the technology space. Having studied at Macquarie University, he joined Momentum Media in 2022, writing across a number of publications including Australian Aviation, Cyber Security Connect and Defence Connect. Outside of writing, Daniel has a keen interest in music and spends his time playing in bands around Sydney.
