In a document titled “Industrial Policy for the Intelligence Age”, the company said the world is heading towards superintelligence, meaning artificial intelligence that is greater than human intelligence. It believes the benefits of AI outweigh the issues, but the world needs to be prepared.
“In just a few years, AI has progressed from systems capable of fast, narrow tasks to models that can perform general tasks people used to need hours to do. Now, we’re beginning a transition toward[s] superintelligence: AI systems capable of outperforming the smartest humans even when they are assisted by AI,” the company said.
“While we strongly believe that AI’s benefits will far outweigh its challenges, we are clear-eyed about the risks – of jobs and entire industries being disrupted; bad actors misusing the technology; misaligned systems evading human control; governments or institutions deploying AI in ways that undermine democratic values; and power and wealth becoming more concentrated instead of more widely shared.
“Indeed, we highlight these risks here to raise awareness of the need for policy solutions to address them. Unless policy keeps pace with technological change, the institutions and safety nets needed to navigate this transition could fall behind. Ensuring that AI expands access, agency, and opportunity is a central challenge as we move towards superintelligence.”
OpenAI has made a number of proposals on how the world should proceed and prepare for AI and superintelligence in order to mitigate the risks associated with the technology.
The company has said that AI carries a real risk of widening inequality and has suggested that an open economy whose rewards are broadly shared is the ideal response, particularly as AI disrupts jobs and reshapes industries.
“These changes will not arrive evenly. Without thoughtful policies, AI could widen inequality by compounding advantages for those already positioned to capture the upside while communities that begin with fewer resources fall further behind, excluded from new tools, new industries, and new opportunities,” the company said.
“There is also a risk that the economic gains concentrate within a small number of firms like OpenAI, even as the technology itself becomes more powerful and widely used. Workers using AI might well agree that it’s increasing their productivity without believing they’re seeing the benefits.”
OpenAI also raised concerns regarding cyber security and the potential for the technology to be abused by threat actors.
“As AI systems become more capable and more embedded across the economy, they may introduce new vulnerabilities alongside new abundance. Some systems may be misused for cyber or biological harm. Others may create new pressures on social and emotional wellbeing, including for young people, if deployed without adequate safeguards. AI systems may act in ways that are misaligned with human intent or operate beyond meaningful human oversight,” it said.
“And as advanced AI reshapes how people, organisations, and governments operate, it may place new strain on the institutions and norms that societies rely on to remain stable, secure, and free.”
OpenAI said this calls for building a resilient society in which users are prepared and governments have regulatory and other safeguards in place.
“We offer these ideas not as fixed answers but as a starting point for a broader conversation about how to ensure that AI benefits everyone. That conversation should be inclusive and ongoing – engaging governments, companies, researchers, civil society, communities, and families – and should be mediated through democratic processes that give people real power to shape the AI future they want. It also needs to expand globally – bringing in the perspectives of cultures, societies, and governments around the world,” the company said.