
Boffins want a pause on AI — but one security expert thinks education and regulation are key

Last week a raft of industry experts put their names to an open letter calling for a pause on the current pace of AI development.

David Hollingworth
Mon, 03 Apr 2023

The signatories include the likes of Skype co-founder Jaan Tallinn, Apple co-founder Steve Wozniak, and politician Andrew Yang, alongside a number of artificial intelligence (AI) specialists. Elon Musk, who is struggling with his own recent IT purchase, was also a co-signatory.

The letter was published by the Future of Life Institute and essentially states that AI research is “out of control”, moving too fast and in ways that even the creators of these new AI systems cannot adequately predict.

“Therefore, we call on all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4,” the letter said. “This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”


But one security expert thinks that a pause is not the only answer to the challenges AI presents us with.

“AI research is moving at such a pace that we need to either regulate it or learn how to defend against its products before they get out of control,” said Jake Moore, global cyber security adviser at ESET, via email. “The answer not only lies in pausing the rapid growth but in adopting the correct regulation and working together in aligning the future.”

“To regulate AI, data quality must be regulated at the earliest stages, but problems could arise when different countries and regions adopt different policies. Deepfakes, for example, are a major cyber security threat and require attention, but regulation will be difficult to implement globally, and therefore awareness and education remain key.”

“More education and technology countermeasures will be required as cyber criminals are cleverly using these new techniques to attack people and businesses,” Moore said.

“With AI-powered phishing bound to grow in number and quality and deep learning autonomously generating quality fakes, we will soon require more forms of multilayered security to pass even the most trivial of transactions to prove identity.”

David Hollingworth

David Hollingworth has been writing about technology for over 20 years, and has worked for a range of print and online titles in his career. He is enjoying getting to grips with cyber security, especially when it lets him talk about Lego.
