
Tesserent boss warns of AI tools in the hands of hackers

ChatGPT and its AI-powered brethren are a boon to many industries – including criminal ones.

David Hollingworth
Tue, 31 Oct 2023

Hot on the heels of its acquisition by Thales Australia, Tesserent’s managing partner, Patrick Butler, has issued a stern warning about the growing use of artificial intelligence (AI) tools by cyber criminals and threat actors.

While US President Joe Biden has just signed a comprehensive executive order on the uptake of AI in government, particularly regarding security, the AI tools in the hands of criminals are not constrained by any government or industry mandate.

WormGPT seems to be the tool of choice, but others are in common use and available online.


“Malicious parties can go to criminal forums and rent access to WormGPT,” Butler said in a statement. “They can then leverage this tool to craft convincing phishing emails in different languages, further increasing their ability to execute identity theft, business email compromise, and to compromise system access.”

“We’re seeing malicious generative AI being used to create new malware variants that are more difficult for some traditional tools to detect. These platforms can even assist criminals in exploiting published vulnerabilities,” Butler said.

That said, the ready availability of such tools does work in security researchers’ favour: they can be acquired and studied to understand how they work and what they’re capable of. At the same time, developers need to be aware that using their own AI tools to write code could well backfire.

“Organisations must ensure that their defences are ready for better-designed threats and the increase of AI-assisted attacks,” Butler said.

“While some legitimate AI tools can be used to conduct software code reviews, developers should be discouraged from doing this as their code may be used to train AI models that criminals gain access to, giving them further intelligence into organisational systems. This should be in addition to following established standards such as the Essential Eight, NIST, ISO 27001 and others.”

David Hollingworth

David Hollingworth has been writing about technology for over 20 years, and has worked for a range of print and online titles in his career. He is enjoying getting to grips with cyber security, especially when it lets him talk about Lego.
