Powered by MOMENTUM MEDIA
cyber daily logo

Breaking news and updates daily. Subscribe to our Newsletter

Breaking news and updates daily. Subscribe to our Newsletter X facebook linkedin Instagram Instagram

AI poses cyber risk, says British experts

The discussion around the dangers of AI and the cyber risks they create continues, with British officials issuing a warning that AI tools can be fooled into being harmful.

user icon Daniel Croft
Wed, 30 Aug 2023
AI poses cyber risk says British experts
expand image

The British National Cyber Security Centre (NCSC) has said the full risks that AI tools present and their ability to replicate human behaviour had not been fully realised by experts, and that large language models could be used in malicious ways, particularly when implemented with customer service, sales, and other business tasks.

Chatbots, which are the usual medium in which AI is utilised at this stage, are also highly risky, with researchers finding ways to get around the rules put in place by developers that limit them from performing harmful tasks, such as writing malicious code or relaying dangerous information.

The NCSC warning was issued via two blog posts on its website.

============
============

In the first blog post, “Exercise caution when building off LLMs”, the NCSC has said that the jump from LLMs from machine learning (ML) means that there is a massive lack of understanding of the risks.

“The challenge with LLMs is that, although fundamentally still ML, LLMs (having being trained on increasingly vast amounts of data) now show some signs of more general AI capabilities,” says the NCSC’s David C.

“Creators of LLMs and academia are still trying to understand exactly how this happens and it has been commented that it’s more accurate to say that we ‘grew’ LLMs rather than ‘created’ them.

“It may be indeed more useful to think of LLMs as a third entity that we don’t yet fully understand, rather than trying to apply our understanding of ML or AGI.”

One of the issues that researchers have already detected is that LLMs struggle to make a distinction between an instruction and the data inputted to assist in completing the instruction.

“Consider a bank that deploys an ‘LLM assistant’ for account holders to ask questions, or give instructions about their finances,” the post continues.

“An attacker might be able send a user a transaction request, with the transaction reference hiding a prompt injection attack on the LLM. When the user asks the chatbot, ‘Am I spending more this month?’, the LLM analyses transactions, encounters the malicious transaction, and has the attack reprogram it into sending user’s money to the attacker’s account.”

The other major concern with the development of AI chatbots is the risk that they could be manipulated and corrupted with the input of loaded training data.

The NCSC has said that to defeat data poisoning attacks and prompt injection, systems need to be designed from the ground up with security in mind.

“What we can do is design the whole system with security in mind,” said Martin R.

“That is, by being aware of the risks associated with the ML component, we can design the system in such a way as to prevent exploitation of vulnerabilities leading to catastrophic failure.”

Daniel Croft

Daniel Croft

Born in the heart of Western Sydney, Daniel Croft is a passionate journalist with an understanding for and experience writing in the technology space. Having studied at Macquarie University, he joined Momentum Media in 2022, writing across a number of publications including Australian Aviation, Cyber Security Connect and Defence Connect. Outside of writing, Daniel has a keen interest in music, and spends his time playing in bands around Sydney.

cd intro podcast

Introducing Cyber Daily, the new name for Cyber Security Connect

Click here to learn all about it
newsletter
cyber daily subscribe
Be the first to hear the latest developments in the cyber industry.