You have 0 free articles left this month.
Register for a free account to access unlimited free content.

Powered by MOMENTUMMEDIA

For breaking news and daily updates, subscribe to our newsletter.

Artificial intelligence is a powerful tool – but it can also be a powerful threat.

Artificial intelligence is revolutionising the workplace, but with great power, comes great risk. Lumify Work’s Lead Cyber Security Instructor, Louis Cremen talks about the rapid pace of AI improvement and the challenge of keeping systems secure while avoiding AI-powered attacks.

Artificial intelligence is a powerful tool – but it can also be a powerful threat.
expand image

Cyber Daily’s Editor, Liam Garman, recently sat down to record a podcast with Lumify Work’s Lead Cyber Security Instructor, Louis Cremen. The topic was the rapid rise of workplace AI, and the benefits and perils of the technology, from AI poisoning to AI-powered phishing capable of impersonating senior staff. Here’s part of the conversation, and you can listen to the full podcast here.

Cyber Daily: Every press release we get mentions AI in one form or another. Every company pretends to have it. It's such a catchphrase now that a lot of people say they have it, but we don't necessarily believe some of the marketing hype around it. Why is AI becoming such a focus for governance and security plans for really every company in every industry across the nation?

Louis Cremen: Yeah, we've been talking about AI and security for so many years. And you're right – how much of this is AI and how much of this is like some weird thing that's not actually AI-driven.

But in its current form, it's gotten so much better than it ever has. I remember looking at this 15, 20 years ago, and it's just so much better. We're building things with AI coding… I can't believe how fast that's gotten. It's enabling decisions, it's enabling humans. We've got so much more automation that has some AI base to it, and it's touching data that's all over our organisation, on the inside and externally.

But one of the challenges we're going to have, and we are seeing, is that when an AI system fails, it's going to fail fast, and the impact is going to be much larger than it may have been in the past. So when we talk about governance and AI, it's not just about protecting systems – which it's definitely about, because there's a lot of different AI attacks exist out there – but it's also about the integrity of the data that is going into these AI models and it's also the integrity decisions that are going to be made from the outputs of these different AI models.

Now we see AI having tons of different problems, some of them around biases, both human and statistical. In terms of governance, it's important that we have the right structure, having to ask the right questions to mitigate against a lot of these AI risks that you are seeing. From a security perspective, the attack surface is so much wider than even I imagined. I gave a short talk at a conference recently, and I was running up the slides and I just kept writing slides of new techniques, of things that people are doing with AI attacks – it is insane. The scope, the scale of what's possible against AI, whether it be things like data poisoning or prompt injection or stealing the model, the sort of parameters or data about the model or supply chain issues and normal security things. It is just insane what's going on.

So I guess at the end of the day, the point of AI governance is not about making it slower or slowing down innovation. It really is about trying to make sure that innovation happens in a safe way, in a responsible way, and I guess in a way that's sustainable.

Cyber Daily: So we've got three questions that we want to ask. I think it's very important that you started off by saying when AI fails, it'll fail hard. What do you mean by fail?

Louis Cremen: There are a few different ways that can be looked at.

One would be that – and this is a conversation I have with a lot of people that come see me – if you have an organisation that says, “Right, we're going to put all of our SharePoint, all of our material into this AI system so we can ask it through prompts”. And again, I think there's some merit to that from an innovation perspective. But if you put in, say, HR data, you can then find out a lot of information about people who work in the organisation.

Maybe you can find out the CEO's salary, which obviously a lot of people may want to find out, but the CEO doesn't want to necessarily have that available, or people's addresses. And when you ask an AI system, there's not that same authorisation of, “Oh, are you from HR?” or “You are okay, you can have this data” or “you're not or you can't have this data”. We don't have that sort of fine-tuned control to get access to data. That's one thing.

Another thing might be that we use certain APIs. We have this term, it's called unbounded consumption. It's basically the idea that you've used so many AI resources that the costs are significant, very similar to the cloud. And then if you've got decisions and automations that are being made with AI, then they can go rogue, they can lead to certain failures or can – even worse – lead to leakage of a lot of different data, a lot of corporate data, intellectual property, privacy data. And the more data we give it, the more data an attacker could potentially get out of it. And we're trying to feed AI with a lot of data right now.

Cyber Daily: The HR example is probably always the prime example where you have, if you were to use some of the more prolific mainstream solutions, and you are using that to navigate a company structure, if it makes your life easier, it makes the life of an attacker easier as well. Especially if we're talking about the world of credential theft.

In terms of AI attacks, you said you're adding slide after slide after slide and there are just so many. What are some of those attacks that they might not have considered but now they really have to?

Louis Cremen: Yeah, I'll start with the ones that are probably a bit more on the obvious side because we just keep seeing them.

I guess one of the biggest ones that I keep looking at are things like how AI is enabling more advanced phishing attacks. So this idea of an LLM doesn't make spelling mistakes, right? An LLM doesn't do the traditional things we used to look for in phishing. And so we're seeing attackers utilise LLMs for phishing in other languages. So attacks on Japanese companies, rather non English speaking countries, have gone up significantly because it's much easier to write a phishing attack. Now they can be trained on an individual like, “Hey, here's this person's LinkedIn page. How can you personalise the attack based on the language or how they think about the world?”

So we're definitely seeing a lot more of that, and it is getting more advanced. Unfortunately, in terms of scams, there are a few different ways of enabling it, but one is the AI summaries you see – some of those have been hijacked to have a malicious phone number or a malicious link in there that actually goes to a non-legitimate page.

Obviously, Deepfakes is a big one. There was an attack last year where someone imitated the chief financial officer, which seems to be a pretty common way of forming this attack. There was n one this year that was about half a million dollars in Singapore. I know there was one in Noosa in Australia, Noosa Council – they paid over $200 million due to a deepfake imitation like those.

Those are the ones we see all the time. We're seeing AI-generated malware, we're seeing LLMs that are being embedded into malware and looking around going, “Alright, I'm in this environment, write me a script that will bypass XYZ”. And so we're seeing more state and criminal groups using these different types of attacks. So I guess if I want to summarise some of these things, I would say that we're seeing a lot of threats from AI models. They're being used for deep fakes and stuff like this, or misinformation campaigns, and other threats that are using AI models. Things like prompt injection, which is something we looked at before, and threats to AI models.

So, being able to poison the actual data or make modifications to the data that's actually powering these AI systems, and I think there are probably lots of other ones. I can literally talk for a few days on this exact topic, but I think there's a lot of different ways we are seeing these attacks.

This was taken from a recent Cyber Uncut podcast. To listen to Lumify Work’s Louis Cremen, click here.

To find out more about Lumify Work’s services, click here.

Tags:
You need to be a member to post comments. Become a member for free today!
cyber daily discover
Lumify Group is Australasia's largest provider of corporate ICT, soft skills and digital skills...

Latest articles

newsletter
cyber daily subscribe
Be the first to hear the latest developments in the cyber industry.