
Op-Ed: AI deserves a better security conversation than the one we’re having now

There’s a big security conversation happening around enterprise use of artificial intelligence right now.

David Hollingworth
Thu, 31 Aug 2023

On one level, we should be thankful it is happening, because it shows the message about security is getting through: the best time to consider the security implications of an emerging technology is as it emerges, not down the track, when the voices of concern grow louder and genuine threats materialise.

The question is, “Is the current conversation about security in AI the ‘right’ one to be having right now?”

That seems unlikely.


As with many emerging technologies, the early conversation is dominated by a focus on the dangers – in this case, the potential for misuse of generative AI, or for threat actors harnessing AI to strengthen attacks.

In reality, threat actors don’t need to get this fancy. There are far more accessible and exploitable vulnerabilities than anything they might ask generative AI to help craft. To manage the rise of enterprise AI, security teams don’t need to get fancy either – a lot can be achieved today, for example, by using network detection and response (NDR) to address the real issue that AI poses – unconstrained use – while setting up baseline protections for the core of the business.

Addressing data security concerns

There is one realistic AI threat vector worth discussing right now, and that’s data security.

This is a conversation that can be informed today by analysing activity at the network level to determine current levels of AI-related traffic. By understanding which users are generating that traffic, security teams and business unit leaders can have more effective discussions with those users about whether their use is in line with the direction set out in internal AI policies.
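
To illustrate the kind of analysis involved, here is a minimal sketch that tallies AI-related DNS lookups per client from an exported query log. The log format, file name, and domain watchlist are all assumptions for illustration – a real deployment would work from whatever telemetry the monitoring platform actually exposes.

```python
# Minimal sketch: count DNS queries to generative-AI services per client.
# Assumes a CSV query log with rows of (timestamp, client_ip, query_name);
# both the log format and the domain watchlist are illustrative, not a
# feature of any particular NDR product.
import csv
from collections import Counter

# Hypothetical watchlist of AIaaS / generative-AI domains.
AI_DOMAINS = ("openai.com", "anthropic.com", "cohere.com")

def ai_queries_by_client(log_path: str) -> Counter:
    """Tally watchlisted AI-domain lookups, keyed by client IP."""
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.reader(f):
            _, client_ip, query = row[:3]
            if any(query == d or query.endswith("." + d) for d in AI_DOMAINS):
                hits[client_ip] += 1
    return hits

if __name__ == "__main__":
    for client, count in ai_queries_by_client("dns_queries.csv").most_common(10):
        print(f"{client}: {count} AI-related lookups")
```

Mapping client IPs back to named users would typically lean on the organisation’s identity or DHCP records.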

At a time when even governments are yet to issue formal guidelines governing generative AI use, organisations are treading a fine line: encouraging experimentation, but with some guardrails.

But those guardrails may be hard to enforce. Generative AI lends itself to use by individuals, outside any corporate oversight. Its adoption trajectory shares many characteristics of past “shadow” IT encroachments, such as the initially uncontrolled way that software-as-a-service made its way into organisations, with business units or teams circumventing central purchasing controls to get access to new tools faster. The barriers to entry for generative AI are even lower – this time, even a corporate credit card may not be required.

While blocking generative AI use outright on the corporate network is a possible course of action – already taken by government agencies with particularly sensitive data – this doesn’t allow the organisation to gauge present usage levels or uncover potential instances of misuse.

Some organisations have been able to use NDR for visibility into employees’ use of AI-as-a-service (AIaaS) and generative AI tools, like OpenAI’s ChatGPT. NDR shows devices and users on the networks that are connecting to external AIaaS domains, the amount of data employees are sharing with these services, and in some cases, the type of data and individual files that are being shared.
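
As a rough picture of what that visibility looks like, the sketch below sums outbound bytes per device and AIaaS domain from exported flow records. The JSON-lines record shape (src_ip, server_name, bytes_out) is an assumption for illustration; NDR products derive this kind of view from the traffic itself.

```python
# Minimal sketch: approximate "how much data is going to AI services, and
# from which devices" using exported flow records. The record shape is an
# illustrative assumption, not any vendor's export format.
import json
from collections import defaultdict

AI_DOMAINS = ("openai.com", "anthropic.com")  # hypothetical watchlist

def bytes_to_ai_services(flow_log: str) -> dict:
    """Sum outbound bytes per (source device, AI service domain)."""
    totals = defaultdict(int)
    with open(flow_log) as f:
        for line in f:
            rec = json.loads(line)
            server = rec.get("server_name", "")
            if any(server == d or server.endswith("." + d) for d in AI_DOMAINS):
                totals[(rec["src_ip"], server)] += rec.get("bytes_out", 0)
    return dict(totals)

if __name__ == "__main__":
    ranked = sorted(bytes_to_ai_services("flows.jsonl").items(),
                    key=lambda kv: kv[1], reverse=True)
    for (src, server), total in ranked:
        print(f"{src} -> {server}: {total / 1_048_576:.1f} MiB uploaded")
```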

NDR can’t stop the behaviour, but it allows security teams to home in on who is using AI. From there, they can ask their own questions to determine whether the usage is approved and, if not, to understand what has been done – and potentially what data has been fed into the models – before determining how best to proceed.

Setting a strong security foundation

NDR is also useful for establishing the baseline security hygiene needed to support broader generative AI use – assuming that’s on the agenda – and to mitigate risks that might flow from the technology’s use.

In any conversation about security risk, it’s important to understand what happens when a risk gets past the first line of defence. At this early stage, the first line of defence against AI misuse is policy enforcement. What matters, then, is having foundational security capable of detecting and responding when that policy isn’t followed.

Importantly, NDR can do that not just for AI-based risks but for any risks posed by an organisation’s use of any emerging technology.

Knowing what’s on the network at all times – which protocols are communicating, and where traffic is coming from and going to – is key to understanding the general nature of network traffic. That understanding, in turn, makes it possible to recognise, trace, and block anomalous patterns or connections.
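
One simple way to think about that baselining is sketched below: record which (host, destination port) pairs appear in historical flows, then flag pairs in new traffic that were never seen before. Real NDR baselining is far richer than this, and the flow-record shape is again an illustrative assumption.

```python
# Minimal sketch: flag hosts communicating on ports they have never used
# before, against a baseline built from historical flow records. Both the
# record shape and the port-pair model are simplifications for illustration.
import json

def seen_pairs(flow_log: str) -> set:
    """Collect every (source host, destination port) pair in the log."""
    pairs = set()
    with open(flow_log) as f:
        for line in f:
            rec = json.loads(line)
            pairs.add((rec["src_ip"], rec["dst_port"]))
    return pairs

def flag_anomalies(baseline: set, new_log: str):
    """Yield pairs in new traffic that are absent from the baseline."""
    with open(new_log) as f:
        for line in f:
            rec = json.loads(line)
            pair = (rec["src_ip"], rec["dst_port"])
            if pair not in baseline:
                yield pair

if __name__ == "__main__":
    baseline = seen_pairs("flows_last_30_days.jsonl")
    for src, port in flag_anomalies(baseline, "flows_today.jsonl"):
        print(f"anomaly: {src} -> port {port} (not in baseline)")
```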

In addition, NDR plays a kind of cyber “janitorial” role in organisations, helping with the general clean-up of the core network environment: eliminating exposure from old, exploitable protocols, cleartext passwords, and other weaknesses that a threat actor might seek to make use of.
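
The sketch below gives a flavour of that janitorial sweep: flagging hosts still speaking legacy cleartext protocols, identified here simply by well-known ports. Port-based identification is a simplification – NDR products classify protocols from the traffic itself – and the record shape is assumed for illustration.

```python
# Minimal sketch: flag hosts still using legacy cleartext protocols, using
# well-known destination ports as a stand-in for real protocol analysis.
import json

# Well-known ports for protocols that can carry credentials in cleartext.
LEGACY_PORTS = {21: "FTP", 23: "Telnet", 80: "HTTP", 110: "POP3", 143: "IMAP"}

def legacy_protocol_hosts(flow_log: str) -> dict:
    """Map each source host to the legacy protocols it was seen using."""
    offenders: dict[str, set] = {}
    with open(flow_log) as f:
        for line in f:
            rec = json.loads(line)
            proto = LEGACY_PORTS.get(rec.get("dst_port"))
            if proto:
                offenders.setdefault(rec["src_ip"], set()).add(proto)
    return offenders

if __name__ == "__main__":
    for host, protos in sorted(legacy_protocol_hosts("flows.jsonl").items()):
        print(f"{host} still speaks: {', '.join(sorted(protos))}")
```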

In essence, NDR is a way to put in place a strong security foundation that can be augmented with other more technology-specific layers down the track. That could include the use of specific security tools to mitigate AI risks, assuming that AI-specific risk vectors emerge and that such counter-tooling even becomes available.

Until that happens, the most practical way of dealing with the threat of an emerging technology is to stay focused on what we know works: good security hygiene and visibility into the network.


Chris Thomas is senior security adviser at ExtraHop.

David Hollingworth

David Hollingworth has been writing about technology for over 20 years, and has worked for a range of print and online titles in his career. He is enjoying getting to grips with cyber security, especially when it lets him talk about Lego.
