
Only 13% of businesses have a generative AI security policy

With artificial intelligence (AI) becoming increasingly commonplace in the office, organisations are making decisions about how the technology should be used in the workplace.

Daniel Croft
Tue, 05 Sep 2023

However, a staggeringly large majority still overlook the need to adapt security procedures, with researchers finding that only around 13 per cent of organisations have implemented a generative AI security policy, and 4 per cent of that group do not know how to access it.

The findings come from human risk management platform CybSafe, which surveyed 1,000 office workers, asking whether they were aware of any security measures their organisation had put in place to address the threats generative AI poses to security and workers.

Of those surveyed, 56 per cent said that their organisation did not have a policy, while an additional 14 per cent said they didn’t know if their organisation did.


Evidence shows that workers are failing to use AI tools safely, either due to a lack of training or because they are unable to recall the training they received.

Ten per cent of respondents said they had access to general information on AI, while only 7 per cent said they had been trained on AI security. CybSafe had previously found that only 10 per cent of workers remember all of their cyber security training.

More concerning still, 64 per cent of workers who have used generative AI have entered work information into it. Thirty-eight per cent said that the data they shared with AI tools is information they would not casually reveal to a friend.

“If employees are entering sensitive data sometimes on a daily basis, this can lead to data leaks,” said CybSafe director of science and research, Dr Jason Nurse.

“Our behaviour at work is shifting, and we are increasingly relying on generative AI tools. Understanding and managing this change is crucial.”

Human beings are the number one vulnerability within an organisation’s security processes due to the prevalence of social engineering attacks.

The addition of AI in the workplace creates another avenue for data exposure, particularly if workers aren’t trained.

The security risks around AI are already being explored. On top of AI chatbots like ChatGPT lowering the barrier to entry for threat actors, cyber criminals have proven that these tools can be manipulated, potentially leading to rogue commands being executed.

In addition, if these chatbots are being fed data, a malicious actor may be able to access that data through manipulation or other means.

“Generative AI has enormously reduced the barriers to entry for cyber criminals trying to take advantage of businesses,” adds Nurse.

“Not only is it helping create more convincing phishing messages, but as workers increasingly adopt and familiarise themselves with AI-generated content, the gap between what is perceived as real and fake will reduce significantly.”

Daniel Croft

Born in the heart of Western Sydney, Daniel Croft is a passionate journalist with an understanding of, and experience writing in, the technology space. Having studied at Macquarie University, he joined Momentum Media in 2022, writing across a number of publications, including Australian Aviation, Cyber Security Connect and Defence Connect. Outside of writing, Daniel has a keen interest in music and spends his time playing in bands around Sydney.
