Are companies actually ready for agentic AI?

AI technology has become a staple across industries, including cyber security. But while adopting AI may be quick, studies show that many companies are not prepared to make the most of the technology.

Sitting down with Cyber Daily editor Liam Garman, Vanta solutions engineering manager Jefferson Haw and Novera founder and managing partner Tony Vizza discussed the growing number of companies using AI without a proper handle on the technology.

A report by Vanta, which surveyed more than 2,500 customers across the US, Europe, the Middle East and Africa (EMEA), and Australia, found that companies are not weighing up the risks of sharing data with AI bots.

“A lot of companies are using AI to do most of the research analysis, whether it’s forensic … threat consolidation … generating reports, [things that are] normally manual stuff that has been done by people,” said Haw during Cyber Daily’s “The State of Trust: Navigating the future of compliance and security” webcast.

“The problem we’re seeing is that a lot of these companies are exposing that data to AI services. And a lot of them might be just using the public AI service.

“What happens there is you’re now exposing your own set of private confidential data to the public. And these companies are using it as a way to learn and train their [large language] models.”

When asked about the benefits and risks of agentic AI specifically, Haw said that companies typically believe that if they craft the right AI agent, they can streamline and simplify tasks and counter threats.

“A lot of the companies are using [agentic AI] because they feel like if they create the right playbook, it becomes easy for them to say, go for it,” he said.

“And basically, as long as the data stacks up, this is a way to countermeasure those threat attacks.

“But the problem right there is [that] this is still driven by playbooks. It’s dependent on the data you provide.

“And what’s happening right now is a lot of those companies are now looking at safeguarding that data through using AI.”

Daniel Croft

Born in the heart of Western Sydney, Daniel Croft is a passionate journalist with an understanding of, and experience writing in, the technology space. Having studied at Macquarie University, he joined Momentum Media in 2022, writing across a number of publications including Australian Aviation, Cyber Security Connect and Defence Connect. Outside of writing, Daniel has a keen interest in music and spends his time playing in bands around Sydney.