You have 0 free articles left this month.
Register for a free account to access unlimited free content.
Powered by MOMENTUM MEDIA
lawyers weekly logo

Powered by MOMENTUMMEDIA

Breaking news and updates daily. Subscribe to our Newsletter
Advertisement

Interview: Assaf Keren – ‘Attackers are faster technology innovators than companies’

Cyber Daily chats with Qualtrics’ chief security officer about the risks and rewards of agentic AI and how the barrier to entry to cyber crime is continually decreasing.

Interview: Assaf Keren – “Attackers are faster technology innovators than companies”
expand image

Cyber Daily: Assaf, you’ve had quite an interesting career, both public and private sectors, military, what have you over that time. So, before we get into specifics, what are some of the trends that you’ve seen emerge, and what’s changed when it comes to technology and security throughout your career?

Assaf Keren: That’s a great question.

I remember a time when we were information security people and not cyber security people, and this cyber thing came up, and everybody started calling it cyber security. And yes, people who used to be InfoSec, that’s a marketing name. Nothing has changed, and now cyber is everywhere. Yeah, I think it’s funny.

My first start-up, I did machine learning technology. We built machine learning technology to identify malware and on the network, and we actually went too early to the market, because nobody talked about machine learning back in that time, and we had to sell early, as we had to hit traction. But if we had waited just two years …

But in the time that I’ve been in security, especially shifting from initially being in the military, which is a lot more, on one hand, a lot more sophisticated attackers. And the threat model is not bound by money – there is zero conversation around the cost of an attack. So, the conversations are different.

I think, in the time since I started doing security to now, we went through virtualisation, as a new technology that came in. Then we had the cloud, which was somebody else’s computer at the time. There were no, like, baseline additional capabilities. Then you get the SaaS boom, and nobody talks about it anymore, but post-quantum crypto, and then quantum cryptography conversations. And now AI in its new sense.

There’s always this new technology revolution coming up that’s defining some of the conversations that we’re having. Sadly, for security, we’re also still talking a lot about the same things we talked about 20 years ago. Basically, it’s just changes in which environments and which technologies we’re thinking about.

Cyber Daily: Agentic AI appears to be the latest in this long list of buzzwords, but I have a feeling a lot of people – myself included – are not entirely across what agentic AI can offer a business, alongside a lot of the risks of spinning up so many machine identities. Can you walk us through it?

Assaf Keren: I’ll start [with] an anecdote, if you don’t mind.

I had a friend who runs a company, and they’re building an agentic solution for identity management. So, they’re building agents, and he told me that they had an HR problem with their agents. I asked, what do you mean? And basically, the orchestration of agent work is based on different agents being good at different things. You create that structure, and then you have one or two or many orchestration agents on top of them that say, “OK, this thing that I’m getting needs to be routed to that particular agent. And sometimes, that’s static, and sometimes, that’s another agent that does that for a living.

One of the people on the team called one of the worker agents, the Teach Agent. And so the orchestrating agent looked at the name of the agent and said that that is the teacher. They know everything. So, they shifted all of the work to the Teach Agent. They had to then go back and work on the analysis.

And it’s an HR problem because the orchestrating agent didn’t know, well enough, how to do the structure, and they called it the wrong name.

But I think in that sense, agents are not that different [from] the experiences we have with stream-of-thought models today. So if you go on ChatGPT 03 or if you go on Gemini 2.5 and you actually ask them to do something, and they do the stream-of-thought process, they’re not very different than the baseline models that we’ll be using now – things will change over time as more models come in. It’s just that they are taught to do a very specific role, and they have the operational capability to do that, whether through agent-to-agent communication capabilities or general API integrations, but they have the ability to get access to do things.

So far, what we’ve been doing with AI mostly is to help us enrich our capabilities, not necessarily do things for us. And that’s where the differentiation comes between GenAI and agentic. It’s the ability to say, “Hey, this thing is not just going to help me write code or summarise a message or help me write an email. This thing is actually going to go out and do things.” It’s going to do things on your behalf.

I keep saying this to my team and to other people. I think security as a whole is a problem of coverage and efficiency in most of what we do. The conversation is, do you have enough coverage and are you efficient enough in managing that? I think that’s very correct for a lot of aspects in our lives. As humans, we tend to be highly efficient, but have low coverage, right?

I think the paradigm shift that’s happening with agentic right now is that we have an opportunity to be high coverage, high efficiency, or even high coverage, medium efficiency – even that will do because right now, automation, you can get to high coverage, low efficiency, but that’s not cutting it.

Cyber Daily: That does sound really useful, but what about the security concerns that come with that kind of utility?

Assaf Keren: The point where security concerns come into play here is mostly in three places. One, you want to make sure that when you unleash an army of agents, they do what they need to do.

The main concern with all of these new technologies is that they’re non-deterministic. So, in the old world, you wrote code; if you wrote it well, you did all your testing, and everything went well, then you would know what the output would be every single time you ask it to do something.

In this new world, you put the same prompt into the same model, and you’ll get different responses. Now, by the way, that’s a place where, in security, we have a lot of issues – we know how to defend against SQL injection because it’s deterministic. We don’t really know how to defend against prompt injection because it’s non-deterministic, but different. It’s not necessarily that you get the exact same response every single time, but you get the prescribed action every single time, and that’s about the resolution.

So that’s one thing. The second thing is data flow, which I think is the one conversation that’s a bit over-indexed on in the security industry. We share data as a general rule of thumb these days, like everybody uses cloud providers, and everybody uses SAAS providers, and everybody uses third parties and consultants, etc. So, this is not a different conversation. There is a lot of fear, but this is not a different conversation – understanding how you’re flowing data.

And the third piece is, how do you secure the infrastructure itself? How do you make sure that whatever you build is secure enough, in the sense that nobody can go in and utilise this infrastructure to do things that they shouldn’t be doing? And, again, it’s not architecturally different from what we’ve had in the past, but it is, implementation-wise, difficult, and a scale difference as well.

A very quick example is, do you give your AI model or your agent access to everything, and if you give it access to everything, do you then need to worry about this middle-tier attack where somebody can go in and do something, and then they give them access by mistake? Again, not very different than what we had five years ago – it’s managed access.

The technology environment is so new, and the fact that we’re in a non-deterministic world actually makes it more complicated.

Cyber Daily: When I speak to a lot of cyber security types, one of their big fears is the implications of AI in the hands of hackers and other threat actors. Do you think there’s a reason to fear agentic AI inside that ecosystem?

Assaf Keren: Look, attackers are faster technology innovators than companies. They could afford to break things.

Normally, they don’t have an IT team or security team telling them, “Hey, you can’t use this”. This is part of a conversation I’m having with my team – when we talk about security, and I tell them, “Look, hackers don’t care about our organisational schemes”. They don’t care about the policies we have in place. They’re going to do whatever they’re going to need to do in order to gain advantage and get into environments.

Up until a year ago, or a year and a half ago, phishing was unheard of in Japan, because who could do that?

But those are simple things, and I’ve not seen the capability of an autonomous agent to go and do what we would call automated Red Teaming. But the idea of ransomware backed by that kind of agentic capability is sort of terrifying.

And going back to my statement on security, it’s a problem of coverage and efficiency. That’s true for defenders, and it’s true for attackers. Even state-sponsored actors are limited by the amount of people they have. It’s not necessarily money, but it’s the amount of people they have. Their ability to scale up operations in a way where they can go and tackle everything.

So state-sponsored actors are usually more targeted in what they’re attacking for both resource and political reasons, but that is correct for criminals and for other cyber attackers as well. They’re limited by the resources that they have.

Now, create an environment where the human resource is no longer the limiting factor. Then what we’re going to see is a continuation of a trend that’s been driving for about 15 years. It’s a trend of the condensation of the threat model. It has been getting easier and easier to be a cyber criminal, and it’s going to continue getting easier and easier to be a cyber criminal because already ransomware as a service makes for a very low barrier to entry.

Cyber Daily: The way ransomware-as-a-service is advertised is really quite striking. Full spec sheets, slick marketing campaigns, and even trailers and promotional videos. All you need is the price of entry, and these platforms do the rest, practically. It’s all just getting so easy.

Assaf Keren: Back when I was at Paypal, we would track threat actors. And we had threat actors that were dedicated to PayPal.

Everybody wants to get into PayPal because that’s where the money is. So, we will track them and take them down together with law enforcement. And so there has been this movement of if, 20 years ago, if you were a cyber criminal, you had to understand money movement, and you had to understand how to buy the right malware, and you had to understand how to infiltrate an organisation, exfiltrate data, and all of those things – today, it’s a marketplace.

But GenAI and agentic are going to do what they do best, which is reducing the barrier to entry. And that’s true for attackers as well. So we’re going to need to deal with a logarithmic rise in the [number] of attacks, in the efficiency of attacks that we’re seeing on a day-by-day basis.

That’s scary, yes, but that’s reality.

David Hollingworth

David Hollingworth

David Hollingworth has been writing about technology for over 20 years, and has worked for a range of print and online titles in his career. He is enjoying getting to grips with cyber security, especially when it lets him talk about Lego.

You need to be a member to post comments. Become a member for free today!

newsletter
cyber daily subscribe
Be the first to hear the latest developments in the cyber industry.