Cyber Daily: Thank you very, very much for your time. I do want to chat about agentic AI, but before we get into that, as the CEO of a company like Proofpoint, you must have a very informed, very high-level view into the wider threat landscape. So what is it like in early 2026 compared to a year ago or two years ago?
Sumit Dhawan: I’d say one thing is that the trend we saw in 2025 continues: a high volume of sophisticated threats, where language and even customer size are no longer barriers. We’re clearly seeing these social-engineering threats become more sophisticated and higher in volume. You have to suspect a good reason for that is generative AI, and, based on the regions where the attacks are happening, I’d also say crypto, because cryptocurrency means currency is no longer a barrier.
The second thing I would say is [that] we are starting to see more and more insider risk, where insiders are becoming a mechanism to steal information. It’s not the traditional outbound threat actor or hacker attacking; it may be state-sponsored groups, or potentially just true criminal networks, driving more and more insider risk – we’re starting to see this happening in almost every enterprise.
COVID-19 made it easier, and there’s more to gain because of the digitisation that’s happened everywhere in the world. And now, insider risk is becoming a bigger and bigger nuisance.
And the third trend, I would say, is trust exploitation, which is a combination of … where you can exploit the identity of a supplier, or of a low-privileged individual. Even if an insider takes on the job of a very low-privileged user, all of a sudden they can move laterally through the communication layer, not through the network layer – this is becoming a growing issue.
Cyber Daily: That makes so much sense, on the supply chain angle and the impact that can have. Just the other week, we had a chicken producer – they run farms, and they process the birds as well. They got hit by a cyber attack – which sounds like a ransomware incident – and it’s taken their production lines offline. So you have all these small towns around Victoria that don’t have chicken parmigiana to sell to their customers.
Sumit Dhawan: Everyone needs sophisticated protection – that wasn’t the case three years ago, when everyone could just have some basic security, which would do some semantic analysis. Now, everyone needs sophisticated protection because, unfortunately, it’s just a matter of time.
Cyber Daily: And the way initial access brokers, these days, are selling access, they’re advertising, “Hey, they’ve got this antivirus system, they’ve got this defence on their network,” and the threat actors who are buying that access have malware that can dance around endpoint protection.
Sumit Dhawan: 100 per cent – and even if the endpoint can catch it, it’s too late by then, and now you’ve got to deal with this potential threat. You don’t know how it’s been reverse-engineered, or when it was issued; with signature-based protection on the endpoint … there’s always a zero-day threat that’s going to exist, and will keep coming up.
Cyber Daily: Moving on to the agentic side … It’s such a buzzword across the enterprise. We’re hearing talk of agents now starting to outnumber warm-blooded, actual humans in the enterprise. But at the same time, what you’re worried about is agentic AI as a threat. So what is an agentic AI threat?
Sumit Dhawan: We’re worried about not just agentic AI as a threat, which is happening today. There are autonomous AI agents that are able to do social engineering today. In other words, there may not be a human in the loop when threats come in – that is agentic AI as a threat.
We’re also seeing agentic AI as a risk; that’s slightly different, where your risk surface area is increased. Here’s the issue when it comes to AI: AI, to some extent, is supposed to mimic humans, and the way AI works is non-deterministic.
If humans are asked to do a task today, how do we work? You give us a problem – let’s say that’s a prompt. We break that problem down into a series of steps. We use the context and the memory that we have, and then we apply some analysis to solve those problems. But the analysis we apply is not if-then Boolean logic; it’s the intelligence of our pattern recognition. Then we use broad context to draw on our knowledge, or we build that knowledge by asking questions of other experts, to come up with a position.
AI is no different. AI applies a language model. You give it a prompt, the language model reasons, and it tries to reach an inference by asking for other pieces of information or asking other agents for their opinions; it then collects those opinions and comes up with some sort of conclusion that may or may not be correct.
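The loop Dhawan describes – prompt, decomposition, context, consulting peers, non-deterministic conclusion – can be sketched in a few lines of Python. All names here are hypothetical illustrations, not any real agent framework’s API:

```python
# Minimal sketch of the agentic loop described above: prompt -> reason ->
# consult other agents -> conclusion that "may or may not be correct".
# Every function and name here is hypothetical, for illustration only.

import random

def consult_peers(question, peers):
    """Collect opinions from other agents, as humans ask other experts."""
    return [peer(question) for peer in peers]

def agent(prompt, memory, peers):
    # 1. Break the problem into steps (trivially, the prompt itself).
    steps = [prompt]
    opinions = []
    for step in steps:
        # 2. Draw on context/memory -- not if-then Boolean logic.
        context = memory.get(step, "no prior context")
        # 3. Ask other agents for their opinions.
        opinions.extend(consult_peers(step, peers))
    # 4. Conclude by majority opinion; the error rate is non-zero.
    return max(set(opinions), key=opinions.count) if opinions else context

# Two toy "peer" agents; one of them is non-deterministic.
peer_a = lambda q: "approve"
peer_b = lambda q: random.choice(["approve", "reject"])

print(agent("wire the funds?", {}, [peer_a, peer_b]))
```

The point of the sketch is the shape, not the logic: the answer depends on which peers are consulted and what they happen to say, which is exactly the non-determinism that widens the risk surface.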
Just as with humans, you’re talking about the efficacy of a decision. None of us makes 100 per cent correct decisions, right? We have error rates. AI has error rates. Its memory context is increasing, and the more AI is able to communicate and be challenged by other AI agents, the better the efficacy of the decisions you get, just like humans. Humans have to be PhDs to be able to do that.
So, as you’re making AI to make better efficacy decisions, what are you doing? You’re doing three things.
You’re increasing their memory – in other words, how much they can remember; that’s happening with frontier models. You’re giving them access to more data that they can place in their memory. And you’re sometimes making agents talk to each other, just like humans talk to each other. We’re challenged by each other; agents are challenging each other to make their decision making better.
It’s a different world of risk, because somebody can get inside that loop and poison the data. In a typical enterprise, protection is decided or implemented through a very deterministic set of APIs and north-south traffic; that’s how you contain information. As soon as you change that, so there is a non-deterministic way for an AI to get you the answer, all of your AI risk protection is off. Every enterprise is exposed today.
Cyber Daily: Well, that’s alarming. So how would you protect it?
Sumit Dhawan: You would protect it just like you would protect a human thinking.
First, any shadow AI – the equivalent of anyone who’s not an employee, whom I don’t trust – is out. Second, I want to make sure AI can only access the data it should have access to, to protect the data that you’re feeding to AI.
And third, you’re going to make sure that all the AI activity that is happening – whether it’s accessing more information to put into memory, or AI talking to AI – has guardrails.
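The three controls above – deny shadow AI, enforce least-privilege data access, and put guardrails on agent activity – can be illustrated as a single authorisation check. This is a hypothetical sketch, not Proofpoint’s actual product or API; every agent name, dataset, and action below is invented:

```python
# Illustrative policy check for the three controls described above.
# All names (agents, datasets, actions) are hypothetical examples.

APPROVED_AGENTS = {"hr-copilot", "sales-assistant"}       # 1. no shadow AI
DATA_ACL = {                                              # 2. least privilege
    "hr-copilot": {"hr-records"},
    "sales-assistant": {"crm"},
}
BLOCKED_ACTIONS = {"exfiltrate", "mass-delete"}           # 3. guardrails

def authorise(agent, action, dataset):
    """Return (allowed, reason) for an agent's attempted activity."""
    if agent not in APPROVED_AGENTS:
        return False, "shadow AI: agent not approved"
    if dataset not in DATA_ACL.get(agent, set()):
        return False, "data access outside agent's privilege"
    if action in BLOCKED_ACTIONS:
        return False, "action violates guardrail"
    return True, "allowed"

print(authorise("hr-copilot", "read", "hr-records"))       # allowed
print(authorise("rogue-bot", "read", "crm"))               # blocked: shadow AI
print(authorise("sales-assistant", "read", "hr-records"))  # blocked: privilege
```

The design choice mirrors the interview’s framing: the agent is treated like an insider, so the same identity, data-access, and behavioural controls applied to humans are applied to it.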
That’s what Proofpoint has built now. Our strategy is to provide, number one, protection against AI threats, and now we have extended our platform to protection against AI risks in the enterprise – something you can’t do with network or endpoint protection today. We provide insider risk protection on human behaviour, and on human access to data.
We have extended that same set of protections, almost as if AI is an insider risk.
Cyber Daily: What kind of advice would you give to a CISO whose company is rolling out all of these agents, and how to communicate that risk to the board and the C-suite?
Sumit Dhawan: That’s a big challenge. Number one, I would say, is that CISOs need to step in and take charge.
I’m seeing more CISOs losing control, with CIOs taking charge and CISOs waiting until it becomes an issue they can capture. That’s the wrong approach. The second thing is building an AI governance program at the tech level. CISOs are building AI governance programs through AI committees and policies, which are predominantly focused on questions like: which AI will be used? How will we vet which AI can be brought in? What is the data policy on models? … We are well past that.
CEOs and boards are going to push every business to adopt AI as fast as possible, for both generative AI, which is assistants and copilots, as well as agentic AI, which is autonomous tasks, because of either business model transformation or cost and economics.
CISOs need to go past policy creation to data and AI governance programs that have tech enablement. Those are the two things I would ask of CISOs – and that they start thinking about AI risk as an extension of human risk, which is already understood at the board level. Then you can easily make the association: here’s how we protect our humans from insider risk and AI-based threats; we’ve got to do the same thing with AI, but the technology we have to bring in is different.
It’s got to be much, much faster than what CISOs did with cloud. They did this with cloud and SaaS, and they’re operating [on] the same calendar, which is three to four times too slow, because AI is moving [three] to four times faster.
Anything that used to be done in a year is getting done in a quarter.
David Hollingworth
David Hollingworth has been writing about technology for over 20 years, and has worked for a range of print and online titles in his career. He is enjoying getting to grips with cyber security, especially when it lets him talk about Lego.