
Q&A: Quantum cryptography will be a “Y2K times 10 problem,” says DigiCert CEO

Cyber Daily recently sat down with DigiCert’s CEO, Amit Sinha, to talk about AI agents, trust, and the looming impact of quantum computing.


Cyber Daily: Thanks for joining us, Amit. We want to talk to you about AI and trust today, of course, but you’ve had quite the career, including 13 years with Zscaler before DigiCert. You’ve seen a lot of changes in that time, so what’s it been like seeing AI come to such prominence?

Amit Sinha: You know, AI has been around for a while, but it certainly had its ChatGPT moment.

When I was at Zscaler, we were using AI and machine learning to look at patterns and come up with threat intel that you could then apply in real time. The big aha! moment for Zscaler was: how do I quickly detect a new threat and use the cloud service to compress the gap between a threat being known and protections being available to customers right away? Because before that, threat intel was in the form of signatures, and signatures were sent to your laptop.


It was all very slow-moving.

Now, fast forward to the world of AI threats, moving at an incredibly fast pace. If you look at the recent announcements around Claude, they're finding vulnerabilities that have escaped human analysis for 30-plus years, in things like FreeBSD and core operating systems that have been around and stable for a very long time. So what does that mean?

That means the timescales are compressing rapidly. We're getting to a point where I can find new zero-day vulnerabilities using these AI tools, and worse, come up with exploits based on those vulnerabilities that have been silent and hidden for a very long time. So we are living in a world where you need to fight AI with AI, and many cyber security firms are reacting to that as we speak.

But I also view this in a very positive light.

For a long time, the security industry has been based on the fact that software will inherently be flawed, and we're going to build giant industries around that to detect issues, to prevent issues, to provide remediation… But it's all inherently based on the fact that software is flawed. I think we have an opportunity now, with all these AI models, to essentially shift left.

It’s like how you buy food, and it has something like a ‘grown organically’ seal of approval.

So, can we use AI to now really have genuinely safe software that also comes with a certified cryptographic label that tells the end consumer or enterprise that this has been vetted by the best AI out there, and it's safe, and it takes secure by design to a whole new level, doesn't it?

Cyber Daily: Given the way these AI systems are becoming so prevalent across the enterprise, where does trust come into it? I know that’s a big part of what DigiCert does.

Amit Sinha: If you look at an enterprise today, I wouldn't let an employee come into my organisation without a verifiable identity. If I hire someone, I'd say, “Hey, show me a passport”. Or, you know, “Are you authorised to work?” You know, before I give you authorisation to access enterprise resources, you come with a proven identity, right?

Why should AI agents get a free pass?

Because now AI agents are getting into my organisation and taking actions on behalf of humans. I need to start with an identity. I need to have authorisations, and then I need governance and audit trails of actions they have taken.

So the first problem DigiCert is solving, for AI trust, is durable, immutable, cryptographically verifiable identities. Think of your passport. You have a passport, you know you went to some office that validated your birth certificate and other credentials and issued you a tamper-proof identity. DigiCert is doing the same thing for AI agents.

All AI agents need to have verifiable identities, and that's step one for zero trust. And once you have those identities, you can start building policies around what those identities are allowed to access. You can build a governance life cycle around it. Maybe I don't like a particular family of AI agents – I want a kill switch for it. I need audit trails around it. So what DigiCert has done is solve this problem of durable identities and the governance around it, for AI agents.

Cyber Daily: We’re not so much looking at a technology challenge, or a governance challenge, or an accountability challenge – it's all rolled into one, isn't it?

Amit Sinha: It starts with standards.

Then you need a platform that implements those standards. Then you need full lifecycle and governance around it, right? And you're right – it's not one tech product or anything like that.

At DigiCert, we believe in standards-based trust models – that's how e-commerce emerged on the internet. As a user, you need to trust a bank. Well, both of you need to trust a common third party. And DigiCert was kind of the certificate authority that validated that the bank was who they said they were and issued them a passport. We distributed our trust chains to the browsers and ecosystems, and that's how the whole framework emerged. Now the same thing is extending to AI models, AI agents, and AI-generated content in applications that are producing content.

Enterprises that are using AI agents have to extend familiar controls. They've used DNS, for example, to name workloads and PKI to provide machine identities to workloads. How do I think about AI agents? Well, they're smart workloads. I know how to name them. I know how to control them. How do I extend those identities and governance to these new challenges that are emerging?

Cyber Daily: Something that we’ve come across whenever we speak to people in industry in this way… There are all these great tools out there, whether it be the zero trust concept itself, whether it be Australia's Essential Eight, or the kind of technologies you're talking about. But if you're a CISO and your job is to secure that agentic AI, those digital identities, how do you educate your board and your bosses that this is something that needs to happen? How does a CISO take that journey?

Amit Sinha: The challenge, traditionally, that CISOs face in educating their board is convincing them of a threat that they don't know about. The two big trust problems that CISOs need to educate the board on are “How do I trust AI?” And then, “What do I do about the threat quantum computing poses to cryptography?”

The good news is that boards are aware of this, because there's enough news around it. The question is about “How do I prioritise it? What's my budget for it? What's the sense of urgency to drive it?”

CISOs have been running identity and access management systems for some time. I know how to govern my users. I know how to govern my workloads. I know how to govern my software. Those are familiar controls. And if I look at DigiCert, well, we're already deployed in 90 per cent of the Australian Securities Exchange top 200 companies, and in the top four banks, for all the fabric that is used in your traditional enterprise.

The question is, “Can I extend those controls right to this new crop of smart workloads?” I think CISOs should approach the board with that pragmatic mindset: AI is a great productivity tool, but we need to have proper governance. The biggest challenge of AI agent adoption within the organisation is, “How do I trust them?” How do I know that what I'm allowing within my organisation is something that has been vetted, something that I have controls around if I don't like it? Do I have a kill switch?

The ability to reduce risk while getting the productivity benefits of AI is something that boards would react very favourably to, and that helps with budgets and prioritisation.

Cyber Daily: You mentioned the impact quantum cryptography is going to have on this market. What is that impact? Will it be as big a change as people expect, or more subtle?

Amit Sinha: I think it's a huge change. It's a Y2k times 10 problem, in my opinion.

And that's because of the math that has secured all our digital lives. From phones to Bitcoin to banking transactions, everything is based on these asymmetric math problems that classical computers cannot crack. It's like factoring big numbers, right, and we’ve known for a while that quantum computers are especially good at it.

The only thing preventing a digital meltdown is the fact that stable, cryptographically relevant, large-scale quantum computers are not readily available. But if you look at the guidelines from the Australian Signals Directorate, they're saying by 2026, this year, organisations need to have a plan to migrate to post-quantum cryptography.

It's a big task. And what DigiCert has been doing is working with the National Institute of Standards and Technology, the CA/Browser Forum, and the IETF to kind of standardise these quantum-safe algorithms. And what has happened more recently is that researchers from Google, from Stanford, from Berkeley, and a bunch of other industry leaders have shown that they can break classical cryptography with 20 times fewer quantum resources than previously thought.

Google recently, just two weeks ago, published research with proof that they can crack ECC, which is used in Bitcoin, in about 10 days, and they have shown that they can break 2048-bit RSA with a feasible quantum computer in 100 days, right?

So the rate at which quantum computing is advancing is staggering – it's going to have its ChatGPT moment.

The problem organisations face is that even if they start today, they'll run out of time before the 2029 mandate to upgrade. So that's the great sense of urgency. This is a topic that CISOs do need to push a little harder on their boards about, that this is a big migration. It starts with having an inventory of all my cryptographic assets across machines, software, devices, and content.

I then need to prioritise. What are my crown jewels? I need to then have a plan of migrating them to quantum safety. And what DigiCert ONE does is give you a platform that gives you the inventory, the dynamic automation, and then you can play around… We support all the quantum-safe algorithms today. And you can issue certificates. You can benchmark performance and start – pick one application, migrate it, rinse and repeat.

Cyber Daily: How can organisations balance this driving need to innovate, to move forward, to take up new technologies, while also protecting their core business functions? How do businesses manage that friction between security, safety, and innovation?

Amit Sinha: That’s a big question.

I would say, if you have the right security solutions, then security and user experience, or security and productivity, are not orthogonal choices. If you don't have the right security architecture, if you're not invested in the right tools, then often the choice between “Is it secure?” and “Is it easy to use?” or “Is it going to give me productivity enhancements?” seems to be mutually exclusive. If you talk to most consumers, they'd say security is a pain, right?

That's the end-user impact.

But I genuinely believe that if you have the right security architecture, it can simplify your user experience, and it can help you adopt these massively disruptive technologies like AI and quantum computing, and reap all the productivity benefits that they bring without the security and risk side effects. But it starts with investing in the right security architecture, the right platform. AI agents can unleash productivity, but if I don't have the right identity and governance, I'll always be afraid of my data getting exfiltrated.

And that will always lead to leaders being nervous about adoption and then realising the full benefit of what that technology can bring. So I think AI trust is an eminently solvable problem. We've solved the problem of adopting cloud, adopting workloads, and securing users. The question comes up of, “What about scale?” Well, we issue ‘birth certificates’ to a billion televisions, and now devices outnumber users ten to one, right? So whether it's glucose monitors or infusion pumps or EV cars… These are machines. They all have cryptographic identities, tamper-proof identities that can then be leveraged for security policies and governance.

So really, the scale problem has been solved.

If you adopt the right platform, then you can do the same thing for agents. You can do the same thing for content, and you can do that with less risk and reap the productivity benefits.


David Hollingworth

David Hollingworth has been writing about technology for over 20 years, and has worked for a range of print and online titles in his career. He is enjoying getting to grips with cyber security, especially when it lets him talk about Lego.
