
Op-ed: What’s behind the ‘machine identity’ v ‘NHI’ tug-of-words, and why it matters

Understanding the importance of non-human identities and machine identities is essential for securing both.


In cyber security, we often focus on protecting human credentials. And rightly so: usernames and passwords are frequently exploited through phishing, brute force, or dark web marketplaces. But the reality is that people are no longer the majority in today’s identity landscape.

Research shows that for every human identity, there are 46 machine identities – including API keys, service accounts, certificates, and AI agents – that are able to authenticate themselves and interact with critical systems and data.

As organisations continue to adopt AI technology and introduce more digital identities into the IT ecosystem, there’s been some debate over what to call this new class of identity.

What’s in a name?

The most common term in the market for credentials that don’t belong to employees is “non-human identity” (NHI). It’s a broad label that captures everything from IoT devices to LLM agents – basically, any credential not tied to a living, breathing person.

This can be useful for high-level policy discussions where general awareness often matters more than technical nuance. However, its very inclusivity can muddy the waters. Lumping a temperature sensor, a service account, and an autonomous AI agent into the same category obscures key differences in how they should be secured.

That’s why many security practitioners, Delinea included, lean into the term “machine identity”. It provides a clearer picture of what’s actually at stake: how each system or service proves its authenticity and gains access to sensitive data or workflows.

Machine identity focuses attention where it’s needed – on certificate management, key rotation, zero trust, and least privilege enforcement. It also clarifies responsibility. Who owns the credential life cycle for a machine? What’s the remediation plan if something gets exposed?

Some argue for slicing the categorisation further, for instance, separating IoT devices, workload identities, or tokens, because risk profiles and remediation methods vary widely.

Whatever terminology you choose, it’s critical that everyone speaks the same language and understands how significant this shift in cyber security is.

Scale and challenges

The sheer volume of machine identities dramatically expands the attack surface. Bad actors no longer need to steal an employee’s password if they can lift a poorly protected service account with broad privileges.

AI increases the risks. Each time an LLM agent generates a bot, fresh credentials appear and must be managed. Without continuous oversight, these can become invisible doors into sensitive data.

Another threat is longevity. IoT devices, sensors, and cameras often have hard-coded certificates designed to last for years. These credentials can outlive even the company that purchased them and become an appealing target.

Another issue is privilege creep, which emerges when an identity’s minimal initial access gradually expands into broader permissions through quick fixes or ad hoc changes. Because machines work quietly and predictably, their elevated privileges often go unnoticed until they are exploited.

What can be done

At Delinea, we advocate a zero-trust approach to all identities – machine or human – that’s rooted in the principle of least privilege. The first step is continuous discovery. You can’t protect what you don’t know exists.

Every certificate, API key, and token must be inventoried, tagged to an owner, and assigned a documented purpose. If there’s no owner, it’s a risk. If there’s no justification, it’s a liability.
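To make that concrete, here is a minimal sketch of what such an inventory check might look like in Python. The record fields (owner, purpose, expiry) and the flag_risks helper are illustrative assumptions, not any particular product’s schema.

# Illustrative sketch: an inventory record for a machine identity and a check
# that flags the two conditions called out above (no owner, no justification).
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class MachineIdentity:
    credential_id: str    # e.g. certificate serial, API key ID, token reference
    kind: str             # "certificate", "api_key", "service_account", ...
    owner: str            # accountable team or person
    purpose: str          # documented reason the credential exists
    expires_at: datetime  # every credential should have an end of life

def flag_risks(inventory: list[MachineIdentity]) -> list[str]:
    """Flag identities with no owner, no documented purpose, or a lapsed expiry."""
    findings = []
    for ident in inventory:
        if not ident.owner:
            findings.append(f"{ident.credential_id}: no owner -> risk")
        if not ident.purpose:
            findings.append(f"{ident.credential_id}: no justification -> liability")
        if ident.expires_at <= datetime.now(timezone.utc):
            findings.append(f"{ident.credential_id}: expired but still inventoried")
    return findings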

Once visibility is in place, the next move is to automate and simplify the entire credential life cycle – especially for AI workloads – so that identity creation, rotation, and retirement follow best practices without relying on manual checklists.

Issuance should be driven by policy-as-code: every time an LLM agent spawns a helper bot, the same pipeline should create a credential with a built-in expiry date. Short-lived identities dramatically shrink the risks if something leaks.
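As a rough illustration of that pipeline, the sketch below issues a short-lived credential with a hard cap on its lifetime. The issue_credential function, the 30-minute cap, and the agent names are hypothetical; a real deployment would call its secrets manager or certificate authority rather than mint tokens locally.

# Illustrative sketch: policy-as-code issuance with a built-in expiry.
import secrets
from datetime import datetime, timedelta, timezone

MAX_TTL = timedelta(minutes=30)  # policy: helper-bot credentials live 30 minutes at most

def issue_credential(agent_id: str, requested_ttl: timedelta) -> dict:
    ttl = min(requested_ttl, MAX_TTL)          # policy-as-code: cap the lifetime
    now = datetime.now(timezone.utc)
    return {
        "subject": agent_id,
        "secret": secrets.token_urlsafe(32),   # fresh secret, never reused across bots
        "issued_at": now.isoformat(),
        "expires_at": (now + ttl).isoformat(), # built-in expiry travels with the credential
    }

# Every helper bot spawned by an LLM agent goes through the same pipeline:
cred = issue_credential("llm-agent-7/helper-bot-42", requested_ttl=timedelta(hours=4))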

Because AI systems often sit at the centre of sensitive data flows, they deserve extra scrutiny. Access controls must enforce least privilege from the outset, with regular reviews to right-size each agent’s entitlements before privilege creep sets in.

Enterprises should also lean on AI-driven authorisation to keep pace with the dynamism of their own automation. Just-in-time access models that evaluate context and grant privileges only for the moment they are needed extend least-privilege discipline to both humans and machines without throttling productivity.
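A minimal sketch of such a just-in-time grant is shown below. The context checks (a justification plus an approved change window) and the 15-minute window are assumptions chosen for illustration, not a specific product’s API.

# Illustrative sketch: a just-in-time grant that evaluates context and expires on its own.
from datetime import datetime, timedelta, timezone

def grant_jit_access(identity: str, resource: str, justification: str,
                     during_change_window: bool) -> dict | None:
    # Deny by default: only grant when there is a justification and an approved window.
    if not justification or not during_change_window:
        return None
    expires = datetime.now(timezone.utc) + timedelta(minutes=15)
    return {
        "identity": identity,
        "resource": resource,
        "expires_at": expires.isoformat(),  # the privilege disappears once the task is done
    }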

We once worried about passwords taped under keyboards; now, the blind spots are certificates nobody remembers creating. Whether you prefer “machine identity”, “non-human identity”, or even a more detailed description, the mission remains unchanged: every identity must be accounted for and managed to avoid being used by the wrong actor.
