The Non-Human Identity Crisis: Why AI Agents Need Their Own Identity Layer
There are now 45 billion non-human identities operating across the world's digital infrastructure (World Economic Forum, 2025). Service accounts, API keys, OAuth tokens, bot credentials, CI/CD pipelines, and increasingly, autonomous AI agents. For every human identity in a typical enterprise, there are 144 machine identities running alongside it (Entro Security, 2025) — up from 92:1 just six months earlier.
The security industry has spent decades building identity infrastructure for humans: SSO, MFA, RBAC, SCIM provisioning. For non-human identities, the strategy has mostly been “create an API key and hope for the best.”
That hope is failing. Eighty percent of identity-related breaches now involve non-human identities (OWASP NHI Top 10, 2025). The identity crisis isn't coming. It's here.
the scale of the problem
Non-human identities have quietly become the dominant population on every network. They authenticate to databases, call third-party APIs, trigger webhooks, deploy infrastructure, and now — with agentic AI — they make decisions, negotiate with other services, and spend money on behalf of their operators.
Yet the tooling hasn't kept pace. A Token Security survey from early 2026 found that 45.6% of organizations still rely on shared API keys as the primary authentication method for their machine identities. Not per-service keys. Not short-lived tokens. Shared secrets, passed between teams in Slack messages and environment variables that haven't been rotated since the Obama administration.
The consequences are predictable. GitGuardian's 2025 report documented 28.65 million secrets leaked on GitHub in a single year, with an 81% surge in AI service credential leaks specifically. Every one of those secrets is an identity — a machine credential that grants access to some system, somewhere.
when NHI breaches make headlines
The pattern repeats across the industry's most notable incidents. In the Microsoft Midnight Blizzard attack, threat actors compromised an OAuth application to move laterally through Microsoft's corporate environment. Not a phished employee — an exploited machine identity. The New York Times GitHub breach exposed 270 gigabytes of source code through a single compromised personal access token. One secret, one identity, catastrophic access.
The tj-actions supply chain attack in early 2025 compromised over 23,000 repositories by injecting malicious code into a widely-used CI/CD action. The attack exploited the implicit trust that pipelines place in their dependencies — non-human identities trusting other non-human identities with no verification layer in between.
These aren't edge cases. They're the natural consequence of treating machine identities as second-class citizens in the security stack.
the OWASP NHI top 10
OWASP formalized this problem with their Non-Human Identity Top 10 in 2025, and the ranking tells you everything about the state of machine identity management:
NHI1: Improper Offboarding. When an employee leaves, their human accounts get deprovisioned. The 47 service accounts they created? Those live forever. Orphaned machine identities with active credentials are the single biggest NHI risk.
NHI2: Secret Leakage. Secrets embedded in code, configs, logs, and CI/CD artifacts. The GitGuardian numbers above make this one self-explanatory.
NHI3: Vulnerable Third-Party NHI. Your supply chain is someone else's machine identities. The tj-actions incident is the canonical example.
The remaining entries cover excessive privileges, insecure authentication, overly broad scoping, long-lived credentials, environment isolation failures, NHI reuse across services, and lack of logging. Every single one maps directly to problems that AI agents amplify by an order of magnitude.
why AI agents make this worse
Traditional NHIs — service accounts, API keys, CI runners — are deterministic. They do what they're programmed to do. An API key for a payment processor processes payments. A CI token deploys code. The blast radius of a compromised credential is bounded by the service's function.
AI agents break this assumption. An autonomous agent with tool access can discover new APIs, chain actions together in unpredicted ways, and make decisions that its operator never explicitly authorized. The identity isn't just an access credential anymore — it's a proxy for judgment and authority.
Consider a coding agent with repository access. Its credential might be scoped to a single repo, but if the agent decides to open a pull request that modifies a GitHub Actions workflow, it has effectively escalated its own permissions through the CI/CD pipeline. The identity system saw a write to a repository. The actual impact was arbitrary code execution on every future merge.
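One mitigation for this escalation path is a policy gate that treats pipeline configuration as a privileged surface. The sketch below is illustrative, not any particular platform's API: `requiresHumanReview` and the path patterns are hypothetical, showing the general shape of a pre-merge check that flags agent-authored changes to CI/CD files.

```javascript
// Hypothetical pre-merge gate for agent-authored pull requests.
// A credential scoped to "write code" should not silently become
// "modify CI" — so flag any change that touches pipeline config.
const SENSITIVE_PATHS = [
  /^\.github\/workflows\//, // GitHub Actions definitions
  /^\.gitlab-ci\.yml$/,     // GitLab CI
  /^Jenkinsfile$/,          // Jenkins pipelines
];

function requiresHumanReview(changedFiles) {
  return changedFiles.filter((file) =>
    SENSITIVE_PATHS.some((pattern) => pattern.test(file))
  );
}

// An agent PR touching a workflow file gets escalated, not auto-merged.
const flagged = requiresHumanReview([
  "src/parser.js",
  ".github/workflows/deploy.yml",
]);
// flagged: [".github/workflows/deploy.yml"]
```

The point is not the specific path list — it is that the identity layer should distinguish "writes code" from "rewrites its own permissions" instead of treating both as a generic repository write.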
Only 10% of executives currently have a well-developed NHI strategy (Okta/World Economic Forum, 2025). For autonomous AI agents specifically, that number is effectively zero.
what a purpose-built agent identity layer looks like
The fundamental problem is that NHI security was designed for static credentials, not autonomous entities. An agent identity layer needs to handle properties that API keys never had to address:
Cryptographic uniqueness. Every agent needs its own keypair, not a shared secret. Ed25519 signatures give each agent a provable, non-replayable identity that can't be shared between services without detection.
// Each agent gets a unique Ed25519 identity at registration
const stamp = await agentstamp.verify("agent-uuid");
// Returns: { wallet, trust_score, human_sponsor, created_at }
// No shared keys. No static tokens.
// The agent proves identity by signing with its own keypair.

Human accountability. Every agent identity should link back to a human or organization that takes responsibility for its behavior. AgentStamp's human_sponsor field makes this explicit: an agent that causes harm has an accountable operator, not just an anonymous wallet address.
Dynamic trust, not static access. A traditional API key is either valid or revoked. Agent trust needs to be continuous. A trust score that decays without activity and adjusts based on behavior is closer to how humans evaluate trustworthiness — recent track record matters more than a one-time registration.
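A minimal version of decay-based trust can be a pure function of the last-seen timestamp. The half-life and the specific formula below are assumptions for illustration, not any platform's actual scoring model:

```javascript
// Illustrative trust-decay model: trust halves after each idle
// half-life, so stale identities lose access without manual revocation.
// The 30-day half-life is an assumed parameter, not a standard.
const HALF_LIFE_DAYS = 30;

function currentTrust(baseScore, daysSinceLastActivity) {
  const decayed = baseScore * Math.pow(0.5, daysSinceLastActivity / HALF_LIFE_DAYS);
  return Math.max(0, Math.round(decayed));
}

// A 90-score agent idle for 60 days (two half-lives) decays to 23 —
// below a typical access threshold, with no one having to remember it exists.
const score = currentTrust(90, 60);
```

Exponential decay is one reasonable choice here; the design point is that the score is recomputed at verification time from recent evidence rather than stored as a static grant.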
Tamper-evident audit trails. When agents interact with each other across organizational boundaries, the audit trail can't live on any single party's server. Hash-chained logs that reference previous entries make retroactive tampering detectable.
the offboarding problem, solved differently
OWASP ranked improper offboarding as the number one NHI risk for a reason. Organizations lose track of machine identities constantly. With AI agents, this gets worse — agents can spin up sub-agents, create new credentials, and establish connections that their operators don't know about.
A registry-based approach inverts the problem. Instead of trying to find and revoke every credential an agent ever created, you revoke the agent's identity at the registry level. Every service that verifies trust against the registry immediately sees the agent as untrusted. One action, complete offboarding.
// Before granting access, verify the agent is still active
const { trust_score, status } = await agentstamp.verify(agentId);
if (status === "revoked" || trust_score < 30) {
return { error: "Agent identity no longer trusted" };
}
// Trust is checked at interaction time, not just at onboarding

where we go from here
The non-human identity crisis is a solvable problem, but it requires accepting that the tools built for human IAM don't transfer cleanly to autonomous agents. Service accounts, API keys, and OAuth tokens were designed for a world where machines do exactly what they're told. That world is ending.
The next generation of agent identity needs cryptographic uniqueness per agent, human accountability chains, dynamic trust scoring, tamper-evident audit logs, and interoperability across platforms. Standards like ERC-8004 are formalizing how on-chain agent registration works. Platforms are starting to build the practical infrastructure.
Forty-five billion non-human identities are waiting for an identity layer that was actually designed for them. The organizations that build this into their agent infrastructure now — before a breach forces their hand — will be the ones still operating confidently when the NHI-to-human ratio hits 300:1.