Identity Crisis in AI Agents: Why Traditional IAM Is Breaking Down
By Adesh Gairola

AI agents are breaking traditional identity and access management systems. From impersonation risks to cross-domain delegation chains, enterprises need new frameworks that balance autonomous operation with accountability and security.

The identity crisis in AI is not about stolen passwords or phishing attacks. AI agents - the autonomous software entities making decisions and taking actions on behalf of users - are operating in a world built for human identities and predictable applications. That world is breaking down.

When ChatGPT calls an API to book your flight, when Claude analyzes your spreadsheet data, when an enterprise agent autonomously deploys code changes - these aren't simple API calls with predetermined execution paths. These are autonomous entities making real-time decisions, spawning sub-agents dynamically, and crossing organizational boundaries that legacy identity frameworks were never designed to handle.

Industry Challenge

Arnab Bose, Okta's Chief Product Officer, warns that unmanaged AI agents may be holding keys to your enterprise data. Without proper identity frameworks, agents can impersonate users, leak sensitive information, or cause financial harm with zero accountability.

The stakes are immediate. A recent whitepaper from the OpenID Foundation lays out the challenge: current OAuth 2.1 frameworks work reasonably well for synchronous agents within single trust domains, but fail completely for cross-domain operations, asynchronous execution, and recursive delegation. This gap is what every enterprise deploying AI agents must address.

Executive Summary

📖 15-20 minute read • Best for: Security/Identity professionals, AI governance teams

Key insights from this analysis:
  • Traditional IAM is incompatible with AI agents. Four breaking points - non-deterministic behavior, autonomous action, dynamic lifecycles, and cross-domain operations - render legacy identity frameworks inadequate for enterprise AI deployments.
  • Impersonation creates accountability gaps. When agents operate using user credentials instead of delegated identity, audit trails cannot distinguish autonomous decisions from human actions, violating regulatory requirements like EU AI Act Article 14.
  • Seven gaps in current identity standards: identity fragmentation across vendors, consent fatigue at scale, recursive delegation chains, multi-user agent contexts, cross-domain federation challenges, computer-using agent authentication, and continuous verification requirements.
  • Policy-based governance replaces individual consent. Organizations need to shift from per-action authorization to policy-as-code, intent-based authorization, and risk-based dynamic flows that balance security with operational scalability.
  • Standards convergence matters. While OAuth 2.1 provides a foundation for single-domain scenarios, cross-organizational agent operations require coordinated extension of existing frameworks before proprietary fragmentation becomes permanent technical debt.
💡 Quick takeaway: AI agents need delegated identity frameworks distinct from user credentials. Without proper identity management, organizations face accountability gaps, regulatory non-compliance, and unmanageable security risks as agents operate across organizational boundaries.

Why Traditional IAM Breaks Down for AI Agents

Traditional identity and access management was designed for a simpler world. Human users logging into applications. Services authenticating to APIs. Devices joining networks. Each category had clear boundaries, predictable lifecycles, and deterministic behavior.

AI agents break these assumptions.

Four Breaking Points

Non-deterministic behavior

Unlike applications that execute predetermined code paths, AI agents adapt in real-time based on context. They make dynamic decisions about when and how to use tools. You cannot pre-define all required permissions at provisioning time because the agent's needs evolve based on the specific task, user intent, and environmental context. This violates the principle of least privilege by design.

Autonomous action

AI agents don't wait for button clicks or explicit user approvals for every action. They interpret unstructured inputs - text, documents, images, audio - and formulate execution plans independently. When a user tells an agent to "handle my travel arrangements," what exactly has been authorized? The OpenID Foundation calls this an "interpretive burden" in identifying the scope of delegated authority.

Dynamic lifecycles

Traditional identity management assumes stable, long-lived identities measured in months or years. AI agents spawn, execute tasks, and terminate within minutes or hours. They operate at high velocity, moving across domains with no centralized lifecycle management. Agents may disappear and reappear in different contexts with completely different permission needs. Traditional provisioning and de-provisioning cycles cannot keep pace.

Cross-organizational operations

Single workflows now routinely touch multiple companies' services. Each organization has separate identity providers and policies. There are no pre-established trust relationships. Yet agents need to operate seamlessly across these boundaries. Single-organization IAM cannot handle this reality.

[Figure] How AI agents break traditional IAM assumptions: predictable behavior vs non-deterministic decisions, single-domain vs cross-domain operation, stable vs dynamic lifecycles - pointing to the need for intent-based authorization, federation standards, and automated governance.

Comparison: Traditional IAM vs Agent Reality

| Dimension | Traditional IAM | Agent Reality | Implication |
| --- | --- | --- | --- |
| Behavior | Deterministic, predictable code execution | Non-deterministic, context-dependent decisions | Cannot pre-define all permissions needed |
| Oversight | Human approval required for critical actions | Autonomous execution without constant oversight | Requires policy-based guardrails |
| Lifecycle | Stable, long-lived identities (months/years) | Dynamic spawn/terminate cycles (minutes/hours) | Lifecycle management must scale dramatically |
| Trust Domain | Single organization with unified IdP | Cross-organizational, multi-IdP environments | Requires federated trust frameworks |
| Permission Model | Static roles defined at provisioning | Dynamic, context-sensitive scoping | Need for runtime policy evaluation |
| Accountability | Direct user-to-action mapping | Delegation chains (User → Agent → Sub-Agent) | Requires delegation audit trails |

The Impersonation Problem: Why Agent Identity Matters

Most AI agents don't have their own identity. They impersonate users by operating with the user's credentials or access tokens.

This creates an accountability gap.

When you examine audit logs after an incident, you see: "User performed action." But the reality is that an agent autonomously decided and executed that action. Your investigation reveals nothing about which agent acted, why it acted, or what logic it followed. Zero accountability for autonomous decisions.

[Figure] Impersonation vs delegation: under impersonation, the user shares credentials, the agent acts as the user, and audit logs cannot distinguish the two, creating an accountability gap. Under delegation, the agent obtains an agent-specific token, acts on behalf of the user, and logs track both identities for clear accountability.

Why Regulators Care

This is a compliance problem.

The EU AI Act Article 14 mandates "effective oversight" for high-risk AI systems, requiring clear attribution of autonomous decisions to accountable entities. Impersonation makes compliance impossible. When agent actions can't be traced back to the agent that made them, organizations face direct legal exposure.

Impersonation is risky. Agents need delegated identity distinct from the user's identity. This allows the agent to be identified as an agent while still proving it's authorized by a specific user.
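In token terms, the difference shows up as an `act` (actor) claim alongside the user's `sub` claim, following OAuth token exchange (RFC 8693) conventions. A minimal Python sketch, with illustrative claim values, of how that one claim makes audit logs attributable:

```python
# Illustrative token payloads; claim names follow RFC 8693 conventions,
# subject and agent values are hypothetical.

impersonation_token = {
    "sub": "alice@example.com",  # only the user appears; the agent is invisible
    "scope": "calendar.write payments.execute",
}

delegation_token = {
    "sub": "alice@example.com",          # the delegating user
    "act": {"sub": "travel-agent-v2"},   # the actor: the agent itself
    "scope": "calendar.write payments.execute",
}

def audit_entry(token: dict, action: str) -> str:
    """Build an audit line; an `act` claim lets us name the real actor."""
    actor = token.get("act", {}).get("sub")
    if actor:
        return f"{actor} performed {action} on behalf of {token['sub']}"
    return f"{token['sub']} performed {action}"

print(audit_entry(impersonation_token, "book_flight"))
# With impersonation, the log shows only the user.
print(audit_entry(delegation_token, "book_flight"))
# With delegation, the log names both the agent and the delegating user.
```

The second log line is exactly what an incident investigation needs: which agent acted, and on whose authority.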

What Actually Works Today (And Its Limits)

For agents operating within a single enterprise trust domain - where the agent, services, and users all share a common identity provider - OAuth 2.1 with PKCE (Proof Key for Code Exchange) works well.

Model Context Protocol Success Story

The Model Context Protocol (MCP), which standardizes how AI models connect to resources and tools, initially shipped without authentication. Community pressure quickly forced the integration of OAuth 2.1. Security must be baked in from day one, not bolted on later.

MCP's architecture demonstrates the two-layer authentication model that works for today's agents:

  1. Client Authentication: The agent software itself must be authenticated as a trusted client with a workload identifier
  2. User Authentication & Delegation: The human user is authenticated and their intent to delegate authority to the agent is captured

For synchronous operations within a single organization, this pattern works. The agent is registered as a client in the corporate identity provider (Azure AD, Okta, etc.), uses OAuth 2.1 authorization code flow with PKCE, obtains user consent, and calls internal APIs with properly scoped tokens.
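The PKCE piece of that flow is simple to illustrate. A self-contained sketch of generating the `code_verifier`/`code_challenge` pair per RFC 7636 (the S256 method):

```python
import base64
import hashlib
import secrets

def make_pkce_pair() -> tuple[str, str]:
    """Generate a PKCE code_verifier and its S256 code_challenge (RFC 7636)."""
    # 32 random bytes -> 43-char base64url verifier (within the RFC's 43-128 range)
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

verifier, challenge = make_pkce_pair()
# The agent sends `challenge` (with code_challenge_method=S256) in the
# authorization request, keeps `verifier` secret, and presents `verifier`
# when exchanging the authorization code for tokens.
```

Because the challenge is bound to the secret verifier, an attacker who intercepts the authorization code cannot redeem it, which is why PKCE is mandatory in OAuth 2.1.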

Progressive Enterprise Integration

Forward-thinking enterprises are extending their existing IAM infrastructure to agents:

Single Sign-On (SSO): Federated identity providers allow agents to leverage corporate credentials without storing passwords or static API keys.

SCIM Provisioning: The System for Cross-domain Identity Management protocol automates agent lifecycle management. A proposed SCIM extension introduces an AgenticIdentity resource type, enabling centralized IT administration with policy-driven workflows.

Microsoft's Entra Agent ID and Okta's AIM (AI Identity Management) represent vendor implementations of this approach, treating agents as first-class identity citizens within enterprise directories.

But these solutions only work within the walls of a single organization. The moment an agent needs to operate across trust boundaries, the model breaks down.

Seven Gaps in Current Identity Standards

The OpenID Foundation's recent analysis identifies seven challenges that existing identity frameworks cannot adequately address. These create security vulnerabilities, compliance risks, and interoperability barriers.

1. Identity Fragmentation

Without standardized identity protocols for agents, every vendor and platform creates proprietary authentication methods. The same agent needs dozens of different credentials to work with different services, multiplying security vulnerabilities and creating integration nightmares for developers.

Cisco's AGNTCY framework, Microsoft's Entra Agent ID, and Okta's AIM each offer different approaches to agent identity. But they're not interoperable. This fragmentation reduces developer velocity and forces organizations to manage multiple security models, each with different attack surfaces.

Standardized frameworks are necessary for preventing fraud and ensuring regulatory compliance across the emerging agent ecosystem. A proposed standard, OpenID Connect for Agents (OIDC-A), aims to standardize core identity claims like agent name, model version, owner, capabilities, and certification status. Without convergence on standards, we're building technical debt that becomes harder to fix over time.

2. Consent Fatigue

A user with a dozen personal AI assistants - a travel agent, a finance bot, a health coach, a personal shopper, a calendar manager - faces a scalability problem. Each assistant makes hundreds of decisions per day. Each decision potentially requires authorization.

Thousands of authorization prompts per day lead to consent fatigue, where users reflexively approve requests without reading them. This makes security worse, not better. The mechanism designed to give users control becomes the weakest link.

How do you respect the principle of least privilege when dealing with flexible, non-deterministic agents that need varying permissions based on context?

  • Preemptive authorization (giving broad permissions upfront) violates least privilege
  • Per-action authorization (prompting for each decision) is untenable at scale

The current approach - pretending this problem doesn't exist - is failing. An emerging solution involves policy-as-code authorization combined with Client Initiated Backchannel Authentication (CIBA) for asynchronous human approval when agents encounter high-risk scenarios.

3. Recursive Delegation

[Figure] Recursive delegation with scope attenuation at each hop: a user with permissions A-E delegates scope A,B,C,D to a primary agent, which spawns Sub-Agent 1 (scope B,C) and Sub-Agent 2 (scope A); Sub-Agent 1 in turn creates a sub-sub-agent with scope C. Each delegation step can only narrow what the parent holds.

A user delegates to Agent A (their personal assistant). Agent A determines it needs specialized help and spawns Agent B (a data analysis specialist). Agent B needs to access the user's spreadsheet data. Agent A delegates some of its authority to Agent B.

This recursive delegation chain (User → Agent A → Agent B → Service) requires verifiable trust with scope attenuation at each hop to prevent permission abuse.

Requirements:

  • Each agent in the chain must prove its authority traces back to the original user
  • Permissions must progressively narrow at each delegation step (scope attenuation)
  • The final service must see and verify the complete delegation context
  • Any compromise at any link doesn't grant access beyond the attenuated scope

Two technical approaches:

OAuth 2.0 Token Exchange (RFC 8693): A centralized model where agents request down-scoped tokens from an authorization server. This centralizes policy control and simplifies revocation, but introduces latency and requires network connectivity.

Capability-Based Tokens (Biscuits/Macaroons): A decentralized model where authority is embedded in the credential itself, allowing offline attenuation without contacting the issuer. However, revocation becomes extremely challenging in offline scenarios - a critical unsolved problem.
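A minimal sketch of the macaroon-style construction: each signature is an HMAC chained off the previous one, so any holder can narrow authority offline by appending a caveat, but can never widen it (key and caveat strings here are illustrative):

```python
import hashlib
import hmac

def mint(root_key: bytes, identifier: str) -> tuple[list[str], bytes]:
    """Issuer mints a capability token as (caveat list, signature)."""
    sig = hmac.new(root_key, identifier.encode(), hashlib.sha256).digest()
    return [identifier], sig

def attenuate(caveats: list[str], sig: bytes, caveat: str) -> tuple[list[str], bytes]:
    """Any holder narrows the token offline by chaining a new caveat."""
    new_sig = hmac.new(sig, caveat.encode(), hashlib.sha256).digest()
    return caveats + [caveat], new_sig

def verify(root_key: bytes, caveats: list[str], sig: bytes) -> bool:
    """The service replays the HMAC chain from its root key."""
    expected = hmac.new(root_key, caveats[0].encode(), hashlib.sha256).digest()
    for caveat in caveats[1:]:
        expected = hmac.new(expected, caveat.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(expected, sig)

key = b"service-root-key"
caveats, sig = mint(key, "user=alice")
caveats, sig = attenuate(caveats, sig, "scope=spreadsheets:read")  # Agent A narrows
caveats, sig = attenuate(caveats, sig, "expires=2025-01-01")       # Agent B narrows further
assert verify(key, caveats, sig)
# A holder cannot widen the token: dropping a caveat invalidates the chain.
assert not verify(key, caveats[:-1], sig)
```

The offline-attenuation property is exactly what makes revocation hard: the issuer never sees the derived tokens, so it can only invalidate the root.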

These recursive delegation problems are showing up in production environments. The question is how to design systems that can handle them safely.

4. Multi-User Agent Contexts

OAuth and OpenID Connect were designed for one-to-one relationships: one user grants permissions to one application. But what happens when an agent serves multiple users simultaneously?

A CFO's AI agent answering questions in a company Slack channel with 20 employees faces this problem. The CFO has access to sensitive salary data. Other channel members don't. When someone asks about department budgets, should the agent answer based on:

  • The CFO's permissions (risking disclosure to unauthorized users)?
  • The intersection of permissions across all channel members (the right approach, but technically complex)?

No popular protocol exists for shared agents with varying authority levels. This requires fine-grained Attribute-Based Access Control (ABAC) to compute permission intersections - a complex implementation challenge with no standardized solution.
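The core computation is straightforward even if the surrounding ABAC machinery is not. A sketch, with hypothetical permission names, of intersecting channel members' permissions:

```python
def effective_permissions(member_permissions: dict[str, set[str]]) -> set[str]:
    """A shared agent may only use the intersection of all members' permissions."""
    perms = iter(member_permissions.values())
    effective = set(next(perms))
    for p in perms:
        effective &= p
    return effective

channel = {
    "cfo":      {"budget.read", "salary.read", "forecast.read"},
    "engineer": {"budget.read", "forecast.read"},
    "intern":   {"budget.read"},
}
print(effective_permissions(channel))  # only what every member may see
```

The hard part in practice is not the set intersection but attributing every channel member reliably and recomputing the intersection as membership changes mid-conversation.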

5. Cross-Domain Federation

A financial advisory agent needs to aggregate data from your bank, investment platform, and credit agency. Each organization has its own identity provider. The agent is "nobody" to organizations other than the one that created it.

How does the agent prove to Domain B that it has been delegated authority by a user from Domain A? Traditional infrastructure-based trust systems like SPIFFE/SPIRE work well within controlled environments but don't extend across organizational boundaries.

Three technical approaches:

OAuth 2.0 Token Exchange: Preserves original identity context across multi-hop workflows, allowing domains to understand the delegation chain. Enables identity chaining across domains.

Identity Assertion Authorization Grant: Agents use identity assertions from trusted corporate IdPs to obtain access tokens for third-party APIs.

Verifiable Credentials: Cryptographically encapsulate delegated authority, enabling decentralized trust without requiring all domains to pre-establish relationships.

Each approach has trade-offs between centralized control, decentralized flexibility, and revocation capabilities. The industry hasn't converged on a single solution.

6. Computer-Using Agents

OpenAI's Operator, Anthropic's computer use capabilities, and other Computer-Using Agents (CUAs) invert the security model.

Instead of calling APIs with structured parameters, these agents manipulate visual interfaces directly: controlling browsers, clicking buttons, filling forms. This bypasses all traditional API-based authorization controls. Once a CUA logs into a web application as a user, every subsequent action looks exactly like a human using the interface.

Distinguishing agent actions from genuine user actions becomes nearly impossible.

The emerging solution is Web Bot Auth, an IETF proposal that allows agents to prove their identity within HTTP requests cryptographically using HTTP Message Signatures. This creates a "passport for agents" that attaches verifiable identity to traffic regardless of IP address, enabling sites to differentiate legitimate agents from malicious bots without blocking all automation.
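Real Web Bot Auth builds on HTTP Message Signatures (RFC 9421) with asymmetric keys that verifiers look up from the agent's published key material; the following is a deliberately simplified HMAC sketch just to show the shape of a signature base built from covered request components (the component names mimic RFC 9421 derived components, the agent identifier is hypothetical):

```python
import base64
import hashlib
import hmac

def signature_base(components: dict[str, str]) -> str:
    """Simplified RFC 9421-style signature base: one covered component per line."""
    return "\n".join(f'"{name}": {value}' for name, value in components.items())

def sign_request(key: bytes, components: dict[str, str]) -> str:
    """Sign the base; real deployments use asymmetric keys, not a shared secret."""
    base = signature_base(components)
    mac = hmac.new(key, base.encode(), hashlib.sha256).digest()
    return base64.b64encode(mac).decode()

request = {
    "@method": "GET",
    "@authority": "example.com",
    "@path": "/products",
    "signature-agent": "my-crawler.example",  # hypothetical agent identifier
}
sig = sign_request(b"demo-shared-secret", request)
# A verifying site recomputes the base from the received request and checks
# the signature against the agent's published key material.
```

Because the signature covers the request itself rather than its source address, the agent's identity survives proxies, IP rotation, and shared egress.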

Does the open web risk fracturing into a two-tiered system where identified, trusted agents get permissioned access while anonymous agents face aggressive blocking?

7. Continuous Verification

Traditional authorization happens at the start of a session: authenticate the user, check permissions, grant access. This works when humans are in the loop, making decisions and taking actions.

AI agents operate autonomously, sometimes for hours or days. An initial authentication and authorization check is insufficient for high-stakes scenarios:

  • AI trading bots executing financial transactions
  • Operations agents deploying code to production
  • Autonomous vehicles navigating cities

These scenarios require continuous, programmatic verification that the agent's actions align with operational goals and constraints. Identity becomes a real-time safety system, not just an access control gatekeeper.

Solving the Authorization Scalability Crisis

Better UX won't solve the scalability crisis. Organizations need an architectural shift from individual consent to policy-based governance.

Policy-Based Governance Patterns

1. Policy-as-Code for Agent Authorization

Administrators or users define high-level policies that set the operational envelope for agents: budgetary limits, data access tiers, API call velocity, and permissible actions. The IAM system enforces these policies programmatically without requiring per-action human approval.

Example policy: "Marketing agents may access customer contact information and spend up to $5,000 per day on advertising, but cannot access financial records or make purchases exceeding $500 without approval."
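A sketch of that policy expressed as code, with hypothetical resource names and an assumed three-way allow/deny/escalate decision:

```python
from dataclasses import dataclass

@dataclass
class Action:
    actor_role: str
    resource: str
    amount: float = 0.0

# Hypothetical policy-as-code mirroring the example policy above.
POLICY = {
    "marketing": {
        "allowed_resources": {"customer.contacts", "ads.spend"},
        "daily_spend_limit": 5000.0,
        "per_purchase_approval_threshold": 500.0,
    }
}

def evaluate(action: Action, spent_today: float) -> str:
    """Programmatic enforcement: no per-action human approval needed."""
    rules = POLICY.get(action.actor_role)
    if rules is None or action.resource not in rules["allowed_resources"]:
        return "deny"
    if action.resource == "ads.spend":
        if spent_today + action.amount > rules["daily_spend_limit"]:
            return "deny"
        if action.amount > rules["per_purchase_approval_threshold"]:
            return "require_approval"  # escalate to a human, e.g. via CIBA
    return "allow"

print(evaluate(Action("marketing", "ads.spend", 300.0), spent_today=1000.0))   # allow
print(evaluate(Action("marketing", "ads.spend", 800.0), spent_today=1000.0))   # require_approval
print(evaluate(Action("marketing", "financial.records"), spent_today=0.0))     # deny
```

The `require_approval` branch is where policy-as-code meets asynchronous human approval: routine actions flow through, and only threshold-crossing ones generate a prompt.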

2. Intent-Based Authorization

Users approve high-level intents expressed in natural language. The system translates these into bundles of specific, least-privilege permissions.

For example, "Book my travel for the upcoming conference" becomes a precise set of permissions:

  • Read calendar (to find conference dates)
  • Search flights and hotels (within date range)
  • Make purchases (up to $2,000)
  • Update calendar (to add travel bookings)

MIT's research on authenticated delegation introduces the concept of an Intent Ledger: a cryptographic record where user instructions are signed and recorded as Intent Mandates, with all subsequent agent actions referencing the mandate. This creates a non-repudiable audit trail linking high-level intent to specific actions.
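A toy rendering of the idea, using an HMAC where a real Intent Ledger would use asymmetric signatures and an append-only log (the mandate fields and key are illustrative assumptions):

```python
import hashlib
import hmac
import json

def sign_mandate(user_key: bytes, intent: str, constraints: dict) -> dict:
    """Record a signed Intent Mandate; later agent actions reference its id."""
    mandate = {"intent": intent, "constraints": constraints}
    payload = json.dumps(mandate, sort_keys=True).encode()
    mandate["mandate_id"] = hashlib.sha256(payload).hexdigest()[:16]
    mandate["signature"] = hmac.new(user_key, payload, hashlib.sha256).hexdigest()
    return mandate

mandate = sign_mandate(
    b"alice-signing-key",  # illustrative; real designs use the user's private key
    intent="Book my travel for the upcoming conference",
    constraints={
        "max_spend_usd": 2000,
        "scopes": ["calendar.read", "calendar.write",
                   "flights.search", "hotels.search", "payments.execute"],
    },
)
# Every subsequent agent action carries mandate["mandate_id"], linking the
# concrete API call back to the signed high-level intent.
```

The audit value comes from the linkage: a disputed purchase can be traced to a mandate the user provably signed, with its spend ceiling recorded at signing time.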

3. Risk-Based Dynamic Authorization

[Figure] Risk-based dynamic authorization flow: the agent's action goes to a Policy Decision Point for risk assessment. Low-risk actions are checked against policy and auto-approved; medium-risk actions get enhanced verification of context (time, location, recent activity); high-risk actions trigger a CIBA flow requiring explicit user approval on a trusted device. Actions proceed if approved and are blocked if denied.

A Policy Decision Point (PDP) assesses risk in real-time based on the action context:

  • Low risk: check against policy and auto-approve if compliant
  • Medium risk: enhanced verification of context (time, location, recent activity)
  • High risk: trigger a CIBA flow requiring explicit user approval on a trusted device

This balances security and usability: the vast majority of agent actions flow through without friction, while exceptional cases receive appropriate scrutiny.
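A toy PDP routing function makes the tiering concrete; the risk thresholds and action fields here are pure assumptions:

```python
def assess_risk(action: dict) -> str:
    """Assumed risk model: amount and reversibility drive the tier."""
    if action.get("irreversible") or action.get("amount", 0) > 1000:
        return "high"
    if action.get("amount", 0) > 100 or action.get("new_counterparty"):
        return "medium"
    return "low"

def decide(action: dict) -> str:
    """Toy Policy Decision Point: route each action by its risk tier."""
    risk = assess_risk(action)
    if risk == "low":
        return "auto_approve"           # policy check only, no user friction
    if risk == "medium":
        return "enhanced_verification"  # check time, location, recent activity
    return "ciba_approval"              # push approval to user's trusted device

print(decide({"amount": 20}))                          # low risk: auto-approve
print(decide({"amount": 250}))                         # medium: verify context
print(decide({"amount": 5000, "irreversible": True}))  # high: CIBA approval
```

In a real deployment the risk model would weigh far richer signals (behavioral baselines, counterparty reputation, data sensitivity), but the routing structure stays the same.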

Guardrails as Defense-in-Depth

Traditional identity governance focuses on what agents can access. Guardrails address how agents use that access.

Real-time controls work at several levels:

  • Masking personally identifiable information before sending data to an LLM
  • Preventing unintended information sharing across security boundaries
  • Limiting resource consumption through rate limits and quotas
  • Enforcing data residency rules for regulatory compliance
  • Blocking problematic outputs for ethical alignment

Guardrails must be enforced at the Policy Decision Point, not within the agent itself. This creates a centralized, auditable chokepoint for safety, compliance, and ethical alignment. If enforcement depends on the agent voluntarily respecting constraints, compromised or malicious agents can simply bypass them.
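At the enforcement point, even PII masking is mechanical. A minimal sketch using two regex patterns (production guardrails would use proper PII detection, not a pair of regexes):

```python
import re

# Minimal illustrative patterns; real detectors cover many more PII types.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def mask_pii(text: str) -> str:
    """Applied at the Policy Enforcement Point, before text reaches the LLM."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(mask_pii("Contact alice@example.com, SSN 123-45-6789, re: Q3 budget."))
# The email and SSN are replaced with [EMAIL] and [SSN] placeholders.
```

Because this runs at the centralized chokepoint, a compromised agent cannot opt out of it.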

Economic Dimension: When Agents Spend Money

The identity challenge becomes more acute when agents engage in economic activity: paying for API usage, making purchases, orchestrating services that cost money.

Three complementary protocols:

Financial-grade API (FAPI): Hardens OAuth 2.1 for high-stakes scenarios with sender-constrained tokens, stronger client authentication, and strict consent logging. While not agent-specific, FAPI provides the security foundation for any agent operating in regulated financial contexts.

Agent Payments Protocol (AP2): Introduces cryptographically-signed Mandates that capture user intent:

  • Intent Mandates provide high-level instructions and auditable context
  • Cart Mandates capture specific purchase approvals
  • Uses Verifiable Credentials to bind mandates to user identity, creating non-repudiable audit trails

KYAPay Protocol: Addresses the "cold start" problem when agents need to establish payment relationships with new services. KYAPay extends KYC (Know Your Customer) and KYB (Know Your Business) verification to agents through a KYA (Know Your Agent) process. The output is a JWT bundling verified identity claims with payment information, enabling atomic onboarding and payment in a single interaction.
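The JWT bundling can be sketched by hand. This hand-rolled HS256 example uses hypothetical claim names in the spirit of the KYA onboarding token described above; real implementations should use a vetted JWT library and asymmetric signing:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_jwt(key: bytes, claims: dict) -> str:
    """Hand-rolled HS256 JWT, for illustration only."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims, sort_keys=True).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = b64url(hmac.new(key, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

# Hypothetical claims bundling verified identity with payment information,
# so onboarding and payment can happen in one interaction.
token = make_jwt(b"issuer-secret", {
    "sub": "agent:travel-assistant-7",
    "kya_verified": True,
    "owner": "acme-corp",
    "payment_method": "tok_visa_4242",
})
```

The point is the bundling: one verifiable credential carries both "who this agent is" and "how it pays," removing the cold-start handshake.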

Governance Imperative: Strategic Priorities

The shift from network-based and device-based security to identity-based security is complete in the AI era. Identity is the new security perimeter because agents can appear anywhere. They're not bound to specific networks, devices, or even geographic locations.

1. Lifecycle Management

Agents require formal lifecycle processes mirroring human employee onboarding and offboarding.

  • Provisioning establishes identity and grants initial permissions
  • Continuous governance reviews permissions and monitors anomalies
  • De-provisioning permanently removes identity across all systems
  • Automated discovery identifies shadow agents operating outside official channels

2. Delegation, Not Impersonation

Shift from agents impersonating users to agents proving delegated authority.

  • Implement On-Behalf-Of (OBO) flows with dual identities
  • Tokens contain both user (sub) and agent (act/azp) claims

3. Federated Trust

Single-organization IAM breaks down when agents cross organizational boundaries.

  • Federated trust fabrics using OpenID Federation or X.509
  • Asynchronous authorization for unfamiliar services
  • Balance agent autonomy with user control

4. Externalized Guardrails

Security teams must externalize guardrails from agent implementations to centralized policy enforcement points.

  • Auditable chokepoints for every agent action
  • Real-time evaluation against organizational policies
  • PII masking and data residency enforcement
  • For cyber-physical systems, IAM becomes a core safety system
  • Centralized control prevents compromised agents from bypassing constraints

Implementation Note

The distinction between token revocation and agent de-provisioning matters critically. Revoking a token terminates an active session. De-provisioning erases the identity itself. For a compromised agent, only full de-provisioning prevents persistent backdoors where underlying trust relationships remain intact.
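The distinction is easy to see in code. A toy registry, with assumed method names, where revocation removes one session but de-provisioning erases the identity and everything derived from it:

```python
class AgentIdentityRegistry:
    """Toy registry distinguishing session revocation from identity removal."""

    def __init__(self):
        self.identities: dict[str, dict] = {}
        self.active_tokens: dict[str, str] = {}  # token -> agent_id

    def provision(self, agent_id: str) -> None:
        self.identities[agent_id] = {"status": "active"}

    def issue_token(self, agent_id: str, token: str) -> None:
        assert self.identities[agent_id]["status"] == "active"
        self.active_tokens[token] = agent_id

    def revoke_token(self, token: str) -> None:
        """Ends one session; the identity remains and can mint new tokens."""
        self.active_tokens.pop(token, None)

    def deprovision(self, agent_id: str) -> None:
        """Erases the identity and every token derived from it."""
        self.identities.pop(agent_id, None)
        self.active_tokens = {t: a for t, a in self.active_tokens.items()
                              if a != agent_id}

registry = AgentIdentityRegistry()
registry.provision("agent-42")
registry.issue_token("agent-42", "tok-1")
registry.revoke_token("tok-1")    # agent-42 can still obtain new tokens
registry.issue_token("agent-42", "tok-2")
registry.deprovision("agent-42")  # no identity, no tokens, no backdoor
```

After `revoke_token`, the trust relationship survives and the agent simply re-authenticates; only `deprovision` closes that door, which is why it is the required response to compromise.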

Standards Development: Preventing Fragmentation

The risk of fragmentation is real. Identity vendors are racing to create proprietary agent identity systems, each with compelling features but limited interoperability.

Coordinated standards development is needed across three areas:

OpenID Connect for Agents (OIDC-A): An emerging proposal to standardize core identity claims, capabilities, and discovery mechanisms specifically for agents. This would enable multiple IdPs to issue agent credentials that are uniformly understood across the ecosystem.

Interoperability Profiles: The IPSIE (Interoperability Profiling for Secure Identity in the Enterprise) working group is developing guidance on rigorous, interoperable profiles of identity standards, giving enterprises confidence to adopt AI agents without facing unquantifiable risks.

Decentralized Identity Frameworks: Decentralized Identifiers (DIDs) and Verifiable Credentials offer a path to globally unique, verifiable agent identities that don't depend on centralized providers, enabling truly portable agent identity across organizational boundaries.

The OpenID Foundation makes a critical point: existing foundational frameworks work for securing today's agents. OAuth 2.1, SCIM, and OpenID Connect are the bedrock. The challenge isn't replacing these standards. It's extending them thoughtfully to address agent-specific requirements like delegation, scope attenuation, and cross-domain operation.


Conclusion: Identity as the Foundation for Trustworthy AI

Digital identity was built for humans and applications. AI agents are neither. They act autonomously, spawn dynamically, and cross organizational boundaries legacy IAM cannot handle.

OAuth 2.1 with PKCE works for single-domain scenarios. Enterprise IAM platforms are treating agents as first-class identity citizens. Standards bodies are developing extensions for delegation, federation, and recursive trust chains. The building blocks exist.

AI agents are being deployed at scale today. Each implementation without proper delegated identity creates forensic gaps, compliance risks, and interoperability barriers. Organizations that wait will face costly remediation later.

The path forward is clear. Agents need tokens that contain both user and agent identities for audit trails that satisfy regulators. Policy-based governance must replace thousands of individual prompts with high-level authorization that actually scales. Standards like OIDC-A and Verifiable Credentials need to converge before fragmentation becomes permanent technical debt. Real-time monitoring and policy evaluation must run continuously for agents operating autonomously over extended periods.

Vendors, standards bodies, and enterprises need to coordinate. Identity isn't just access control anymore - it's the substrate for accountability, audit trails, policy enforcement, and trust in autonomous systems. Organizations that get this right will deploy AI agents safely at scale. Those that don't will struggle with security debt that compounds over time.

raxIT Perspective: AI Governance Beyond Identity

Identity frameworks like OAuth 2.1 and OIDC handle authentication and delegation. But identity alone doesn't solve the governance challenge. Organizations need visibility into what agents are doing and policy enforcement that works regardless of which identity system they use.

What raxIT adds to the stack:

  • Agent Discovery & Inventory: See all AI agents operating in your environment, including shadow agents that bypass official approval
  • Policy Enforcement & Guardrails: Enforce organizational policies at runtime - PII masking, data residency rules, spending limits - regardless of the agent's identity provider
  • Continuous Monitoring: Track agent behavior over time to spot anomalies, policy violations, and security risks before they escalate
  • Compliance Documentation: Generate audit trails that satisfy regulators by connecting high-level intent to specific agent actions

Identity systems tell you who the agent is. Governance platforms tell you what the agent is doing and whether that's okay. You need both.

Ready to govern your AI agents? Reach out to discuss your specific deployment context and governance needs.