AI Agents Demand Workflow Isolation and JIT Credentials

Experts warn that AI agents behave like creative insider threats; securing them requires cleaning up unmanaged identities, vending dynamic just-in-time credentials, and enforcing strict workflow isolation to curb privilege-escalation chains.

Unmanaged Identities Amplify Agent Risks

AI agents inherit broad, lingering access from human and non-human identities (NHIs), creating insider threats that creatively exceed their intended scopes. Jake Lundberg of HashiCorp highlights how agents, unlike deterministic scripts, explore unexpected paths because of their non-deterministic nature. Regulated industries like finance are taking conservative stances while others rush ahead, mirroring the early cloud-adoption chaos in which "cloud first" policies ignored security.

Panel consensus: organizations overlook unmanaged identities—old roles, static API keys left in codebases, or credentials pasted into chat systems. Jake notes, "I joke that I probably have access to any number of systems that I had access to over the last 30 years. And it's probably true." This compounds with agents' creativity, enabling self-escalating privilege chains in which one agent delegates to another, expanding the blast radius unpredictably.
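
Discovery of static keys left in codebases, the kind of cleanup tools like Vault Radar automate, can be approximated with a simple pattern scan. The sketch below is illustrative only: the regexes and file walk are assumptions, not any commercial scanner's actual rules.

```python
import re
from pathlib import Path

# Illustrative patterns for common static-credential formats. These are
# not exhaustive and are not the rules any real scanner ships with.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(
        r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"
    ),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_tree(root: str) -> list[tuple[str, str, int]]:
    """Walk a directory tree and report (path, pattern_name, line_number) hits."""
    hits = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file; skip rather than abort the scan
        for lineno, line in enumerate(text.splitlines(), start=1):
            for name, pattern in SECRET_PATTERNS.items():
                if pattern.search(line):
                    hits.append((str(path), name, lineno))
    return hits
```

Running this over a repository before agents are granted access gives a first-pass inventory of exactly the lingering credentials the panel warns about.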

Jeff Crume adds that traditional NHI management for workloads like CI/CD was deterministic; agents introduce unpredictability, nullifying the biometrics or MFA suited to humans. Dave McGinnis frames agents as "the most helpful insider threats we've ever had," echoing the segment's opener.

Traditional IAM Breaks Under Agent Creativity

Human IAM and static NHI practices fail for agents because they lack attestation that a workflow was human-initiated, inheritance controls across roles, and audit trails that distinguish user intent from agent actions. Jake explains: "How do I attest that Jake is the one who asks for those things and then how do I reduce the boundaries of what my identity can do just for that particular domain space?"

Key divergence: panelists agree on identity-scoping gaps but emphasize workflow isolation over credentials alone. Direct agent-to-agent communication risks "confused deputy" scenarios, in which a privileged agent is tricked into acting on a less-privileged caller's behalf. Jake argues against it: "If you allow your agents to reach out and talk to other agents, you've already lost this game." Instead, mimic human separation of duties—specialized agents running domain-specific models, even embedded in IoT devices.

Host Matt references "self-escalating privilege chains," which Jake ties to missing isolation: agents creatively proxy requests, bypassing scopes when nothing ring-fences them.

Security Lifecycle Management: Human-to-Agent Handoff

IBM and HashiCorp's joint approach—Security Lifecycle Management—spans verification, credential vending, and inspection. Tools like IBM Verify handle human attestation; HashiCorp Vault vends just-in-time (JIT) credentials for agents. Vault Identity Protect inspects network streams, and Vault Radar discovers unmanaged identities.

Jake outlines the layers: test human workflows, vend laser-focused agent identities, and verify both ends. Code scanning keeps API keys out of repositories, addressing breaches like LiteLLM, where compromised libraries exfiltrate local keys.

Suja Viswesan reinforces that agents need dynamic rotation beyond human-style MFA, shifting from static to session-based credentials that expire once the task completes—even a credential designed to live for days should be killed after minutes.

Trade-offs: JIT introduces observability challenges—short-lived identities complicate tracing—but it enforces just-in-time over just-in-case access, flipping the legacy IAM mindset in which credentials outlive their need.

Isolate Workflows with Orchestration Layers

Jake's strongest argument: prevent agent discovery and direct calls by routing all work through a central coordination layer of partitioned queues (rendered as "CFKA" in the transcript, most likely a mishearing of Kafka). Agents watch domain-specific queues and are orchestrated centrally, with no visibility into peers.

"Think of CFKA as almost like the partitioning layer to basically allow an agent to watch for the work that it's supposed to be doing and it can't be called by other agents," Jake says. This yields audit artifacts for compliance, crucial for regulated sectors.
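
The partitioning idea can be sketched in a few lines: each agent is handed exactly one domain queue, and delegation happens only by submitting new work through the orchestrator. This is a toy in-process illustration using Python's stdlib `queue`, not IBM's actual orchestration layer; the class and method names are assumptions.

```python
import queue

class Orchestrator:
    """Central router: agents watch only their own domain queue and never see peers."""
    def __init__(self, domains: list[str]):
        self._queues = {d: queue.Queue() for d in domains}

    def submit(self, domain: str, task: str) -> None:
        if domain not in self._queues:
            raise KeyError(f"unknown domain: {domain}")
        self._queues[domain].put(task)

    def queue_for(self, domain: str) -> queue.Queue:
        """Hand an agent exactly one queue; it gets no handle to the others."""
        return self._queues[domain]

class Agent:
    """Knows its own inbox and nothing else; follow-up work goes via the orchestrator."""
    def __init__(self, domain: str, orch: Orchestrator):
        self.domain = domain
        self._inbox = orch.queue_for(domain)
        self._orch = orch
        self.done: list[str] = []

    def work_once(self) -> None:
        task = self._inbox.get_nowait()
        self.done.append(task)
        # Delegation is a new submission through the orchestrator,
        # never a direct call to another agent.
        if task == "summarize-logs":
            self._orch.submit("reporting", "file-report")
```

An agent here cannot be "called" by a peer: the only path between the logs domain and the reporting domain is a message the orchestrator routes, which is also the hook where audit artifacts get recorded.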

Jeff Crume probes how agents establish trust between one another; Jake counters: build human-like hierarchies via orchestration, not peer-to-peer links. Dave McGinnis points to rushed design: "No one's thinking about... people are just so excited about what is possible and they release it and they go oops forgot about right."

Panel agreement: FOMO engineering repeats the cloud era's mistakes—racing Yugos instead of engineering F1 cars. Responsible design limits agents to a scoped set of "friends," preserving AI's power without the chaos.

Roadmap: From Inventory to Session-Based Isolation

Jake's prioritized steps:

  1. Inventory all identities—clean VCS repos, chat systems, and files of unmanaged creds.
  2. Rotate static creds to shorter-lived ones.
  3. Shift to JIT/session-based credentials: creds spawn on request and expire after the session.
  4. Assign long-term agent identities sparingly (e.g., via SPIFFE).
  5. Isolate workflows via queues and orchestration.
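
Step 4's "sparingly" can be made concrete as a short, explicit allowlist: a registry of SPIFFE-style IDs (following SPIFFE's `spiffe://trust-domain/path` naming convention) mapped to fixed scopes, with everything else forced onto short-lived session credentials. The registry and `authorize` helper below are illustrative assumptions, not part of the SPIFFE spec.

```python
# Long-lived identities are the exception: a small, explicit allowlist of
# SPIFFE-style IDs with fixed scopes. Anything not listed here must fall
# back to short-lived session credentials instead.
LONG_TERM_AGENTS: dict[str, set[str]] = {
    "spiffe://example.org/agent/log-summarizer": {"logs:read"},
    "spiffe://example.org/agent/report-writer": {"reports:write"},
}

def authorize(spiffe_id: str, scope: str) -> bool:
    """Allow only registered long-term agents, and only within their fixed scope."""
    return scope in LONG_TERM_AGENTS.get(spiffe_id, set())
```

Keeping this list short is the point: every entry is a standing credential someone must justify, audit, and eventually retire.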

"You have to be this tall to AI," Jake quips—like a roller-coaster height check, master the basics before the advanced play.

Panelists converge: the tooling exists (Vault, Verify); execution needs organizational will. Predictions: domain-specific small models proliferate, and compliance drives adoption in finance.

Notable quotes:

  • Jake Lundberg: "We need to isolate those agentic workflows... ring fence not just the identity but the scope and how those agents run and how it is that I allow them to live and die for the workloads that they need." (On workflow isolation vs. mere IAM.)
  • Jake Lundberg: "The beauty is... we have the tooling. It's now just... working on your people to actually affect this change." (Urging organizational buy-in for migration.)
  • Dave McGinnis (quoted): "AI agents are the most helpful insider threats we've ever had." (Framing the core risk.)
  • Jeff Crume: "What we need to build in are things that are actually more human... How do those two agents learn to trust each other?" (Probing inter-agent dynamics, countered by isolation.)
  • Jake Lundberg: "You're going to be fear of missing out on your revenue targets... because your company may not exist anymore." (Warning against FOMO-driven deployment.)

Key Takeaways

  • Inventory unmanaged identities across systems before deploying agents—remove static creds from codebases and files.
  • Transition NHI from static/long-lived to rotation, then JIT/session-based credentials that auto-expire post-task.
  • Ban direct agent-to-agent communication; use orchestration layers with partitioned, per-domain message queues for isolated workflows.
  • Design agents with separation of duties: Scope to single jobs, like human roles, using small/domain-specific models.
  • Prioritize compliance artifacts—auditable handoffs from human requests to agent actions—for regulated industries.
  • Avoid FOMO engineering; treat AI adoption like F1 design, not reckless speed, to prevent breaches like LiteLLM.
  • Combine human verification (IBM Verify) with agent credential vending (HashiCorp Vault) for end-to-end lifecycle.
  • Expect workflow isolation to define secure AI, enabling creativity within guardrails without privilege escalation.
Video description

Learn more about solving agentic AI identity and access gaps → https://ibm.biz/BdpSCg

LiteLLM is a nifty little Python library that gives you access to about 100 different AI services through one API. It gets an estimated 3.4 million downloads a day. And last week, it was turned into a Trojan horse, distributing infostealers to hundreds of thousands of devices. (At least, that's what TeamPCP says—the hackers behind the LiteLLM breach and a slew of other high-profile software supply chain attacks in recent weeks.) Quote Andrej Karpathy: This is "basically the scariest thing imaginable in modern software."

On this episode of Security Intelligence, Suja Viswesan, Dave McGinnis and Jeff Crume help us break down the LiteLLM breach and the broader campaign TeamPCP is waging. We're also joined by HashiCorp Field CTO Jake Lundberg in the first segment for a discussion of how organizations are trying—with varying degrees of success—to tackle the agentic AI problem. AI agents are identities—but identities our existing frameworks weren't built to house. Simply porting existing human and non-human identity management practices onto them won't cut it. But the question remains: What do we need instead?

All that and more on Security Intelligence.

Segments

  • 00:00 -- Intro
  • 1:13 -- Who will fix AI agent security?
  • 21:17 -- RSAC 2026 Recap
  • 29:31 -- 2026's most dangerous cyberattacks
  • 40:45 -- The LiteLLM breach

The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity.

Explore the podcast → https://ibm.biz/BdpSCh

#AIAgentSecurity #AIAgent #Cyberattack


© 2026 Edge