Deep Agents: LangChain's Ready-Made Harness for Complex AI Tasks
Deep Agents automates planning, filesystem offloading, subagents, context compression, and memory for LangGraph agents, handling the infrastructure so you can build task logic in one function call.
Automate Agent Infrastructure to Focus on Tasks
Replace hand-crafted LangGraph loops with create_deep_agent() from the deepagents library (pip install deepagents). This single function builds a full agent harness on LangChain/LangGraph, managing state, streaming, and context without custom schemas or edges. For complex tasks that need loops, tools, and variable outputs, it eliminates boilerplate: invoke the agent with messages and a tool such as get_weather, and it runs the tool-calling loop automatically.
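To make the harness concrete, here is a minimal sketch of the tool-calling loop that create_deep_agent() runs for you. The model stub, tool registry, and get_weather tool are illustrative stand-ins, not the deepagents implementation.

```python
# Illustrative sketch of a tool-calling loop; the model stub and
# tool registry here are hypothetical, not the deepagents internals.

def get_weather(city: str) -> str:
    """A toy tool: return a canned forecast for a city."""
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

def fake_model(messages):
    """Stub model: request a tool call once, then answer."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "get_weather", "args": {"city": "Paris"}}
    return {"content": "Forecast: " + messages[-1]["content"]}

def run_agent(messages):
    # Loop until the model answers without requesting a tool.
    while True:
        reply = fake_model(messages)
        if "tool" not in reply:
            return reply["content"]
        result = TOOLS[reply["tool"]](**reply["args"])
        messages.append({"role": "tool", "content": result})

print(run_agent([{"role": "user", "content": "Weather in Paris?"}]))
# → Forecast: Sunny in Paris
```

The harness owns this loop; you only supply the tools and the prompt.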
LangGraph remains the low-level runtime for graphs and persistence; Deep Agents adds opinionated layers on top, such as automatic planning via a write_todos tool that persists todo lists (pending, in_progress, completed) in state for adaptive execution across sessions.
Handle Long Contexts with Filesystem and Compression
Offload large tool outputs (>20,000 tokens) to a pluggable virtual filesystem (ls, read_file, write_file, edit_file, glob, grep), replacing them in context with file paths and 10-line previews. Backends include in-memory (default), local disk, LangGraph Store for persistence, or sandboxes like Modal/Daytona.
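The offloading pattern can be sketched in a few lines: oversized tool results are written to a virtual filesystem and replaced in context by a path plus a short preview. The 20,000-token threshold and 10-line preview come from the text; the token count is stubbed as a whitespace word count for illustration.

```python
# Sketch of output offloading to an in-memory virtual filesystem.
# Token counting is approximated by word count for illustration only.

VFS: dict[str, str] = {}          # in-memory virtual filesystem
TOKEN_LIMIT = 20_000
PREVIEW_LINES = 10

def offload_if_large(path: str, output: str) -> str:
    if len(output.split()) <= TOKEN_LIMIT:
        return output             # small outputs stay in context
    VFS[path] = output            # write_file equivalent
    preview = "\n".join(output.splitlines()[:PREVIEW_LINES])
    return f"[stored at {path}]\n{preview}"

big = "\n".join(f"line {i} " + "word " * 40 for i in range(1000))
msg = offload_if_large("/outputs/search.txt", big)
```

The agent later retrieves the full output on demand with read_file or grep rather than carrying it in every model call.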
At 85% context window usage, auto-summarize history into structured notes (intent, artifacts, next steps), archiving originals to files for on-demand retrieval. This enables indefinite runs on research or coding tasks without truncation.
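The compression trigger described above can be sketched as follows. The 85% threshold and the note fields (intent, artifacts, next steps) follow the text; the window size, archive path, and summarizer are stubbed assumptions.

```python
# Sketch of threshold-based context compression: past 85% usage, history
# is collapsed into a structured note and the originals are archived.

ARCHIVE: dict[str, list] = {}

def maybe_compress(history, used_tokens, window=100_000, threshold=0.85):
    if used_tokens / window < threshold:
        return history                               # under budget: no-op
    ARCHIVE["/archive/history.json"] = history       # keep originals on file
    note = {
        "intent": history[0]["content"],             # what the run is about
        "artifacts": ["/archive/history.json"],      # where originals live
        "next_steps": "resume from archived history",
    }
    return [{"role": "system", "content": str(note)}]

history = [{"role": "user", "content": "research agentic AI"}] * 50
compressed = maybe_compress(history, used_tokens=90_000)
```

Archiving rather than discarding is what makes indefinite runs safe: anything summarized away remains retrievable by path.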
Subagents, spawned via the task tool, are clean-context specialists: the main agent delegates work (e.g., to a code-review subagent with its own prompt and tools) and receives only a summary back, keeping the primary context lean.
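The delegation pattern looks roughly like this: the subagent runs with a fresh context, and only its summary re-enters the parent's message list. The review logic is a hypothetical stand-in for a real code-review subagent.

```python
# Sketch of subagent delegation: only the summary returns to the parent.
# The review heuristic here is illustrative, not a real subagent.

def code_review_subagent(code: str) -> str:
    """Run in a clean context; return only a short summary."""
    findings = [f"line {i + 1}: TODO left in code"
                for i, line in enumerate(code.splitlines())
                if "TODO" in line]
    if not findings:
        return "0 issues"
    return f"{len(findings)} issue(s): " + "; ".join(findings)

parent_context = [{"role": "user", "content": "review my patch"}]
summary = code_review_subagent("x = 1\n# TODO: validate input\n")
parent_context.append({"role": "tool", "content": summary})
```

The subagent may have read the whole patch, but the parent only pays for the one-line verdict.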
Build and Persist State Across Sessions
Configure persistent memory with a CompositeBackend (e.g., StoreBackend(InMemoryStore()) for /memories/ paths), loading files such as project conventions across sessions. Example research agent:
from deepagents import create_deep_agent
from tavily import TavilyClient

tavily_client = TavilyClient()  # expects TAVILY_API_KEY in the environment

def internet_search(query: str, max_results: int = 5):
    """Run a web search and return the results."""
    return tavily_client.search(query, max_results=max_results)

agent = create_deep_agent(
    tools=[internet_search],
    system_prompt="Plan with write_todos, search web, write report to files.",
)

result = agent.invoke({"messages": [{"role": "user", "content": "Research agentic AI in 2025."}]})
The agent auto-plans todos, offloads search results, spawns subagents if needed, and synthesizes the report, with zero infrastructure code.
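The CompositeBackend routing can be sketched as path-prefix dispatch: writes under /memories/ go to a persistent store, everything else to per-run scratch space. The class and method names below are illustrative, not the deepagents API.

```python
# Sketch of CompositeBackend-style routing by path prefix.
# Class and method names are illustrative, not the deepagents API.

class CompositeFS:
    def __init__(self):
        self.persistent: dict[str, str] = {}   # survives across sessions
        self.ephemeral: dict[str, str] = {}    # per-run scratch space

    def _backend(self, path: str) -> dict:
        # /memories/ paths route to the persistent store.
        return self.persistent if path.startswith("/memories/") else self.ephemeral

    def write_file(self, path: str, text: str) -> None:
        self._backend(path)[path] = text

    def read_file(self, path: str) -> str:
        return self._backend(path)[path]

fs = CompositeFS()
fs.write_file("/memories/conventions.md", "use snake_case")
fs.write_file("/scratch/notes.txt", "draft")
```

Routing by prefix lets one filesystem interface serve both durable memory and disposable scratch files.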
The deepagents CLI uses the same SDK for interactive coding sessions with persistent memory.
Use for Multi-Step Tasks, Skip for Simple Ones
Ideal for planning-heavy workflows (research, coding, analysis) with large outputs or delegation; it provides planning, subagents, and memory without reinvention. Avoid it for single-tool agents (use create_agent instead) or for custom graphs that need direct topology control.
Deep Agents shifts agent building from plumbing (context strategies, subagent wiring) to task logic, standardizing these patterns as agentic AI matures toward long-horizon reliability.