Sequential Fails Complex Tasks: Switch to Hierarchical for Parallelism and Adaptation

Sequential workflows run agents in a fixed order (e.g., the fraud-detection pipeline in Part 2), suiting linear processes with hard dependencies such as document analysis or validation. They make timing and resource usage predictable, but they block parallelism, cannot adapt to intermediate results, and force full execution even when an early stop would suffice. Total time scales with the number of steps, and there is no dynamic rerouting.

Hierarchical workflows assign a manager agent to decompose goals into subtasks, delegate to specialists (which can run in parallel), monitor progress, synthesize outputs, and adapt the plan. Use them for multi-perspective analysis (e.g., credit assessment), parallel data sources, or open-ended investigative scopes. Drawbacks: more LLM calls (costlier coordination), the need for a manager model with strong reasoning, more complex debugging, and robust context sharing between agents. Production banking favors hierarchical workflows because operations mirror supervisor-led teams that assess complexity before delegating.

CrewAI enables hierarchical execution via Process.hierarchical and a manager_llm (e.g., GPT-4). An explicitly defined manager agent sets allow_delegation=True:

from crewai import Agent

manager = Agent(
    role="Operations Manager",
    goal="Coordinate specialist agents...",
    backstory="You are an experienced operations manager...",
    llm=get_openai_llm(model="gpt-4"),  # project helper returning an OpenAI LLM client
    allow_delegation=True  # lets this agent hand subtasks to other agents
)

Managers handle: (1) Planning (break goals to subtasks), (2) Delegation (match expertise/workload), (3) Monitoring (track blockers/iteration), (4) Synthesis (coherent final output), (5) Adaptation (new tasks/priorities from findings).
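The five responsibilities above can be sketched as a plain-Python control loop. All names here are illustrative stand-ins (this is not CrewAI's API; CrewAI drives this logic through the manager LLM):

```python
# Sketch of a manager's plan → delegate → monitor → synthesize → adapt loop.
# Specialist entries are dicts with a current workload and a callable.

def run_manager(goal, specialists):
    # (1) Planning: break the goal into subtasks.
    subtasks = [f"{goal}: step {i}" for i in range(1, 4)]
    results = {}
    for task in subtasks:
        # (2) Delegation: pick the least-loaded specialist.
        worker = min(specialists, key=lambda s: s["load"])
        worker["load"] += 1
        output = worker["fn"](task)
        # (3) Monitoring: retry once if the specialist hit a blocker.
        if output is None:
            output = worker["fn"](task)
        results[task] = output
        # (5) Adaptation: queue a follow-up subtask when a finding is flagged.
        if output and "flag" in output:
            subtasks.append(f"{goal}: investigate {output}")
    # (4) Synthesis: merge specialist outputs into one coherent answer.
    return " | ".join(str(v) for v in results.values())
```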

Customer Service: 5-Specialist Team Routed by Intake Classification

Builds intake → account → resolution/technical → manager flow. Intake classifies queries into 6 categories with urgency:

  • Authentication (password/login): high → resolution_agent
  • Account inquiry (statement/balance): medium → account_agent
  • Fraud concern: critical → escalate
  • Credit services (loan): medium → credit_specialist
  • Technical issue: high → technical_agent
  • General: low → resolution_agent

Tools simulate banking ops:

  • classify_query: Keyword matching that returns a dict with category, urgency, routing target, and matched keywords.
  • access_customer_account: Fetches mock data (e.g., CUST12345: $15,420.50 balance, transactions, alerts).
  • execute_account_action: Handles reset_password, request_statement, etc., returns success/reference.
  • check_knowledge_base: Returns articles (e.g., password reset: 4 steps; transaction dispute: 60-day window).
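A keyword-based classifier in this style might look like the following sketch. Categories, urgencies, and routing targets come from the list above; the trigger keyword lists are illustrative assumptions:

```python
# Hypothetical sketch of a keyword-based classify_query tool.
# Category → (urgency, routing target, trigger keywords); keywords are illustrative.
ROUTING = {
    "authentication":  ("high",     "resolution_agent",  ["password", "login"]),
    "account_inquiry": ("medium",   "account_agent",     ["statement", "balance"]),
    "fraud_concern":   ("critical", "escalate",          ["fraud", "unauthorized"]),
    "credit_services": ("medium",   "credit_specialist", ["loan", "credit"]),
    "technical_issue": ("high",     "technical_agent",   ["error", "app", "website"]),
}

def classify_query(query: str) -> dict:
    text = query.lower()
    for category, (urgency, route, keywords) in ROUTING.items():
        matched = [k for k in keywords if k in text]
        if matched:
            return {"category": category, "urgency": urgency,
                    "routing": route, "keywords": matched}
    # No keyword hit: low-urgency general query, handled by resolution_agent.
    return {"category": "general", "urgency": "low",
            "routing": "resolution_agent", "keywords": []}
```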

Each agent uses an LLM matched to its task complexity:

Agent      | Role                    | LLM                      | Tools
Intake     | Classify/route          | Llama3.1:8b (local/fast) | classify_query
Account    | Data retrieval          | Claude-3.5-Haiku         | access_customer_account
Resolution | Routine fixes           | Claude-3.5-Haiku         | execute_account_action, check_knowledge_base
Technical  | Complex troubleshooting | Claude-3.5-Sonnet        | check_knowledge_base
Manager    | Coordinate/escalate     | GPT-4                    | None

Tasks chain via context=[prior_tasks]; the manager routes dynamically (e.g., to the technical agent when intake classifies a query as technical). Run via Crew(agents=specialists, tasks=tasks_list, process=Process.hierarchical, manager_llm=...). This cuts costs (cheap models handle simple tasks) while GPT-4 preserves coordination quality.
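The context=[prior_tasks] chaining can be pictured as a simple data flow: each task sees the outputs of the tasks named in its context. A plain-Python sketch of that flow (tasks are hypothetical dicts here, not the CrewAI Task class):

```python
# Sketch of how context=[prior_tasks] feeds earlier outputs into later tasks.
# In CrewAI these are Task objects; the dependency links work the same way.

def run_tasks(tasks):
    outputs = {}
    for task in tasks:
        # Gather the outputs of every task named in this task's context.
        context = [outputs[name] for name in task["context"]]
        outputs[task["name"]] = task["fn"](context)
    return outputs

tasks = [
    {"name": "intake",  "context": [],         "fn": lambda c: "category=technical"},
    {"name": "account", "context": ["intake"], "fn": lambda c: f"account data ({c[0]})"},
    {"name": "resolve", "context": ["intake", "account"],
     "fn": lambda c: "resolution using: " + "; ".join(c)},
]
```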

Credit Risk: 4 Parallel Agents Synthesized by Manager

Applies the same pattern: the manager delegates financial data analysis, industry research, and policy compliance checks to run in parallel, then synthesizes the risk decision. Specialists run independently (faster than sequential), and the plan adapts when the data flags issues (e.g., adding deeper checks). Use this when banking needs multi-angle evaluation; the manager LLM must excel at synthesis to avoid fragmented outputs.
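The parallel-then-synthesize shape can be sketched with a thread pool. The specialist functions below are illustrative stand-ins for delegated agents; in CrewAI the manager handles the delegation:

```python
# Sketch: run independent specialist analyses in parallel, then synthesize.
from concurrent.futures import ThreadPoolExecutor

def analyze_financials(applicant): return f"financials ok for {applicant}"
def research_industry(applicant):  return f"industry stable for {applicant}"
def check_policy(applicant):       return f"policy compliant for {applicant}"

def assess_credit_risk(applicant):
    specialists = [analyze_financials, research_industry, check_policy]
    # Independent runs: wall-clock time ≈ the slowest specialist, not the sum.
    with ThreadPoolExecutor(max_workers=len(specialists)) as pool:
        findings = list(pool.map(lambda fn: fn(applicant), specialists))
    # Manager-style synthesis: merge findings into one risk decision.
    clear = all(any(w in f for w in ("ok", "stable", "compliant")) for f in findings)
    decision = "approve" if clear else "review"
    return {"applicant": applicant, "findings": findings, "decision": decision}
```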