FIM One’s DAG execution engine was designed with deliberate restraint. Several patterns common in multi-agent frameworks were evaluated and intentionally excluded. This page documents those decisions and the reasoning behind them.
## Handoff — Not Needed
**Pattern:** Explicit task handoff between agents, where one agent completes its work and passes control to the next.

**Why excluded:** DAG linear chains already cover sequential task handoff naturally. When step B depends on step A, the executor waits for A to complete, then passes A’s result into B’s context via the dependency mechanism. This is structurally equivalent to a handoff — but without requiring a separate abstraction layer. Adding an explicit handoff protocol would duplicate what step dependencies already provide, with no additional expressiveness.

## Backflow — Not Needed
**Pattern:** Downstream steps feeding results back to upstream steps for re-evaluation (also called backpropagation of results).

**Why excluded:** The PlanAnalyzer re-plan loop already covers this use case. After all steps complete, the analyzer evaluates whether the original goal was achieved. If not (and confidence is below the stop threshold), it triggers a full re-plan with context from the previous round’s results — including what went wrong and what was learned. This is a coarser-grained but more robust approach than per-step backflow. Per-step backflow creates complex feedback loops that are hard to debug and can lead to infinite cycles. The re-plan loop achieves the same corrective effect with a cleaner execution model: plan, execute, evaluate, re-plan if needed.

## Teams (Agent Inter-Communication) — Not Needed
**Pattern:** Multiple specialized agents communicating with each other during execution, negotiating subtask ownership, or collaborating on shared state.

**Why excluded:** DAG dependency chains are sufficient for data-dependency tasks. When step C needs results from both step A and step B, the dependency edges express this directly. Step C receives the outputs of A and B in its context — no inter-agent messaging protocol required. The DAG model is simpler and more predictable than agent-to-agent communication. Agents don’t need to negotiate; the planner determines the execution structure upfront, and the executor enforces data flow through dependency resolution. This avoids the coordination overhead and non-determinism that multi-agent messaging introduces.

## agent_hint / role_hint — Not Needed
**Pattern:** An explicit hint telling the executor which type of agent or persona should handle each step (e.g., “use the researcher agent” or “use the coder agent”).

**Why excluded:** The combination of three existing fields already provides sufficient guidance:

- `task` — The natural-language description of what the step should accomplish. A well-written task implicitly signals the required expertise.
- `tool_hint` — Suggests which tools the step should use (e.g., `web_search`, `python_exec`). This constrains the execution approach more precisely than a vague role label.
- `model_hint` — Controls whether the step uses the fast or reasoning model tier, which is the most impactful execution decision.
An explicit `agent_hint` or `role_hint` field would be redundant with these three fields and would introduce another axis of configuration that the planner LLM must get right. Keeping the hint surface small reduces planning errors.
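The dependency mechanism described in the sections above can be sketched in a few lines. This is an illustrative sketch only: `PlanStep`, `execute_plan`, and the `deps` field are invented for this example; of the names below, only `task`, `tool_hint`, and `model_hint` correspond to fields actually named on this page.

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class PlanStep:
    id: str
    task: str                                      # natural-language description
    deps: list[str] = field(default_factory=list)  # ids of upstream steps
    tool_hint: Optional[str] = None                # e.g. "web_search", "python_exec"
    model_hint: str = "fast"                       # "fast" or "reasoning"

def execute_plan(steps: list[PlanStep],
                 run_step: Callable[[PlanStep, dict], str]) -> dict:
    """Run steps in dependency order, injecting upstream results as context."""
    results: dict = {}
    remaining = {s.id: s for s in steps}
    while remaining:
        # Any step whose dependencies have all completed is ready to run.
        ready = [s for s in remaining.values()
                 if all(d in results for d in s.deps)]
        if not ready:
            raise ValueError("cycle or missing dependency in plan")
        for step in ready:
            # The implicit "handoff": upstream outputs become this step's context.
            context = {d: results[d] for d in step.deps}
            results[step.id] = run_step(step, context)
            del remaining[step.id]
    return results

# A linear chain A -> B is a handoff; the fan-in C <- {A, B} is the case
# that inter-agent "teams" messaging would otherwise have to coordinate.
plan = [
    PlanStep("A", "fetch raw data", tool_hint="web_search"),
    PlanStep("B", "analyze the data", deps=["A"], tool_hint="python_exec"),
    PlanStep("C", "write the report", deps=["A", "B"], model_hint="reasoning"),
]
results = execute_plan(plan, lambda s, ctx: f"{s.id} done with {sorted(ctx)}")
```

The `run_step` stub stands in for an LLM call; the point is that dependency resolution alone delivers both sequential handoff and fan-in coordination, with no extra protocol.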
## Deep Multi-Agent Orchestration — Strategically Deferred
The “Teams” exclusion above is an engineering argument: DAG edges already express data dependencies. This section adds a strategic argument: the industry trend of model capability absorbing orchestration complexity.

### Why models are eating the orchestration layer
Multi-agent hierarchies emerged as a workaround for model limitations — if one model couldn’t handle a complex task, you decomposed it across specialized agents. Each generation of frontier models erodes this rationale:

| Model capability | What it replaces |
|---|---|
| Expanding context windows (200K → 1M tokens) | Splitting context across agents to fit within limits |
| Extended thinking / chain-of-thought | Explicit Planner → Executor agent pipelines |
| Native agentic loops (Claude Code, Codex) | Framework-orchestrated multi-step execution |
| Native multi-agent protocols (Google A2A, OpenAI Swarm) | Framework-level agent coordination |
### The cost of depth
Even when models lack capability, deep agent hierarchies carry compounding costs:

- Token multiplication — each nesting level re-injects system prompts and context. Three levels deep, total token spend can be 5-10x a single-agent approach.
- Latency stacking — each level is a full LLM round-trip. Users wait for the slowest chain.
- Lossy information compression — delegated agents return summaries, not raw data. Each level loses fidelity.
- Debug opacity — “which level misunderstood the task?” is nearly impossible to diagnose at depth > 2.
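The token-multiplication point lends itself to a back-of-envelope calculation. All numbers below (prompt sizes, fan-out) are illustrative assumptions for this sketch, not measurements of any real system:

```python
def total_prompt_tokens(depth: int, system: int = 2_000,
                        context: int = 6_000, fanout: int = 2) -> int:
    """Prompt tokens consumed when every nesting level re-injects the
    system prompt and forwards context to `fanout` sub-agents."""
    total, calls = 0, 1
    for _ in range(depth):
        total += calls * (system + context)  # every live agent pays full freight
        calls *= fanout                      # the next level multiplies the calls
    return total

single = total_prompt_tokens(depth=1)  # one agent, one call
deep = total_prompt_tokens(depth=3)    # three nesting levels, fan-out of 2
print(deep / single)                   # 7.0 under these assumptions
```

Under these assumptions the three-level hierarchy spends 7x the tokens of a single agent, consistent with the 5-10x range above; a wider fan-out or a fourth level pushes the multiplier well past 10x.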
### FIM One’s position
One-level delegation via `CallAgentTool` is the sweet spot: “I’m a generalist, but this sub-task needs the SQL expert.” This covers the vast majority of real-world delegation needs without the compounding costs of deeper hierarchies.
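As a rough illustration of that one-level shape — the registry, function name, and signature below are invented for this sketch, and FIM One’s actual `CallAgentTool` interface may differ:

```python
# Hypothetical specialist registry; the names are invented for illustration.
SPECIALISTS = {
    "sql_expert": lambda task: f"[sql_expert] {task}",
    "researcher": lambda task: f"[researcher] {task}",
}

def call_agent(agent: str, task: str) -> str:
    """One hop down and straight back: the specialist cannot delegate
    further, so the hierarchy never exceeds one level."""
    try:
        specialist = SPECIALISTS[agent]
    except KeyError:
        raise KeyError(f"unknown specialist: {agent}")
    return specialist(task)

# The generalist delegates exactly once, then resumes with the result.
answer = call_agent("sql_expert", "optimize the slow orders query")
```

Because the specialist returns directly to the caller, the token, latency, and debugging costs listed above are paid once rather than compounded per level.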
Deep multi-agent orchestration is deferred indefinitely — not because it’s hard to build, but because frontier model trajectory suggests it will become unnecessary. FIM One’s value lies in the tool supply layer (Connectors, MCP, Governance), not in orchestration depth that models are rapidly internalizing.