
General

FIM One works with any OpenAI-compatible LLM provider. This includes:
  • Commercial APIs — OpenAI, DeepSeek, Anthropic (Claude), Alibaba Qwen, Google Gemini, and any provider that exposes a /v1/chat/completions endpoint.
  • Local/self-hosted — Ollama, vLLM, LocalAI, LM Studio, and any other runtime that serves the OpenAI-compatible API format.
You configure your provider via three environment variables in .env:
LLM_API_KEY=sk-your-key
LLM_BASE_URL=https://api.openai.com/v1   # change for other providers
LLM_MODEL=gpt-4o                         # change for other models
See the Quick Start guide for provider-specific examples.
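Concretely, those three variables map onto a standard /v1/chat/completions request. Here is a minimal sketch using only the Python standard library (the helper name `chat_request` is illustrative, not part of FIM One):

```python
import json
import os

def chat_request(messages, model=None):
    """Build an OpenAI-compatible chat completion request from the
    same three environment variables FIM One reads."""
    base = os.environ.get("LLM_BASE_URL", "https://api.openai.com/v1").rstrip("/")
    return {
        "url": base + "/chat/completions",
        "headers": {
            "Authorization": "Bearer " + os.environ.get("LLM_API_KEY", ""),
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model": model or os.environ.get("LLM_MODEL", "gpt-4o"),
            "messages": messages,
        }),
    }
```

Because every supported provider speaks this same request shape, switching providers changes only the three environment variables.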
Yes. FIM One connects to any endpoint that implements the OpenAI-compatible API format. Popular self-hosted options include:
| Runtime | Base URL | Example model |
| --- | --- | --- |
| Ollama | http://localhost:11434/v1 | qwen2.5:14b |
| vLLM | http://localhost:8000/v1 | Qwen/Qwen2.5-72B-Instruct |
| LocalAI | http://localhost:8080/v1 | llama3 |
| LM Studio | http://localhost:1234/v1 | (whatever you load) |
Set LLM_API_KEY to any non-empty string (e.g., ollama) when the provider does not require authentication. All agent features — ReAct reasoning, DAG planning, tool calling — work identically regardless of whether the model is local or cloud-hosted.
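For example, a `.env` pointing at a local Ollama instance (values taken from the table above):

```
LLM_API_KEY=ollama                      # any non-empty string works
LLM_BASE_URL=http://localhost:11434/v1
LLM_MODEL=qwen2.5:14b
```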
FIM One is released under a Source Available License. This is not an OSI-approved open source license, but it provides broad freedoms for most use cases.
Permitted:
  • Internal use within your organization
  • Modification and custom development
  • Distribution with the license intact
  • Embedding in your own (non-competing) applications
Restricted:
  • Multi-tenant SaaS offerings
  • Competing agent platforms
  • White-labeling or removing branding
For commercial licensing inquiries, please open an issue on GitHub. See the full LICENSE for complete terms.
Do NOT open a public GitHub issue for sensitive vulnerabilities.
  • Sensitive reports (credential exposure, auth bypass, injection, etc.) — use GitHub Security Advisories or email security@fim.ai.
  • Low-severity issues (missing headers, informational disclosures) — open a regular GitHub issue with the security label.
All reports are acknowledged within 48 hours (business days). Critical issues are patched as soon as possible; others ship with the next release.
See the full Security Policy for scope, response timelines, and self-hosting best practices.

Deployment

Minimum requirements:
| Resource | Requirement |
| --- | --- |
| Python | 3.11+ |
| RAM | 2 GB minimum |
| Disk | 1 GB free (plus space for uploaded documents and vector store) |
| Node.js | 18+ (for local development) |
Recommended for production:
| Resource | Recommendation |
| --- | --- |
| RAM | 4 GB+ (especially if running embedding models locally) |
| CPU | 2+ cores |
| Database | PostgreSQL for multi-worker deployments |
Docker alternative: Docker 20+ and Docker Compose v2. No local Python or Node.js required — everything is built inside the container.
Yes. Both Docker and local development work on ARM architectures, including Apple Silicon (M1/M2/M3/M4) Macs. The Docker image builds natively for linux/arm64, and all Python and Node.js dependencies have ARM-compatible wheels or fallbacks.
FIM One uses databases in two distinct ways:
1. Internal database (FIM One’s own data):
  • SQLite — zero-config default, great for development and single-worker deployments.
  • PostgreSQL — recommended for production, required for multi-worker setups (WORKERS > 1).
2. Connector targets (systems you connect to): FIM One can connect to external databases as data sources via Database Connectors:
| Database | Status |
| --- | --- |
| PostgreSQL | Supported |
| MySQL | Supported |
| Oracle | Supported |
| SQL Server | Supported |
| DM (Dameng) | Supported |
| KingbaseES | Supported |
| GBase | Supported |
| Highgo | Supported |
Each database connector auto-generates three tools: list_tables, describe_table, and query. Schema introspection, AI-powered annotation, and read-only query execution are included by default.
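As a rough sketch of what a read-only guarantee can look like, the check below rejects anything other than a single SELECT (or WITH ... SELECT) statement. This is illustrative only; the function name and exact rules are assumptions, not FIM One's actual implementation:

```python
import re

def ensure_readonly(sql: str) -> str:
    """Allow a single SELECT/WITH statement and nothing else.
    Hypothetical sketch of a read-only query guard."""
    stmt = sql.strip().rstrip(";").strip()
    if ";" in stmt:
        raise ValueError("multiple statements are not allowed")
    if not re.match(r"(?is)^(select|with)\b", stmt):
        raise ValueError("only read-only queries are allowed")
    return stmt
```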
Yes. FIM One is built for multi-tenant deployments from the ground up:
  • JWT authentication — token-based auth with per-user session isolation.
  • Organization isolation — resources (agents, connectors, knowledge bases) are scoped to organizations.
  • Role-based access — admin and user roles with appropriate permission boundaries.
  • Resource ownership — conversations and configurations are isolated per user.
For multi-user production deployments, use PostgreSQL as the internal database and set WORKERS to match your expected concurrency.
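To illustrate the organization-scoping idea, the snippet below reads an organization claim out of a JWT payload. Purely illustrative: the claim name `org_id` is an assumption, and a real server must verify the token's signature before trusting any claim.

```python
import base64
import json

def org_claim(token: str):
    """Decode the (unverified) JWT payload and return its org claim.
    Signature verification is deliberately omitted in this sketch;
    never skip it in production."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    return claims.get("org_id")
```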

Features

FIM One offers two execution engines, each suited to different task types:
| | ReAct (Standard) | DAG (Planner) |
| --- | --- | --- |
| How it works | Single reasoning loop: Reason, Act, Observe, repeat | LLM decomposes the goal into a dependency graph; independent steps run in parallel |
| Best for | Focused queries, single-system lookups, conversational tasks | Multi-step tasks, cross-system orchestration, parallel data gathering |
| Concurrency | Sequential (one tool at a time) | Concurrent (independent steps run simultaneously via asyncio) |
| Re-planning | N/A | Up to 3 rounds of automatic re-planning if goals are not met |
Auto mode (the default) uses a fast LLM classifier to analyze each incoming query and route it to the optimal engine automatically. You can also manually select the mode via the three-way toggle in the chat UI (Auto / Standard / Planner).
For a deep dive, see Execution Modes.
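The Planner's concurrency model can be sketched in a few lines of asyncio: repeatedly gather every step whose dependencies are already satisfied. A simplified illustration, not FIM One's actual scheduler:

```python
import asyncio

async def run_dag(steps, deps):
    """steps: name -> zero-arg coroutine factory; deps: name -> set of
    prerequisite names. Independent steps in each wave run concurrently."""
    done, results = set(), {}
    while len(done) < len(steps):
        ready = [n for n in steps if n not in done and deps.get(n, set()) <= done]
        if not ready:
            raise ValueError("plan contains a dependency cycle")
        outputs = await asyncio.gather(*(steps[n]() for n in ready))
        results.update(zip(ready, outputs))
        done.update(ready)
    return results
```

Here `fetch_a` and `fetch_b` would run in the same wave, with `merge` waiting on both, which mirrors the "independent steps run simultaneously" behavior described above.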
FIM One provides three ways to create connectors — no Python code required:
1. Import an OpenAPI spec — Upload a YAML or JSON file, or a URL pointing to an OpenAPI specification. FIM One parses the spec and generates connectors with all actions automatically.
2. AI chat builder — Describe the API you want to connect in natural language. The AI generates and iterates on the connector configuration in conversation, using 10 specialized builder tools for settings, actions, testing, and agent wiring.
3. MCP protocol — Connect any MCP (Model Context Protocol) server directly. The third-party MCP ecosystem works out of the box.
For database connectors, configure the connection details (host, port, credentials) and FIM One auto-generates schema introspection and query tools.
See the AI Builder documentation and the Extension Guide for step-by-step instructions.
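For intuition about the OpenAPI import path, here is a toy version of the flattening step: walk `spec["paths"]` and emit one action per operation. A simplified sketch under stated assumptions; the real importer also handles parameters, schemas, and authentication.

```python
def actions_from_spec(spec: dict) -> list:
    """Flatten an OpenAPI document into connector actions (toy version)."""
    http_methods = {"get", "post", "put", "patch", "delete"}
    actions = []
    for path, ops in spec.get("paths", {}).items():
        for method, op in ops.items():
            if method.lower() in http_methods:
                actions.append({
                    "name": op.get("operationId") or f"{method}_{path}",
                    "method": method.upper(),
                    "path": path,
                    "summary": op.get("summary", ""),
                })
    return actions
```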
Yes. FIM One’s Copilot mode is specifically designed for embedding into host systems. You can integrate it via:
  • iframe — Embed the FIM One chat interface directly into any web page.
  • Widget — A lightweight chat widget that overlays on your existing UI.
  • API — Use the FastAPI backend directly for fully custom integrations.
In Copilot mode, the AI works alongside users in their familiar interface — querying data, generating reports, and orchestrating actions without forcing users to switch applications.
See Execution Modes for configuration details on Standalone, Copilot, and Hub delivery modes.
FIM One supports 6 languages (English, Chinese, Japanese, Korean, German, French) with a fully automated translation pipeline:
  1. Only edit English source files — UI strings in frontend/messages/en/*.json, documentation in docs/*.mdx (root level), and README.md.
  2. Auto-translate on commit — A pre-commit hook detects changes to English files and translates them via the project’s Fast LLM. Translations are incremental: only new, modified, or deleted content is processed.
  3. Never manually edit translated files — Files in messages/zh/, messages/ja/, docs/zh/, docs/ja/, etc. are all auto-generated and will be overwritten.
To set up the translation hook after cloning:
bash scripts/setup-hooks.sh
To force a full retranslation:
uv run scripts/translate.py --all

Contributing

FIM One welcomes contributions of all kinds — code, documentation, translations, bug reports, and feature ideas.
Getting started:
  1. Read the Contributing Guide for setup instructions, coding conventions, and the PR process.
  2. Browse Good First Issues for curated tasks suitable for newcomers.
  3. Check Open Issues for bugs and feature requests.
Pioneer Program: The first 100 contributors who get a PR merged are recognized as Founding Contributors with permanent credits, a profile badge, and priority issue support.
| Layer | Technology |
| --- | --- |
| Backend | Python 3.11+, FastAPI, SQLAlchemy, Alembic, asyncio |
| Frontend | Next.js, React, TypeScript, Tailwind CSS, shadcn/ui |
| LLM integration | OpenAI-compatible API (provider-agnostic) |
| Vector search | LanceDB + Jina embeddings |
| Database | SQLite (dev) / PostgreSQL (production) |
| Package managers | uv (Python), pnpm (Node.js) |
| Deployment | Docker Compose, single-process script |
The codebase follows an async-first, protocol-first architecture with zero vendor lock-in.
  • Documentation — You are here. Browse the Guides, Concepts, and Configuration sections.
  • Discord — Join the FIM One Discord for real-time help and community discussions.
  • GitHub Issues — File bug reports and feature requests on GitHub.
  • Twitter/X — Follow @FIM_One for updates and announcements.