General
What LLM providers are supported?
FIM One works with any OpenAI-compatible LLM provider. This includes:

- Commercial APIs — OpenAI, DeepSeek, Anthropic (Claude), Alibaba Qwen, Google Gemini, and any provider that exposes a `/v1/chat/completions` endpoint.
- Local/self-hosted — Ollama, vLLM, LocalAI, LM Studio, and any other runtime that serves the OpenAI-compatible API format.

See the Quick Start guide for provider-specific examples.
Can I use local or self-hosted models?
Yes. FIM One connects to any endpoint that implements the OpenAI-compatible API format. Popular self-hosted options include:
| Runtime | Base URL | Example model |
|---|---|---|
| Ollama | http://localhost:11434/v1 | qwen2.5:14b |
| vLLM | http://localhost:8000/v1 | Qwen/Qwen2.5-72B-Instruct |
| LocalAI | http://localhost:8080/v1 | llama3 |
| LM Studio | http://localhost:1234/v1 | (whatever you load) |

Set `LLM_API_KEY` in your `.env` to any non-empty string (e.g., `ollama`) when the provider does not require authentication. All agent features — ReAct reasoning, DAG planning, tool calling — work identically regardless of whether the model is local or cloud-hosted.
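As a concrete sketch of what "OpenAI-compatible" means in practice, the request below targets the Ollama row of the table above using only the Python standard library. This is a hypothetical standalone example, not FIM One's internal code; swap the base URL and model for your runtime.

```python
import json
import urllib.request

# Build the OpenAI-compatible chat completions request that any of the
# runtimes above will accept. "ollama" is a dummy key: any non-empty
# string satisfies providers that skip authentication.
payload = {
    "model": "qwen2.5:14b",
    "messages": [{"role": "user", "content": "Say hello"}],
}
req = urllib.request.Request(
    "http://localhost:11434/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": "Bearer ollama",
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(req) would send it to a running Ollama server.
```

Because every provider accepts this same shape, switching from a cloud API to a local runtime is a matter of changing the URL, key, and model name.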
Is FIM One open source?
FIM One is released under a Source Available License. This is not an OSI-approved open source license, but it provides broad freedoms for most use cases:

Permitted:
- Internal use within your organization
- Modification and custom development
- Distribution with the license intact
- Embedding in your own (non-competing) applications
Not permitted:
- Multi-tenant SaaS offerings
- Competing agent platforms
- White-labeling or removing branding
How do I report security vulnerabilities?
Do NOT open a public GitHub issue for sensitive vulnerabilities.
- Sensitive reports (credential exposure, auth bypass, injection, etc.) — use GitHub Security Advisories or email security@fim.ai.
- Low-severity issues (missing headers, informational disclosures) — open a regular GitHub issue with the `security` label.
Deployment
What are the system requirements?
Minimum requirements:

| Resource | Requirement |
|---|---|
| Python | 3.11+ |
| RAM | 2 GB minimum |
| Disk | 1 GB free (plus space for uploaded documents and vector store) |
| Node.js | 18+ (for local development) |

Recommended for production:

| Resource | Recommendation |
|---|---|
| RAM | 4 GB+ (especially if running embedding models locally) |
| CPU | 2+ cores |
| Database | PostgreSQL for multi-worker deployments |

Docker alternative: Docker 20+ and Docker Compose v2. No local Python or Node.js required — everything is built inside the container.
Does FIM One work on ARM / Apple Silicon?
Yes. Both Docker and local development work on ARM architectures, including Apple Silicon (M1/M2/M3/M4) Macs. The Docker image builds natively for `linux/arm64`, and all Python and Node.js dependencies have ARM-compatible wheels or fallbacks.
What databases are supported?
FIM One uses databases in two distinct ways:

1. Internal database (FIM One’s own data):

- SQLite — zero-config default, great for development and single-worker deployments.
- PostgreSQL — recommended for production, required for multi-worker setups (`WORKERS > 1`).

2. External databases (data sources queried through connectors):

| Database | Status |
|---|---|
| PostgreSQL | Supported |
| MySQL | Supported |
| Oracle | Supported |
| SQL Server | Supported |
| DM (Dameng) | Supported |
| KingbaseES | Supported |
| GBase | Supported |
| Highgo | Supported |

Each database connector auto-generates three tools: `list_tables`, `describe_table`, and `query`. Schema introspection, AI-powered annotation, and read-only query execution are included by default.
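To make the read-only guarantee concrete, a guard like the one below is the usual approach: accept plain SELECT (or WITH) statements and reject anything that writes. This is an illustrative sketch of the idea, not FIM One's actual implementation.

```python
import re

# Reject statements containing write/DDL keywords anywhere, so that
# stacked statements like "SELECT 1; DELETE ..." are also refused.
WRITE_KEYWORDS = re.compile(
    r"\b(insert|update|delete|drop|alter|create|truncate|grant)\b",
    re.IGNORECASE,
)

def is_read_only(sql: str) -> bool:
    """Return True only for plain SELECT/WITH queries (toy check)."""
    stmt = sql.strip().rstrip(";")
    return stmt.lower().startswith(("select", "with")) and not WRITE_KEYWORDS.search(stmt)

# is_read_only("SELECT * FROM users")   -> True
# is_read_only("DROP TABLE users")      -> False
```

A keyword blocklist is simplistic (a production guard would parse the SQL), but it shows why the `query` tool can be exposed to an LLM without risking data mutation.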
Can multiple users share the same instance?
Features
What is the difference between ReAct and DAG mode?
FIM One offers two execution engines, each suited to different task types:
| | ReAct (Standard) | DAG (Planner) |
|---|---|---|
| How it works | Single reasoning loop: Reason, Act, Observe, repeat | LLM decomposes the goal into a dependency graph; independent steps run in parallel |
| Best for | Focused queries, single-system lookups, conversational tasks | Multi-step tasks, cross-system orchestration, parallel data gathering |
| Concurrency | Sequential (one tool at a time) | Concurrent (independent steps run simultaneously via asyncio) |
| Re-planning | N/A | Up to 3 rounds of automatic re-planning if goals are not met |

Auto mode (the default) uses a fast LLM classifier to analyze each incoming query and routes it to the optimal engine automatically. You can also manually select the mode via the three-way toggle in the chat UI (Auto / Standard / Planner).

For a deep dive, see Execution Modes.
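The concurrency difference between the two engines can be pictured with plain asyncio. The steps here are toy stand-ins for tool calls, not FIM One's planner code:

```python
import asyncio

async def step(name: str, seconds: float) -> str:
    # Stand-in for one tool call in a plan.
    await asyncio.sleep(seconds)
    return name

async def react_style() -> list[str]:
    # ReAct: one tool at a time, strictly sequential.
    return [await step("a", 0.05), await step("b", 0.05)]

async def dag_style() -> list[str]:
    # DAG: independent steps dispatched concurrently via asyncio.gather.
    return list(await asyncio.gather(step("a", 0.05), step("b", 0.05)))

print(asyncio.run(react_style()), asyncio.run(dag_style()))
```

With two independent 0.05 s steps, the sequential version takes roughly twice as long as the gathered one, which is exactly the win the Planner gets on parallel data gathering.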
How do I add a new connector?
FIM One provides three ways to create connectors — no Python code required:

1. Import an OpenAPI spec — Upload a YAML or JSON file, or a URL pointing to an OpenAPI specification. FIM One parses the spec and generates connectors with all actions automatically.
2. AI chat builder — Describe the API you want to connect in natural language. The AI generates and iterates on the connector configuration in conversation, using 10 specialized builder tools for settings, actions, testing, and agent wiring.
3. MCP protocol — Connect any MCP (Model Context Protocol) server directly. The third-party MCP ecosystem works out of the box.

For database connectors, configure the connection details (host, port, credentials) and FIM One auto-generates schema introspection and query tools.

See the AI Builder documentation and the Extension Guide for step-by-step instructions.
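A rough illustration of the OpenAPI route: each operation in the spec becomes one action. The spec fields below are standard OpenAPI 3.x, but the extraction logic and naming fallback are assumptions for the sketch, not FIM One's exact output.

```python
# Minimal hand-written OpenAPI fragment with three operations.
spec = {
    "openapi": "3.0.0",
    "paths": {
        "/pets": {
            "get": {"operationId": "listPets"},
            "post": {"operationId": "createPet"},
        },
        "/pets/{id}": {"get": {"operationId": "getPet"}},
    },
}

def extract_actions(spec: dict) -> list[str]:
    """Collect one action name per operation, preferring operationId."""
    actions = []
    for path, methods in spec["paths"].items():
        for method, op in methods.items():
            # Hypothetical fallback name when operationId is absent.
            actions.append(op.get("operationId", f"{method}_{path}"))
    return sorted(actions)

print(extract_actions(spec))  # -> ['createPet', 'getPet', 'listPets']
```

This is why a well-annotated spec (good `operationId` and descriptions) translates directly into better-named, better-documented agent tools.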
Can I embed FIM One into my existing system?
Yes. FIM One’s Copilot mode is specifically designed for embedding into host systems. You can integrate it via:
- iframe — Embed the FIM One chat interface directly into any web page.
- Widget — A lightweight chat widget that overlays on your existing UI.
- API — Use the FastAPI backend directly for fully custom integrations.
How does the translation system work?
FIM One supports 6 languages (English, Chinese, Japanese, Korean, German, French) with a fully automated translation pipeline:

- Only edit English source files — UI strings in `frontend/messages/en/*.json`, documentation in `docs/*.mdx` (root level), and `README.md`.
- Auto-translate on commit — A pre-commit hook detects changes to English files and translates them via the project’s Fast LLM. Translations are incremental: only new, modified, or deleted content is processed.
- Never manually edit translated files — Files in `messages/zh/`, `messages/ja/`, `docs/zh/`, `docs/ja/`, etc. are all auto-generated and will be overwritten.

To force a full retranslation:
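The incremental step can be pictured as a key-level diff between the current English messages and the English snapshot from the last translation run (a hypothetical sketch of the idea; FIM One's hook may track changes differently):

```python
def diff_keys(current_en: dict, snapshot_en: dict) -> tuple[set, set]:
    """Find which message keys need (re)translation or deletion."""
    # Keys whose English text is new or changed since the last run.
    changed = {k for k, v in current_en.items() if snapshot_en.get(k) != v}
    # Keys removed from the English source (delete their translations).
    removed = set(snapshot_en) - set(current_en)
    return changed, removed

current = {"title": "Home", "save": "Save", "new": "New item"}
snapshot = {"title": "Home", "save": "Store", "old": "Old item"}
print(diff_keys(current, snapshot))
```

Only the `changed` set is sent to the LLM, which is what keeps commit-time translation fast even for large message catalogs.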
Contributing
How can I contribute to FIM One?
FIM One welcomes contributions of all kinds — code, documentation, translations, bug reports, and feature ideas.

Getting started:
- Read the Contributing Guide for setup instructions, coding conventions, and the PR process.
- Browse Good First Issues for curated tasks suitable for newcomers.
- Check Open Issues for bugs and feature requests.
What is the tech stack?
| Layer | Technology |
|---|---|
| Backend | Python 3.11+, FastAPI, SQLAlchemy, Alembic, asyncio |
| Frontend | Next.js, React, TypeScript, Tailwind CSS, shadcn/ui |
| LLM integration | OpenAI-compatible API (provider-agnostic) |
| Vector search | LanceDB + Jina embeddings |
| Database | SQLite (dev) / PostgreSQL (production) |
| Package managers | uv (Python), pnpm (Node.js) |
| Deployment | Docker Compose, single-process script |
Where can I get help?
- Documentation — You are here. Browse the Guides, Concepts, and Configuration sections.
- Discord — Join the FIM One Discord for real-time help and community discussions.
- GitHub Issues — File bug reports and feature requests on GitHub.
- Twitter/X — Follow @FIM_One for updates and announcements.