☁️ Option 0: Cloud (Zero Setup) Don’t want to self-host? Try FIM One instantly at cloud.fim.ai — no Docker, no API keys, no configuration. Sign in and you’re ready to go. Early access.
Option A: Docker

No local Python or Node.js required — everything is built inside the container.
git clone https://github.com/fim-ai/fim-one.git
cd fim-one

# Configure — only LLM_API_KEY is required
cp example.env .env
# Edit .env: set LLM_API_KEY (and optionally LLM_BASE_URL, LLM_MODEL)

# Build and run (first time, or after pulling new code)
docker compose up --build -d
Open http://localhost:3000 — on first launch you’ll be guided through creating an admin account. That’s it. After the initial build, subsequent starts only need:
docker compose up -d          # start (skip rebuild if image unchanged)
docker compose down           # stop
docker compose logs -f        # view logs
Data is persisted in Docker named volumes (fim-data, fim-uploads) and survives container restarts. Note: Docker mode does not support hot reload. Code changes require rebuilding the image (docker compose up --build -d). For active development with live reload, use Option B below.
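To back up that data, one approach (a sketch, assuming the default volume names fim-data and fim-uploads mentioned above) is to archive each volume into the current directory:

```shell
# Archive each named volume to a tarball in the current directory.
# Stop the stack first so files are not written mid-backup.
docker compose down
for vol in fim-data fim-uploads; do
  docker run --rm -v "$vol":/data -v "$PWD":/backup alpine \
    tar czf "/backup/$vol.tar.gz" -C /data .
done
docker compose up -d
```

Restoring works the same way in reverse: mount the (empty) volume and extract the tarball into /data.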

Option B: Local Development

Prerequisites: Python 3.11+, uv, Node.js 18+, pnpm.
git clone https://github.com/fim-ai/fim-one.git
cd fim-one

cp example.env .env
# Edit .env: set LLM_API_KEY

# Install
uv sync --all-extras
cd frontend && pnpm install && cd ..

# Launch in dev mode (hot reload)
./start.sh dev
| Command | What starts | URL |
| --- | --- | --- |
| ./start.sh | Next.js + FastAPI | http://localhost:3000 (UI) + :8000 (API) |
| ./start.sh dev | Same, with hot reload (Python --reload + Next.js HMR) | Same |
| ./start.sh dev:api | API only, dev mode (--reload) | http://localhost:8000/api |
| ./start.sh dev:ui | Next.js only, dev mode (HMR) | http://localhost:3000 |
| ./start.sh api | FastAPI only (headless, for integration or testing) | http://localhost:8000/api |
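In scripts (CI, provisioning) you may want to block until the servers actually accept requests. A small sketch, assuming the default ports from the table above; the /api path is also taken from that table, so point the check at a different endpoint if your build serves something else there:

```shell
# Poll the UI and API until they respond (default ports).
for url in http://localhost:3000 http://localhost:8000/api; do
  until curl -fsS -o /dev/null "$url"; do
    echo "waiting for $url ..."
    sleep 2
  done
  echo "$url is up"
done
```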

Configuration

FIM One works with any OpenAI-compatible LLM provider — OpenAI, DeepSeek, Anthropic, Qwen, Ollama, vLLM, and more.
| Provider | LLM_API_KEY | LLM_BASE_URL | LLM_MODEL |
| --- | --- | --- | --- |
| OpenAI | sk-... | (default) | gpt-4o |
| DeepSeek | sk-... | https://api.deepseek.com/v1 | deepseek-chat |
| Anthropic | sk-ant-... | https://api.anthropic.com/v1 | claude-sonnet-4-6 |
| Ollama (local) | ollama | http://localhost:11434/v1 | qwen2.5:14b |
Optionally, a Jina AI key (JINA_API_KEY) unlocks web search/fetch, embeddings, and the full RAG pipeline; a free tier is available. Minimal .env:
LLM_API_KEY=sk-your-key
# LLM_BASE_URL=https://api.openai.com/v1   # default — change for other providers
# LLM_MODEL=gpt-4o                         # default — change for other models

JINA_API_KEY=jina_...                       # unlocks web tools + RAG
For a complete list of all configuration options, see the Environment Variables reference.
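Before launching, you can sanity-check the key and base URL directly: most OpenAI-compatible providers expose a GET /models endpoint. A sketch that reads the values from .env (it assumes the variable names shown above):

```shell
# List available models from the configured provider.
# Fails fast if the key or base URL is wrong.
set -a; . ./.env; set +a
curl -fsS "${LLM_BASE_URL:-https://api.openai.com/v1}/models" \
  -H "Authorization: Bearer ${LLM_API_KEY}"
```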

Production Deployment

docker compose up -d brings up everything you need — no manual service configuration required:
| Service | Purpose | Configured by |
| --- | --- | --- |
| fim-one | API + Frontend | .env (your LLM keys, etc.) |
| Redis | Cross-worker interrupt relay | Auto-configured by compose |
docker compose up --build -d   # first time / after code changes
docker compose up -d           # subsequent starts
docker compose logs -f         # view logs
docker compose down            # stop all services
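A typical upgrade cycle builds on the commands above; data in the named volumes survives the rebuild:

```shell
# Upgrade to the latest code and rebuild the image.
git pull
docker compose up --build -d
docker compose logs -f --tail=50   # watch for a clean start
```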

Scaling with Workers

By default, the API runs with a single worker process. To handle more concurrent users, increase workers via .env:
WORKERS=4   # number of Uvicorn worker processes
Multi-worker requirements:
  • PostgreSQL — SQLite is single-writer and does not support concurrent writes. Set DATABASE_URL to a PostgreSQL connection string.
  • Redis — already included in Docker Compose (auto-configured). Handles cross-worker interrupt/inject relay.
With WORKERS=1 (default), no Redis or PostgreSQL is needed — SQLite works fine.
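Putting it together, a sketch of a multi-worker setup with a local PostgreSQL container; the user, password, and database name are placeholders, not values the project prescribes:

```shell
# Run PostgreSQL alongside (placeholder credentials; choose your own).
docker run -d --name fim-postgres -p 5432:5432 \
  -e POSTGRES_USER=fim -e POSTGRES_PASSWORD=change-me \
  -e POSTGRES_DB=fim_one postgres:16-alpine

# Then in .env:
#   WORKERS=4
#   DATABASE_URL=postgresql+asyncpg://fim:change-me@localhost/fim_one
```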

Nginx Reverse Proxy

For HTTPS and custom domain, put an Nginx reverse proxy in front:
User → Nginx (443/HTTPS) → localhost:3000
The API runs internally on port 8000 — Next.js proxies /api/* requests automatically. Only port 3000 needs to be exposed.
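A minimal server block for that layout might look like the following sketch; the domain and certificate paths are placeholders, and the Upgrade/Connection headers keep WebSocket and streaming responses working through the proxy:

```nginx
server {
    listen 443 ssl;
    server_name fim.example.com;                     # placeholder domain

    ssl_certificate     /etc/ssl/fim/fullchain.pem;  # placeholder paths
    ssl_certificate_key /etc/ssl/fim/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:3000;            # Next.js proxies /api/* onward
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_read_timeout 300s;                     # allow long streaming replies
    }
}
```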

Code Execution Sandbox

If you use the code execution sandbox (CODE_EXEC_BACKEND=docker), mount the Docker socket:
# docker-compose.yml
volumes:
  - /var/run/docker.sock:/var/run/docker.sock
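To confirm the mount took effect, you can check for the socket inside the running container (a quick sketch; it only needs ls, no docker CLI in the image):

```shell
# The socket should appear inside the container after a restart.
docker compose exec fim-one ls -l /var/run/docker.sock
```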

Cloudflare Tunnel

For a zero-open-ports setup, use Cloudflare Tunnel instead of Nginx. All traffic flows through Cloudflare’s edge — no need to expose ports 80/443, manage SSL certificates, or configure firewall rules.
Mainland China users: Cloudflare Free/Pro/Business plans have no PoPs (Points-of-Presence) in mainland China. Traffic from mainland China is routed to overseas edges (typically US West), causing frequent 502 errors and high latency. Do not use Cloudflare Tunnel if your primary users are in mainland China. Cloudflare Enterprise with China Network (JD Cloud partnership) is required for reliable mainland access.
User → Cloudflare Edge (SSL) → Tunnel → cloudflared → fim-one:3000
Setup:
1. Create a tunnel

Go to Cloudflare Zero Trust → Networks → Tunnels → Create a tunnel. Choose Cloudflared as the connector type.
2. Configure the public hostname

In the tunnel config, add a public hostname:
| Field | Value |
| --- | --- |
| Type | HTTP |
| URL | fim-one:3000 |
Leave all other settings (HTTP Host Header, Chunked Encoding, Timeouts, Access) at their defaults.
The URL uses the Docker service name fim-one, not localhost, because cloudflared runs as a separate container in the same Docker network.
3. Copy the tunnel token

In the tunnel’s Configure page, find the install command — it contains a token starting with eyJ.... Copy it.
4. Add the token to .env

# Add to your .env file on the server
CLOUDFLARE_TUNNEL_TOKEN=eyJhIjoiNj...
5. Deploy with the tunnel overlay

docker compose -f docker-compose.yml -f docker-compose.tunnel.yml build
docker compose -f docker-compose.yml -f docker-compose.tunnel.yml up -d
The docker-compose.tunnel.yml overlay adds a cloudflared sidecar container. The base docker-compose.yml is unchanged — community users without Cloudflare can continue using docker compose up -d as before.
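If you are curious what such an overlay contains, a minimal sketch (the repo's actual file may differ) adds only the sidecar; cloudflared reads the token from the TUNNEL_TOKEN environment variable:

```yaml
# docker-compose.tunnel.yml (sketch): cloudflared sidecar only
services:
  cloudflared:
    image: cloudflare/cloudflared:latest
    command: tunnel --no-autoupdate run
    environment:
      - TUNNEL_TOKEN=${CLOUDFLARE_TUNNEL_TOKEN}
    depends_on:
      - fim-one
    restart: unless-stopped
```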
6. Remove the old DNS record

If your domain previously had an A record pointing to your server’s IP, delete it in Cloudflare DNS. The tunnel automatically creates a CNAME record pointing to its edge endpoint.
7. Close server ports

Remove (or comment out) the ports section from docker-compose.yml on your server. Traffic now flows exclusively through the tunnel — no inbound ports needed.
Cloudflare Tunnel is free on all plans, including the Free plan, with no bandwidth or traffic limits.

Script Deployment (Bare Metal)

For bare-metal servers or custom process managers, use ./start.sh directly:
./start.sh           # production mode
./start.sh portal    # same as above (explicit)
./start.sh api       # API only (headless)
In this mode, Redis is not included automatically. The system runs in single-worker, in-process mode by default — suitable for low-traffic deployments. To enable multi-worker with Redis locally:
# Start a Redis instance (Docker or system package)
docker run -d --name redis -p 6379:6379 redis:7-alpine

# Add to .env
REDIS_URL=redis://localhost:6379/0
WORKERS=4
DATABASE_URL=postgresql+asyncpg://user:pass@localhost/fim_one
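For the "custom process managers" case, one common choice is a systemd unit wrapping start.sh. A sketch with placeholder install path and service account:

```ini
# /etc/systemd/system/fim-one.service (sketch; adjust paths and user)
[Unit]
Description=FIM One
After=network-online.target

[Service]
Type=simple
User=fim                              # placeholder service account
WorkingDirectory=/opt/fim-one         # placeholder install path
EnvironmentFile=/opt/fim-one/.env
ExecStart=/opt/fim-one/start.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enable it with systemctl enable --now fim-one and inspect it with systemctl status fim-one.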