Option A: Docker (Recommended)
No local Python or Node.js is required — everything is built inside the container. Application data is persisted in named Docker volumes (fim-data, fim-uploads) and survives container restarts.
Note: Docker mode does not support hot reload. Code changes require rebuilding the image (docker compose up --build -d). For active development with live reload, use Option B below.
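A minimal Docker session, assuming the compose file lives at the repo root, might look like:

```shell
# First run, or after any code change (no hot reload in Docker mode)
docker compose up --build -d

# Subsequent restarts without rebuilding
docker compose up -d

# Tail logs, or tear everything down (volumes are preserved)
docker compose logs -f
docker compose down
```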
Option B: Local Development
Prerequisites: Python 3.11+, uv, Node.js 18+, pnpm.

| Command | What starts | URL |
|---|---|---|
| ./start.sh | Next.js + FastAPI | http://localhost:3000 (UI) + :8000 (API) |
| ./start.sh dev | Same, with hot reload (Python --reload + Next.js HMR) | Same |
| ./start.sh dev:api | API only, dev mode (--reload) | http://localhost:8000/api |
| ./start.sh dev:ui | Next.js only, dev mode (HMR) | http://localhost:3000 |
| ./start.sh api | FastAPI only (headless, for integration or testing) | http://localhost:8000/api |
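A typical first run for local development might look like this (uv sync and pnpm install are the standard dependency commands for those tools, but the exact project layout is an assumption):

```shell
# Install backend and frontend dependencies
uv sync        # Python deps (assumes a pyproject.toml at the repo root)
pnpm install   # Node deps

# Start both servers with hot reload
./start.sh dev
```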
Configuration
FIM One works with any OpenAI-compatible LLM provider — OpenAI, DeepSeek, Anthropic, Qwen, Ollama, vLLM, and more.

| Provider | LLM_API_KEY | LLM_BASE_URL | LLM_MODEL |
|---|---|---|---|
| OpenAI | sk-... | (default) | gpt-4o |
| DeepSeek | sk-... | https://api.deepseek.com/v1 | deepseek-chat |
| Anthropic | sk-ant-... | https://api.anthropic.com/v1 | claude-sonnet-4-6 |
| Ollama (local) | ollama | http://localhost:11434/v1 | qwen2.5:14b |
Set these variables in .env:
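For example, a DeepSeek configuration might look like this (the key and values are illustrative placeholders; the variable names come from the table above):

```shell
# .env — example values only
LLM_API_KEY=sk-your-key-here
LLM_BASE_URL=https://api.deepseek.com/v1
LLM_MODEL=deepseek-chat
```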
Production Deployment
Docker (Recommended)
docker compose up -d brings up everything you need — no manual service configuration required:
| Service | Purpose | Configured by |
|---|---|---|
| fim-one | API + Frontend | .env (your LLM keys, etc.) |
| Redis | Cross-worker interrupt relay | Auto-configured by compose |
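End to end, a production bring-up might look like this (the .env.example filename is an assumption — substitute however your checkout ships its env template):

```shell
cp .env.example .env             # assumption: an example env file ships with the repo
# edit .env with your LLM keys, then:
docker compose up -d
docker compose logs -f fim-one   # service name from the table above
```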
Scaling with Workers
By default, the API runs with a single worker process. To handle more concurrent users, increase the worker count via .env:
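A sketch of a multi-worker configuration (WORKERS and DATABASE_URL are the variable names used in this guide; the connection-string value is an illustrative placeholder):

```shell
# .env — scale to 4 worker processes
WORKERS=4
# Multi-worker mode requires PostgreSQL (see below); placeholder credentials
DATABASE_URL=postgresql://fim:secret@db:5432/fim
```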
Running more than one worker also requires:

- PostgreSQL — SQLite is single-writer and does not support concurrent writes. Set DATABASE_URL to a PostgreSQL connection string.
- Redis — already included in Docker Compose (auto-configured). Handles the cross-worker interrupt/inject relay.

With WORKERS=1 (the default), no Redis or PostgreSQL is needed — SQLite works fine.
Nginx Reverse Proxy
For HTTPS and a custom domain, put an Nginx reverse proxy in front. The app routes /api/* requests to the backend automatically, so only port 3000 needs to be exposed.
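A minimal sketch of such a proxy (the server name and certificate paths are assumptions; the upgrade headers keep streaming responses and WebSockets working through the proxy):

```nginx
server {
    listen 443 ssl;
    server_name fim.example.com;                    # assumption: your domain

    ssl_certificate     /etc/ssl/fim/fullchain.pem; # assumption: cert paths
    ssl_certificate_key /etc/ssl/fim/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:3000;           # the only exposed port
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```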
Code Execution Sandbox
If you use the code execution sandbox (CODE_EXEC_BACKEND=docker), mount the Docker socket:
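The mount can be expressed as a compose override, a sketch of which follows (the service name fim-one comes from the table above; the override-file approach is an assumption — the same volume line could go directly in docker-compose.yml):

```yaml
# docker-compose.override.yml — give the container access to the host Docker daemon
services:
  fim-one:
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
```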
Cloudflare Tunnel
For a zero-open-ports setup, use Cloudflare Tunnel instead of Nginx. All traffic flows through Cloudflare’s edge — no need to expose ports 80/443, manage SSL certificates, or configure firewall rules.

Create a tunnel
Go to Cloudflare Zero Trust → Networks → Tunnels → Create a tunnel.
Choose Cloudflared as the connector type.
Configure the public hostname
In the tunnel config, add a public hostname:
| Field | Value |
|---|---|
| Type | HTTP |
| URL | fim-one:3000 |

Leave all other settings (HTTP Host Header, Chunked Encoding, Timeouts, Access) at their defaults.
The URL uses the Docker service name fim-one, not localhost, because cloudflared runs as a separate container in the same Docker network.

Copy the tunnel token
In the tunnel’s Configure page, find the install command — it contains a token starting with eyJ.... Copy it.

Deploy with the tunnel overlay
The docker-compose.tunnel.yml overlay adds a cloudflared sidecar container. The base docker-compose.yml is unchanged — community users without Cloudflare can continue using docker compose up -d as before.

Remove the old DNS record
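Deployment with the overlay might look like this (passing the token via a TUNNEL_TOKEN environment variable is an assumption — check how docker-compose.tunnel.yml expects to receive it):

```shell
# Provide the token copied from the Cloudflare dashboard
export TUNNEL_TOKEN=eyJ...   # placeholder

# Start the base stack plus the cloudflared sidecar
docker compose -f docker-compose.yml -f docker-compose.tunnel.yml up -d
```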
If your domain previously had an A record pointing to your server’s IP, delete it in Cloudflare DNS. The tunnel automatically creates a CNAME record pointing to its edge endpoint.
Script Deployment (Bare Metal)
For bare-metal servers or custom process managers, use ./start.sh directly:
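One way to keep it running detached (the install path, nohup, and log location are illustrative choices, not project conventions — substitute systemd or your own process manager):

```shell
cd /opt/fim-one                        # assumption: install path
nohup ./start.sh > fim-one.log 2>&1 &  # run in the background, log to a file
```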