### Option A: Docker (Recommended)
No local Python or Node.js required; everything is built inside the container. Data is persisted in named Docker volumes (`fim-data`, `fim-uploads`) and survives container restarts.
Note: Docker mode does not support hot reload; code changes require rebuilding the image (`docker compose up --build -d`). For active development with live reload, use Option B below.
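The typical Docker lifecycle, assuming the repository's `docker-compose.yml` (volume names as above):

```shell
docker compose up -d          # first run: build the image and start in the background
docker compose logs -f        # follow application logs
docker compose up --build -d  # rebuild and restart after code changes
docker compose down           # stop; named volumes (fim-data, fim-uploads) are kept
```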
### Option B: Local Development
Prerequisites: Python 3.11+, uv, Node.js 18+, pnpm.

| Command | What starts | URL |
|---|---|---|
| `./start.sh` | Next.js + FastAPI | http://localhost:3000 (UI) + :8000 (API) |
| `./start.sh dev` | Same, with hot reload (Python `--reload` + Next.js HMR) | Same |
| `./start.sh api` | FastAPI only (headless, for integrations or testing) | http://localhost:8000/api |
## Configuration
FIM Agent works with any OpenAI-compatible LLM provider — OpenAI, DeepSeek, Anthropic, Qwen, Ollama, vLLM, and more.

| Provider | `LLM_API_KEY` | `LLM_BASE_URL` | `LLM_MODEL` |
|---|---|---|---|
| OpenAI | sk-... | (default) | gpt-4o |
| DeepSeek | sk-... | https://api.deepseek.com/v1 | deepseek-chat |
| Anthropic | sk-ant-... | https://api.anthropic.com/v1 | claude-sonnet-4-6 |
| Ollama (local) | ollama | http://localhost:11434/v1 | qwen2.5:14b |
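Any OpenAI-compatible endpoint can be smoke-tested directly with curl; for example, against the Ollama settings from the table above (substitute your own values):

```shell
# Send a one-message chat completion to a local Ollama server
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer ollama" \
  -d '{"model": "qwen2.5:14b", "messages": [{"role": "user", "content": "Hello"}]}'
```

A JSON response with a `choices` array confirms the endpoint speaks the OpenAI chat protocol.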
Set these variables in a `.env` file in the project root.
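For example, using the DeepSeek row from the table above (the key is a placeholder):

```shell
LLM_API_KEY=sk-your-key-here
LLM_BASE_URL=https://api.deepseek.com/v1
LLM_MODEL=deepseek-chat
```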
## Production Deployment
Both options work in production:

| Method | Command | Best for |
|---|---|---|
| Docker | docker compose up -d | Hands-off deployment, easy updates |
| Script | ./start.sh | Bare-metal servers, custom process managers |
The Next.js server proxies `/api/*` requests to the FastAPI backend automatically, so only port 3000 needs to be exposed.
If you use the code execution sandbox (`CODE_EXEC_BACKEND=docker`), mount the Docker socket into the container.
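In `docker-compose.yml` that mount looks like the following (the service name `app` is an assumption):

```yaml
services:
  app:
    environment:
      - CODE_EXEC_BACKEND=docker
    volumes:
      # Gives the container access to the host Docker daemon for sandboxed execution
      - /var/run/docker.sock:/var/run/docker.sock
```

Note that exposing the Docker socket grants the container broad control over the host daemon; do this only on hosts you trust.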