All configuration is done via .env. Copy example.env and fill in your values:

```bash
cp example.env .env
```

Configuration Levels

Each integration has a configuration level indicating its importance:
| Level | Meaning | Behavior when not configured |
|-------|---------|------------------------------|
| Required | Core system dependency | System will error — chat and primary functions will not work |
| Recommended | Significant feature enabler | Graceful degradation — the feature is visibly unavailable but the system runs |
| Optional | Enhancement capability | Transparent degradation — the system works fine; the capability is simply not present |
Note: Admin-configured models (Admin → Models page) can substitute for LLM environment variables. The health check considers both sources.

Frontend (Local Dev Only)

The frontend has a separate env file only for local development: frontend/.env.local.
This file is NOT used in Docker. Inside the Docker container, Next.js proxies /api/* to the Python backend internally (port 8000 is container-internal), so no frontend env file is needed.
For local dev, the defaults work out of the box — you do not need to create frontend/.env.local unless your backend runs on a non-default port. If you need to override, create frontend/.env.local manually:
```bash
echo 'NEXT_PUBLIC_API_URL=http://localhost:9000' > frontend/.env.local
```
| Variable | Default | Description |
|----------|---------|-------------|
| NEXT_PUBLIC_API_URL | http://localhost:8000 (auto) | Backend URL the browser uses for direct API calls (OAuth redirects, streaming). Auto-detected from window.location if unset — only override if your backend runs on a non-standard port locally. |
Build-time note: NEXT_PUBLIC_* variables are baked into the JS bundle at pnpm build time. Changing them at runtime (e.g. via root .env) has no effect — this is why they live in frontend/.env.local for local dev only.

LLM (Required)

| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| LLM_API_KEY | Yes | | API key for the LLM provider |
| LLM_BASE_URL | No | https://api.openai.com/v1 | Base URL of any OpenAI-compatible API |
| LLM_MODEL | No | gpt-4o | Main model — used for planning, analysis, and the ReAct agent |
| FAST_LLM_MODEL | No | (falls back to LLM_MODEL) | Fast model — used for DAG step execution (cheaper, faster) |
| LLM_TEMPERATURE | No | 0.7 | Default sampling temperature |
| LLM_CONTEXT_SIZE | No | 128000 | Context window size for the main LLM |
| LLM_MAX_OUTPUT_TOKENS | No | 64000 | Max output tokens per call for the main LLM |
| FAST_LLM_CONTEXT_SIZE | No | (falls back to LLM_CONTEXT_SIZE) | Context window size for the fast LLM |
| FAST_LLM_MAX_OUTPUT_TOKENS | No | (falls back to LLM_MAX_OUTPUT_TOKENS) | Max output tokens per call for the fast LLM |
Resolution order: User Preference → Admin Models (DB) → ENV Fallback. If an admin model with role “General” is configured in Admin → Models, these ENV vars serve as fallback only. The health check considers both sources.
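As a sketch, a minimal .env block for an OpenAI-compatible endpoint. The key value is a placeholder, and gpt-4o-mini as the fast model is an illustrative choice, not a project default:

```bash
# LLM (Required): placeholder values, substitute your own
LLM_API_KEY=sk-your-key-here
LLM_BASE_URL=https://api.openai.com/v1   # any OpenAI-compatible endpoint
LLM_MODEL=gpt-4o                         # planning, analysis, ReAct agent
FAST_LLM_MODEL=gpt-4o-mini               # cheaper model for DAG step execution
```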

Agent Execution

| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| REACT_MAX_ITERATIONS | No | 20 | Max tool-call iterations per ReAct request |
| MAX_CONCURRENCY | No | 5 | Max parallel steps in the DAG executor |
| DAG_STEP_MAX_ITERATIONS | No | 15 | Max tool-call iterations within each DAG step |
| DAG_MAX_REPLAN_ROUNDS | No | 3 | Max autonomous re-plan attempts when the goal is not achieved |
| DAG_REPLAN_STOP_CONFIDENCE | No | 0.8 | Stop re-planning once the agent's confidence that the goal is unachievable exceeds this threshold (lower values give up sooner; 1.0 effectively never stops early) |
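If long autonomous runs keep hitting the iteration caps, the budgets can be raised in .env. The values below are illustrative, not recommendations:

```bash
# Example overrides for long-running agent tasks (tune to your workload)
REACT_MAX_ITERATIONS=40        # allow more tool calls per ReAct request
MAX_CONCURRENCY=3              # fewer parallel DAG steps on a small host
DAG_MAX_REPLAN_ROUNDS=5        # more autonomous re-plan attempts
```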

Web Tools (Optional)

| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| JINA_API_KEY | No | | Jina API key — also used for embedding and reranker; get yours at jina.ai |
| TAVILY_API_KEY | No | | Tavily Search API key (auto-selected if set and WEB_SEARCH_PROVIDER is unset) |
| BRAVE_API_KEY | No | | Brave Search API key (auto-selected if set and WEB_SEARCH_PROVIDER is unset) |
| WEB_SEARCH_PROVIDER | No | jina | Search provider selector: jina / tavily / brave |
| WEB_FETCH_PROVIDER | No | jina (if key set, else httpx) | Fetch provider: jina / httpx |
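For example, to use Tavily for search and plain httpx fetching when no Jina key is available (the key value is a placeholder):

```bash
# Tavily search, no Jina dependency
TAVILY_API_KEY=tvly-your-key-here
WEB_SEARCH_PROVIDER=tavily
WEB_FETCH_PROVIDER=httpx    # direct HTTP fetching instead of Jina reader
```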

Embedding & Retrieval (Recommended)

| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| EMBEDDING_MODEL | No | jina-embeddings-v3 | Embedding model identifier |
| EMBEDDING_DIMENSION | No | 1024 | Embedding vector dimension |
| EMBEDDING_API_KEY | No | (uses JINA_API_KEY) | Override API key for a different embedding provider |
| EMBEDDING_BASE_URL | No | https://api.jina.ai/v1 | Override base URL for a different embedding provider |
| RETRIEVAL_MODE | No | grounding | grounding (full pipeline with citations/conflicts/confidence) or simple (basic RAG) |
| RERANKER_MODEL | No | jina-reranker-v2-base-multilingual | Reranker model identifier |
| RERANKER_PROVIDER | No | jina | Reranker provider: jina / cohere / openai |
| COHERE_API_KEY | No | | Cohere API key (auto-selects the Cohere reranker when set) |
| COHERE_RERANKER_MODEL | No | rerank-multilingual-v3.0 | Cohere reranker model |
| VECTOR_STORE_DIR | No | ./data/vector_store | Directory for LanceDB vector store data |
Embedding is recommended for knowledge base features. Reranker is optional — search works without it using fusion scoring.
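As a hedged sketch, pointing the embedding pipeline at OpenAI instead of Jina could look like this; the key is a placeholder, and EMBEDDING_DIMENSION must match the chosen model (1536 for text-embedding-3-small):

```bash
# Example: OpenAI embeddings instead of the Jina default
EMBEDDING_MODEL=text-embedding-3-small
EMBEDDING_DIMENSION=1536                  # must match the model's output size
EMBEDDING_API_KEY=sk-your-openai-key
EMBEDDING_BASE_URL=https://api.openai.com/v1
RETRIEVAL_MODE=grounding                  # keep the full citation pipeline
```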

Code Execution

| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| CODE_EXEC_BACKEND | No | local | local (direct host execution) or docker (isolated containers) |
| DOCKER_PYTHON_IMAGE | No | python:3.11-slim | Docker image for Python execution |
| DOCKER_NODE_IMAGE | No | node:20-slim | Docker image for Node.js execution |
| DOCKER_SHELL_IMAGE | No | python:3.11-slim | Docker image for shell execution |
| DOCKER_MEMORY | No | (Docker default) | RAM cap per container (e.g. 256m, 512m, 1g) |
| DOCKER_CPUS | No | (Docker default) | CPU quota per container (e.g. 0.5, 1.0) |
| SANDBOX_TIMEOUT | No | 120 | Default execution timeout in seconds |
Security: local mode runs AI-generated code directly on the host. For internet-facing or multi-user deployments, always set CODE_EXEC_BACKEND=docker.
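A hardened fragment for such a deployment might look like this; the resource limits are illustrative starting points, not tested recommendations:

```bash
# Isolated code execution for internet-facing or multi-user deployments
CODE_EXEC_BACKEND=docker
DOCKER_MEMORY=512m       # RAM cap per container
DOCKER_CPUS=1.0          # CPU quota per container
SANDBOX_TIMEOUT=60       # fail runaway code faster than the 120s default
```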

Tool Artifacts

Size limits for files produced by tool execution (code execution, template rendering, image generation).
| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| MAX_ARTIFACT_SIZE | No | 10485760 (10 MB) | Max single artifact file size in bytes |
| MAX_ARTIFACTS_TOTAL | No | 52428800 (50 MB) | Max total artifact size per session in bytes |

Image Generation (Optional)

| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| IMAGE_GEN_PROVIDER | No | google | google (Gemini native API) or openai (OpenAI-compatible /v1/images/generations) |
| IMAGE_GEN_API_KEY | No | | Google AI Studio key (google) or proxy/OpenAI API key (openai) |
| IMAGE_GEN_MODEL | No | gemini-3.1-flash-image-preview | Image generation model (e.g. dall-e-3, gemini-nano-banana-2) |
| IMAGE_GEN_BASE_URL | No | (per provider) | Google: https://generativelanguage.googleapis.com/v1beta; OpenAI: https://api.openai.com/v1 |
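For instance, routing image generation through an OpenAI-compatible endpoint instead of the Google default (the key value is a placeholder):

```bash
# OpenAI-compatible image generation
IMAGE_GEN_PROVIDER=openai
IMAGE_GEN_API_KEY=sk-your-key-here
IMAGE_GEN_MODEL=dall-e-3
IMAGE_GEN_BASE_URL=https://api.openai.com/v1
```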

Email (Optional)

Auto-registers the email_send built-in tool when SMTP_HOST, SMTP_USER, and SMTP_PASS are all set.

| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| SMTP_HOST | Cond. | | SMTP server hostname |
| SMTP_PORT | No | 465 | SMTP port |
| SMTP_SSL | No | ssl | TLS mode: ssl (port 465) / tls (STARTTLS, port 587) / "" (plain) |
| SMTP_USER | Cond. | | SMTP login username |
| SMTP_PASS | Cond. | | SMTP login password |
| SMTP_FROM | No | (uses SMTP_USER) | Sender address shown in the From header |
| SMTP_FROM_NAME | No | | Display name shown in the From header |
| SMTP_ALLOWED_DOMAINS | No | | Comma-separated domain allowlist (e.g. example.com,corp.io); blocks recipients outside listed domains |
| SMTP_ALLOWED_ADDRESSES | No | | Comma-separated exact-address allowlist; combined with SMTP_ALLOWED_DOMAINS. Leave both unset to allow any recipient (not recommended for shared mailboxes) |
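A sketch for a Gmail-style mailbox with a recipient allowlist; the host, credentials, and domain are placeholders, and Gmail typically requires an app password rather than the account password:

```bash
# Implicit-TLS SMTP with outgoing recipients restricted to one domain
SMTP_HOST=smtp.gmail.com
SMTP_PORT=465
SMTP_SSL=ssl
SMTP_USER=bot@example.com
SMTP_PASS=your-app-password
SMTP_FROM_NAME="Agent Bot"
SMTP_ALLOWED_DOMAINS=example.com   # email_send refuses other recipient domains
```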

Connectors

| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| CONNECTOR_RESPONSE_MAX_CHARS | No | 50000 | Max characters for non-array JSON / plain-text connector responses |
| CONNECTOR_RESPONSE_MAX_ITEMS | No | 10 | Max array items to keep when a connector response is a JSON array |

Platform

| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| DATABASE_URL | No | sqlite+aiosqlite:///./data/fim_agent.db | Database connection string (SQLite by default; PostgreSQL via asyncpg also supported) |
| JWT_SECRET_KEY | No | CHANGE_ME | Secret key for JWT token signing. The placeholder CHANGE_ME (or any legacy default) triggers auto-generation of a secure 256-bit random key on first start, which is written back to .env. Set it explicitly in production to keep tokens valid across restarts and replicas |
| CORS_ORIGINS | No | | Comma-separated list of extra allowed CORS origins beyond the default localhost entries. Required when the frontend runs on a non-localhost domain (e.g. https://app.example.com) |
| UPLOADS_DIR | No | ./uploads | Directory for uploaded files |
| MCP_SERVERS | No | | JSON array of MCP server configs (requires uv sync --extra mcp) |
| ALLOW_STDIO_MCP | No | true | Allow stdio MCP servers. Set false for public/SaaS deployments |
| LOG_LEVEL | No | INFO | Logging level: DEBUG / INFO / WARNING / ERROR / CRITICAL |
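The SQLite default needs no setup. For PostgreSQL, swap the driver scheme in DATABASE_URL; the host, credentials, and origin below are placeholders, and the snippet assumes the asyncpg driver is installed:

```bash
# PostgreSQL via the asyncpg driver (placeholders throughout)
DATABASE_URL=postgresql+asyncpg://fim:secret@db.internal:5432/fim_agent
# Pin the JWT key in production; generate one with: openssl rand -hex 32
JWT_SECRET_KEY=your-generated-hex-key
CORS_ORIGINS=https://app.example.com
```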

OAuth (Optional)

When both CLIENT_ID and CLIENT_SECRET are set for a provider, the login page automatically shows the corresponding OAuth button.
| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| GITHUB_CLIENT_ID | No | | GitHub OAuth App client ID. Create at github.com/settings/developers → OAuth Apps |
| GITHUB_CLIENT_SECRET | No | | GitHub OAuth App client secret |
| GOOGLE_CLIENT_ID | No | | Google OAuth client ID. Create at console.cloud.google.com/apis/credentials |
| GOOGLE_CLIENT_SECRET | No | | Google OAuth client secret |
| DISCORD_CLIENT_ID | No | | Discord OAuth2 client ID. Create at discord.com/developers |
| DISCORD_CLIENT_SECRET | No | | Discord OAuth2 client secret |
| FEISHU_APP_ID | No | | Feishu (Lark) App ID. Create at open.feishu.cn. Requires the contact:user.email:readonly permission |
| FEISHU_APP_SECRET | No | | Feishu (Lark) App Secret |
| FRONTEND_URL | Prod | http://localhost:3000 | Where the browser lands after OAuth completes. Must be set in production (e.g. https://yourdomain.com) |
| API_BASE_URL | Prod | http://localhost:8000 | Externally reachable backend URL, used to build OAuth callback URLs. Must be set in production |
| NEXT_PUBLIC_API_URL | Prod | (auto-detected as &lt;hostname&gt;:8000) | Browser-side API base URL for OAuth redirects. A frontend build-time variable — set it in frontend/.env.local for local dev, or pass it as a Docker build arg for custom production deployments. Auto-detection works for standard reverse-proxy setups (port 80/443) |
Prod means optional locally (the defaults work) but required for any internet-facing deployment.
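A production fragment for a deployment behind a reverse proxy might look like this; the domain and credentials are placeholders:

```bash
# Internet-facing deployment served from https://app.example.com
FRONTEND_URL=https://app.example.com
API_BASE_URL=https://app.example.com     # callbacks are built from this URL
GITHUB_CLIENT_ID=your-client-id
GITHUB_CLIENT_SECRET=your-client-secret  # enables the GitHub login button
```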

OAuth Callback URLs to register with each provider

The backend constructs callback URLs as: {API_BASE_URL}/api/auth/oauth/{provider}/callback
| Provider | Callback URL to register |
|----------|--------------------------|
| GitHub | https://yourdomain.com/api/auth/oauth/github/callback |
| Google | https://yourdomain.com/api/auth/oauth/google/callback |
| Discord | https://yourdomain.com/api/auth/oauth/discord/callback |