# Riprap environment configuration.
#
# Copy this file to `.env` and fill in the values that match the
# inference backend you want to talk to. The default profile runs
# only the app container, so both the LLM (vLLM serving Granite 4.1)
# and the ML specialist service must be reachable at HTTP endpoints.
#
# Three common configurations:
#
# 1. Easiest: talk to the live demo's backends. Adam runs a public
#    MI300X droplet for the hackathon; if it's still up at demo time,
#    both endpoints are reachable from anywhere.
#
# 2. Self-hosted: bring up your own MI300X droplet via
#    docs/DROPLET-RUNBOOK.md, then point both URLs at it.
#
# 3. Full local: use `docker compose --profile with-models up` to
#    run the riprap-models service yourself (requires a GPU on your
#    box) and point a separate vLLM container at Granite 4.1. See
#    the example commands after this list.
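#
# A minimal command sketch (assumes you start in the repo root and
# that this file is checked in as `.env.example`; the `with-models`
# profile name is the one used above):
#
#   cp .env.example .env                       # then edit the values below
#   docker compose up                          # default profile: app only
#   docker compose --profile with-models up    # app + riprap-models (needs a GPU)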

# ---- Granite 4.1 reconciler (vLLM, OpenAI-compatible) -----------------
# Set RIPRAP_LLM_PRIMARY to "ollama" instead of "vllm" if you have a
# local Ollama with granite4.1:8b pulled and want to use that; see the
# commented example values below.
RIPRAP_LLM_PRIMARY=vllm
RIPRAP_LLM_BASE_URL=http://your-vllm-host:8000/v1
RIPRAP_LLM_API_KEY=your-token-here
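#
# Example values for the Ollama alternative (a sketch assuming a stock
# local install: 11434 is Ollama's default port and /v1 its
# OpenAI-compatible route; Ollama ignores the API key, but set a
# placeholder in case the client insists on one):
#
#   RIPRAP_LLM_PRIMARY=ollama
#   RIPRAP_LLM_BASE_URL=http://localhost:11434/v1
#   RIPRAP_LLM_API_KEY=ollama
#
# Quick sanity check for either backend: GET /v1/models is part of the
# OpenAI-compatible surface that both vLLM and Ollama serve.
#
#   curl -H "Authorization: Bearer $RIPRAP_LLM_API_KEY" \
#        "$RIPRAP_LLM_BASE_URL/models"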

# ---- ML specialist service (Prithvi, TerraMind, GLiNER, etc.) ---------
RIPRAP_ML_BASE_URL=http://your-ml-host:7860
RIPRAP_ML_API_KEY=your-token-here
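#
# The specialist service has no standardized probe route, so this check
# is an assumption: port 7860 is the Gradio default, and a Gradio-style
# service answers a plain GET on its root. Adjust to match the actual
# service.
#
#   curl -i -H "Authorization: Bearer $RIPRAP_ML_API_KEY" \
#        "$RIPRAP_ML_BASE_URL/"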

# ---- Backend pill labels (cosmetic, shown top-right of the UI) --------
RIPRAP_HARDWARE_LABEL=AMD MI300X
RIPRAP_ENGINE_LABEL=Granite 4.1 / vLLM