# Riprap environment configuration.
#
# Copy this file to `.env` and fill in the values that match the
# inference backend you want to talk to. The default profile runs
# only the app container, so both the LLM (vLLM serving Granite 4.1)
# and the ML specialist service must be reachable at HTTP endpoints.
#
# Three common configurations:
#
# 1. Easiest — talk to the live demo's backends. Adam runs a public
# MI300X droplet for the hackathon; if it's still up at demo time,
# both endpoints are reachable from anywhere.
#
# 2. Self-hosted — bring up your own MI300X droplet via
# docs/DROPLET-RUNBOOK.md, then point both URLs at it.
#
# 3. Full local — use `docker compose --profile with-models up` to
# run the riprap-models service yourself (requires a GPU on your
# box), plus a separate vLLM container serving Granite 4.1.
# ---- Granite 4.1 reconciler (vLLM, OpenAI-compatible) -----------------
# Set to "ollama" instead of "vllm" if you have a local Ollama with
# granite4.1:8b pulled and want to use that.
RIPRAP_LLM_PRIMARY=vllm
RIPRAP_LLM_BASE_URL=http://your-vllm-host:8000/v1
RIPRAP_LLM_API_KEY=your-token-here
# ---- ML specialist service (Prithvi, TerraMind, GLiNER, etc.) ---------
RIPRAP_ML_BASE_URL=http://your-ml-host:7860
RIPRAP_ML_API_KEY=your-token-here
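# Sanity check (optional): vLLM's OpenAI-compatible server exposes
# GET /v1/models, so once the variables above are filled in you can
# verify the reconciler endpoint is reachable with curl. The hostnames
# and token are the placeholders from this file; substitute your own.
#
#   curl -H "Authorization: Bearer $RIPRAP_LLM_API_KEY" \
#        "$RIPRAP_LLM_BASE_URL/models"
#
# A JSON list of models means the endpoint is up and the key works.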
# ---- Backend pill labels (cosmetic, shown top-right of the UI) --------
RIPRAP_HARDWARE_LABEL=AMD MI300X
RIPRAP_ENGINE_LABEL=Granite 4.1 / vLLM