# lib/prompts

File-based prompt loader and templates shared by both the generation pipeline and the runtime orchestration layer.
## Directory layout

```
lib/prompts/
├── loader.ts         - file I/O (no caching; see Loading below)
├── index.ts          - public API (loadPrompt, buildPrompt, …) + PROMPT_IDS
├── types.ts          - PromptId / SnippetId string literal unions
├── templates/
│   └── <prompt-id>/
│       ├── system.md - required
│       └── user.md   - optional (mostly for offline generation prompts)
└── snippets/
    └── <snippet-id>.md - reusable blocks referenced via {{snippet:…}}
```
## Template syntax

Three kinds of placeholder:

| Syntax | Semantics | Resolved by |
|---|---|---|
| `{{variableName}}` | Value is provided by the caller via `buildPrompt(id, vars)` | `interpolateVariables` in `loader.ts` |
| `{{snippet:snippet-name}}` | File content is spliced in at load time | `processSnippets` in `loader.ts` |
| `{{#if conditionName}}...{{/if}}` | Content is included only when `conditionName` is truthy in the template variables | `processConditionalBlocks` in `loader.ts` |
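For illustration, a hypothetical `system.md` combining all three placeholder kinds (the snippet name `tone-guidelines` is invented; `{{agentName}}` and `{{stateContext}}` are the variable names used as examples elsewhere in this document):

```markdown
You are {{agentName}}, a tutor on the shared whiteboard.

{{snippet:tone-guidelines}}

{{#if stateContext}}
Current board state:
{{stateContext}}
{{/if}}
```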
Processing order is snippet includes first, then conditional blocks, then
variable interpolation, so snippets may themselves contain `{{#if}}`
blocks and `{{variableName}}` placeholders, provided the caller supplies the
corresponding values. Conditional blocks read from the same variables record
passed to `buildPrompt`; no separate conditions object is needed.
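That ordering can be sketched as a three-stage pipeline. This is a simplified illustration, not the real `loader.ts` implementation: the snippet store is stubbed with an in-memory map, and the regexes are assumptions about the placeholder syntax.

```typescript
// Stubbed snippet store; the real loader reads from lib/prompts/snippets/.
const snippets: Record<string, string> = {
  greeting: 'Hello {{agentName}}!', // snippets may contain placeholders
};

// Stage 1: splice snippet file content in at load time.
function processSnippets(tpl: string): string {
  return tpl.replace(/\{\{snippet:([\w-]+)\}\}/g, (_, name) => {
    const body = snippets[name];
    if (body === undefined) throw new Error(`Unknown snippet: ${name}`);
    return body;
  });
}

// Stage 2: keep a block only when its condition name is truthy in vars.
function processConditionalBlocks(tpl: string, vars: Record<string, unknown>): string {
  return tpl.replace(
    /\{\{#if (\w+)\}\}([\s\S]*?)\{\{\/if\}\}/g,
    (_, name, body) => (vars[name] ? body : ''),
  );
}

// Stage 3: interpolate variables; unknown names pass through unchanged.
function interpolateVariables(tpl: string, vars: Record<string, unknown>): string {
  return tpl.replace(/\{\{(\w+)\}\}/g, (m, name) =>
    name in vars ? String(vars[name]) : m,
  );
}

function render(tpl: string, vars: Record<string, unknown>): string {
  return interpolateVariables(
    processConditionalBlocks(processSnippets(tpl), vars),
    vars,
  );
}
```

Because snippets are spliced before the conditional and variable passes, the `{{agentName}}` inside the stubbed snippet above is resolved like any inline placeholder.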
## Naming conventions

- Placeholder names use camelCase. Example: `{{agentName}}`, `{{stateContext}}`.
- Template IDs use kebab-case. Example: `agent-system`, `pbl-design`.
- `lib/prompts/templates/slide-content/{system,user}.md` still uses legacy snake_case placeholders (`{{canvas_width}}`, `{{canvas_height}}`). This predates the camelCase convention; don't imitate it when writing new templates.
## Adding a new prompt

- Create `lib/prompts/templates/<new-id>/system.md` (and `user.md` if needed).
- Add `<new-id>` to the `PromptId` union in `types.ts`.
- Add `NEW_ID: '<new-id>'` to the `PROMPT_IDS` constant in `index.ts` (the `satisfies Record<string, PromptId>` clause enforces that the value exists in the union).
- Call `buildPrompt(PROMPT_IDS.NEW_ID, vars)` from the consuming module.
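The union-plus-`satisfies` pattern from those steps looks roughly like this. It is a sketch: the real `PromptId` union is larger, and `my-new-prompt` / `MY_NEW_PROMPT` are invented placeholder names.

```typescript
// types.ts (sketch): string-literal union of known template IDs.
type PromptId = 'agent-system' | 'pbl-design' | 'my-new-prompt';

// index.ts (sketch): `satisfies` makes the compiler reject any entry
// whose value is not a member of PromptId, so a typoed ID fails at
// build time instead of at prompt-load time.
const PROMPT_IDS = {
  AGENT_SYSTEM: 'agent-system',
  PBL_DESIGN: 'pbl-design',
  MY_NEW_PROMPT: 'my-new-prompt', // invented example entry
} as const satisfies Record<string, PromptId>;
```

`as const satisfies` (TypeScript 4.9+) keeps the literal value types for callers while still checking each value against the union.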
## Still in TypeScript (not yet in templates)

Not every prompt fragment lives in markdown. Some role-conditional content still exists as TS template literals and must be edited directly:
| What | Where | Why not in markdown |
|---|---|---|
| `ROLE_GUIDELINES` (teacher / assistant / student blocks) | `lib/orchestration/prompt-builder.ts` | Branches by `agentConfig.role` |
| Length targets (100 / 80 / 50 chars per role) | `buildLengthGuidelines` in `lib/orchestration/prompt-builder.ts` | Branches by role |
These may migrate into snippets in a later pass once Phase 2 eval feedback shows which parts need frequent iteration.
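For orientation, the shape of that role-branched content is roughly the following. This is a hypothetical sketch: the names mirror `lib/orchestration/prompt-builder.ts`, but the guideline strings themselves are invented.

```typescript
type Role = 'teacher' | 'assistant' | 'student';

// Sketch of role-keyed prompt fragments; real wording differs.
const ROLE_GUIDELINES: Record<Role, string> = {
  teacher: 'Guide the learner with questions before giving answers.',
  assistant: 'Answer directly and concisely.',
  student: 'Think aloud and ask for help when stuck.',
};

// Sketch using the per-role length targets from the table above
// (100 / 80 / 50 characters).
function buildLengthGuidelines(role: Role): string {
  const target = { teacher: 100, assistant: 80, student: 50 }[role];
  return `Keep replies under ~${target} characters.`;
}
```

Branching on a runtime value like `role` is exactly what the static markdown templates cannot express, which is why this content has stayed in TypeScript.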
## Silent-passthrough gotcha

`interpolateVariables` leaves unknown placeholders unchanged rather than
throwing:

```ts
interpolate('hello {{missing}}', {}) === 'hello {{missing}}'
```

This is intentional for partial-render scenarios, but it means a typo in a
placeholder name ships literal `{{…}}` text to the LLM. Defence:
- Tests in `tests/prompts/templates.test.ts` assert that the fully-rendered agent-system / director / pbl-design prompts contain no surviving `{{…}}` tokens. Keep that check passing when adding variables.
- `{{snippet:name}}` lookups throw on a missing snippet file rather than passing through silently, so a typo like `{{snippet:speach-guidelines}}` fails at load time instead of reaching the LLM.
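A guard in the spirit of that test can be sketched as follows. The helper name is hypothetical; the real assertion lives in `tests/prompts/templates.test.ts`.

```typescript
// Hypothetical guard: fail fast if a rendered prompt still contains
// unresolved {{…}} placeholders left behind by a typoed variable name.
function assertFullyRendered(rendered: string, promptId: string): string {
  const leftovers = rendered.match(/\{\{[^}]+\}\}/g);
  if (leftovers) {
    throw new Error(
      `Prompt "${promptId}" has unresolved placeholders: ${leftovers.join(', ')}`,
    );
  }
  return rendered;
}
```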
## Testing a template change locally

The cheapest feedback loop is the template smoke suite:

```sh
pnpm test tests/prompts
```
For end-to-end runtime behaviour (agent loop + template composition + chat/director integration), run the whiteboard eval harness on a single scenario:

```sh
PORT=3100 pnpm dev &
EVAL_CHAT_MODEL=<provider:model> EVAL_SCORER_MODEL=<provider:model> \
  pnpm eval:whiteboard --base-url http://localhost:3100 \
  --scenario econ-tech-innovation
```
## Loading

`loadPrompt` and `loadSnippet` read from disk on every call. There is no
caching, so markdown edits take effect immediately without restarting any dev
server. Prompt disk I/O is negligible next to the LLM call it feeds.