# `lib/prompts`
File-based prompt loader + templates shared by both the generation pipeline and
the runtime orchestration layer.
## Directory layout
```
lib/prompts/
├── loader.ts            - file I/O + template processing
├── index.ts             - public API (loadPrompt, buildPrompt, …) + PROMPT_IDS
├── types.ts             - PromptId / SnippetId string literal unions
├── templates/
│   └── <prompt-id>/
│       ├── system.md    - required
│       └── user.md      - optional (mostly for offline generation prompts)
└── snippets/
    └── <snippet-id>.md  - reusable blocks referenced via {{snippet:…}}
```
## Template syntax
Three kinds of placeholder:
| Syntax | Semantics | Resolved by |
|---|---|---|
| `{{variableName}}` | Value is provided by the caller via `buildPrompt(id, vars)` | `interpolateVariables` in `loader.ts` |
| `{{snippet:snippet-name}}` | File content is spliced in at load time | `processSnippets` in `loader.ts` |
| `{{#if conditionName}}...{{/if}}` | Content is included only when `conditionName` is truthy in the template variables | `processConditionalBlocks` in `loader.ts` |
Processing order is **snippet includes first, then conditional blocks, then
variable interpolation**, so snippets may themselves contain `{{#if}}`
blocks and `{{variableName}}` placeholders if the caller provides the value.
Conditional blocks read from the same `variables` record passed to
`buildPrompt`; no separate conditions object is needed.
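That ordering can be sketched in a few lines. This is an illustrative reimplementation, not the actual `loader.ts` code; snippet lookup here uses an in-memory map instead of the filesystem:

```typescript
// Illustrative sketch of the processing order (not the real loader.ts).
type Vars = Record<string, string | boolean | undefined>;

function render(template: string, vars: Vars, snippets: Record<string, string>): string {
  // 1. Snippet includes: spliced in first, so snippets may themselves
  //    contain {{#if}} blocks and {{variableName}} placeholders.
  let out = template.replace(/\{\{snippet:([\w-]+)\}\}/g, (_, name) => {
    const body = snippets[name];
    if (body === undefined) throw new Error(`Unknown snippet: ${name}`); // throws, unlike variables
    return body;
  });
  // 2. Conditional blocks: read from the same vars record.
  out = out.replace(/\{\{#if (\w+)\}\}([\s\S]*?)\{\{\/if\}\}/g, (_, cond, body) =>
    vars[cond] ? body : ''
  );
  // 3. Variable interpolation: unknown names pass through unchanged.
  out = out.replace(/\{\{(\w+)\}\}/g, (match, name) =>
    typeof vars[name] === 'string' ? (vars[name] as string) : match
  );
  return out;
}
```

With a snippet of `{{#if formal}}Dear {{/if}}{{name}}` under the name `greeting`, `render('{{snippet:greeting}}', { name: 'Ada', formal: true }, snippets)` yields `'Dear Ada'`.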
## Naming conventions
- **Placeholder names use `camelCase`.** Example: `{{agentName}}`, `{{stateContext}}`.
- **Template IDs use `kebab-case`.** Example: `agent-system`, `pbl-design`.
- `lib/prompts/templates/slide-content/{system,user}.md` still use legacy
  `snake_case` placeholders (`{{canvas_width}}`, `{{canvas_height}}`). These
  predate the camelCase convention; don't imitate them when writing new templates.
## Adding a new prompt
1. Create `lib/prompts/templates/<new-id>/system.md` (and `user.md` if needed).
2. Add `<new-id>` to the `PromptId` union in `types.ts`.
3. Add `NEW_ID: '<new-id>'` to the `PROMPT_IDS` constant in `index.ts`
(the `satisfies Record<string, PromptId>` clause enforces that the value
exists in the union).
4. Call `buildPrompt(PROMPT_IDS.NEW_ID, vars)` from the consuming module.
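For a hypothetical new id `lesson-recap`, steps 2-3 would look roughly like this (the other union members shown are the prompts named elsewhere in this README; the real unions contain more entries):

```typescript
// types.ts: extend the PromptId union ('lesson-recap' is hypothetical).
type PromptId = 'agent-system' | 'director' | 'pbl-design' | 'lesson-recap';

// index.ts: the satisfies clause rejects any value that is not in the
// union, so a typo like 'lesson-recpa' fails to compile.
const PROMPT_IDS = {
  AGENT_SYSTEM: 'agent-system',
  DIRECTOR: 'director',
  PBL_DESIGN: 'pbl-design',
  LESSON_RECAP: 'lesson-recap',
} as const satisfies Record<string, PromptId>;
```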
## Still in TypeScript (not yet in templates)
Not every prompt fragment lives in markdown. Some role-conditional content
still exists as TS template literals and needs editing directly:
| What | Where | Why not in markdown |
|---|---|---|
| `ROLE_GUIDELINES` (teacher / assistant / student blocks) | `lib/orchestration/prompt-builder.ts` | Branches by `agentConfig.role` |
| Length targets (100 / 80 / 50 chars per role) | `buildLengthGuidelines` in `lib/orchestration/prompt-builder.ts` | Branches by role |
These may migrate into snippets in a later pass once Phase 2 eval feedback
shows which parts need frequent iteration.
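The branching shape that keeps these fragments in TS rather than markdown looks roughly like this. It is a hypothetical simplification: the role-to-length mapping is assumed from the table above, and `buildLengthGuidelinesSketch` is not the real function in `prompt-builder.ts`:

```typescript
// Hypothetical simplification: these fragments branch on a runtime role,
// which the {{variableName}} template syntax cannot express directly.
type Role = 'teacher' | 'assistant' | 'student';

// Assumed mapping of the 100 / 80 / 50 character targets onto roles.
const LENGTH_TARGETS: Record<Role, number> = {
  teacher: 100,
  assistant: 80,
  student: 50,
};

function buildLengthGuidelinesSketch(role: Role): string {
  return `Keep replies under ${LENGTH_TARGETS[role]} characters.`;
}
```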
## Silent-passthrough gotcha
`interpolateVariables` leaves unknown placeholders **unchanged** rather than
throwing:
```ts
interpolateVariables('hello {{missing}}', {}) === 'hello {{missing}}'
```
This is intentional for partial-render scenarios but means a typo in a
placeholder name ships literal `{{…}}` text to the LLM. Defence:
- Tests in `tests/prompts/templates.test.ts` assert that the fully-rendered
  agent-system / director / pbl-design prompts contain no surviving
  `{{…}}` tokens. Keep that check passing when adding variables.
- `{{snippet:name}}` lookups **throw** on a missing snippet file rather than
passing through silently, so a typo like `{{snippet:speach-guidelines}}`
fails at load time instead of reaching the LLM.
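A no-surviving-tokens assertion in the spirit of that test can be a single regex over the rendered prompt (a sketch of the idea, not the actual test file):

```typescript
// Sketch: fail if any {{...}} placeholder survives rendering.
// `rendered` stands in for the output of buildPrompt(...).
function assertFullyRendered(rendered: string): void {
  const leftovers = rendered.match(/\{\{[^}]*\}\}/g);
  if (leftovers) {
    throw new Error(`Unrendered placeholders: ${leftovers.join(', ')}`);
  }
}
```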
## Testing a template change locally
The cheapest feedback loop is the template smoke suite:
```bash
pnpm test tests/prompts
```
For end-to-end runtime behaviour (agent loop + template composition +
chat/director integration), use the whiteboard eval harness on one scenario:
```bash
PORT=3100 pnpm dev &
EVAL_CHAT_MODEL=<provider:model> EVAL_SCORER_MODEL=<provider:model> \
pnpm eval:whiteboard --base-url http://localhost:3100 \
--scenario econ-tech-innovation
```
## Loading
`loadPrompt` and `loadSnippet` read from disk on every call. No caching:
markdown edits take effect immediately without restarting any dev server.
Prompt disk I/O is negligible next to the LLM call it feeds.