karlexmarin committed (Co-Authored-By: Claude Opus 4.7, 1M context)
Commit d0a945b · 1 Parent(s): 6ec910c

feat(ui): info tooltips, help modal, more visible verdict box


- Add (ⓘ) info icon next to every form field with hover tooltip
explaining the parameter (theta, T_train, n_kv_heads, etc.)
- Add 📘 Help button in header opening modal with full manual:
what each recipe does, how to add models, parameter glossary,
privacy notes, what verdicts mean.
- Make verdict box much more prominent: bigger text (1.6rem),
emoji prefix (✅/❌/⚠), thicker border, more padding.
- Improve verdict-class detection for all verdict strings
(YES/NO/GO/MEMORY-LIMITED/TINY-MODEL/USE SOFT/etc.)
- Add console.log debugging in renderResult to diagnose verdict
  rendering failures (helps users report bugs).
- Defensive null-checks on verdict/recipe_id/recipe_name fields.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

Files changed (3)
  1. index.html +106 -1
  2. js/main.js +59 -11
  3. style.css +111 -0
index.html CHANGED
@@ -17,15 +17,120 @@
   <p class="subtle">
     All computation happens locally — your data never leaves this page.
   </p>
+  <p style="margin-top: 0.75rem;">
+    <button id="help-btn" type="button">📘 Help & examples</button>
+  </p>
 </header>
 
+<!-- Help modal -->
+<div id="help-modal">
+  <div class="help-content">
+    <button class="help-close" id="help-close">×</button>
+    <h2>📘 TAF Agent — User Manual</h2>
+
+    <h3>What does it do?</h3>
+    <p>Predicts the <strong>practical viability</strong> of any transformer LLM <em>before you spend GPU/$</em>.
+      Answers questions like "will this model work at L=32K?" or "should I train custom or use API?" using
+      deterministic Python formulas (TAF — Thermodynamic Attention Framework).</p>
+
+    <h3>How to use — 2 modes</h3>
+    <p><strong>💬 Ask in plain English</strong> (default): type your question and the in-browser LLM picks
+      the right recipe and runs it. Best for casual exploration.</p>
+    <p><strong>📋 Pick recipe + form</strong>: select a recipe manually, fill in the parameters, run.
+      Best when you want full control or know exactly what you need.</p>
+
+    <h3>The 5 recipes available</h3>
+
+    <p><strong>X-1 Custom training vs API</strong> — compares the cost of training your own model vs paying for API access.</p>
+    <div class="help-example">
+      Try: <em>"Should I train an 8B custom model or use GPT-4o for 50M tokens/month?"</em><br>
+      Answer types: YES (custom) / NO (API) with break-even months.
+    </div>
+
+    <p><strong>X-2 Long Context Viability</strong> — predicts whether a model can serve a target context length reliably.</p>
+    <div class="help-example">
+      Try: <em>"Will Meta-Llama-3-8B handle 32000 tokens for retrieval?"</em><br>
+      Chains: γ_Padé → decomposition → d_horizon → NIAH ceiling → hallucination → KV memory.<br>
+      Verdict: YES / DEGRADED / NO with mitigation if needed.
+    </div>
+
+    <p><strong>X-3 Budget pre-flight</strong> — given a $ budget, which model is feasible to train?</p>
+    <div class="help-example">
+      Try: <em>"I have $5000, what model can I train?"</em><br>
+      Answer: GO / TINY-MODEL / MEMORY-LIMITED with concrete N (params) and D (tokens).
+    </div>
+
+    <p><strong>X-5 Hardware selection</strong> — which GPU should I use to serve at a target throughput?</p>
+    <div class="help-example">
+      Try: <em>"Cheapest hardware to serve Llama-3-8B at 10M tokens/day"</em><br>
+      Answer: best GPU + $/Mtok + capacity vs target.
+    </div>
+
+    <p><strong>X-19 KV Compression decision</strong> — should I use soft decay, hard cutoff, or literature methods?</p>
+    <div class="help-example">
+      Try: <em>"How to compress KV cache for Qwen2.5-7B at 32K?"</em><br>
+      Answer: USE SOFT DECAY / USE D_f CUTOFF / USE LITERATURE METHODS / USE HARD T_train.
+    </div>
+
+    <h3>Adding new models</h3>
+    <ul>
+      <li><strong>Preset list</strong>: a curated list of 11 popular models. Just select from the dropdown.</li>
+      <li><strong>HF Hub fetch</strong>: paste any model id (e.g. <code>Qwen/Qwen2.5-32B-Instruct</code>) and
+        click 📥 Fetch. The browser downloads <code>config.json</code> directly from HuggingFace and
+        fills the form. Works for any public model.</li>
+      <li><strong>Manual</strong>: fill the form fields directly with values from the model card.</li>
+    </ul>
+
+    <h3>The audit chain</h3>
+    <p>Every result shows the full <strong>Computation Chain</strong> — each formula step with its inputs,
+      output, and interpretation. Click any step to expand. Cited section numbers (§26.1, §19.1, etc.) refer
+      to the underlying paper for derivations.</p>
+
+    <h3>The plain-English answer</h3>
+    <p>After the deterministic chain runs, an in-browser LLM (Qwen2.5-0.5B, ~350MB, cached after first load)
+      synthesizes a plain-English summary. The numbers above are <em>always correct</em> (deterministic Python);
+      the synthesis is LLM-generated — verify against the chain if in doubt.</p>
+
+    <h3>Common parameters explained</h3>
+    <ul>
+      <li><strong>θ (rope_theta)</strong>: RoPE base frequency. Higher = more long-range capacity.
+        Typical: 10000 (early models), 500000 (Llama-3), 1000000 (Qwen2.5).</li>
+      <li><strong>T_train</strong>: max context the model was trained on. From <code>max_position_embeddings</code>.</li>
+      <li><strong>T_eval</strong>: <em>your target</em> inference context length. The key knob.</li>
+      <li><strong>n_kv_heads &lt; n_attention_heads</strong>: the model uses GQA (Grouped Query Attention).
+        Reduces KV memory but pushes γ toward Hagedorn.</li>
+      <li><strong>has_SWA</strong>: the model uses Sliding Window Attention (Mistral, gemma-2).</li>
+      <li><strong>n_params</strong>: total parameter count. Threshold ~400M for induction-head emergence.</li>
+    </ul>
+
+    <h3>What to look for in verdicts</h3>
+    <ul>
+      <li><strong style="color:#3fb950;">YES / GO</strong> — proceed with confidence; the numbers support the choice.</li>
+      <li><strong style="color:#d29922;">DEGRADED / TINY-MODEL</strong> — works, but with caveats; read the action.</li>
+      <li><strong style="color:#f85149;">NO / MEMORY-LIMITED</strong> — don't proceed as-is; mitigation provided.</li>
+    </ul>
+
+    <h3>Privacy</h3>
+    <p>Everything runs in your browser. No telemetry, no analytics, no data sent anywhere. Even the LLM
+      runs locally via WebGPU/WebAssembly. Your model_ids and questions never leave this page.</p>
+
+    <h3>Source & paper</h3>
+    <p>Source code: <a href="https://github.com/karlesmarin/tafagent" target="_blank">github.com/karlesmarin/tafagent</a><br>
+      Paper: <em>Marin 2026 — Transformer Thermodynamics</em> (arXiv forthcoming)</p>
+  </div>
+</div>
+
 <main>
   <!-- Status -->
   <section id="status-bar"><div id="status">⏳ Loading Python runtime...</div></section>
 
   <!-- Mode toggle -->
   <section id="mode-section">
-    <h2>🎯 Mode</h2>
+    <h2>🎯 Mode <span class="info"><span class="tooltip"><strong>Two ways to use the tool</strong>.<br>
+      <strong>Ask</strong>: free-form question, browser LLM picks the right recipe.<br>
+      <strong>Recipe</strong>: manual selection with full form control.<br>
+      Same result either way — pick whichever fits your style.
+    </span></span></h2>
     <div class="mode-tabs">
       <button class="mode-btn active" data-mode="ask">💬 Ask in plain English</button>
       <button class="mode-btn" data-mode="recipe">📋 Pick recipe + fill form</button>
js/main.js CHANGED
@@ -131,15 +131,23 @@ function buildDynamicForm(recipe) {
   recipe.params.forEach(name => {
     const div = document.createElement("div");
     div.className = "form-field";
-    const label = document.createElement("label");
-    label.textContent = paramLabel(name);
-    label.htmlFor = `param_${name}`;
+
+    const labelWrap = document.createElement("label");
+    labelWrap.htmlFor = `param_${name}`;
+    labelWrap.innerHTML = paramLabel(name);
+    if (PARAM_TOOLTIPS[name]) {
+      const info = document.createElement("span");
+      info.className = "info";
+      info.innerHTML = `<span class="tooltip">${PARAM_TOOLTIPS[name]}</span>`;
+      labelWrap.appendChild(info);
+    }
+    div.appendChild(labelWrap);
+
     const input = document.createElement("input");
     input.type = "text";
     input.id = `param_${name}`;
     input.dataset.param = name;
     input.value = defaults[name] !== undefined ? String(defaults[name]) : "";
-    div.appendChild(label);
     div.appendChild(input);
     container.appendChild(div);
   });
@@ -161,6 +169,29 @@ function paramLabel(name) {
   return labels[name] || name;
 }
 
+const PARAM_TOOLTIPS = {
+  theta: "<strong>RoPE base frequency</strong>. From <code>config.rope_theta</code>. Higher = more long-range capacity. Typical: <code>10000</code> early models, <code>500000</code> Llama-3, <code>1000000</code> Qwen2.5.",
+  T_train: "<strong>Max context the model was trained on</strong>. From <code>max_position_embeddings</code>. The model has never seen positions beyond this; extrapolating much further usually fails.",
+  T_eval: "<strong>Your target inference context length</strong>. The key knob. The whole question is: will the model behave well at <em>this</em> length?",
+  n_attention_heads: "Number of query heads. From <code>num_attention_heads</code>.",
+  n_kv_heads: "Number of K/V heads. If &lt; n_attention_heads → model uses GQA (Grouped Query Attention). Smaller = more memory-efficient KV cache but pushes γ toward Hagedorn boundary.",
+  d_head: "Per-head dimension. Typically <code>hidden_size / n_attention_heads</code>. Common: 64, 80, 128.",
+  n_layers: "Number of transformer layers. From <code>num_hidden_layers</code>.",
+  n_params: "<strong>Total parameter count</strong>. Use scientific notation: <code>8e9</code> for 8B. Threshold ~400M is the induction-head emergence boundary (sign-flip in Δγ).",
+  has_SWA: "Sliding Window Attention. <code>true</code> for Mistral, gemma-2, phi-3. SWA lowers γ_decomposition by ~0.21.",
+  N_params: "Same as n_params. Total parameter count, scientific notation (e.g. <code>8e9</code>).",
+  D_tokens: "Number of training tokens. Leave empty to use Chinchilla 20:1 default (D = 20·N).",
+  gpu: "GPU model from the catalog. Options: H100 SXM, H100 PCIe, H200, B200, A100 80GB, A100 40GB, L40S, MI300X, RTX 4090, RTX 5090, RTX 5060Ti.",
+  n_gpus: "Number of GPUs in your training/serving cluster.",
+  mfu: "<strong>Model FLOPs Utilization</strong>. Realistic fraction of peak FLOPs achieved. Typical: 0.4-0.5 for well-tuned. Default 0.45.",
+  api_model: "Frontier API to compare against. Options: GPT-4o, GPT-4o-mini, Claude-Opus-4, Claude-Sonnet-4, Claude-Haiku-4, Gemini-1.5-Pro, DeepSeek-V3, Llama-3.3-70B (Together).",
+  monthly_tokens_M: "Expected monthly token volume <em>in millions</em>. e.g. <code>10</code> = 10 million tokens/month.",
+  USD_budget: "Your training budget in US dollars (no symbol). e.g. <code>5000</code> for $5K.",
+  bytes_per_weight: "Memory per parameter. BF16/FP16 = 2, INT8 = 1, INT4 = 0.5.",
+  target_tokens_per_day: "How many tokens/day you need to serve. e.g. <code>10000000</code> = 10M tokens/day.",
+  concurrent_users: "Simultaneous concurrent requests. Affects KV cache memory needed.",
+};
+
 function getRecipeDefaults(recipeId) {
   const D = {
     "X-1": { N_params: "8e9", D_tokens: "", gpu: "H100 SXM", n_gpus: 8, mfu: 0.45,
@@ -405,6 +436,7 @@ json.dumps(result)
 }
 
 function renderResult(r) {
+  console.log("[TAF] renderResult called with:", r);
   if (r.error) {
     $("verdict-box").className = "verdict-no";
     $("verdict-box").innerHTML = `<strong>Error</strong>: ${escapeHtml(r.error)}`;
@@ -412,21 +444,28 @@ function renderResult(r) {
     return;
   }
   const vBox = $("verdict-box");
+  if (!vBox) {
+    console.error("[TAF] verdict-box element not found!");
+    return;
+  }
+  const verdictStr = String(r.verdict || "UNKNOWN");
   let vClass = "";
-  if (r.verdict.startsWith("YES") || r.verdict === "GO") vClass = "verdict-yes";
-  else if (r.verdict.startsWith("NO")) vClass = "verdict-no";
+  if (verdictStr.startsWith("YES") || verdictStr === "GO" || verdictStr.startsWith("USE SOFT")) vClass = "verdict-yes";
+  else if (verdictStr.startsWith("NO") || verdictStr.startsWith("MEMORY") || verdictStr === "TINY-MODEL") vClass = "verdict-no";
   else vClass = "verdict-degraded";
   vBox.className = vClass;
+  const verdictEmoji = vClass === "verdict-yes" ? "✅" : (vClass === "verdict-no" ? "❌" : "⚠");
   vBox.innerHTML = `
-    <div style="display:flex; justify-content:space-between; align-items:center; margin-bottom:0.5rem;">
-      <div style="font-size:1.3rem; font-weight:700;">${escapeHtml(r.verdict)}</div>
-      <div class="recipe-tag">${r.recipe_id} — ${escapeHtml(r.recipe_name)}</div>
+    <div style="display:flex; justify-content:space-between; align-items:center; margin-bottom:0.75rem; gap:1rem; flex-wrap:wrap;">
+      <div style="font-size:1.6rem; font-weight:800;">${verdictEmoji} ${escapeHtml(verdictStr)}</div>
+      <div class="recipe-tag">${escapeHtml(r.recipe_id || "")} — ${escapeHtml(r.recipe_name || "")}</div>
     </div>
-    <div><strong>Reason:</strong> ${escapeHtml(r.reason)}</div>
+    <div style="margin-bottom:0.5rem;"><strong>Reason:</strong> ${escapeHtml(r.reason || "(none)")}</div>
     ${r.mitigation && r.mitigation !== "None required." && r.mitigation !== "None — proceed with Chinchilla-optimal recipe."
-      ? `<div style="margin-top:0.5rem;"><strong>Action:</strong> ${escapeHtml(r.mitigation)}</div>`
+      ? `<div><strong>Action:</strong> ${escapeHtml(r.mitigation)}</div>`
      : ""}
   `;
+  console.log("[TAF] verdict-box populated with class:", vClass, "verdict:", verdictStr);
 
   const cBox = $("chain-box");
   cBox.innerHTML = "";
@@ -567,6 +606,15 @@ function formatResultPlain(r) {
   return String(r);
 }
 
+// ════════════════════════════════════════════════════════════════════
+// Help modal
+// ════════════════════════════════════════════════════════════════════
+$("help-btn").addEventListener("click", () => $("help-modal").classList.add("open"));
+$("help-close").addEventListener("click", () => $("help-modal").classList.remove("open"));
+$("help-modal").addEventListener("click", (e) => {
+  if (e.target.id === "help-modal") $("help-modal").classList.remove("open");
+});
+
 // ════════════════════════════════════════════════════════════════════
 // Bootstrap
 // ════════════════════════════════════════════════════════════════════
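The widened verdict-class branches in `renderResult` are easy to lift into a pure helper for unit testing. Below is a sketch mirroring the branch order added in this commit; the names `classifyVerdict` and `emojiFor` are illustrative and not part of the codebase:

```javascript
// Sketch: verdict string → CSS class, mirroring the branches added to
// renderResult. Branch order matters: the YES/GO/USE SOFT family wins
// first, then the failure strings; everything else is "degraded".
function classifyVerdict(verdict) {
  const s = String(verdict || "UNKNOWN"); // same defensive default as renderResult
  if (s.startsWith("YES") || s === "GO" || s.startsWith("USE SOFT")) return "verdict-yes";
  if (s.startsWith("NO") || s.startsWith("MEMORY") || s === "TINY-MODEL") return "verdict-no";
  return "verdict-degraded";
}

// Emoji prefix keyed off the class, as in the commit.
function emojiFor(vClass) {
  return vClass === "verdict-yes" ? "✅" : (vClass === "verdict-no" ? "❌" : "⚠");
}

console.log(classifyVerdict("MEMORY-LIMITED")); // → verdict-no
```

Factoring the mapping out like this would also let the console.log diagnostics report the class decision independently of DOM state.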
style.css CHANGED
@@ -133,6 +133,117 @@ button:disabled { background: #444; cursor: not-allowed; }
   border-radius: 4px;
 }
 
+/* ─── Info icon + tooltip system ─── */
+.info {
+  display: inline-flex;
+  align-items: center;
+  justify-content: center;
+  width: 16px; height: 16px;
+  border-radius: 50%;
+  background: var(--bg-input);
+  color: var(--accent);
+  border: 1px solid var(--border);
+  font-size: 11px;
+  font-style: italic;
+  font-family: serif;
+  cursor: help;
+  margin-left: 0.3rem;
+  position: relative;
+  user-select: none;
+}
+.info:hover { border-color: var(--accent); background: var(--accent-dim); color: white; }
+.info::before { content: "i"; }
+
+.tooltip {
+  visibility: hidden;
+  opacity: 0;
+  transition: opacity 0.15s, visibility 0.15s;
+  position: absolute;
+  bottom: calc(100% + 8px);
+  left: 50%;
+  transform: translateX(-50%);
+  background: #1a1f29;
+  color: var(--fg);
+  border: 1px solid var(--accent);
+  border-radius: 6px;
+  padding: 0.6rem 0.8rem;
+  font-size: 0.85rem;
+  font-family: -apple-system, BlinkMacSystemFont, sans-serif;
+  font-style: normal;
+  width: 280px;
+  z-index: 100;
+  text-align: left;
+  line-height: 1.5;
+  box-shadow: 0 4px 16px rgba(0,0,0,0.4);
+  pointer-events: none;
+}
+.tooltip strong { color: var(--accent); }
+.tooltip code {
+  background: var(--bg);
+  padding: 0.1rem 0.3rem;
+  border-radius: 3px;
+  font-family: monospace;
+  font-size: 0.85em;
+}
+.info:hover .tooltip { visibility: visible; opacity: 1; }
+.tooltip::after {
+  content: "";
+  position: absolute;
+  top: 100%; left: 50%;
+  margin-left: -6px;
+  border-width: 6px;
+  border-style: solid;
+  border-color: var(--accent) transparent transparent transparent;
+}
+
+/* Make verdict box more prominent */
+#verdict-box {
+  font-size: 1.1rem;
+  padding: 1.5rem;
+  border-radius: 8px;
+  border-left: 6px solid;
+  margin-bottom: 1rem;
+  min-height: 80px;
+}
+.verdict-yes { background: rgba(63, 185, 80, 0.15); border-color: var(--success); }
+.verdict-no { background: rgba(248, 81, 73, 0.15); border-color: var(--danger); }
+.verdict-degraded { background: rgba(210, 153, 34, 0.15); border-color: var(--warning); }
+
+/* Help modal */
+#help-btn {
+  background: transparent; color: var(--accent);
+  border: 1px solid var(--accent); border-radius: 6px;
+  padding: 0.3rem 0.7rem; font-size: 0.85rem;
+}
+#help-btn:hover { background: var(--accent); color: white; }
+#help-modal {
+  display: none; position: fixed; inset: 0;
+  background: rgba(0,0,0,0.7); z-index: 1000;
+  padding: 2rem; overflow-y: auto;
+}
+#help-modal.open { display: flex; align-items: flex-start; justify-content: center; }
+.help-content {
+  background: var(--bg-card); color: var(--fg);
+  max-width: 800px; width: 100%;
+  border: 1px solid var(--accent); border-radius: 12px;
+  padding: 2rem; position: relative;
+}
+.help-close {
+  position: absolute; top: 1rem; right: 1rem;
+  background: var(--bg-input); border: 1px solid var(--border);
+  color: var(--fg); border-radius: 50%;
+  width: 32px; height: 32px; cursor: pointer; font-size: 1.2rem;
+}
+.help-content h3 { color: var(--accent); margin-top: 1.5rem; }
+.help-content code {
+  background: var(--bg-input); padding: 0.1rem 0.4rem;
+  border-radius: 3px; font-family: monospace;
+}
+.help-example {
+  background: var(--bg-input); border-left: 3px solid var(--accent);
+  padding: 0.75rem 1rem; margin: 0.5rem 0; border-radius: 4px;
+}
+
 .mode-tabs { display: flex; gap: 0.5rem; margin-bottom: 0.75rem; flex-wrap: wrap; }
 .mode-btn {
   background: var(--bg-input); color: var(--fg-dim);
  background: var(--bg-input); color: var(--fg-dim);