Commit c969a03 · Parent: 5d885d7
v0.4 help/info: 7 modes (was 4) + v0.4 session-31 section, 4 langs

- modes.tip: "Four/Cuatro/Quatre/四种" → "Seven/Siete/Sept/七种"; mention 8 recipes
- help.modes.title: "How to use — 7 modes" in EN/ES/FR/ZH
- New help.modes.{inspector,diagnose,phase} entries × 4 langs
- index.html: 3 new <p> for inspector/diagnose/phase modes
- New v04.section.intro key + index.html section listing 4 session-31
  diagnostics (Architectural Concentration, PDI, 4-bit shift, critical
  exponents bundle) — visible in help modal across 4 langs

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

- index.html +15 -1
- js/i18n.js +24 -8
index.html
CHANGED

@@ -71,11 +71,14 @@
 <em>before you spend GPU/$</em>. Answers questions like "will this model work at L=32K?" or
 "should I train custom or use API?" using deterministic Python formulas (TAF — Thermodynamic Attention Framework).</p>

-<h3 data-i18n="help.modes.title">How to use — 4 modes</h3>
+<h3 data-i18n="help.modes.title">How to use — 7 modes</h3>
 <p data-i18n="help.modes.profile"><strong>📇 Profile</strong>: paste model id → all recipes at once = TAF Card. <strong>Best starting point</strong>.</p>
 <p data-i18n="help.modes.compare"><strong>🆚 Compare</strong>: 2-3 models side-by-side on same recipe. Best when choosing between candidates.</p>
+<p data-i18n="help.modes.inspector"><strong>🔍 Inspect config</strong>: paste raw <code>config.json</code> → tool parses + runs full Profile. For private models, in-development configs, or models not yet on HF Hub.</p>
 <p data-i18n="help.modes.ask"><strong>💬 Ask plain English</strong>: free-form question, in-browser LLM picks the recipe. Best for casual exploration.</p>
 <p data-i18n="help.modes.recipe"><strong>📋 Recipe + form</strong>: manual selection, full parameter control. Best when you want exact control.</p>
+<p data-i18n="help.modes.diagnose"><strong>🩺 Diagnose CLI</strong>: generate Python command to measure γ on your local machine (transformers + numpy). Fast ≈5 min CPU; full ≈20–60 min GPU. Output JSON re-uploadable via Inspect.</p>
+<p data-i18n="help.modes.phase"><strong>📊 Phase diagram</strong>: scatter plot of 23 panel models on (log θ, γ) plane. Hagedorn line γ=1 separates Phase A from Phase B. Click a dot to load that model into the Recipe form.</p>

 <h3 data-i18n="help.recipes.title">The 8 recipes available</h3>

@@ -137,6 +140,17 @@

 <p data-i18n="help.v04.constants" style="font-size: 0.9em; opacity: 0.85;"><strong>γ-cluster on famous constants</strong> (intriguing, n=4): CodeLlama-13b γ=0.382 ≈ 1−1/φ (golden conjugate, err 0.0003); pythia-1.4b γ=0.705 ≈ 1/√2; Llama-2-7b γ=0.287 ≈ 1−1/√2; Mistral-Nemo γ=0.428 ≈ log_10(e). Caveat: could be coincidence.</p>

+<h3 style="margin-top: 1.5em;" data-i18n="v04.title">🆕 v0.4 — New diagnostics (session 31)</h3>
+<p style="opacity: 0.85;"><em data-i18n="v04.section.intro">Four new diagnostic functions derived in session 31 (2026-04-30) from cross-of-crosses formula games + Socratic interrogation. Available in <code>taf_browser.py</code> §33.</em></p>
+
+<p><strong data-i18n="v04.arch.label">Architectural Concentration</strong> — <span data-i18n="v04.arch.desc">γ_text ≈ γ_Padé − 0.012·n_kv. Cross-panel correlational law (R²=0.30). Caveat: not per-model predictor.</span></p>
+
+<p><strong data-i18n="v04.pdi.label">PDI — Padé Deviation Index</strong> — <span data-i18n="v04.pdi.desc">PDI = d_horizon_obs/T_eval. Traffic light: green (≈1), orange (>>1), yellow (<<1), red (Phase B negative).</span></p>
+
+<p><strong data-i18n="v04.4bit.label">4-bit Shift Predictor</strong> — <span data-i18n="v04.4bit.desc">MHA: R²(bf16)<0.9 → γ rises; R²>0.99 → γ drops. GQA: precision-robust regardless.</span></p>
+
+<p><strong data-i18n="v04.crit.label">Critical Exponents Bundle</strong> — <span data-i18n="v04.crit.desc">ν_c, β_c, η_c (=γ−1, CORRECTED), α_C, γ_susc with AM-GM minimum at γ=1−1/√2≈0.293.</span></p>
+
 <h3 data-i18n="help.add_models.title">Adding new models (3 ways)</h3>
 <ul>
 <li data-i18n="help.add_models.preset"><strong>Preset list</strong>: 11 popular models curated. Just select from dropdown.</li>
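The PDI entry above defines a ratio and a four-color verdict but not the band cutoffs. A minimal sketch of that traffic light, assuming illustrative thresholds — the function name, signature, and `band` value are hypothetical, not the actual `taf_browser.py` §33 code:

```python
def pdi_traffic_light(d_horizon_obs: float, t_eval: float, band: float = 2.0):
    """Classify PDI = d_horizon_obs / T_eval into the four bands quoted
    in the help text. `band` (hypothetical cutoff) sets how far from 1
    the ratio may drift before it stops counting as "green"."""
    pdi = d_horizon_obs / t_eval
    if pdi < 0:
        color = "red"        # Phase B: negative observed horizon
    elif pdi > band:
        color = "orange"     # PDI >> 1: horizon far longer than predicted
    elif pdi < 1.0 / band:
        color = "yellow"     # PDI << 1: horizon far shorter than predicted
    else:
        color = "green"      # PDI ≈ 1: observation matches the Padé prediction
    return pdi, color
```

For example, an observed horizon of 100 at T_eval = 10 gives PDI = 10 and an "orange" verdict under these assumed cutoffs.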
js/i18n.js
CHANGED

@@ -157,7 +157,7 @@ export const TRANSLATIONS = {
     "common.no": "No",

     // Mode tooltips
-    "modes.tip": "<strong>
+    "modes.tip": "<strong>Seven ways to use the tool</strong>.<br><strong>📇 Profile</strong>: paste a model id → all 8 recipes at once = TAF Card.<br><strong>🆚 Compare</strong>: 2-3 models side-by-side on one recipe.<br><strong>🔍 Inspect config</strong>: paste raw config.json → full Profile.<br><strong>💬 Ask</strong>: free-form question, browser LLM picks the recipe.<br><strong>📋 Recipe</strong>: manual selection with full form control.<br><strong>🩺 Diagnose CLI</strong>: generate Python command for local γ measurement.<br><strong>📊 Phase diagram</strong>: 23-model panel on (log θ, γ) plane.",
     "profile.tip": "<strong>One-click full diagnosis</strong>. Paste any HF model id (or pick preset). Tool runs all 5 recipes (long-context, KV-compression, custom-vs-API, budget, hardware) and produces a single <strong>TAF Card</strong> with verdict per dimension + key numbers + architecture classification.<br><br><strong>Use case</strong>: \"I'm evaluating Qwen2.5-32B for production — what's its full viability profile?\" → paste id → Profile → done.",
     "compare.tip": "<strong>Same recipe, multiple models</strong>. Pick 2-3 candidate models and one recipe. See verdicts in a single comparison table.<br><br><strong>Use case</strong>: \"I need long-context retrieval at 16K — which is best: Llama-3-8B, Mistral-7B, or Qwen-7B?\" → pick 3 + X-2 + 16K → see winner.",

@@ -165,11 +165,14 @@ export const TRANSLATIONS = {
     "help.title": "📘 TAF Agent — User Manual",
     "help.what.title": "What does it do?",
     "help.what.body": "Predicts <strong>practical viability</strong> of any transformer LLM <em>before you spend GPU/$</em>. Answers questions like \"will this model work at L=32K?\" or \"should I train custom or use API?\" using deterministic Python formulas (TAF — Thermodynamic Attention Framework).",
-    "help.modes.title": "How to use — 4 modes",
+    "help.modes.title": "How to use — 7 modes",
     "help.modes.profile": "<strong>📇 Profile</strong>: paste model id → all recipes at once = TAF Card. <strong>Best starting point</strong>.",
     "help.modes.compare": "<strong>🆚 Compare</strong>: 2-3 models side-by-side on same recipe. Best when choosing between candidates.",
+    "help.modes.inspector": "<strong>🔍 Inspect config</strong>: paste raw <code>config.json</code> → tool parses + runs full Profile. For private models, in-development configs, or models not yet on HF Hub.",
     "help.modes.ask": "<strong>💬 Ask plain English</strong>: free-form question, in-browser LLM picks the recipe. Best for casual exploration.",
     "help.modes.recipe": "<strong>📋 Recipe + form</strong>: manual selection, full parameter control. Best when you want exact control.",
+    "help.modes.diagnose": "<strong>🩺 Diagnose CLI</strong>: generate Python command to measure γ on your local machine (transformers + numpy). Fast ≈5 min CPU; full ≈20–60 min GPU. Output JSON re-uploadable via Inspect.",
+    "help.modes.phase": "<strong>📊 Phase diagram</strong>: scatter plot of 23 panel models on (log θ, γ) plane. Hagedorn line γ=1 separates Phase A from Phase B. Click a dot to load that model into Recipe form.",
     "help.recipes.title": "The 8 recipes available",
     "help.recipe.x1.title": "<strong>X-1 Custom training vs API</strong> — compares cost of training your own model vs paying for API access.",
     "help.recipe.x1.example": "Try: <em>\"Should I train an 8B custom model or use GPT-4o for 50M tokens/month?\"</em><br>Answer types: YES (custom) / NO (API) with break-even months.",

@@ -220,6 +223,7 @@ export const TRANSLATIONS = {

     // §33 v0.4 (session 31, 2026-04-30) — new diagnostic functions
     "v04.title": "🆕 v0.4 — New diagnostics (session 31)",
+    "v04.section.intro": "Four new diagnostic functions derived in session 31 (2026-04-30) from cross-of-crosses formula games + Socratic interrogation. Available in <code>taf_browser.py</code> §33.",
     "v04.arch.label": "Architectural Concentration",
     "v04.arch.desc": "γ_text ≈ γ_Padé − 0.012·n_kv. Cross-panel correlational law (R²=0.30). Caveat: not per-model predictor.",
     "v04.pdi.label": "PDI — Padé Deviation Index",

@@ -236,6 +240,7 @@ export const TRANSLATIONS = {
  es: {
    // §33 v0.4 (sesión 31, 2026-04-30) — nuevas funciones diagnósticas
    "v04.title": "🆕 v0.4 — Nuevos diagnósticos (sesión 31)",
+    "v04.section.intro": "Cuatro nuevas funciones diagnósticas derivadas en sesión 31 (2026-04-30) desde juegos de fórmulas cross-of-crosses + interrogación socrática. Disponibles en <code>taf_browser.py</code> §33.",
    "v04.arch.label": "Concentración Arquitectural",
    "v04.arch.desc": "γ_text ≈ γ_Padé − 0.012·n_kv. Ley correlacional cross-panel (R²=0.30). Caveat: no es predictor per-model.",
    "v04.pdi.label": "PDI — Índice de Desviación de Padé",

@@ -391,7 +396,7 @@ export const TRANSLATIONS = {
    "common.no": "No",

    // Tooltips de modos
-    "modes.tip": "<strong>
+    "modes.tip": "<strong>Siete formas de usar la herramienta</strong>.<br><strong>📇 Perfil</strong>: pega un id → las 8 recetas a la vez = TAF Card.<br><strong>🆚 Comparar</strong>: 2-3 modelos lado a lado en una receta.<br><strong>🔍 Inspeccionar config</strong>: pega config.json crudo → Perfil completo.<br><strong>💬 Pregunta</strong>: pregunta libre, el LLM del navegador elige la receta.<br><strong>📋 Receta</strong>: selección manual con control total del formulario.<br><strong>🩺 Diagnóstico CLI</strong>: genera comando Python para medir γ localmente.<br><strong>📊 Diagrama de fase</strong>: panel de 23 modelos en plano (log θ, γ).",
    "profile.tip": "<strong>Diagnóstico completo en un click</strong>. Pega cualquier id de modelo HF (o elige preset). La herramienta ejecuta las 5 recetas (contexto largo, compresión KV, custom vs API, presupuesto, hardware) y produce una única <strong>TAF Card</strong> con veredicto por dimensión + números clave + clasificación arquitectónica.<br><br><strong>Caso de uso</strong>: \"Estoy evaluando Qwen2.5-32B para producción — ¿cuál es su perfil completo de viabilidad?\" → pega id → Perfilar → listo.",
    "compare.tip": "<strong>Misma receta, múltiples modelos</strong>. Elige 2-3 modelos candidatos y una receta. Ve los veredictos en una única tabla comparativa.<br><br><strong>Caso de uso</strong>: \"Necesito recuperación de contexto largo a 16K — ¿cuál es mejor: Llama-3-8B, Mistral-7B o Qwen-7B?\" → elige 3 + X-2 + 16K → ve el ganador.",

@@ -399,11 +404,14 @@ export const TRANSLATIONS = {
    "help.title": "📘 TAF Agent — Manual de Usuario",
    "help.what.title": "¿Qué hace?",
    "help.what.body": "Predice la <strong>viabilidad práctica</strong> de cualquier LLM transformer <em>antes de gastar GPU/€</em>. Responde preguntas como \"¿funcionará este modelo a L=32K?\" o \"¿debería entrenar custom o usar API?\" usando fórmulas Python deterministas (TAF — Thermodynamic Attention Framework).",
-    "help.modes.title": "Cómo usar — 4 modos",
+    "help.modes.title": "Cómo usar — 7 modos",
    "help.modes.profile": "<strong>📇 Perfilar</strong>: pega id de modelo → todas las recetas a la vez = TAF Card. <strong>Mejor punto de inicio</strong>.",
    "help.modes.compare": "<strong>🆚 Comparar</strong>: 2-3 modelos lado a lado en la misma receta. Mejor al elegir entre candidatos.",
+    "help.modes.inspector": "<strong>🔍 Inspeccionar config</strong>: pega <code>config.json</code> crudo → la herramienta lo parsea y ejecuta el Perfil completo. Para modelos privados, configs en desarrollo, o modelos aún no en HF Hub.",
    "help.modes.ask": "<strong>💬 Pregunta libre</strong>: pregunta en lenguaje natural, el LLM del navegador elige la receta. Mejor para exploración casual.",
    "help.modes.recipe": "<strong>📋 Receta + formulario</strong>: selección manual, control total de parámetros. Mejor cuando quieres control exacto.",
+    "help.modes.diagnose": "<strong>🩺 Diagnóstico CLI</strong>: genera comando Python para medir γ en tu máquina local (transformers + numpy). Rápido ≈5 min CPU; completo ≈20–60 min GPU. JSON resultado re-subible por Inspect.",
+    "help.modes.phase": "<strong>📊 Diagrama de fase</strong>: scatter de 23 modelos del panel en plano (log θ, γ). Línea Hagedorn γ=1 separa Fase A de Fase B. Click en un punto para cargar ese modelo en el formulario de Receta.",
    "help.recipes.title": "Las 8 recetas disponibles",
    "help.recipe.x1.title": "<strong>X-1 Entrenamiento custom vs API</strong> — compara coste de entrenar tu propio modelo vs pagar API.",
    "help.recipe.x1.example": "Prueba: <em>\"¿Entrenar 8B custom o usar GPT-4o para 50M tokens/mes?\"</em><br>Respuestas: SÍ (custom) / NO (API) con meses para break-even.",

@@ -459,6 +467,7 @@ export const TRANSLATIONS = {
  fr: {
    // §33 v0.4 (session 31, 2026-04-30) — nouvelles fonctions de diagnostic
    "v04.title": "🆕 v0.4 — Nouveaux diagnostics (session 31)",
+    "v04.section.intro": "Quatre nouvelles fonctions diagnostiques dérivées en session 31 (2026-04-30) depuis jeux de formules cross-of-crosses + interrogation socratique. Disponibles dans <code>taf_browser.py</code> §33.",
    "v04.arch.label": "Concentration Architecturale",
    "v04.arch.desc": "γ_text ≈ γ_Padé − 0.012·n_kv. Loi corrélationnelle cross-panel (R²=0.30). Caveat : pas un prédicteur par-modèle.",
    "v04.pdi.label": "PDI — Indice de Déviation de Padé",

@@ -613,7 +622,7 @@ export const TRANSLATIONS = {
    "common.no": "Non",

    // Tooltips des modes
-    "modes.tip": "<strong>
+    "modes.tip": "<strong>Sept façons d'utiliser l'outil</strong>.<br><strong>📇 Profil</strong>: collez un id → les 8 recettes à la fois = TAF Card.<br><strong>🆚 Comparer</strong>: 2-3 modèles côte à côte sur une recette.<br><strong>🔍 Inspecter config</strong>: collez config.json brut → Profil complet.<br><strong>💬 Question</strong>: question libre, le LLM du navigateur choisit la recette.<br><strong>📋 Recette</strong>: sélection manuelle avec contrôle total du formulaire.<br><strong>🩺 Diagnostic CLI</strong>: génère commande Python pour mesurer γ localement.<br><strong>📊 Diagramme de phase</strong>: panel de 23 modèles dans le plan (log θ, γ).",
    "profile.tip": "<strong>Diagnostic complet en un clic</strong>. Collez n'importe quel id de modèle HF (ou choisissez préréglage). L'outil exécute les 5 recettes (contexte long, compression KV, custom vs API, budget, hardware) et produit une <strong>TAF Card</strong> unique avec verdict par dimension + nombres clés + classification architecturale.<br><br><strong>Cas d'usage</strong>: « J'évalue Qwen2.5-32B pour la production — quel est son profil complet de viabilité ? » → collez id → Profiler → fait.",
    "compare.tip": "<strong>Même recette, plusieurs modèles</strong>. Choisissez 2-3 modèles candidats et une recette. Voyez les verdicts dans un seul tableau comparatif.<br><br><strong>Cas d'usage</strong>: « J'ai besoin de récupération longue contexte à 16K — quel est le meilleur : Llama-3-8B, Mistral-7B ou Qwen-7B ? » → choisissez 3 + X-2 + 16K → voyez le gagnant.",

@@ -621,11 +630,14 @@ export const TRANSLATIONS = {
    "help.title": "📘 TAF Agent — Manuel d'utilisation",
    "help.what.title": "Que fait-il ?",
    "help.what.body": "Prédit la <strong>viabilité pratique</strong> de tout LLM transformer <em>avant de dépenser du GPU/€</em>. Répond à des questions comme « ce modèle fonctionnera-t-il à L=32K ? » ou « dois-je entraîner sur mesure ou utiliser une API ? » via des formules Python déterministes (TAF — Thermodynamic Attention Framework).",
-    "help.modes.title": "Comment l'utiliser — 4 modes",
+    "help.modes.title": "Comment l'utiliser — 7 modes",
    "help.modes.profile": "<strong>📇 Profiler</strong>: collez id de modèle → toutes les recettes à la fois = TAF Card. <strong>Meilleur point de départ</strong>.",
    "help.modes.compare": "<strong>🆚 Comparer</strong>: 2-3 modèles côte à côte sur la même recette. Mieux pour choisir entre candidats.",
+    "help.modes.inspector": "<strong>🔍 Inspecter config</strong>: collez <code>config.json</code> brut → l'outil le parse et lance le Profil complet. Pour modèles privés, configs en développement, ou modèles pas encore sur HF Hub.",
    "help.modes.ask": "<strong>💬 Question libre</strong>: question en langage naturel, le LLM du navigateur choisit la recette. Mieux pour exploration casuelle.",
    "help.modes.recipe": "<strong>📋 Recette + formulaire</strong>: sélection manuelle, contrôle total des paramètres. Mieux quand vous voulez un contrôle exact.",
+    "help.modes.diagnose": "<strong>🩺 Diagnostic CLI</strong>: génère commande Python pour mesurer γ sur votre machine locale (transformers + numpy). Rapide ≈5 min CPU; complet ≈20–60 min GPU. JSON résultat ré-uploadable via Inspect.",
+    "help.modes.phase": "<strong>📊 Diagramme de phase</strong>: nuage de 23 modèles du panel dans le plan (log θ, γ). Ligne Hagedorn γ=1 sépare Phase A de Phase B. Cliquer un point pour charger ce modèle dans le formulaire Recette.",
    "help.recipes.title": "Les 8 recettes disponibles",
    "help.recipe.x1.title": "<strong>X-1 Entraînement custom vs API</strong> — compare le coût d'entraîner votre propre modèle vs payer l'accès API.",
    "help.recipe.x1.example": "Essayez: <em>« Dois-je entraîner un 8B custom ou utiliser GPT-4o pour 50M tokens/mois ? »</em><br>Réponses: OUI (custom) / NON (API) avec mois pour break-even.",

@@ -681,6 +693,7 @@ export const TRANSLATIONS = {
  zh: {
    // §33 v0.4 (session 31, 2026-04-30) — 新诊断功能
    "v04.title": "🆕 v0.4 — 新诊断 (会话 31)",
+    "v04.section.intro": "会话 31 (2026-04-30) 从公式 cross-of-crosses 游戏 + 苏格拉底质询中得出的四个新诊断函数。在 <code>taf_browser.py</code> §33 中可用。",
    "v04.arch.label": "架构集中度",
    "v04.arch.desc": "γ_text ≈ γ_Padé − 0.012·n_kv。跨面板相关性定律(R²=0.30)。警告:不是逐模型预测器。",
    "v04.pdi.label": "PDI — Padé 偏差指数",

@@ -835,7 +848,7 @@ export const TRANSLATIONS = {
    "common.no": "否",

    // 模式提示
-    "modes.tip": "<strong>
+    "modes.tip": "<strong>七种使用方式</strong>。<br><strong>📇 画像</strong>: 粘贴模型 id → 一次运行所有 8 个配方 = TAF 卡。<br><strong>🆚 比较</strong>: 2-3 个模型在一个配方上并排比较。<br><strong>🔍 检查 config</strong>: 粘贴原始 config.json → 完整画像。<br><strong>💬 提问</strong>: 自由形式问题,浏览器 LLM 选择配方。<br><strong>📋 配方</strong>: 手动选择,完全控制表单。<br><strong>🩺 CLI 诊断</strong>: 生成 Python 命令在本地测量 γ。<br><strong>📊 相图</strong>: 23 个面板模型在 (log θ, γ) 平面上。",
    "profile.tip": "<strong>一键完整诊断</strong>。粘贴任意 HF 模型 id (或选择预设)。工具运行所有 5 个配方 (长上下文、KV 压缩、自定义 vs API、预算、硬件),生成单个 <strong>TAF 卡</strong>,显示每个维度的判定 + 关键数字 + 架构分类。<br><br><strong>用例</strong>: \"我正在为生产评估 Qwen2.5-32B — 它的完整可行性概况是什么?\" → 粘贴 id → 画像 → 完成。",
    "compare.tip": "<strong>同一配方,多个模型</strong>。选择 2-3 个候选模型和一个配方。在单个比较表中查看判定。<br><br><strong>用例</strong>: \"我需要在 16K 进行长上下文检索 — 哪个最好: Llama-3-8B、Mistral-7B 或 Qwen-7B?\" → 选择 3 个 + X-2 + 16K → 看赢家。",

@@ -843,11 +856,14 @@ export const TRANSLATIONS = {
    "help.title": "📘 TAF Agent — 用户手册",
    "help.what.title": "它做什么?",
    "help.what.body": "在<em>花费 GPU/$ 之前</em>,预测任意 transformer LLM 的<strong>实际可行性</strong>。回答诸如 \"这个模型能在 L=32K 工作吗?\" 或 \"我应该自定义训练还是使用 API?\" 等问题,使用确定性 Python 公式 (TAF — Thermodynamic Attention Framework)。",
-    "help.modes.title": "如何使用 — 4 种模式",
+    "help.modes.title": "如何使用 — 7 种模式",
    "help.modes.profile": "<strong>📇 画像</strong>: 粘贴模型 id → 同时运行所有配方 = TAF 卡。<strong>最佳起点</strong>。",
    "help.modes.compare": "<strong>🆚 比较</strong>: 2-3 个模型在同一配方上并排。最适合在候选者之间选择。",
+    "help.modes.inspector": "<strong>🔍 检查 config</strong>: 粘贴原始 <code>config.json</code> → 工具解析并运行完整画像。适用于私有模型、开发中的配置、或尚未在 HF Hub 上的模型。",
    "help.modes.ask": "<strong>💬 自由提问</strong>: 自然语言问题,浏览器 LLM 选择配方。最适合随意探索。",
    "help.modes.recipe": "<strong>📋 配方 + 表单</strong>: 手动选择,完全控制参数。最适合需要精确控制时。",
+    "help.modes.diagnose": "<strong>🩺 CLI 诊断</strong>: 生成 Python 命令在你的本地机器上测量 γ (transformers + numpy)。快速 ≈5 分钟 CPU;完整 ≈20–60 分钟 GPU。结果 JSON 可通过 Inspect 重新上传。",
+    "help.modes.phase": "<strong>📊 相图</strong>: 23 个面板模型在 (log θ, γ) 平面上的散点图。Hagedorn 线 γ=1 分隔 A 相和 B 相。点击点将该模型加载到配方表单。",
    "help.recipes.title": "可用的 8 个配方",
    "help.recipe.x1.title": "<strong>X-1 自定义训练 vs API</strong> — 比较训练自己模型的成本与付费使用 API 的成本。",
    "help.recipe.x1.example": "尝试: <em>\"我应该训练 8B 自定义模型还是使用 GPT-4o 处理每月 50M tokens?\"</em><br>答案: 是 (自定义) / 否 (API),含损益平衡月数。",