karlexmarin Claude Opus 4.7 (1M context) committed on
Commit d5c0dac · 1 Parent(s): 03f7bfe

i18n audit fix: 3 missing keys + 2 hardcoded text patches


A thorough audit found:
- modes.inspector missing in FR + ZH dictionaries (fixed)
- <h3>'— v0.4 (sesión 29 findings) —'</h3> hardcoded (added data-i18n="help.divider.v04_s29")
- <p>'Computation: Pyodide · ...'</p> hardcoded (added data-i18n="footer.tech_stack")

All 4 langs (EN/ES/FR/ZH) now have 209/209 identical keys.
No hardcoded translatable text remains.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
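
The 209/209 parity claim is the kind of thing a small script can verify mechanically before commit. Below is a minimal sketch of such a check; it assumes TRANSLATIONS maps language codes to flat key→string objects (a shape the hunk offsets in the diff suggest but do not confirm), and the script itself is illustrative, not part of this repo.

// audit-keys.mjs — hypothetical key-parity check, not repo code.
// Assumes TRANSLATIONS looks like { en: {...}, es: {...}, fr: {...}, zh: {...} }.
import { TRANSLATIONS } from "./js/i18n.js";

const reference = new Set(Object.keys(TRANSLATIONS.en)); // EN as the baseline

for (const lang of Object.keys(TRANSLATIONS)) {
  const keys = new Set(Object.keys(TRANSLATIONS[lang]));
  const missing = [...reference].filter((k) => !keys.has(k));
  const extra = [...keys].filter((k) => !reference.has(k));
  console.log(
    `${lang}: ${keys.size}/${reference.size} keys`,
    missing.length ? `missing: ${missing.join(", ")}` : "",
    extra.length ? `extra: ${extra.join(", ")}` : ""
  );
}

Under that assumed shape, the gap fixed in this commit would have surfaced as "fr: 208/209 keys missing: modes.inspector" (and likewise for zh) before the patch.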

Files changed (2)
  1. index.html +2 -2
  2. js/i18n.js +10 -0
index.html CHANGED
@@ -113,7 +113,7 @@
   Answer: USE SOFT DECAY / USE D_f CUTOFF / USE LITERATURE METHODS / USE HARD T_train.
   </div>
 
-  <h3 style="margin-top: 1.5em;">— v0.4 (sesión 29 findings) —</h3>
+  <h3 style="margin-top: 1.5em;" data-i18n="help.divider.v04_s29">— v0.4 (sesión 29 findings) —</h3>
 
   <p data-i18n="help.section.v04"><strong>What's new in v0.4</strong> (sesión 29 findings 2026-04-28): three diagnostic recipes derived from cross-model panel analysis (n=22 LLMs).</p>
 
@@ -634,7 +634,7 @@
   ·
   <a href="https://github.com/karlesmarin/NeurIPS" target="_blank">Paper repo</a>
   </p>
-  <p class="subtle">
+  <p class="subtle" data-i18n="footer.tech_stack">
   Computation: Pyodide · Synthesis: WebLLM (Qwen2.5-0.5B local) · Hosting: GitHub Pages · Cost: $0
   </p>
   </footer>
js/i18n.js CHANGED
@@ -191,6 +191,8 @@ export const TRANSLATIONS = {
   "help.recipe.x22.example": "Try: <em>\"Does Mistral-7B fit the compute-context invariant?\"</em><br>Answer: K = γ·log(N²·D), z-score, IN-BAND or OUTLIER.",
   "help.recipe.x23.example": "Try: <em>\"Is Qwen2.5-7B post-induction-head?\"</em><br>Answer: CONFIRMED PRE-IH / CONFIRMED POST-IH / ANOMALY (with size-vs-Δγ consistency check).",
   "help.section.v04": "<strong>What's new in v0.4</strong> (sesión 29 findings 2026-04-28): three diagnostic recipes derived from cross-model panel analysis (n=22 LLMs).",
+  "help.divider.v04_s29": "— v0.4 (sesión 29 findings) —",
+  "footer.tech_stack": "Computation: Pyodide · Synthesis: WebLLM (Qwen2.5-0.5B local) · Hosting: GitHub Pages · Cost: $0",
   "help.v04.imprint": "<strong>Learned-imprint slope ν = −1/(2π)</strong>: RoPE rotation period 2π drives a positional bias on weights, proportional to log(N_params). Even random tokens show this scaling. ν is DERIVED — not fitted (empirical err 0.3%).",
   "help.v04.invariant": "<strong>Chinchilla-attention invariant K</strong>: γ × log(N²·D) ≈ 51.2 ± 16.8 (CV=0.329). Connects compute scaling and attention exponent into a single dimensionless number.",
   "help.v04.ih_probe": "<strong>Δγ as IH probe</strong>: sign(γ_text − γ_random) > 0 ⟺ post-induction-head. Cheaper than running an in-context-learning benchmark.",
@@ -463,6 +465,8 @@ export const TRANSLATIONS = {
   "help.recipe.x22.example": "Prueba: <em>«¿Mistral-7B entra en el invariante compute-context?»</em><br>Respuesta: K = γ·log(N²·D), z-score, IN-BAND u OUTLIER.",
   "help.recipe.x23.example": "Prueba: <em>«¿Qwen2.5-7B es post-induction-head?»</em><br>Respuesta: CONFIRMED PRE-IH / CONFIRMED POST-IH / ANOMALY (chequeo consistencia tamaño vs Δγ).",
   "help.section.v04": "<strong>Novedades v0.4</strong> (hallazgos sesión 29 del 2026-04-28): tres recipes diagnósticas derivadas del análisis panel cross-model (n=22 LLMs).",
+  "help.divider.v04_s29": "— v0.4 (hallazgos sesión 29) —",
+  "footer.tech_stack": "Cómputo: Pyodide · Síntesis: WebLLM (Qwen2.5-0.5B local) · Hosting: GitHub Pages · Coste: $0",
   "help.v04.imprint": "<strong>Slope imprint aprendido ν = −1/(2π)</strong>: el periodo de rotación RoPE 2π provoca un sesgo posicional en los pesos, proporcional a log(N_params). Incluso tokens random muestran este scaling. ν es DERIVADO — no ajustado (err empírico 0.3%).",
   "help.v04.invariant": "<strong>Invariante Chinchilla-atención K</strong>: γ × log(N²·D) ≈ 51.2 ± 16.8 (CV=0.329). Conecta compute scaling y exponente de atención en un solo número adimensional.",
   "help.v04.ih_probe": "<strong>Δγ como probe IH</strong>: sign(γ_text − γ_random) > 0 ⟺ post-induction-head. Más barato que correr un benchmark in-context-learning.",
@@ -537,6 +541,7 @@ export const TRANSLATIONS = {
   "modes.title": "🎯 Mode",
   "modes.profile": "📇 Profiler un modèle",
   "modes.compare": "🆚 Comparer des modèles",
+  "modes.inspector": "🔍 Inspecter config",
   "modes.ask": "💬 Question libre",
   "modes.recipe": "📋 Choisir une recette",
   "modes.diagnose": "🩺 Diagnose CLI",
@@ -707,6 +712,8 @@ export const TRANSLATIONS = {
   "help.recipe.x22.example": "Essayez: <em>« Mistral-7B entre-t-il dans l'invariant compute-context ? »</em><br>Réponse: K = γ·log(N²·D), z-score, IN-BAND ou OUTLIER.",
   "help.recipe.x23.example": "Essayez: <em>« Qwen2.5-7B est-il post-induction-head ? »</em><br>Réponse: CONFIRMED PRE-IH / CONFIRMED POST-IH / ANOMALY.",
   "help.section.v04": "<strong>Nouveautés v0.4</strong> (résultats session 29, 2026-04-28) : trois recettes de diagnostic dérivées de l'analyse panel cross-model (n=22 LLMs).",
+  "help.divider.v04_s29": "— v0.4 (résultats session 29) —",
+  "footer.tech_stack": "Calcul : Pyodide · Synthèse : WebLLM (Qwen2.5-0.5B local) · Hébergement : GitHub Pages · Coût : 0 $",
   "help.v04.imprint": "<strong>Pente d'imprint apprise ν = −1/(2π)</strong> : la période de rotation RoPE 2π entraîne un biais positionnel dans les poids, proportionnel à log(N_params). Même les tokens aléatoires montrent ce scaling. ν est DÉRIVÉ — non ajusté (erreur empirique 0,3 %).",
   "help.v04.invariant": "<strong>Invariant Chinchilla-attention K</strong> : γ × log(N²·D) ≈ 51.2 ± 16.8 (CV=0.329). Connecte le scaling de compute et l'exposant d'attention en un seul nombre sans dimension.",
   "help.v04.ih_probe": "<strong>Δγ comme probe IH</strong> : sign(γ_text − γ_random) > 0 ⟺ post-induction-head. Moins coûteux que de lancer un benchmark in-context-learning.",
@@ -780,6 +787,7 @@ export const TRANSLATIONS = {
   "modes.title": "🎯 模式",
   "modes.profile": "📇 模型画像",
   "modes.compare": "🆚 比较模型",
+  "modes.inspector": "🔍 检查 config",
   "modes.ask": "💬 自由提问",
   "modes.recipe": "📋 选择配方",
   "modes.diagnose": "🩺 诊断 CLI",
@@ -950,6 +958,8 @@ export const TRANSLATIONS = {
   "help.recipe.x22.example": "尝试: <em>\"Mistral-7B 是否符合 compute-context 不变量?\"</em><br>答案: K = γ·log(N²·D)、z-score、IN-BAND 或 OUTLIER。",
   "help.recipe.x23.example": "尝试: <em>\"Qwen2.5-7B 是后-induction-head 吗?\"</em><br>答案: CONFIRMED PRE-IH / CONFIRMED POST-IH / ANOMALY。",
   "help.section.v04": "<strong>v0.4 新增</strong> (第 29 次研究会话, 2026-04-28): 来自 cross-model panel 分析 (n=22 LLMs) 的三个诊断 recipes。",
+  "help.divider.v04_s29": "— v0.4 (第 29 次会话发现) —",
+  "footer.tech_stack": "计算:Pyodide · 综合:WebLLM (Qwen2.5-0.5B 本地) · 托管:GitHub Pages · 成本:$0",
   "help.v04.imprint": "<strong>学习印记斜率 ν = −1/(2π)</strong>: RoPE 旋转周期 2π 在权重上引发位置偏置, 与 log(N_params) 成正比。即使 random token 也显示此 scaling。ν 是 DERIVED — 非拟合 (经验误差 0.3%)。",
   "help.v04.invariant": "<strong>Chinchilla-attention 不变量 K</strong>: γ × log(N²·D) ≈ 51.2 ± 16.8 (CV=0.329)。将 compute scaling 和 attention 指数连接为单一无量纲数。",
   "help.v04.ih_probe": "<strong>Δγ 作为 IH 探测</strong>: sign(γ_text − γ_random) > 0 ⟺ post-induction-head。比运行 in-context-learning 基准更便宜。",