Create v4_1-handoff.md
docs: v4.1 handoff, improvements and changes to be applied in the V4 run
- docs/v4_1-handoff.md +280 -0
docs/v4_1-handoff.md
ADDED
@@ -0,0 +1,280 @@
# V4.1 / V4.2 Handoff

**Date:** 2026-04-27
**Context:** V4 completed 200 steps, eval_best=0.476 at step 130. Training plateaued due to
LR decay and data starvation (13.5% of one epoch seen). Model learned (+33% over SFT baseline).
Recipe is validated. V4.1 squeezes the 0.5B before scaling to 3.7B.

---

## V4.1: Four Changes Only

### Change 1: Fix the JSON Parser

**What:** Replace the regex-based `_extract_json()` in the reward functions with a robust
parser that handles Portuguese decimal commas and LLM formatting quirks.

**Why:** The model is writing `"sentiment_score": 4,5` (correct PT-BR format). Your parser
calls `json.loads()`, which fails, and scores the completion near zero. The model is being
penalized for correct behavior. This is reward misspecification, and it is the single
highest-ROI fix available at zero training cost.

**Do this before launching any training run.** Run it against `data/pairs/eval.jsonl` first
and measure how many previously failing extractions now parse correctly.

```python
import json, re

def _normalize_pt_decimals(s: str) -> str:
    """Convert PT-BR decimals (4,5) to JSON-valid (4.5), only outside quoted strings."""
    result, in_string, escape_next = [], False, False
    i = 0
    while i < len(s):
        c = s[i]
        if escape_next:
            result.append(c); escape_next = False; i += 1; continue
        if c == '\\' and in_string:
            result.append(c); escape_next = True; i += 1; continue
        if c == '"':
            in_string = not in_string; result.append(c); i += 1; continue
        if not in_string:
            m = re.match(r'(\d+),(\d+)', s[i:])
            if m:
                result.append(m.group(1) + '.' + m.group(2))
                i += len(m.group(0)); continue
        result.append(c); i += 1
    return ''.join(result)

def _extract_json(text: str) -> dict | None:
    stripped = re.sub(r'^```(?:json)?\s*|\s*```$', '', text.strip(), flags=re.MULTILINE).strip()
    for attempt in [stripped, _normalize_pt_decimals(stripped)]:
        try:
            result = json.loads(attempt)
            if isinstance(result, dict):
                return result
        except (json.JSONDecodeError, TypeError):
            pass
        # Try finding the first {...} block
        match = re.search(r'\{[\s\S]*\}', attempt)
        if match:
            try:
                result = json.loads(match.group())
                if isinstance(result, dict):
                    return result
            except (json.JSONDecodeError, TypeError):
                pass
    return None
```
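
To quantify the fix before training, a minimal measurement sketch is below. It assumes each row of
`eval.jsonl` stores the text to be scored under a `"completion"` key (a hypothetical field name; adjust
it to the repo's actual schema) and uses a bare `json.loads()` stand-in for the previous parser.

```python
import json

def _old_extract_json(text: str) -> dict | None:
    """Stand-in for the previous parser (import the real one instead if it is still importable)."""
    try:
        result = json.loads(text)
        return result if isinstance(result, dict) else None
    except json.JSONDecodeError:
        return None

old_ok = new_ok = total = 0
with open("data/pairs/eval.jsonl", encoding="utf-8") as f:
    for line in f:
        completion = json.loads(line).get("completion", "")  # hypothetical field name
        total += 1
        old_ok += _old_extract_json(completion) is not None
        new_ok += _extract_json(completion) is not None       # new parser defined above

print(f"old parser: {old_ok}/{total} parsed | new parser: {new_ok}/{total} parsed")
```

If `eval.jsonl` mixes task types, filter to the extraction rows first; the other tasks are not expected
to emit JSON.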

---

### Change 2: Triple the Training Steps

**What:** `MAX_STEPS = 200 → 600`

**Why:** V4 trained on 13.5% of the available data. Of ~1,480 training prompts, only ~400 were
sampled. The insights (148 prompts) and push (148 prompts) tasks may have contributed fewer
than 50 training examples each, which is not enough for meaningful policy updates. 600 steps at
2 prompts/step = 1,200 unique samples, covering ~80% of the training set once.

No other changes needed. Just increase the step count.

---

### Change 3: Replace the LR Schedule

**What:** `lr_scheduler_type = "cosine" → "constant_with_warmup"`

**Why:** The V4 cosine schedule decayed to ~1.5×10⁻¹⁰ by step 200. The model spent the
last 70 steps (130-200) with an effectively zero learning rate. This is why eval plateaued
at step 130: not capacity, not a reward ceiling, not APO resistance. The optimizer ran out of
gradient magnitude.

```python
# In GRPOConfig:
lr_scheduler_type = "constant_with_warmup"
warmup_ratio = 0.05  # 5% warmup (30 steps out of 600)
```
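
For intuition on the difference, here is a small standalone check (not part of the training script) that
traces both schedules over the 600-step V4.1 run using transformers' scheduler helpers; the dummy
optimizer and parameter exist only to drive the schedulers.

```python
# Compare cosine vs. constant_with_warmup at peak LR 5e-6 over 600 steps (30 warmup steps).
import torch
from transformers import get_constant_schedule_with_warmup, get_cosine_schedule_with_warmup

def trace(make_scheduler, **kwargs):
    optimizer = torch.optim.AdamW([torch.nn.Parameter(torch.zeros(1))], lr=5e-6)
    scheduler = make_scheduler(optimizer, num_warmup_steps=30, **kwargs)
    lrs = {}
    for step in range(1, 601):
        optimizer.step()
        scheduler.step()
        if step in (30, 130, 200, 400, 600):
            lrs[step] = scheduler.get_last_lr()[0]
    return lrs

print("cosine:  ", trace(get_cosine_schedule_with_warmup, num_training_steps=600))
print("constant:", trace(get_constant_schedule_with_warmup))
# The cosine curve falls toward zero by the final steps; the constant schedule holds 5e-6 after warmup.
```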

---

### Change 4: Raise the Learning Rate

**What:** `LEARNING_RATE = 2e-6 → 5e-6`

**Why:** V4's `train/grad_norm = 0.065` was low. The model can tolerate stronger updates.
At 0.5B with LoRA r=16, 5e-6 is within the safe range per Dr. GRPO's Appendix G. Combined
with the constant schedule, this gives sustained gradient magnitude throughout the 600-step run.

---

### V4.1 Config Summary

```python
MAX_STEPS = 600                               # was 200
LEARNING_RATE = 5e-6                          # was 2e-6
lr_scheduler_type = "constant_with_warmup"    # was cosine
warmup_ratio = 0.05                           # was 0.1 (keep short since the schedule is constant)

# Everything else UNCHANGED from V4:
NUM_GENERATIONS = 16
MAX_COMPLETION_LENGTH = 512
TEMPERATURE = 1.0
BETA = 0.0
SCALE_REWARDS = False
BATCH_SIZE = 2
GRAD_ACCUM = 1
LORA_R = 16
LORA_ALPHA = 32
```

**Also:** Add a per-task reward breakdown to `EvalRewardCallback`. This is essential for V4.2
decisions: you need to know which specific tasks are gaining and which are stuck.

```python
# In EvalRewardCallback._run_eval(), track rewards per task:
task_rewards = {"extraction": [], "sql_qa": [], "insights": [], "push": []}
for record in subset:
    msgs = record["prompt"]
    text = tokenizer.apply_chat_template(msgs, tokenize=False, add_generation_prompt=True)
    user_txt = " ".join(m["content"] for m in msgs if m["role"] == "user")
    task = _classify_task_type(user_txt)
    # ... generate response ...
    r = self.reward_fn([response], [text])[0]
    task_rewards[task].append(r)

per_task = {t: sum(v) / len(v) for t, v in task_rewards.items() if v}
wandb.log({"eval/" + k: v for k, v in per_task.items()}, step=state.global_step)
```
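
The snippet assumes a `_classify_task_type()` helper already exists in the training script. If it does
not, a rough keyword router is enough for logging purposes; the marker strings below are illustrative
placeholders and should be swapped for phrases that actually distinguish the four prompt templates.

```python
def _classify_task_type(user_text: str) -> str:
    """Rough routing for per-task logging only; the keywords are placeholders."""
    lowered = user_text.lower()
    if "json" in lowered:        # placeholder: extraction prompts that ask for a JSON object
        return "extraction"
    if "sql" in lowered:         # placeholder: sql_qa prompts that reference a query or result
        return "sql_qa"
    if "insight" in lowered:     # placeholder: insights prompts that ask for analysis
        return "insights"
    return "push"                # fallback bucket
```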

---

## What to Observe During V4.1

### Must watch in real time

| Metric | Expected | Stop if |
|---|---|---|
| `eval/mean_reward` trend | Improving past step 130, continuing to ~step 400 | Plateaus before step 200 (same as V4) |
| `eval/extraction` | Jumps significantly from V4's ~0.17 baseline | Still below 0.20 after step 100 (parser fix didn't help) |
| `train/completion_length` | 100-400 tokens | Hits the 512 ceiling (need to raise MAX_COMPLETION_LENGTH) |
| `train/frac_reward_zero_std` | < 0.2 | Sustained > 0.5 |
| `train/grad_norm` | 0.05-0.5 | Spikes > 2.0 (LR too high) |
| `train/kl` | Any value (KL is meaningless at β=0) | Ignore entirely |

### Questions to answer from the run

1. **Did the parser fix change extraction reward?**
   Compare `eval/extraction` at step 10 (V4.1) vs V4's calibration baseline (~0.17). If it
   jumps to 0.35+, the parser was the bottleneck. If it stays at ~0.17, the model genuinely
   can't produce valid JSON at 0.5B.

2. **Did eval continue improving past step 130?**
   The key question. If yes, the V4 plateau was entirely the LR schedule and not a capacity
   limit. If no (it plateaus at the same ~0.476), the 0.5B model has a real ceiling.

3. **Which task has the lowest per-task reward at step 600?**
   This determines the V4.2 priority. Whichever task scores lowest is either (a) limited by a
   too-coarse reward function, (b) beyond 0.5B capacity, or (c) underrepresented in training.

4. **What is eval/mean_reward at step 600?**
   Target: ≥ 0.55. If reached, scale to 3.7B. If stuck at ~0.48, proceed to V4.2.

---

## V4.2: Conditional on V4.1 Results

V4.2 depends entirely on what V4.1 tells you. Three scenarios:

---

### Scenario A: V4.1 reaches ≥ 0.55 eval reward

**Conclusion:** 0.5B-Instruct is maximized. The recipe is validated.
**V4.2 = Scale to 3.7B-Instruct.** Apply the V4.1 config with these adjustments:

```python
MODEL_ID = "Polygl0t/Tucano2-qwen-3.7B-Instruct"
NUM_GENERATIONS = 8              # VRAM constraint at 3.7B (was 16)
MAX_COMPLETION_LENGTH = 1024     # 3.7B can afford richer output (was 512)
LEARNING_RATE = 2e-6             # Larger model = smaller LR (was 5e-6)
MAX_STEPS = 400                  # Fewer steps needed, larger model learns faster
MAX_SEQ_LENGTH = 2048            # Unchanged
```

Same reward functions, same schedule, same everything else. The qualitative findings transfer.
Numerical hyperparameters don't; use the values above.

---

### Scenario B: V4.1 plateaus at ~0.476 (same as V4), eval/extraction still ≈ 0.17

**Conclusion:** The parser fix didn't help; the model can't produce valid JSON at 0.5B.
**V4.2 = Switch base model.**

Use `Polygl0t/Tucano2-qwen-0.5B-Base` (no APO, no SFT) with a minimal SFT warm-up first:

```python
# Step 1: 1-epoch SFT on extraction pairs only (~590 pairs)
# 30 minutes on an L4; teaches the JSON output format
MODEL_ID = "Polygl0t/Tucano2-qwen-0.5B-Base"
# SFT with loss on assistant turns only, 1 epoch

# Step 2: the same V4.1 GRPO config on top of the SFT checkpoint
# If extraction still fails after the SFT warm-up, 0.5B genuinely can't do structured output:
# skip 0.5B entirely and go directly to 3.7B-Instruct
```
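
A minimal sketch of the Step-1 warm-up with TRL's `SFTTrainer` is below. The dataset path, batch size,
and SFT learning rate are assumptions rather than values from this handoff, exact argument names vary
across TRL versions, and masking the loss to assistant turns should reuse whatever mechanism the repo's
existing SFT script already uses (for example a completion-only collator).

```python
# Hypothetical Step-1 warm-up sketch; paths and hyperparameters are placeholders.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

extraction_pairs = load_dataset(
    "json", data_files="data/pairs/extraction_train.jsonl", split="train"  # hypothetical path
)

trainer = SFTTrainer(
    model="Polygl0t/Tucano2-qwen-0.5B-Base",
    train_dataset=extraction_pairs,
    args=SFTConfig(
        output_dir="outputs/sft-warmup-0.5b-base",  # hypothetical
        num_train_epochs=1,
        per_device_train_batch_size=4,              # assumption: tune to the L4's VRAM
        learning_rate=1e-5,                         # assumption: not taken from this handoff
    ),
)
trainer.train()
# Step 2: point the V4.1 GRPO run at the saved checkpoint instead of the base model.
```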

---

### Scenario C: V4.1 eval improves but one task type lags far behind the others

**Conclusion:** Task imbalance, or a reward-function ceiling on the lagging task.
**V4.2 = Targeted fix for the weakest task.** Identify it from the `eval/extraction`,
`eval/sql_qa`, `eval/insights`, `eval/push` breakdown, then apply ONE of the following:

**If extraction lags (reward < 0.25 at step 600):**
Reward-function ceiling. Add semantic field-value scoring:
```python
# In reward_extraction: add a bonus for correct field VALUES, not just field PRESENCE
if data.get("sentiment") in VALID_SENTIMENTS: score += 0.05  # was already there
# Add: exact-match bonus against the reference answer if available
# Add: partial credit for a correct complaint_category even if the sentiment is wrong
```
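
A hedged sketch of that bonus is below. It assumes the reference JSON for each prompt is available to
the reward function as a dict (`reference`); the field names follow the ones already used in this
document, and the weights are illustrative.

```python
# Illustrative field-value bonus; the weights and the `reference` lookup are assumptions.
def field_value_bonus(data: dict, reference: dict) -> float:
    bonus = 0.0
    scored_fields = (("sentiment", 0.10), ("complaint_category", 0.10), ("sentiment_score", 0.05))
    for field, weight in scored_fields:
        if field in reference and data.get(field) == reference[field]:
            bonus += weight  # exact match against the reference answer
    return bonus

# Inside reward_extraction, after the existing presence checks:
# score += field_value_bonus(data, reference)
```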

**If sql_qa lags (reward < 0.20 at step 600):**
Capacity limit at 0.5B for analytical tasks. Accept this and plan on 3.7B for sql_qa.
Keep 0.5B for extraction+push only, and route sql_qa+insights to 3.7B in production.

**If insights lags (reward < 0.20 at step 600):**
Most likely data starvation (only ~148 insights prompts in the training set). Generate 200 more
insights pairs from `commerce.db` before V4.2:
```bash
# Quick synthetic expansion using the existing generate_pairs.py logic
# Target: 350 total insights pairs (was 148)
# Source: random sample from orders_enriched WHERE sentiment='negative'
```
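
A minimal sketch of the sampling step is below, written in Python rather than through `generate_pairs.py`
directly; the output path is illustrative, and turning each sampled row into an actual prompt/completion
pair should reuse the existing `generate_pairs.py` logic.

```python
# Illustrative sampling of source rows for the new insights pairs.
import json
import sqlite3

conn = sqlite3.connect("commerce.db")
conn.row_factory = sqlite3.Row
rows = conn.execute(
    "SELECT * FROM orders_enriched WHERE sentiment = 'negative' ORDER BY RANDOM() LIMIT 200"
).fetchall()
conn.close()

# Dump the raw rows; pair construction itself belongs in generate_pairs.py.
with open("data/pairs/insights_extra_rows.jsonl", "w", encoding="utf-8") as f:  # hypothetical path
    for row in rows:
        f.write(json.dumps(dict(row), ensure_ascii=False, default=str) + "\n")
```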

---

## Decision Tree

```
Run V4.1 (600 steps, 5e-6 LR, constant schedule, parser fix)
    │
    ├─ eval ≥ 0.55 at step 600?
    │     YES → Scenario A: Scale to 3.7B-Instruct with V4.1 config
    │
    ├─ eval plateaus at ~0.476 AND extraction still ≈ 0.17?
    │     YES → Scenario B: Switch to 0.5B-Base with SFT warm-up
    │
    └─ eval improves but one task type consistently < 0.20?
          YES → Scenario C: Targeted fix for weakest task
                 sql_qa weak     → accept, plan 3.7B
                 insights weak   → generate more pairs
                 extraction weak → refine reward function
```

---

*V4 validated the recipe. V4.1 exhausts 0.5B before spending 3.7B compute.*