rtferraz committed on
Commit 50e0e4d · verified · 1 Parent(s): 4312bfd

ADR-002: V4 Instruct-Only GRPO — revises dual-model plan based on model repo audit

Files changed (1)
  1. docs/ADR-002-v4-instruct.md +1367 -0
docs/ADR-002-v4-instruct.md ADDED
@@ -0,0 +1,1367 @@
1
+ # ADR-002: V4 Instruct-Only GRPO — Single-Model Validation on 0.5B-Instruct
2
+
3
+ **Status:** Proposed
4
+ **Date:** 2026-04-25
5
+ **Author:** Automated Investigation Agent
6
+ **Supersedes:** V4 Handoff (dual Instruct+Think plan)
7
+ **Context Documents:**
8
+ - `docs/INVESTIGATION_REPORT.md` — full project audit (20+ papers)
9
+ - `docs/ADR-001-next-steps.md` — V3 execution plans
10
+ - `docs/checkpoints/2026-04-23_v3-launch.md` — V3 launch & probe results
11
+ - V4 Handoff document (dual 0.5B hybrid plan)
12
+
13
+ ---
14
+
15
+ ## Table of Contents
16
+
17
+ 1. [Context: Why the Original V4 Plan Needs Revision](#1-context)
18
+ 2. [Decisions: What We Do Instead and Why](#2-decisions)
19
+ 3. [Consequences: What to Expect](#3-consequences)
20
+ 4. [Verified Model Facts: Tokenizer & Architecture Reference](#4-verified-model-facts)
21
+ 5. [Implementation: Cell-by-Cell Notebook Specification](#5-implementation)
22
+ 6. [Reward Functions: Complete Specification](#6-reward-functions)
23
+ 7. [Monitoring & Gate Conditions](#7-monitoring--gate-conditions)
24
+ 8. [Fallback Plan: What If Instruct Fails](#8-fallback-plan)
25
+ 9. [Hyperparameter Decision Log](#9-hyperparameter-decision-log)
26
+ 10. [File Structure](#10-file-structure)
27
+
28
+ ---
29
+
30
+ ## 1. Context
31
+
32
+ ### 1.1 V3 Autopsy (Confirmed)
33
+
34
+ V3 ran 171 steps on `Polygl0t/Tucano2-qwen-3.7B-Think` and failed:
35
+
36
+ ```
37
+ train/reward: 0.8675 ← high but policy didn't learn; SFT already does this
38
+ train/clip_ratio: 0.0 ← zero on ALL 171 steps — policy never moved
39
+ train/kl: 0.159 ← tiny divergence from initialization
40
+ train/completion_length: 2628 ← Think model fills context with <think>
41
+ ```
42
+
43
+ The original V4 Handoff proposed a **dual-model** approach: train `Tucano2-qwen-0.5B-Instruct` for extraction+push, and `Tucano2-qwen-0.5B-Think` for SQL+insights, as two separate GRPO runs.
44
+
45
+ ### 1.2 Problems Found in the Original V4 Plan
46
+
47
+ After auditing both model repos against their actual artifacts (`config.json`, `generation_config.json`, `chat_template.jinja`, `training_config_sft.yaml`, `training_config_apo.yaml`, `tokenizer_config.json`, `README.md` with benchmarks and inference samples), the following problems were identified:
48
+
49
+ #### Problem 1: The 0.5B-Think Model Is Catastrophically Weak
50
+
51
+ Published benchmarks from the model README:
52
+
53
+ | Model | Total NPM | Knowledge & Reasoning NPM | GSM8K-PT | IFEval-PT |
54
+ |---|---|---|---|---|
55
+ | **0.5B-Instruct** | **26.08** | **27.77** | **18.49** | **30.00** |
56
+ | **0.5B-Think** | **14.41** | **12.52** | **14.61** | **27.67** |
57
+
58
+ The Think variant scores roughly half of Instruct's total NPM and trails it on every benchmark. The Think SFT used only **34M tokens of reasoning data for 3,060 steps**, while Instruct SFT used **874M tokens across 9 task categories for 68,635 steps** — roughly 25× less data and 22× fewer steps for Think.
59
+
60
+ Concrete evidence from the Think model's own inference samples on the model card:
61
+ - **Math (2x + 3 = 11):** The Think model's CoT arrives at the wrong answer (`x = 2` instead of `x = 4`). The model's thinking trace correctly says "subtrairo 3 de ambos os lados" ("I'll subtract 3 from both sides"), then in the final answer writes `x = 5 - 3 = 2` — it contradicts its own reasoning.
62
+ - **Cooking recipe:** Hallucinates ingredient "30% ativo butylated buttercreme."
63
+ - **History (Revolução Farroupilha):** Fabricates dates, events, and named entities that don't exist.
64
+
65
+ By contrast, the Instruct model's inference samples show:
66
+ - **Structured JSON output:** Correct, well-formatted extraction from an email.
67
+ - **Math:** Correct solution (x = 4) without CoT.
68
+ - **Function calling:** Correct tool-use JSON.
69
+ - **Classification:** Correct sentiment classification.
70
+
71
+ **Conclusion:** A model that cannot solve `2x + 3 = 11` is not a viable starting point for SQL analysis or business insights, even with GRPO tuning. GRPO refines what a model approximately knows — it cannot teach fundamentally new capabilities to a model this weak.
72
+
73
+ #### Problem 2: Five Technical Bugs in the Original V4 Plan
74
+
75
+ **Bug 2a: `use_cache: false` in both model configs.**
76
+
77
+ Both `config.json` files ship with `"use_cache": false`. Without explicitly setting `model.config.use_cache = True` and `model.generation_config.use_cache = True` after loading, generation uses O(n²) full attention recomputation at every token. V3's notebook included this fix (Cell 4); the V4 plan omitted it entirely.
78
+
79
+ **Source:** `Polygl0t/Tucano2-qwen-0.5B-Instruct/config.json`, line `"use_cache": false`.
80
+
81
+ **Bug 2b: `repetition_penalty: 1.2` in `generation_config.json`.**
82
+
83
+ Both models ship with `repetition_penalty: 1.2`. TRL's `GRPOTrainer` uses `model.generate()` internally for rollouts, and the `generation_config.json` defaults are loaded automatically. If `repetition_penalty` is not explicitly overridden to `1.0`, it will suppress diversity in rollout completions — directly working against GRPO's need for diverse outputs. The GRPOConfig `temperature` parameter overrides the generation config's temperature, but there is no `repetition_penalty` field in GRPOConfig. It must be overridden via `model.generation_config.repetition_penalty = 1.0` after model load.
84
+
85
+ **Source:** `Polygl0t/Tucano2-qwen-0.5B-Instruct/generation_config.json`, line `"repetition_penalty": 1.2`.
86
+
87
+ **Bug 2c: `temperature: 0.1` in `generation_config.json`.**
88
+
89
+ Same as the V1 bug that destroyed the first training run. While GRPOConfig overrides temperature for rollouts, the model's generation_config may be used during eval callback generation if not explicitly overridden. Must set `model.generation_config.temperature = 1.0` as a defensive measure.
90
+
91
+ **Source:** `Polygl0t/Tucano2-qwen-0.5B-Instruct/generation_config.json`, line `"temperature": 0.1`.
92
+
93
+ **Bug 2d: Unsloth + tied word embeddings interaction.**
94
+
95
+ Both 0.5B models have `"tie_word_embeddings": true` in config.json. When Unsloth applies LoRA, it targets linear projection layers. With tied embeddings, `embed_tokens` and `lm_head` share weights. If Unsloth's LoRA patching doesn't handle this correctly, gradients may not propagate to the output head, or the embedding table may drift independently. The model-loading cell (Cell 4) must verify that `model.lm_head.weight.data_ptr() == model.model.embed_tokens.weight.data_ptr()` still holds after LoRA patching.
96
+
97
+ **Source:** `Polygl0t/Tucano2-qwen-0.5B-Instruct/config.json`, line `"tie_word_embeddings": true`.
98
+
99
+ **Bug 2e: The V4 plan's reference policy VRAM estimate may be wrong.**
100
+
101
+ The V4 VRAM budget includes "Reference policy (frozen copy) ~0.4GB." With `beta=0.0` (no KL penalty), TRL 0.24.0's GRPOTrainer *may* skip loading the reference model entirely — the `ref_model` is only needed to compute KL divergence. But this behavior depends on the TRL version. If TRL loads the ref model anyway, it doubles the model footprint. This doesn't cause OOM at 0.5B (0.8GB total is fine), but it matters for the 3.7B scale-up. The smoke test must log peak VRAM to determine whether the ref model is loaded.
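+
+ A minimal sketch of how to settle this empirically (standard PyTorch memory APIs; the 1-step smoke trainer itself is specified in Cell 11 below):
+
+ ```python
+ import torch
+
+ # Reset the peak-memory counter immediately before the 1-step smoke run,
+ # then read it back afterwards to see what the trainer actually allocated.
+ torch.cuda.reset_peak_memory_stats()
+
+ # ... run the 1-step smoke trainer here (see Cell 11) ...
+
+ peak_gb = torch.cuda.max_memory_allocated() / 1e9
+ # Heuristic from this ADR: an extra ~0.4GB beyond the policy, optimizer state,
+ # and activations suggests the reference model was loaded despite beta=0.0.
+ # Interpret with care, since activations dominate at G=16.
+ print(f"Peak VRAM during smoke step: {peak_gb:.2f} GB")
+ ```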
102
+
103
+ #### Problem 3: Hyperparameter Transfer From 0.5B to 3.7B Is Overstated
104
+
105
+ The original V4 plan claims hyperparameters validated at 0.5B transfer to 3.7B. This is partially true for qualitative findings (e.g., "clip_ratio > 0 is achievable," "task split works") but not for numerical values:
106
+ - LR=2e-6 at 0.5B (490M params) likely needs LR=5e-7 at 3.7B (3.8B params) — smaller models tolerate higher LR.
107
+ - G=16 at 0.5B is feasible with 512 completion length; at 3.7B the same VRAM budget supports G=4-8 at best.
108
+ - Effective batch size effects differ: batch=32 at 0.5B vs batch=8-16 at 3.7B changes gradient noise characteristics.
109
+
110
+ What transfers: the *qualitative* evidence that GRPO works on Tucano2-Instruct models, the reward function design, and the finding that APO-trained models can be further aligned (or can't).
111
+
112
+ ---
113
+
114
+ ## 2. Decisions
115
+
116
+ ### Decision 1: Single-Model, All-Task — 0.5B-Instruct Only
117
+
118
+ **Decision:** Train one GRPO run on `Polygl0t/Tucano2-qwen-0.5B-Instruct` using ALL four task types (extraction, push, sql_qa, insights). Do not train the 0.5B-Think model.
119
+
120
+ **Rationale:**
121
+ 1. The Instruct model scores 26.08 total NPM vs Think's 14.41 — nearly 2× better overall, and ahead on every individual benchmark.
122
+ 2. The Instruct model demonstrably produces correct structured JSON, correct math, correct function-call formatting (see model card samples).
123
+ 3. The Instruct chat template does NOT inject `<think>` tokens. The assistant message is just `{content}` — clean output, no token budget conflict.
124
+ 4. The Instruct model was SFT-trained on 874M tokens including structured output, retrieval, function calling, math with CoT, and general instruction following — it has a broad skill base suitable for all four tasks.
125
+ 5. Running one model simplifies the notebook, eliminates the data-split complexity, and halves the compute budget.
126
+ 6. If the Instruct model fails specifically on insights/analysis tasks, we can revisit Think for those tasks only. But the evidence says to test the strong model first.
127
+
128
+ **Evidence:** ThinkJSON (2502.14905) demonstrated that a 1.5B Instruct/Base model + GRPO beats DeepSeek-R1-671B on JSON extraction. The Instruct model doesn't need CoT to do structured output well. For analytical tasks, GRPO's reward signal can teach the model to produce structured analysis without explicit `<think>` overhead.
129
+
130
+ ### Decision 2: Use ALL Training Data (No Task Split)
131
+
132
+ **Decision:** Use the full V2 training set (`data/pairs/train.jsonl`, ~1,834 pairs) with the existing 40/40/10/10 distribution. Apply a 90/10 train/eval split. Do not create separate instruct/think data files.
133
+
134
+ **Rationale:**
135
+ 1. More data = more diverse prompts = more GRPO signal. Splitting the data reduces each model's training set by ~50%.
136
+ 2. Multi-task training at this scale is a feature, not a bug — the Cocktail Effect paper (2410.01109) shows mixing task types improves domain performance by 2-15%.
137
+ 3. The reward function already dispatches by task type. GRPO handles mixed-task batches natively.
138
+
139
+ ### Decision 3: Override All Dangerous generation_config Defaults
140
+
141
+ **Decision:** After model load, explicitly override the following `generation_config` fields:
142
+
143
+ ```python
144
+ model.generation_config.temperature = 1.0
145
+ model.generation_config.repetition_penalty = 1.0
146
+ model.generation_config.do_sample = True
147
+ model.generation_config.top_k = 0 # disable top-k during GRPO rollouts
148
+ model.generation_config.top_p = 1.0 # disable top-p during GRPO rollouts
149
+ model.config.use_cache = True
150
+ model.generation_config.use_cache = True
151
+ ```
152
+
153
+ **Rationale:**
154
+ - `temperature=0.1` (default) destroyed V1. Must be overridden.
155
+ - `repetition_penalty=1.2` (default) suppresses diversity. GRPO needs maximally diverse rollouts. Must be 1.0.
156
+ - `top_k=50` and `top_p=1.0` are set in the default generation_config. `top_k=50` clips the distribution during sampling — at temp=1.0, this may unnecessarily restrict exploration. Set `top_k=0` (disabled) to let temperature alone control diversity.
157
+ - `use_cache=false` (default) makes generation O(n²). Must be True.
158
+
159
+ ### Decision 4: Verify Tied Embeddings Survive LoRA Patching
160
+
161
+ **Decision:** Add a verification cell after model loading that checks:
162
+
163
+ ```python
164
+ # After Unsloth LoRA patching
165
+ assert model.lm_head.weight.data_ptr() == model.model.embed_tokens.weight.data_ptr(), \
166
+ "CRITICAL: Tied embeddings broken after LoRA patching. lm_head and embed_tokens are now separate."
167
+ ```
168
+
169
+ If this assertion fails, training may still work (gradients flow through LoRA layers on the projection matrices), but the embedding/output-head consistency that `tie_word_embeddings=true` provides would be broken. Document the result either way.
170
+
171
+ ### Decision 5: Hard Probe Gate on clip_ratio Before Full Training
172
+
173
+ **Decision:** Run a 10-step probe. If `clip_ratio == 0.0` on all 10 steps, STOP. Do not proceed to full training. This was the missed signal in V3.
174
+
175
+ **Gate condition:** `clip_ratio > 0.0` on **at least 3 of 10 probe steps**.
176
+
177
+ If the gate fails, proceed to Fallback Plan (Section 8) — do not iterate blindly.
178
+
179
+ ### Decision 6: Strip `<think>` Defensively in All Reward Functions
180
+
181
+ **Decision:** Even though the Instruct model's template doesn't inject `<think>`, the model may spontaneously generate think tokens (it has `<think>` as token ID 49116 in its vocabulary, and it was SFT-trained on math_cot data that contains reasoning traces). All reward functions must call `strip_think()` before scoring the answer portion.
182
+
183
+ ---
184
+
185
+ ## 3. Consequences
186
+
187
+ ### What We Expect
188
+
189
+ | Metric | Expected Range | Justification |
190
+ |---|---|---|
191
+ | clip_ratio | > 0 on majority of steps | 0.5B model has fewer params → larger per-param gradient; G=16 → more reward variance; no `<think>` overhead → shorter completions → less gradient dilution |
192
+ | Extraction reward | 0.30 - 0.60 | Instruct model already produces correct JSON (model card sample). GRPO refines schema compliance. |
193
+ | Push reward | 0.40 - 0.70 | Short outputs, Portuguese heuristics — simple task at any scale. |
194
+ | SQL Q&A reward | 0.20 - 0.40 | Model has general Portuguese comprehension. SQL-specific patterns need GRPO. Conservative target. |
195
+ | Insights reward | 0.20 - 0.40 | Model can follow instructions and structure output. Domain-specific vocabulary needs GRPO. Conservative target. |
196
+ | Completion length (Instruct) | 50 - 300 tokens | No `<think>` overhead. Extraction ~100 tok, SQL ~200 tok, insights ~300 tok. |
197
+ | Training time | 3 - 6 hours | 0.5B is ~8× faster than 3.7B for generation. 200 steps × ~60-120s/step. |
198
+
199
+ ### What This Validates for 3.7B Scale-Up
200
+
201
+ If V4 passes all gates:
202
+ 1. **GRPO works on APO-trained Tucano2 Instruct models.** The APO anchor resistance hypothesis is disproven.
203
+ 2. **All-task training on a single model is viable.** No need for a complex dual-model routing architecture.
204
+ 3. **Reward function calibration is confirmed.** The same reward functions (with appropriate thresholds) can be used at 3.7B.
205
+ 4. **The winning recipe:** 0.5B-Instruct + GRPO → scale to `Polygl0t/Tucano2-qwen-3.7B-Instruct` + GRPO.
206
+
207
+ ### What This Does NOT Validate
208
+
209
+ - Exact hyperparameter values (LR, G, completion length) for 3.7B.
210
+ - Whether 3.7B-Instruct has the same APO resistance characteristics as 0.5B-Instruct.
211
+ - Whether 3.7B fits in L4 VRAM at the same G and completion length.
212
+
213
+ ---
214
+
215
+ ## 4. Verified Model Facts
216
+
217
+ All values below were extracted directly from the actual repo files. The implementing agent should use these as ground truth, NOT the values from the original V4 handoff which contained some inaccuracies.
218
+
219
+ ### Tokenizer Token IDs (from `tokenizer_config.json`)
220
+
221
+ ```
222
+ Token ID 0: <|unk|> (special=true)
223
+ Token ID 1: <|im_start|> (special=true) — bos_token
224
+ Token ID 2: <|im_end|> (special=true) — eos_token
225
+ Token ID 49109: <|pad|> (special=true) — pad_token
226
+ Token ID 49116: <think> (special=false) — single token, NOT multi-token
227
+ Token ID 49117: </think> (special=false) — single token, NOT multi-token
228
+ Token ID 49118: <answer> (special=false)
229
+ Token ID 49119: </answer> (special=false)
230
+ ```
231
+
232
+ **Critical note:** `<think>` (49116) and `</think>` (49117) are registered as **single dedicated tokens** in `added_tokens_decoder`. The original V4 plan warned that `</think>` might be multi-token — this is WRONG. It is a single token. Two-pass generation using `eos_token_id=[49117]` to stop at `</think>` IS technically feasible. However, we do not need two-pass generation because we are using the Instruct model which does not generate `<think>` by default.
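+
+ For reference only (V4 does not need two-pass generation), a minimal sketch of how it could be done with these verified IDs; `model` and `inputs` follow the naming used in Cell 7 and are otherwise illustrative:
+
+ ```python
+ # Pass 1: generate the reasoning, stopping as soon as </think> (ID 49117) is emitted.
+ first_pass = model.generate(
+     **inputs,
+     max_new_tokens=1024,
+     eos_token_id=[49117],  # </think>
+ )
+
+ # Pass 2: continue from prompt + reasoning and produce the final answer,
+ # stopping at the normal <|im_end|> (ID 2).
+ second_pass = model.generate(
+     input_ids=first_pass,
+     max_new_tokens=256,
+     eos_token_id=[2],  # <|im_end|>
+ )
+ ```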
233
+
234
+ ### Model Architecture (from `config.json`)
235
+
236
+ ```
237
+ model_type: qwen3
238
+ architectures: Qwen3ForCausalLM
239
+ num_hidden_layers: 28
240
+ hidden_size: 1024
241
+ intermediate_size: 3072
242
+ num_attention_heads: 16
243
+ num_key_value_heads: 8
244
+ head_dim: 128
245
+ vocab_size: 49152
246
+ max_position_embeddings: 4096
247
+ tie_word_embeddings: true
248
+ use_cache: false ← MUST OVERRIDE TO true
249
+ rope_theta: 1000000
250
+ dtype: bfloat16
251
+ Parameters: 490,799,104
252
+ ```
253
+
254
+ ### Generation Config (from `generation_config.json` — ALL must be overridden)
255
+
256
+ ```
257
+ temperature: 0.1 ← OVERRIDE to 1.0 for training, 0.1 for eval
258
+ repetition_penalty: 1.2 ← OVERRIDE to 1.0 for training
259
+ do_sample: true
260
+ max_new_tokens: 1024
261
+ eos_token_id: [2] (= <|im_end|>)
262
+ ```
263
+
264
+ ### Instruct Chat Template Behavior (from `chat_template.jinja`)
265
+
266
+ The Instruct template applies the standard ChatML format:
267
+
268
+ ```
269
+ <|im_start|>system
270
+ {system_content}<|im_end|>
271
+ <|im_start|>user
272
+ {user_content}<|im_end|>
273
+ <|im_start|>assistant
274
+ {assistant_content}<|im_end|>
275
+ ```
276
+
277
+ With `add_generation_prompt=True`, the template appends `<|im_start|>assistant\n` to prompt the model to generate. **There is no `<think>` injection anywhere** in the Instruct template. The assistant block is rendered as plain `{content}` without any reasoning wrapper.
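+
+ A quick sanity check of this behavior (a sketch; it assumes the `tokenizer` loaded in Cell 4 and only renders the template, no generation):
+
+ ```python
+ msgs = [
+     {"role": "system", "content": "Você é um assistente de e-commerce."},
+     {"role": "user", "content": "Classifique o sentimento desta avaliação."},
+ ]
+ rendered = tokenizer.apply_chat_template(msgs, tokenize=False, add_generation_prompt=True)
+ print(rendered)
+
+ # Expect the rendering to end with "<|im_start|>assistant\n" and, critically,
+ # to contain no "<think>" token anywhere.
+ assert "<think>" not in rendered
+ print(repr(rendered[-40:]))
+ ```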
278
+
279
+ ### APO Training Details (from `training_config_apo.yaml`)
280
+
281
+ ```
282
+ loss_type: apo_zero
283
+ dpo_beta: 0.5
284
+ max_steps: 1115
285
+ max_learning_rate: 0.000005
286
+ num_train_epochs: 5
287
+ total_batch_size: 524288
288
+ reference_model: Tucano2-qwen-0.5B-Instruct-SFT
289
+ precompute_ref_log_probs: true
290
+ ```
291
+
292
+ The Instruct model had 1,115 steps of APO with `dpo_beta=0.5`. This is a moderate preference optimization — it creates a soft bias toward SFT behavior, not a hard constraint. With `beta=0.0` in GRPO (no KL penalty) and `LR=2e-6`, the GRPO gradient should be strong enough to move the policy.
293
+
294
+ ### SFT Training Details (from `training_config_sft.yaml`)
295
+
296
+ ```
297
+ Data: 874M tokens across 9 categories:
298
+ - code: ~2.3M tokens
299
+ - function_call: ~17.5M tokens
300
+ - general: ~700M tokens
301
+ - math_cot: ~27M tokens
302
+ - retrieval: ~2.2M tokens
303
+ - structured: ~35M tokens
304
+ - summarization: ~290K tokens
305
+ - translation: ~5.7M tokens
306
+ - dpo (chosen): ~14M tokens
307
+
308
+ max_steps: 68,635
309
+ max_learning_rate: 0.000085
310
+ assistant_only_loss: true
311
+ ```
312
+
313
+ The model was trained on structured output (35M tokens) and function calling (17.5M tokens) — it has a strong foundation for extraction tasks.
314
+
315
+ ---
316
+
317
+ ## 5. Implementation: Cell-by-Cell Notebook Specification
318
+
319
+ The notebook is `v4_instruct_grpo.ipynb`. Each cell is a gate — verify output before proceeding.
320
+
321
+ ### Cell 1: Dependencies
322
+
323
+ ```python
324
+ # Cell 1 — Clean install
325
+ # Run after kernel restart
326
+
327
+ !pip install "unsloth"
328
+ !pip install "trl==0.24.0" --no-deps
329
+ !pip install "rich" "wandb"
330
+ ```
331
+
332
+ **Gate:** No errors. Verify TRL 0.24.0 installed.
333
+
334
+ ### Cell 2: GPU + Unsloth Verification
335
+
336
+ ```python
337
+ import torch
338
+
339
+ print(f"CUDA available: {torch.cuda.is_available()}")
340
+ print(f"GPU: {torch.cuda.get_device_name(0)}")
341
+ print(f"VRAM: {torch.cuda.get_device_properties(0).total_mem / 1e9:.1f} GB")
342
+ print(f"bf16 support: {torch.cuda.is_bf16_supported()}")
343
+
344
+ from unsloth import FastLanguageModel
345
+ print(f"\n✓ Unsloth loaded")
346
+
347
+ import trl
348
+ assert trl.__version__ == "0.24.0", f"Expected TRL 0.24.0, got {trl.__version__}"
349
+ print(f"✓ TRL {trl.__version__}")
350
+
351
+ import transformers
352
+ print(f"✓ Transformers {transformers.__version__}")
353
+ ```
354
+
355
+ **Gate:** CUDA available, bf16=True, VRAM > 20GB, TRL 0.24.0.
356
+
357
+ ### Cell 3: Config Constants
358
+
359
+ ```python
360
+ import os
361
+ import json
362
+ import re
363
+ import time
364
+ import random
365
+ from pathlib import Path
366
+
367
+ # ── Disable Unsloth kernel recompilation ─────────────────────────────────────
368
+ os.environ["UNSLOTH_COMPILE_DISABLE"] = "1"
369
+ os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"
370
+
371
+ # ── Model ────────────────────────────────────────────────────────────────────
372
+ MODEL_ID = "Polygl0t/Tucano2-qwen-0.5B-Instruct"
373
+ MAX_SEQ_LENGTH = 2048 # model supports 4096, but 2048 is plenty for Instruct (no <think> overhead)
374
+ ADAPTER_DIR = Path("models/tucano2-0.5B-instruct-grpo-v4")
375
+ CHECKPOINT_DIR = ADAPTER_DIR / "checkpoints"
376
+
377
+ # ── Data ─────────────────────────────────────────────────────────────────────
378
+ DATA_DIR = Path("data/pairs")
379
+ TRAIN_FILE = DATA_DIR / "train.jsonl"
380
+ EVAL_SPLIT = 0.10 # 10% held out for eval
381
+
382
+ # ── GRPO Hyperparameters ─────────────────────────────────────────────────────
383
+ NUM_GENERATIONS = 16 # 0.5B + short completions = VRAM allows G=16
384
+ MAX_COMPLETION_LENGTH = 512 # Instruct: no <think> overhead. Extraction ~100, SQL ~200, insights ~300
385
+ TEMPERATURE = 1.0 # Skywork-OR1: τ=1.0 for exploration
386
+ LEARNING_RATE = 2e-6 # Dr. GRPO: 4× V2's 5e-7 (clip_ratio=0 → push harder)
387
+ BETA = 0.0 # Dr. GRPO §3.2: β=0 optimal for rule-based rewards
388
+ SCALE_REWARDS = False # Dr. GRPO: remove std normalization bias
389
+ BATCH_SIZE = 2 # per-device batch size
390
+ GRAD_ACCUM = 1 # effective batch = 2 * 1 = 2 prompts * 16 gen = 32 completions
391
+ MAX_STEPS = 200 # validation run
392
+ SAVE_STEPS = 20
393
+ EVAL_STEPS = 10
394
+ EARLY_STOPPING_PATIENCE = 15
395
+ EARLY_STOPPING_DELTA = 0.005
396
+
397
+ # ── LoRA ─────────────────────────────────────────────────────────────────────
398
+ LORA_R = 16
399
+ LORA_ALPHA = 32
400
+
401
+ # ── Monitoring ───────────────────────────────────────────────────────────────
402
+ WANDB_PROJECT = "tucano2-commerce"
403
+ EVAL_MAX_SAMPLES = 15 # eval callback samples
404
+ EVAL_MAX_TOKENS = 512 # match training completion length
405
+
406
+ # ── Task Classification (inherited from V2/V3) ──────────────────────────────
407
+ VALID_SENTIMENTS = {"positive", "negative", "neutral"}
408
+ VALID_CATEGORIES = {
409
+ "delivery_delay", "product_quality", "product_not_received",
410
+ "wrong_product", "seller_communication", "app_issue",
411
+ "price_value", "other", "none",
412
+ }
413
+ VALID_CHURN = {"low", "medium", "high"}
414
+ VALID_REPEAT = {"yes", "no", "maybe"}
415
+ EXTRACTION_FIELDS = [
416
+ "sentiment", "sentiment_score", "churn_risk", "delivery_issue",
417
+ "product_issue", "seller_issue", "main_complaint",
418
+ "complaint_category", "repeat_intent", "would_recommend",
419
+ ]
420
+
421
+ # ── Verified Special Token IDs (from tokenizer_config.json) ─────────────────
422
+ # These are constants — do NOT recompute via tokenizer.encode()
423
+ TOKEN_ID_BOS = 1 # <|im_start|>
424
+ TOKEN_ID_EOS = 2 # <|im_end|>
425
+ TOKEN_ID_PAD = 49109 # <|pad|>
426
+ TOKEN_ID_THINK = 49116 # <think>
427
+ TOKEN_ID_THINK_END = 49117 # </think>
428
+
429
+ print("✓ Config loaded")
430
+ print(f" Model: {MODEL_ID}")
431
+ print(f" G={NUM_GENERATIONS}, max_comp={MAX_COMPLETION_LENGTH}, temp={TEMPERATURE}")
432
+ print(f" LR={LEARNING_RATE}, β={BETA}, scale_rewards={SCALE_REWARDS}")
433
+ print(f" LoRA r={LORA_R}, α={LORA_ALPHA}")
434
+ print(f" Max steps: {MAX_STEPS}")
435
+ ```
436
+
437
+ ### Cell 4: Load Model + Apply Critical Overrides
438
+
439
+ ```python
440
+ from unsloth import FastLanguageModel
441
+
442
+ print("Loading model...")
443
+ model, tokenizer = FastLanguageModel.from_pretrained(
444
+ model_name=MODEL_ID,
445
+ max_seq_length=MAX_SEQ_LENGTH,
446
+ load_in_4bit=True,
447
+ dtype=None, # auto-detect
448
+ )
449
+
450
+ # ═══════════════════════════════════════════════════════════════════════════════
451
+ # CRITICAL OVERRIDES — generation_config ships with values that destroy GRPO
452
+ # Source: Polygl0t/Tucano2-qwen-0.5B-Instruct/generation_config.json
453
+ # temperature: 0.1 → override to 1.0
454
+ # repetition_penalty: 1.2 → override to 1.0
455
+ # use_cache: false → override to true
456
+ # ═══════════════════════════════════════════════════════════════════════════════
457
+
458
+ model.config.use_cache = True
459
+ model.generation_config.use_cache = True
460
+ model.generation_config.temperature = TEMPERATURE
461
+ model.generation_config.repetition_penalty = 1.0 # CRITICAL: 1.2 suppresses diversity
462
+ model.generation_config.do_sample = True
463
+ model.generation_config.top_k = 0 # disable top-k — let temperature control diversity
464
+ model.generation_config.top_p = 1.0 # disable top-p
465
+
466
+ # Pad token
467
+ if tokenizer.pad_token is None:
468
+ tokenizer.pad_token = tokenizer.eos_token
469
+
470
+ print(f"✓ Model loaded on {model.device}")
471
+ print(f" use_cache: {model.config.use_cache}")
472
+ print(f" temperature: {model.generation_config.temperature}")
473
+ print(f" repetition_penalty: {model.generation_config.repetition_penalty}")
474
+ print(f" top_k: {model.generation_config.top_k}")
475
+ print(f" Params: {sum(p.numel() for p in model.parameters()) / 1e6:.0f}M")
476
+
477
+ # ═══════════════════════════════════════════════════════════════════════════════
478
+ # TIED EMBEDDINGS CHECK
479
+ # Source: config.json has "tie_word_embeddings": true
480
+ # If Unsloth LoRA patching breaks this, log it (may not be fatal).
481
+ # ═══════════════════════════════════════════════════════════════════════════════
482
+
483
+ try:
484
+ lm_ptr = model.lm_head.weight.data_ptr()
485
+ embed_ptr = model.model.embed_tokens.weight.data_ptr()
486
+ tied = lm_ptr == embed_ptr
487
+ print(f" Tied embeddings intact: {tied}")
488
+ if not tied:
489
+ print(" ⚠️ WARNING: Tied embeddings broken after Unsloth load. May affect output head gradients.")
490
+ except AttributeError as e:
491
+ print(f" ⚠️ Could not check tied embeddings: {e}")
492
+ ```
493
+
494
+ **Gate:** Model loaded, `use_cache=True`, `repetition_penalty=1.0`, `temperature=1.0`.
495
+
496
+ ### Cell 5: Token ID Verification
497
+
498
+ ```python
499
+ # Verify that the constants from Cell 3 match the actual tokenizer
500
+ # Do NOT skip this cell — if IDs don't match, all reward functions break
501
+
502
+ tok_tests = {
503
+ "<|im_start|>": TOKEN_ID_BOS,
504
+ "<|im_end|>": TOKEN_ID_EOS,
505
+ "<|pad|>": TOKEN_ID_PAD,
506
+ "<think>": TOKEN_ID_THINK,
507
+ "</think>": TOKEN_ID_THINK_END,
508
+ }
509
+
510
+ all_pass = True
511
+ for text, expected_id in tok_tests.items():
512
+ # For special tokens registered in added_tokens, encode should return single ID
513
+ ids = tokenizer.encode(text, add_special_tokens=False)
514
+ actual_id = ids[0] if len(ids) == 1 else ids
515
+ match = (len(ids) == 1 and ids[0] == expected_id)
516
+ status = "✓" if match else "✗"
517
+ print(f" {status} '{text}' → expected {expected_id}, got {actual_id}")
518
+ if not match:
519
+ all_pass = False
520
+
521
+ assert all_pass, "Token ID mismatch detected. Update constants in Cell 3 before proceeding."
522
+ print("\n✓ All token IDs verified")
523
+
524
+ # Also verify eos_token_id is correct
525
+ assert tokenizer.eos_token_id == TOKEN_ID_EOS, f"eos_token_id mismatch: {tokenizer.eos_token_id}"
526
+ print(f"✓ eos_token_id = {tokenizer.eos_token_id}")
527
+ ```
528
+
529
+ **Gate:** All token IDs match. Single-token `<think>` (49116) and `</think>` (49117) confirmed.
530
+
531
+ ### Cell 6: KV Cache Diagnostic
532
+
533
+ ```python
534
+ # Copied from V2 Cell 5b — verify KV cache is working
535
+ # Gate: ratio < 3× → KV cache OK. ratio > 5× → BROKEN, abort.
536
+
537
+ FastLanguageModel.for_inference(model)
538
+
539
+ _kv_msgs = [{"role": "user", "content": "Qual a categoria de reclamação mais frequente?"}]
540
+ _kv_text = tokenizer.apply_chat_template(_kv_msgs, tokenize=False, add_generation_prompt=True)
541
+ _kv_inputs = tokenizer(_kv_text, return_tensors="pt").to(model.device)
542
+
543
+ _token_times, _past, _generated = [], None, _kv_inputs["input_ids"]
544
+ with torch.no_grad():
545
+ for _step in range(50):
546
+ _t0 = time.time()
547
+ seq_len = _generated.shape[1]
548
+ if _past is None:
549
+ _position_ids = torch.arange(seq_len, dtype=torch.long, device=model.device).unsqueeze(0)
550
+ else:
551
+ _position_ids = torch.tensor([[seq_len - 1]], dtype=torch.long, device=model.device)
552
+ _out = model(
553
+ input_ids=_generated[:, -1:] if _past else _generated,
554
+ position_ids=_position_ids,
555
+ attention_mask=torch.ones(1, seq_len, device=model.device),
556
+ past_key_values=_past,
557
+ use_cache=True,
558
+ return_dict=True,
559
+ )
560
+ _past = _out.past_key_values
561
+ _next = _out.logits[:, -1, :].argmax(dim=-1, keepdim=True)
562
+ _generated = torch.cat([_generated, _next], dim=1)
563
+ _token_times.append(time.time() - _t0)
564
+
565
+ _ratio = sum(_token_times[45:]) / max(sum(_token_times[:5]), 1e-9)
566
+ print(f"First 5 tok: {[f'{t*1000:.0f}ms' for t in _token_times[:5]]}")
567
+ print(f"Last 5 tok: {[f'{t*1000:.0f}ms' for t in _token_times[45:]]}")
568
+ print(f"Ratio last/first: {_ratio:.1f}x")
569
+ assert _ratio < 5, f"KV cache BROKEN (ratio {_ratio:.1f}×). Check model.config.use_cache."
570
+ print("✓ KV cache working correctly")
571
+
572
+ del _past, _generated, _kv_inputs, _token_times, _out
573
+ import gc; gc.collect()
574
+ torch.cuda.empty_cache()
575
+ ```
576
+
577
+ **Gate:** Ratio < 3× (the in-cell assert only hard-aborts above 5×).
578
+
579
+ ### Cell 7: Single Inference Test
580
+
581
+ ```python
582
+ # Verify model generates coherent Portuguese and closes <|im_end|>
583
+
584
+ FastLanguageModel.for_inference(model)
585
+
586
+ test_msgs = [
587
+ {"role": "system", "content": "Você é um assistente de IA especializado em e-commerce brasileiro."},
588
+ {"role": "user", "content": "Analise esta avaliação: 'Produto chegou quebrado, péssima embalagem. Nunca mais compro aqui.' Retorne um objeto JSON com os campos: sentiment, sentiment_score, delivery_issue, complaint_category."},
589
+ ]
590
+ text = tokenizer.apply_chat_template(test_msgs, tokenize=False, add_generation_prompt=True)
591
+ inputs = tokenizer(text, return_tensors="pt").to(model.device)
592
+
593
+ t0 = time.time()
594
+ outputs = model.generate(
595
+ **inputs,
596
+ max_new_tokens=256,
597
+ temperature=0.1, # low temp for deterministic eval
598
+ do_sample=True,
599
+ repetition_penalty=1.0,
600
+ )
601
+ elapsed = time.time() - t0
602
+
603
+ response = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
604
+ print(f"Generation time: {elapsed:.1f}s")
605
+ print(f"Response length: {len(response)} chars")
606
+ print(f"Contains <think>: {'<think>' in response}")
607
+ print(f"Contains JSON {{ }}: {'{' in response and '}' in response}")
608
+ print(f"\n{'='*60}")
609
+ print(response[:500])
610
+ ```
611
+
612
+ **Gate:** Response is coherent Portuguese. Check whether `<think>` appears (document the result — this tells us if the Instruct model spontaneously thinks). Check if JSON structure is present.
613
+
614
+ ### Cell 8: Reward Functions
615
+
616
+ Complete reward functions — see Section 6 below for the full specification. This cell defines:
617
+
618
+ - `strip_think(text)` — remove `<think>...</think>` blocks
619
+ - `has_think_block(text)` — check for think blocks
620
+ - `_classify_task_type(prompt_text)` — classify prompt into task type
621
+ - `_extract_json(text)` — extract JSON from text robustly
622
+ - `reward_extraction(completion)` — continuous reward for JSON extraction (max 1.0)
623
+ - `reward_sql_qa(completion)` — continuous reward for SQL Q&A (max 1.0)
624
+ - `reward_insights(completion)` — continuous reward for insights (max 1.0)
625
+ - `reward_push(completion)` — continuous reward for push notifications (max 1.0)
626
+ - `commerce_reward_fn(completions, prompts, **kwargs)` — master dispatch function
627
+
628
+ ### Cell 9: Reward Calibration
629
+
630
+ ```python
631
+ # Load data, classify by task type, run calibration on 8 diverse samples
632
+
633
+ by_type = {"extraction": [], "sql_qa": [], "insights": [], "push": []}
634
+ with open(TRAIN_FILE) as f:
635
+ for line in f:
636
+ row = json.loads(line)
637
+ convs = row["conversations"]
638
+ prompt_msgs = [m for m in convs if m["role"] in ("system", "user")]
639
+ if not prompt_msgs:
640
+ continue
641
+ user_text = " ".join(m["content"] for m in prompt_msgs if m["role"] == "user")
642
+ task = _classify_task_type(user_text)
643
+ by_type[task].append(prompt_msgs)
644
+
645
+ print(f"Prompts by type: {', '.join(f'{k}={len(v)}' for k, v in by_type.items())}")
646
+
647
+ # Pick 2 samples per task type = 8 total
648
+ rng = random.Random(42)
649
+ cal_samples = []
650
+ for task_type in by_type:
651
+ pool = by_type[task_type]
652
+ if len(pool) >= 2:
653
+ cal_samples.extend(rng.sample(pool, 2))
654
+ elif pool:
655
+ cal_samples.extend(pool)
656
+
657
+ FastLanguageModel.for_inference(model)
658
+ print(f"\nReward calibration ({len(cal_samples)} samples):")
659
+ print("-" * 60)
660
+
661
+ cal_rewards = []
662
+ for i, msgs in enumerate(cal_samples):
663
+ text = tokenizer.apply_chat_template(msgs, tokenize=False, add_generation_prompt=True)
664
+ inputs = tokenizer(text, return_tensors="pt").to(model.device)
665
+ outputs = model.generate(
666
+ **inputs,
667
+ max_new_tokens=MAX_COMPLETION_LENGTH,
668
+ temperature=0.7,
669
+ do_sample=True,
670
+ repetition_penalty=1.0,
671
+ )
672
+ response = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
673
+ r = commerce_reward_fn([response], [text])[0]
674
+ cal_rewards.append(r)
675
+ task = _classify_task_type(" ".join(m.get("content", "") for m in msgs if m["role"] == "user"))
676
+ has_think = "<think>" in response
677
+ answer_preview = strip_think(response)[:100]
678
+ print(f" Sample {i+1} [{task:12s}]: reward={r:.2f} | has_think={has_think} | {answer_preview}")
679
+
680
+ print(f"\nMean={sum(cal_rewards)/len(cal_rewards):.2f}, Min={min(cal_rewards):.2f}, Max={max(cal_rewards):.2f}")
681
+ print(f"Reward variance > 0: {len(set(f'{r:.4f}' for r in cal_rewards)) > 1}")
682
+ ```
683
+
684
+ **Gate:** Mean reward < 0.90 (if already ~1.0, the reward function is too easy — GRPO won't learn). Variance > 0. Document whether `<think>` appeared.
685
+
686
+ ### Cell 10: Dataset Preparation
687
+
688
+ ```python
689
+ from datasets import Dataset
690
+
691
+ def prepare_datasets(train_file, eval_ratio=EVAL_SPLIT, seed=42):
692
+ rng = random.Random(seed)
693
+
694
+ all_records = []
695
+ with open(train_file) as f:
696
+ for line in f:
697
+ row = json.loads(line)
698
+ convs = row["conversations"]
699
+ prompt_msgs = [m for m in convs if m["role"] in ("system", "user")]
700
+ if prompt_msgs:
701
+ all_records.append(prompt_msgs)
702
+
703
+ rng.shuffle(all_records)
704
+ n_eval = max(1, int(len(all_records) * eval_ratio))
705
+ eval_records = all_records[:n_eval]
706
+ train_records = all_records[n_eval:]
707
+
708
+ # Log task distribution
709
+ for label, records in [("train", train_records), ("eval", eval_records)]:
710
+ dist = {}
711
+ for msgs in records:
712
+ user_text = " ".join(m["content"] for m in msgs if m["role"] == "user")
713
+ task = _classify_task_type(user_text)
714
+ dist[task] = dist.get(task, 0) + 1
715
+ print(f" {label}: {len(records)} prompts — {dist}")
716
+
717
+ train_ds = Dataset.from_list([{"prompt": msgs} for msgs in train_records])
718
+ eval_ds = Dataset.from_list([{"prompt": msgs} for msgs in eval_records])
719
+ return train_ds, eval_ds
720
+
721
+ train_dataset, eval_dataset = prepare_datasets(TRAIN_FILE)
722
+ print(f"\n✓ Datasets: train={len(train_dataset)}, eval={len(eval_dataset)}")
723
+ ```
724
+
725
+ **Gate:** Train has ~1,650 prompts, eval has ~180. All 4 task types present in both.
726
+
727
+ ### Cell 11: Smoke Test (1 Step)
728
+
729
+ ```python
730
+ from trl import GRPOConfig, GRPOTrainer
731
+
732
+ FastLanguageModel.for_training(model)
733
+
734
+ smoke_config = GRPOConfig(
735
+ output_dir=str(CHECKPOINT_DIR / "smoke"),
736
+ num_generations=NUM_GENERATIONS,
737
+ scale_rewards=SCALE_REWARDS,
738
+ max_completion_length=MAX_COMPLETION_LENGTH,
739
+ max_steps=1,
740
+ temperature=TEMPERATURE,
741
+ per_device_train_batch_size=BATCH_SIZE,
742
+ gradient_accumulation_steps=1,
743
+ learning_rate=LEARNING_RATE,
744
+ fp16=False,
745
+ bf16=True,
746
+ logging_steps=1,
747
+ save_steps=999,
748
+ report_to="none",
749
+ max_prompt_length=MAX_SEQ_LENGTH // 2,
750
+ seed=42,
751
+ remove_unused_columns=False,
752
+ )
753
+
754
+ # ── UnslothGRPOTrainer (inherited from V2/V3) ────────────────────────────────
755
+ class UnslothGRPOTrainer(GRPOTrainer):
756
+ def _generate(self, prompts, images):
757
+ FastLanguageModel.for_inference(self.model)
758
+ try:
759
+ result = super()._generate(prompts, images)
760
+ finally:
761
+ FastLanguageModel.for_training(self.model)
762
+ return result
763
+
764
+ smoke_trainer = UnslothGRPOTrainer(
765
+ model=model,
766
+ reward_funcs=commerce_reward_fn,
767
+ args=smoke_config,
768
+ train_dataset=train_dataset,
769
+ processing_class=tokenizer,
770
+ )
771
+
772
+ t0 = time.time()
773
+ smoke_trainer.train()
774
+ step_time = time.time() - t0
775
+
776
+ peak_vram = torch.cuda.max_memory_allocated() / 1e9
777
+ print(f"\n✓ Smoke test passed!")
778
+ print(f" Step time: {step_time:.0f}s")
779
+ print(f" Peak VRAM: {peak_vram:.1f}GB / {torch.cuda.get_device_properties(0).total_mem / 1e9:.1f}GB")
780
+ print(f" Estimated full run ({MAX_STEPS} steps): {step_time * MAX_STEPS / 3600:.1f}h")
781
+
782
+ del smoke_trainer
783
+ gc.collect(); torch.cuda.empty_cache()
784
+ ```
785
+
786
+ **Gate:** No OOM. Peak VRAM < 20GB. Step time < 180s. Document whether ref model was loaded (check VRAM: if peak > 1.0GB, ref model is loaded; if ~0.5GB, it's skipped due to β=0).
787
+
788
+ ### Cell 12: Probe Run (10 Steps) — THE CRITICAL GATE
789
+
790
+ ```python
791
+ FastLanguageModel.for_training(model)
792
+
793
+ probe_config = GRPOConfig(
794
+ output_dir=str(CHECKPOINT_DIR / "probe"),
795
+ num_generations=NUM_GENERATIONS,
796
+ scale_rewards=SCALE_REWARDS,
797
+ max_completion_length=MAX_COMPLETION_LENGTH,
798
+ max_steps=10,
799
+ temperature=TEMPERATURE,
800
+ num_train_epochs=1,
801
+ per_device_train_batch_size=BATCH_SIZE,
802
+ gradient_accumulation_steps=GRAD_ACCUM,
803
+ learning_rate=LEARNING_RATE,
804
+ warmup_ratio=0.1,
805
+ lr_scheduler_type="cosine",
806
+ fp16=False,
807
+ bf16=True,
808
+ logging_steps=1,
809
+ save_steps=999,
810
+ report_to="none",
811
+ max_prompt_length=MAX_SEQ_LENGTH // 2,
812
+ seed=42,
813
+ remove_unused_columns=False,
814
+ )
815
+
816
+ probe_trainer = UnslothGRPOTrainer(
817
+ model=model,
818
+ reward_funcs=commerce_reward_fn,
819
+ args=probe_config,
820
+ train_dataset=train_dataset,
821
+ processing_class=tokenizer,
822
+ )
823
+
824
+ t0 = time.time()
825
+ result = probe_trainer.train()
826
+ elapsed = time.time() - t0
827
+
828
+ # ══════════════════════════════════════════════════════════════════════════════
829
+ # CRITICAL GATE: clip_ratio > 0 on at least 3 of 10 steps
830
+ # If this fails, STOP. See Fallback Plan (Section 8 of ADR-002).
831
+ # ══════════════════════════════════════════════════════════════════════════════
832
+ # TRL logs clip_ratio in the training history, but the exact key name varies across
+ # TRL versions (e.g. "clip_ratio" vs "clip_ratio/region_mean"), and log_history keys
+ # typically do not carry the "train/" prefix shown in W&B. Match any key containing
+ # "clip_ratio" to stay robust.
+ clip_ratios = []
+ for entry in probe_trainer.state.log_history:
+     for _key, _val in entry.items():
+         if "clip_ratio" in _key and isinstance(_val, (int, float)):
+             clip_ratios.append(_val)
+             break
837
+
838
+ nonzero_clips = sum(1 for cr in clip_ratios if cr > 0.0)
839
+ print(f"\n{'='*60}")
840
+ print(f"PROBE RESULTS ({elapsed:.0f}s, {elapsed/10:.0f}s/step)")
841
+ print(f" clip_ratios: {[f'{cr:.4f}' for cr in clip_ratios]}")
842
+ print(f" Non-zero clip steps: {nonzero_clips}/{len(clip_ratios)}")
843
+ print(f" Train loss: {result.training_loss:.4f}")
844
+ print(f"{'='*60}")
845
+
846
+ if nonzero_clips >= 3:
847
+ print("✓ PROBE GATE PASSED — proceed to full training")
848
+ elif nonzero_clips > 0:
849
+ print("⚠️ MARGINAL — clip_ratio > 0 on some steps but < 3. Consider increasing LR or G.")
850
+ else:
851
+ print("✗ PROBE GATE FAILED — clip_ratio = 0 on ALL steps.")
852
+ print(" DO NOT proceed to full training.")
853
+ print(" See ADR-002 Section 8 (Fallback Plan).")
854
+
855
+ del probe_trainer
856
+ gc.collect(); torch.cuda.empty_cache()
857
+ ```
858
+
859
+ **Gate:** `nonzero_clips >= 3`. If this fails, go to Section 8.
860
+
861
+ ### Cell 13: W&B Init + Full Training
862
+
863
+ ```python
864
+ import wandb
865
+
866
+ wandb.login()
867
+ wandb.init(
868
+ project=WANDB_PROJECT,
869
+ name=f"grpo-v4-instruct-0.5B-{time.strftime('%Y%m%d-%H%M')}",
870
+ config={
871
+ "model_id": MODEL_ID,
872
+ "version": "v4",
873
+ "num_generations": NUM_GENERATIONS,
874
+ "max_completion_length": MAX_COMPLETION_LENGTH,
875
+ "temperature": TEMPERATURE,
876
+ "learning_rate": LEARNING_RATE,
877
+ "beta": BETA,
878
+ "scale_rewards": SCALE_REWARDS,
879
+ "batch_size": BATCH_SIZE,
880
+ "grad_accum": GRAD_ACCUM,
881
+ "max_steps": MAX_STEPS,
882
+ "lora_r": LORA_R,
883
+ "lora_alpha": LORA_ALPHA,
884
+ "train_prompts": len(train_dataset),
885
+ "eval_prompts": len(eval_dataset),
886
+ "repetition_penalty_override": 1.0,
887
+ },
888
+ )
889
+ print(f"✓ W&B run: {wandb.run.url}")
890
+
891
+ # ── EvalRewardCallback (inherited from V2/V3, adapted) ──────────────────────
892
+ from transformers import TrainerCallback
893
+
894
+ class EvalRewardCallback(TrainerCallback):
895
+ def __init__(self, eval_records, reward_fn, patience, delta):
896
+ self.eval_records = eval_records
897
+ self.reward_fn = reward_fn
898
+ self.patience = patience
899
+ self.delta = delta
900
+ self.best_reward = -float("inf")
901
+ self.best_step = 0
902
+ self.no_improve_count = 0
903
+
904
+ def on_step_end(self, args, state, control, model=None, processing_class=None, **kwargs):
905
+ if state.global_step == 0 or state.global_step % EVAL_STEPS != 0:
906
+ return control
907
+
908
+ tokenizer_local = processing_class
909
+ if tokenizer_local is None:
910
+ print("[EvalRewardCallback] WARNING: tokenizer is None, skipping eval")
911
+ return control
912
+
913
+ mean_reward = self._run_eval(model, tokenizer_local, args)
914
+ improved = mean_reward > self.best_reward + self.delta
915
+
916
+ wandb.log({
917
+ "eval/mean_reward": mean_reward,
918
+ "eval/best_reward": max(self.best_reward, mean_reward),
919
+ "eval/no_improve_count": self.no_improve_count,
920
+ }, step=state.global_step)
921
+
922
+ status = "↑ improved" if improved else f"↔ no gain ({self.no_improve_count + 1}/{self.patience})"
923
+ print(f"\n[EvalReward] step={state.global_step} | mean={mean_reward:.4f} | best={self.best_reward:.4f} | {status}")
924
+
925
+ if improved:
926
+ self.best_reward = mean_reward
927
+ self.best_step = state.global_step
928
+ self.no_improve_count = 0
929
+ else:
930
+ self.no_improve_count += 1
931
+ if self.no_improve_count >= self.patience:
932
+ print(f"[EarlyStopping] No improvement for {self.patience} evals. Halting.")
933
+ control.should_training_stop = True
934
+ return control
935
+
936
+ def _run_eval(self, model, tokenizer_local, args):
937
+ FastLanguageModel.for_inference(model)
938
+ rewards = []
939
+ subset = self.eval_records[:EVAL_MAX_SAMPLES]
940
+ for record in subset:
941
+ msgs = record["prompt"]
942
+ text = tokenizer_local.apply_chat_template(msgs, tokenize=False, add_generation_prompt=True)
943
+ inputs = tokenizer_local(text, return_tensors="pt", truncation=True, max_length=args.max_prompt_length).to(model.device)
944
+ with torch.no_grad():
945
+ out = model.generate(
946
+ **inputs,
947
+ max_new_tokens=EVAL_MAX_TOKENS,
948
+ temperature=0.1, # deterministic eval
949
+ do_sample=True,
950
+ repetition_penalty=1.0,
951
+ )
952
+ resp = tokenizer_local.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
953
+ rewards.append(self.reward_fn([resp], [text])[0])
954
+ FastLanguageModel.for_training(model)
955
+ return sum(rewards) / len(rewards) if rewards else 0.0
956
+
957
+ # ── Training ────────────────────────────────────────────────────────────────
958
+ FastLanguageModel.for_training(model)
959
+
960
+ grpo_config = GRPOConfig(
961
+ output_dir=str(CHECKPOINT_DIR),
962
+ num_generations=NUM_GENERATIONS,
963
+ scale_rewards=SCALE_REWARDS,
964
+ max_completion_length=MAX_COMPLETION_LENGTH,
965
+ max_steps=MAX_STEPS,
966
+ temperature=TEMPERATURE,
967
+ num_train_epochs=1,
968
+ per_device_train_batch_size=BATCH_SIZE,
969
+ gradient_accumulation_steps=GRAD_ACCUM,
970
+ learning_rate=LEARNING_RATE,
971
+ warmup_ratio=0.1,
972
+ lr_scheduler_type="cosine",
973
+ fp16=False,
974
+ bf16=True,
975
+ logging_steps=1,
976
+ save_steps=SAVE_STEPS,
977
+ save_total_limit=5,
978
+ save_only_model=True,
979
+ report_to="wandb",
980
+ max_prompt_length=MAX_SEQ_LENGTH // 2,
981
+ seed=42,
982
+ remove_unused_columns=False,
983
+ disable_tqdm=True,
984
+ logging_first_step=True,
985
+ )
986
+
987
+ eval_cb = EvalRewardCallback(
988
+ eval_records=list(eval_dataset),
989
+ reward_fn=commerce_reward_fn,
990
+ patience=EARLY_STOPPING_PATIENCE,
991
+ delta=EARLY_STOPPING_DELTA,
992
+ )
993
+
994
+ trainer = UnslothGRPOTrainer(
995
+ model=model,
996
+ reward_funcs=commerce_reward_fn,
997
+ args=grpo_config,
998
+ train_dataset=train_dataset,
999
+ processing_class=tokenizer,
1000
+ callbacks=[eval_cb],
1001
+ )
1002
+
1003
+ t_start = time.time()
1004
+ result = trainer.train()
1005
+ elapsed = time.time() - t_start
1006
+
1007
+ wandb.log({
1008
+ "train/final_loss": result.training_loss,
1009
+ "train/duration_hours": elapsed / 3600,
1010
+ "train/total_steps": result.global_step,
1011
+ "eval/best_reward_final": eval_cb.best_reward,
1012
+ "eval/best_step": eval_cb.best_step,
1013
+ })
1014
+ wandb.finish()
1015
+
1016
+ print(f"\n{'='*60}")
1017
+ print(f"V4 Training Complete")
1018
+ print(f" Loss: {result.training_loss:.4f}")
1019
+ print(f" Steps: {result.global_step}")
1020
+ print(f" Duration: {elapsed/3600:.1f}h")
1021
+ print(f" Best eval: {eval_cb.best_reward:.4f} (step {eval_cb.best_step})")
1022
+ print(f"{'='*60}")
1023
+ ```
1024
+
1025
+ ### Cell 14: Validation (20 Held-Out Samples)
1026
+
1027
+ ```python
1028
+ # Run validation on 20 held-out samples, broken down by task type
1029
+
1030
+ FastLanguageModel.for_inference(model)
1031
+
1032
+ val_samples = list(eval_dataset)[:20]
1033
+ val_results = {"extraction": [], "sql_qa": [], "insights": [], "push": []}
1034
+
1035
+ for i, record in enumerate(val_samples):
1036
+ msgs = record["prompt"]
1037
+ user_text = " ".join(m["content"] for m in msgs if m["role"] == "user")
1038
+ task = _classify_task_type(user_text)
1039
+
1040
+ text = tokenizer.apply_chat_template(msgs, tokenize=False, add_generation_prompt=True)
1041
+ inputs = tokenizer(text, return_tensors="pt").to(model.device)
1042
+ with torch.no_grad():
1043
+ out = model.generate(
1044
+ **inputs,
1045
+ max_new_tokens=MAX_COMPLETION_LENGTH,
1046
+ temperature=0.1,
1047
+ do_sample=True,
1048
+ repetition_penalty=1.0,
1049
+ )
1050
+ resp = tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
1051
+ r = commerce_reward_fn([resp], [text])[0]
1052
+ val_results[task].append(r)
1053
+ print(f" [{task:12s}] reward={r:.2f} | {strip_think(resp)[:80]}")
1054
+
1055
+ print(f"\n{'='*60}")
1056
+ print("Validation Results by Task:")
1057
+ for task, rewards in val_results.items():
1058
+ if rewards:
1059
+ mean_r = sum(rewards) / len(rewards)
1060
+ print(f" {task:12s}: mean={mean_r:.3f} (n={len(rewards)})")
1061
+ print(f"{'='*60}")
1062
+ ```
1063
+
1064
+ ### Cell 15: Save Adapter
1065
+
1066
+ ```python
1067
+ # Save the GRPO-tuned LoRA adapter
1068
+
1069
+ model.save_pretrained(str(ADAPTER_DIR))
1070
+ tokenizer.save_pretrained(str(ADAPTER_DIR))
1071
+ print(f"✓ Adapter saved to {ADAPTER_DIR}")
1072
+ ```
1073
+
1074
+ ---
1075
+
1076
+ ## 6. Reward Functions: Complete Specification
1077
+
1078
+ These are the exact reward functions the implementing agent must use. They are adapted from V2/V3 with one critical change: `strip_think()` is called defensively on ALL completions before scoring, even for the Instruct model.
1079
+
1080
+ ```python
1081
+ def strip_think(text: str) -> str:
1082
+ """Remove <think>...</think> block, return the answer portion."""
1083
+ return re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()
1084
+
1085
+ def has_think_block(text: str) -> bool:
1086
+ return bool(re.search(r"<think>.+</think>", text, flags=re.DOTALL))
1087
+
1088
+ def _classify_task_type(prompt_text: str) -> str:
1089
+ p = prompt_text.lower()
1090
+ if "retorne um objeto json" in p or "extraia dados" in p or "json" in p:
1091
+ return "extraction"
1092
+ elif "notificação push" in p or "notificação de reengajamento" in p:
1093
+ return "push"
1094
+ elif "perfil do cliente" in p or "retenção" in p or "análise" in p or "insight" in p:
1095
+ return "insights"
1096
+ else:
1097
+ return "sql_qa"
1098
+
1099
+ def _extract_json(text: str) -> dict | None:
1100
+ """Extract first JSON object from text. Returns parsed dict or None."""
1101
+ # Try direct parse first
1102
+ stripped = text.strip()
1103
+ # Remove markdown code blocks if present
1104
+ stripped = re.sub(r"^```(?:json)?\s*", "", stripped)
1105
+ stripped = re.sub(r"\s*```$", "", stripped)
1106
+ stripped = stripped.strip()
1107
+ try:
1108
+ return json.loads(stripped)
1109
+ except (json.JSONDecodeError, TypeError):
1110
+ pass
1111
+ # Try to find JSON object within text
1112
    match = re.search(r"\{[^{}]*(?:\{[^{}]*\}[^{}]*)*\}", text, re.DOTALL)
    if match:
        try:
            return json.loads(match.group())
        except (json.JSONDecodeError, TypeError):
            pass
    return None

def reward_extraction(completion: str) -> float:
    """Continuous reward for extraction tasks (max 1.0)."""
    answer = strip_think(completion)
    data = _extract_json(answer)

    if data is None:
        # Partial credit for JSON-like structure
        if "{" in answer and "}" in answer:
            return 0.05
        return 0.0

    if not isinstance(data, dict):
        return 0.1  # valid JSON but not an object

    score = 0.3  # valid JSON object

    # Schema completeness (0.3 total)
    present = sum(1 for f in EXTRACTION_FIELDS if f in data)
    score += 0.3 * (present / len(EXTRACTION_FIELDS))

    # Value validity (0.4 total, split across checks)
    checks_passed = 0
    checks_total = 0

    for field, validator in [
        ("sentiment", lambda v: v in VALID_SENTIMENTS),
        ("complaint_category", lambda v: v in VALID_CATEGORIES),
        ("churn_risk", lambda v: v in VALID_CHURN),
        ("repeat_intent", lambda v: v in VALID_REPEAT),
        ("sentiment_score", lambda v: isinstance(v, (int, float)) and 1 <= v <= 5),
    ]:
        checks_total += 1
        if field in data and validator(data[field]):
            checks_passed += 1

    for bool_field in ("delivery_issue", "product_issue", "seller_issue", "would_recommend"):
        checks_total += 1
        if bool_field in data and isinstance(data[bool_field], bool):
            checks_passed += 1

    if checks_total > 0:
        score += 0.4 * (checks_passed / checks_total)

    return min(score, 1.0)

def reward_sql_qa(completion: str) -> float:
    """Continuous reward for SQL Q&A (max 1.0)."""
    answer = strip_think(completion)
    if not answer.strip():
        return 0.0

    score = 0.0

    # Numerical content (more numbers = more specific answer)
    numbers = re.findall(r"\d+(?:[.,]\d+)?", answer)
    score += min(0.4, 0.1 * len(numbers))

    # Length: 50-500 chars optimal
    length = len(answer)
    if 50 <= length <= 500:
        score += 0.3
    elif length > 0:
        score += 0.3 * max(0, 1 - abs(length - 275) / 275)

    # Portuguese business vocabulary
    pt_business = ["pedidos", "clientes", "média", "total", "taxa", "vendas",
                   "produtos", "período", "categoria", "região", "faturamento"]
    pt_matches = sum(1 for w in pt_business if w in answer.lower())
    score += min(0.3, 0.06 * pt_matches)

    return min(score, 1.0)

def reward_insights(completion: str) -> float:
    """Continuous reward for insights (max 1.0)."""
    answer = strip_think(completion)
    if not answer.strip():
        return 0.0

    score = 0.0

    # Actionable language
    action_words = ["recomend", "implement", "melhor", "reduzir", "aumentar",
                    "priorizar", "investir", "otimizar", "estratégi", "ação"]
    matches = sum(1 for w in action_words if w in answer.lower())
    score += min(0.4, 0.08 * matches)

    # Length: 100-800 chars optimal
    length = len(answer)
    if 100 <= length <= 800:
        score += 0.3
    elif length > 0:
        score += 0.3 * max(0, 1 - abs(length - 450) / 450)

    # Structure: bullet points, numbered lists, headers
    structure_marks = len(re.findall(r"^[-•*]\s|^\d+[.)]\s|^#{1,3}\s", answer, re.MULTILINE))
    score += min(0.2, 0.04 * structure_marks)

    # Portuguese coherence marker
    if any(w in answer.lower() for w in ["cliente", "produto", "serviço", "empresa"]):
        score += 0.1

    return min(score, 1.0)

def reward_push(completion: str) -> float:
    """Continuous reward for push notifications (max 1.0)."""
    answer = strip_think(completion).strip()
    if not answer:
        return 0.0

    # Length: ≤120 chars gets full credit
    length = len(answer)
    if length <= 120:
        length_score = 0.5
    else:
        length_score = 0.5 * max(0, 1 - (length - 120) / 120)

    # Portuguese content
    pt_markers = re.findall(r"[ãçéêóúâõ]|você|para|como|seu|sua|oferta|desconto|produto",
                            answer, re.IGNORECASE)
    lang_score = min(0.3, 0.03 * len(pt_markers))

    # Non-generic (penalize very generic phrases)
    generic = ["olá", "obrigado pela compra", "agradecemos"]
    is_generic = any(g in answer.lower() for g in generic)
    creativity_score = 0.0 if is_generic else 0.2

    return min(length_score + lang_score + creativity_score, 1.0)

def commerce_reward_fn(completions, prompts, **kwargs) -> list[float]:
    """Master reward function: dispatches by task type."""
    rewards = []
    for completion, prompt in zip(completions, prompts):
        if isinstance(completion, list):
            comp_text = completion[-1]["content"] if completion else ""
        else:
            comp_text = str(completion)

        if isinstance(prompt, list):
            prompt_text = " ".join(m.get("content", "") for m in prompt)
        else:
            prompt_text = str(prompt)

        task = _classify_task_type(prompt_text)

        if task == "extraction":
            rewards.append(reward_extraction(comp_text))
        elif task == "sql_qa":
            rewards.append(reward_sql_qa(comp_text))
        elif task == "insights":
            rewards.append(reward_insights(comp_text))
        elif task == "push":
            rewards.append(reward_push(comp_text))
        else:
            # Fallback: basic coherence
            r = 0.2 if comp_text.strip() else 0.0
            rewards.append(r)

    return rewards

print("✓ Reward functions defined")
```
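
As a quick sanity check before handing the dispatcher to the trainer, the per-task scorers can be exercised directly on made-up strings. This is illustrative only: the completions below are invented, and the exact scores depend on `EXTRACTION_FIELDS` and the other constant lists defined earlier in the cell.

```python
# Illustrative only: invented completions; exact values depend on the constants above.
print(reward_extraction('{"sentiment": "negativo", "delivery_issue": true, "sentiment_score": 2}'))
print(reward_push("Sua oferta chegou: 20% de desconto no produto que você ama!"))

# commerce_reward_fn itself routes by _classify_task_type on the *prompt* text,
# so an end-to-end check needs prompts phrased like the real training data.
```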

---

## 7. Monitoring & Gate Conditions

### Real-Time W&B Monitoring

| Metric | Healthy Range | Stop Condition |
|---|---|---|
| `train/clip_ratio` | > 0 on majority of steps | Still 0 after step 20 on probe → abort |
| `train/frac_reward_zero_std` | < 0.2 | Sustained > 0.5 → entropy collapse |
| `train/reward` | Increasing trend, NOT starting at > 0.85 | Plateau at SFT level → not learning |
| `train/kl` | 0.01 – 0.5 | Near-zero → policy not moving; > 1.0 → instability |
| `train/completion_length` | 50 – 400 | Hitting the 512 ceiling → raise MAX_COMPLETION_LENGTH |
| `eval/mean_reward` | Increasing trend | Plateau → early stopping will fire |

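The two abort conditions can be enforced automatically rather than watched by hand. Below is a minimal sketch, assuming a standard `transformers` `TrainerCallback` attached to the GRPO trainer; the metric key names are assumptions and should be verified against what the pinned TRL version actually logs. The remaining rows (reward trend, KL band, completion-length ceiling) are easier to judge from the W&B dashboards directly.

```python
from transformers import TrainerCallback

class ProbeGateCallback(TrainerCallback):
    """Stop training when the probe-gate conditions in the table above are violated."""

    def __init__(self, clip_key="clip_ratio", zero_std_key="frac_reward_zero_std",
                 clip_deadline_step=20, zero_std_limit=0.5, zero_std_patience=3):
        self.clip_key = clip_key              # assumed log key for train/clip_ratio
        self.zero_std_key = zero_std_key      # assumed log key for train/frac_reward_zero_std
        self.clip_deadline_step = clip_deadline_step
        self.zero_std_limit = zero_std_limit
        self.zero_std_patience = zero_std_patience
        self.saw_nonzero_clip = False
        self.zero_std_breaches = 0

    def on_log(self, args, state, control, logs=None, **kwargs):
        logs = logs or {}
        if logs.get(self.clip_key, 0.0) > 0.0:
            self.saw_nonzero_clip = True

        # Gate 1: clip_ratio still 0 after step 20 → policy is not moving; abort the probe.
        if state.global_step >= self.clip_deadline_step and not self.saw_nonzero_clip:
            control.should_training_stop = True

        # Gate 2: frac_reward_zero_std sustained above 0.5 → entropy collapse.
        if logs.get(self.zero_std_key, 0.0) > self.zero_std_limit:
            self.zero_std_breaches += 1
        else:
            self.zero_std_breaches = 0
        if self.zero_std_breaches >= self.zero_std_patience:
            control.should_training_stop = True

        return control

# trainer.add_callback(ProbeGateCallback())
```
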
### Success Criteria (Post-Training Validation)

| Gate | Target | Pass/Fail |
|---|---|---|
| Extraction mean reward (20 samples) | ≥ 0.30 | Must pass |
| Push mean reward (20 samples) | ≥ 0.40 | Must pass |
| SQL Q&A mean reward (20 samples) | ≥ 0.20 | Should pass (lower bar — harder task for 0.5B) |
| Insights mean reward (20 samples) | ≥ 0.20 | Should pass |
| Overall mean > SFT calibration baseline | Mean V4 > Mean Cell 9 calibration | Must pass |

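A minimal sketch of the gate check itself, reusing the reward functions defined above. `generate_answer` and `eval_prompts` are hypothetical placeholders for the notebook's own generation helper and its 20-prompt held-out sets; the thresholds mirror the table. The overall-mean gate is then a single comparison of the pooled mean against the Cell 9 calibration value.

```python
# Hypothetical gate check: generate_answer and eval_prompts are placeholders
# for the notebook's generation helper and its 20-prompt held-out sets.
GATES = {
    "extraction": (reward_extraction, 0.30),
    "push":       (reward_push,       0.40),
    "sql_qa":     (reward_sql_qa,     0.20),
    "insights":   (reward_insights,   0.20),
}

def check_gates(eval_prompts, generate_answer):
    """Return {task: (mean_reward, passed)} for each gate in the table."""
    results = {}
    for task, (reward_fn, threshold) in GATES.items():
        scores = [reward_fn(generate_answer(p)) for p in eval_prompts[task]]
        mean_reward = sum(scores) / len(scores)
        results[task] = (mean_reward, mean_reward >= threshold)
    return results
```
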
---

## 8. Fallback Plan

### If Probe Gate Fails (clip_ratio = 0 on all 10 steps)

**Step 1: Increase learning rate to 5e-6.** The model may need a stronger gradient push to overcome APO resistance.

**Step 2: If still 0, try 0.5B-Base.** `Polygl0t/Tucano2-qwen-0.5B-Base` exists and has NO APO training. Load it, apply Unsloth LoRA, and repeat the probe. This requires NO SFT step — go directly Base → GRPO. The base model won't follow instructions well initially, but GRPO's reward signal should shape it.

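A sketch of the Step 2 load, assuming the same Unsloth `FastLanguageModel` API used for the main run; the target modules are the usual Qwen-style projections, and all arguments should be checked against the pinned Unsloth version.

```python
from unsloth import FastLanguageModel

# Fallback: the never-APO'd base checkpoint instead of the Instruct model.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Polygl0t/Tucano2-qwen-0.5B-Base",
    max_seq_length=2048,
    load_in_4bit=False,  # 0.5B is small enough that 4-bit quantization is likely unnecessary
)

# Same adapter shape as the main plan (r=16, alpha=32); then rerun the 10-step probe.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
```
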
**Step 3: If Base also shows clip_ratio = 0, the issue is fundamental.** Possible causes: (a) a TRL 0.24.0 bug in the clip-ratio computation, (b) reward functions that score completions too uniformly, which leaves near-zero group advantages, or (c) GRPO at this scale simply not producing large enough per-step probability changes. Try reducing `num_generations` to 8 (fewer completions = larger per-completion gradient contribution) and increasing `learning_rate` to 1e-5.

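If Step 3 is reached, the two knobs amount to a small override of the Section 9 configuration (sketch only; everything else stays as in the decision log):

```python
# Step 3 overrides only; merge into the GRPOConfig kwargs from Section 9.
step3_overrides = dict(
    num_generations=8,   # fewer completions per prompt → larger per-completion gradient share
    learning_rate=1e-5,  # stronger push if 5e-6 still leaves clip_ratio at 0
)
```
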
**Step 4: If all of the above fail, switch to DPO.** Use the SFT model to generate completions, score them with the reward functions, create preference pairs (chosen = highest reward, rejected = lowest reward in each group), and train iterative DPO. This bypasses the GRPO signal-to-noise issue entirely.

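A sketch of the Step 4 pair construction, reusing the reward functions above. `generate_group(prompt, n)` is a hypothetical helper that samples n completions from the SFT model; the output rows use the prompt/chosen/rejected layout that TRL's `DPOTrainer` consumes. Repeating generate → score → pair → DPO for a few rounds gives the iterative variant described above.

```python
def build_dpo_pairs(prompts, generate_group, n=16, min_gap=0.1):
    """Score n sampled completions per prompt; keep best-vs-worst pairs."""
    pairs = []
    for prompt in prompts:
        completions = generate_group(prompt, n)  # hypothetical sampling helper
        scores = commerce_reward_fn(completions=completions,
                                    prompts=[prompt] * len(completions))
        best = max(range(len(scores)), key=scores.__getitem__)
        worst = min(range(len(scores)), key=scores.__getitem__)
        if scores[best] - scores[worst] < min_gap:
            continue  # reward cannot separate this group; skip the prompt
        pairs.append({"prompt": prompt,
                      "chosen": completions[best],
                      "rejected": completions[worst]})
    return pairs
```
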
### If Training Succeeds but Insights/SQL Scores Are < 0.15

The 0.5B model may simply lack the capacity for analytical tasks. Accept this and plan the 3.7B scale-up for those tasks. Use the 0.5B results as validation that GRPO works on Tucano2-Instruct, then apply the validated recipe to `Polygl0t/Tucano2-qwen-3.7B-Instruct`.

---

## 9. Hyperparameter Decision Log

| Parameter | Value | Rationale |
|---|---|---|
| model | `Polygl0t/Tucano2-qwen-0.5B-Instruct` | 2× better benchmarks than Think; no `<think>` overhead; structured output proven in model card |
| temperature | 1.0 | Skywork-OR1 (2505.22312): τ=1.0 delays entropy collapse |
| repetition_penalty | 1.0 (override from 1.2) | 1.2 suppresses diversity; GRPO needs maximally diverse rollouts |
| num_generations | 16 | VRAM headroom at 0.5B allows G=16; more generations = more reward variance = stronger signal |
| max_completion_length | 512 | No `<think>` overhead; extraction ~100 tok, SQL ~200, insights ~300 |
| learning_rate | 2e-6 | Dr. GRPO Appendix G; 4× V2's 5e-7 to push harder against APO |
| beta (KL) | 0.0 | Dr. GRPO §3.2: β=0 optimal for rule-based rewards; no ref model memory needed |
| scale_rewards | False | Dr. GRPO: removes std normalization bias |
| max_steps | 200 | Validation run; extend only if probe passes |
| lora_r | 16 | Standard; matches V2/V3 SFT adapter |
| lora_alpha | 32 | 2× lora_r |
| batch_size | 2 | Effective batch: 2 prompts × 16 gen = 32 completions per step |
| grad_accum | 1 | Keep effective batch small for faster iteration |
| max_seq_length | 2048 | Model supports 4096; 2048 is generous for Instruct (no think overhead) |
| use_cache | True (override from false) | Required for O(n) autoregressive generation |
| top_k | 0 (override from 50) | Disable top-k; let temperature alone control diversity |

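For reference, a sketch of how the decision log maps onto TRL's `GRPOConfig`. The field names are assumed to exist in the pinned TRL release and should be double-checked; values not in the table (output directory, logging) are placeholders. LoRA rank/alpha, `max_seq_length`, and `use_cache` are applied on the model side via Unsloth, not here.

```python
from trl import GRPOConfig

grpo_config = GRPOConfig(
    output_dir="models/tucano2-0.5B-instruct-grpo-v4",  # matches Section 10
    learning_rate=2e-6,
    beta=0.0,                        # no KL penalty → no reference model in memory
    scale_rewards=False,             # Dr. GRPO: drop std normalization
    num_generations=16,
    max_completion_length=512,
    temperature=1.0,
    top_k=0,                         # disable top-k filtering (table value)
    repetition_penalty=1.0,          # override the checkpoint default of 1.2
    per_device_train_batch_size=2,
    gradient_accumulation_steps=1,
    max_steps=200,
    logging_steps=1,                 # placeholder
    report_to="wandb",
)
```
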
---

## 10. File Structure

```
tucano2_pipeline/
├── v4_instruct_grpo.ipynb               ← THE NOTEBOOK (single model, all tasks)
├── data/
│   └── pairs/
│       └── train.jsonl                  ← existing full V2 training set (ALL tasks)
└── models/
    ├── tucano2-commerce-sft/            ← existing V2 SFT adapter (3.7B) — not used in V4
    └── tucano2-0.5B-instruct-grpo-v4/   ← V4 output: Instruct model GRPO adapter
```

No task-specific data splits needed. No Think model artifacts.

---

*ADR-002 authored 2026-04-25. Based on direct audit of model repos `Polygl0t/Tucano2-qwen-0.5B-Instruct` and `Polygl0t/Tucano2-qwen-0.5B-Think`, cross-referenced with `docs/INVESTIGATION_REPORT.md` (20+ papers) and V1–V3 accumulated learnings.*