notebooks: add V4.1 GRPO notebook (parser fix, 600 steps, LR 5e-6, constant_with_warmup)
notebooks/v4_1_instruct_grpo.ipynb
ADDED
@@ -0,0 +1,202 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": "# Tucano2 Commerce β GRPO Training V4.1 (Instruct-Only, 0.5B)\n\n**Reference:** `docs/v4_1-handoff.md` \n**Base:** V4 notebook with 4 targeted changes\n\n## V4.1 Changes from V4\n\n| Change | V4 | V4.1 | Why |\n|--------|-----|------|-----|\n| JSON parser | Brittle regex `_extract_json()` | `json-repair` + PT-BR decimal normalizer | Model penalized for correct PT-BR formatting (reward misspec) |\n| Max steps | 200 | **600** | V4 saw only 13.5% of data; 600 steps covers ~80% once |\n| LR schedule | Cosine (decayed to 1.5e-10) | **Constant with warmup** | LR reached ~0 by step 130, killing learning |\n| Peak LR | 2e-6 | **5e-6** | grad_norm=0.065 was low; model tolerates stronger updates |\n\n**Also added:** Per-task reward breakdown in `EvalRewardCallback` (eval/extraction, eval/sql_qa, eval/insights, eval/push).\n\n**Prerequisites:**\n- Upload `data/pairs/train.jsonl` to `./data/pairs/`\n- Hardware: L4 (24GB), PyTorch kernel, bf16 supported\n- Estimated runtime: ~7h (600 steps Γ ~44s/step)"
|
| 7 |
+
},
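{
"cell_type": "markdown",
"metadata": {},
"source": "**LR schedule change, illustrated (hedged sketch).** Per the table above, V4's cosine schedule had effectively stopped learning partway through the run, while `constant_with_warmup` ramps the LR over the warmup steps and then holds the peak value to the end. A standalone sketch assuming `transformers.get_scheduler` and a dummy optimizer; illustrative only, not part of the training pipeline:\n\n```python\nimport torch\nfrom transformers import get_scheduler\n\n# One dummy parameter so the optimizer has something to hold\nopt = torch.optim.AdamW([torch.nn.Parameter(torch.zeros(1))], lr=5e-6)\nsched = get_scheduler(\"constant_with_warmup\", optimizer=opt,\n                      num_warmup_steps=30, num_training_steps=600)\nfor step in range(1, 601):\n    opt.step(); sched.step()\n    if step in (1, 15, 30, 130, 600):\n        print(f\"step {step:3d}: lr={sched.get_last_lr()[0]:.2e}\")\n# Unlike cosine, the LR holds at 5e-6 from step 30 through step 600.\n```"
},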
{
"cell_type": "markdown",
"metadata": {},
"source": "---\n\n## Cell 1: Dependencies\n\n**Gate:** No errors. Verify TRL 0.24.0 installed.\n\n**V4.1 change:** Added `json-repair` for robust JSON parsing."
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": "# Cell 1 β Clean install\n# Run after kernel restart\n\n!pip install \"unsloth\"\n!pip install \"trl==0.24.0\" --no-deps\n!pip install \"rich\" \"wandb\"\n!pip install \"json-repair\" # V4.1: robust JSON parser for Portuguese LLM output"
|
| 19 |
+
},
{
"cell_type": "markdown",
"metadata": {},
"source": "---\n\n## Cell 2: GPU + Unsloth Verification\n\n**Gate:** CUDA available, bf16=True, VRAM > 20GB, TRL 0.24.0."
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": "import torch\n\nprint(f\"CUDA available: {torch.cuda.is_available()}\")\nprint(f\"GPU: {torch.cuda.get_device_name(0)}\")\nprint(f\"VRAM: {torch.cuda.get_device_properties(0).total_memory / 1e9:.1f} GB\")\nprint(f\"bf16 support: {torch.cuda.is_bf16_supported()}\")\n\nfrom unsloth import FastLanguageModel\nprint(f\"\\nβ Unsloth loaded\")\n\nimport trl\nassert trl.__version__ == \"0.24.0\", f\"Expected TRL 0.24.0, got {trl.__version__}\"\nprint(f\"β TRL {trl.__version__}\")\n\nimport transformers\nprint(f\"β Transformers {transformers.__version__}\")"
|
| 31 |
+
},
{
"cell_type": "markdown",
"metadata": {},
"source": "---\n\n## Cell 3: Config Constants\n\n**V4.1 changes:**\n- `MAX_STEPS`: 200 β **600** (cover ~80% of training data)\n- `LEARNING_RATE`: 2e-6 β **5e-6** (grad_norm was 0.065, model can tolerate more)\n- `LR_SCHEDULER_TYPE`: cosine β **constant_with_warmup** (cosine decayed to ~0 by step 130)\n- `WARMUP_RATIO`: 0.1 β **0.05** (short warmup for constant schedule)"
|
| 36 |
+
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": "import os\nimport json\nimport re\nimport time\nimport random\nimport gc\nimport warnings\nfrom pathlib import Path\n\n# ββ Suppress noisy deprecation warnings from Transformers 5.5.0 ββββββββββββββ\nwarnings.filterwarnings(\"ignore\", message=\".*AttentionMaskConverter.*\")\nwarnings.filterwarnings(\"ignore\", message=\".*Passing `generation_config` together with.*\")\nwarnings.filterwarnings(\"ignore\", message=\".*max_new_tokens.*max_length.*\")\nwarnings.filterwarnings(\"ignore\", category=FutureWarning)\n\n# ββ Disable Unsloth kernel recompilation βββββββββββββββββββββββββββββββββββββ\nos.environ[\"UNSLOTH_COMPILE_DISABLE\"] = \"1\"\nos.environ[\"PYTORCH_CUDA_ALLOC_CONF\"] = \"expandable_segments:True\"\n\n# ββ Model ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ\nMODEL_ID = \"Polygl0t/Tucano2-qwen-0.5B-Instruct\"\nMAX_SEQ_LENGTH = 2048 # model supports 4096, but 2048 is plenty for Instruct (no <think> overhead)\nMODELS_DIR = Path(\"/home/jupyter/tucano2/models\")\nADAPTER_DIR = MODELS_DIR / \"tucano2-0.5B-instruct-grpo-v4.1\"\nCHECKPOINT_DIR = ADAPTER_DIR / \"checkpoints\"\n\n# ββ Data βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ\nDATA_DIR = Path(\"/home/jupyter/tucano2/data\")\nTRAIN_FILE = DATA_DIR / \"pairs\" / \"train.jsonl\"\nEVAL_SPLIT = 0.10 # 10% held out for eval\n\n# ββ GRPO Hyperparameters βββββββββββββββββββββββββββββββββββββββββββββββββββββ\n# V4.1 CHANGES: MAX_STEPS 200β600, LEARNING_RATE 2e-6β5e-6, schedule constant\nNUM_GENERATIONS = 16 # 0.5B + short completions = VRAM allows G=16\nMAX_COMPLETION_LENGTH = 512 # Instruct: no <think> overhead\nTEMPERATURE = 1.0 # Skywork-OR1: Ο=1.0 for exploration\nLEARNING_RATE = 5e-6 # V4.1: was 2e-6. grad_norm=0.065 was low, push harder\nBETA = 0.0 # Dr. GRPO Β§3.2: Ξ²=0 optimal for rule-based rewards\nSCALE_REWARDS = False # Dr. GRPO: remove std normalization bias\nBATCH_SIZE = 2 # per-device batch size\nGRAD_ACCUM = 1 # effective batch = 2 * 1 = 2 prompts * 16 gen = 32 completions\nMAX_STEPS = 600 # V4.1: was 200. Cover ~80% of training set\nSAVE_STEPS = 50 # V4.1: was 20. More steps = wider save interval\nEVAL_STEPS = 20 # V4.1: was 10. 
Eval every 20 steps (30 evals over 600 steps)\nEARLY_STOPPING_PATIENCE = 15\nEARLY_STOPPING_DELTA = 0.005\nLR_SCHEDULER_TYPE = \"constant_with_warmup\" # V4.1: was \"cosine\"\nWARMUP_RATIO = 0.05 # V4.1: was 0.1 (5% of 600 = 30 warmup steps)\n\n# ββ LoRA βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ\nLORA_R = 16\nLORA_ALPHA = 32\n\n# ββ Monitoring βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ\nWANDB_PROJECT = \"tucano2-commerce\"\nEVAL_MAX_SAMPLES = 15 # eval callback samples\nEVAL_MAX_TOKENS = 512 # match training completion length\n\n# ββ Task Classification (inherited from V2/V3) ββββββββββββββββββββββββββββββ\nVALID_SENTIMENTS = {\"positive\", \"negative\", \"neutral\"}\nVALID_CATEGORIES = {\n \"delivery_delay\", \"product_quality\", \"product_not_received\",\n \"wrong_product\", \"seller_communication\", \"app_issue\",\n \"price_value\", \"other\", \"none\",\n}\nVALID_CHURN = {\"low\", \"medium\", \"high\"}\nVALID_REPEAT = {\"yes\", \"no\", \"maybe\"}\nEXTRACTION_FIELDS = [\n \"sentiment\", \"sentiment_score\", \"churn_risk\", \"delivery_issue\",\n \"product_issue\", \"seller_issue\", \"main_complaint\",\n \"complaint_category\", \"repeat_intent\", \"would_recommend\",\n]\n\n# ββ Verified Special Token IDs (from tokenizer_config.json) βββββββββββββββββ\nTOKEN_ID_BOS = 1 # <|im_start|>\nTOKEN_ID_EOS = 2 # <|im_end|>\nTOKEN_ID_PAD = 49109 # <|pad|>\nTOKEN_ID_THINK = 49116 # <think>\nTOKEN_ID_THINK_END = 49117 # </think>\n\n# ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ\n# TASK-AWARE SYSTEM PROMPTS (inherited from V3/V4)\n# ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ\n\nSYSTEM_EXTRACTION = (\n \"VocΓͺ Γ© um motor de extraΓ§Γ£o de dados de e-commerce brasileiro. \"\n \"Retorne APENAS um objeto JSON vΓ‘lido, sem nenhum texto antes ou depois. \"\n \"NΓO USE blocos de cΓ³digo markdown (```json). \"\n \"O primeiro caractere da sua resposta deve ser { e o ΓΊltimo deve ser }. \"\n \"Campos nΓ£o mencionados na avaliaΓ§Γ£o devem ser null β nunca invente valores. \"\n \"Sem explicaΓ§Γ£o. Sem comentΓ‘rios.\"\n)\n\nSYSTEM_SQL = (\n \"VocΓͺ Γ© um assistente de IA especializado em anΓ‘lise de e-commerce brasileiro. \"\n \"VocΓͺ compreende avaliaΓ§Γ΅es de clientes em portuguΓͺs e padrΓ΅es de comΓ©rcio brasileiro.\\n\\n\"\n \"Para consultas e anΓ‘lises de dados: apresente a resposta de forma direta \"\n \"com nΓΊmeros e dados concretos. Seja conciso.\"\n)\n\nSYSTEM_INSIGHTS = (\n \"VocΓͺ Γ© um assistente de IA especializado em anΓ‘lise de e-commerce brasileiro. \"\n \"VocΓͺ compreende avaliaΓ§Γ΅es de clientes em portuguΓͺs e padrΓ΅es de comΓ©rcio brasileiro.\\n\\n\"\n \"Para anΓ‘lises estratΓ©gicas: raciocine de forma estruturada e concisa, \"\n \"focando nos pontos principais e recomendaΓ§Γ΅es acionΓ‘veis.\"\n)\n\nSYSTEM_PUSH = (\n \"VocΓͺ Γ© um assistente de IA especializado em anΓ‘lise de e-commerce brasileiro. \"\n \"VocΓͺ compreende avaliaΓ§Γ΅es de clientes em portuguΓͺs e padrΓ΅es de comΓ©rcio brasileiro.\\n\\n\"\n \"Para notificaΓ§Γ΅es push: seja direto e criativo. \"\n \"A notificaΓ§Γ£o deve ter no mΓ‘ximo 120 caracteres. \"\n \"Responda diretamente.\"\n)\n\nSYSTEM_PT = (\n \"VocΓͺ Γ© um assistente de IA especializado em anΓ‘lise de e-commerce brasileiro. 
\"\n \"VocΓͺ compreende avaliaΓ§Γ΅es de clientes em portuguΓͺs e padrΓ΅es de comΓ©rcio brasileiro.\"\n)\n\ndef get_system_prompt(task_type: str) -> str:\n return {\n \"extraction\": SYSTEM_EXTRACTION,\n \"sql_qa\": SYSTEM_SQL,\n \"insights\": SYSTEM_INSIGHTS,\n \"push\": SYSTEM_PUSH,\n }.get(task_type, SYSTEM_PT)\n\ndef inject_task_system_prompt(msgs, task_type):\n new_msgs = []\n system_prompt = get_system_prompt(task_type)\n has_system = False\n for m in msgs:\n if m[\"role\"] == \"system\":\n new_msgs.append({\"role\": \"system\", \"content\": system_prompt})\n has_system = True\n else:\n new_msgs.append(m)\n if not has_system:\n new_msgs.insert(0, {\"role\": \"system\", \"content\": system_prompt})\n return new_msgs\n\nprint(\"β Task-aware system prompts defined\")\n\nprint(\"β Config loaded\")\nprint(f\" Model: {MODEL_ID}\")\nprint(f\" G={NUM_GENERATIONS}, max_comp={MAX_COMPLETION_LENGTH}, temp={TEMPERATURE}\")\nprint(f\" LR={LEARNING_RATE}, Ξ²={BETA}, scale_rewards={SCALE_REWARDS}\")\nprint(f\" LR schedule: {LR_SCHEDULER_TYPE}, warmup={WARMUP_RATIO}\")\nprint(f\" LoRA r={LORA_R}, Ξ±={LORA_ALPHA}\")\nprint(f\" Max steps: {MAX_STEPS}\")\nprint(f\" Save every {SAVE_STEPS} steps, eval every {EVAL_STEPS} steps\")"
|
| 43 |
+
},
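{
"cell_type": "markdown",
"metadata": {},
"source": "**Sanity check on the ~80% coverage claim (hedged sketch).** With 2 prompts per optimizer step and no gradient accumulation, 600 steps consume at most 1,200 unique prompts. Assuming roughly 1,480 train prompts after the 10% eval split (the gate in Cell 10), that is about 81% of the data seen once. Standalone arithmetic; the 1,480 figure is the approximate train size from the Cell 10 gate, not an exact count:\n\n```python\nmax_steps, batch_size, grad_accum = 600, 2, 1\ntrain_prompts = 1480  # assumption: approximate train-split size\nseen = max_steps * batch_size * grad_accum\nprint(seen, f\"{seen / train_prompts:.0%}\")  # 1200 ~81%\n```"
},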
{
"cell_type": "markdown",
"metadata": {},
"source": "---\n\n## Cell 4: Load Model + Apply Critical Overrides\n\n**Gate:** Model loaded, `use_cache=True`, `repetition_penalty=1.0`, `temperature=1.0`.\n\nUnchanged from V4."
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": "from unsloth import FastLanguageModel\n\nprint(\"Loading model...\")\nmodel, tokenizer = FastLanguageModel.from_pretrained(\n model_name=MODEL_ID,\n max_seq_length=MAX_SEQ_LENGTH,\n load_in_4bit=True,\n dtype=None,\n)\n\nfrom peft import LoraConfig, get_peft_model\n\nlora_config = LoraConfig(\n r=LORA_R,\n lora_alpha=LORA_ALPHA,\n target_modules=[\"q_proj\", \"k_proj\", \"v_proj\", \"o_proj\",\n \"gate_proj\", \"up_proj\", \"down_proj\"],\n lora_dropout=0,\n bias=\"none\",\n task_type=\"CAUSAL_LM\",\n)\nmodel = get_peft_model(model, lora_config)\nmodel.print_trainable_parameters()\n\n# βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ\n# CRITICAL OVERRIDES\n# βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ\n\nmodel.config.use_cache = True\nmodel.generation_config.use_cache = True\nmodel.generation_config.temperature = TEMPERATURE\nmodel.generation_config.repetition_penalty = 1.0\nmodel.generation_config.do_sample = True\nmodel.generation_config.top_k = 0\nmodel.generation_config.top_p = 1.0\nmodel.generation_config.max_length = None\n\nif tokenizer.pad_token is None:\n tokenizer.pad_token = tokenizer.eos_token\n\nprint(f\"β Model loaded on {model.device}\")\nprint(f\" use_cache: {model.config.use_cache}\")\nprint(f\" temperature: {model.generation_config.temperature}\")\nprint(f\" repetition_penalty: {model.generation_config.repetition_penalty}\")\nprint(f\" top_k: {model.generation_config.top_k}\")\nprint(f\" Params: {sum(p.numel() for p in model.parameters()) / 1e6:.0f}M\")\n\ntry:\n lm_ptr = model.base_model.model.lm_head.weight.data_ptr()\n embed_ptr = model.base_model.model.model.embed_tokens.weight.data_ptr()\n tied = lm_ptr == embed_ptr\n print(f\" Tied embeddings intact: {tied}\")\n if not tied:\n print(\" β οΈ WARNING: Tied embeddings broken after LoRA patching.\")\nexcept AttributeError as e:\n print(f\" β οΈ Could not check tied embeddings: {e}\")"
|
| 55 |
+
},
{
"cell_type": "markdown",
"metadata": {},
"source": "---\n\n## Cell 5: Token ID Verification\n\n**Gate:** All token IDs match. Unchanged from V4."
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": "tok_tests = {\n \"<|im_start|>\": TOKEN_ID_BOS,\n \"<|im_end|>\": TOKEN_ID_EOS,\n \"<|pad|>\": TOKEN_ID_PAD,\n \"<think>\": TOKEN_ID_THINK,\n \"</think>\": TOKEN_ID_THINK_END,\n}\n\nall_pass = True\nfor text, expected_id in tok_tests.items():\n ids = tokenizer.encode(text, add_special_tokens=False)\n actual_id = ids[0] if len(ids) == 1 else ids\n match = (len(ids) == 1 and ids[0] == expected_id)\n status = \"β\" if match else \"β\"\n print(f\" {status} '{text}' β expected {expected_id}, got {actual_id}\")\n if not match:\n all_pass = False\n\nassert all_pass, \"Token ID mismatch detected. Update constants in Cell 3 before proceeding.\"\nprint(\"\\nβ All token IDs verified\")\n\nassert tokenizer.eos_token_id == TOKEN_ID_EOS, f\"eos_token_id mismatch: {tokenizer.eos_token_id}\"\nprint(f\"β eos_token_id = {tokenizer.eos_token_id}\")"
|
| 67 |
+
},
{
"cell_type": "markdown",
"metadata": {},
"source": "---\n\n## Cell 6: KV Cache Diagnostic\n\n**Gate:** Ratio < 3× → KV cache OK. Unchanged from V4."
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": "FastLanguageModel.for_inference(model)\n\n_kv_msgs = [{\"role\": \"user\", \"content\": \"Qual a categoria de reclamaΓ§Γ£o mais frequente?\"}]\n_kv_text = tokenizer.apply_chat_template(_kv_msgs, tokenize=False, add_generation_prompt=True)\n_kv_inputs = tokenizer(_kv_text, return_tensors=\"pt\").to(model.device)\n\n_token_times, _past, _generated = [], None, _kv_inputs[\"input_ids\"]\nwith torch.no_grad():\n for _step in range(50):\n _t0 = time.time()\n seq_len = _generated.shape[1]\n if _past is None:\n _position_ids = torch.arange(seq_len, dtype=torch.long, device=model.device).unsqueeze(0)\n else:\n _position_ids = torch.tensor([[seq_len - 1]], dtype=torch.long, device=model.device)\n _out = model(\n input_ids=_generated[:, -1:] if _past else _generated,\n position_ids=_position_ids,\n attention_mask=torch.ones(1, seq_len, device=model.device),\n past_key_values=_past,\n use_cache=True,\n return_dict=True,\n )\n _past = _out.past_key_values\n _next = _out.logits[:, -1, :].argmax(dim=-1, keepdim=True)\n _generated = torch.cat([_generated, _next], dim=1)\n _token_times.append(time.time() - _t0)\n\n_ratio = sum(_token_times[45:]) / max(sum(_token_times[:5]), 1e-9)\nprint(f\"First 5 tok: {[f'{t*1000:.0f}ms' for t in _token_times[:5]]}\")\nprint(f\"Last 5 tok: {[f'{t*1000:.0f}ms' for t in _token_times[45:]]}\")\nprint(f\"Ratio last/first: {_ratio:.1f}x\")\nassert _ratio < 5, f\"KV cache BROKEN (ratio {_ratio:.1f}Γ). Check model.config.use_cache.\"\nprint(\"β KV cache working correctly\")\n\ndel _past, _generated, _kv_inputs, _token_times, _out\ngc.collect()\ntorch.cuda.empty_cache()"
|
| 79 |
+
},
{
"cell_type": "markdown",
"metadata": {},
"source": "---\n\n## Cell 7: Single Inference Test\n\n**Gate:** Response is coherent Portuguese. Unchanged from V4."
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": "FastLanguageModel.for_inference(model)\n\ntest_msgs = [\n {\"role\": \"system\", \"content\": SYSTEM_EXTRACTION},\n {\"role\": \"user\", \"content\": \"Analise esta avaliaΓ§Γ£o: 'Produto chegou quebrado, pΓ©ssima embalagem. Nunca mais compro aqui.' Retorne um objeto JSON com os campos: sentiment, sentiment_score, delivery_issue, complaint_category.\"},\n]\ntext = tokenizer.apply_chat_template(test_msgs, tokenize=False, add_generation_prompt=True)\ninputs = tokenizer(text, return_tensors=\"pt\").to(model.device)\n\nt0 = time.time()\noutputs = model.generate(\n **inputs,\n max_new_tokens=256,\n temperature=0.1,\n do_sample=True,\n repetition_penalty=1.0,\n)\nelapsed = time.time() - t0\n\nresponse = tokenizer.decode(outputs[0][inputs[\"input_ids\"].shape[1]:], skip_special_tokens=True)\nprint(f\"Generation time: {elapsed:.1f}s\")\nprint(f\"Response length: {len(response)} chars\")\nprint(f\"Contains <think>: {'<think>' in response}\")\nprint(f\"Contains JSON {{ }}: {'{' in response and '}' in response}\")\nprint(f\"\\n{'='*60}\")\nprint(response[:500])"
|
| 91 |
+
},
{
"cell_type": "markdown",
"metadata": {},
"source": "---\n\n## Cell 8: Reward Functions\n\n**V4.1 change:** `_extract_json()` replaced with robust parser using `json-repair` library\n+ Portuguese decimal comma normalizer. Handles: PT-BR decimals, trailing commas, single quotes,\nunquoted keys, markdown fences, JS comments, text before/after JSON.\n\nTest results: OLD parser 2/10, NEW parser **10/10** on representative test cases.\nString-internal commas (e.g. `\"delivery_delay\"`) are preserved correctly."
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": "import json_repair # V4.1: robust JSON parser for LLM output\n\n\ndef strip_think(text: str) -> str:\n \"\"\"Remove <think>...</think> block, return the answer portion.\"\"\"\n return re.sub(r\"<think>.*?</think>\", \"\", text, flags=re.DOTALL).strip()\n\n\ndef has_think_block(text: str) -> bool:\n return bool(re.search(r\"<think>.+</think>\", text, flags=re.DOTALL))\n\n\ndef _classify_task_type(prompt_text: str) -> str:\n p = prompt_text.lower()\n if \"retorne um objeto json\" in p or \"extraia dados\" in p or \"json\" in p:\n return \"extraction\"\n elif \"notificaΓ§Γ£o push\" in p or \"notificaΓ§Γ£o de reengajamento\" in p:\n return \"push\"\n elif \"perfil do cliente\" in p or \"retenΓ§Γ£o\" in p or \"anΓ‘lise\" in p or \"insight\" in p:\n return \"insights\"\n else:\n return \"sql_qa\"\n\n\n# βββββββββββββββββββββββοΏ½οΏ½οΏ½ββββββββββββββββββββββββββββββββββββββββββββββββββββββ\n# V4.1: ROBUST JSON PARSER\n# Replaces V4's brittle regex parser. Handles:\n# - PT-BR decimal commas (4,5 β 4.5) outside quoted strings\n# - Trailing commas, single quotes, unquoted keys\n# - Markdown code fences (```json ... ```)\n# - JS-style comments\n# - Text before/after JSON\n# Test: 2/10 β 10/10 on representative cases. String commas preserved.\n# ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ\n\ndef _normalize_pt_decimals(s: str) -> str:\n \"\"\"Convert PT-BR decimals (4,5) to JSON-valid (4.5), only outside quoted strings.\n \n String-aware: commas inside quoted strings (e.g. \"delivery_delay\") are preserved.\n Only digit,digit patterns outside quotes are converted.\"\"\"\n result, in_string, escape_next = [], False, False\n i = 0\n while i < len(s):\n c = s[i]\n if escape_next:\n result.append(c); escape_next = False; i += 1; continue\n if c == '\\\\' and in_string:\n result.append(c); escape_next = True; i += 1; continue\n if c == '\"':\n in_string = not in_string; result.append(c); i += 1; continue\n if not in_string:\n m = re.match(r'(\\d+),(\\d+)', s[i:])\n if m:\n result.append(m.group(1) + '.' + m.group(2))\n i += len(m.group(0)); continue\n result.append(c); i += 1\n return ''.join(result)\n\n\ndef _extract_json(text: str) -> dict | None:\n \"\"\"Robust JSON extraction for Portuguese LLM output.\n \n Pipeline:\n 1. Strip markdown code fences\n 2. Try strict json.loads (fastest path for clean output)\n 3. Normalize PT-BR decimals, try strict json.loads again\n 4. Use json-repair on normalized text (handles trailing commas, single quotes, etc.)\n 5. 
Use json-repair on original text (last resort)\n \"\"\"\n stripped = re.sub(r'^```(?:json)?\\s*|\\s*```$', '', text.strip(), flags=re.MULTILINE).strip()\n \n # Stage 1: strict parse (fast path)\n for attempt in [stripped, _normalize_pt_decimals(stripped)]:\n try:\n result = json.loads(attempt)\n if isinstance(result, dict):\n return result\n except (json.JSONDecodeError, TypeError):\n pass\n \n # Stage 2: json-repair on normalized text\n normalized = _normalize_pt_decimals(stripped)\n try:\n result = json_repair.repair_json(normalized, return_objects=True)\n if isinstance(result, dict):\n return result\n except Exception:\n pass\n \n # Stage 3: json-repair on original (last resort)\n try:\n result = json_repair.repair_json(stripped, return_objects=True)\n if isinstance(result, dict):\n return result\n except Exception:\n pass\n \n return None\n\n\ndef reward_extraction(completion: str) -> float:\n \"\"\"Continuous reward for extraction tasks (max 1.0).\"\"\"\n answer = strip_think(completion)\n data = _extract_json(answer)\n\n if data is None:\n if \"{\" in answer and \"}\" in answer:\n return 0.05\n return 0.0\n\n if not isinstance(data, dict):\n return 0.1\n\n score = 0.3 # valid JSON object\n\n # Schema completeness (0.3 total)\n present = sum(1 for f in EXTRACTION_FIELDS if f in data)\n score += 0.3 * (present / len(EXTRACTION_FIELDS))\n\n # Value validity (0.4 total)\n checks_passed = 0\n checks_total = 0\n\n for field, validator in [\n (\"sentiment\", lambda v: isinstance(v, str) and v in VALID_SENTIMENTS),\n (\"complaint_category\", lambda v: isinstance(v, str) and v in VALID_CATEGORIES),\n (\"churn_risk\", lambda v: isinstance(v, str) and v in VALID_CHURN),\n (\"repeat_intent\", lambda v: isinstance(v, str) and v in VALID_REPEAT),\n (\"sentiment_score\", lambda v: isinstance(v, (int, float)) and 1 <= v <= 5),\n ]:\n checks_total += 1\n if field in data and validator(data[field]):\n checks_passed += 1\n\n for bool_field in (\"delivery_issue\", \"product_issue\", \"seller_issue\", \"would_recommend\"):\n checks_total += 1\n if bool_field in data and isinstance(data[bool_field], bool):\n checks_passed += 1\n\n if checks_total > 0:\n score += 0.4 * (checks_passed / checks_total)\n\n return min(score, 1.0)\n\n\ndef reward_sql_qa(completion: str) -> float:\n \"\"\"Continuous reward for SQL Q&A (max 1.0).\"\"\"\n answer = strip_think(completion)\n if not answer.strip():\n return 0.0\n\n score = 0.0\n\n numbers = re.findall(r\"\\d+(?:[.,]\\d+)?\", answer)\n score += min(0.4, 0.1 * len(numbers))\n\n length = len(answer)\n if 50 <= length <= 500:\n score += 0.3\n elif length > 0:\n score += 0.3 * max(0, 1 - abs(length - 275) / 275)\n\n pt_business = [\"pedidos\", \"clientes\", \"mΓ©dia\", \"total\", \"taxa\", \"vendas\",\n \"produtos\", \"perΓodo\", \"categoria\", \"regiΓ£o\", \"faturamento\"]\n pt_matches = sum(1 for w in pt_business if w in answer.lower())\n score += min(0.3, 0.06 * pt_matches)\n\n return min(score, 1.0)\n\n\ndef reward_insights(completion: str) -> float:\n \"\"\"Continuous reward for insights (max 1.0).\"\"\"\n answer = strip_think(completion)\n if not answer.strip():\n return 0.0\n\n score = 0.0\n\n action_words = [\"recomend\", \"implement\", \"melhor\", \"reduzir\", \"aumentar\",\n \"priorizar\", \"investir\", \"otimizar\", \"estratΓ©gi\", \"aΓ§Γ£o\"]\n matches = sum(1 for w in action_words if w in answer.lower())\n score += min(0.4, 0.08 * matches)\n\n length = len(answer)\n if 100 <= length <= 800:\n score += 0.3\n elif length > 0:\n score += 0.3 * max(0, 1 - 
abs(length - 450) / 450)\n\n structure_marks = len(re.findall(r\"^[-β’*]\\s|^\\d+[.)]\\s|^#{1,3}\\s\", answer, re.MULTILINE))\n score += min(0.2, 0.04 * structure_marks)\n\n if any(w in answer.lower() for w in [\"cliente\", \"produto\", \"serviΓ§o\", \"empresa\"]):\n score += 0.1\n\n return min(score, 1.0)\n\n\ndef reward_push(completion: str) -> float:\n \"\"\"Continuous reward for push notifications (max 1.0).\"\"\"\n answer = strip_think(completion).strip()\n if not answer:\n return 0.0\n\n length = len(answer)\n if length <= 120:\n length_score = 0.5\n else:\n length_score = 0.5 * max(0, 1 - (length - 120) / 120)\n\n pt_markers = re.findall(r\"[ãçéΓͺΓ³ΓΊΓ’Γ΅]|vocΓͺ|para|como|seu|sua|oferta|desconto|produto\",\n answer, re.IGNORECASE)\n lang_score = min(0.3, 0.03 * len(pt_markers))\n\n generic = [\"olΓ‘\", \"obrigado pela compra\", \"agradecemos\"]\n is_generic = any(g in answer.lower() for g in generic)\n creativity_score = 0.0 if is_generic else 0.2\n\n return min(length_score + lang_score + creativity_score, 1.0)\n\n\ndef commerce_reward_fn(completions, prompts, **kwargs) -> list[float]:\n \"\"\"Master reward function: dispatches by task type.\"\"\"\n rewards = []\n for completion, prompt in zip(completions, prompts):\n if isinstance(completion, list):\n comp_text = completion[-1][\"content\"] if completion else \"\"\n else:\n comp_text = str(completion)\n\n if isinstance(prompt, list):\n prompt_text = \" \".join(m.get(\"content\", \"\") for m in prompt)\n else:\n prompt_text = str(prompt)\n\n task = _classify_task_type(prompt_text)\n\n if task == \"extraction\":\n rewards.append(reward_extraction(comp_text))\n elif task == \"sql_qa\":\n rewards.append(reward_sql_qa(comp_text))\n elif task == \"insights\":\n rewards.append(reward_insights(comp_text))\n elif task == \"push\":\n rewards.append(reward_push(comp_text))\n else:\n r = 0.2 if comp_text.strip() else 0.0\n rewards.append(r)\n\n return rewards\n\n\nprint(\"β Reward functions defined (V4.1: json-repair parser)\")"
|
| 103 |
+
},
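{
"cell_type": "markdown",
"metadata": {},
"source": "**Quick parser check (hedged sketch).** A few illustrative inputs for `_extract_json`, assuming Cell 8 has run. These are hypothetical examples, not the 10-case test set cited above; they exercise the PT-BR decimal path, fence/single-quote/trailing-comma repair, and the text-before-JSON case the cell claims to handle:\n\n```python\ncases = [\n    '{\"sentiment\": \"negative\", \"sentiment_score\": 1,5}',  # PT-BR decimal\n    \"```json\\n{'churn_risk': 'high',}\\n```\",  # fence + single quotes + trailing comma\n    'Claro! Aqui está o JSON: {\"complaint_category\": \"delivery_delay\"}',  # text before JSON\n]\nfor c in cases:\n    print(_extract_json(c))  # each should come back as a dict, not None\n```"
},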
{
"cell_type": "markdown",
"metadata": {},
"source": "---\n\n## Cell 9: Reward Calibration\n\n**Gate:** Mean reward < 0.90. Variance > 0.\n\n**Note:** With the V4.1 parser fix, extraction samples should score higher than V4's ~0.17 baseline."
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": "by_type = {\"extraction\": [], \"sql_qa\": [], \"insights\": [], \"push\": []}\nwith open(TRAIN_FILE) as f:\n for line in f:\n row = json.loads(line)\n convs = row[\"conversations\"]\n prompt_msgs = [m for m in convs if m[\"role\"] in (\"system\", \"user\")]\n if not prompt_msgs:\n continue\n user_text = \" \".join(m[\"content\"] for m in prompt_msgs if m[\"role\"] == \"user\")\n task = _classify_task_type(user_text)\n by_type[task].append(prompt_msgs)\n\nprint(f\"Prompts by type: {', '.join(f'{k}={len(v)}' for k, v in by_type.items())}\")\n\nrng = random.Random(42)\ncal_samples = []\nfor task_type in by_type:\n pool = by_type[task_type]\n if len(pool) >= 2:\n cal_samples.extend(rng.sample(pool, 2))\n elif pool:\n cal_samples.extend(pool)\n\nFastLanguageModel.for_inference(model)\nprint(f\"\\nReward calibration ({len(cal_samples)} samples):\")\nprint(\"-\" * 60)\n\ncal_rewards = []\nfor i, msgs in enumerate(cal_samples):\n user_text_cal = \" \".join(m.get(\"content\", \"\") for m in msgs if m[\"role\"] == \"user\")\n task_cal = _classify_task_type(user_text_cal)\n msgs = inject_task_system_prompt(msgs, task_cal)\n text = tokenizer.apply_chat_template(msgs, tokenize=False, add_generation_prompt=True)\n inputs = tokenizer(text, return_tensors=\"pt\").to(model.device)\n outputs = model.generate(\n **inputs,\n max_new_tokens=MAX_COMPLETION_LENGTH,\n temperature=0.7,\n do_sample=True,\n repetition_penalty=1.0,\n )\n response = tokenizer.decode(outputs[0][inputs[\"input_ids\"].shape[1]:], skip_special_tokens=True)\n r = commerce_reward_fn([response], [text])[0]\n cal_rewards.append(r)\n task = _classify_task_type(\" \".join(m.get(\"content\", \"\") for m in msgs if m[\"role\"] == \"user\"))\n has_think = \"<think>\" in response\n answer_preview = strip_think(response)[:100]\n print(f\" Sample {i+1} [{task:12s}]: reward={r:.2f} | has_think={has_think} | {answer_preview}\")\n\nprint(f\"\\nMean={sum(cal_rewards)/len(cal_rewards):.2f}, Min={min(cal_rewards):.2f}, Max={max(cal_rewards):.2f}\")\nprint(f\"Reward variance > 0: {len(set(f'{r:.4f}' for r in cal_rewards)) > 1}\")"
},
{
"cell_type": "markdown",
"metadata": {},
"source": "---\n\n## Cell 10: Dataset Preparation\n\n**Gate:** Train has ~1,480+ prompts, eval has ~160+. All 4 task types present in both."
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": "from datasets import Dataset\n\ndef prepare_datasets(train_file, eval_ratio=EVAL_SPLIT, seed=42):\n rng = random.Random(seed)\n\n all_records = []\n with open(train_file) as f:\n for line in f:\n row = json.loads(line)\n convs = row[\"conversations\"]\n prompt_msgs = [m for m in convs if m[\"role\"] in (\"system\", \"user\")]\n if prompt_msgs:\n all_records.append(prompt_msgs)\n\n rng.shuffle(all_records)\n n_eval = max(1, int(len(all_records) * eval_ratio))\n eval_records = all_records[:n_eval]\n train_records = all_records[n_eval:]\n\n for i, msgs in enumerate(train_records):\n user_text = \" \".join(m[\"content\"] for m in msgs if m[\"role\"] == \"user\")\n task = _classify_task_type(user_text)\n train_records[i] = inject_task_system_prompt(msgs, task)\n for i, msgs in enumerate(eval_records):\n user_text = \" \".join(m[\"content\"] for m in msgs if m[\"role\"] == \"user\")\n task = _classify_task_type(user_text)\n eval_records[i] = inject_task_system_prompt(msgs, task)\n print(\" β Task-aware system prompts injected\")\n\n for label, records in [(\"train\", train_records), (\"eval\", eval_records)]:\n dist = {}\n for msgs in records:\n user_text = \" \".join(m[\"content\"] for m in msgs if m[\"role\"] == \"user\")\n task = _classify_task_type(user_text)\n dist[task] = dist.get(task, 0) + 1\n print(f\" {label}: {len(records)} prompts β {dist}\")\n\n train_ds = Dataset.from_list([{\"prompt\": msgs} for msgs in train_records])\n eval_ds = Dataset.from_list([{\"prompt\": msgs} for msgs in eval_records])\n return train_ds, eval_ds\n\ntrain_dataset, eval_dataset = prepare_datasets(TRAIN_FILE)\nprint(f\"\\nβ Datasets: train={len(train_dataset)}, eval={len(eval_dataset)}\")"
|
| 127 |
+
},
{
"cell_type": "markdown",
"metadata": {},
"source": "---\n\n## Cell 11: Smoke Test (1 Step)\n\n**Gate:** No OOM. Peak VRAM < 20GB. Step time < 180s.\n\n**V4.1 change:** GRPOConfig uses `constant_with_warmup` schedule and LR=5e-6."
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": "from trl import GRPOConfig, GRPOTrainer\n\nFastLanguageModel.for_training(model)\n\nsmoke_config = GRPOConfig(\n output_dir=str(CHECKPOINT_DIR / \"smoke\"),\n num_generations=NUM_GENERATIONS,\n scale_rewards=SCALE_REWARDS,\n max_completion_length=MAX_COMPLETION_LENGTH,\n max_steps=1,\n temperature=TEMPERATURE,\n beta=BETA,\n per_device_train_batch_size=BATCH_SIZE,\n gradient_accumulation_steps=1,\n learning_rate=LEARNING_RATE, # V4.1: 5e-6\n lr_scheduler_type=LR_SCHEDULER_TYPE, # V4.1: constant_with_warmup\n warmup_ratio=WARMUP_RATIO, # V4.1: 0.05\n fp16=False,\n bf16=True,\n logging_steps=1,\n save_steps=999,\n report_to=\"none\",\n max_prompt_length=MAX_SEQ_LENGTH // 2,\n seed=42,\n remove_unused_columns=False,\n)\n\n\nclass UnslothGRPOTrainer(GRPOTrainer):\n def _generate(self, prompts, images):\n FastLanguageModel.for_inference(self.model)\n try:\n result = super()._generate(prompts, images)\n finally:\n FastLanguageModel.for_training(self.model)\n return result\n\n\nsmoke_trainer = UnslothGRPOTrainer(\n model=model,\n reward_funcs=commerce_reward_fn,\n args=smoke_config,\n train_dataset=train_dataset,\n processing_class=tokenizer,\n)\n\nt0 = time.time()\nsmoke_trainer.train()\nstep_time = time.time() - t0\n\npeak_vram = torch.cuda.max_memory_allocated() / 1e9\nprint(f\"\\nβ Smoke test passed!\")\nprint(f\" Step time: {step_time:.0f}s\")\nprint(f\" Peak VRAM: {peak_vram:.1f}GB / {torch.cuda.get_device_properties(0).total_memory / 1e9:.1f}GB\")\nprint(f\" Estimated full run ({MAX_STEPS} steps): {step_time * MAX_STEPS / 3600:.1f}h\")\n\ndel smoke_trainer\ngc.collect(); torch.cuda.empty_cache()"
|
| 139 |
+
},
{
"cell_type": "markdown",
"metadata": {},
"source": "---\n\n## Cell 12: Probe Run (10 Steps)\n\n**V4.1 changes:**\n- GRPOConfig uses `constant_with_warmup` and LR=5e-6\n- Removed hard clip_ratio gate (clip_ratio=0 is expected for LoRA + narrow domain, per V4 assessment)\n- Probe now monitors reward trend and grad_norm instead"
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": "FastLanguageModel.for_training(model)\n\nprobe_config = GRPOConfig(\n output_dir=str(CHECKPOINT_DIR / \"probe\"),\n num_generations=NUM_GENERATIONS,\n scale_rewards=SCALE_REWARDS,\n max_completion_length=MAX_COMPLETION_LENGTH,\n max_steps=10,\n temperature=TEMPERATURE,\n beta=BETA,\n num_train_epochs=1,\n per_device_train_batch_size=BATCH_SIZE,\n gradient_accumulation_steps=GRAD_ACCUM,\n learning_rate=LEARNING_RATE, # V4.1: 5e-6\n lr_scheduler_type=LR_SCHEDULER_TYPE, # V4.1: constant_with_warmup\n warmup_ratio=WARMUP_RATIO, # V4.1: 0.05\n fp16=False,\n bf16=True,\n logging_steps=1,\n save_steps=999,\n report_to=\"none\",\n max_prompt_length=MAX_SEQ_LENGTH // 2,\n seed=42,\n remove_unused_columns=False,\n)\n\nprobe_trainer = UnslothGRPOTrainer(\n model=model,\n reward_funcs=commerce_reward_fn,\n args=probe_config,\n train_dataset=train_dataset,\n processing_class=tokenizer,\n)\n\nt0 = time.time()\nresult = probe_trainer.train()\nelapsed = time.time() - t0\n\n# ββ Extract metrics from log history βββββββββββββββββββββββββββββββββββββββββ\nrewards = []\ngrad_norms = []\nzero_stds = []\nfor entry in probe_trainer.state.log_history:\n if \"train/reward\" in entry:\n rewards.append(entry[\"train/reward\"])\n if \"train/grad_norm\" in entry:\n grad_norms.append(entry[\"train/grad_norm\"])\n if \"train/frac_reward_zero_std\" in entry:\n zero_stds.append(entry[\"train/frac_reward_zero_std\"])\n\nprint(f\"\\n{'='*60}\")\nprint(f\"PROBE RESULTS ({elapsed:.0f}s, {elapsed/10:.0f}s/step)\")\nprint(f\" Rewards: {[f'{r:.3f}' for r in rewards]}\")\nprint(f\" Grad norms: {[f'{g:.4f}' for g in grad_norms]}\")\nprint(f\" Zero-std: {[f'{z:.2f}' for z in zero_stds]}\")\nprint(f\" Train loss: {result.training_loss:.4f}\")\nprint(f\"{'='*60}\")\n\n# V4.1: No hard clip_ratio gate. Instead check:\n# 1. Rewards are non-zero (model generates scoreable output)\n# 2. grad_norm is non-zero (gradients are flowing)\n# 3. frac_reward_zero_std < 0.5 (batches have reward variance)\n\nif rewards and max(rewards) > 0:\n print(\"β Model generates scoreable output\")\nelse:\n print(\"β WARNING: All rewards are 0. Check reward functions.\")\n\nif grad_norms and max(grad_norms) > 0:\n print(\"β Gradients are flowing\")\nelse:\n print(\"β WARNING: All grad_norms are 0. Check model/LoRA setup.\")\n\nif zero_stds and max(zero_stds) < 0.5:\n print(\"β Batches have reward variance (GRPO has signal)\")\nelse:\n print(\"β οΈ WARNING: High frac_reward_zero_std. Consider increasing G.\")\n\nprint(\"\\nβ Proceed to full training (Cell 13)\")\n\ndel probe_trainer\ngc.collect(); torch.cuda.empty_cache()"
|
| 151 |
+
},
{
"cell_type": "markdown",
"metadata": {},
"source": "---\n\n## Cell 13: W&B Init + Full Training\n\n**V4.1 changes:**\n- GRPOConfig: `constant_with_warmup`, LR=5e-6, MAX_STEPS=600\n- `EvalRewardCallback`: logs per-task reward breakdown (eval/extraction, eval/sql_qa, eval/insights, eval/push)\n- `SAVE_STEPS=50`, `EVAL_STEPS=20` adjusted for longer run"
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": "import wandb\nfrom transformers import TrainerCallback\n\nwandb.login()\nwandb.init(\n project=WANDB_PROJECT,\n name=f\"grpo-v4.1-instruct-0.5B-{time.strftime('%Y%m%d-%H%M')}\",\n config={\n \"model_id\": MODEL_ID,\n \"version\": \"v4.1\",\n \"num_generations\": NUM_GENERATIONS,\n \"max_completion_length\": MAX_COMPLETION_LENGTH,\n \"temperature\": TEMPERATURE,\n \"learning_rate\": LEARNING_RATE,\n \"lr_scheduler_type\": LR_SCHEDULER_TYPE,\n \"warmup_ratio\": WARMUP_RATIO,\n \"beta\": BETA,\n \"scale_rewards\": SCALE_REWARDS,\n \"batch_size\": BATCH_SIZE,\n \"grad_accum\": GRAD_ACCUM,\n \"max_steps\": MAX_STEPS,\n \"lora_r\": LORA_R,\n \"lora_alpha\": LORA_ALPHA,\n \"train_prompts\": len(train_dataset),\n \"eval_prompts\": len(eval_dataset),\n \"repetition_penalty_override\": 1.0,\n \"json_parser\": \"json-repair + PT-BR decimal normalizer\",\n \"changes_from_v4\": \"parser fix, 600 steps, LR 5e-6, constant_with_warmup\",\n },\n)\nprint(f\"β W&B run: {wandb.run.url}\")\n\n\n# ββ EvalRewardCallback β V4.1: added per-task reward logging ββββββββββββββββ\n\nclass EvalRewardCallback(TrainerCallback):\n def __init__(self, eval_records, reward_fn, patience, delta):\n self.eval_records = eval_records\n self.reward_fn = reward_fn\n self.patience = patience\n self.delta = delta\n self.best_reward = -float(\"inf\")\n self.best_step = 0\n self.no_improve_count = 0\n\n def on_step_end(self, args, state, control, model=None, processing_class=None, **kwargs):\n if state.global_step == 0 or state.global_step % EVAL_STEPS != 0:\n return control\n\n tokenizer_local = processing_class\n if tokenizer_local is None:\n print(\"[EvalRewardCallback] WARNING: tokenizer is None, skipping eval\")\n return control\n\n mean_reward, per_task = self._run_eval(model, tokenizer_local, args)\n improved = mean_reward > self.best_reward + self.delta\n\n # Log overall + per-task rewards\n log_data = {\n \"eval/mean_reward\": mean_reward,\n \"eval/best_reward\": max(self.best_reward, mean_reward),\n \"eval/no_improve_count\": self.no_improve_count,\n }\n # V4.1: per-task reward breakdown\n for task_name, task_rewards in per_task.items():\n if task_rewards:\n log_data[f\"eval/{task_name}\"] = sum(task_rewards) / len(task_rewards)\n wandb.log(log_data, step=state.global_step)\n\n status = \"β improved\" if improved else f\"β no gain ({self.no_improve_count + 1}/{self.patience})\"\n per_task_str = \" | \".join(\n f\"{t}={sum(v)/len(v):.3f}\" for t, v in per_task.items() if v\n )\n print(f\"\\n[EvalReward] step={state.global_step} | mean={mean_reward:.4f} | best={self.best_reward:.4f} | {status}\")\n print(f\" Per-task: {per_task_str}\")\n\n if improved:\n self.best_reward = mean_reward\n self.best_step = state.global_step\n self.no_improve_count = 0\n else:\n self.no_improve_count += 1\n if self.no_improve_count >= self.patience:\n print(f\"[EarlyStopping] No improvement for {self.patience} evals. 
Halting.\")\n control.should_training_stop = True\n return control\n\n def _run_eval(self, model, tokenizer_local, args):\n FastLanguageModel.for_inference(model)\n rewards = []\n per_task = {\"extraction\": [], \"sql_qa\": [], \"insights\": [], \"push\": []}\n subset = self.eval_records[:EVAL_MAX_SAMPLES]\n for record in subset:\n msgs = record[\"prompt\"]\n text = tokenizer_local.apply_chat_template(msgs, tokenize=False, add_generation_prompt=True)\n\n # V4.1: classify task for per-task logging\n user_txt = \" \".join(m.get(\"content\", \"\") for m in msgs if m[\"role\"] == \"user\")\n task = _classify_task_type(user_txt)\n\n inputs = tokenizer_local(text, return_tensors=\"pt\", truncation=True, max_length=args.max_prompt_length).to(model.device)\n with torch.no_grad():\n out = model.generate(\n **inputs,\n max_new_tokens=EVAL_MAX_TOKENS,\n temperature=0.1,\n do_sample=True,\n repetition_penalty=1.0,\n )\n resp = tokenizer_local.decode(out[0][inputs[\"input_ids\"].shape[1]:], skip_special_tokens=True)\n r = self.reward_fn([resp], [text])[0]\n rewards.append(r)\n if task in per_task:\n per_task[task].append(r)\n FastLanguageModel.for_training(model)\n mean_r = sum(rewards) / len(rewards) if rewards else 0.0\n return mean_r, per_task\n\n\n# ββ Training ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ\nFastLanguageModel.for_training(model)\n\ngrpo_config = GRPOConfig(\n output_dir=str(CHECKPOINT_DIR),\n num_generations=NUM_GENERATIONS,\n scale_rewards=SCALE_REWARDS,\n max_completion_length=MAX_COMPLETION_LENGTH,\n max_steps=MAX_STEPS, # V4.1: 600\n temperature=TEMPERATURE,\n beta=BETA,\n num_train_epochs=1,\n per_device_train_batch_size=BATCH_SIZE,\n gradient_accumulation_steps=GRAD_ACCUM,\n learning_rate=LEARNING_RATE, # V4.1: 5e-6\n lr_scheduler_type=LR_SCHEDULER_TYPE, # V4.1: constant_with_warmup\n warmup_ratio=WARMUP_RATIO, # V4.1: 0.05\n fp16=False,\n bf16=True,\n logging_steps=1,\n save_steps=SAVE_STEPS, # V4.1: 50\n save_total_limit=5,\n save_only_model=True,\n report_to=\"wandb\",\n max_prompt_length=MAX_SEQ_LENGTH // 2,\n seed=42,\n remove_unused_columns=False,\n disable_tqdm=True,\n logging_first_step=True,\n)\n\neval_cb = EvalRewardCallback(\n eval_records=list(eval_dataset),\n reward_fn=commerce_reward_fn,\n patience=EARLY_STOPPING_PATIENCE,\n delta=EARLY_STOPPING_DELTA,\n)\n\ntrainer = UnslothGRPOTrainer(\n model=model,\n reward_funcs=commerce_reward_fn,\n args=grpo_config,\n train_dataset=train_dataset,\n processing_class=tokenizer,\n callbacks=[eval_cb],\n)\n\nt_start = time.time()\nresult = trainer.train()\nelapsed = time.time() - t_start\n\nwandb.log({\n \"train/final_loss\": result.training_loss,\n \"train/duration_hours\": elapsed / 3600,\n \"train/total_steps\": result.global_step,\n \"eval/best_reward_final\": eval_cb.best_reward,\n \"eval/best_step\": eval_cb.best_step,\n})\nwandb.finish()\n\nprint(f\"\\n{'='*60}\")\nprint(f\"V4.1 Training Complete\")\nprint(f\" Loss: {result.training_loss:.4f}\")\nprint(f\" Steps: {result.global_step}\")\nprint(f\" Duration: {elapsed/3600:.1f}h\")\nprint(f\" Best eval: {eval_cb.best_reward:.4f} (step {eval_cb.best_step})\")\nprint(f\"{'='*60}\")"
|
| 163 |
+
},
{
"cell_type": "markdown",
"metadata": {},
"source": "---\n\n## Cell 14: Validation (20 Held-Out Samples)\n\n**Success criteria (from v4_1-handoff.md):**\n- eval/mean_reward β₯ 0.55 β Scale to 3.7B (Scenario A)\n- eval/mean_reward β 0.476 AND extraction β 0.17 β Switch base model (Scenario B)\n- eval improves but one task < 0.20 β Targeted fix (Scenario C)"
|
| 168 |
+
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": "FastLanguageModel.for_inference(model)\n\nval_samples = list(eval_dataset)[:20]\nval_results = {\"extraction\": [], \"sql_qa\": [], \"insights\": [], \"push\": []}\n\nfor i, record in enumerate(val_samples):\n msgs = record[\"prompt\"]\n user_text = \" \".join(m[\"content\"] for m in msgs if m[\"role\"] == \"user\")\n task = _classify_task_type(user_text)\n\n text = tokenizer.apply_chat_template(msgs, tokenize=False, add_generation_prompt=True)\n inputs = tokenizer(text, return_tensors=\"pt\").to(model.device)\n with torch.no_grad():\n out = model.generate(\n **inputs,\n max_new_tokens=MAX_COMPLETION_LENGTH,\n temperature=0.1,\n do_sample=True,\n repetition_penalty=1.0,\n )\n resp = tokenizer.decode(out[0][inputs[\"input_ids\"].shape[1]:], skip_special_tokens=True)\n r = commerce_reward_fn([resp], [text])[0]\n val_results[task].append(r)\n print(f\" [{task:12s}] reward={r:.2f} | {strip_think(resp)[:80]}\")\n\nprint(f\"\\n{'='*60}\")\nprint(\"Validation Results by Task:\")\noverall = []\nfor task, rewards in val_results.items():\n if rewards:\n mean_r = sum(rewards) / len(rewards)\n overall.extend(rewards)\n print(f\" {task:12s}: mean={mean_r:.3f} (n={len(rewards)})\")\nif overall:\n print(f\" {'OVERALL':12s}: mean={sum(overall)/len(overall):.3f} (n={len(overall)})\")\nprint(f\"{'='*60}\")\n\n# Decision tree from v4_1-handoff.md\nmean_overall = sum(overall) / len(overall) if overall else 0\next_rewards = val_results.get(\"extraction\", [])\nmean_ext = sum(ext_rewards) / len(ext_rewards) if ext_rewards else 0\n\nprint(f\"\\n--- Decision ---\")\nif mean_overall >= 0.55:\n print(f\" Scenario A: eval={mean_overall:.3f} β₯ 0.55 β SCALE TO 3.7B-Instruct\")\nelif mean_overall < 0.50 and mean_ext < 0.20:\n print(f\" Scenario B: eval={mean_overall:.3f}, ext={mean_ext:.3f} β Switch to 0.5B-Base with SFT\")\nelse:\n weakest = min(val_results.items(), key=lambda x: sum(x[1])/len(x[1]) if x[1] else float('inf'))\n print(f\" Scenario C: Weakest task = {weakest[0]} β Targeted fix needed\")"
|
| 175 |
+
},
{
"cell_type": "markdown",
"metadata": {},
"source": "---\n\n## Cell 15: Save Adapter"
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": "ADAPTER_DIR.mkdir(parents=True, exist_ok=True)\nmodel.save_pretrained(str(ADAPTER_DIR))\ntokenizer.save_pretrained(str(ADAPTER_DIR))\nprint(f\"β Adapter saved to {ADAPTER_DIR}\")"
|
| 187 |
+
}
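,
{
"cell_type": "markdown",
"metadata": {},
"source": "**Reloading the adapter later (hedged sketch).** For a fresh inference session, one plausible path is to reload the 4-bit base model and attach the saved LoRA weights via `peft`; this assumes the same Unsloth/peft versions used above and the constants from Cell 3:\n\n```python\nfrom unsloth import FastLanguageModel\nfrom peft import PeftModel\n\nbase, tok = FastLanguageModel.from_pretrained(\n    model_name=MODEL_ID, max_seq_length=MAX_SEQ_LENGTH, load_in_4bit=True,\n)\nmodel = PeftModel.from_pretrained(base, str(ADAPTER_DIR))\nFastLanguageModel.for_inference(model)\n```"
}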
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"name": "python",
"version": "3.10.0"
},
"nbformat": 4,
"nbformat_minor": 5
}