InosLihka committed
Commit 8227b63 · 1 Parent(s): 6226884

README: drop iter2 plots, keep only SFT v3 loss curve (current pipeline)

Files changed (1): README.md +1 -7
README.md CHANGED

@@ -45,13 +45,7 @@ Plus the gpt-5.4 teacher (the upper-bound reference) hits **0.611 in-dist / 0.62
 
 ![SFT v3 loss](plots/sft_v3_training_loss.png)
 
-**GRPO iter 2 historical RL run** before we pivoted to Algorithm Distillation. 400 steps of GRPO on Qwen 2.5-3B + LoRA, real env-replay reward.
-
-![GRPO iter2 loss](plots/grpo_iter2_training_loss.png)
-
-**Baseline vs trained agents on the env** (random + heuristic + iter2 trained). The v2-grader version of this plot for the SFT v3 student lands after the running eval finishes.
-
-![Baseline vs trained](plots/grpo_iter2_baseline_vs_trained.png)
+The bar comparison (random vs heuristic vs distilled student) is in the **Headline result** table above. Numbers source: `eval_results.json` in the [trained model repo](https://huggingface.co/InosLihka/rhythm-env-meta-trained-sft-v3).
 
 ## Why a Life Simulator?