aamrinder committed on
Commit 8083b95 · verified · 1 Parent(s): 629a13b

strip em-dashes
Files changed (1)
  1. README.md +14 -14
README.md CHANGED
@@ -18,11 +18,11 @@ tags:

  This repo wraps that exact task as an **OpenEnv environment**. Built on MUStARD (a 690-clip sarcasm-in-sitcom dataset) and a small prosody pipeline of pyin pitch, RMS energy and pause timing, all baked straight into the env. The agent sees a transcript plus a text rendering of the prosody, does a chain of thought, and finishes with `<final>{"label":..., "confidence":...}</final>`. Reward is graded on correctness, reasoning, and format.

- I trained a baseline on top of this — Qwen2.5-3B-Instruct, LoRA r=16, 200 GRPO steps via Unsloth + TRL — on a strict 80/20 train/test split. The trained model is sitting at [aamrinder/subtext-arena-grpo](https://huggingface.co/aamrinder/subtext-arena-grpo). Total compute spent on the whole thing — eleven dollars only.
+ I trained a baseline on top of this. Qwen2.5-3B-Instruct, LoRA r=16, 200 GRPO steps via Unsloth + TRL, on a strict 80/20 train/test split. The trained model is sitting at [aamrinder/subtext-arena-grpo](https://huggingface.co/aamrinder/subtext-arena-grpo). Total compute spent on the whole thing was eleven dollars only.

  Built for the OpenEnv Hackathon Finale (Apr 2026, Bangalore). Theme: World Modeling.

- Team **BalleBalle** — Amrinder Singh, Shubham Kapoor.
+ Team **BalleBalle**: Amrinder Singh, Shubham Kapoor.

  **Links:**
  - Environment (live HF Space, with web UI): https://huggingface.co/spaces/aamrinder/subtext-arena
@@ -33,13 +33,13 @@ Team **BalleBalle** — Amrinder Singh, Shubham Kapoor.

  ---

- Detecting sarcasm from audio prosody is honestly not a solved problem at all. GPT-4o sits at 67% Macro-F1 on MUStARD++, meaning one out of three times the most powerful model on the planet is being fooled by tone of voice itself. There is a documented modality gap on this — the [LISTEN paper](https://arxiv.org/abs/2510.10444) from October 2025 — and as of today the gap is still wide open. And there is no public RL training environment for this task that I could find, so one fine evening I just sat down and made one.
+ Detecting sarcasm from audio prosody is honestly not a solved problem at all. GPT-4o sits at 67% Macro-F1 on MUStARD++, meaning one out of three times the most powerful model on the planet is being fooled by tone of voice itself. There is a documented modality gap on this, the [LISTEN paper](https://arxiv.org/abs/2510.10444) from October 2025, and as of today the gap is still wide open. And there is no public RL training environment for this task that I could find, so one fine evening I just sat down and made one.

  The closest prior work is [AudioToolAgent](https://arxiv.org/abs/2510.02995) (Oct 2025), which prompted a frontier LLM to use audio analysis tools. Same kind of architecture really, but they didn't actually train anything. Subtext Arena is the training-side counterpart of that idea.

  ---

- Each episode is one MUStARD clip. The prompt the agent sees has the transcript (target line plus 1-7 lines of preceding conversation, with speaker tags), prosody features rendered as text — pitch mean and variability, contour shape, energy mean and variability, voiced ratio, pre-utterance silence, internal pauses with timestamps — and a pitch contour drawn out as an 8-level ASCII sparkline.
+ Each episode is one MUStARD clip. The prompt the agent sees has the transcript (target line plus 1-7 lines of preceding conversation, with speaker tags), prosody features rendered as text (pitch mean and variability, contour shape, energy mean and variability, voiced ratio, pre-utterance silence, internal pauses with timestamps), and a pitch contour drawn out as an 8-level ASCII sparkline.

  The model emits something like:

@@ -58,7 +58,7 @@ The reward is a composable rubric:
  0.15 * format (1.0 if a valid <final> JSON parses, 0 otherwise)
  ```

- The env also exposes four tools — `get_transcript`, `get_prosody_features`, `get_pitch_contour`, `submit_belief` — for interactive multi-step inference. That is what you can poke at on this Space's web UI itself. Training uses the single-prompt format above so it matches the deck-linked Wordle and Sudoku notebooks.
+ The env also exposes four tools (`get_transcript`, `get_prosody_features`, `get_pitch_contour`, `submit_belief`) for interactive multi-step inference. That is what you can poke at on this Space's web UI itself. Training uses the single-prompt format above so it matches the deck-linked Wordle and Sudoku notebooks.

  ---

@@ -66,7 +66,7 @@ I trained for 200 steps with `num_generations=4`, LoRA r=16, dropout 0.05, on an

  ![reward curve](docs/plots/reward_curve.png)

- Reward climbs from 0.335 to 0.97 on training prompts. The shaded band around the line is within-batch rollout variance — when it is narrow, the four group-relative generations are mostly agreeing; when it goes wide, the model is exploring.
+ Reward climbs from 0.335 to 0.97 on training prompts. The shaded band around the line is within-batch rollout variance. When it is narrow, the four group-relative generations are mostly agreeing; when it goes wide, the model is exploring.

  After training, I ran greedy inference on 80 clips the model had never seen during training.

@@ -78,11 +78,11 @@ After training, I ran greedy inference on 80 clips the model had never seen duri
  | Prosody-Pivot Set in eval (audio-decisive clips) | 5/6 = **83%** |
  | Well-formed completions | 79/80 = 98.75% |

- The honest read — 51% on the broad set is roughly what a plain text-only baseline would also do, meaning pyin-derived prosody summary stats are still too simple to push a 3B model much beyond what it already gets from just reading the transcript. Fair hit, no hiding it. But when the audio is genuinely decisive (the Pivot Set), the trained model actually uses it — 5 out of 6 correct on those clips, vs 0/6 for a text-only baseline that confidently picks the wrong label every single time.
+ The honest read: 51% on the broad set is roughly what a plain text-only baseline would also do, meaning pyin-derived prosody summary stats are still too simple to push a 3B model much beyond what it already gets from just reading the transcript. Fair hit, no hiding it. But when the audio is genuinely decisive (the Pivot Set), the trained model actually uses it. 5 out of 6 correct on those clips, vs 0/6 for a text-only baseline that confidently picks the wrong label every single time.

- The 0.97 train vs 0.51 held-out gap is itself the anti-memorization signal — if the model had been just gaming the reward, train and held-out would have matched.
+ The 0.97 train vs 0.51 held-out gap is itself the anti-memorization signal. If the model had been just gaming the reward, train and held-out would have matched.

- [`docs/side_by_side.html`](docs/side_by_side.html) shows 5 hand-picked clips from the held-out set where text-only Qwen confidently picks the wrong label and the prosody-trained model picks the right one. Tally — baseline 0/5, trained 5/5.
+ [`docs/side_by_side.html`](docs/side_by_side.html) shows 5 hand-picked clips from the held-out set where text-only Qwen confidently picks the wrong label and the prosody-trained model picks the right one. Tally: baseline 0/5, trained 5/5.

  ![training dynamics](docs/plots/training_dynamics.png)

@@ -173,11 +173,11 @@ A Colab-friendly version of the same script is at [notebooks/train_grpo_colab.ip

  This is just the start really, I am not stopping here at all. The whole point of putting in the effort to build an environment in the first place is that it stays alive even after the hackathon is done, leaderboard is locked, credits run out, all of that. The audio-tool layer of the env is decoupled from the model interface itself, so anyone with a richer feature stack can plug in straight on top without touching anything else. Specifically:

- - pyin pitch contour → wav2vec2 / HuBERT prosody embeddings
- - RMS summary → spectrogram patch tokens
- - speaker-anonymous prompts → speaker-aware features
- - 3B text policy → 7B audio LLM (Qwen2-Audio) end-to-end
- - sarcasm only → polite refusals, hidden anger, suppressed feelings, basically any case where the words and the actual meaning don't agree
+ - pyin pitch contour to wav2vec2 / HuBERT prosody embeddings
+ - RMS summary to spectrogram patch tokens
+ - speaker-anonymous prompts to speaker-aware features
+ - 3B text policy to 7B audio LLM (Qwen2-Audio) end-to-end
+ - sarcasm only to polite refusals, hidden anger, suppressed feelings, basically any case where the words and the actual meaning don't agree

  If any of these drop the broad held-out number from 51% toward AMuSeD's 81% F1 multimodal SOTA on the same dataset, the env will measure it cleanly because the held-out split and reward function are fixed.
183