KnowRL: Boosting LLM Reasoning via Reinforcement Learning with Minimal-Sufficient Knowledge Guidance
Abstract
Reinforcement learning with verifiable rewards (RLVR) improves reasoning in large language models, but its effectiveness is often limited by severe reward sparsity on hard problems. Recent hint-based RL methods mitigate sparsity by injecting partial solutions or abstract templates, yet they typically scale guidance by adding more tokens, which introduces redundancy, inconsistency, and extra training overhead. We propose KnowRL (Knowledge-Guided Reinforcement Learning), an RL training framework that treats hint design as a minimal-sufficient guidance problem. During RL training, KnowRL decomposes guidance into atomic knowledge points (KPs) and uses Constrained Subset Search (CSS) to construct compact, interaction-aware subsets for training. We further identify a pruning interaction paradox -- removing one KP may help while removing multiple such KPs can hurt -- and explicitly optimize for robust subset curation under this dependency structure. We train KnowRL-Nemotron-1.5B from OpenMath-Nemotron-1.5B. Across eight reasoning benchmarks at the 1.5B scale, KnowRL-Nemotron-1.5B consistently outperforms strong RL and hinting baselines. Without KP hints at inference, KnowRL-Nemotron-1.5B reaches 70.08 average accuracy, already surpassing Nemotron-1.5B by +9.63 points; with selected KPs, performance improves to 74.16, establishing a new state of the art at this scale. The model, curated training data, and code are publicly available at https://github.com/Hasuer/KnowRL.
Community
This work proposes a surprisingly effective idea:
👉 you don’t need long reasoning traces — you just need the right minimal hints.
Instead of scaling supervision with longer CoT or dense trajectories, we introduce a hint-based RL paradigm where the model is guided by a small set of distilled knowledge points. These hints are:
- minimal (only the essential reasoning signals)
- non-redundant
- problem-agnostic (no leakage of instance-specific details)
💡 The key finding is quite striking:
Performance is not proportional to hint quantity — it’s driven by critical hint subsets.
A few well-chosen knowledge points can unlock correctness, leading to:
- significantly improved reasoning accuracy
- reduced reward sparsity in RL
- much higher training efficiency
Even more interestingly, the results reveal a non-additive interaction effect:
adding more hints does not always help — only the right combination matters.
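To make the pruning interaction paradox concrete, here is a minimal, self-contained sketch. The `success_rate` oracle and the exhaustive search below are illustrative assumptions, not the paper's actual CSS implementation: KPs "a" and "b" are each redundant on their own, yet jointly necessary, so judging removals one KP at a time is misleading.

```python
from itertools import combinations

# Toy rollout-accuracy oracle over knowledge-point (KP) subsets.
# "a" and "b" are individually redundant but jointly necessary:
# either one alone preserves accuracy, yet dropping both collapses it.
# (Hypothetical stand-in for measured rollout accuracy, not the paper's CSS.)
def success_rate(kps):
    if "a" in kps or "b" in kps:
        return 0.9
    return 0.3

full = {"a", "b", "c"}

# Pruning paradox: judged one at a time against the full set,
# every KP looks safely removable...
solo_safe = [kp for kp in sorted(full)
             if success_rate(full - {kp}) >= success_rate(full)]

# ...but removing all the "safe" KPs together hurts.
after_naive_prune = success_rate(full - set(solo_safe))

# Interaction-aware search: smallest subset matching full accuracy,
# evaluated jointly rather than KP by KP.
target = success_rate(full)
minimal = next(set(s)
               for r in range(len(full) + 1)
               for s in combinations(sorted(full), r)
               if success_rate(set(s)) >= target)
```

Under this toy oracle, every KP passes the solo-removal test, naive pruning drops them all and accuracy collapses, while the joint search correctly keeps a single critical KP. This is why subset curation must account for the dependency structure rather than scoring hints independently.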