# Model Card for Disctil-Qwen3-1.7B

This model is a fine-tuned version of `reaperdoesntknow/DiStil-Qwen3-1.7B-uncensored`. It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="reaperdoesntknow/Disctil-Qwen3-1.7B", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure

This model was trained with supervised fine-tuning (SFT).
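SFT optimizes next-token cross-entropy on prompt/response pairs, typically masking the loss on prompt tokens so only the response is learned. A minimal toy sketch of that masked loss (illustrative only; `sft_loss` and the hard-coded log-probabilities are hypothetical, not taken from the actual training run):

```python
import math

def sft_loss(token_logprobs, labels, prompt_len):
    """Mean negative log-likelihood over response tokens only.

    token_logprobs: per-position dict mapping token -> log-probability
    labels: target token at each position
    prompt_len: number of leading prompt tokens masked out of the loss
    """
    total, count = 0.0, 0
    for i, label in enumerate(labels):
        if i < prompt_len:
            continue  # prompt tokens contribute no loss in SFT
        total += -token_logprobs[i][label]
        count += 1
    return total / count

# Toy example: one prompt token ("Q") is masked; the loss averages over
# the two response tokens, whose model probabilities are 0.5 and 0.25.
loss = sft_loss(
    [{"Q": 0.0}, {"A": math.log(0.5)}, {"B": math.log(0.25)}],
    ["Q", "A", "B"],
    prompt_len=1,
)
```

In a real TRL run the same masking is handled internally; the sketch only shows why the prompt tokens do not shape the gradient.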
## Framework versions

- TRL: 0.29.1
- Transformers: 5.0.0
- PyTorch: 2.10.0+cu128
- Datasets: 4.0.0
- Tokenizers: 0.22.2
## Mathematical Foundations: Discrepancy Calculus (DISC)

This model is the DISC-refined node in the DistilQwen distillation chain. Discrepancy Calculus is a measure-theoretic framework that quantifies the mismatch between integration and differentiation via a discrepancy operator.

DISC refinement applies the Mesh Fundamental Identity decomposition ($f = \text{AC} + \text{jumps} + \text{Cantor}$) to the model's weight space, identifying and preserving structural boundaries that standard fine-tuning smears out. The Meta-Discrepancy Theorem (Th. 11.15) proves that when the gap measure and the discrepancy energy are both positive, classical smooth optimization provably cannot capture the full structure.
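The $\text{AC} + \text{jumps} + \text{Cantor}$ split mirrors the classical Lebesgue decomposition of a function of bounded variation. A minimal numerical sketch of the intuition (the function name and threshold heuristic are hypothetical, not part of the DISC framework; a singular-continuous Cantor part is invisible at any finite sampling resolution, so only the first two terms appear):

```python
def decompose_increments(samples, jump_threshold=0.5):
    """Split a sampled path's total increment into a smooth (AC-like)
    part and a jump part, echoing f = AC + jumps + Cantor.

    Increments smaller than the threshold are treated as absolutely
    continuous drift; larger ones are treated as jumps.
    """
    ac, jumps = 0.0, 0.0
    for prev, cur in zip(samples, samples[1:]):
        step = cur - prev
        if abs(step) >= jump_threshold:
            jumps += step  # structural boundary: preserve, don't smooth
        else:
            ac += step     # smooth drift
    return ac, jumps

# A path that drifts slowly, jumps by 1.0, then drifts again.
ac, jumps = decompose_increments([0.0, 0.1, 0.2, 1.2, 1.3])
```

The point of the toy split is the same as the card's claim about fine-tuning: averaging all increments together would erase the jump, whereas separating the parts keeps the boundary intact.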
Full theory: *On the Formal Analysis of Discrepancy Calculus* (Colca, 2026; Convergent Intelligence LLC: Research Division).
## Citations

Cite TRL as:

```bibtex
@software{vonwerra2020trl,
    title  = {{TRL: Transformers Reinforcement Learning}},
    author = {von Werra, Leandro and Belkada, Younes and Tunstall, Lewis and Beeching, Edward and Thrush, Tristan and Lambert, Nathan and Huang, Shengyi and Rasul, Kashif and Gallou{\'e}dec, Quentin},
    license = {Apache-2.0},
    url    = {https://github.com/huggingface/trl},
    year   = {2020}
}
```
## From the Convergent Intelligence Portfolio

**DistilQwen Collection.** Our only BF16 series: proof-weighted distillation from Qwen3-30B-A3B → 1.7B and 0.6B on H100 hardware, with three teacher variants (Instruct, Thinking, Coder), nine models, and 2,788 combined downloads. The rest of the portfolio proves structure beats scale on CPU; this collection shows what happens when you give the methodology real hardware.

- Top model: Qwen3-1.7B-Coder-Distilled-SFT (508 downloads)
- Full methodology: *Structure Over Scale* (DOI: 10.57967/hf/8165)
## Related Models
| Model | Downloads | Format |
|---|---|---|
| TopologicalQwen | 1,974 | BF16 |
| Qwen3-1.7B-Thinking-Distil | 1,903 | BF16 |
| Qwen3-1.7B-Coder-Distilled-SFT | 1,677 | BF16 |
| DiStil-Qwen3-1.7B-uncensored | 1,602 | BF16 |
| DistilQwen3-1.7B-uncensored | 1,574 | BF16 |
| Qwen3-1.7B-Distilled-30B-A3B | 1,138 | BF16 |
## Papers
| Paper | DOI |
|---|---|
| Structure Over Scale | 10.57967/hf/8165 |
| Three Teachers to Dual Cognition | 10.57967/hf/8184 |
| Discrepancy Calculus | 10.57967/hf/8194 |
Last updated: 2026-03-31 by Convergent Intelligence LLC: Research Division
## Model tree for reaperdoesntknow/Disctil-Qwen3-1.7B

Base model: `Qwen/Qwen3-1.7B-Base`