clemsail committed (verified) · commit b2ff2b4 · 1 parent: 6cd157f

docs: AI Act transparency + benchmarks

Files changed (1): README.md (+151, -0)

README.md (added):
---
base_model: unsloth/Devstral-Small-2507-unsloth-bnb-4bit
library_name: peft
model_name: devstral-v3-sft
tags:
- base_model:adapter:unsloth/Devstral-Small-2507-unsloth-bnb-4bit
- lora
- sft
- transformers
- trl
- unsloth
license: apache-2.0
pipeline_tag: text-generation
---

# Model Card for devstral-v3-sft

This model is a fine-tuned version of [unsloth/Devstral-Small-2507-unsloth-bnb-4bit](https://huggingface.co/unsloth/Devstral-Small-2507-unsloth-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
# Repo id assumed from this card (provider clemsail, model_name devstral-v3-sft);
# loading the adapter directly through `pipeline` requires `peft` to be installed.
generator = pipeline("text-generation", model="clemsail/devstral-v3-sft", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
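
Because this repository ships a LoRA adapter rather than full model weights, you can also load the base model explicitly and attach the adapter with `peft`. A minimal sketch, assuming the adapter lives at `clemsail/devstral-v3-sft` (a repo id inferred from this card, not confirmed by it):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/Devstral-Small-2507-unsloth-bnb-4bit"  # base model from the card metadata
adapter_id = "clemsail/devstral-v3-sft"                   # assumed repo id for this adapter

tokenizer = AutoTokenizer.from_pretrained(base_id)
# Loading the 4-bit base requires bitsandbytes and a CUDA device.
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA weights

messages = [{"role": "user", "content": "Write a Rust function that parses a semver string."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```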

## Training procedure

This model was trained with supervised fine-tuning (SFT).
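
For reference, here is a minimal sketch of what a TRL SFT run over this base looks like. The dataset and LoRA hyperparameters below are placeholders, since the actual training configuration is not published in this card (the adapter's real rank and target modules live in `adapter_config.json`):

```python
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Placeholder dataset: the real SFT corpus is only described at a high level in this card.
dataset = load_dataset("trl-lib/Capybara", split="train")

# Illustrative LoRA settings, not the values actually used for this adapter.
peft_config = LoraConfig(r=16, lora_alpha=32, target_modules="all-linear")

trainer = SFTTrainer(
    model="unsloth/Devstral-Small-2507-unsloth-bnb-4bit",
    train_dataset=dataset,
    peft_config=peft_config,
    args=SFTConfig(output_dir="devstral-v3-sft"),
)
trainer.train()
```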

### Framework versions

- PEFT: 0.18.1
- TRL: 0.24.0
- Transformers: 5.5.0
- PyTorch: 2.10.0
- Datasets: 4.3.0
- Tokenizers: 0.22.2

## Citations

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```

# devstral-v3-sft

## 🇪🇺 EU AI Act transparency

This model is published under the EU AI Act framework (Regulation (EU) 2024/1689).

| Field | Value |
|---|---|
| Provider | L'Électron Rare (clemsail) |
| Role under AI Act | GPAI provider |
| Adapter type | LoRA / PEFT supervised fine-tuning adapter |
| Base model | `mistralai/Devstral-Small-2-24B-Instruct-2512` |
| License | Apache-2.0 (this artefact); the upstream Mistral licence applies separately |
| Intended use | Code generation across Python / Rust / TypeScript / C++ / SQL / shell, with stronger reasoning on engineering questions |
| Out of scope | Healthcare diagnosis, legal advice, autonomous safety-critical decisions, generation of malicious code or exploits |
| Risk classification | Limited risk; Article 50 transparency obligations apply |
| Copyright respect | Training data excludes scraped copyrighted material; it comprises public engineering documentation under permissive licences plus internal synthetic distillation |
| Full provenance | https://github.com/L-electron-Rare/eu-kiki/tree/main/docs/provenance |
| Contact | postmaster@saillant.cc |

⚠️ **You are using an AI model.** Outputs may be inaccurate, biased or fabricated. Do not act on them without independent verification, especially in regulated domains.

## Benchmarks

Run via `lm-eval-harness` v0.4.x against the fused checkpoint (the base model with this adapter merged for inference). Strict-match scoring where applicable.

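For context, here is a sketch of producing such a fused checkpoint before evaluation. The full-precision base repo and the adapter repo id are assumptions rather than values recorded in this card:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Devstral-Small-2507"  # assumed full-precision base for merging
adapter_id = "clemsail/devstral-v3-sft"    # assumed repo id for this adapter

# Merge the LoRA weights into the base and save a standalone checkpoint
# that lm-eval-harness can load with its plain `hf` model backend.
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
fused = PeftModel.from_pretrained(base, adapter_id).merge_and_unload()
fused.save_pretrained("devstral-v3-sft-fused")
AutoTokenizer.from_pretrained(base_id).save_pretrained("devstral-v3-sft-fused")
```

The saved directory can then be handed to the harness, e.g. `lm_eval --model hf --model_args pretrained=devstral-v3-sft-fused --tasks gsm8k`.
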
| Task | Metric | Score |
|---|---|---|
| gsm8k | `exact_match,strict-match` | **0.844** |
| ifeval | `prompt_level_strict_acc,none` | **0.691** |
| bbh_cot_fewshot | `exact_match,get-answer` | **0.795** |
| bbh_cot_fewshot_boolean_expressions | `exact_match,get-answer` | **0.900** |
| bbh_cot_fewshot_causal_judgement | `exact_match,get-answer` | **0.600** |
| bbh_cot_fewshot_date_understanding | `exact_match,get-answer` | **0.933** |
| bbh_cot_fewshot_disambiguation_qa | `exact_match,get-answer` | **0.767** |
| bbh_cot_fewshot_dyck_languages | `exact_match,get-answer` | **0.100** |
| bbh_cot_fewshot_formal_fallacies | `exact_match,get-answer` | **0.600** |
| bbh_cot_fewshot_geometric_shapes | `exact_match,get-answer` | **0.367** |
| bbh_cot_fewshot_hyperbaton | `exact_match,get-answer` | **1.000** |
| bbh_cot_fewshot_logical_deduction_five_objects | `exact_match,get-answer` | **0.767** |
| bbh_cot_fewshot_logical_deduction_seven_objects | `exact_match,get-answer` | **0.533** |
| bbh_cot_fewshot_logical_deduction_three_objects | `exact_match,get-answer` | **0.900** |
| bbh_cot_fewshot_movie_recommendation | `exact_match,get-answer` | **0.833** |
| bbh_cot_fewshot_multistep_arithmetic_two | `exact_match,get-answer` | **0.867** |
| bbh_cot_fewshot_navigate | `exact_match,get-answer` | **0.967** |
| bbh_cot_fewshot_object_counting | `exact_match,get-answer` | **0.967** |
| bbh_cot_fewshot_penguins_in_a_table | `exact_match,get-answer` | **0.933** |
| bbh_cot_fewshot_reasoning_about_colored_objects | `exact_match,get-answer` | **0.967** |
| bbh_cot_fewshot_ruin_names | `exact_match,get-answer` | **0.667** |
| bbh_cot_fewshot_salient_translation_error_detection | `exact_match,get-answer` | **0.700** |
| bbh_cot_fewshot_snarks | `exact_match,get-answer` | **0.700** |
| bbh_cot_fewshot_sports_understanding | `exact_match,get-answer` | **0.900** |
| bbh_cot_fewshot_temporal_sequences | `exact_match,get-answer` | **0.967** |
| bbh_cot_fewshot_tracking_shuffled_objects_five_objects | `exact_match,get-answer` | **0.967** |
| bbh_cot_fewshot_tracking_shuffled_objects_seven_objects | `exact_match,get-answer` | **0.933** |
| bbh_cot_fewshot_tracking_shuffled_objects_three_objects | `exact_match,get-answer` | **0.967** |
| bbh_cot_fewshot_web_of_lies | `exact_match,get-answer` | **1.000** |
| bbh_cot_fewshot_word_sorting | `exact_match,get-answer` | **0.667** |
| mmlu_pro | `exact_match,custom-extract` | **0.619** |
| mmlu_pro_biology | `exact_match,custom-extract` | **0.768** |
| mmlu_pro_business | `exact_match,custom-extract` | **0.660** |
| mmlu_pro_chemistry | `exact_match,custom-extract` | **0.580** |
| mmlu_pro_computer_science | `exact_match,custom-extract` | **0.676** |
| mmlu_pro_economics | `exact_match,custom-extract` | **0.678** |
| mmlu_pro_engineering | `exact_match,custom-extract` | **0.448** |
| mmlu_pro_health | `exact_match,custom-extract` | **0.678** |
| mmlu_pro_history | `exact_match,custom-extract` | **0.575** |
| mmlu_pro_law | `exact_match,custom-extract` | **0.432** |
| mmlu_pro_math | `exact_match,custom-extract` | **0.678** |
| mmlu_pro_other | `exact_match,custom-extract` | **0.612** |
| mmlu_pro_philosophy | `exact_match,custom-extract` | **0.549** |
| mmlu_pro_physics | `exact_match,custom-extract` | **0.630** |
| mmlu_pro_psychology | `exact_match,custom-extract` | **0.704** |
| leaderboard_math_hard | `exact_match,none` | **0.341** |
| leaderboard_math_algebra_hard | `exact_match,none` | **0.570** |
| leaderboard_math_counting_and_prob_hard | `exact_match,none` | **0.252** |
| leaderboard_math_geometry_hard | `exact_match,none` | **0.182** |
| leaderboard_math_intermediate_algebra_hard | `exact_match,none` | **0.139** |
| leaderboard_math_num_theory_hard | `exact_match,none` | **0.416** |
| leaderboard_math_prealgebra_hard | `exact_match,none` | **0.523** |
| leaderboard_math_precalculus_hard | `exact_match,none` | **0.126** |

Raw `results_*.json` files are committed under `evals/`.