Update README.md
README.md
CHANGED
@@ -16,13 +16,17 @@ library_name: transformers
- [Boldt-1B](https://huggingface.co/Boldt/Boldt-1B)
- [Boldt-1B-IT-Preview](https://huggingface.co/Boldt/Boldt-1B-IT-Preview)
-As a result, instead of single-pass pre-training on a large web corpus, Boldt models were trained for multiple epochs on a small, high-quality subset of a web corpus. We find that repeated training on high-quality subsets outperforms single-pass training on larger, less diverse corpora. For more details regarding the origin of this model and the research behind it, please refer to our [preprint](https://arxiv.org/abs/2604.28075)!
## Usage
@@ -42,19 +46,22 @@ outputs = model.generate(**inputs, max_new_tokens=64)
## Evaluation
-We evaluate Boldt-350M on our [modernized German benchmark suite](https://huggingface.co/collections/Boldt/german-llm-benchmarks).
## Safety & Ethics
- [Boldt-1B](https://huggingface.co/Boldt/Boldt-1B)
- [Boldt-1B-IT-Preview](https://huggingface.co/Boldt/Boldt-1B-IT-Preview)
+### Repetition over Diversity
+The training philosophy behind **Boldt** is centered on a key finding from our research: **repetition over diversity**.
+Standard pre-training paradigms typically balance quality filtering against the need for massive token volume and broad corpus diversity. In contrast, Boldt models are trained for multiple epochs on a highly filtered dataset: the German ***Dense-Core*** subset of [FineWeb-2](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2). We isolated this subset using a combination of three hierarchical filters:
+- **Coherence:** Eliminates structurally fragmented or incoherent documents.
+- **Information Value:** Isolates content-rich and fact-bearing texts.
+- **Educational Quality:** Selects strictly for pedagogical clarity and deep explanations.
+We demonstrate that repeated exposure to this strict, high-quality subset is more sample-efficient than a single pass over less filtered and more diverse corpora. For a comprehensive look at our experiments, please refer to our preprint: [*Repetition over Diversity*](https://arxiv.org/abs/2604.28075).
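The card does not spell out how the three filters are implemented; the sketch below only illustrates the shape of such a hierarchical pipeline over the German portion of FineWeb-2. The scorer functions, thresholds, and the `deu_Latn` config name are illustrative assumptions, not the released Dense-Core pipeline.

```python
# Illustrative sketch of a hierarchical document filter over German FineWeb-2.
# The three scorers below are crude placeholder heuristics standing in for the
# coherence / information-value / educational-quality classifiers described in
# the card; thresholds and the "deu_Latn" config name are assumptions.
from datasets import load_dataset

def coherence_score(text: str) -> float:
    # Placeholder: penalize very short, fragmented documents.
    sentences = [s for s in text.split(".") if s.strip()]
    return min(len(sentences) / 20.0, 1.0)

def information_value_score(text: str) -> float:
    # Placeholder: lexical variety as a rough proxy for content density.
    words = text.split()
    return len(set(words)) / max(len(words), 1)

def educational_quality_score(text: str) -> float:
    # Placeholder: a real pipeline would call a trained quality classifier;
    # here we just count explanatory connectives common in pedagogical prose.
    markers = ("zum Beispiel", "das heißt", "daher", "deshalb", "folglich")
    return min(sum(text.count(m) for m in markers) / 5.0, 1.0)

THRESHOLDS = {"coherence": 0.6, "information_value": 0.4, "educational_quality": 0.2}

def passes_dense_core(doc: dict) -> bool:
    # Hierarchical order: cheap structural checks first, quality checks last.
    text = doc["text"]
    if coherence_score(text) < THRESHOLDS["coherence"]:
        return False
    if information_value_score(text) < THRESHOLDS["information_value"]:
        return False
    return educational_quality_score(text) >= THRESHOLDS["educational_quality"]

stream = load_dataset("HuggingFaceFW/fineweb-2", name="deu_Latn",
                      split="train", streaming=True)
dense_core_stream = (doc for doc in stream if passes_dense_core(doc))
```

The retained subset would then be repeated for multiple epochs during pre-training, which is the "repetition over diversity" recipe described above.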
## Usage
## Evaluation
+We evaluate Boldt-350M on our [modernized German benchmark suite](https://huggingface.co/collections/Boldt/german-llm-benchmarks). See our paper [(Aynetdinov et al., 2026)](https://arxiv.org/abs/2604.28075) for details on the structural and translation corrections we performed.
+Boldt-350M, while significantly smaller than the 1B models we compare with, still fares comparatively well, outperforming the much larger, multilingual Gemma-3-1B and Llama-3.2-1B models.
+### 1B Weight Class (Direct Comparison)
+*Note: Bold text indicates the best score in the 1B category.*
+| Model | Training Tokens | MMLU | ARC-C | ARC-E | H-Swag | LAMBADA | OBQA | Avg. |
+| :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- |
+| [Boldt-DC-350M](https://huggingface.co/Boldt/Boldt-DC-350M) | 200B | 29.29 | 32.24 | 52.87 | 43.21 | 37.48 | 45.86 | 40.16 |
+| [Boldt-DC-1B](https://huggingface.co/Boldt/Boldt-DC-1B) | 200B | 31.06 | **35.99** | **57.30** | 48.69 | 42.80 | 48.48 | 44.05 |
+| **Boldt-1B (this model)** | 230B | **31.42** | 34.11 | 55.78 | **48.77** | 44.70 | **52.32** | **44.52** |
+| [LLäMmlein-1B](https://huggingface.co/LSX-UniWue/LLaMmlein_1B) | 1T | 29.26 | 30.27 | 48.19 | 44.80 | **44.89** | 47.27 | 40.78 |
+| [Gemma-3-1B](https://huggingface.co/google/gemma-3-1b-pt) | 2T* | 30.01 | 30.55 | 47.89 | 43.43 | 41.71 | 45.05 | 39.77 |
+| [Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) | 9T* | 28.58 | 29.90 | 40.51 | 40.07 | 44.31 | 44.04 | 37.90 |
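As a rough reproduction sketch, a checkpoint can be scored with EleutherAI's lm-evaluation-harness as below. The task identifiers, few-shot setting, and batch size are placeholders: the harness names of the German task variants in the linked collection are not given in this card, so the standard English tasks stand in for them, and the paper's exact settings may differ. The table's "Avg." column is the unweighted mean over the six task scores.

```python
# Rough sketch: scoring a checkpoint with lm-evaluation-harness (pip install lm-eval).
# Task names are placeholders for the German benchmark variants, which are not
# listed in this card; settings shown here are assumptions, not the paper's setup.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=Boldt/Boldt-1B,dtype=bfloat16",
    tasks=["mmlu", "arc_challenge", "arc_easy", "hellaswag", "lambada_openai", "openbookqa"],
    num_fewshot=0,
    batch_size=8,
)

# Per-task metrics live under results["results"].
for task, metrics in results["results"].items():
    print(task, metrics)
```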
## Safety & Ethics