Boldt-350M
Boldt is a series of German Small Language Models (SLMs) trained from scratch. Our initial release includes four models:
- Boldt-DC-350M (this model)
- Boldt-DC-1B
- Boldt-1B
- Boldt-1B-IT-Preview
Repetition over Diversity
The training philosophy behind Boldt is centered on a key finding from our research: repetition over diversity.
Standard pre-training paradigms typically balance quality filtering against the need for massive token volume and broad corpus diversity. In contrast, Boldt models are trained for multiple epochs on a highly filtered dataset: the German Dense-Core subset of FineWeb-2. We isolated this subset using a combination of three hierarchical filters:
- Coherence: Eliminates structurally fragmented or incoherent documents.
- Information Value: Isolates content-rich and fact-bearing texts.
- Educational Quality: Selects strictly for pedagogical clarity and deep explanations.
We demonstrate that repeated exposure to this strict, high-quality subset is more sample-efficient than a single pass over less filtered and more diverse corpora. For a comprehensive look at our experiments, please refer to our preprint: Repetition over Diversity.
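To make the filtering pipeline concrete, here is a minimal sketch of hierarchical document filtering. The scorer functions, thresholds, and field names below are illustrative placeholders, not the actual classifiers or cutoffs used to build the Dense-Core subset.

```python
# Minimal sketch of hierarchical quality filtering (illustrative only).
# The scorers and thresholds are placeholders, not the classifiers or
# cutoffs actually used to isolate the German Dense-Core subset.

def score_coherence(text: str) -> float:
    # Placeholder: in practice a document-level quality classifier.
    return 0.9

def score_information_value(text: str) -> float:
    return 0.8

def score_educational_quality(text: str) -> float:
    return 0.7

STAGES = [
    (score_coherence, 0.5),            # drop fragmented/incoherent documents
    (score_information_value, 0.5),    # keep content-rich, fact-bearing texts
    (score_educational_quality, 0.5),  # keep pedagogically clear explanations
]

def dense_core_filter(documents):
    """Keep documents that pass every stage; `all` short-circuits, so a
    document rejected by an early stage is never scored by later ones."""
    return [
        doc for doc in documents
        if all(score(doc["text"]) >= threshold for score, threshold in STAGES)
    ]

docs = [{"text": "Berlin ist die Hauptstadt von Deutschland. ..."}]
print(len(dense_core_filter(docs)))
```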
Usage
Note: This is a base language model, not an instruction-tuned model. It is not optimized for chat or instruction following. For best results, use standard text completion rather than chat templates.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "Boldt/Boldt-DC-350M"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Basic text completion
text = "Berlin ist eine Stadt, wo"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
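Greedy decoding as above can produce repetitive completions from small base models. For more varied text, sampling can be enabled at generation time; the settings below are illustrative defaults, not values recommended by the model authors. The snippet reuses `model`, `tokenizer`, and `inputs` from the example above.

```python
# Sampled generation (illustrative settings, not tuned recommendations)
outputs = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=True,          # sample instead of greedy decoding
    temperature=0.8,         # soften the next-token distribution
    top_p=0.95,              # nucleus sampling
    repetition_penalty=1.1,  # discourage verbatim repetition
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```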
Evaluation
We evaluate Boldt-350M on our modernized German benchmark suite. See our paper (Aynetdinov et al., 2026) for details on the structural and translation corrections we performed.
Although Boldt-350M is significantly smaller than the 1B models we compare against, it still fares comparatively well, outperforming the much larger, multilingual Gemma-3-1B and Llama-3.2-1B models.
Comparison against 1B Reference Models
Note: Bold text indicates the best score in the 1B category.
| Model | Tokens | MMLU | ARC-C | ARC-E | H-Swag | LAMBADA | OBQA | Avg. |
|---|---|---|---|---|---|---|---|---|
| Boldt-DC-350M (this model) | 200B | 29.29 | 32.24 | 52.87 | 43.21 | 37.48 | 45.86 | 40.16 |
| Boldt-DC-1B | 200B | 31.06 | **35.99** | **57.30** | 48.69 | 42.80 | 48.48 | 44.05 |
| Boldt-1B | 230B | **31.42** | 34.11 | 55.78 | **48.77** | 44.70 | **52.32** | **44.52** |
| LLäMmlein-1B | 1T | 29.26 | 30.27 | 48.19 | 44.80 | **44.89** | 47.27 | 40.78 |
| Gemma-3-1B | 2T* | 30.01 | 30.55 | 47.89 | 43.43 | 41.71 | 45.05 | 39.77 |
| Llama-3.2-1B | 9T* | 28.58 | 29.90 | 40.51 | 40.07 | 44.31 | 44.04 | 37.90 |
| Qwen3.5-0.8B-Base | >36T* | 30.79 | 32.05 | 46.20 | 38.90 | 36.02 | 43.84 | 37.97 |
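Most of the benchmarks above (MMLU, ARC, HellaSwag, OpenBookQA) are multiple-choice tasks on which base models are typically scored by comparing the log-likelihood of each answer option. The sketch below illustrates that scoring scheme in plain `transformers`; it is a simplified illustration with a made-up example item, and the prompt format, normalization, and harness used for the reported numbers may differ.

```python
# Simplified log-likelihood scoring of a multiple-choice item (illustrative).
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "Boldt/Boldt-DC-350M"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

question = "Frage: Welche Stadt ist die Hauptstadt von Deutschland?\nAntwort:"
choices = [" Berlin", " München", " Hamburg", " Köln"]

def answer_logprob(prompt: str, answer: str) -> float:
    """Sum of log-probabilities of the answer tokens given the prompt."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    full_ids = tokenizer(prompt + answer, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    # Log-probability of each token given its preceding context
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
    targets = full_ids[:, 1:]
    token_log_probs = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    # Sum only over the answer continuation, not the prompt
    answer_len = full_ids.shape[1] - prompt_ids.shape[1]
    return token_log_probs[0, -answer_len:].sum().item()

scores = {c: answer_logprob(question, c) for c in choices}
print(max(scores, key=scores.get))
```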
Safety & Ethics
We have not conducted systematic model evaluations of toxicity, demographic biases, or harmful stereotypes. Quality filtering may reduce some risks relative to unfiltered web data, but cannot guarantee their absence, and repeated exposure during multi-epoch training could amplify rather than mitigate encoded biases. Users should exercise caution in sensitive use-cases without further evaluation.
Citation
```bibtex
@misc{boldt2026,
      title={Repetition over Diversity: High-Signal Data Filtering for Sample-Efficient German Language Modeling},
      author={Ansar Aynetdinov and Patrick Haller and Alan Akbik},
      year={2026},
      eprint={2604.28075},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2604.28075},
}
```