---
license: apache-2.0
language:
- de
pipeline_tag: text-generation
library_name: transformers
tags:
- instruction-tuned
- german
base_model:
- Boldt/Boldt-1B
---

# Boldt-1B-IT-Preview

<img src="logo.png" width="500">

**Boldt-1B-IT-Preview** is a preview of an instruction-tuned German language model, fine-tuned on top of [Boldt-1B](https://huggingface.co/Boldt/Boldt-1B). It is part of the **Boldt** series of German Small Language Models (SLMs) trained from scratch at Humboldt-Universität zu Berlin. The series comprises:

- [Boldt-DC-350M](https://huggingface.co/Boldt/Boldt-DC-350M)
- [Boldt-DC-1B](https://huggingface.co/Boldt/Boldt-DC-1B)
- [Boldt-1B](https://huggingface.co/Boldt/Boldt-1B)
- **Boldt-1B-IT-Preview** *(this model)*

> **Preview status.** This is an early release intended to demonstrate instruction-following capabilities emerging from our quality-focused pre-training recipe. It has not undergone systematic safety evaluation and should not be used in production settings without further assessment.

## Training data

Boldt-1B-IT-Preview was fine-tuned on a curated mixture of 95k German instruction-output pairs from five sources:

- **Aya:** the German subset of the [Aya dataset](https://huggingface.co/datasets/CohereForAI/aya_dataset), consisting of approximately 200 human-authored instruction-output pairs.
- **SmolTalk2 (DE, improved):** an improved German subset of the [SmolTalk2](https://huggingface.co/datasets/HuggingFaceTB/smoltalk2) dataset. We adjusted 52k prompts for more naturally flowing German and regenerated the outputs with [Qwen-3.6-27B](https://huggingface.co/Qwen/Qwen3.6-27B) to improve their quality.
- **r/FragReddit:** 7k prompts sourced from the [r/FragReddit](https://www.reddit.com/r/FragReddit/) subreddit, with outputs generated by [Qwen-3.6-27B](https://huggingface.co/Qwen/Qwen3.6-27B).
- **Synthetic Reddit:** 19k synthetic QA pairs derived from a dump of r/FragReddit posts. We used [Qwen-3.6-27B](https://huggingface.co/Qwen/Qwen3.6-27B) to filter for useful posts, rephrase questions for clarity, and generate helpful, educational answers.
- **NER instructions:** 17k NER tasks derived from two German NER datasets; a sketch of this conversion follows the list.
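
To make the NER conversion concrete, here is a minimal, hypothetical sketch of recasting BIO-tagged sentences as German instruction-output pairs. The prompt template, label names, and example sentence are illustrative assumptions, not the exact recipe behind Boldt-1B-IT-Preview.

```python
# Hypothetical sketch: turn token-level BIO annotations into a German
# instruction-output pair. Prompt wording and labels are illustrative.

def ner_to_instruction(tokens, labels):
    """Convert one BIO-tagged sentence into an instruction-output pair."""
    text = " ".join(tokens)
    # Collect entity spans from BIO tags (e.g. B-PER / I-PER / O).
    entities, current = [], None
    for token, label in zip(tokens, labels):
        if label.startswith("B-"):
            if current:
                entities.append(current)
            current = [label[2:], token]
        elif label.startswith("I-") and current:
            current[1] += " " + token
        else:
            if current:
                entities.append(current)
            current = None
    if current:
        entities.append(current)

    instruction = (
        "Extrahiere alle benannten Entitäten aus dem folgenden Satz "
        f"und gib ihren Typ an:\n\n{text}"
    )
    output = "\n".join(f"- {name} ({etype})" for etype, name in entities)
    return {"instruction": instruction, "output": output or "Keine Entitäten gefunden."}

pair = ner_to_instruction(
    ["Angela", "Merkel", "besuchte", "Berlin", "."],
    ["B-PER", "I-PER", "O", "B-LOC", "O"],
)
print(pair["output"])  # - Angela Merkel (PER)  /  - Berlin (LOC)
```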

The mixture is designed to combine broad topical coverage with natural German expression, complementing the information-dense pre-training corpus underlying the base model.
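
As a rough illustration of how such a mixture can be assembled, the following sketch uses the `datasets` library. Only `CohereForAI/aya_dataset` is a real public repository here; the JSONL paths stand in for the curated subsets above, and the Aya column names (`inputs`, `targets`, `language`) are assumptions to verify against the dataset card.

```python
# Hedged sketch: assembling an instruction-tuning mixture. The JSONL
# files are placeholders for the curated subsets described above, each
# assumed to already use a shared {"instruction", "output"} schema.
from datasets import load_dataset, concatenate_datasets

aya = load_dataset("CohereForAI/aya_dataset", split="train")
aya_de = aya.filter(lambda ex: ex["language"] == "German")  # assumed tag
aya_de = aya_de.map(
    lambda ex: {"instruction": ex["inputs"], "output": ex["targets"]},
    remove_columns=aya_de.column_names,  # keep only the shared schema
)

local_parts = [
    load_dataset("json", data_files=path, split="train")
    for path in [
        "smoltalk2_de_improved.jsonl",   # placeholder path
        "fragreddit_prompts.jsonl",      # placeholder path
        "synthetic_reddit_qa.jsonl",     # placeholder path
        "ner_instructions.jsonl",        # placeholder path
    ]
]

# Concatenate all sources and shuffle into one fine-tuning mixture.
mixture = concatenate_datasets([aya_de] + local_parts).shuffle(seed=42)
```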

## Usage

Boldt-1B-IT-Preview is designed for single-turn German-language instruction-following tasks. It was not fine-tuned for multi-turn conversations, and performance in multi-turn settings is not guaranteed. The model uses a standard chat template and can be run as follows:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "Boldt/Boldt-1B-IT-Preview"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

messages = [
    {"role": "user", "content": "Erkläre mir kurz, wie Quantencomputer funktionieren."}
]

# Render the single-turn conversation with the model's chat template.
input_ids = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
)

outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```
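
For quick experiments, the high-level `pipeline` API is an alternative; recent `transformers` releases apply the chat template automatically when the input is a list of messages. The prompt and sampling settings below are suggestions mirroring the example above, not tuned defaults.

```python
from transformers import pipeline

# High-level alternative: the text-generation pipeline applies the chat
# template itself when given chat-style messages.
generator = pipeline("text-generation", model="Boldt/Boldt-1B-IT-Preview")

messages = [
    {"role": "user", "content": "Nenne drei Sehenswürdigkeiten in Berlin."}
]

result = generator(
    messages,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
# The pipeline returns the whole conversation; the last message is the reply.
print(result[0]["generated_text"][-1]["content"])
```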

## Limitations

- **Language:** The model is optimized for German. Other languages are not supported.
- **Preview status:** This model is released as a research preview. It may produce factually incorrect or inconsistent outputs, and it is not optimized for multi-turn dialogue.
- **Safety:** We have not conducted systematic evaluations for toxic content, demographic biases, or harmful stereotypes. Quality filtering during pre-training may reduce some risks relative to unfiltered corpora but cannot eliminate them, and repeated multi-epoch exposure may amplify encoded biases. Users should exercise caution in sensitive applications.

## Citation

```bibtex
@misc{boldt,
      title={Repetition over Diversity: High-Signal Data Filtering for Sample-Efficient German Language Modeling},
      author={Ansar Aynetdinov and Patrick Haller and Alan Akbik},
      year={2026},
      eprint={2604.28075},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2604.28075},
}
```