---
license: apache-2.0
language:
- de
pipeline_tag: text-generation
library_name: transformers
tags:
- instruction-tuned
- german
base_model:
- Boldt/Boldt-1B
---

# Boldt-1B-IT-Preview

<img src="logo.png" width="500">

**Boldt-1B-IT-Preview** is a preview of an instruction-tuned German language model, fine-tuned on top of [Boldt-1B](https://huggingface.co/Boldt/Boldt-1B). It is part of the **Boldt** series of German Small Language Models (SLMs) trained from scratch at Humboldt-Universität zu Berlin.

- [Boldt-DC-350M](https://huggingface.co/Boldt/Boldt-DC-350M)
- [Boldt-DC-1B](https://huggingface.co/Boldt/Boldt-DC-1B)
- [Boldt-1B](https://huggingface.co/Boldt/Boldt-1B)
- **Boldt-1B-IT-Preview** *(this model)*

> **Preview status.** This is an early release intended to demonstrate instruction-following capabilities emerging from our quality-focused pre-training recipe. It has not undergone systematic safety evaluation and should not be used in production settings without further assessment.

## Training data

Boldt-1B-IT-Preview was fine-tuned on a curated mixture of 95k German instruction-output pairs from five sources:

- **Aya:** German subset of the [Aya dataset](https://huggingface.co/datasets/CohereForAI/aya_dataset), consisting of approximately 200 human-authored instruction-output pairs.
- **SmolTalk2 (DE, improved):** an improved German subset of the [SmolTalk2](https://huggingface.co/datasets/HuggingFaceTB/smoltalk2) dataset. We adjusted 52k prompts for more naturally flowing German and regenerated outputs using [Qwen-3.6-27B](https://huggingface.co/Qwen/Qwen3.6-27B) to improve their quality.
- **r/FragReddit:** 7k prompts sourced from the [r/FragReddit](https://www.reddit.com/r/FragReddit/) subreddit. Outputs were generated using [Qwen-3.6-27B](https://huggingface.co/Qwen/Qwen3.6-27B).
- **Synthetic Reddit:** 19k synthetic QA pairs derived from a dump of r/FragReddit posts. We used [Qwen-3.6-27B](https://huggingface.co/Qwen/Qwen3.6-27B) to filter useful posts, rephrase questions for clarity, and generate helpful and educational answers.
- **NER instructions:** 17k NER tasks derived from...

The mixture is designed to combine broad topical coverage with naturalness of German expression, complementing the information-dense pre-training corpus underlying the base model.
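
The released checkpoint does not include the fine-tuning code, but as a rough illustration of how instruction-output pairs like the above can be prepared, the sketch below renders one pair with the model's chat template ahead of supervised fine-tuning. The JSONL layout and the field names `instruction` and `output` are assumptions made for illustration, not the actual Boldt training pipeline.

```python
import json
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Boldt/Boldt-1B-IT-Preview")

def render_pair(instruction: str, output: str) -> str:
    """Render one instruction-output pair with the model's chat template.

    Sketch only: the surrounding file layout and field names are hypothetical.
    """
    messages = [
        {"role": "user", "content": instruction},
        {"role": "assistant", "content": output},
    ]
    return tokenizer.apply_chat_template(messages, tokenize=False)

# Hypothetical JSONL file with one {"instruction": ..., "output": ...} object per line
with open("sft_pairs.jsonl", encoding="utf-8") as f:
    texts = [render_pair(ex["instruction"], ex["output"]) for ex in map(json.loads, f)]
```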

## Usage

Boldt-1B-IT-Preview is designed for single-turn German-language instruction-following tasks. It was not fine-tuned for multi-turn conversations, and performance in multi-turn settings is not guaranteed. It uses a standard chat template and can be used as follows:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "Boldt/Boldt-1B-IT-Preview"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

messages = [
    {"role": "user", "content": "Erkläre mir kurz, wie Quantencomputer funktionieren."}
]

# Build the prompt with the model's chat template
input_ids = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt"
)

outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```
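
As a convenience alternative, the same single-turn call can go through the `transformers` text-generation `pipeline`, which applies the chat template automatically when given a list of messages. This is a minimal sketch assuming a recent `transformers` release with chat-style pipeline input, not part of the official example above.

```python
from transformers import pipeline

# Sketch only: assumes a transformers version whose text-generation
# pipeline accepts chat-style message lists and applies the chat template.
generator = pipeline("text-generation", model="Boldt/Boldt-1B-IT-Preview")

messages = [
    {"role": "user", "content": "Erkläre mir kurz, wie Quantencomputer funktionieren."}
]

result = generator(messages, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.9)
print(result[0]["generated_text"][-1]["content"])  # the assistant's reply
```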

## Limitations

- **Language:** This model is optimized for German. Other languages are not supported.
- **Preview status:** This model is released as a research preview. It may produce factually incorrect or inconsistent outputs and is not optimized for multi-turn dialogue.
- **Safety:** We have not conducted systematic evaluations for toxic content, demographic biases, or harmful stereotypes. Quality filtering during pre-training may reduce some risks relative to unfiltered corpora but cannot eliminate them. Repeated multi-epoch exposure may amplify encoded biases. Users should exercise caution in sensitive applications.

## Citation

```bibtex
@misc{boldt,
      title={Repetition over Diversity: High-Signal Data Filtering for Sample-Efficient German Language Modeling},
      author={Ansar Aynetdinov and Patrick Haller and Alan Akbik},
      year={2026},
      eprint={2604.28075},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2604.28075},
}
```