---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- biomedical
- clinical
- encoder
- modernbert
- fill-mask
datasets:
- almanach/Biomed-Enriched
base_model:
- answerdotai/ModernBERT-base
pipeline_tag: fill-mask
widget:
- text: "The patient was diagnosed with [MASK] and started on antibiotics."
- text: "Mitochondria is the powerhouse of the [MASK]."
model-index:
- name: ModernBERT-bio-base
  results:
  - task:
      type: token-classification
      name: NER
    dataset:
      name: AnatEM
      type: bigbio/anatem
    metrics:
    - type: f1
      value: 81.0
  - task:
      type: token-classification
      name: NER
    dataset:
      name: BC5CDR
      type: bigbio/bc5cdr
    metrics:
    - type: f1
      value: 89.1
  - task:
      type: token-classification
      name: NER
    dataset:
      name: JNLPBA
      type: bigbio/jnlpba
    metrics:
    - type: f1
      value: 74.5
  - task:
      type: token-classification
      name: NER
    dataset:
      name: NCBI Disease
      type: bigbio/ncbi_disease
    metrics:
    - type: f1
      value: 80.1
  - task:
      type: text-classification
      name: Text Classification
    dataset:
      name: GAD
      type: bigbio/gad
    metrics:
    - type: f1
      value: 78.8
  - task:
      type: text-classification
      name: Text Classification
    dataset:
      name: HoC
      type: bigbio/hallmarks_of_cancer
    metrics:
    - type: f1
      value: 70.0
  - task:
      type: text-classification
      name: Text Classification
    dataset:
      name: ChemProt
      type: bigbio/chemprot
    metrics:
    - type: f1
      value: 90.1
  - task:
      type: text-classification
      name: Text Classification
    dataset:
      name: DEID
      type: n2c2/2006-deid
    metrics:
    - type: f1
      value: 83.2
---

# ModernBERT-bio-base

*ModernBERT-bio is available in two sizes: [base](https://huggingface.co/almanach/ModernBERT-bio-base) (149M parameters) and [large](https://huggingface.co/almanach/ModernBERT-bio-large) (396M parameters).*

## Table of Contents

1. [Model Summary](#model-summary)
2. [Usage](#usage)
3. [Training](#training)
4. [Evaluation](#evaluation)
5. [License](#license)
6. [Citation](#citation)

## Model Summary

ModernBERT-bio is an English biomedical encoder built by continued pretraining of [ModernBERT](https://huggingface.co/answerdotai/ModernBERT-base) using a **CLM detour** recipe. Instead of standard MLM continued pretraining, we temporarily switch to causal language modeling (CLM) before returning to MLM. This produces lasting representational changes in early transformer layers that improve downstream biomedical performance.

ModernBERT-bio achieves **78.0% average F1** across 11 English biomedical benchmarks (5 Clinical + 6 BigBIO), the highest balanced score across both task families.

| | |
|---|---|
| **Architecture** | ModernBERT (FlashAttention, RoPE, alternating local/global attention, unpadding) |
| **Parameters** | 149M |
| **Layers** | 22 |
| **Hidden size** | 768 |
| **Attention heads** | 12 |
| **Context length** | 8,192 tokens |
| **Language** | English |
| **Base model** | [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base) |
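
These specifications can be read straight from the published configuration (the attribute names below follow the Hugging Face `ModernBertConfig`):

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("almanach/ModernBERT-bio-base")
print(config.num_hidden_layers)        # 22
print(config.hidden_size)              # 768
print(config.num_attention_heads)      # 12
print(config.max_position_embeddings)  # 8192
```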

## Usage

You can use this model with the `transformers` library (v4.48.0+):

```bash
pip install -U "transformers>=4.48.0"
```

If your GPU supports it, install Flash Attention for best efficiency:

```bash
pip install flash-attn
```

### Masked Language Modeling

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_id = "almanach/ModernBERT-bio-base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

text = "The patient was diagnosed with [MASK] and started on antibiotics."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Locate the [MASK] position and decode the highest-scoring prediction
masked_index = inputs["input_ids"][0].tolist().index(tokenizer.mask_token_id)
predicted_token_id = outputs.logits[0, masked_index].argmax(dim=-1)
predicted_token = tokenizer.decode(predicted_token_id)
print("Predicted token:", predicted_token)
```
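
For quick experiments, the same prediction is available through the `fill-mask` pipeline:

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="almanach/ModernBERT-bio-base")
for pred in fill_mask("The patient was diagnosed with [MASK] and started on antibiotics."):
    print(f"{pred['token_str']}\t{pred['score']:.3f}")
```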

### Fine-tuning (Classification, NER, etc.)

```python
import torch
from transformers import AutoTokenizer, AutoModel

model_id = "almanach/ModernBERT-bio-base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

text = "The patient presented with acute myocardial infarction and was treated with percutaneous coronary intervention."
inputs = tokenizer(text, return_tensors="pt", max_length=8192, truncation=True)
with torch.no_grad():
    outputs = model(**inputs)
# outputs.last_hidden_state: [batch, seq_len, 768]
```

**Note:** ModernBERT-bio does not use token type IDs. You can omit the `token_type_ids` parameter.
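
As a concrete starting point for NER, the sketch below attaches a token-classification head; the three-label disease scheme is a hypothetical example, not the label set used in our evaluations:

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification

model_id = "almanach/ModernBERT-bio-base"
labels = ["O", "B-Disease", "I-Disease"]  # hypothetical BIO label set, for illustration only

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(
    model_id,
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id={label: i for i, label in enumerate(labels)},
)
# From here, train with the usual `Trainer`/`datasets` token-classification workflow.
```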

## Training

### Data

| Corpus | Proportion | Description |
|--------|------------|-------------|
| PubMed | 60% | Biomedical abstracts |
| Med-Inst | 20% | Medical instructions |
| MIMIC | 20% | Clinical notes |
| **Total** | **100%** | 55B training tokens (50B CLM + 5B MLM) |

### Methodology

ModernBERT-bio-base is trained in two phases, initialized from [ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base):

* **Phase 1 (CLM detour, 50B tokens):** The bidirectional attention mask is replaced with a causal mask, and the model is trained with next-token prediction. This dense training signal (100% of positions) deeply modifies early transformer layers for domain adaptation.
* **Phase 2 (MLM decay, 5B tokens):** Bidirectional attention is restored, and the model is trained with masked language modeling at a 15% masking rate. The learning rate decays from its peak value to 10% of peak following a 1-sqrt schedule.

Both phases use the same data mix (55B tokens total). Training used AdamW (lr=2e-4, beta1=0.9, beta2=0.98), bf16 mixed precision, global batch size of 384 sequences (~3.1M tokens), on 4× H100 80GB GPUs with [Composer](https://github.com/mosaicml/composer).
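
For illustration, the Phase 2 decay can be written as follows. This is a sketch of the 1-sqrt schedule described above; the warmup handling and exact step accounting of the original run are not reproduced:

```python
import math

def one_sqrt_lr(step: int, total_steps: int,
                peak_lr: float = 2e-4, final_frac: float = 0.1) -> float:
    """1-sqrt decay from peak_lr down to final_frac * peak_lr."""
    frac = min(step / total_steps, 1.0)
    return peak_lr * (1.0 - (1.0 - final_frac) * math.sqrt(frac))
```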

### Why a CLM Detour?

CLM supervises every token position, producing dense gradient updates that deeply modify early transformer layers (layers 0-7). These changes persist through the MLM decay phase, even when the decay matches the CLM phase in length. We provide causal evidence through freeze interventions showing that early-layer modification is both necessary and sufficient for the CLM benefit (double dissociation). See our paper for the full mechanistic analysis.
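
As an illustration of this kind of intervention, the sketch below freezes the embeddings and the first eight encoder layers (in the Hugging Face ModernBERT implementation, the encoder layers live under `model.model.layers`); it mirrors the spirit of the freeze experiments, not their exact protocol:

```python
from transformers import AutoModelForMaskedLM

model = AutoModelForMaskedLM.from_pretrained("almanach/ModernBERT-bio-base")

# Freeze embeddings and encoder layers 0-7; only the later layers
# and the MLM head continue to receive gradient updates.
for param in model.model.embeddings.parameters():
    param.requires_grad = False
for layer in model.model.layers[:8]:
    for param in layer.parameters():
        param.requires_grad = False
```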

## Evaluation

English biomedical benchmark results (11 tasks, 5 seeds per model):

### Clinical Tasks

| Model | Ctx | ChemProt | Phenotype | COS | Social Hist. | DEID | **Avg** |
|-------|-----|----------|-----------|-----|-------------|------|---------|
| **ModernBERT-bio-base** | 8192 | 90.1 | **61.9** | **95.2** | 54.2 | **83.2** | **76.9** |
| BioClinical-ModernBERT-base | 8192 | 90.0 | 60.7 | 94.8 | **56.0** | 81.8 | 76.7 |
| PubMedBERT | 512 | **90.2** | 52.0 | 95.0 | 48.7 | 80.4 | 73.3 |
| ModernBERT-base | 8192 | 89.5 | 48.4 | 94.0 | 53.1 | 78.3 | 72.7 |

### BigBIO Tasks

| Model | Ctx | AnatEM | BC5CDR | JNLPBA | NCBI | GAD | HoC | **Avg** |
|-------|-----|--------|--------|--------|------|-----|-----|---------|
| **ModernBERT-bio-base** | 8192 | 81.0 | 89.1 | 74.5 | 80.1 | 78.8 | 70.0 | 78.9 |
| BioClinical-ModernBERT-base | 8192 | 79.2 | 88.7 | 74.8 | 78.7 | 75.8 | 67.0 | 77.4 |
| PubMedBERT | 512 | **83.3** | **89.7** | **74.9** | **82.1** | **79.3** | **71.0** | **80.1** |
| ModernBERT-base | 8192 | 77.2 | 87.9 | 74.3 | 77.7 | 76.8 | 66.6 | 76.8 |

### Overall

| Model | Clinical | BigBIO | **Overall** |
|-------|----------|--------|-------------|
| **ModernBERT-bio-base** | **76.9** | 78.9 | **78.0** |
| BioClinical-ModernBERT-base | 76.7 | 77.4 | 77.0 |
| PubMedBERT | 73.3 | **80.1** | 77.0 |
| ModernBERT-base | 72.7 | 76.8 | 74.9 |

ModernBERT-bio-base achieves the highest balanced score (78.0%) across both Clinical and BigBIO task families. PubMedBERT scores higher on short-context BigBIO NER tasks but falls behind on long-context tasks (Phenotype: 52.0% vs 61.9%).

## Intended Use

This model is designed for English biomedical and clinical NLP tasks:
- Named entity recognition (diseases, chemicals, genes, anatomy)
- Document classification (clinical phenotyping, relation extraction)
- De-identification of clinical notes
- Information extraction from PubMed abstracts and clinical reports

The 8,192-token context is important for long clinical documents (discharge summaries, pathology reports) that are truncated by 512-token models.

## Related Models

| Model | Language | Parameters |
|-------|----------|------------|
| [ModernBERT-bio-base](https://huggingface.co/almanach/ModernBERT-bio-base) | English | 149M |
| [ModernBERT-bio-large](https://huggingface.co/almanach/ModernBERT-bio-large) | English | 396M |
| [ModernCamemBERT-bio-base](https://huggingface.co/almanach/ModernCamemBERT-bio-base) | French | 150M |
| [ModernCamemBERT-bio-large](https://huggingface.co/almanach/ModernCamemBERT-bio-large) | French | 350M |

## Limitations

- Trained on English biomedical text; not suitable for other languages without further adaptation. See [ModernCamemBERT-bio](https://huggingface.co/almanach/ModernCamemBERT-bio-base) for French.
- Encoder model: produces contextualized representations, does not generate text.
- Clinical text may contain sensitive patterns; users are responsible for compliance with applicable regulations (HIPAA, etc.).
- The improvement from the CLM detour over plain MLM in English (+0.3pp at Base scale) is smaller than in French (+2.8pp) and not statistically significant at Base scale (binomial p=0.27). The practical benefit is clearest at Large scale (+0.8pp) and on long-context tasks.

## License

Apache 2.0

## Citation

```bibtex
@misc{touchent2026causallanguagemodelingdetour,
      title={A Causal Language Modeling Detour Improves Encoder Continued Pretraining}, 
      author={Rian Touchent and Eric de la Clergerie},
      year={2026},
      eprint={2605.12438},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2605.12438}, 
}
```

## Acknowledgments

This work was performed using HPC resources from GENCI-IDRIS (Grant 2024-AD011014393R2).