---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- biomedical
- clinical
- encoder
- modernbert
- fill-mask
datasets:
- almanach/Biomed-Enriched
base_model:
- answerdotai/ModernBERT-large
pipeline_tag: fill-mask
widget:
- text: "The patient was diagnosed with [MASK] and started on antibiotics."
- text: "Mitochondria is the powerhouse of the [MASK]."
model-index:
- name: ModernBERT-bio-large
  results:
  - task:
      type: token-classification
      name: NER
    dataset:
      name: AnatEM
      type: bigbio/anatem
    metrics:
    - type: f1
      value: 83.2
  - task:
      type: token-classification
      name: NER
    dataset:
      name: BC5CDR
      type: bigbio/bc5cdr
    metrics:
    - type: f1
      value: 89.8
  - task:
      type: token-classification
      name: NER
    dataset:
      name: JNLPBA
      type: bigbio/jnlpba
    metrics:
    - type: f1
      value: 75.3
  - task:
      type: token-classification
      name: NER
    dataset:
      name: NCBI Disease
      type: bigbio/ncbi_disease
    metrics:
    - type: f1
      value: 81.7
  - task:
      type: text-classification
      name: Text Classification
    dataset:
      name: GAD
      type: bigbio/gad
    metrics:
    - type: f1
      value: 79.7
  - task:
      type: text-classification
      name: Text Classification
    dataset:
      name: HoC
      type: bigbio/hallmarks_of_cancer
    metrics:
    - type: f1
      value: 69.3
  - task:
      type: text-classification
      name: Text Classification
    dataset:
      name: ChemProt
      type: bigbio/chemprot
    metrics:
    - type: f1
      value: 90.4
  - task:
      type: text-classification
      name: Text Classification
    dataset:
      name: DEID
      type: n2c2/2006-deid
    metrics:
    - type: f1
      value: 84.2
---

# ModernBERT-bio-large

*ModernBERT-bio is available in two sizes: [base](https://huggingface.co/almanach/ModernBERT-bio-base) (149M parameters) and [large](https://huggingface.co/almanach/ModernBERT-bio-large) (396M parameters).*

## Table of Contents
1. [Model Summary](#model-summary)
2. [Usage](#usage)
3. [Training](#training)
4. [Evaluation](#evaluation)
5. [Intended Use](#intended-use)
6. [Related Models](#related-models)
7. [Limitations](#limitations)
8. [License](#license)
9. [Citation](#citation)
10. [Acknowledgments](#acknowledgments)

## Model Summary

ModernBERT-bio-large is the Large variant of our English biomedical encoder, built by continued pretraining of [ModernBERT-large](https://huggingface.co/answerdotai/ModernBERT-large) with a **CLM detour** recipe: instead of standard MLM continued pretraining, we temporarily switch to causal language modeling (CLM) before returning to MLM. ModernBERT-bio-large achieves **78.7% average F1** across 11 English biomedical benchmarks, the highest overall score in our comparison, outperforming both the MLM continued-pretraining baseline (+0.8 pp, winning 7 of 11 tasks) and all other evaluated models.

| | |
|---|---|
| **Architecture** | ModernBERT (FlashAttention, RoPE, alternating local/global attention, unpadding) |
| **Parameters** | 396M |
| **Layers** | 28 |
| **Hidden size** | 1024 |
| **Attention heads** | 16 |
| **Context length** | 8,192 tokens |
| **Language** | English |
| **Base model** | [answerdotai/ModernBERT-large](https://huggingface.co/answerdotai/ModernBERT-large) |

## Usage

You can use this model with the `transformers` library (v4.48.0+):

```bash
pip install -U "transformers>=4.48.0"
```

If your GPU supports it, install Flash Attention for best efficiency:

```bash
pip install flash-attn
```

### Masked Language Modeling

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_id = "almanach/ModernBERT-bio-large"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

text = "The patient was diagnosed with [MASK] and started on antibiotics."
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)

# Locate the [MASK] position and decode the most likely token
masked_index = inputs["input_ids"][0].tolist().index(tokenizer.mask_token_id)
predicted_token_id = outputs.logits[0, masked_index].argmax(dim=-1)
predicted_token = tokenizer.decode(predicted_token_id)
print("Predicted token:", predicted_token)
```
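For quick experimentation, the `fill-mask` pipeline wraps the same steps in one call. A minimal sketch (the `top_k` value is arbitrary):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="almanach/ModernBERT-bio-large")

predictions = fill_mask(
    "The patient was diagnosed with [MASK] and started on antibiotics.",
    top_k=5,
)
for pred in predictions:
    print(f"{pred['token_str']!r}: {pred['score']:.3f}")
```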
### Fine-tuning (Classification, NER, etc.)

```python
from transformers import AutoTokenizer, AutoModel

model_id = "almanach/ModernBERT-bio-large"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

text = "The patient presented with acute myocardial infarction and was treated with percutaneous coronary intervention."
inputs = tokenizer(text, return_tensors="pt", max_length=8192, truncation=True)
outputs = model(**inputs)
# outputs.last_hidden_state: [batch, seq_len, 1024]
```

**Note:** ModernBERT-bio does not use token type IDs, so you can omit the `token_type_ids` argument.

## Training

### Data

| Corpus | Proportion | Description |
|--------|------------|-------------|
| PubMed | 60% | Biomedical abstracts |
| Med-Inst | 20% | Medical instructions |
| MIMIC | 20% | Clinical notes |
| **Total** | **50B tokens** | |

### Methodology

ModernBERT-bio-large is trained in two phases, initialized from [ModernBERT-large](https://huggingface.co/answerdotai/ModernBERT-large):

* **Phase 1 (CLM detour, 50B tokens):** The bidirectional attention mask is replaced with a causal mask, and the model is trained with next-token prediction. This dense training signal (100% of positions supervised) deeply modifies the early transformer layers for domain adaptation.
* **Phase 2 (MLM decay, 5B tokens):** Bidirectional attention is restored, and the model is trained with masked language modeling at a 15% masking rate. The learning rate decays from its peak to 10% of peak following a 1-sqrt schedule (one plausible form is sketched below).

Both phases use the same data mix (55B tokens total). Training used AdamW (lr = 2e-4, beta1 = 0.9, beta2 = 0.98), bf16 mixed precision, and a global batch size of 384 sequences (~3.1M tokens) on 4× H100 80GB GPUs with [Composer](https://github.com/mosaicml/composer).

### Why a CLM Detour?

CLM supervises every token position, producing dense gradient updates that deeply modify the early transformer layers. These changes persist through the MLM decay phase, even when the decay matches the CLM phase in length. The Large model retains 67.2% CKA divergence from its MLM counterpart (vs. 56.5% for Base), showing that the effect scales with model capacity. The CLM benefit also widens at Large scale: +0.8 pp (Large) vs. +0.3 pp (Base). See our paper for the full mechanistic analysis.
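To make the CKA divergence figures above concrete, here is a minimal linear-CKA sketch in PyTorch. This is an illustration, not our evaluation code, and it assumes "CKA divergence" is read as 1 - CKA computed over matched hidden states from the two checkpoints:

```python
import torch

def linear_cka(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Linear CKA between two representation matrices of shape [n_tokens, hidden]."""
    x = x - x.mean(dim=0, keepdim=True)  # center each feature dimension
    y = y - y.mean(dim=0, keepdim=True)
    # ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    num = (y.T @ x).norm(p="fro") ** 2
    den = (x.T @ x).norm(p="fro") * (y.T @ y).norm(p="fro")
    return num / den

# Hypothetical layer activations for the same batch of tokens, one tensor
# per checkpoint (e.g. ModernBERT-bio-large vs. the MLM baseline)
h_detour = torch.randn(512, 1024)
h_mlm = torch.randn(512, 1024)
print(f"CKA divergence: {1.0 - linear_cka(h_detour, h_mlm):.3f}")
```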
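The Phase 2 bullet in the Methodology mentions a 1-sqrt learning-rate decay from peak to 10% of peak. As a rough reference, the sketch below shows one plausible reading of that schedule; the functional form and step counts are assumptions, not our training config:

```python
import math

def one_sqrt_lr(step: int, total_steps: int,
                peak_lr: float = 2e-4, final_ratio: float = 0.1) -> float:
    """1-sqrt decay: multiplier is 1.0 at step 0, final_ratio at the last step."""
    frac = min(step / total_steps, 1.0)
    return peak_lr * (1.0 - (1.0 - final_ratio) * math.sqrt(frac))

# Sample the schedule at a few points of a hypothetical decay phase
for step in (0, 1000, 2000, 4000):
    print(step, f"{one_sqrt_lr(step, total_steps=4000):.2e}")
```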
## Evaluation

English biomedical benchmark results (F1, 11 tasks, 5 seeds per model). Bold marks the best score per column.

### Clinical Tasks

| Model | Ctx | ChemProt | Phenotype | COS | Social Hist. | DEID | **Avg** |
|-------|-----|----------|-----------|-----|--------------|------|---------|
| **ModernBERT-bio-large** | 8192 | 90.4 | **61.3** | 94.7 | **56.5** | **84.2** | **77.4** |
| MLM baseline Large (ours) | 8192 | **90.5** | 61.0 | 94.9 | 55.0 | 82.3 | 76.7 |
| BioClinical-ModernBERT-base | 8192 | 90.0 | 60.7 | 94.8 | 56.0 | 81.8 | 76.7 |
| PubMedBERT | 512 | 90.2 | 52.0 | **95.0** | 48.7 | 80.4 | 73.3 |

### BigBIO Tasks

| Model | Ctx | AnatEM | BC5CDR | JNLPBA | NCBI | GAD | HoC | **Avg** |
|-------|-----|--------|--------|--------|------|-----|-----|---------|
| **ModernBERT-bio-large** | 8192 | 83.2 | **89.8** | 75.3 | 81.7 | **79.7** | 69.3 | 79.8 |
| MLM baseline Large (ours) | 8192 | 82.0 | 89.4 | **75.5** | 81.8 | 76.4 | 67.8 | 78.8 |
| BioClinical-ModernBERT-base | 8192 | 79.2 | 88.7 | 74.8 | 78.7 | 75.8 | 67.0 | 77.4 |
| PubMedBERT | 512 | **83.3** | 89.7 | 74.9 | **82.1** | 79.3 | **71.0** | **80.1** |

### Overall

| Model | Clinical | BigBIO | **Overall** |
|-------|----------|--------|-------------|
| **ModernBERT-bio-large** | **77.4** | 79.8 | **78.7** |
| MLM baseline Large (ours) | 76.7 | 78.8 | 77.9 |
| ModernBERT-bio-base | 76.9 | 78.9 | 78.0 |
| BioClinical-ModernBERT-base | 76.7 | 77.4 | 77.0 |
| PubMedBERT | 73.3 | **80.1** | 77.0 |

ModernBERT-bio-large achieves the highest overall score (78.7%), with the CLM benefit widening at Large scale (+0.8 pp vs. +0.3 pp for Base). The model sets a new state of the art on DEID (84.2%) and is competitive with the best baselines on the remaining tasks.

## Intended Use

This model is designed for English biomedical and clinical NLP tasks:

- Named entity recognition (diseases, chemicals, genes, anatomy)
- Document classification (clinical phenotyping, relation extraction)
- De-identification of clinical notes
- Information extraction from PubMed abstracts and clinical reports

The 8,192-token context is important for long clinical documents. The Large size improves over Base, particularly on tasks such as AnatEM, DEID, and GAD, at the cost of higher compute requirements. A fine-tuning starting point for token-level tasks is sketched below.
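For the token-level use cases above (NER, de-identification), the standard `transformers` token-classification head is the natural starting point. A minimal sketch: the label set and example sentence are placeholders, and the classification head is randomly initialized until you fine-tune it on your own data:

```python
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

model_id = "almanach/ModernBERT-bio-large"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Placeholder BIO tag set: replace with the labels of your dataset
labels = ["O", "B-Disease", "I-Disease", "B-Chemical", "I-Chemical"]
model = AutoModelForTokenClassification.from_pretrained(
    model_id,
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id={label: i for i, label in enumerate(labels)},
)

text = "The patient was started on metformin for type 2 diabetes."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # [1, seq_len, num_labels]

# Predictions are meaningless until the head is trained; this only
# demonstrates the input/output shapes for a fine-tuning setup
for token, pred in zip(
    tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]),
    logits.argmax(dim=-1)[0],
):
    print(token, labels[int(pred)])
```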
## Related Models

| Model | Language | Parameters |
|-------|----------|------------|
| [ModernBERT-bio-base](https://huggingface.co/almanach/ModernBERT-bio-base) | English | 149M |
| [ModernBERT-bio-large](https://huggingface.co/almanach/ModernBERT-bio-large) | English | 396M |
| [ModernCamemBERT-bio-base](https://huggingface.co/almanach/ModernCamemBERT-bio-base) | French | 150M |
| [ModernCamemBERT-bio-large](https://huggingface.co/almanach/ModernCamemBERT-bio-large) | French | 350M |

## Limitations

- Trained on English biomedical text; not suitable for other languages without further adaptation. See [ModernCamemBERT-bio](https://huggingface.co/almanach/ModernCamemBERT-bio-base) for French.
- Encoder-only model: it produces contextualized representations and does not generate text.
- Clinical text may contain sensitive patterns; users are responsible for compliance with applicable regulations (HIPAA, etc.).
- Training data includes MIMIC clinical notes, which are de-identified but derived from real patient records.

## License

Apache 2.0

## Citation

```bibtex
@misc{touchent2026causallanguagemodelingdetour,
  title={A Causal Language Modeling Detour Improves Encoder Continued Pretraining},
  author={Rian Touchent and Eric de la Clergerie},
  year={2026},
  eprint={2605.12438},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2605.12438},
}
```

## Acknowledgments

This work was performed using HPC resources from GENCI-IDRIS (Grant 2024-AD011014393R2).