---
language:
- fr
license: apache-2.0
library_name: transformers
tags:
- biomedical
- clinical
- encoder
- modernbert
- fill-mask
datasets:
- rntc/mc-bio-corpus
base_model:
- almanach/moderncamembert-base
pipeline_tag: fill-mask
widget:
- text: "Les patients atteints de [MASK] présentent un risque accru de complications cardiovasculaires."
- text: "Le traitement par [MASK] a montré une amélioration significative des symptômes."
model-index:
- name: ModernCamemBERT-bio-base
  results:
  - task:
      type: text-classification
      name: Text Classification
    dataset:
      name: FrACCO-30
      type: rntc/fracco
    metrics:
    - type: f1
      value: 74.8
  - task:
      type: text-classification
      name: Text Classification
    dataset:
      name: FrACCO-100
      type: rntc/fracco
    metrics:
    - type: f1
      value: 60.1
  - task:
      type: text-classification
      name: Text Classification
    dataset:
      name: CANTEMIST
      type: cantemist
    metrics:
    - type: f1
      value: 71.0
  - task:
      type: text-classification
      name: Text Classification
    dataset:
      name: DISTEMIST
      type: distemist
    metrics:
    - type: f1
      value: 25.5
  - task:
      type: text-classification
      name: Text Classification
    dataset:
      name: MedDialog
      type: meddialog
    metrics:
    - type: f1
      value: 63.6
  - task:
      type: text-classification
      name: Text Classification
    dataset:
      name: DiaMed
      type: diamed
    metrics:
    - type: f1
      value: 67.4
  - task:
      type: token-classification
      name: NER
    dataset:
      name: EMEA
      type: emea
    metrics:
    - type: f1
      value: 68.6
  - task:
      type: token-classification
      name: NER
    dataset:
      name: Medline
      type: medline
    metrics:
    - type: f1
      value: 61.9
---

# ModernCamemBERT-bio-base

*ModernCamemBERT-bio is available in two sizes: [base](https://huggingface.co/almanach/ModernCamemBERT-bio-base) (150M parameters) and [large](https://huggingface.co/almanach/ModernCamemBERT-bio-large) (350M parameters).*

## Table of Contents
1. [Model Summary](#model-summary)
2. [Usage](#usage)
3. [Training](#training)
4. [Evaluation](#evaluation)
5. [Intended Use](#intended-use)
6. [Related Models](#related-models)
7. [Limitations](#limitations)
8. [License](#license)
9. [Citation](#citation)
10. [Acknowledgments](#acknowledgments)

## Model Summary

ModernCamemBERT-bio is a French biomedical encoder built by continued pretraining of [ModernCamemBERT](https://huggingface.co/almanach/moderncamembert-base) using a **CLM detour** recipe. Instead of standard MLM continued pretraining, we temporarily switch to causal language modeling (CLM) before returning to MLM. This produces lasting representational changes in early transformer layers that improve downstream biomedical performance by +2.8pp on average across 8 French biomedical tasks.

The model uses the ModernBERT architecture with FlashAttention, rotary positional embeddings (RoPE), alternating local/global attention, and unpadding. It supports an **8,192-token context**, which is critical for long clinical documents that exceed the 512-token limit of previous French biomedical models.

| | |
|---|---|
| **Architecture** | ModernBERT |
| **Parameters** | 150M |
| **Layers** | 22 |
| **Hidden size** | 768 |
| **Attention heads** | 12 |
| **Context length** | 8,192 tokens |
| **Language** | French |
| **Base model** | [almanach/moderncamembert-base](https://huggingface.co/almanach/moderncamembert-base) |
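
To try the model quickly, the standard `fill-mask` pipeline works out of the box. A minimal sketch; the example sentence is one of the widget prompts above:

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="almanach/ModernCamemBERT-bio-base")

# Use the tokenizer's own mask token rather than hard-coding its string form.
text = f"Le traitement par {fill.tokenizer.mask_token} a montré une amélioration significative des symptômes."
for pred in fill(text, top_k=3):
    print(pred["token_str"], round(pred["score"], 3))
```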
## Usage

You can use this model with the `transformers` library (v4.48.0+):

```bash
pip install -U "transformers>=4.48.0"
```

If your GPU supports it, install Flash Attention for best efficiency:

```bash
pip install flash-attn
```

### Masked Language Modeling

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_id = "almanach/ModernCamemBERT-bio-base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

# Insert the tokenizer's mask token so the example works regardless of its exact string form.
text = f"Le patient présente une {tokenizer.mask_token} aiguë du myocarde."
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)

# Locate the masked position and decode the most likely token.
masked_index = inputs["input_ids"][0].tolist().index(tokenizer.mask_token_id)
predicted_token_id = outputs.logits[0, masked_index].argmax(dim=-1)
predicted_token = tokenizer.decode(predicted_token_id)
print("Predicted token:", predicted_token)
```

### Fine-tuning (Classification, NER, etc.)

```python
from transformers import AutoTokenizer, AutoModel

model_id = "almanach/ModernCamemBERT-bio-base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

text = "Compte rendu d'hospitalisation du patient admis pour décompensation cardiaque."
inputs = tokenizer(text, return_tensors="pt", max_length=8192, truncation=True)
outputs = model(**inputs)

# outputs.last_hidden_state: [batch, seq_len, 768]
```

**Note:** ModernCamemBERT-bio does not use token type IDs. You can omit the `token_type_ids` parameter.

## Training

### Data

| Corpus | Tokens | Description |
|--------|--------|-------------|
| MC-Bio | 7B | Quality-filtered French biomedical text (scientific articles, drug leaflets, clinical guidelines) |
| MCQA | 2B | Medical question-answer pairs |
| E3C | 400M | Clinical cases from journals and theses |
| EMEA | 600M | Pharmaceutical documents (European Medicines Agency) |
| **Total** | **10B** | |

### Methodology

ModernCamemBERT-bio is trained in two phases, initialized from [ModernCamemBERT](https://huggingface.co/almanach/moderncamembert-base):

* **Phase 1 (CLM detour, 10B tokens):** The bidirectional attention mask is replaced with a causal mask, and the model is trained with next-token prediction. This dense training signal (100% of positions) deeply modifies early transformer layers for domain adaptation.
* **Phase 2 (MLM decay, 1B tokens):** Bidirectional attention is restored, and the model is trained with masked language modeling at 15% masking. The learning rate decays from its peak to 10% of peak following a 1-sqrt schedule.

Both phases use the same data mix (11B tokens total). Training used AdamW (lr=2e-4, beta1=0.9, beta2=0.98), bf16 mixed precision, and a global batch size of 384 sequences (~3.1M tokens), on 4× H100 80GB GPUs with [Composer](https://github.com/mosaicml/composer). Total training time: ~4 hours wall-clock (16 GPU-h, 0.46 kg CO₂eq).

### Why a CLM Detour?

CLM supervises every token position, producing dense gradient updates that deeply modify early transformer layers (layers 0-7). These changes persist through the MLM decay phase, even when the decay matches the CLM phase in length.

We provide causal evidence through freeze interventions: freezing early layers during CLM eliminates the downstream benefit (the model matches the MLM baseline), while freezing mid layers preserves it (a double dissociation). See our paper for the full mechanistic analysis.
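For intuition, the freeze intervention is straightforward to reproduce. Below is a minimal sketch (not the paper's training code) that assumes the Hugging Face ModernBERT implementation, where encoder layers appear as `layers.<i>.` in parameter names:

```python
from transformers import AutoModelForMaskedLM

# Start from the general-domain checkpoint, as in the continued-pretraining setup.
model = AutoModelForMaskedLM.from_pretrained("almanach/moderncamembert-base")

# Freeze the early encoder layers (0-7) so the CLM phase cannot update them.
frozen_prefixes = [f"layers.{i}." for i in range(8)]
for name, param in model.named_parameters():
    if any(prefix in name for prefix in frozen_prefixes):
        param.requires_grad = False

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"Trainable parameters: {trainable:,} / {total:,}")
```

With layers 0-7 frozen, CLM training leaves them at initialization, which is what eliminates the downstream gain in the ablation.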
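Similarly, the 1-sqrt decay from the MLM phase can be written down explicitly. A sketch, under the assumption that "peak to 10%" means the schedule ends at 10% of the peak learning rate (the exact parameterization of the training config may differ):

```python
import math

def one_minus_sqrt_lr(step: int, total_steps: int,
                      peak_lr: float = 2e-4, final_ratio: float = 0.1) -> float:
    """1-sqrt decay: peak_lr at step 0, final_ratio * peak_lr at the last step."""
    progress = step / total_steps
    return peak_lr * (1.0 - (1.0 - final_ratio) * math.sqrt(progress))

print(one_minus_sqrt_lr(0, 1000))     # ~2e-4 (peak)
print(one_minus_sqrt_lr(1000, 1000))  # ~2e-5 (10% of peak)
```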
## Evaluation

French biomedical benchmark results (8 tasks, 9 seeds per model, macro-averaged F1):

| Model | Ctx | FrACCO-30 | FrACCO-100 | CANTEMIST | DISTEMIST | MedDialog | DiaMed | EMEA | Medline | **Avg** |
|-------|-----|-----------|------------|-----------|-----------|-----------|--------|------|---------|---------|
| **ModernCamemBERT-bio-base** | 8192 | **74.8** | **60.1** | **71.0** | **25.5** | 63.6 | **67.4** | 68.6 | 61.9 | **61.6** |
| MLM baseline (ours) | 8192 | 69.9 | 56.8 | 64.9 | 23.5 | 62.5 | 63.4 | 68.5 | 61.4 | 58.9 |
| ModernCamemBERT | 8192 | 70.1 | 55.3 | 63.3 | 20.2 | 60.6 | 56.4 | 68.0 | 59.7 | 56.7 |
| DrBERT | 512 | 53.0 | 35.6 | 37.9 | 21.4 | 63.6 | 57.0 | 69.6 | 62.8 | 50.1 |
| CamemBERT-bio | 512 | 41.9 | 20.1 | 12.8 | 9.6 | 38.6 | 47.7 | **70.8** | **65.2** | 38.3 |

ModernCamemBERT-bio-base outperforms the matched MLM baseline on all 8 tasks (+2.8pp on average, binomial p=0.004).

## Intended Use

This model is designed for French biomedical and clinical NLP tasks:

- Named entity recognition (diseases, chemicals, procedures)
- Document classification (clinical specialties, ICD coding)
- Multilabel classification on long clinical documents
- Information extraction from clinical reports, drug leaflets, and scientific articles

The 8,192-token context is critical for long clinical documents (discharge summaries, oncology reports) that are truncated by 512-token models.

## Related Models

| Model | Language | Parameters |
|-------|----------|------------|
| [ModernBERT-bio-base](https://huggingface.co/almanach/ModernBERT-bio-base) | English | 149M |
| [ModernBERT-bio-large](https://huggingface.co/almanach/ModernBERT-bio-large) | English | 396M |
| [ModernCamemBERT-bio-base](https://huggingface.co/almanach/ModernCamemBERT-bio-base) | French | 150M |
| [ModernCamemBERT-bio-large](https://huggingface.co/almanach/ModernCamemBERT-bio-large) | French | 350M |

## Limitations

- Trained on French biomedical text; not suitable for other languages without further adaptation.
- Encoder model: produces contextualized representations; it does not generate text.
- Clinical text may contain sensitive patterns; users are responsible for compliance with applicable regulations.

## License

Apache 2.0

## Citation

```bibtex
@misc{touchent2026causallanguagemodelingdetour,
  title={A Causal Language Modeling Detour Improves Encoder Continued Pretraining},
  author={Rian Touchent and Eric de la Clergerie},
  year={2026},
  eprint={2605.12438},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2605.12438},
}
```

## Acknowledgments

This work was performed using HPC resources from GENCI-IDRIS (Grant 2024-AD011014393R2).