rntc committed
Commit a22a382 · verified · 1 Parent(s): e6d2c01

Upload README.md with huggingface_hub

Files changed (1): README.md (+175 −5)
README.md CHANGED
@@ -5,17 +5,187 @@ license: apache-2.0
  library_name: transformers
  tags:
  - biomedical
+ - clinical
  - encoder
  - modernbert
  - fill-mask
+ base_model:
+ - answerdotai/ModernBERT-base
  pipeline_tag: fill-mask
+ widget:
+ - text: "The patient was diagnosed with [MASK] and started on antibiotics."
+ - text: "Mitochondria is the powerhouse of the [MASK]."
  ---

  # cpt-en-base

- Stealth upload. Model card will be updated before public release.
-
- - Architecture: ModernBERT (22 layers, 768 hidden, 12 heads)
- - Parameters: 149M
- - Context: 8192 tokens
- - Language: English
+ *cpt-en-base is available in two sizes: [base](https://huggingface.co/rntc/cpt-en-base) (149M parameters) and [large](https://huggingface.co/almanach/cpt-en-large) (396M parameters). Our code is available in our [GitHub repository](https://github.com/Rian-T/colm2026-clm-detour).*
+
+ ## Table of Contents
+
+ 1. [Model Summary](#model-summary)
+ 2. [Usage](#usage)
+ 3. [Training](#training)
+ 4. [Evaluation](#evaluation)
+ 5. [Intended Use](#intended-use)
+ 6. [Limitations](#limitations)
+ 7. [License](#license)
+ 8. [Citation](#citation)
+
+ ## Model Summary
+
+ cpt-en-base is an English biomedical encoder built by continued pretraining of [ModernBERT](https://huggingface.co/answerdotai/ModernBERT-base) using a **CLM detour** recipe. Instead of standard MLM continued pretraining, we temporarily switch to causal language modeling (CLM) before returning to MLM. This produces lasting representational changes in the early transformer layers that improve downstream biomedical performance.
+
+ cpt-en-base achieves **78.0% average F1** across 11 English biomedical benchmarks (5 Clinical + 6 BigBIO), the highest balanced score across both task families.
+
+ | | |
+ |---|---|
+ | **Architecture** | ModernBERT (FlashAttention, RoPE, alternating local/global attention, unpadding) |
+ | **Parameters** | 149M |
+ | **Layers** | 22 |
+ | **Hidden size** | 768 |
+ | **Attention heads** | 12 |
+ | **Context length** | 8,192 tokens |
+ | **Language** | English |
+ | **Base model** | [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base) |
+
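+ The figures above can be checked directly from the checkpoint's configuration. This is a small sanity-check snippet; the printed attributes are standard `transformers` config fields, not anything specific to this card:
+
+ ```python
+ from transformers import AutoConfig
+
+ config = AutoConfig.from_pretrained("rntc/cpt-en-base")
+ # Expect 22 layers, hidden size 768, 12 heads, 8192 max positions
+ print(config.num_hidden_layers, config.hidden_size, config.num_attention_heads, config.max_position_embeddings)
+ ```
+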
+ ## Usage
+
+ You can use this model with the `transformers` library (v4.48.0 or later):
+
+ ```bash
+ pip install -U "transformers>=4.48.0"
+ ```
+
+ If your GPU supports it, install Flash Attention for best efficiency:
+
+ ```bash
+ pip install flash-attn
+ ```
+
+ ### Masked Language Modeling
+
+ ```python
+ from transformers import AutoTokenizer, AutoModelForMaskedLM
+
+ model_id = "rntc/cpt-en-base"
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForMaskedLM.from_pretrained(model_id)
+
+ text = "The patient was diagnosed with [MASK] and started on antibiotics."
+ inputs = tokenizer(text, return_tensors="pt")
+ outputs = model(**inputs)
+
+ # Locate the [MASK] position and decode the top-scoring prediction
+ masked_index = inputs["input_ids"][0].tolist().index(tokenizer.mask_token_id)
+ predicted_token_id = outputs.logits[0, masked_index].argmax(axis=-1)
+ predicted_token = tokenizer.decode(predicted_token_id)
+ print("Predicted token:", predicted_token)
+ ```
+
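+ The same task can also be run through the `fill-mask` pipeline. This is a minimal sketch using the standard `transformers` pipeline API; nothing here is specific to this checkpoint beyond the model id:
+
+ ```python
+ from transformers import pipeline
+
+ # top_k controls how many candidate fillers are returned per mask
+ fill_mask = pipeline("fill-mask", model="rntc/cpt-en-base", top_k=5)
+ for candidate in fill_mask("The patient was diagnosed with [MASK] and started on antibiotics."):
+     print(candidate["token_str"], round(candidate["score"], 3))
+ ```
+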
+ ### Fine-tuning (Classification, NER, etc.)
+
+ ```python
+ from transformers import AutoTokenizer, AutoModel
+
+ model_id = "rntc/cpt-en-base"
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModel.from_pretrained(model_id)
+
+ text = "The patient presented with acute myocardial infarction and was treated with percutaneous coronary intervention."
+ inputs = tokenizer(text, return_tensors="pt", max_length=8192, truncation=True)
+ outputs = model(**inputs)
+ # outputs.last_hidden_state: [batch, seq_len, 768]
+ ```
+
+ **Note:** cpt-en-base does not use token type IDs. You can omit the `token_type_ids` parameter.
+
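+ For an end-to-end fine-tune, the checkpoint also loads into the standard `transformers` task heads. A minimal token-classification sketch follows; the label count is a placeholder, not from this card, and training proceeds with `transformers.Trainer` or your own loop as with any BERT-style encoder:
+
+ ```python
+ from transformers import AutoTokenizer, AutoModelForTokenClassification
+
+ model_id = "rntc/cpt-en-base"
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ # num_labels=5 is a placeholder for a small BIO tag set; set it to your scheme's size
+ model = AutoModelForTokenClassification.from_pretrained(model_id, num_labels=5)
+
+ inputs = tokenizer("Aspirin was prescribed for chest pain.", return_tensors="pt")
+ logits = model(**inputs).logits  # [batch, seq_len, num_labels]; the head is untrained until fine-tuned
+ ```
+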
+ ## Training
+
+ ### Data
+
+ | Corpus | Proportion | Description |
+ |--------|------------|-------------|
+ | PubMed | 60% | Biomedical abstracts |
+ | Med-Inst | 20% | Medical instructions |
+ | MIMIC | 20% | Clinical notes |
+ | **Total** | **100% (50B tokens)** | Single epoch |
+
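+ For experimentation, a mix with these proportions can be sampled with the `datasets` library. The corpora below are tiny placeholders standing in for PubMed, Med-Inst and MIMIC shards; this is an illustrative sketch, not the training pipeline used for cpt-en-base:
+
+ ```python
+ from datasets import Dataset, interleave_datasets
+
+ pubmed = Dataset.from_dict({"text": ["Abstract one.", "Abstract two."]})
+ med_inst = Dataset.from_dict({"text": ["Instruction example."]})
+ mimic = Dataset.from_dict({"text": ["Clinical note example."]})
+
+ # Sample examples according to the 60/20/20 mix from the table above
+ mixed = interleave_datasets([pubmed, med_inst, mimic], probabilities=[0.6, 0.2, 0.2], seed=42)
+ print(mixed[0])
+ ```
+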
+ ### Methodology
+
+ cpt-en-base is trained in two phases, initialized from [ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base):
+
+ * **Phase 1 — CLM detour (50B tokens):** The bidirectional attention mask is replaced with a causal mask, and the model is trained with next-token prediction. This dense training signal (100% of positions) deeply modifies the early transformer layers for domain adaptation.
+ * **Phase 2 — MLM decay (5B tokens):** Bidirectional attention is restored, and the model is trained with masked language modeling at a 15% masking rate. The learning rate decays from its peak to 10% of peak following a 1-sqrt schedule (sketched below).
+
+ Both phases use the same data mix. Training used AdamW (lr=2e-4, beta1=0.9, beta2=0.98), bf16 mixed precision, and a global batch size of 384 sequences (~3.1M tokens) on 4x H100 GPUs with [Composer](https://github.com/mosaicml/composer).
+
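+ As an illustration only, one common reading of a "1-sqrt" decay from the peak learning rate down to 10% of peak is the following sketch; the exact formula is an assumption, not the scheduler code used in training:
+
+ ```python
+ import math
+
+ def one_sqrt_lr(step: int, total_steps: int, peak_lr: float = 2e-4, floor: float = 0.1) -> float:
+     """Assumed 1-sqrt decay: starts at peak_lr, ends at floor * peak_lr."""
+     progress = min(step / total_steps, 1.0)
+     return peak_lr * (1.0 - (1.0 - floor) * math.sqrt(progress))
+
+ print(one_sqrt_lr(0, 1000))     # 2e-4 at the start of the decay phase
+ print(one_sqrt_lr(1000, 1000))  # 2e-5 (10% of peak) at the end
+ ```
+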
+ ### Why a CLM Detour?
+
+ CLM supervises every token position, producing dense gradient updates that deeply modify the early transformer layers (layers 0-7). These changes persist through the MLM decay phase — a phenomenon we call **computational hysteresis**. We provide causal evidence through freeze interventions showing that early-layer modification is both necessary and sufficient for the CLM benefit (double dissociation). See our paper for the full mechanistic analysis.
+
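+ As an illustration of a layer-freeze intervention of this kind (a generic sketch, not the paper's experimental code; the `layers.{i}` name pattern assumes the current `transformers` ModernBERT module layout):
+
+ ```python
+ from transformers import AutoModelForMaskedLM
+
+ model = AutoModelForMaskedLM.from_pretrained("rntc/cpt-en-base")
+
+ # Freeze encoder layers 0-7 by matching parameter names; the trailing dot avoids matching layers 10+
+ for name, param in model.named_parameters():
+     if any(f"layers.{i}." in name for i in range(8)):
+         param.requires_grad = False
+
+ frozen = sum(1 for p in model.parameters() if not p.requires_grad)
+ print(f"{frozen} parameter tensors frozen")
+ ```
+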
+ ## Evaluation
+
+ English biomedical benchmark results (11 tasks, 5 seeds per model; best score per column in bold):
+
+ ### Clinical Tasks
+
+ | Model | Ctx | ChemProt | Phenotype | COS | Social Hist. | DEID | **Avg** |
+ |-------|-----|----------|-----------|-----|--------------|------|---------|
+ | **cpt-en-base** | 8192 | 90.1 | **61.9** | **95.2** | 54.2 | **83.2** | **76.9** |
+ | BioClinical-ModernBERT | 8192 | 90.0 | 60.7 | 94.8 | **56.0** | 81.8 | 76.7 |
+ | PubMedBERT | 512 | **90.2** | 52.0 | 95.0 | 48.7 | 80.4 | 73.3 |
+ | ModernBERT-base | 8192 | 89.5 | 48.4 | 94.0 | 53.1 | 78.3 | 72.7 |
+
+ ### BigBIO Tasks
+
+ | Model | Ctx | AnatEM | BC5CDR | JNLPBA | NCBI | GAD | HoC | **Avg** |
+ |-------|-----|--------|--------|--------|------|-----|-----|---------|
+ | **cpt-en-base** | 8192 | 81.0 | 89.1 | 74.5 | 80.1 | 78.8 | 70.0 | 78.9 |
+ | BioClinical-ModernBERT | 8192 | 79.2 | 88.7 | 74.8 | 78.7 | 75.8 | 67.0 | 77.4 |
+ | PubMedBERT | 512 | **83.3** | **89.7** | **74.9** | **82.1** | **79.3** | **71.0** | **80.1** |
+ | ModernBERT-base | 8192 | 77.2 | 87.9 | 74.3 | 77.7 | 76.8 | 66.6 | 76.8 |
+
+ ### Overall
+
+ | Model | Clinical | BigBIO | **Overall** |
+ |-------|----------|--------|-------------|
+ | **cpt-en-base** | **76.9** | 78.9 | **78.0** |
+ | BioClinical-ModernBERT | 76.7 | 77.4 | 77.0 |
+ | PubMedBERT | 73.3 | **80.1** | 77.0 |
+ | ModernBERT-base | 72.7 | 76.8 | 74.9 |
+
+ cpt-en-base achieves the highest overall score (78.0%, averaged over all 11 tasks), i.e. the most balanced performance across the Clinical and BigBIO task families. PubMedBERT scores higher on the short-context BigBIO NER tasks but falls behind on long-context tasks (Phenotype: 52.0% vs 61.9%).
+
+ ## Intended Use
+
+ This model is designed for English biomedical and clinical NLP tasks:
+
+ - Named entity recognition (diseases, chemicals, genes, anatomy)
+ - Document classification (clinical phenotyping, relation extraction)
+ - De-identification of clinical notes
+ - Information extraction from PubMed abstracts and clinical reports
+
+ The 8,192-token context is important for long clinical documents (discharge summaries, pathology reports) that are truncated by 512-token models.
+
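+ For the classification-style use cases above, the checkpoint loads directly into the standard sequence-classification head. A minimal sketch with a placeholder label count:
+
+ ```python
+ from transformers import AutoTokenizer, AutoModelForSequenceClassification
+
+ model_id = "rntc/cpt-en-base"
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ # num_labels=2 is a placeholder; set it to your task's label count before fine-tuning
+ model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=2)
+
+ # Long documents fit without chunking thanks to the 8,192-token context
+ inputs = tokenizer("Full discharge summary text ...", return_tensors="pt", max_length=8192, truncation=True)
+ logits = model(**inputs).logits  # [batch, num_labels]; the head is untrained until fine-tuned
+ ```
+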
+ ## Limitations
+
+ - Trained on English biomedical text; not suitable for other languages without further adaptation. See [cpt-fr-base](https://huggingface.co/almanach/cpt-fr-base) for French.
+ - Encoder-only model: it produces contextualized representations and does not generate text.
+ - Clinical text may contain sensitive patterns; users are responsible for compliance with applicable regulations (HIPAA, etc.).
+ - The English CLM-MLM improvement at Base scale (+0.3pp) is smaller than in French (+2.9pp) and not statistically significant (binomial p=0.27). The practical benefit is clearest at Large scale (+0.8pp) and on long-context tasks.
+
+ ## License
+
+ Apache 2.0
+
+ ## Citation
+
+ ```bibtex
+ @inproceedings{touchent2026clm,
+   title={A Causal Language Modeling Detour Improves Encoder Continued Pretraining},
+   author={Touchent, Rian and de la Clergerie, {\'E}ric},
+   booktitle={Proceedings of COLM},
+   year={2026}
+ }
+ ```
+
+ ## Acknowledgments
+
+ This work was performed using HPC resources from GENCI-IDRIS (Grant 2024-AD011015883).