---
tags:
- medical
license: other
license_name: research-only-rail-m
model-index:
- name: Curia
  results:
  - task:
      type: classification
    dataset:
      type: CuriaBench
      name: CuriaBench Anatomy Recognition
    metrics:
    - name: Accuracy
      type: accuracy
      value: 98.1
datasets:
- raidium/CuriaBench
extra_gated_prompt: >-
  Please confirm that you have read and agree to the following disclaimer.
  The model in this repository is provided for research use only
  (Research-only RAIL-M license). The model(s) and/or software are not
  intended for use in clinical decision-making or for any other clinical
  use, and performance for clinical use has not been established.
---
Raidium

🌟 Github | 📄 Paper Link | 🌐 Blog post

# Curia: A Multi-Modal Foundation Model for Radiology

We introduce Curia, a foundation model trained on the entire cross-sectional imaging output of a major hospital over several years—to our knowledge the largest such corpus of real-world data—encompassing 150,000 exams (130 TB). On a newly curated 19-task external validation benchmark, Curia accurately identifies organs, detects conditions such as brain hemorrhages and myocardial infarctions, and predicts outcomes in tumor staging. Curia meets or surpasses the performance of radiologists and recent foundation models, and exhibits clinically significant emergent properties in cross-modality and low-data regimes. Read the research paper: https://arxiv.org/abs/2509.06830
## Results
## Loading the model

To load the model, use the `AutoModel` class from the Hugging Face `transformers` library.

```python
from transformers import AutoModel

model = AutoModel.from_pretrained("raidium/curia")
```

You can also load the image pre-processor:

```python
from transformers import AutoImageProcessor

processor = AutoImageProcessor.from_pretrained("raidium/curia", trust_remote_code=True)
```

Then, to forward an image:

```python
import numpy as np

# Single axial slice in PL orientation, with values in Hounsfield units
img = np.random.uniform(low=-1024, high=1024, size=(256, 256))
model_input = processor(img)
features = model(**model_input)
```

The input image must follow this format:

- a NumPy array of shape (H, W);
- orientation: PL for axial, IL for coronal, IP for sagittal slices;
- for CT: no windowing, just raw Hounsfield units or a normalized image;
- for MRI: likewise, no windowing, just raw values or a normalized image.

## Loading the model with heads

The following classification heads are available:

```
abdominal-trauma
anatomy-ct
anatomy-mri
atlas-stroke
covidx-ct
deep-lesion-site
emidec-classification-mask
ich
ixi
kits
kneeMRI
luna16-3D
neural_foraminal_narrowing
oasis
spinal_canal_stenosis
subarticular_stenosis
```

To load a head, specify its name as the `subfolder` when loading the model:

```python
from transformers import AutoImageProcessor, AutoModelForImageClassification

processor = AutoImageProcessor.from_pretrained("raidium/curia", trust_remote_code=True)
model = AutoModelForImageClassification.from_pretrained(
    "raidium/curia", subfolder="anatomy-ct", trust_remote_code=True
)
```

You can find the class of each label in `id_to_labels.json`.

## License

The model is released under the RESEARCH-ONLY RAIL-M license:
https://huggingface.co/raidium/curia/blob/main/LICENSE

## Cite our paper

```
@article{dancette2025curia,
  title={Curia: A Multi-Modal Foundation Model for Radiology},
  author={Dancette, Corentin and Khlaut, Julien and Saporta, Antoine and Philippe, Helene and Ferreres, Elodie and Callard, Baptiste and Danielou, Th{\'e}o and Alberge, L{\'e}o and Machado, L{\'e}o and Tordjman, Daniel and others},
  journal={arXiv preprint arXiv:2509.06830},
  year={2025}
}
```
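As a closing note on decoding a classification head's output: the predicted class index maps to a human-readable name via `id_to_labels.json`. A minimal sketch, using an illustrative stand-in mapping (the labels below are hypothetical; the real mapping ships with each head):

```python
import json

# Hypothetical excerpt of `id_to_labels.json` -- illustrative values only;
# load the actual file from the head's subfolder (e.g. `anatomy-ct`).
id_to_labels = json.loads('{"0": "liver", "1": "kidney", "2": "spleen"}')

# Suppose a head's forward pass produced logits whose argmax is index 2:
pred = 2
label = id_to_labels[str(pred)]
print(label)  # → spleen
```

Note that the JSON keys are strings, so the integer index must be converted with `str()` before lookup.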