Access request for PMC-9K
Please acknowledge the benchmark-use notice below before accessing this repository.
By requesting access to this repository, you acknowledge that PMC-9K is a research benchmark intended for cross-modal biomedical image-text retrieval evaluation. If you reconstruct or link back to upstream source images, you are responsible for complying with the relevant upstream source terms. You also agree not to use this benchmark to identify individuals or for clinical decision-making.
Dataset Card for PMC-9K
Dataset Summary
PMC-9K is a curated biomedical image-text benchmark associated with the ConceptCLIP project.
It is intended primarily for cross-modal retrieval evaluation, including:
- image-to-text retrieval
- text-to-image retrieval
- benchmarking biomedical multimodal encoders
- reproducibility studies for ConceptCLIP and related methods
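Both retrieval directions reduce to ranking one modality's embeddings against the other's and checking whether the ground-truth match appears in the top K. As a model-agnostic sketch (the embeddings below are random stand-ins, not ConceptCLIP outputs, and the diagonal-match convention is an assumption about how pairs are indexed), Recall@K in both directions can be computed from a single similarity matrix:

```python
import numpy as np

def recall_at_k(similarity: np.ndarray, k: int) -> float:
    """Fraction of queries whose matching item is ranked in the top k.

    similarity[i, j] is the score between query i and candidate j;
    the ground-truth match for query i is assumed to be candidate i.
    """
    # Indices of the k highest-scoring candidates for each query row.
    topk = np.argsort(-similarity, axis=1)[:, :k]
    hits = (topk == np.arange(similarity.shape[0])[:, None]).any(axis=1)
    return float(hits.mean())

rng = np.random.default_rng(0)
# Stand-in unit-norm embeddings for 8 image-text pairs (dim 32);
# each text embedding is placed near its paired image embedding.
img = rng.normal(size=(8, 32))
txt = img + 0.1 * rng.normal(size=(8, 32))
img /= np.linalg.norm(img, axis=1, keepdims=True)
txt /= np.linalg.norm(txt, axis=1, keepdims=True)

sim = img @ txt.T  # cosine similarity: rows = images, columns = texts
print("image->text R@1:", recall_at_k(sim, 1))   # rank texts per image
print("text->image R@1:", recall_at_k(sim.T, 1)) # rank images per text
```

Transposing the similarity matrix swaps the query and candidate modalities, so one matrix serves both evaluation directions.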
Benchmark Role
PMC-9K is a held-out evaluation benchmark designed to test how well a model aligns biomedical images and texts.
This repository is therefore best understood as an evaluation resource rather than a general-purpose clinical dataset.
What This Repository Contains
This repository is intended to host the released benchmark artifacts associated with PMC-9K, such as captions, metadata, identifiers, and related files needed for evaluation or reconstruction workflows.
Please inspect the repository file tree for the exact contents of the current release.
Data Provenance
PMC-9K is associated with the same broader project pipeline as MedConcept-23M and is intended as a curated held-out benchmark for retrieval evaluation.
If your workflow reconstructs or links source-derived image content from upstream resources, you remain responsible for complying with the applicable upstream terms.
Gated Access Notice
This repository uses gated access with contact sharing and an additional acknowledgment form.
The gate exists to ensure that users explicitly acknowledge that:
- PMC-9K is a research benchmark
- it is not a medical device or clinical support product
- source-linked content may carry upstream obligations
- the benchmark must not be used for privacy-invasive purposes
Intended Uses
Direct Use
- Evaluating image-to-text retrieval
- Evaluating text-to-image retrieval
- Benchmarking biomedical vision-language models
- Reproducing the retrieval experiments of the ConceptCLIP project
Downstream Use
- Internal evaluation of new biomedical multimodal encoders
- Method comparison in papers, reports, and technical studies
- Educational demonstrations of biomedical retrieval systems
Out-of-Scope Use
- Direct clinical decision making
- Re-identification attempts
- Use that violates upstream source terms or institutional policy
- Marketing claims of clinical validity without independent validation
Responsible-Use Notice
PMC-9K is meant for research and benchmarking.
Users should not present benchmark performance on PMC-9K as evidence that a model is clinically validated or suitable for patient-care deployment.
If source-linked content is reconstructed, users must handle that process responsibly and in compliance with the relevant upstream terms.
Relationship to Other Project Repositories
- Main model repository: JerrryNie/ConceptCLIP
- Pretraining resource: JerrryNie/MedConcept-23M
- Code and project repository: https://github.com/JerrryNie/ConceptCLIP
Example Loading Pattern
```python
from datasets import load_dataset

# Replace the split / config names with the actual ones present in the repo.
dataset = load_dataset("JerrryNie/pmc9k")
print(dataset)
```
License Note
The files released directly in this repository follow the repository license shown on the Hugging Face page.
If your workflow reconstructs or accesses source-linked materials from upstream repositories, those materials may still be governed by their original terms.
Citation
If you use this benchmark, please cite:
```bibtex
@article{nie2025conceptclip,
  title={An Explainable Biomedical Foundation Model via Large-Scale Concept-Enhanced Vision-Language Pre-training},
  author={Nie, Yuxiang and He, Sunan and Bie, Yequan and Wang, Yihui and Chen, Zhixuan and Yang, Shu and Cai, Zhiyuan and Wang, Hongmei and Wang, Xi and Luo, Luyang and Wu, Mingxiang and Wu, Xian and Chan, Ronald Cheong Kin and Lau, Yuk Ming and Zheng, Yefeng and Rajpurkar, Pranav and Chen, Hao},
  journal={arXiv preprint arXiv:2501.15579},
  year={2025}
}
```
Contact
For questions about benchmark usage or release contents, please use the repository discussion page or the main project repository.