Add metadata and improve model card

#1 · opened by nielsr (HF Staff)
Files changed (1)
  1. README.md +21 -3
README.md CHANGED
@@ -1,3 +1,9 @@
+---
+library_name: transformers
+pipeline_tag: text-generation
+base_model: Qwen/Qwen3-8B
+---
+
 <p align="center">
   <img src="figures/logo.jpg" alt="AROMA Logo" width="120">
 </p>
@@ -5,11 +11,11 @@
 <h2 align="center"> 🧬 AROMA: Augmented Reasoning Over a Multimodal Architecture for Virtual Cell Genetic Perturbation Modeling<br>(ACL 2026 Findings)</h2>
 
 <p align="center">
-📃 <a href="https://arxiv.org/pdf/2604.20263" target="_blank">Paper</a> • 🐙 <a href="https://github.com/blazerye/AROMA" target="_blank">Code</a> • 🗂️ <a href="https://huggingface.co/datasets/blazerye/PerturbReason" target="_blank">Datasets</a><br>
+📃 <a href="https://huggingface.co/papers/2604.20263" target="_blank">Paper</a> • 🐙 <a href="https://github.com/blazerye/AROMA" target="_blank">Code</a> • 🗂️ <a href="https://huggingface.co/datasets/blazerye/PerturbReason" target="_blank">Datasets</a><br>
 </p>
 </p>
 
-> Please refer to our [repository](https://github.com/blazerye/AROMA) and [paper](https://arxiv.org/pdf/2604.20263) for more details.
+> Please refer to our [repository](https://github.com/blazerye/AROMA) and [paper](https://huggingface.co/papers/2604.20263) for more details.
 
 ## 🌐 Overview
 
@@ -25,4 +31,16 @@ The overall AROMA pipeline is illustrated in the figure above and is divided int
 
 - **Modeling stage.** AROMA adopts a retrieval-augmented strategy to incorporate query-relevant information, thereby providing explicit evidence cues for prediction. In addition, it jointly leverages topological representations learned from graph neural networks (GNN) and protein sequence representations encoded by ESM-2, and applies a cross-attention module to explicitly model perturbation-target gene dependencies across modalities.
 
-- **Training stage.** AROMA first performs multimodal supervised fine-tuning (SFT), and is then further optimized with Group Relative Policy Optimization (GRPO) reinforcement learning to enhance predictive performance while generating biologically meaningful explanations.
+- **Training stage.** AROMA first performs multimodal supervised fine-tuning (SFT), and is then further optimized with Group Relative Policy Optimization (GRPO) reinforcement learning to enhance predictive performance while generating biologically meaningful explanations.
+
+## 📌 Citation
+If you find AROMA useful for your research and applications, please cite using this BibTeX:
+```bibtex
+@inproceedings{wang2026aroma,
+title="{AROMA}: Augmented Reasoning Over a Multimodal Architecture for Virtual Cell Genetic Perturbation Modeling",
+author="Wang, Zhenyu and Ye, Geyan and Liu, Wei and Ng, Man Tat Alexander",
+booktitle="Findings of the Association for Computational Linguistics: ACL 2026",
+year="2026",
+publisher="Association for Computational Linguistics"
+}
+```
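The modeling stage described in the card fuses two modalities via cross-attention: GNN topological embeddings attend over ESM-2 protein-sequence embeddings. A minimal single-head sketch of that fusion step is below; the dimensions, random weights, and single-head formulation are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # shift for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query, context, wq, wk, wv):
    """Single-head cross-attention: `query` attends over `context`.

    query:   (n_q, d_model), e.g. GNN embeddings of perturbation genes
    context: (n_c, d_model), e.g. ESM-2 embeddings of target proteins
    """
    q = query @ wq                           # (n_q, d_k)
    k = context @ wk                         # (n_c, d_k)
    v = context @ wv                         # (n_c, d_k)
    scores = q @ k.T / np.sqrt(k.shape[-1])  # (n_q, n_c) gene-protein affinities
    weights = softmax(scores, axis=-1)       # each gene distributes attention over proteins
    return weights @ v                       # (n_q, d_k) fused cross-modal representation

# Toy sizes: 4 perturbation genes, 6 target proteins (hypothetical).
rng = np.random.default_rng(0)
d_model, d_k = 16, 8
gnn_emb = rng.normal(size=(4, d_model))
esm_emb = rng.normal(size=(6, d_model))
wq, wk, wv = (rng.normal(size=(d_model, d_k)) for _ in range(3))
fused = cross_attention(gnn_emb, esm_emb, wq, wk, wv)
print(fused.shape)  # (4, 8)
```

In this direction of attention, each perturbation gene's representation becomes a weighted mixture of protein-sequence features, which is one way to realize the "perturbation-target gene dependencies across modalities" the card mentions.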
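For the training stage, GRPO's defining step is the group-relative advantage: several completions are sampled per query, and each completion's reward is z-scored against its own group rather than against a learned value baseline. A minimal sketch of that computation, assuming scalar rewards per group of sampled predictions (this is not the authors' training code):

```python
import numpy as np

def group_relative_advantages(rewards, eps=1e-8):
    """GRPO advantage: z-score each reward within its group.

    rewards: (n_groups, group_size) array of scalar rewards for completions
    sampled from the same perturbation query; eps guards against zero std.
    """
    mean = rewards.mean(axis=1, keepdims=True)
    std = rewards.std(axis=1, keepdims=True)
    return (rewards - mean) / (std + eps)

# Two queries, four sampled predictions each (illustrative reward values).
rewards = np.array([[1.0, 0.0, 0.5, 0.5],
                    [0.2, 0.8, 0.8, 0.2]])
adv = group_relative_advantages(rewards)
print(adv.round(2))
```

The advantages then weight the policy-gradient update, so completions scoring above their group's mean are reinforced and below-mean ones are suppressed, without training a separate critic.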