Model Details

RAMP (Raw-text Anchored Message Passing) recasts the LLM as a graph-native aggregation operator on text-rich graphs. This checkpoint is pretrained with one message-passing layer and a compression ratio of 0.1.

  • Base model: Qwen2.5-7B-Instruct
  • Message passing layers: 1
  • Compression ratio: 0.1

For training, evaluation, and usage details, see our GitHub repo.
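As a quick-start complement to the repo, here is a minimal loading sketch using the standard Hugging Face `transformers` API. The repo id and BF16 tensor type come from this card; the helper name `load_ramp` and all other details are illustrative assumptions, not the project's official usage code.

```python
# Hypothetical loading sketch for this checkpoint; only the repo id and
# BF16 dtype are taken from the model card, the rest is generic
# transformers usage.

MODEL_ID = "JJYDXFS/RAMP_7B_mp_1_ratio_0.1"

def load_ramp(model_id: str = MODEL_ID):
    """Load tokenizer and model; bfloat16 matches the checkpoint's tensor type."""
    # Deferred import so the module can be inspected without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype="bfloat16",  # checkpoint is stored in BF16
        device_map="auto",       # requires accelerate; places weights automatically
    )
    return tokenizer, model
```

Refer to the GitHub repo for the graph-specific input construction (message passing and compression are part of RAMP's pipeline, not plain text generation).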

Citation

@article{zhang2026llm,
  title={LLM as Graph Kernel: Rethinking Message Passing on Text-Rich Graphs},
  author={Zhang, Ying and Yu, Hang and Zhang, Haipeng and Di, Peng},
  journal={arXiv preprint arXiv:2603.14937},
  year={2026}
}

Format: Safetensors · 8B params · BF16
