---
license: mit
task_categories:
  - text-generation
language:
  - en
tags:
  - molecules
  - chemistry
  - pretraining
  - graph-llm
  - smiles
pretty_name: EDT-Former Encoder Pretraining Data
size_categories:
  - 10M<n<100M
---

# EDT-Former Encoder Pretraining Data

Pretraining corpus for the EDT-Former Stage 1 encoder, as described in the ICLR 2026 paper:

> **Entropy-Guided Dynamic Tokens for Graph-LLM Alignment in Molecular Understanding**
> Zihao Jing, Qiuhao Zeng, Ruiyi Fang, Yan Sun, Boyu Wang, Pingzhao Hu
> ICLR 2026 · Paper · Code

## Dataset Description

This dataset is used to train the Stage 1 EDT-Former encoder, which learns to align molecular graph representations with text. It is derived from PubChem molecules annotated with BRICS fragment IDs and entropy-guided graph IDs.
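
If you are unfamiliar with BRICS fragments, the minimal RDKit sketch below shows what a BRICS decomposition of a single molecule looks like. This is an illustration only: the dataset's actual fragment-ID and entropy-guided graph-ID annotation is implemented in the EDT-Former code, and aspirin is just an arbitrary example molecule.

```python
# Minimal sketch of BRICS fragmentation with RDKit (illustration only;
# the dataset's real annotation pipeline lives in the EDT-Former repo).
from rdkit import Chem
from rdkit.Chem import BRICS

# Aspirin, used here purely as an example input.
mol = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")

# BRICSDecompose returns the BRICS fragments as SMILES strings,
# with dummy atoms ([n*]) marking where bonds were broken.
fragments = BRICS.BRICSDecompose(mol)
print(sorted(fragments))
```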

## Data Format

Each split is stored as a JSONL file where each line is a JSON object representing one molecule-text pair, including graph features, SMILES, and associated text descriptions.

| Split      | File          | Size   |
|------------|---------------|--------|
| Train      | `train.jsonl` | ~12 GB |
| Validation | `val.jsonl`   | ~37 MB |
| Test       | `test.jsonl`  | ~78 MB |
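
To check the schema before writing any parsing code, you can read a single line from a downloaded split. A minimal sketch, assuming a local copy of `val.jsonl`; the field names are whatever the dataset actually ships, so inspect the keys first:

```python
import json

# Read the first record from a locally downloaded split file.
with open("val.jsonl") as f:
    record = json.loads(next(f))

# Discover the schema (graph features, SMILES, text description, ...)
# before writing any downstream parsing code.
print(record.keys())
```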

## Usage

```python
from datasets import load_dataset

dataset = load_dataset("zihaojing/EDT-Former-pretrain-data")
```
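
Because the train split is roughly 12 GB, you may prefer the standard streaming mode of `datasets`, which iterates over records lazily instead of downloading and caching the whole split up front:

```python
from datasets import load_dataset

# streaming=True yields records one at a time without materializing
# the ~12 GB train split on disk first.
stream = load_dataset(
    "zihaojing/EDT-Former-pretrain-data",
    split="train",
    streaming=True,
)

first = next(iter(stream))
print(first.keys())
```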

Or point to it in the EDT-Former config:

```yaml
# configs/stage1_dqw2d/data_config_preprocessed.yaml
use_preprocessed: true
preprocessed_data: zihaojing/EDT-Former-pretrain-data
```
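
With `use_preprocessed: true`, the Stage 1 training script should load this already-preprocessed corpus from the Hub rather than rebuilding it from raw PubChem data; see the code repository for the exact semantics of these keys.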

## Related Resources

| Resource             | Link                            |
|----------------------|---------------------------------|
| SFT Data             | `zihaojing/EDT-Former-sft-data` |
| Encoder (Stage 1)    | `zihaojing/EDT-Former-encoder`  |
| Full Model (Stage 2) | `zihaojing/EDT-Former-model`    |
| Code                 | `selmiss/DQ-Former`             |

## Citation

```bibtex
@inproceedings{jing2026edtformer,
  title={Entropy-Guided Dynamic Tokens for Graph-LLM Alignment in Molecular Understanding},
  author={Jing, Zihao and Zeng, Qiuhao and Fang, Ruiyi and Sun, Yan and Wang, Boyu and Hu, Pingzhao},
  booktitle={International Conference on Learning Representations (ICLR)},
  year={2026}
}
```