# EquiformerV3: Scaling Efficient, Expressive, and General SE(3)-Equivariant Graph Attention Transformers

Code | Paper

This repository contains the checkpoints for the work "EquiformerV3: Scaling Efficient, Expressive, and General SE(3)-Equivariant Graph Attention Transformers". Please refer to the code repository for a detailed description of usage.
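Before diving into the repository, a released checkpoint can be inspected with plain PyTorch. The sketch below is not the official loading path; the top-level key names it checks (e.g. `state_dict`) are assumptions, and the authoritative layout is defined by the training code.

```python
import torch

# Minimal sketch: peek inside a released checkpoint with plain PyTorch.
# The key name "state_dict" below is an assumption; the authoritative
# loading path is the code in the repository.
ckpt = torch.load("mptrj_gradient.pt", map_location="cpu", weights_only=False)

if isinstance(ckpt, dict):
    print("top-level keys:", sorted(ckpt.keys()))
    state_dict = ckpt.get("state_dict", ckpt)
else:
    state_dict = ckpt  # the file may store a bare state dict

# Report the total parameter count across all saved tensors.
n_params = sum(v.numel() for v in state_dict.values() if torch.is_tensor(v))
print(f"{n_params:,} parameters across {len(state_dict)} entries")
```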

## Contents

  1. MPtrj
  2. OMat24 → MPtrj and sAlex

## MPtrj

| Model | Training data | Checkpoint |
| --- | --- | --- |
| EquiformerV3 | MPtrj | `mptrj_gradient.pt` |

## OMat24 → MPtrj and sAlex

Training consists of three stages: (1) direct pre-training on OMat24, (2) gradient fine-tuning on OMat24, initialized from (1), and (3) gradient fine-tuning on MPtrj and sAlex, initialized from (2). A sketch of how one stage's checkpoint seeds the next follows the table below.
| Model | Training data | Config | Checkpoint |
| --- | --- | --- | --- |
| EquiformerV3 (direct pre-training) | OMat24 | `omat24_direct.yml` | `omat24_direct.pt` |
| EquiformerV3 (gradient fine-tuning) | OMat24 | `omat24_gradient.yml` | `omat24_gradient.pt` |
| EquiformerV3 (gradient fine-tuning) | MPtrj and sAlex | `mptrj-salex_gradient.yml` | `omat24-mptrj-salex_gradient.pt` |
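To illustrate the stage-to-stage hand-off, the sketch below loads one stage's weights to initialize the next. It is schematic only: `EquiformerV3Model` is a stand-in name, and the real model class, config handling, and training loop come from the repository and its `.yml` configs.

```python
import torch
import torch.nn as nn

# Schematic of stage (1) -> stage (2): gradient fine-tuning on OMat24
# starts from the direct pre-training weights. EquiformerV3Model is a
# placeholder; substitute the actual model class from the repository.
class EquiformerV3Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Linear(4, 16)  # stand-in layer only

    def forward(self, x):
        return self.embed(x)

model = EquiformerV3Model()

ckpt = torch.load("omat24_direct.pt", map_location="cpu", weights_only=False)
state_dict = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt

# strict=False tolerates key differences between the direct and gradient
# variants; check what was actually matched before starting training.
missing, unexpected = model.load_state_dict(state_dict, strict=False)
print("missing keys:", len(missing), "| unexpected keys:", len(unexpected))
# ...then fine-tune per the settings in omat24_gradient.yml
```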