Transformers · Safetensors · dplm2 · custom_code

lhallee committed · verified · Commit f2b27d4 · Parent: d83021a

Upload README.md with huggingface_hub

Files changed (1): README.md (+109, -109)
---
library_name: transformers
tags: []
---

# NOTE
The GitHub repository with the implementation and requirements can be found [here](https://github.com/Synthyra/FastPLMs.git).

# DPLM2
Synthyra DPLM2 checkpoints are HuggingFace AutoModel compatible and include the FastPLMs embedding helpers.

## Supported models
```python
model_dict = {
    "Synthyra/DPLM2-150M": "airkingbd/dplm2_150m",
    "Synthyra/DPLM2-650M": "airkingbd/dplm2_650m",
    "Synthyra/DPLM2-3B": "airkingbd/dplm2_3b",
}
```

## Use with transformers
```python
import torch
from transformers import AutoModel, AutoModelForMaskedLM

model_path = "Synthyra/DPLM2-150M"
model = AutoModel.from_pretrained(model_path, trust_remote_code=True, dtype=torch.float16).eval()
tokenizer = model.tokenizer

batch = tokenizer(["MPRTEIN", "MSEQWENCE"], padding=True, return_tensors="pt")
with torch.no_grad():
    hidden = model(**batch).last_hidden_state

mlm = AutoModelForMaskedLM.from_pretrained(model_path, trust_remote_code=True, dtype=torch.float16).eval()
with torch.no_grad():
    logits = mlm(**batch).logits
```

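`last_hidden_state` is per-token; to get one vector per sequence, the usual recipe is to mask out padding and average. A minimal mean-pooling sketch in plain PyTorch (independent of DPLM2; the shapes below are toy values for illustration):

```python
import torch

def mean_pool(hidden: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    """Average token embeddings over real (non-padding) positions."""
    mask = attention_mask.unsqueeze(-1).to(hidden.dtype)  # (B, L, 1)
    summed = (hidden * mask).sum(dim=1)                   # (B, D)
    counts = mask.sum(dim=1).clamp(min=1)                 # (B, 1)
    return summed / counts

# Toy shapes: batch of 2, length 4, hidden size 8
hidden = torch.randn(2, 4, 8)
attention_mask = torch.tensor([[1, 1, 1, 0], [1, 1, 0, 0]])
pooled = mean_pool(hidden, attention_mask)
print(pooled.shape)  # torch.Size([2, 8])
```

The same function works directly on the `hidden` and `batch["attention_mask"]` tensors from the example above.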
## DPLM2 modality types
DPLM2 infers `type_ids` automatically from `input_ids` and `attention_mask` when they are not provided.

## Attention backends

`sdpa` (PyTorch Scaled Dot Product Attention) is the default.

| Backend | Key | Notes |
| :--- | :--- | :--- |
| PyTorch SDPA | `"sdpa"` | Default. Exact numerics, stable on all hardware. |
| Flash Attention | `"kernels_flash"` | Fastest on Ampere/Hopper GPUs. Requires `pip install kernels` (pre-built wheels, no hours-long compilation). Outputs are not bitwise identical to SDPA due to online softmax reordering; differences are often small but not guaranteed to be inconsequential, so use `"sdpa"` if exact numerics matter. |
| Flex Attention | `"flex"` | Skips padding tokens via a block mask, so it is faster on variable-length batches. Near-exact numerics. First use compiles a Triton kernel (30–120 s). Best combined with `torch.compile`. |
| Auto | `"auto"` | Picks the best available backend: `kernels_flash` → `flex` → `sdpa`. |

Set via config before loading, or change on the model after loading (DPLM2 propagates the change to all attention layers immediately):

```python
from transformers import AutoConfig, AutoModel

# Option 1: set before loading
config = AutoConfig.from_pretrained("Synthyra/DPLM2-150M", trust_remote_code=True)
config.attn_backend = "flex"
model = AutoModel.from_pretrained("Synthyra/DPLM2-150M", config=config, trust_remote_code=True)

# Option 2: set after loading
model = AutoModel.from_pretrained("Synthyra/DPLM2-150M", trust_remote_code=True)
model.attn_backend = "flex"  # propagates to all attention layers in-place
```

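The "exact numerics" note for SDPA in the table above can be checked directly: `torch.nn.functional.scaled_dot_product_attention` matches a hand-written softmax(QK^T / sqrt(d))V to floating-point tolerance. A self-contained sketch with toy shapes (unrelated to DPLM2's weights):

```python
import math
import torch
import torch.nn.functional as F

torch.manual_seed(0)
q = torch.randn(1, 2, 5, 8)  # (batch, heads, seq_len, head_dim)
k = torch.randn(1, 2, 5, 8)
v = torch.randn(1, 2, 5, 8)

# Reference attention: softmax(Q K^T / sqrt(d)) V
scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
reference = torch.softmax(scores, dim=-1) @ v

# PyTorch SDPA with default scaling and no mask
out = F.scaled_dot_product_attention(q, k, v)

print(torch.allclose(out, reference, atol=1e-6))  # True
```

Flash-style backends reorder this same computation (online softmax), which is why their outputs differ at the last few bits.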
## Embed datasets
All DPLM2 models inherit `EmbeddingMixin`, so you can call `model.embed_dataset(...)` directly.

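Under the hood, an embedding helper like this is a batched loop: pad, forward, pool, collect. A hedged sketch of that pattern with a stand-in module instead of DPLM2 (`DummyEncoder`, `embed_sequences`, and all sizes here are invented for illustration; `embed_dataset`'s actual signature lives in the FastPLMs repository):

```python
import torch
import torch.nn as nn

class DummyEncoder(nn.Module):
    """Stand-in for a protein language model: token ids -> hidden states."""
    def __init__(self, vocab_size: int = 33, hidden: int = 16):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)

    def forward(self, input_ids: torch.Tensor) -> torch.Tensor:
        return self.embed(input_ids)  # (B, L, D)

def embed_sequences(model: nn.Module, seqs: list, batch_size: int = 2) -> torch.Tensor:
    """Pad each batch, run the model, mean-pool over real tokens, collect results."""
    out = []
    model.eval()
    with torch.no_grad():
        for i in range(0, len(seqs), batch_size):
            chunk = seqs[i:i + batch_size]
            max_len = max(len(s) for s in chunk)
            ids = torch.zeros(len(chunk), max_len, dtype=torch.long)
            mask = torch.zeros(len(chunk), max_len)
            for j, s in enumerate(chunk):
                ids[j, :len(s)] = torch.tensor(s)
                mask[j, :len(s)] = 1.0
            hidden = model(ids)
            pooled = (hidden * mask.unsqueeze(-1)).sum(1) / mask.sum(1, keepdim=True)
            out.append(pooled)
    return torch.cat(out)

embeddings = embed_sequences(DummyEncoder(), [[5, 6, 7], [8, 9], [10, 11, 12, 13]])
print(embeddings.shape)  # torch.Size([3, 16])
```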
## Citations

```bibtex
@article{wang2024dplm2,
  title={DPLM-2: A Multimodal Diffusion Protein Language Model},
  author={Wang, Xinyou and Zheng, Zaixiang and Ye, Fei and Xue, Dongyu and Huang, Shujian and Gu, Quanquan},
  journal={arXiv preprint arXiv:2410.13782},
  year={2024}
}
```

```bibtex
@misc{FastPLMs,
  author={Hallee, Logan and Bichara, David and Gleghorn, Jason P.},
  title={FastPLMs: Fast, efficient, protein language model inference from Huggingface AutoModel},
  year={2024},
  url={https://huggingface.co/Synthyra/ESMplusplus_small},
  doi={10.57967/hf/3726},
  publisher={Hugging Face}
}
```

```bibtex
@article{dong2024flexattention,
  title={Flex Attention: A Programming Model for Generating Optimized Attention Kernels},
  author={Dong, Juechu and Feng, Boyuan and Guessous, Driss and Liang, Yanbo and He, Horace},
  journal={arXiv preprint arXiv:2412.05496},
  year={2024}
}
```

```bibtex
@inproceedings{paszke2019pytorch,
  title={PyTorch: An Imperative Style, High-Performance Deep Learning Library},
  author={Paszke, Adam and Gross, Sam and Massa, Francisco and Lerer, Adam and Bradbury, James and Chanan, Gregory and Killeen, Trevor and Lin, Zeming and Gimelshein, Natalia and Antiga, Luca and Desmaison, Alban and K{\"o}pf, Andreas and Yang, Edward and DeVito, Zach and Raison, Martin and Tejani, Alykhan and Chilamkurthy, Sasank and Steiner, Benoit and Fang, Lu and Bai, Junjie and Chintala, Soumith},
  booktitle={Advances in Neural Information Processing Systems 32},
  year={2019}
}
```
 