metayan committed
Commit 83a3934 · 1 Parent(s): 4d9c695

initial ckpts upload

README.md ADDED
@@ -0,0 +1,42 @@
+ ---
+ license: mit
+ tags:
+ - chemistry
+ - gpt2
+ - representation-consistency
+ ---
+
+ # Consistency
+
+ **Architecture:** GPT-2 small
+ **Task:** Forward reaction prediction (input given as either a SMILES string or an IUPAC name)
+ **Training data:** 80k mapped reactions
+ **Checkpoint size:** 124M params
+
+ ## Usage
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ # model trained on SMILES input, without KL divergence loss
+ tok = AutoTokenizer.from_pretrained("bing-yan/consistency", subfolder="nokl-smiles")
+ model = AutoModelForCausalLM.from_pretrained("bing-yan/consistency", subfolder="nokl-smiles")
+
+ # model trained on IUPAC input, without KL divergence loss
+ tok = AutoTokenizer.from_pretrained("bing-yan/consistency", subfolder="nokl-iupac")
+ model = AutoModelForCausalLM.from_pretrained("bing-yan/consistency", subfolder="nokl-iupac")
+
+ # model trained on SMILES input, with KL divergence loss
+ tok = AutoTokenizer.from_pretrained("bing-yan/consistency", subfolder="kl-smiles")
+ model = AutoModelForCausalLM.from_pretrained("bing-yan/consistency", subfolder="kl-smiles")
+
+ # model trained on IUPAC input, with KL divergence loss
+ tok = AutoTokenizer.from_pretrained("bing-yan/consistency", subfolder="kl-iupac")
+ model = AutoModelForCausalLM.from_pretrained("bing-yan/consistency", subfolder="kl-iupac")
+ ```
+
+ ## Citations
+
+ - GitHub: [github.com/bingyan4science/consistency](https://github.com/bingyan4science/consistency)
+ - Dataset (Zenodo): [https://doi.org/10.5281/zenodo.14430369](https://doi.org/10.5281/zenodo.14430369)
+ - Paper: [Inconsistency of LLMs in Molecular Representations](https://chemrxiv.org/engage/chemrxiv/article-details/675b9de27be152b1d0ced2b5)
+
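Once a checkpoint and tokenizer are loaded as in the Usage section, inference can be sketched with a small helper. This is a minimal sketch, not the authors' documented pipeline: the repo does not specify a prompt template or decoding settings, so passing the raw reactant string and default greedy decoding are assumptions, and `predict_product` is a hypothetical helper name.

```python
def predict_product(model, tok, reactant, max_new_tokens=64):
    """Generate a predicted product for a reactant string (SMILES or IUPAC).

    Assumes `model` and `tok` come from the from_pretrained calls above.
    Using the raw reactant text as the prompt is an assumption; the repo
    does not document its input template.
    """
    inputs = tok(reactant, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Drop the echoed prompt tokens; keep only the newly generated ones.
    new_tokens = out[0][inputs["input_ids"].shape[1]:]
    return tok.decode(new_tokens, skip_special_tokens=True)
```

For example, `predict_product(model, tok, "CCO.CC(=O)O")` would return the model's predicted product string for that (illustrative) reactant pair.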
config.json ADDED
@@ -0,0 +1,5 @@
+ {
+ "base_model": "gpt2",
+ "tokenizer_name": "gpt2",
+ "transformers_version": "4.44.2"
+ }
kl-iupac/config.json ADDED
@@ -0,0 +1,5 @@
+ {
+ "base_model": "gpt2",
+ "tokenizer_name": "gpt2",
+ "transformers_version": "4.44.2"
+ }
kl-iupac/pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:14f80ac96e9bc512198d9fbf4ee86fe687606dd35d7dc381d2eac95ba451187c
+ size 497818578
kl-smiles/config.json ADDED
@@ -0,0 +1,5 @@
+ {
+ "base_model": "gpt2",
+ "tokenizer_name": "gpt2",
+ "transformers_version": "4.44.2"
+ }
kl-smiles/pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:64ddca70f2adc5640f65c70e3fc2d3dd202c960ffa66e8034a06ac85c25c339e
+ size 497818578
nokl-iupac/config.json ADDED
@@ -0,0 +1,5 @@
+ {
+ "base_model": "gpt2",
+ "tokenizer_name": "gpt2",
+ "transformers_version": "4.44.2"
+ }
nokl-iupac/pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8a194d460c5d785ab9751cc30e526da28a51cb0c604be1a1c92bfc0e109554ea
+ size 497818578
nokl-smiles/config.json ADDED
@@ -0,0 +1,5 @@
+ {
+ "base_model": "gpt2",
+ "tokenizer_name": "gpt2",
+ "transformers_version": "4.44.2"
+ }
nokl-smiles/pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e67640b4f80479f8eb42cdf0b4b3c1aad8025662cd4a7f1ba211d81f7c01a8eb
+ size 497818578
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e67640b4f80479f8eb42cdf0b4b3c1aad8025662cd4a7f1ba211d81f7c01a8eb
+ size 497818578