Formbench-anon committed
Commit bb6c076 · verified · 1 Parent(s): e7bae1d

Initial release: FormBench TAPT-MNRL model (anonymised for review)

This view is limited to 50 files because it contains too many changes. See raw diff.
1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
+ {
+     "word_embedding_dimension": 1024,
+     "pooling_mode_cls_token": true,
+     "pooling_mode_mean_tokens": false,
+     "pooling_mode_max_tokens": false,
+     "pooling_mode_mean_sqrt_len_tokens": false,
+     "pooling_mode_weightedmean_tokens": false,
+     "pooling_mode_lasttoken": false,
+     "include_prompt": true
+ }
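The pooling flags above select CLS pooling: the sentence embedding is the hidden state of the first ([CLS]) token, with every other pooling mode disabled. A minimal sketch of what this module computes, assuming `token_embeddings` of shape (batch, seq_len, 1024):

```python
import torch

def cls_pool(token_embeddings: torch.Tensor) -> torch.Tensor:
    # pooling_mode_cls_token = true: the 1024-dim sentence embedding is
    # simply the first token's hidden state; no mean/max/last-token pooling.
    return token_embeddings[:, 0]
```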
README.md ADDED
@@ -0,0 +1,123 @@
+ ---
+ license: apache-2.0
+ base_model: Alibaba-NLP/gte-large-en-v1.5
+ library_name: sentence-transformers
+ tags:
+ - sentence-transformers
+ - feature-extraction
+ - sentence-similarity
+ - formbench
+ - patent-retrieval
+ - chemistry
+ - formulations
+ - materials-science
+ language:
+ - en
+ pipeline_tag: sentence-similarity
+ ---
+
+ # gte-large-formbench-mnrl
+
+ A domain-adapted sentence-transformers model derived from
+ [`Alibaba-NLP/gte-large-en-v1.5`](https://huggingface.co/Alibaba-NLP/gte-large-en-v1.5) and fine-tuned on the **FormBench**
+ retrieval benchmark for formulation chemistry. It maps passages from formulation patents
+ into a 1024-dimensional dense vector space and is optimised for within-domain
+ retrieval among structurally similar near-miss passages, the central capability
+ targeted by FormBench.
+
+ This repository hosts an anonymised release for NeurIPS 2026 double-blind review.
+
+ ## Model details
+
+ | Item | Value |
+ |---|---|
+ | Base model | `Alibaba-NLP/gte-large-en-v1.5` (434M params) |
+ | Training method | Task-adaptive pre-training (TAPT) via contrastive fine-tuning |
+ | Loss | `MultipleNegativesRankingLoss` (in-batch negatives) |
+ | Training data | FormBench-Triplets: 44,413 (query, anchor, hard-negative) tuples |
+ | Embedding dimension | 1024 |
+ | Max sequence length | 8192 (training: 2048) |
+ | Precision | bf16 |
+ | Learning rate | 1e-5 |
+ | Per-GPU batch size | 64 |
+ | Epochs | 5 |
+ | Hardware | 8× AMD MI250X, DDP |
+
+ The training-triplet set can be reconstructed from the qrel files in
+ [`Formbench-anon/FormBench`](https://huggingface.co/datasets/Formbench-anon/FormBench)
+ following the protocol in §3 of the paper.
+
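+ As a reproduction aid, here is a minimal sketch of the contrastive fine-tuning step
+ with sentence-transformers. The triplet file name and column layout are illustrative
+ assumptions; §3 of the paper gives the exact protocol.
+
+ ```python
+ from datasets import load_dataset
+ from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
+ from sentence_transformers.losses import MultipleNegativesRankingLoss
+
+ model = SentenceTransformer("Alibaba-NLP/gte-large-en-v1.5", trust_remote_code=True)
+ model.max_seq_length = 2048  # training-time truncation (see table above)
+
+ # Hypothetical file reconstructed from the FormBench qrels, one JSON object
+ # per line with (query, anchor, hard-negative) columns.
+ triplets = load_dataset("json", data_files="formbench_triplets.jsonl", split="train")
+
+ loss = MultipleNegativesRankingLoss(model)  # in-batch negatives
+ trainer = SentenceTransformerTrainer(model=model, train_dataset=triplets, loss=loss)
+ trainer.train()
+ ```
+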
+ ## Evaluation results
+
+ Evaluated on the FormBench test split (n = 5,459 queries) under both corpus variants,
+ following the protocol in §4 of the paper. Retrieval uses FAISS exact inner-product
+ search at top-k = 100.
+
+ ### FormBench-Structured (C1): within-domain near-miss distractors
+
+ | Metric | Value |
+ |---|---:|
+ | Binary nDCG@10 | **0.3561** |
+ | MRR (binary qrels) | 0.3125 |
+ | Graded nDCG@10 | 0.2087 |
+ | R@100 (binary qrels) | 0.7586 |
+ | FAISS search latency | 18.9 ms/query |
+
+ ### FormBench-Random (C0): random-distractor corpus
+
+ | Metric | Value |
+ |---|---:|
+ | Binary nDCG@10 | **0.4308** |
+ | MRR (binary qrels) | 0.3883 |
+ | Graded nDCG@10 | 0.2553 |
+ | R@100 (binary qrels) | 0.8011 |
+ | FAISS search latency | 18.8 ms/query |
+
+ For reference, the BM25 lexical baseline reaches binary nDCG@10 = 0.3751 on C1 and
+ 0.4665 on C0.
+
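+ A minimal sketch of this exact-search setup, assuming the corpus and query embeddings
+ have already been encoded and L2-normalised (so inner product equals cosine similarity):
+
+ ```python
+ import faiss  # assumes faiss-cpu or faiss-gpu is installed
+
+ # passage_embeds, query_embeds: float32 numpy arrays of shape (n, 1024),
+ # L2-normalised so that inner product == cosine similarity.
+ index = faiss.IndexFlatIP(1024)                # exact inner-product index
+ index.add(passage_embeds)
+ scores, ids = index.search(query_embeds, 100)  # top-k = 100, as in the protocol
+ ```
+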
+ ## Usage
+
+ ```python
+ from sentence_transformers import SentenceTransformer
+
+ # trust_remote_code is required: the base model ships a custom
+ # architecture (NewModel) via auto_map.
+ model = SentenceTransformer("Formbench-anon/gte-large-formbench-mnrl", trust_remote_code=True)
+
+ passages = [
+     "An adhesive composition comprising a styrene-acrylate copolymer ...",
+     "A water-based latex paint formulation containing ...",
+ ]
+ queries = [
+     "what wax-seeded latex polymers improve scuff resistance in architectural coatings?",
+ ]
+
+ passage_embeds = model.encode(passages, normalize_embeddings=True)
+ query_embeds = model.encode(queries, normalize_embeddings=True)
+
+ # Cosine similarity equals the dot product on normalised embeddings.
+ scores = query_embeds @ passage_embeds.T
+ ```
+
+ ## Intended use
+
+ Domain-specific retrieval over formulation patents: adhesives, coatings, lubricants,
+ pharmaceuticals, agrochemicals, personal care, and food. Particularly suited to
+ within-domain near-miss discrimination, where general-purpose embedders have been shown
+ to fail.
+
+ ## Limitations
+
+ - Training queries are LLM-generated (generated with Sonnet 4 and quality-filtered with
+   Haiku 4.5) and may not match real practitioner intent.
+ - Coverage is limited to English-language USPTO utility patents (1995–2022).
+ - Performance on out-of-domain retrieval is not characterised.
+
+ ## Citation
+
+ ```bibtex
+ @misc{formbench2026,
+   title  = {{FormBench}: Evaluating Chemical Knowledge Retrieval in Formulation Patents},
+   author = {Anonymous Authors},
+   year   = {2026},
+   note   = {Under double-blind review at NeurIPS 2026 Datasets \& Benchmarks Track}
+ }
+ ```
+
+ ## License
+
+ Apache 2.0, inherited from the base model.
_eval_status.json ADDED
@@ -0,0 +1,20 @@
+ {
+     "c1": {
+         "status": "completed",
+         "queued_by_job": "4376439",
+         "started_at": "2026-04-11T00:16:02.927797",
+         "phase1_job": "4376439",
+         "phase1_completed_at": "2026-04-11T00:26:42.439787",
+         "completed_at": "2026-04-11T02:47:29.358781",
+         "metrics_path": "/lustre/orion/mat721/proj-shared/formbench_ner/experiments/results/gte_large_tapt_mnrl_ckpt-85/c1/metrics.json"
+     },
+     "c0": {
+         "status": "completed",
+         "queued_by_job": "4379497",
+         "started_at": "2026-04-12T08:54:12.288295",
+         "phase1_job": "4379497",
+         "phase1_completed_at": "2026-04-12T09:03:23.968890",
+         "completed_at": "2026-04-12T11:45:23.541570",
+         "metrics_path": "/lustre/orion/mat721/proj-shared/formbench_ner/experiments/results/gte_large_tapt_mnrl_ckpt-85/c0/metrics.json"
+     }
+ }
_legacy_eval_status.json ADDED
@@ -0,0 +1,10 @@
+ {
+     "status": "completed",
+     "corpus": "c1",
+     "queued_by_job": "4376439",
+     "started_at": "2026-04-11T00:16:02.927797",
+     "phase1_job": "4376439",
+     "phase1_completed_at": "2026-04-11T00:26:42.439787",
+     "completed_at": "2026-04-11T02:47:29.358781",
+     "metrics_path": "/lustre/orion/mat721/proj-shared/formbench_ner/experiments/results/gte_large_tapt_mnrl_ckpt-85/c1/metrics.json"
+ }
config.json ADDED
@@ -0,0 +1,43 @@
+ {
+     "architectures": [
+         "NewModel"
+     ],
+     "attention_probs_dropout_prob": 0.0,
+     "auto_map": {
+         "AutoConfig": "configuration.NewConfig",
+         "AutoModel": "Alibaba-NLP/new-impl--modeling.NewModel",
+         "AutoModelForMaskedLM": "Alibaba-NLP/new-impl--modeling.NewForMaskedLM",
+         "AutoModelForMultipleChoice": "Alibaba-NLP/new-impl--modeling.NewForMultipleChoice",
+         "AutoModelForQuestionAnswering": "Alibaba-NLP/new-impl--modeling.NewForQuestionAnswering",
+         "AutoModelForSequenceClassification": "Alibaba-NLP/new-impl--modeling.NewForSequenceClassification",
+         "AutoModelForTokenClassification": "Alibaba-NLP/new-impl--modeling.NewForTokenClassification"
+     },
+     "classifier_dropout": null,
+     "hidden_act": "gelu",
+     "hidden_dropout_prob": 0.1,
+     "hidden_size": 1024,
+     "initializer_range": 0.02,
+     "intermediate_size": 4096,
+     "layer_norm_eps": 1e-12,
+     "layer_norm_type": "layer_norm",
+     "logn_attention_clip1": false,
+     "logn_attention_scale": false,
+     "max_position_embeddings": 8192,
+     "model_type": "new",
+     "num_attention_heads": 16,
+     "num_hidden_layers": 24,
+     "pack_qkv": true,
+     "pad_token_id": 0,
+     "position_embedding_type": "rope",
+     "rope_scaling": {
+         "factor": 2.0,
+         "type": "ntk"
+     },
+     "rope_theta": 160000,
+     "torch_dtype": "float32",
+     "transformers_version": "4.51.3",
+     "type_vocab_size": 2,
+     "unpad_inputs": false,
+     "use_memory_efficient_attention": false,
+     "vocab_size": 30528
+ }
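Because `auto_map` routes `AutoModel` to the remote `Alibaba-NLP/new-impl` implementation, loading this checkpoint through plain transformers needs `trust_remote_code` (a minimal sketch):

```python
from transformers import AutoModel, AutoTokenizer

repo = "Formbench-anon/gte-large-formbench-mnrl"
tokenizer = AutoTokenizer.from_pretrained(repo)
# Downloads and runs NewModel from the Alibaba-NLP/new-impl code repo.
model = AutoModel.from_pretrained(repo, trust_remote_code=True)
```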
config_sentence_transformers.json ADDED
@@ -0,0 +1,14 @@
+ {
+     "model_type": "SentenceTransformer",
+     "__version__": {
+         "sentence_transformers": "5.3.0",
+         "transformers": "4.51.3",
+         "pytorch": "2.3.1+rocm5.7"
+     },
+     "prompts": {
+         "query": "",
+         "document": ""
+     },
+     "default_prompt_name": null,
+     "similarity_fn_name": "cosine"
+ }
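The empty `prompts` mean no instruction prefix is prepended to queries or documents, and `similarity_fn_name` makes `model.similarity` return cosine scores. A short sketch of the call this config drives:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Formbench-anon/gte-large-formbench-mnrl", trust_remote_code=True)
# With "similarity_fn_name": "cosine", this returns a matrix of cosine similarities.
scores = model.similarity(model.encode(["scuff-resistant coating"]),
                          model.encode(["latex paint", "epoxy adhesive"]))
```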
configuration.py ADDED
@@ -0,0 +1,145 @@
+ # coding=utf-8
+ # Copyright 2024 The GTE Team Authors and Alibaba Group.
+ # Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """NEW model configuration"""
+ from transformers.configuration_utils import PretrainedConfig
+ from transformers.utils import logging
+
+ logger = logging.get_logger(__name__)
+
+
+ class NewConfig(PretrainedConfig):
+     r"""
+     This is the configuration class to store the configuration of a [`NewModel`] or a [`TFNewModel`]. It is used to
+     instantiate a NEW model according to the specified arguments, defining the model architecture. Instantiating a
+     configuration with the defaults will yield a similar configuration to that of the NEW
+     [izhx/new-base-en](https://huggingface.co/izhx/new-base-en) architecture.
+
+     Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
+     documentation from [`PretrainedConfig`] for more information.
+
+     Args:
+         vocab_size (`int`, *optional*, defaults to 30522):
+             Vocabulary size of the NEW model. Defines the number of different tokens that can be represented by the
+             `inputs_ids` passed when calling [`NewModel`] or [`TFNewModel`].
+         hidden_size (`int`, *optional*, defaults to 768):
+             Dimensionality of the encoder layers and the pooler layer.
+         num_hidden_layers (`int`, *optional*, defaults to 12):
+             Number of hidden layers in the Transformer encoder.
+         num_attention_heads (`int`, *optional*, defaults to 12):
+             Number of attention heads for each attention layer in the Transformer encoder.
+         intermediate_size (`int`, *optional*, defaults to 3072):
+             Dimensionality of the "intermediate" (often named feed-forward) layer in the Transformer encoder.
+         hidden_act (`str` or `Callable`, *optional*, defaults to `"gelu"`):
+             The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
+             `"relu"`, `"silu"` and `"gelu_new"` are supported.
+         hidden_dropout_prob (`float`, *optional*, defaults to 0.1):
+             The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
+         attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1):
+             The dropout ratio for the attention probabilities.
+         max_position_embeddings (`int`, *optional*, defaults to 512):
+             The maximum sequence length that this model might ever be used with. Typically set this to something large
+             just in case (e.g., 512 or 1024 or 2048).
+         type_vocab_size (`int`, *optional*, defaults to 2):
+             The vocabulary size of the `token_type_ids` passed when calling [`NewModel`] or [`TFNewModel`].
+         initializer_range (`float`, *optional*, defaults to 0.02):
+             The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
+         layer_norm_eps (`float`, *optional*, defaults to 1e-12):
+             The epsilon used by the layer normalization layers.
+         position_embedding_type (`str`, *optional*, defaults to `"rope"`):
+             Type of position embedding. Choose one of `"absolute"`, `"rope"`.
+         rope_theta (`float`, *optional*, defaults to 10000.0):
+             The base period of the RoPE embeddings.
+         rope_scaling (`Dict`, *optional*):
+             Dictionary containing the scaling configuration for the RoPE embeddings. Currently supports two scaling
+             strategies: linear and dynamic. Their scaling factor must be a float greater than 1. The expected format is
+             `{"type": strategy name, "factor": scaling factor}`. When using this flag, don't update
+             `max_position_embeddings` to the expected new maximum. See the following thread for more information on how
+             these scaling strategies behave:
+             https://www.reddit.com/r/LocalLLaMA/comments/14mrgpr/dynamically_scaled_rope_further_increases/. This is an
+             experimental feature, subject to breaking API changes in future versions.
+         classifier_dropout (`float`, *optional*):
+             The dropout ratio for the classification head.
+
+     Examples:
+
+     ```python
+     >>> from transformers import NewConfig, NewModel
+
+     >>> # Initializing a NEW izhx/new-base-en style configuration
+     >>> configuration = NewConfig()
+
+     >>> # Initializing a model (with random weights) from the izhx/new-base-en style configuration
+     >>> model = NewModel(configuration)
+
+     >>> # Accessing the model configuration
+     >>> configuration = model.config
+     ```"""
+
+     model_type = "new"
+
+     def __init__(
+         self,
+         vocab_size=30528,
+         hidden_size=768,
+         num_hidden_layers=12,
+         num_attention_heads=12,
+         intermediate_size=3072,
+         hidden_act="gelu",
+         hidden_dropout_prob=0.1,
+         attention_probs_dropout_prob=0.0,
+         max_position_embeddings=2048,
+         type_vocab_size=1,
+         initializer_range=0.02,
+         layer_norm_type='layer_norm',
+         layer_norm_eps=1e-12,
+         # pad_token_id=0,
+         position_embedding_type="rope",
+         rope_theta=10000.0,
+         rope_scaling=None,
+         classifier_dropout=None,
+         pack_qkv=True,
+         unpad_inputs=False,
+         use_memory_efficient_attention=False,
+         logn_attention_scale=False,
+         logn_attention_clip1=False,
+         **kwargs,
+     ):
+         super().__init__(**kwargs)
+
+         self.vocab_size = vocab_size
+         self.hidden_size = hidden_size
+         self.num_hidden_layers = num_hidden_layers
+         self.num_attention_heads = num_attention_heads
+         self.hidden_act = hidden_act
+         self.intermediate_size = intermediate_size
+         self.hidden_dropout_prob = hidden_dropout_prob
+         self.attention_probs_dropout_prob = attention_probs_dropout_prob
+         self.max_position_embeddings = max_position_embeddings
+         self.type_vocab_size = type_vocab_size
+         self.initializer_range = initializer_range
+         self.layer_norm_type = layer_norm_type
+         self.layer_norm_eps = layer_norm_eps
+         self.position_embedding_type = position_embedding_type
+         self.rope_theta = rope_theta
+         self.rope_scaling = rope_scaling
+         self.classifier_dropout = classifier_dropout
+
+         self.pack_qkv = pack_qkv
+         self.unpad_inputs = unpad_inputs
+         self.use_memory_efficient_attention = use_memory_efficient_attention
+         self.logn_attention_scale = logn_attention_scale
+         self.logn_attention_clip1 = logn_attention_clip1
modules.json ADDED
@@ -0,0 +1,14 @@
+ [
+     {
+         "idx": 0,
+         "name": "0",
+         "path": "",
+         "type": "sentence_transformers.models.Transformer"
+     },
+     {
+         "idx": 1,
+         "name": "1",
+         "path": "1_Pooling",
+         "type": "sentence_transformers.models.Pooling"
+     }
+ ]
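modules.json declares the two-stage sentence-transformers pipeline: the transformer encoder at the repo root followed by the CLS-pooling module in `1_Pooling`. A minimal sketch of the equivalent manual assembly (argument names per the sentence-transformers models API):

```python
from sentence_transformers import SentenceTransformer, models

repo = "Formbench-anon/gte-large-formbench-mnrl"
# Module 0: the transformer encoder (custom NewModel, hence trust_remote_code).
transformer = models.Transformer(
    repo,
    model_args={"trust_remote_code": True},
    config_args={"trust_remote_code": True},
)
# Module 1: CLS pooling over the 1024-dim token embeddings.
pooling = models.Pooling(transformer.get_word_embedding_dimension(), pooling_mode="cls")
model = SentenceTransformer(modules=[transformer, pooling])
```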
optimizer.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:64397c7aa4a2dacf444d67bec7497fccfe978b73ade96377af9ac1f0bf8ced9c
+ size 3473337082
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fd6a8831041ff301f4f67aa54e8d45b5f30ff5751ad965cfd422906930f76f5e
+ size 1736645498
rng_state_0.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0a83ef4343978d1f7e354b9516f75e630a3960e0d739b3650a20c810721edaf2
+ size 15920
rng_state_1.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:de14dc30c6f7be7943f49d41f1e3e29c4132bb9e785015f146b8f5e347e10237
+ size 15920
rng_state_10.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dd0005acbbdf8a7b33339cec975a7a9002279c012581ed9bc52386b57a5f78c0
+ size 15933
rng_state_11.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ab2f19d08ef417c08d372915e6c1c93ba158084da24e66e7e69ceba64977f822
+ size 15933
rng_state_12.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ec9c188fb842911bb8b6832fda1848c7f993dae8810cc0bc97d3c35e09557550
+ size 15933
rng_state_13.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b7b29bd36f46c6ee121f123b48380b1434092e4e4ec4b3f251a1911cc26badf9
+ size 15933
rng_state_14.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bbe2bf360e95a1410aeaab45f8f0deacb8845d23e81d1ed3efcc6bef3348141b
+ size 15933
rng_state_15.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:24769949e2b3ed45cbe89647ab9edf1d27acbc9ae0c4e412c0273998d7056b41
+ size 15933
rng_state_16.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ca38c34e6e6bfbd0ad144bd0fb192a753c34f1896c62e0b271e243ea046f7ec2
+ size 15933
rng_state_17.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:984cb7509354d586bc3f8b5774c344604cc4d3472a68c7a059917a3b76c090d3
+ size 15933
rng_state_18.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2be0fb614921e973fd54938ecedb5c81f4762d528c90bbe7284d4970faba5c25
+ size 15933
rng_state_19.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8e8085779ca29c2ab435d5ac3ff11bb782485330cfee05788995792689e53771
+ size 15933
rng_state_2.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:767be626dfff661a14d59fc41ad556726f94e9037bef00f75faa4705366f308f
+ size 15920
rng_state_20.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b761c520d528980fb566e4bfcdd045b19ce70bcec96ff0c25deb62644cefc929
+ size 15933
rng_state_21.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:165ebec7d73c62bf63402403002a7a60b5f4b154c43de1ec0ae17dd5f29fd9ec
+ size 15933
rng_state_22.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b6237a08e6004d89ca840ae063b4d1b75b246dbbb105c14e13588fac71407ea0
+ size 15933
rng_state_23.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:44eb87a2d89f248d700b3f94a133d9212ea0095ee45e8b5c955f154255ee8bae
+ size 15933
rng_state_24.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2eb212bb16c6386b3113bb6834868ce993b3a85f2ca54eaa471f2f7bd25767be
+ size 15933
rng_state_25.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b10539066b224b5ba21d892902fd2f006141e0a9207427a28013eca1e1f99a72
+ size 15933
rng_state_26.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bac858cefded2051d3e8387005664a6ee331a2d413b439619691b2459e726f65
+ size 15933
rng_state_27.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:35835a5a9f00ec15c406741374c0f78837b21c6fb3f205b3b163b5e23de93b62
+ size 15933
rng_state_28.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3d7c5b2869983c9fa4ff7da4eff8b1176c7bdbb2ea2b373f5dee3aed5c350094
+ size 15933
rng_state_29.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:376a17f3b745442b31c9819dcec3e7d0d416125d89f759f52767794f9d9d5a70
+ size 15933
rng_state_3.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:649ec3aad85bd77fc0d98f7ae416fe033bd1cdd23b29ee5f779bc0a4e2ccf7e9
+ size 15920
rng_state_30.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3e0fc0234ebd2bcb3165419b5475ccc70e7250391ac5762275e0682c3c002dbf
+ size 15933
rng_state_31.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:933a4b82cc7e76f1d159ea8dac4948c6b6b45f76854171fa870f4905f2af0594
+ size 15933
rng_state_32.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8871c23bab1d989131031089b50196f6c5b3206f168a1c61412c09ba5eb18420
+ size 15933
rng_state_33.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:32fa3f16a069d2d4214e737f40c3e6b1723ac4c8dc3d26b1fd842555bc4cb2a5
+ size 15933
rng_state_34.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:84ecca3c5e1a199faf7c11d1182589f1de34841cc34c13fe351759a1dc76151f
+ size 15933
rng_state_35.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b6c0a39eff9cc1e09770cbcf82db8f18be43b3cd0956061f56759d4e7276adf1
+ size 15933
rng_state_36.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:07b7c835b039d139939a4dc75daddb87fbf69ba4beb52897a60c2fe2d3cc96ab
+ size 15933
rng_state_37.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c51b2d27904e35cd28248d5803ae58400b0ce5638590a32b1d5bfbfae973319e
+ size 15933
rng_state_38.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3f8340152c3acf3226e2687bd202784e3c414fe64ac77996c7df3523e32e8f66
+ size 15933
rng_state_39.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4a2def22827769419f3abcce05feb539210d4a88bf774fa573937252036b5bef
+ size 15933
rng_state_4.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3e69a12d9a9c3b609f7eab473fb8f98242e1e2aab7922032ad286bdddf3d1ab5
+ size 15920
rng_state_40.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e5d7acd9d9547f49b3277b702a1c42433323e470120cac7b41ace52bf203386d
+ size 15933
rng_state_41.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2cfc6c6d2293b64e3cd4f82049f6553c4b1b52015688157a87c1b5e16591e4cb
+ size 15933
rng_state_42.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a8ca7700b3d83b162cf47fa8c703f7a24d0885ae1505c79af84e527ce88f8229
+ size 15933
rng_state_43.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4cc88406bac48feadcc90b9c5accc876e54449ef585466fd58c62a4de4d48d2f
+ size 15933
rng_state_44.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3dfe4dcee08e50612f4fc23e4f008f7ae68d90a0b34af1ad41e086cb88524ff4
+ size 15933