Sentence Similarity
sentence-transformers
Safetensors
mpnet
feature-extraction
Generated from Trainer
dataset_size:1084
loss:ContrastiveTensionLossInBatchNegatives
text-embeddings-inference
Instructions for using andreyunic23/beds_step4 with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- sentence-transformers
How to use andreyunic23/beds_step4 with sentence-transformers:
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("andreyunic23/beds_step4")

sentences = [
    "Heat and smoke detectors should trigger an alarm and extinguishing systems.",
    "Laboratory Operators must be trained to manipulate energetic materials correctly.",
    "Loss or damage to test environments.",
    "Heat and smoke detectors should trigger an alarm and extinguishing systems.",
]
embeddings = model.encode(sentences)
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)  # [4, 4]
```
- Notebooks
- Google Colab
- Kaggle
Add new SentenceTransformer model.
- 1_Pooling/config.json +10 -0
- README.md +356 -0
- config.json +24 -0
- config_sentence_transformers.json +10 -0
- model.safetensors +3 -0
- modules.json +20 -0
- sentence_bert_config.json +4 -0
- special_tokens_map.json +51 -0
- tokenizer.json +0 -0
- tokenizer_config.json +72 -0
- vocab.txt +0 -0
1_Pooling/config.json
ADDED
```json
{
  "word_embedding_dimension": 768,
  "pooling_mode_cls_token": false,
  "pooling_mode_mean_tokens": true,
  "pooling_mode_max_tokens": false,
  "pooling_mode_mean_sqrt_len_tokens": false,
  "pooling_mode_weightedmean_tokens": false,
  "pooling_mode_lasttoken": false,
  "include_prompt": true
}
```
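This pooling configuration selects mean pooling: token embeddings are averaged, weighted by the attention mask so padding positions do not contribute. Below is a minimal, self-contained sketch of that operation for a 768-dimensional encoder; it is an illustration, not the library's internal implementation.

```python
import torch

def mean_pool(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    """Average token embeddings over real (non-padding) tokens."""
    mask = attention_mask.unsqueeze(-1).to(token_embeddings.dtype)  # (batch, seq, 1)
    summed = (token_embeddings * mask).sum(dim=1)                   # (batch, 768)
    counts = mask.sum(dim=1).clamp(min=1e-9)                        # avoid division by zero
    return summed / counts

# Dummy example matching word_embedding_dimension = 768
tokens = torch.randn(2, 10, 768)
mask = torch.ones(2, 10, dtype=torch.long)
print(mean_pool(tokens, mask).shape)  # torch.Size([2, 768])
```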
README.md
ADDED
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:1084
- loss:ContrastiveTensionLossInBatchNegatives
base_model: sentence-transformers/all-mpnet-base-v2
widget:
- source_sentence: Heat and smoke detectors should trigger an alarm and extinguishing
    systems.
  sentences:
  - Laboratory Operators must be trained to manipulate energetic materials correctly.
  - Loss or damage to test environments.
  - Heat and smoke detectors should trigger an alarm and extinguishing systems.
- source_sentence: Sensitive information regarding the aircrafts, vehicles, payloads
    and ground support systems designs or procedures; personnel data; production process;
    material; organic or operational safety information, among others, shall be kept
    with the allocated restricted access.
  sentences:
  - Take off / landing executed only when runway is empty.
  - Sensitive information regarding the aircrafts, vehicles, payloads and ground support
    systems designs or procedures; personnel data; production process; material; organic
    or operational safety information, among others, shall be kept with the allocated
    restricted access.
  - Pilots must not execute maneuver with incorrect climb rate, final altitude, etc.
- source_sentence: Doors close while person in the doorway.
  sentences:
  - ACS must not provide attitude maneuver commands too late after ASTRO-H has rotated
    too far.
  - A/C must maintain minimum safe altitude limits.
  - Doors close while person in the doorway.
- source_sentence: Doors shall remain closed when train is moving.
  sentences:
  - Doors shall remain closed when train is moving.
  - Catching fire inside the ship.
  - Aircraft enters uncontrolled state.
- source_sentence: Customer's data released to public.
  sentences:
  - Exposure of Earth life or human assets off Earth to toxic, radioactive, or energetic
    elements of mission hardware.
  - Customer's data released to public.
  - Equipment Operated Beyond Limits.
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---

# SentenceTransformer based on sentence-transformers/all-mpnet-base-v2

This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) <!-- at revision 9a3225965996d404b775526de6dbfe85d3368642 -->
- **Maximum Sequence Length:** 384 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("andreyunic23/beds_step4")
# Run inference
sentences = [
    "Customer's data released to public.",
    "Customer's data released to public.",
    'Equipment Operated Beyond Limits.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```

<!--
### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>

</details>
-->

<!--
### Downstream Usage (Sentence Transformers)

You can finetune this model on your own dataset.

<details><summary>Click to expand</summary>

</details>
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

<!--
## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!--
### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->

## Training Details

### Training Dataset

#### Unnamed Dataset

* Size: 1,084 training samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
  |         | sentence1                                                                         | sentence2                                                                         | label                        |
  |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:------------------------------|
  | type    | string                                                                            | string                                                                            | int                          |
  | details | <ul><li>min: 4 tokens</li><li>mean: 13.7 tokens</li><li>max: 48 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 13.7 tokens</li><li>max: 48 tokens</li></ul> | <ul><li>0: 100.00%</li></ul> |
* Samples:
  | sentence1                                                                                                            | sentence2                                                                                                            | label          |
  |:----------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------|:----------------|
  | <code>A collision between the ACROBOTER robotic platform and an unknown object must be avoided at all times.</code> | <code>A collision between the ACROBOTER robotic platform and an unknown object must be avoided at all times.</code> | <code>0</code> |
  | <code>A non‐patient is injured or killed by radiation.</code>                                                        | <code>A non‐patient is injured or killed by radiation.</code>                                                        | <code>0</code> |
  | <code>A nonpatient is injured or killed in the process of MRI simulation.</code>                                     | <code>A nonpatient is injured or killed in the process of MRI simulation.</code>                                     | <code>0</code> |
* Loss: [<code>ContrastiveTensionLossInBatchNegatives</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#contrastivetensionlossinbatchnegatives)
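
As a rough illustration only (not the exact script used for this model), pairs of identical sentences like the samples above can be combined with this loss and the non-default hyperparameters listed below through the Sentence Transformers trainer. The dataset rows and `output_dir` here are placeholders.

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
    losses,
)

# Placeholder data; the real dataset holds 1,084 (sentence1, sentence2) pairs.
train_dataset = Dataset.from_dict({
    "sentence1": ["Doors shall remain closed when train is moving."],
    "sentence2": ["Doors shall remain closed when train is moving."],
})

model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")
loss = losses.ContrastiveTensionLossInBatchNegatives(model)

args = SentenceTransformerTrainingArguments(
    output_dir="beds_step4",          # placeholder output directory
    num_train_epochs=12,
    per_device_train_batch_size=16,
    learning_rate=2e-5,
)

trainer = SentenceTransformerTrainer(
    model=model, args=args, train_dataset=train_dataset, loss=loss
)
trainer.train()
```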

### Training Hyperparameters
#### Non-Default Hyperparameters

- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 12

#### All Hyperparameters
<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 12
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional

</details>

### Training Logs
| Epoch  | Step | Training Loss |
|:------:|:----:|:-------------:|
| 7.3529 | 500  | 0.0658        |


### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.1.1
- Transformers: 4.45.2
- PyTorch: 2.5.1+cu121
- Accelerate: 1.2.1
- Datasets: 3.2.0
- Tokenizers: 0.20.3

## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### ContrastiveTensionLossInBatchNegatives
```bibtex
@inproceedings{carlsson2021semantic,
    title={Semantic Re-tuning with Contrastive Tension},
    author={Fredrik Carlsson and Amaru Cuba Gyllensten and Evangelia Gogoulou and Erik Ylip{\"a}{\"a} Hellqvist and Magnus Sahlgren},
    booktitle={International Conference on Learning Representations},
    year={2021},
    url={https://openreview.net/forum?id=Ov_sMNau-PF}
}
```

<!--
## Glossary

*Clearly define terms in order to be accessible across audiences.*
-->

<!--
## Model Card Authors

*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->

<!--
## Model Card Contact

*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
config.json
ADDED
```json
{
  "_name_or_path": "sentence-transformers/all-mpnet-base-v2",
  "architectures": [
    "MPNetModel"
  ],
  "attention_probs_dropout_prob": 0.1,
  "bos_token_id": 0,
  "eos_token_id": 2,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "layer_norm_eps": 1e-05,
  "max_position_embeddings": 514,
  "model_type": "mpnet",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "pad_token_id": 1,
  "relative_attention_num_buckets": 32,
  "torch_dtype": "float32",
  "transformers_version": "4.45.2",
  "vocab_size": 30527
}
```
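This is the standard MPNet encoder configuration (12 layers, 12 attention heads, hidden size 768). If only raw token embeddings are needed, the checkpoint can also be loaded directly through 🤗 Transformers; note this skips the pooling and normalization stages described in the model card, so the outputs are not sentence embeddings yet. A brief sketch:

```python
from transformers import AutoConfig, AutoModel, AutoTokenizer

config = AutoConfig.from_pretrained("andreyunic23/beds_step4")
print(config.model_type, config.hidden_size, config.num_hidden_layers)  # mpnet 768 12

tokenizer = AutoTokenizer.from_pretrained("andreyunic23/beds_step4")
model = AutoModel.from_pretrained("andreyunic23/beds_step4")

inputs = tokenizer("Doors close while person in the doorway.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, seq_len, 768) token embeddings
```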
config_sentence_transformers.json
ADDED
```json
{
  "__version__": {
    "sentence_transformers": "3.1.1",
    "transformers": "4.45.2",
    "pytorch": "2.5.1+cu121"
  },
  "prompts": {},
  "default_prompt_name": null,
  "similarity_fn_name": null
}
```
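With `similarity_fn_name` left as `null`, Sentence Transformers falls back to cosine similarity for `model.similarity(...)`. The setting can be inspected or overridden at load time; a small illustration (the `"dot"` choice below is only an example, not a recommendation):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("andreyunic23/beds_step4")
# similarity_fn_name is null in the config, so cosine similarity is used by default
print(model.similarity_fn_name)

# An alternative similarity function can be requested explicitly at load time
model_dot = SentenceTransformer("andreyunic23/beds_step4", similarity_fn_name="dot")
```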
model.safetensors
ADDED
```
version https://git-lfs.github.com/spec/v1
oid sha256:f114f6140ce59e2ced55930f63d70f1859d57ba3d62bc9d6335dcd35381d6d04
size 437967672
```
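The weights themselves are stored via Git LFS; the entry above is only the pointer. If needed, a locally downloaded `model.safetensors` can be checked against the pointer's hash and size (the file path below is an assumption):

```python
import hashlib
from pathlib import Path

path = Path("model.safetensors")  # adjust to wherever the file was downloaded

digest = hashlib.sha256()
with path.open("rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        digest.update(chunk)

print(digest.hexdigest())   # expect f114f6140ce59e2ced55930f63d70f1859d57ba3d62bc9d6335dcd35381d6d04
print(path.stat().st_size)  # expect 437967672
```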
modules.json
ADDED
```json
[
  {
    "idx": 0,
    "name": "0",
    "path": "",
    "type": "sentence_transformers.models.Transformer"
  },
  {
    "idx": 1,
    "name": "1",
    "path": "1_Pooling",
    "type": "sentence_transformers.models.Pooling"
  },
  {
    "idx": 2,
    "name": "2",
    "path": "2_Normalize",
    "type": "sentence_transformers.models.Normalize"
  }
]
```
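modules.json chains the three stages: the MPNet transformer, the mean-pooling module from `1_Pooling`, and a final L2 normalization. An equivalent pipeline can be assembled by hand with the `models` API, shown as a sketch below; in practice, `SentenceTransformer("andreyunic23/beds_step4")` does this wiring automatically.

```python
from sentence_transformers import SentenceTransformer, models

# Stage 0: MPNet transformer with the 384-token limit from sentence_bert_config.json
word_embedding = models.Transformer("andreyunic23/beds_step4", max_seq_length=384)
# Stage 1: mean pooling, matching 1_Pooling/config.json
pooling = models.Pooling(word_embedding.get_word_embedding_dimension(), pooling_mode="mean")
# Stage 2: L2-normalize the sentence embeddings
normalize = models.Normalize()

model = SentenceTransformer(modules=[word_embedding, pooling, normalize])
print(model.encode(["Doors shall remain closed when train is moving."]).shape)  # (1, 768)
```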
sentence_bert_config.json
ADDED
```json
{
  "max_seq_length": 384,
  "do_lower_case": false
}
```
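This file supplies the 384-token truncation limit used by the Transformer module. After loading, it is exposed as `model.max_seq_length` and can be lowered if shorter inputs suffice:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("andreyunic23/beds_step4")
print(model.max_seq_length)  # 384, from sentence_bert_config.json

# Longer inputs are truncated; the limit can be reduced to speed up encoding
model.max_seq_length = 256
```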
special_tokens_map.json
ADDED
```json
{
  "bos_token": {
    "content": "<s>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "cls_token": {
    "content": "<s>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "eos_token": {
    "content": "</s>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "mask_token": {
    "content": "<mask>",
    "lstrip": true,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "<pad>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "sep_token": {
    "content": "</s>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "unk_token": {
    "content": "[UNK]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
```
tokenizer.json
ADDED
The diff for this file is too large to render. See raw diff.
tokenizer_config.json
ADDED
```json
{
  "added_tokens_decoder": {
    "0": {
      "content": "<s>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "1": {
      "content": "<pad>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "2": {
      "content": "</s>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "3": {
      "content": "<unk>",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "104": {
      "content": "[UNK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "30526": {
      "content": "<mask>",
      "lstrip": true,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "bos_token": "<s>",
  "clean_up_tokenization_spaces": false,
  "cls_token": "<s>",
  "do_lower_case": true,
  "eos_token": "</s>",
  "mask_token": "<mask>",
  "max_length": 128,
  "model_max_length": 384,
  "pad_to_multiple_of": null,
  "pad_token": "<pad>",
  "pad_token_type_id": 0,
  "padding_side": "right",
  "sep_token": "</s>",
  "stride": 0,
  "strip_accents": null,
  "tokenize_chinese_chars": true,
  "tokenizer_class": "MPNetTokenizer",
  "truncation_side": "right",
  "truncation_strategy": "longest_first",
  "unk_token": "[UNK]"
}
```
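The tokenizer is an MPNet tokenizer with `<s>`/`</s>` as CLS/SEP, `<pad>` for padding, `<mask>` for masking, and `[UNK]` for unknown tokens. These settings can be inspected directly with 🤗 Transformers:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("andreyunic23/beds_step4")
print(tokenizer.cls_token, tokenizer.sep_token, tokenizer.pad_token,
      tokenizer.mask_token, tokenizer.unk_token)

encoded = tokenizer("Customer's data released to public.")
print(encoded["input_ids"])  # starts with the <s> id (0) and ends with the </s> id (2)
```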
vocab.txt
ADDED
The diff for this file is too large to render. See raw diff.