This is a sentence-transformers model finetuned from sentence-transformers/all-MiniLM-L6-v2 on the json dataset. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Full model architecture:

SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
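Module (1) mean-pools the token embeddings and module (2) L2-normalizes the pooled vector. If you want to reproduce that pipeline with plain transformers, here is a minimal sketch, assuming the repository's BertModel weights load via AutoModel (as is usual for sentence-transformers repositories):

import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

repo = "GbrlOl/finetune-embedding-all-MiniLM-L6-v2-geotechnical-test-v1"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModel.from_pretrained(repo)

def mean_pooling(last_hidden_state, attention_mask):
    # Average token embeddings, ignoring padding positions
    mask = attention_mask.unsqueeze(-1).expand(last_hidden_state.size()).float()
    return (last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)

encoded = tokenizer(["¿Es estable el depósito?"], padding=True, truncation=True,
                    max_length=256, return_tensors="pt")
with torch.no_grad():
    output = model(**encoded)
embeddings = mean_pooling(output.last_hidden_state, encoded["attention_mask"])
embeddings = F.normalize(embeddings, p=2, dim=1)  # module (2): Normalize()
print(embeddings.shape)  # torch.Size([1, 384])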
To use the model through the Sentence Transformers API instead, first install the library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("GbrlOl/finetune-embedding-all-MiniLM-L6-v2-geotechnical-test-v1")
# Run inference
sentences = [
'¿Qué medidas incorpora el plan de cierre aprobado en el año 2010?',
'P or lo tanto, en atención a \nlos puntos precedentes, la elaboración de este plan de cierre está focalizada en cumplir con los \nrequerimientos establecidos en el Régimen Transitorio de la Ley de C ierre y las guías \nmetodológicas desarrolladas por SERNAGEOMIN a este respecto, realizando una valorización del \núltimo plan aprobado bajo la Resolución Nº 0687 del 03 de Agosto de 2010 de SERNAGEOMIN. \n \nEl plan de cierre incorpora las medidas presentadas en el plan de cierre aprobado el año 2010, los \ncompromisos ambientales emanados de la tramitación de proyectos en el Sistema de Evaluación \nde Impacto Ambiental (SEIA), y los compromisos de cierre establecidos en las resoluciones \notorgadas por SERNAGEOMIN a la faena de Guanaco Compañía Minera. \n \nPosterior a la entrada en vigencia de la Ley 19.300 de Bases Generales de Medio Ambiente, \nGuanaco Compañía Minera ha presentado 4 proyectos los cuales mejoran, modifican o amplían \nprocesos mineros de la faena.',
'15 metros de espesor \nsobre una superficie de 31.410 m2 m3 \n4.711,50 \n \n0,17 \n \n796,63 \nCosto del material (limos) \nEstimación del costo del material a \nutilizar para el cubrimiento del \ndepósito \ngl \n1,00 \n \n833,91 \n \n833,91 \nDisposición de estrato de suelo vegetal \nsobre la superficie y taludes del depósito \nCapa de 0,3 metros de espesor sobre \nuna superficie de 31.410 m2 m3 \n9.423,00 \n \n0,17 \n \n1.593,25 \nNivelación de la superficie del depósito \n(tipo "domo") \nNivelación en una superficie estimada \nde 31.410 m2 m2 \n31.410,00 \n \n0,01 \n \n369,97 \nCierre de accesos Pretil de 1,5 m de altura y 3 m de m3',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
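Beyond pairwise similarity, the embeddings support semantic search over a document collection. A short sketch using util.semantic_search (the corpus contents here are illustrative; the second entry reuses a truncated sample from the training data):

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("GbrlOl/finetune-embedding-all-MiniLM-L6-v2-geotechnical-test-v1")

# Illustrative corpus; in practice, paragraphs from mine-closure documents
corpus = [
    "El plan de cierre incorpora las medidas presentadas en el plan de cierre aprobado el año 2010.",
    "La superficie de la corona será paralela a la",
]
query = "¿Qué medidas incorpora el plan de cierre aprobado en el año 2010?"

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Rank corpus entries by cosine similarity to the query
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(f"{hit['score']:.4f}  {corpus[hit['corpus_id']]}")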
The training dataset uses three columns: query, sentence, and label.

| | query | sentence | label |
|---|---|---|---|
| type | string | string | int |

Samples:

| query | sentence | label |
|---|---|---|
| ¿Es estable el depósito? | 28 | 0 |
| ¿Es estable el depósito? | La superficie de la corona será paralela a la | 0 |
| ¿Cuál es la elevación máxima de la corona del depósito? | La superficie de la corona será paralela a la | 1 |
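For illustration, pairs in this format can be assembled into a Hugging Face datasets.Dataset with the same three columns. A minimal sketch using the (truncated) rows from the table above:

from datasets import Dataset

# Columns match the training data schema: query, sentence, label
# (label 1 = the sentence answers the query, 0 = it does not)
train_dataset = Dataset.from_dict({
    "query": [
        "¿Es estable el depósito?",
        "¿Cuál es la elevación máxima de la corona del depósito?",
    ],
    "sentence": [
        "La superficie de la corona será paralela a la",  # truncated sample from the table
        "La superficie de la corona será paralela a la",
    ],
    "label": [0, 1],
})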
Loss: CoSENTLoss with these parameters:

{
    "scale": 20.0,
    "similarity_fct": "pairwise_cos_sim"
}
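This loss and the non-default hyperparameters listed below plug into the Sentence Transformers v3 trainer. A minimal reproduction sketch, assuming a train_dataset with the query/sentence/label columns shown above (the output_dir is hypothetical):

from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import CoSENTLoss
from sentence_transformers.training_args import BatchSamplers

# Start from the base checkpoint and attach CoSENTLoss
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
loss = CoSENTLoss(model, scale=20.0)  # pairwise_cos_sim is the default similarity function

args = SentenceTransformerTrainingArguments(
    output_dir="finetune-embedding-all-MiniLM-L6-v2-geotechnical-test-v1",  # hypothetical
    num_train_epochs=100,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=2e-5,
    warmup_ratio=0.1,
    fp16=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # avoid duplicate pairs within a batch
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,  # (query, sentence, label) dataset, e.g. as sketched above
    loss=loss,
)
trainer.train()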
Non-default hyperparameters:

- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- learning_rate: 2e-05
- num_train_epochs: 100
- warmup_ratio: 0.1
- fp16: True
- batch_sampler: no_duplicates

All hyperparameters:

- overwrite_output_dir: False
- do_predict: False
- eval_strategy: no
- prediction_loss_only: True
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 2e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 100
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.1
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: True
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- eval_use_gather_object: False
- average_tokens_across_devices: False
- prompts: None
- batch_sampler: no_duplicates
- multi_dataset_batch_sampler: proportional

Training logs:

| Epoch | Step | Training Loss |
|---|---|---|
| 6.7143 | 100 | 2.2976 |
| 13.3571 | 200 | 0.3082 |
| 20.0714 | 300 | 0.0002 |
| 26.7143 | 400 | 0.0 |
| 33.3571 | 500 | 0.0 |
| 40.0714 | 600 | 0.0 |
| 46.7143 | 700 | 0.0 |
| 53.3571 | 800 | 0.0 |
| 60.0714 | 900 | 0.0 |
| 66.7143 | 1000 | 0.0 |
| 73.3571 | 1100 | 0.0 |
| 80.0714 | 1200 | 0.0 |
| 86.7143 | 1300 | 0.0 |
| 93.3571 | 1400 | 0.0 |
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
@online{kexuefm-8847,
    title = "CoSENT: A more efficient sentence vector scheme than Sentence-BERT",
    author = "Su, Jianlin",
    year = "2022",
    month = "1",
    url = "https://kexue.fm/archives/8847",
}
Base model: sentence-transformers/all-MiniLM-L6-v2