SentenceTransformer based on JFernandoGRE/bert-ner-colombian-elitenames

This is a sentence-transformers model finetuned from JFernandoGRE/bert-ner-colombian-elitenames. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: JFernandoGRE/bert-ner-colombian-elitenames
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity

Model Sources

  • Documentation: Sentence Transformers Documentation (https://sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'BertModel'})
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
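
The Pooling module mean-pools the token embeddings produced by the Transformer module, weighted by the attention mask. As a minimal sketch of what modules (0) and (1) compute with the plain transformers API (assuming, as is standard for Sentence Transformers checkpoints, that the BERT weights sit at the repo root; the repo id is taken from the usage example below):

import torch
from transformers import AutoModel, AutoTokenizer

model_id = "JFernandoGRE/gtelarge-colombian-elitenames-righttail"
tokenizer = AutoTokenizer.from_pretrained(model_id)
bert = AutoModel.from_pretrained(model_id)

inputs = tokenizer(
    ["STEVEN ANDRES SANCHEZ HERNANDEZ"],
    padding=True, truncation=True, max_length=512, return_tensors="pt",
)
with torch.no_grad():
    token_embeddings = bert(**inputs).last_hidden_state  # (batch, seq_len, 768)

# Mean pooling: average token embeddings, ignoring padding via the attention mask.
mask = inputs["attention_mask"].unsqueeze(-1).float()
sentence_embedding = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
print(sentence_embedding.shape)  # torch.Size([1, 768])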

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("JFernandoGRE/gtelarge-colombian-elitenames-righttail")
# Run inference
sentences = [
    'STEVEN ANDRES SANCHEZ HERNANDEZ',
    'BRYAN MANUEL SANCHEZ HERNANDEZ',
    'JHONN JAIRO GOMEZ RAMIREZ',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.5568, 0.3681],
#         [0.5568, 1.0000, 0.3804],
#         [0.3681, 0.3804, 1.0000]])
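
Because the training data labels each name pair as a positive match (1) or a non-match (0) (see Training Details below), an obvious downstream use is flagging likely duplicate names by thresholding the similarity score. A minimal sketch; the 0.8 cutoff is a hypothetical value to be tuned on labeled pairs, not one taken from this card:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("JFernandoGRE/gtelarge-colombian-elitenames-righttail")

# Pairs borrowed from the training samples shown below.
pairs = [
    ("SERGIO ANTONIO PACHECO AHUMAITRE", "SERGIO ANTONIO PACHECO AUMAITRE"),  # labeled 1
    ("JUAN CARLOS CORTES GONZALEZ", "JEAN CARLOS CORTES GONZALEZ"),           # labeled 0
]
for left, right in pairs:
    embeddings = model.encode([left, right])
    score = model.similarity(embeddings[0:1], embeddings[1:2]).item()
    verdict = "match" if score >= 0.8 else "no match"  # hypothetical threshold
    print(f"{left} vs {right}: {score:.4f} -> {verdict}")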

Training Details

Training Dataset

Unnamed Dataset

  • Size: 10,629 training samples
  • Columns: sentence1, sentence2, and label
  • Approximate statistics based on the first 1000 samples:
    • sentence1: string, min: 4 tokens, mean: 8.5 tokens, max: 15 tokens
    • sentence2: string, min: 4 tokens, mean: 8.85 tokens, max: 15 tokens
    • label: int, 0: ~83.10%, 1: ~16.90%
  • Samples:
    • sentence1: JUAN CARLOS CORTES GONZALEZ, sentence2: JEAN CARLOS CORTES GONZALEZ, label: 0
    • sentence1: SERGIO ANTONIO PACHECO AHUMAITRE, sentence2: SERGIO ANTONIO PACHECO AUMAITRE, label: 1
    • sentence1: JOSÉ MAURICIO ZULUAGA TOBÓN, sentence2: JOSE HERNANDO ZULUAGA MARIN, label: 0
  • Loss: OnlineContrastiveLoss
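
A minimal sketch of how a dataset with these columns pairs with OnlineContrastiveLoss in the Sentence Transformers trainer API; the inline rows stand in for the actual (unpublished) dataset, and the base model id is taken from the card title:

from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import OnlineContrastiveLoss

model = SentenceTransformer("JFernandoGRE/bert-ner-colombian-elitenames")

# Stand-in rows with the documented columns: sentence1, sentence2, label.
train_dataset = Dataset.from_dict({
    "sentence1": ["SERGIO ANTONIO PACHECO AHUMAITRE", "JUAN CARLOS CORTES GONZALEZ"],
    "sentence2": ["SERGIO ANTONIO PACHECO AUMAITRE", "JEAN CARLOS CORTES GONZALEZ"],
    "label": [1, 0],
})

# OnlineContrastiveLoss computes the contrastive loss only over the hard
# positives and hard negatives within each batch.
loss = OnlineContrastiveLoss(model)
trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()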

Evaluation Dataset

Unnamed Dataset

  • Size: 2,658 evaluation samples
  • Columns: sentence1, sentence2, and label
  • Approximate statistics based on the first 1000 samples:
    • sentence1: string, min: 4 tokens, mean: 8.55 tokens, max: 13 tokens
    • sentence2: string, min: 5 tokens, mean: 8.88 tokens, max: 15 tokens
    • label: int, 0: ~81.50%, 1: ~18.50%
  • Samples:
    • sentence1: YEIMIN DARIO PEREZ MOLINA, sentence2: YESSIKA YULIETH PEREZ MOLINA, label: 0
    • sentence1: CARLOS HUMBERTO MARTINEZ ALZATE, sentence2: CARLOS HUMBERTO MARTINEZ PATIÑO, label: 0
    • sentence1: CRISTHIAN DAVID BERMUDEZ TORRES, sentence2: CRISTHIAN DAVID FORERO TORRES, label: 0
  • Loss: OnlineContrastiveLoss

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • learning_rate: 1e-05
  • num_train_epochs: 5
  • warmup_ratio: 0.182
  • fp16: True

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 1e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 5
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.182
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • parallelism_config: None
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • project: huggingface
  • trackio_space_id: trackio
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • hub_revision: None
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: no
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • liger_kernel_config: None
  • eval_use_gather_object: False
  • average_tokens_across_devices: True
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional
  • router_mapping: {}
  • learning_rate_mapping: {}

Training Logs

Epoch Step Training Loss Validation Loss
0.1504 100 0.3195 0.6335
0.3008 200 0.3014 0.5719
0.4511 300 0.2717 0.4428
0.6015 400 0.2156 0.2780
0.7519 500 0.1779 0.2302
0.9023 600 0.1264 0.1554
1.0526 700 0.1178 0.1374
1.2030 800 0.1093 0.0901
1.3534 900 0.1025 0.1134
1.5038 1000 0.0988 0.0862
1.6541 1100 0.0876 0.0876
1.8045 1200 0.1004 0.0781
1.9549 1300 0.0669 0.0896
2.1053 1400 0.0765 0.0745
2.2556 1500 0.0623 0.0798
2.4060 1600 0.0705 0.0671
2.5564 1700 0.0429 0.0678
2.7068 1800 0.0612 0.0780
2.8571 1900 0.0605 0.0594
3.0075 2000 0.0541 0.0654
3.1579 2100 0.0554 0.0544
3.3083 2200 0.0423 0.0528
3.4586 2300 0.0379 0.0552
3.6090 2400 0.0384 0.0541
3.7594 2500 0.0312 0.0534
3.9098 2600 0.0508 0.0569
4.0602 2700 0.0374 0.0548
4.2105 2800 0.0239 0.0495
4.3609 2900 0.0340 0.0489
4.5113 3000 0.0419 0.0504
4.6617 3100 0.0248 0.0489
4.8120 3200 0.0437 0.0482
4.9624 3300 0.0357 0.0492

Framework Versions

  • Python: 3.12.12
  • Sentence Transformers: 5.2.0
  • Transformers: 4.57.3
  • PyTorch: 2.9.1+cu128
  • Accelerate: 1.12.0
  • Datasets: 4.0.0
  • Tokenizers: 0.22.1
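
To approximate this environment, a pinned install would look like the following (versions copied from the list above; PyTorch with CUDA 12.8 is installed separately per the official PyTorch instructions):

pip install "sentence-transformers==5.2.0" "transformers==4.57.3" "accelerate==1.12.0" "datasets==4.0.0" "tokenizers==0.22.1"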

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}