SentenceTransformer based on BAAI/bge-small-en-v1.5

This is a sentence-transformers model finetuned from BAAI/bge-small-en-v1.5. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: BAAI/bge-small-en-v1.5
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 384 dimensions
  • Similarity Function: Cosine Similarity

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True, 'architecture': 'BertModel'})
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
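Because the final `Normalize()` module projects the CLS-pooled embedding onto the unit sphere, cosine similarity between outputs reduces to a plain dot product. A minimal numpy sketch of that equivalence (the 384-dimensional vectors here are random stand-ins, not real model outputs):

```python
import numpy as np

rng = np.random.default_rng(0)
emb = rng.normal(size=(3, 384))
# mimic the Normalize() module: L2-normalize each embedding
emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)

dot = emb @ emb.T  # plain dot products
norms = np.linalg.norm(emb, axis=1)
cos = dot / np.outer(norms, norms)  # textbook cosine similarity

print(np.allclose(dot, cos))  # → True
```

This is why the `Similarity Function: Cosine Similarity` listed above is cheap to evaluate at scale for this model: ranking by dot product gives identical results.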

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("shahamitkumar/resume-bge-small")
# Run inference
sentences = [
    'exit interviews AND employee feedback',
    'elp SMEs in their global market research ● Cultivated strong client relationships and achieved a 100% job success score ● Sharpened ability to communicate effectively with global clients and stakeholders Human Resources Intern | PIA (Pakistan International Airlines) | June 2021 - July 2021 ● Contributed to a significant reduction in workforce size, with nearly 2000 employees opting for the VSS, leading to enhanced operational agility and cost savings for the organization ● Assisted in conducting exit interviews for departing employees, gathering valuable feedback to identify areas for improvemen',
    'Settlement Process SKILLS • Investment decisioning • Funds Flow process • Prepaid card processing knowledge (VISA, MasterCard, Discover & FIS) • ACH Transaction processing system • ACH Migration Project Lead • Understanding of Debit & Prepaid card settlement Process • Funds flow process between Program Manager, Issuing Financial Ins. & Processor and the role of ODFI & RDFI. • Team Player and Proficient in MS Excel • Fluent in [ English, Urdu & Punjabi ]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.2549, 0.2693],
#         [0.2549, 1.0000, 0.1984],
#         [0.2693, 0.1984, 1.0000]])
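For retrieval-style use (e.g. matching a query like "exit interviews AND employee feedback" against a pool of resume chunks), you rank corpus embeddings by similarity to the query embedding and keep the top hits. A minimal top-k sketch, with small toy vectors standing in for `model.encode(...)` output:

```python
import numpy as np

def top_k(query_emb, corpus_embs, k=2):
    # embeddings from this model are L2-normalized, so ranking by dot
    # product is equivalent to ranking by cosine similarity
    scores = corpus_embs @ query_emb
    order = np.argsort(-scores)[:k]
    return [(int(i), float(scores[i])) for i in order]

# toy stand-ins for model.encode(query) and model.encode(corpus)
query = np.array([1.0, 0.0, 0.0])
corpus = np.array([
    [0.9, 0.436, 0.0],  # most similar to the query
    [0.0, 1.0, 0.0],
    [0.6, 0.8, 0.0],
])
print(top_k(query, corpus))  # → [(0, 0.9), (2, 0.6)]
```

In practice you would encode the corpus once, cache the embeddings, and only encode each incoming query at search time.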

Training Details

Training Dataset

Unnamed Dataset

  • Size: 1,865 training samples
  • Columns: sentence_0 and sentence_1
  • Approximate statistics based on the first 1000 samples:
                  sentence_0   sentence_1
    type          string       string
    min tokens    5            25
    mean tokens   10.29        330.39
    max tokens    21           512
  • Samples:
    • sentence_0: operations management AND marketing research
      sentence_1: be a great lead in customer service and effecting it with your behavior”. Advanced Operations Management: Analyzed supply chain system of Packages limited w.r.t dynamics. Marketing Research: Researched and presented solutions for the problem of low sales volume for “DeSOM” ADDITIONAL EXPERIENCE General Secretary, Business Management Club 2014 – 2015 • Conducted and promoted several university trips to Northern areas with my team of 12 members. Team Leader, Polo Club 2015 - 2016 • I was the team leader university polo club team. • Going to be a certified Customer Relation Manager By BMW. HONORS, AWARDS & TRAININGS • 3rd Position representing LGU at LUMS supply chain summit. • 1st position in university for finishing the fairy meadow trek. • 1st position in university dramatic competition • Hands on experience on SAP for 2.5 years SKILLS & INTERESTS • Competent user of Microsoft Office, SAP & CRM systems. • Vast interest in travelling, hiking and keeping pet. Pets make your surroundin...
    • sentence_0: Microsoft CRM AND sales strategy
      sentence_1: internet lead phone calls Selling a minimum number of products or bringing in a minimum of customers from the Internet, based on goals and objectives defined. INTECH Process Automation Microsoft CRM Administator July 2014 - December 2014 (6 months) Lahore, Pakistan Education COMSATS Institute of Information and Technology Master of Business Administration - MBA, Business Administration and Management, General · (2018 - 2019) COMSATS Institute of Information and Technology Bachelor's degree, Business/Managerial Economics · (2012 - 2016) Page 3 of 3
    • sentence_0: who has experience with SAP ERP and Oracle?
      sentence_1: SHAHRUKH ALI shahrukh.ali@descon.com 4618452 linkedin.com/in/shahrukh-ali-3021ab24a Summary A highly motivated and result-driven Business Development Executive who also has served as a Finance & Accounts Professional with 12 years of invaluable experience including being skilled in numerous business development, financial & accounting fields. I have the ability to handle complex assignments effectively & possessing the confidence to work as part of a team or independently. An established reputation for self‐motivation, attention to detail excellent organizational skills, and high ethical conduct. Presently looking for the best opportunity with a goal-oriented company where I can excel, deliver and achieve according to my potential. I am enthusiastic to learn keenly about emerging developments in my job sector. Experience Senior Associate KPMG In Pakistan Aug 2021 - Feb 2022 (7 months) Team Supervision and Training Data entry analyst: Update the system as invoices are received. Bank Pa...
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim",
        "gather_across_devices": false,
        "directions": [
            "query_to_doc"
        ],
        "partition_mode": "joint",
        "hardness_mode": null,
        "hardness_strength": 0.0
    }
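MultipleNegativesRankingLoss scores each query against every document in the batch (here, cosine similarity scaled by 20.0) and applies cross-entropy, treating the i-th document as the positive for the i-th query and all other in-batch documents as negatives. A numpy sketch of that computation (illustrative only, not the library's implementation):

```python
import numpy as np

def mnrl(query_embs, doc_embs, scale=20.0):
    # similarity_fct "cos_sim": L2-normalize, then take dot products
    q = query_embs / np.linalg.norm(query_embs, axis=1, keepdims=True)
    d = doc_embs / np.linalg.norm(doc_embs, axis=1, keepdims=True)
    scores = scale * (q @ d.T)  # (batch, batch) scaled similarities
    # cross-entropy with label i for row i: the diagonal holds the
    # positives, every other column is an in-batch negative
    log_probs = scores - np.log(np.sum(np.exp(scores), axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))

# perfectly matched, mutually orthogonal pairs -> loss near zero
embs = np.eye(3)
print(round(mnrl(embs, embs), 4))  # → 0.0
```

This is why the loss pairs naturally with the `sentence_0`/`sentence_1` columns above: no explicit negative mining is needed, since each batch of 32 pairs supplies 31 negatives per query for free.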
    

Training Hyperparameters

Non-Default Hyperparameters

  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 32
  • multi_dataset_batch_sampler: round_robin

All Hyperparameters

  • do_predict: False
  • eval_strategy: no
  • prediction_loss_only: True
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 32
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 3
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: None
  • warmup_ratio: None
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • enable_jit_checkpoint: False
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • use_cpu: False
  • seed: 42
  • data_seed: None
  • bf16: False
  • fp16: False
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: -1
  • ddp_backend: None
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • parallelism_config: None
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • group_by_length: False
  • length_column_name: length
  • project: huggingface
  • trackio_space_id: trackio
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • hub_revision: None
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • auto_find_batch_size: False
  • full_determinism: False
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_num_input_tokens_seen: no
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • liger_kernel_config: None
  • eval_use_gather_object: False
  • average_tokens_across_devices: True
  • use_cache: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin
  • router_mapping: {}
  • learning_rate_mapping: {}

Framework Versions

  • Python: 3.12.12
  • Sentence Transformers: 5.3.0
  • Transformers: 5.0.0
  • PyTorch: 2.10.0+cu128
  • Accelerate: 1.13.0
  • Datasets: 4.0.0
  • Tokenizers: 0.22.2

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{oord2019representationlearningcontrastivepredictive,
      title={Representation Learning with Contrastive Predictive Coding},
      author={Aaron van den Oord and Yazhe Li and Oriol Vinyals},
      year={2019},
      eprint={1807.03748},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/1807.03748},
}
Model size: 33.4M parameters (Safetensors, F32)