---
base_model: intfloat/multilingual-e5-large-instruct
datasets:
- universalml0/nepali_embedding_dataset
language:
- ne
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:45199
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: प्रधानमन्त्री नरेन्द्र मोदी सरकारका असफलताहरू के के हुन्?
  sentences:
  - पूर्वोत्तर राज्यहरूका मुख्य समस्याहरू के के हुन् र तिनीहरूको केन्द्रीय सरकारसँग
    असन्तोष के हो?
  - पूर्णांक के हो?
  - नरेन्द्र मोदी सरकारले कुन क्षेत्रमा असफल भएको छ?
- source_sentence: 'मैले विचार गर्नुपर्ने कलेजहरू के के हुन्, विचार गर्नुपर्ने कारकहरू:
    केएमसी म्यानिपल वा केएमसी मंगोलमा?'
  sentences:
  - मंगलोर शान्त वा हिंस्रक स्थान हो?
  - पुरुषहरूको तुलनामा महिलाहरूको लागि यौनिक आनन्द बढी हुन्छ कि हुँदैन?
  - के कसैले केएमसी मानिपाल र मंगलोरको संक्षिप्त तुलना गर्न सक्छ?
- source_sentence: म कसरी मेरो अङ्ग्रेजी भाषा सुधार गर्न सक्छु?
  sentences:
  - म कसरी एक नेचुरल अंग्रेजी वक्ता बन्न सक्छु?
  - म जहाँ कुनै मूल अंग्रेजी वक्ताहरू छन् जो मेरो साथ मित्र बन्न चाहन्छन् र मलाई मद्दत
    गर्न चाहन्छन्?
  - ने टी २०१६ को लागि निजी कलेजहरूको लागि एमबीबीएसको लागि के कटअफ हुनेछ?
- source_sentence: समय यात्रा सम्भव छ कि छैन? यदि छ भने, कसरी?
  sentences:
  - अन्धकारमय वेब सुरक्षित छ कि छैन ब्राउज गर्न?
  - यदि कुनै बितेको समय राम्रो थियो र समयको यात्रा सम्भव थियो भने म किन वर्तमान समयमा
    बाँचिरहेको छु?
  - भविष्यमा समय यात्रा सम्भव हुनेछ कि छैन?
- source_sentence: म कसरी बिस्तारै तौल घटाउन सक्छु?
  sentences:
  - कसरी कुनै केटाले त्यो केटीसँग बदला लिन सक्छ जसले उसलाई धोका दिएको छ?
  - कस्तो प्रकारको आहार कसैले आहार नचाहने व्यक्तिका लागि उत्तम हुन्छ?
  - वजन घटाउनको लागि कुनै राम्रो आहार हो?
---

# SentenceTransformer based on intfloat/multilingual-e5-large-instruct

This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [intfloat/multilingual-e5-large-instruct](https://huggingface.co/intfloat/multilingual-e5-large-instruct) on the [universalml0/nepali_embedding_dataset](https://huggingface.co/datasets/universalml0/nepali_embedding_dataset) dataset. It maps Nepali sentences and paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [intfloat/multilingual-e5-large-instruct](https://huggingface.co/intfloat/multilingual-e5-large-instruct) <!-- at revision baa7be480a7de1539afce709c8f13f833a510e0a -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
  - [universalml0/nepali_embedding_dataset](https://huggingface.co/datasets/universalml0/nepali_embedding_dataset)
- **Language:** Nepali
<!-- - **License:** Unknown -->

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("universalml0/finetuned_embedding_model_e5-large-multilingual-large")
# Run inference
sentences = [
    'म कसरी बिस्तारै तौल घटाउन सक्छु?',  # "How can I lose weight gradually?"
    'वजन घटाउनको लागि कुनै राम्रो आहार हो?',  # "Is there a good diet for losing weight?"
    'कस्तो प्रकारको आहार कसैले आहार नचाहने व्यक्तिका लागि उत्तम हुन्छ?',  # "What kind of diet is best for someone who does not want to diet?"
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 1024)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])
```
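
Since the model ends with a `Normalize()` module, its embeddings are unit-length and the similarity scores can be used directly for ranking in semantic search. Below is a minimal retrieval sketch; the corpus and query are illustrative Nepali examples, not data shipped with this repository:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("universalml0/finetuned_embedding_model_e5-large-multilingual-large")

# Illustrative corpus and query (hypothetical examples)
corpus = [
    'वजन घटाउनको लागि कुनै राम्रो आहार हो?',  # "Is there a good diet for losing weight?"
    'भविष्यमा समय यात्रा सम्भव हुनेछ कि छैन?',  # "Will time travel be possible in the future?"
    'म कसरी एक नेचुरल अंग्रेजी वक्ता बन्न सक्छु?',  # "How can I become a natural English speaker?"
]
query = 'म कसरी बिस्तारै तौल घटाउन सक्छु?'  # "How can I lose weight gradually?"

corpus_embeddings = model.encode(corpus)
query_embedding = model.encode([query])

# Cosine similarity of the query against each corpus entry, highest first
scores = model.similarity(query_embedding, corpus_embeddings)[0]
for idx in scores.argsort(descending=True).tolist():
    print(f"{scores[idx]:.4f}  {corpus[idx]}")
```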


### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>
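
The same embeddings can be reproduced with plain `transformers` by mirroring the module stack shown above: encode with the XLM-R backbone, mean-pool over the attention mask, then L2-normalize. This is a sketch under those assumptions, not an official reference implementation:

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

model_id = "universalml0/finetuned_embedding_model_e5-large-multilingual-large"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)
model.eval()

def mean_pool(last_hidden_state: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    # Average token embeddings while ignoring padding (matches the Pooling module above)
    mask = attention_mask.unsqueeze(-1).to(last_hidden_state.dtype)
    return (last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)

sentences = ["म कसरी बिस्तारै तौल घटाउन सक्छु?", "वजन घटाउनको लागि कुनै राम्रो आहार हो?"]
batch = tokenizer(sentences, padding=True, truncation=True, max_length=512, return_tensors="pt")
with torch.no_grad():
    output = model(**batch)
embeddings = mean_pool(output.last_hidden_state, batch["attention_mask"])
embeddings = F.normalize(embeddings, p=2, dim=1)  # matches the Normalize() module
print(embeddings @ embeddings.T)  # cosine similarities
```

</details>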

## Training Details

### Training Dataset

#### universalml0/nepali_embedding_dataset

* Dataset: [universalml0/nepali_embedding_dataset](https://huggingface.co/datasets/universalml0/nepali_embedding_dataset)
* Size: 45,199 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  |         | anchor                                                                              | positive                                                                            | negative                                                                           |
  |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
  | type    | string                                                                              | string                                                                              | string                                                                             |
  | details | <ul><li>min: 7 tokens</li><li>mean: 17.53 tokens</li><li>max: 486 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 17.68 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 18.9 tokens</li><li>max: 156 tokens</li></ul> |
* Samples:
  | anchor | positive | negative |
  |:-------|:---------|:---------|
  | <code>भारतीय सरकारले ५०० र १००० रुपयाको नोटमाथि प्रतिबन्ध लगाउनुको कारण के थियो?</code> | <code>भारतीय सरकारले ५०० र १००० को नोटलाई निष्क्रिय पारेको छ तर तिनीहरूलाई ५०० र २००० को नोटहरूसँग प्रतिस्थापन गरेको छ। के यो विरोधाभासी छैन?</code> | <code>भारतीय सरकारले किन चाहेको भए सीमित मात्रामा नोटहरू मुद्रण गर्न र बजेट घाटा क्लियर गर्न सक्दैन? विशेष गरी, किन कुनै पनि देशले यो गर्न सक्दैन?</code> |
  | <code>भारतीय हुनुको अनुभूति कस्तो हुन्छ?</code> | <code>भारतीय हुनुको अनुभूति कस्तो हुन्छ?</code> | <code>भारतीय महिला हुनुको अनुभव कस्तो हुन्छ?</code> |
  | <code>के कुनै व्यक्तिले edWisor मार्फत कुनै नौकरी पाएको छ?</code> | <code>एडवाइजर वैध छ र के कसैले यस मार्फत कुनै नौकरी पाएको छ?</code> | <code>एलिटमसको माध्यमबाट कसैले काम पाएको छ?</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  {
      "scale": 20.0,
      "similarity_fct": "cos_sim"
  }
  ```
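
For intuition about the objective: each anchor is scored against every positive and negative in the batch, and cross-entropy pushes the anchor toward its own positive. A schematic PyTorch sketch of this loss (not the library's exact implementation) is:

```python
import torch
import torch.nn.functional as F

def mnrl(anchors: torch.Tensor, positives: torch.Tensor, negatives: torch.Tensor,
         scale: float = 20.0) -> torch.Tensor:
    """Schematic MultipleNegativesRankingLoss with in-batch negatives.

    anchors/positives/negatives: [B, D] batches of already-encoded embeddings.
    """
    candidates = torch.cat([positives, negatives], dim=0)  # [2B, D]
    # Cosine similarity of every anchor against every candidate, scaled by 20.0
    scores = F.cosine_similarity(anchors.unsqueeze(1), candidates.unsqueeze(0), dim=-1) * scale
    # The correct candidate for anchor i is positive i; all others act as negatives
    labels = torch.arange(anchors.size(0), device=anchors.device)
    return F.cross_entropy(scores, labels)
```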

### Training Hyperparameters
#### Non-Default Hyperparameters

- `per_device_train_batch_size`: 4
- `learning_rate`: 1e-06
- `num_train_epochs`: 1
- `warmup_ratio`: 0.3
- `bf16`: True
- `batch_sampler`: no_duplicates
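
The run above can be approximated with the Sentence Transformers v3 trainer API. A minimal sketch, assuming the Hub dataset exposes the `anchor`/`positive`/`negative` columns described earlier (the `output_dir` is a hypothetical path):

```python
from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("intfloat/multilingual-e5-large-instruct")
train_dataset = load_dataset("universalml0/nepali_embedding_dataset", split="train")
loss = MultipleNegativesRankingLoss(model)  # scale=20.0, cos_sim by default

args = SentenceTransformerTrainingArguments(
    output_dir="finetuned_embedding_model",     # hypothetical output path
    per_device_train_batch_size=4,
    learning_rate=1e-6,
    num_train_epochs=1,
    warmup_ratio=0.3,
    bf16=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # avoids duplicate in-batch negatives
)

trainer = SentenceTransformerTrainer(model=model, args=args, train_dataset=train_dataset, loss=loss)
trainer.train()
```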

#### All Hyperparameters
<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 4
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 1e-06
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.3
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`: 
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional

</details>

### Training Logs
<details><summary>Click to expand</summary>

| Epoch  | Step  | Training Loss |
|:------:|:-----:|:-------------:|
| 0.0088 | 100   | 0.8671        |
| 0.0177 | 200   | 0.8234        |
| 0.0265 | 300   | 0.8223        |
| 0.0354 | 400   | 0.7423        |
| 0.0442 | 500   | 0.6605        |
| 0.0531 | 600   | 0.5558        |
| 0.0619 | 700   | 0.4076        |
| 0.0708 | 800   | 0.3617        |
| 0.0796 | 900   | 0.3087        |
| 0.0885 | 1000  | 0.2747        |
| 0.0973 | 1100  | 0.2409        |
| 0.1062 | 1200  | 0.229         |
| 0.1150 | 1300  | 0.209         |
| 0.1239 | 1400  | 0.2556        |
| 0.1327 | 1500  | 0.2536        |
| 0.1416 | 1600  | 0.2092        |
| 0.1504 | 1700  | 0.2464        |
| 0.1593 | 1800  | 0.1727        |
| 0.1681 | 1900  | 0.281         |
| 0.1770 | 2000  | 0.2289        |
| 0.1858 | 2100  | 0.2065        |
| 0.1947 | 2200  | 0.1751        |
| 0.2035 | 2300  | 0.231         |
| 0.2124 | 2400  | 0.2127        |
| 0.2212 | 2500  | 0.1908        |
| 0.2301 | 2600  | 0.2131        |
| 0.2389 | 2700  | 0.1704        |
| 0.2478 | 2800  | 0.1923        |
| 0.2566 | 2900  | 0.1635        |
| 0.2655 | 3000  | 0.2061        |
| 0.2743 | 3100  | 0.1843        |
| 0.2832 | 3200  | 0.1443        |
| 0.2920 | 3300  | 0.1513        |
| 0.3009 | 3400  | 0.1879        |
| 0.3097 | 3500  | 0.2372        |
| 0.3186 | 3600  | 0.1542        |
| 0.3274 | 3700  | 0.2523        |
| 0.3363 | 3800  | 0.2055        |
| 0.3451 | 3900  | 0.1474        |
| 0.3540 | 4000  | 0.1647        |
| 0.3628 | 4100  | 0.1615        |
| 0.3717 | 4200  | 0.1271        |
| 0.3805 | 4300  | 0.1451        |
| 0.3894 | 4400  | 0.1887        |
| 0.3982 | 4500  | 0.1334        |
| 0.4071 | 4600  | 0.1962        |
| 0.4159 | 4700  | 0.1695        |
| 0.4248 | 4800  | 0.1561        |
| 0.4336 | 4900  | 0.1146        |
| 0.4425 | 5000  | 0.1381        |
| 0.4513 | 5100  | 0.1452        |
| 0.4602 | 5200  | 0.2388        |
| 0.4690 | 5300  | 0.1951        |
| 0.4779 | 5400  | 0.1142        |
| 0.4867 | 5500  | 0.182         |
| 0.4956 | 5600  | 0.1968        |
| 0.5044 | 5700  | 0.1744        |
| 0.5133 | 5800  | 0.1868        |
| 0.5221 | 5900  | 0.1452        |
| 0.5310 | 6000  | 0.1345        |
| 0.5398 | 6100  | 0.1318        |
| 0.5487 | 6200  | 0.218         |
| 0.5575 | 6300  | 0.2118        |
| 0.5664 | 6400  | 0.1972        |
| 0.5752 | 6500  | 0.0935        |
| 0.5841 | 6600  | 0.1991        |
| 0.5929 | 6700  | 0.1252        |
| 0.6018 | 6800  | 0.1128        |
| 0.6106 | 6900  | 0.1585        |
| 0.6195 | 7000  | 0.2293        |
| 0.6283 | 7100  | 0.2104        |
| 0.6372 | 7200  | 0.1416        |
| 0.6460 | 7300  | 0.2004        |
| 0.6549 | 7400  | 0.1446        |
| 0.6637 | 7500  | 0.1171        |
| 0.6726 | 7600  | 0.1386        |
| 0.6814 | 7700  | 0.1291        |
| 0.6903 | 7800  | 0.1546        |
| 0.6991 | 7900  | 0.1484        |
| 0.7080 | 8000  | 0.129         |
| 0.7168 | 8100  | 0.1873        |
| 0.7257 | 8200  | 0.1333        |
| 0.7345 | 8300  | 0.1713        |
| 0.7434 | 8400  | 0.1016        |
| 0.7522 | 8500  | 0.1519        |
| 0.7611 | 8600  | 0.1851        |
| 0.7699 | 8700  | 0.144         |
| 0.7788 | 8800  | 0.1488        |
| 0.7876 | 8900  | 0.1568        |
| 0.7965 | 9000  | 0.1672        |
| 0.8053 | 9100  | 0.1236        |
| 0.8142 | 9200  | 0.0973        |
| 0.8230 | 9300  | 0.1491        |
| 0.8319 | 9400  | 0.2251        |
| 0.8407 | 9500  | 0.1433        |
| 0.8496 | 9600  | 0.2634        |
| 0.8584 | 9700  | 0.1723        |
| 0.8673 | 9800  | 0.2373        |
| 0.8761 | 9900  | 0.1065        |
| 0.8850 | 10000 | 0.1578        |
| 0.8938 | 10100 | 0.1127        |
| 0.9027 | 10200 | 0.1632        |
| 0.9115 | 10300 | 0.19          |
| 0.9204 | 10400 | 0.0958        |
| 0.9292 | 10500 | 0.1029        |
| 0.9381 | 10600 | 0.1183        |
| 0.9469 | 10700 | 0.1779        |
| 0.9558 | 10800 | 0.1571        |
| 0.9646 | 10900 | 0.1666        |
| 0.9735 | 11000 | 0.1405        |
| 0.9823 | 11100 | 0.147         |
| 0.9912 | 11200 | 0.1428        |
| 1.0    | 11300 | 0.1724        |

</details>

### Framework Versions
- Python: 3.9.5
- Sentence Transformers: 3.0.1
- Transformers: 4.44.2
- PyTorch: 2.3.0+cu121
- Accelerate: 0.33.0
- Datasets: 2.21.0
- Tokenizers: 0.19.1
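
To approximate this environment, the versions above can be pinned at install time (a sketch; nearby compatible versions are likely fine):

```bash
pip install "sentence-transformers==3.0.1" "transformers==4.44.2" \
    "accelerate==0.33.0" "datasets==2.21.0" "tokenizers==0.19.1"
```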

## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```