Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation
Paper: [arXiv:2004.09813](https://arxiv.org/abs/2004.09813)
This is a sentence-transformers model fine-tuned from FacebookAI/xlm-roberta-base on the en-sa (English-Sanskrit) dataset. It maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
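The Pooling module above averages the Transformer's token embeddings (`pooling_mode_mean_tokens: True`) to produce one 768-dimensional vector per input. A minimal NumPy sketch of masked mean pooling; the `mean_pool` function and toy tensors are illustrative, not part of the model:

```python
import numpy as np

def mean_pool(token_embeddings: np.ndarray, attention_mask: np.ndarray) -> np.ndarray:
    """Average token embeddings, ignoring padding positions.

    token_embeddings: (batch, seq_len, dim)
    attention_mask:   (batch, seq_len), 1 for real tokens, 0 for padding
    """
    mask = attention_mask[..., None].astype(token_embeddings.dtype)  # (batch, seq_len, 1)
    summed = (token_embeddings * mask).sum(axis=1)                   # (batch, dim)
    counts = np.clip(mask.sum(axis=1), 1e-9, None)                   # avoid division by zero
    return summed / counts

# Toy example: batch of 1, seq_len 3 (last token is padding), dim 2
tokens = np.array([[[1.0, 2.0], [3.0, 4.0], [99.0, 99.0]]])
mask = np.array([[1, 1, 0]])
print(mean_pool(tokens, mask))  # [[2. 3.]] — the padded token is ignored
```

The padding mask matters: without it, pad tokens would shift the sentence vector.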
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("saikasyap/samskritam-xlm-roberata-base")

# Run inference
sentences = [
    'अनन्तरं Microbiology इति टङ्कनं करोमि ।',
    'Then I will type Microbiology.',
    'The comic image basically consists of three parts.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
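`model.similarity` uses cosine similarity by default, so the score matrix can be reproduced by hand. A hedged NumPy sketch, with random vectors standing in for real `model.encode` output:

```python
import numpy as np

def cosine_similarity_matrix(embeddings: np.ndarray) -> np.ndarray:
    """Pairwise cosine similarity: normalize rows, then take dot products."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    return normed @ normed.T

rng = np.random.default_rng(0)
emb = rng.standard_normal((3, 768))   # stand-in for model.encode(sentences)
sim = cosine_similarity_matrix(emb)
print(sim.shape)                       # (3, 3)
print(np.allclose(np.diag(sim), 1.0))  # True: every vector matches itself perfectly
```

With real embeddings, the Sanskrit sentence and its English translation should score markedly higher against each other than against the unrelated third sentence.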
Training dataset columns: `non_english`, `english`, and `label`.

| | non_english | english | label |
|:---|:---|:---|:---|
| type | string | string | list |
| details | | | |

Samples:

| non_english | english | label |
|:---|:---|:---|
| ॐ तपः स्वाध्यायनिरतं तपस्वी वाग्विदां वरम्। नारदं परिपप्रच्छ वाल्मीकिर्मुनिपुङ्गवम्॥ | The ascetic Vālmīki asked Nārada, the best of sages and foremost of those conversant with words, ever engaged in austerities and Vedic studies. | [0.15034635365009308, 0.35359007120132446, -0.3348075747489929, 0.15415771305561066, 0.020571526139974594, ...] |
| कोन्वस्मिन् साम्प्रतं लोके गुणवान् कश्च वीर्यवान्। धर्मज्ञश्च कृतज्ञश्च सत्यवाक्यो दृढत्नतः॥ | Who at present in this world is like crowned with qualities, and with prowess, knowing duty, and grateful, and truthful, and firm in vow. | [-0.46556514501571655, 0.4740210175514221, -0.2033461034297943, -1.6129034757614136, -0.016881834715604782, ...] |
| चारित्रेण च को युक्तः सर्वभूतेषु को हितः। विद्वान् कः कः समर्थश्च कश्चैकप्रियदर्शनः॥ | Who is qualified by virtue of his character, and who is engaged in the welfare of all creatures? Who is learned and capable. Who alone is ever lovely to behold? | [-0.09693514555692673, 0.4206468462944031, -0.3034357726573944, -1.2955875396728516, 0.3836270868778229, ...] |
Loss: `MSELoss`

Evaluation dataset columns: `non_english`, `english`, and `label`.

| | non_english | english | label |
|:---|:---|:---|:---|
| type | string | string | list |
| details | | | |

Samples:

| non_english | english | label |
|:---|:---|:---|
| तथा दिमागी रूप से तंदुरुस्त हों । | And also be mentally fit. | [0.2053176611661911, -0.15136581659317017, -0.1492331326007843, -0.13915303349494934, -0.08056919276714325, ...] |
| अपरञ्च युष्माकम् आनन्दो यत् सम्पूर्णो भवेद् तदर्थं वयम् एतानि लिखामः। | """And these things write we unto you, that your joy may be full.""" | [0.0013895286247134209, 0.09506042301654816, -0.3513864576816559, -0.6496815085411072, 0.7649527192115784, ...] |
| पञ्च व्यञ्जनानां तेषां च एकम्-एकम स्वास्थ्यसम्बन्धीकार्याणां च सूचीं निर्मातु। | List five spices and one health benefits of each. | [0.37307825684547424, 0.8675527572631836, 0.6388981342315674, -0.27114301919937134, -0.30143851041793823, ...] |
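The `label` column holds the teacher model's embedding of the English sentence. Per the knowledge-distillation recipe of the cited paper, training minimizes the mean squared error between the student's embedding of each sentence (both the English text and its translation) and that fixed teacher vector. A minimal NumPy sketch of the objective; the function name and toy 3-dimensional vectors are illustrative only:

```python
import numpy as np

def distillation_mse(student_emb: np.ndarray, teacher_emb: np.ndarray) -> float:
    """MSE between student embeddings and fixed teacher embeddings (the `label` vectors)."""
    return float(np.mean((student_emb - teacher_emb) ** 2))

teacher = np.array([[0.15, 0.35, -0.33]])     # teacher embedding of the English sentence
student_en = np.array([[0.14, 0.36, -0.30]])  # student embedding of the same English text
student_sa = np.array([[0.10, 0.30, -0.25]])  # student embedding of the Sanskrit translation

# Both views are pulled toward the same teacher vector, so translations
# end up close to the original sentence in the shared embedding space.
loss = distillation_mse(student_en, teacher) + distillation_mse(student_sa, teacher)
print(loss)  # total MSE for this toy pair
```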
Loss: `MSELoss`

Non-default hyperparameters:

- `eval_strategy`: steps
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `learning_rate`: 2e-05
- `num_train_epochs`: 5
- `warmup_ratio`: 0.1
- `fp16`: True

All hyperparameters:

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional

| Epoch | Step | Training Loss |
|---|---|---|
| 0.1821 | 100 | 0.5076 |
| 0.3643 | 200 | 0.2330 |
| 0.5464 | 300 | 0.2172 |
| 0.7286 | 400 | 0.2033 |
| 0.9107 | 500 | 0.1943 |
| 1.0929 | 600 | 0.1878 |
| 1.2750 | 700 | 0.1813 |
| 1.4572 | 800 | 0.1754 |
| 1.6393 | 900 | 0.1721 |
| 1.8215 | 1000 | 0.1688 |
| 2.0036 | 1100 | 0.1664 |
| 2.1858 | 1200 | 0.1632 |
| 2.3679 | 1300 | 0.1606 |
| 2.5501 | 1400 | 0.1588 |
| 2.7322 | 1500 | 0.1566 |
| 2.9144 | 1600 | 0.1558 |
| 3.0965 | 1700 | 0.1540 |
| 3.2787 | 1800 | 0.1525 |
| 3.4608 | 1900 | 0.1508 |
| 3.6430 | 2000 | 0.1500 |
| 3.8251 | 2100 | 0.1493 |
| 4.0073 | 2200 | 0.1490 |
| 4.1894 | 2300 | 0.1479 |
| 4.3716 | 2400 | 0.1471 |
| 4.5537 | 2500 | 0.1466 |
| 4.7359 | 2600 | 0.1461 |
| 4.9180 | 2700 | 0.1466 |
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
@inproceedings{reimers-2020-multilingual-sentence-bert,
title = "Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2020",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/2004.09813",
}
Base model: [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base)