SentenceTransformer based on Intellexus/mbert-tibetan-continual-wylie-final

This is a sentence-transformers model finetuned from Intellexus/mbert-tibetan-continual-wylie-final. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: Intellexus/mbert-tibetan-continual-wylie-final
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity

Model Sources

  • Documentation: Sentence Transformers Documentation (https://www.sbert.net)
  • Repository: sentence-transformers on GitHub (https://github.com/UKPLab/sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'BertModel'})
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': True, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
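
The Pooling module uses weighted-mean pooling (pooling_mode_weightedmean_tokens): token embeddings are averaged with weights that grow linearly with token position, so later tokens contribute more than earlier ones. A minimal sketch of that computation for a single sequence, assuming a (seq_len, 768) tensor of token embeddings and a binary attention mask (the function name is illustrative, not the library's internal API):

import torch

def weighted_mean_pool(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    # token_embeddings: (seq_len, 768); attention_mask: (seq_len,), 1 for real tokens, 0 for padding
    weights = torch.arange(1, token_embeddings.size(0) + 1, dtype=token_embeddings.dtype)
    weights = weights * attention_mask            # zero out padding positions
    weighted_sum = (token_embeddings * weights.unsqueeze(-1)).sum(dim=0)
    return weighted_sum / weights.sum()           # a single 768-dimensional sentence embedding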

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Shailu1492/tibetan-mbert-v1-consecutive-segments")
# Run inference
sentences = [
    "de shi ba'i lus bsregs pa de la rlung ngam chus thal ba yang med par byed pa'i 'du shes dang / zhigs pa la phye ma yang med pa dang / rdul du gyur pa la rdul kyang med pa'i 'du shes su rnam par 'jig pa bsgoms shing bsgrubs te nang gzugs med par 'du shes pas phyi rol gyi gzugs chung ngu kha dog bzang po dang kha dog ngan pa rnams la mtshan mar yid la byed pas zil gyis gnon pa'i skye mched gsum pa rdzogs par byas te gnas so//",
    " 'dod chags shas che mi gtsang sha tshil dang // pags pa keng rus lta bas rnam par bzlog // byams dang snying rje'i chus gdab zhe sdang la// gti mug la ni rten 'brel lan gyis so//",
    'dge ba bcu la goms pa dang // bla ma la gus dbang po dul//\nnga rgyal khro las rnam par grol// des ni re zhig  nges ’grub ’gyur//\n',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.7146, 0.6052],
#         [0.7146, 1.0000, 0.6010],
#         [0.6052, 0.6010, 1.0000]])
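
The model card also lists semantic search among the supported tasks. Here is a short sketch using sentence_transformers.util.semantic_search, reusing the sentences list from the example above as a toy corpus (the query is one of those sentences and serves only as a placeholder):

from sentence_transformers import util

# Encode the corpus once, then search it with arbitrary queries.
corpus_embeddings = model.encode(sentences, convert_to_tensor=True)
query_embedding = model.encode("dge ba bcu la goms pa dang //", convert_to_tensor=True)

hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)
for hit in hits[0]:
    print(hit["corpus_id"], round(hit["score"], 4))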

Evaluation

Metrics

Semantic Similarity

  • pearson_cosine: 0.8512
  • spearman_cosine: 0.8336
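
These figures are Pearson and Spearman correlations between the model's cosine similarities and gold similarity labels. A sketch of how such scores are typically computed with the library's EmbeddingSimilarityEvaluator; the three pairs below are placeholders standing in for the labeled evaluation set described under Training Details, and the scores reuse its example labels:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("Shailu1492/tibetan-mbert-v1-consecutive-segments")

evaluator = EmbeddingSimilarityEvaluator(
    sentences1=["pair 1, first text", "pair 2, first text", "pair 3, first text"],     # placeholders
    sentences2=["pair 1, second text", "pair 2, second text", "pair 3, second text"],  # placeholders
    scores=[0.75, 0.5625, 0.5625],  # gold similarity labels in [0, 1]
    name="dev",
)
results = evaluator(model)  # dict with keys such as dev_pearson_cosine and dev_spearman_cosine
print(results)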

Training Details

Training Dataset

Unnamed Dataset

  • Size: 93,428 training samples
  • Columns: text1 and text2
  • Approximate statistics based on the first 1000 samples:
    • text1: string; min: 7 tokens, mean: 70.96 tokens, max: 188 tokens
    • text2: string; min: 7 tokens, mean: 68.04 tokens, max: 199 tokens
  • Samples:
    • text1: /lam gyi bden pa bsgoms pas ma rig pa'i zag pa zad do/_/'dod pa'i zag pa spangs pas lha'i bu'i bdud choms so/
      text2: /ldong ros kyi mig sman dang lhan cig bsres nas mig la byugs na/_'khor chen po'i dbus su/_spyan ras gzigs dbang phyug gi gzugs su mthong bar 'gyur ro/
    • text1: /de bzhin du tshangs pa gzhan thams cad kyang sangs rgyas kyi mthus rang rang gi gnas rnams mog mog por gyur par shes te rang rang gi gnas rnams la mi dga' bar gyur to/_/'jig rten skyong ba dang _dbang phyug chen po dang gnas gtsang ma pa rnams kyang rang rang gi gnas rnams la mi dga' bar gyur to/
      text2: /yang na srog gi 'od zer 'phros pas/_bgegs thams cad bcom nas slar 'dus pas snying ga'i lha la mchod par bsam mo/
    • text1: /dge 'dun la phyag 'tshal lo//yang dag par rdzogs pa'i sangs rgyas nyan thos dang rang sangs rgyas dang /byang chub sems dpa'i dge 'dun dang bcas pa bdun po dag la phyag 'tshal lo//gnod sbyin gyi sde dpon chen po lag na rdo rje drag po la phyag 'tshal lo/_/rig sngags kyi rgyal po chen po rdo rje'i lcags kyu la phyag 'tshal lo//rdo rje'i lu gu rgyud la phyag 'tshal lo/
      text2: /bcings zin pa'i phyir/dper na bcings nas yun ring du lon pa bzhin no//ma bcings pa la yang 'ching ba'i rtsom pa med de ma bcings pa'i phyir/_dper na _thar pa bzhin no/
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim",
        "gather_across_devices": false
    }
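
MultipleNegativesRankingLoss treats each (text1, text2) pair as a positive and uses every other text2 in the same batch as a negative, which is why the large per-device batch size listed under Training Hyperparameters (256) matters. A minimal sketch of constructing the loss with the parameters above:

from sentence_transformers import SentenceTransformer, util
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("Intellexus/mbert-tibetan-continual-wylie-final")

# scale=20.0 multiplies the cosine similarities before the softmax cross-entropy
# over in-batch candidates; util.cos_sim corresponds to "similarity_fct": "cos_sim".
loss = MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=util.cos_sim)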
    

Evaluation Dataset

Unnamed Dataset

  • Size: 300 evaluation samples
  • Columns: text1, text2, and label
  • Approximate statistics based on the first 300 samples:
    • text1: string; min: 10 tokens, mean: 35.49 tokens, max: 190 tokens
    • text2: string; min: 9 tokens, mean: 34.67 tokens, max: 226 tokens
    • label: float; min: 0.0, mean: 0.5, max: 1.0
  • Samples:
    • text1: ji ltar sprul pa ston byed pa// rdzu 'phrul phun sum tshogs pa yis// sprul zhing sprul pa'ang gzhan sprul byed// sprul pa des kyang gzhan dag ltar//
      text2: ji ltar ston pas sprul pa ni// rdzu 'phrul phun tshogs kyis sprul zhing // sprul pa de yang sprul pa na// slar yang gzhan ni sprul pa ltar//
      label: 0.75
    • text1: du ba las ni mer shes shing // chu skyar las ni chur shes ltar// blo ldan byang chub sems dpa' yi// rigs ni mtshan ma rnams las shes//
      text2: gang gis gang la gnas pa de nyid don la mi slu bar byed pa ni yang dag pa'i kun rdzob ste/ dper na du ba las me dpog pa bzhin no//
      label: 0.5625
    • text1: rtag tu dge ba'i bshes gnyen ni// bsten pa yongs su gtang mi bya//
      text2: sangs rgyas chos rnams dge ba'i bshes la rten to zhes// yon tan kun gyi mchog mnga' rgyal ba de skad gsung //
      label: 0.5625
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim",
        "gather_across_devices": false
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 256
  • gradient_accumulation_steps: 16
  • learning_rate: 2e-05
  • weight_decay: 0.1
  • num_train_epochs: 7
  • lr_scheduler_type: reduce_lr_on_plateau
  • warmup_ratio: 0.1
  • warmup_steps: 0.1
  • bf16: True
  • dataloader_drop_last: True
  • load_best_model_at_end: True
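
These non-default values map directly onto SentenceTransformerTrainingArguments. A sketch reproducing them (output_dir and save_strategy are assumptions not listed above; the warmup_steps: 0.1 entry appears to duplicate warmup_ratio and is omitted):

from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="models/tibetan-mbert-v1",   # hypothetical output path
    eval_strategy="epoch",
    save_strategy="epoch",                  # assumed; load_best_model_at_end requires it to match eval_strategy
    per_device_train_batch_size=256,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    weight_decay=0.1,
    num_train_epochs=7,
    lr_scheduler_type="reduce_lr_on_plateau",
    warmup_ratio=0.1,
    bf16=True,
    dataloader_drop_last=True,
    load_best_model_at_end=True,
)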

All Hyperparameters

  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 256
  • per_device_eval_batch_size: 8
  • gradient_accumulation_steps: 16
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.1
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 7
  • max_steps: -1
  • lr_scheduler_type: reduce_lr_on_plateau
  • lr_scheduler_kwargs: None
  • warmup_ratio: 0.1
  • warmup_steps: 0.1
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • enable_jit_checkpoint: False
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • use_cpu: False
  • seed: 42
  • data_seed: None
  • bf16: True
  • fp16: False
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: -1
  • ddp_backend: None
  • debug: []
  • dataloader_drop_last: True
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • parallelism_config: None
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • group_by_length: False
  • length_column_name: length
  • project: huggingface
  • trackio_space_id: trackio
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • hub_revision: None
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • auto_find_batch_size: False
  • full_determinism: False
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_num_input_tokens_seen: no
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • liger_kernel_config: None
  • eval_use_gather_object: False
  • average_tokens_across_devices: True
  • use_cache: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional
  • router_mapping: {}
  • learning_rate_mapping: {}

Training Logs

Epoch Step Training Loss Validation Loss spearman_cosine
1.0 23 4.4094 0.6689 0.8336

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 5.2.2
  • Transformers: 5.1.0
  • PyTorch: 2.10.0+cu130
  • Accelerate: 1.12.0
  • Datasets: 4.4.1
  • Tokenizers: 0.22.2

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}