# bert-base-stage2-sbert

A SentenceTransformer checkpoint fine-tuned for Vietnamese legal retrieval.
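
For reference, a minimal retrieval sketch with the sentence-transformers library. The model ID and the example texts below are placeholders, not taken from this card:

```python
from sentence_transformers import SentenceTransformer

# Placeholder ID: substitute the actual repo path of this checkpoint.
model = SentenceTransformer("bert-base-stage2-sbert")

queries = ["a Vietnamese legal question"]
passages = ["candidate legal passage 1", "candidate legal passage 2"]

# Normalized embeddings make the dot product equal to cosine similarity.
q_emb = model.encode(queries, normalize_embeddings=True)
p_emb = model.encode(passages, normalize_embeddings=True)
scores = q_emb @ p_emb.T  # shape: (num_queries, num_passages)
print(scores)
```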

## Evaluation

  • Truncated embedding dimensions evaluated: 768, 512, 256, 128 (a sketch follows below)
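
The numeric rows in the tables below correspond to these truncated dimensions, suggesting Matryoshka-style evaluation in which only the first d components of each 768-d embedding are kept. A minimal sketch, assuming a sentence-transformers version that supports the `truncate_dim` argument (v2.7+):

```python
from sentence_transformers import SentenceTransformer

for dim in [768, 512, 256, 128]:
    # truncate_dim keeps only the first `dim` embedding components;
    # normalize_embeddings=True re-normalizes after truncation.
    model = SentenceTransformer("bert-base-stage2-sbert", truncate_dim=dim)
    emb = model.encode(["sample text"], normalize_embeddings=True)
    print(dim, emb.shape)  # -> (1, dim)
```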

### UIT-ViQuAD2.0

| Method | Accuracy@1 | Accuracy@3 | Accuracy@5 | Accuracy@10 | NDCG@3 | NDCG@5 | NDCG@10 | MRR@3 | MRR@5 | MRR@10 | Recall@5 | Recall@10 | Recall@100 | MAP@100 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| BM25 | 0.606492 | 0.75565 | 0.805643 | 0.861663 | 0.694646 | 0.715313 | 0.733532 | 0.673492 | 0.685004 | 0.692583 | 0.805643 | 0.861663 | 0.966717 | 0.69741 |
| 768 | 0.439666 | 0.633201 | 0.711546 | 0.793179 | 0.552914 | 0.585221 | 0.611751 | 0.525156 | 0.543106 | 0.55413 | 0.711546 | 0.793179 | 0.960827 | 0.561602 |
| 512 | 0.432543 | 0.621148 | 0.702096 | 0.789892 | 0.543308 | 0.57661 | 0.605102 | 0.516368 | 0.534824 | 0.546642 | 0.702096 | 0.789892 | 0.958773 | 0.554129 |
| 256 | 0.413368 | 0.605533 | 0.686207 | 0.774825 | 0.525913 | 0.559013 | 0.587817 | 0.498379 | 0.516671 | 0.528646 | 0.686207 | 0.774825 | 0.954664 | 0.536549 |
| 128 | 0.391316 | 0.572935 | 0.652376 | 0.749349 | 0.497692 | 0.530387 | 0.561732 | 0.471671 | 0.489798 | 0.502725 | 0.652376 | 0.749349 | 0.945624 | 0.511337 |
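
The BM25 row is a lexical baseline. The card does not state which implementation was used; a minimal sketch with the rank_bm25 package and naive whitespace tokenization:

```python
from rank_bm25 import BM25Okapi

corpus = ["legal passage one ...", "legal passage two ..."]  # candidate passages
bm25 = BM25Okapi([doc.split() for doc in corpus])  # whitespace tokenization

query = "example legal question".split()
scores = bm25.get_scores(query)
top10 = sorted(range(len(corpus)), key=lambda i: scores[i], reverse=True)[:10]
print(top10)  # ranked passage indices, as used for Accuracy@k / Recall@k
```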

### Zalo-Legal

| Method | Accuracy@1 | Accuracy@3 | Accuracy@5 | Accuracy@10 | NDCG@3 | NDCG@5 | NDCG@10 | MRR@3 | MRR@5 | MRR@10 | Recall@5 | Recall@10 | Recall@100 | MAP@100 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| BM25 | 0.379442 | 0.616751 | 0.69797 | 0.77665 | 0.515637 | 0.549114 | 0.57497 | 0.48181 | 0.500402 | 0.511188 | 0.696701 | 0.775381 | 0.923858 | 0.517283 |
| 768 | 0.341371 | 0.524112 | 0.611675 | 0.722081 | 0.447023 | 0.483089 | 0.518636 | 0.421531 | 0.441201 | 0.455775 | 0.611041 | 0.721447 | 0.946066 | 0.46543 |
| 512 | 0.333756 | 0.506345 | 0.582487 | 0.72335 | 0.43408 | 0.465382 | 0.511369 | 0.409898 | 0.427475 | 0.446424 | 0.581218 | 0.722716 | 0.944797 | 0.456143 |
| 256 | 0.319797 | 0.506345 | 0.583756 | 0.72335 | 0.430026 | 0.461954 | 0.507023 | 0.404188 | 0.421827 | 0.440389 | 0.583122 | 0.722716 | 0.92703 | 0.449293 |
| 128 | 0.294416 | 0.48731 | 0.585025 | 0.727157 | 0.407384 | 0.447667 | 0.492874 | 0.380499 | 0.402771 | 0.420987 | 0.584391 | 0.726523 | 0.918147 | 0.428892 |
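
For reference, the reported cutoff metrics follow standard definitions; a small illustrative sketch (not the exact evaluation script used here):

```python
def accuracy_at_k(ranked, relevant, k):
    # 1 if any relevant document appears in the top-k, else 0.
    return int(any(doc in relevant for doc in ranked[:k]))

def mrr_at_k(ranked, relevant, k):
    # Reciprocal rank of the first relevant hit within the top-k, else 0.
    for rank, doc in enumerate(ranked[:k], start=1):
        if doc in relevant:
            return 1.0 / rank
    return 0.0

def recall_at_k(ranked, relevant, k):
    # Fraction of the relevant set retrieved within the top-k.
    return sum(doc in relevant for doc in ranked[:k]) / len(relevant)

# Example: the single relevant passage (id 7) is ranked third.
print(accuracy_at_k([2, 5, 7, 1], {7}, 10))  # 1
print(mrr_at_k([2, 5, 7, 1], {7}, 10))       # 0.3333...
print(recall_at_k([2, 5, 7, 1], {7}, 10))    # 1.0
```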