# SentenceTransformer based on BAAI/bge-m3
This is a sentence-transformers model finetuned from BAAI/bge-m3. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details

### Model Description
- Model Type: Sentence Transformer
- Base model: BAAI/bge-m3
- Maximum Sequence Length: 8192 tokens
- Output Dimensionality: 1024 dimensions
- Similarity Function: Cosine Similarity
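As a quick illustration of the similarity function (plain NumPy, not the model's own code): cosine similarity is the dot product of the two L2-normalized vectors, so it ranges from -1 to 1 and is 1 for identical directions.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity: dot product of the L2-normalized vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

a = np.array([1.0, 0.0, 1.0])
b = np.array([1.0, 1.0, 0.0])
print(cosine_similarity(a, b))  # 0.5 — they share one of two active components
```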
### Model Sources
- Documentation: [Sentence Transformers Documentation](https://sbert.net)
- Repository: [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- Hugging Face: [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False, 'architecture': 'XLMRobertaModel'})
  (1): CustomPooler(
    (ln_queries): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
    (ln_tokens): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
    (q_proj): Linear(in_features=1024, out_features=2048, bias=True)
    (k_proj): Linear(in_features=1024, out_features=1024, bias=True)
    (v_proj): Linear(in_features=1024, out_features=1024, bias=True)
    (o_proj): Linear(in_features=2048, out_features=1024, bias=True)
    (attn_drop): Dropout(p=0.05, inplace=False)
    (fusion_proj): Linear(in_features=4096, out_features=1024, bias=True)
    (mlp): SwiGLU(
      (gate_proj): Linear(in_features=2048, out_features=3072, bias=True)
      (up_proj): Linear(in_features=2048, out_features=3072, bias=True)
      (down_proj): Linear(in_features=3072, out_features=1024, bias=True)
      (drop): Dropout(p=0.05, inplace=False)
    )
  )
)
```
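The card prints `CustomPooler`'s submodules but not its forward logic. The sketch below is a *hypothetical* reconstruction that is merely shape-consistent with the printed layers — it assumes a learned 1024-d probe query attending over the token embeddings with 2 query heads sharing a single key/value head, and a 4-way fusion of the attention output, a SwiGLU branch, and mean/max pooling. The actual module may wire these layers differently.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SwiGLU(nn.Module):
    # SwiGLU MLP: down(silu(gate(x)) * up(x)); dims match the printed module.
    def __init__(self, d_in=2048, d_hidden=3072, d_out=1024, p=0.05):
        super().__init__()
        self.gate_proj = nn.Linear(d_in, d_hidden)
        self.up_proj = nn.Linear(d_in, d_hidden)
        self.down_proj = nn.Linear(d_hidden, d_out)
        self.drop = nn.Dropout(p)

    def forward(self, x):
        return self.drop(self.down_proj(F.silu(self.gate_proj(x)) * self.up_proj(x)))

class CustomPoolerSketch(nn.Module):
    # Hypothetical wiring: 2 query heads (from a learned probe) share one
    # key/value head over the tokens; the fused output concatenates the
    # attention output, a SwiGLU branch, and mean/max pooling (assumption).
    def __init__(self, d=1024, n_heads=2, p=0.05):
        super().__init__()
        self.n_heads = n_heads
        self.query = nn.Parameter(torch.randn(d) * 0.02)  # learned probe (assumed)
        self.ln_queries = nn.LayerNorm(d)
        self.ln_tokens = nn.LayerNorm(d)
        self.q_proj = nn.Linear(d, n_heads * d)   # 1024 -> 2048
        self.k_proj = nn.Linear(d, d)
        self.v_proj = nn.Linear(d, d)
        self.o_proj = nn.Linear(n_heads * d, d)   # 2048 -> 1024
        self.attn_drop = nn.Dropout(p)
        self.fusion_proj = nn.Linear(4 * d, d)    # 4096 -> 1024
        self.mlp = SwiGLU(n_heads * d, 3072, d, p)

    def forward(self, token_embeddings):  # (batch, seq, 1024)
        b, s, d = token_embeddings.shape
        q = self.q_proj(self.ln_queries(self.query)).view(self.n_heads, d)  # (2, 1024)
        tokens = self.ln_tokens(token_embeddings)
        k = self.k_proj(tokens)                                             # (b, s, 1024)
        v = self.v_proj(tokens)                                             # (b, s, 1024)
        scores = torch.einsum("hd,bsd->bhs", q, k) / d ** 0.5
        attn = self.attn_drop(scores.softmax(dim=-1))
        heads = torch.einsum("bhs,bsd->bhd", attn, v).reshape(b, -1)        # (b, 2048)
        a = self.o_proj(heads)                                              # (b, 1024)
        m = self.mlp(heads)                                                 # (b, 1024)
        mean = token_embeddings.mean(dim=1)
        mx = token_embeddings.max(dim=1).values
        return self.fusion_proj(torch.cat([a, m, mean, mx], dim=-1))        # (b, 1024)

pooled = CustomPoolerSketch()(torch.randn(2, 16, 1024))
print(pooled.shape)  # torch.Size([2, 1024])
```

Whatever the real forward pass, the printed shapes pin down the interface: the pooler consumes per-token 1024-d states and emits a single 1024-d sentence embedding.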
## Usage

### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```shell
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("bobox/custom-XLMRoBERTa-m3-pooler-step1")

# Run inference
sentences = [
    'The weather is lovely today.',
    "It's so sunny outside!",
    'He drove to the stadium.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 1024)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.9497, 0.8788],
#         [0.9497, 1.0000, 0.8856],
#         [0.8788, 0.8856, 1.0000]])
```
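Beyond pairwise scores, the same embeddings support semantic search: rank corpus sentences by cosine similarity to a query embedding. A minimal sketch in plain PyTorch, with synthetic 1024-d vectors standing in for `model.encode(...)` output (`rank_by_cosine` is an illustrative helper, not part of the library):

```python
import torch
import torch.nn.functional as F

def rank_by_cosine(query_emb, corpus_embs, top_k=3):
    """Return (scores, indices) of the top_k corpus rows most similar to the query."""
    q = F.normalize(query_emb, dim=-1)
    c = F.normalize(corpus_embs, dim=-1)
    sims = c @ q  # dot product of unit vectors = cosine similarity
    return torch.topk(sims, k=min(top_k, corpus_embs.size(0)))

# Synthetic embeddings standing in for model.encode(...) output
torch.manual_seed(0)
corpus = torch.randn(5, 1024)
query = corpus[2] + 0.01 * torch.randn(1024)  # near-duplicate of row 2
scores, idx = rank_by_cosine(query, corpus)
print(idx[0].item())  # 2 — the near-duplicate ranks first
```

With real text, you would encode the query and corpus with `model.encode` and rank the resulting rows the same way.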
## Training Details

### Framework Versions
- Python: 3.12.12
- Sentence Transformers: 5.3.0
- Transformers: 5.0.0
- PyTorch: 2.10.0+cu128
- Accelerate: 1.13.0
- Datasets: 4.0.0
- Tokenizers: 0.22.2