# potion-256d-v3 Model Card

potion-256d-v3 is a Model2Vec static embedding model, pre-trained with Tokenlearn using contrastive learning and born-again self-distillation, and distilled from mixedbread-ai/mxbai-embed-large-v1. Because the embeddings are static, text embeddings can be computed orders of magnitude faster than with a transformer, on both GPU and CPU, which makes the model well suited to applications with limited compute or strict real-time requirements. It improves on potion-base-32M by +2.31 CatAVG through a stronger teacher model, contrastive Tokenlearn training, born-again self-distillation, power normalization, and PCA compression from 512 to 256 dimensions.
## Installation

Install model2vec using pip:

```bash
pip install model2vec
```
## Usage

Load this model using the `from_pretrained` method:

```python
from model2vec import StaticModel

# Load a pretrained Model2Vec model
model = StaticModel.from_pretrained("blobbybob/potion-256d-v3")

# Compute text embeddings
embeddings = model.encode(["Example sentence"])
```
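`encode` returns a NumPy array with one row per input, so downstream tasks such as semantic similarity reduce to a dot product. The sketch below uses a small synthetic array in place of real model output so it runs without downloading the model; the shapes and the cosine computation are illustrative, not part of the model2vec API:

```python
import numpy as np

# Stand-in for `model.encode(...)` output: 3 sentences x 256 dimensions
rng = np.random.default_rng(42)
embeddings = rng.normal(size=(3, 256))

# Normalize rows to unit length, then compare the first sentence
# against the remaining two with cosine similarity.
unit = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
similarities = unit[1:] @ unit[0]
print(similarities.shape)  # (2,)
```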
## How it works

Model2Vec creates a small static model that outperforms other static embedding models by a large margin on all MTEB tasks. This model is pre-trained using Tokenlearn and was created with the following steps:
- **Distillation:** a model is distilled from mxbai-embed-large-v1 at 512 dimensions using Model2Vec with a 63K token vocabulary.
- **Training data creation:** the teacher model is used to create training data by encoding 500K sentences from C4.
- **Contrastive training:** the distilled model is trained on the training data using Tokenlearn with contrastive loss.
- **Born-again self-distillation:** the trained model is further improved by distilling from itself (alpha = 1.0), gaining +0.5 CatAVG.
- **Power normalization:** embeddings are transformed with sign(E) * |E|^0.7 for improved isotropy.
- **PCA compression:** the 512D model is compressed to 256D via PCA, preserving more variance than training directly at 256D.
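The last two post-processing steps (power normalization followed by PCA compression) can be sketched in plain NumPy. This is a minimal illustration on a toy embedding matrix, not the released training code:

```python
import numpy as np

# Toy stand-in for a 512-dimensional embedding table
rng = np.random.default_rng(0)
E = rng.normal(size=(1000, 512))

# Power normalization: sign(E) * |E|^0.7 dampens large components,
# improving the isotropy of the embedding space.
E_pow = np.sign(E) * np.abs(E) ** 0.7

# PCA compression from 512 to 256 dimensions via SVD on the
# mean-centered embeddings: keep the top 256 principal directions.
mean = E_pow.mean(axis=0)
_, _, Vt = np.linalg.svd(E_pow - mean, full_matrices=False)
components = Vt[:256]
E_256 = (E_pow - mean) @ components.T

print(E_256.shape)  # (1000, 256)
```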
## Results
| Model | STS | Classification | PairClassification | CatAVG |
|---|---|---|---|---|
| potion-256d-v3 | 79.32 | 63.23 | 73.97 | 72.17 |
| potion-base-32M | 78.97 | 61.42 | 69.18 | 69.86 |
| all-MiniLM-L6-v2 | 78.95 | 69.25 | 82.37 | 74.65 |
| GloVe 300d | 61.52 | 62.73 | 72.48 | 61.45 |
The results show that potion-256d-v3 outperforms potion-base-32M by +2.31 CatAVG while remaining orders of magnitude faster than transformer models like all-MiniLM-L6-v2.
## Additional Resources
- All Model2Vec models on the hub
- Model2Vec Repo
- Tokenlearn repo
- Model2Vec Results
- Model2Vec Tutorials
## Citation

Please cite the Model2Vec repository if you use this model in your work.

```bibtex
@software{minishlab2024model2vec,
  author = {Stephan Tulkens and {van Dongen}, Thomas},
  title = {Model2Vec: Fast State-of-the-Art Static Embeddings},
  year = {2024},
  publisher = {Zenodo},
  doi = {10.5281/zenodo.17270888},
  url = {https://github.com/MinishLab/model2vec},
  license = {MIT}
}
```