LequeuISIR/ModernBERT-base-DPR-8e-05

Tags: Sentence Similarity · sentence-transformers · Safetensors · modernbert · feature-extraction · Generated from Trainer · dataset_size:478146 · loss:CoSENTLoss · text-embeddings-inference
Instructions for using LequeuISIR/ModernBERT-base-DPR-8e-05 with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.

  • Libraries
  • sentence-transformers

    How to use LequeuISIR/ModernBERT-base-DPR-8e-05 with sentence-transformers:

    from sentence_transformers import SentenceTransformer

    # Download the model from the Hugging Face Hub
    model = SentenceTransformer("LequeuISIR/ModernBERT-base-DPR-8e-05")

    sentences = [
        "However, its underutilization is mainly due to the absence of a concrete and coherent dissemination strategy.",
        "At the same time, they need to understand that living in Europe brings great responsibilities in addition to great benefits.",
        "The mainstay of any intelligent and patriotic mineral policy can be summed up in the following postulate: \"since minerals are exhaustible, they should only be exploited with the maximum return for the economy of the country where they are mined\".",
        "We must move quickly to a shared sustainable energy supply, sustainable transportation and clean air."
    ]
    # Compute one embedding vector per sentence
    embeddings = model.encode(sentences)

    # Pairwise similarity scores between all sentences
    similarities = model.similarity(embeddings, embeddings)
    print(similarities.shape)
    # torch.Size([4, 4])
  • Notebooks
  • Google Colab
  • Kaggle
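The `model.similarity` call above returns a pairwise score matrix; for most Sentence Transformers models (and unless overridden in this repo's config_sentence_transformers.json) the default similarity function is cosine similarity. A minimal NumPy sketch of what that computation does, using toy low-dimensional vectors in place of the model's real embeddings:

```python
import numpy as np

def cosine_similarity_matrix(embeddings: np.ndarray) -> np.ndarray:
    """Pairwise cosine similarity between the rows of an embedding matrix."""
    # Normalize each row to unit length, then take all pairwise dot products.
    norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
    normalized = embeddings / norms
    return normalized @ normalized.T

# Toy 4 x 3 "embeddings" standing in for the model's real output vectors.
emb = np.array([
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [1.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
])
sims = cosine_similarity_matrix(emb)
print(sims.shape)  # (4, 4)
```

The diagonal is always 1.0 (every vector is maximally similar to itself), and off-diagonal entries lie in [-1, 1], with higher values indicating more similar sentences.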
ModernBERT-base-DPR-8e-05
600 MB
  • 1 contributor
History: 2 commits
LequeuISIR · Add new SentenceTransformer model · 4062068 · verified · over 1 year ago
  • 1_Pooling
    Add new SentenceTransformer model over 1 year ago
  • .gitattributes
    1.52 kB
    initial commit over 1 year ago
  • README.md
    21.8 kB
    Add new SentenceTransformer model over 1 year ago
  • config.json
    1.29 kB
    Add new SentenceTransformer model over 1 year ago
  • config_sentence_transformers.json
    205 Bytes
    Add new SentenceTransformer model over 1 year ago
  • model.safetensors
    596 MB
    Add new SentenceTransformer model over 1 year ago
  • modules.json
    229 Bytes
    Add new SentenceTransformer model over 1 year ago
  • sentence_bert_config.json
    54 Bytes
    Add new SentenceTransformer model over 1 year ago
  • special_tokens_map.json
    694 Bytes
    Add new SentenceTransformer model over 1 year ago
  • tokenizer.json
    3.58 MB
    Add new SentenceTransformer model over 1 year ago
  • tokenizer_config.json
    20.8 kB
    Add new SentenceTransformer model over 1 year ago