sentence-transformers compatibility

#1
by LPN64 - opened

Hello,
I'm trying to update my needle-in-a-haystack benchmark with your new model; bge-m3 was already SOTA.

Details about the benchmark are here: https://huggingface.co/HIT-TMG/KaLM-embedding-multilingual-mini-instruct-v2/discussions/2

But I'm getting really weird results: very high scores on hard tasks and very low scores on easy ones:

[image: benchmark results]

Have you tested sentence-transformers compatibility?

Also, what are the default query prompts ?

Beijing Academy of Artificial Intelligence org

Hello, @LPN64. I have updated the files to integrate this model with Sentence Transformers. You can now try running the evaluation again.

Thanks @hanhainebula

Here are the results comparing bge-m3 (sentence-transformers backend) against this model with the FlagEmbedding backend:

```python
from FlagEmbedding import FlagLLMModel

model = FlagLLMModel(
    "BAAI/bge-reasoner-embed-qwen3-8b-0923",
    query_instruction_for_retrieval="Given a question, retrieve relevant passages that help answer the question.",
    query_instruction_format="Instruct: {}\nQuery: {}",
    devices=device,  # set devices to "cuda:0" for testing on a single GPU
    use_fp16=True,
)
```

[image: evaluation results, FlagEmbedding backend]

Here are the results with the updated repository's sentence-transformers compatibility:

[image: evaluation results, sentence-transformers backend]

LPN64 changed discussion status to closed
