bge-large-en-v1.5-GGUF for llama.cpp
This repository contains a converted version of the BAAI/bge-large-en-v1.5 text-embedding model in GGUF format, prepared for use with llama.cpp or the Python llama-cpp-python library.
Original Model: BAAI/bge-large-en-v1.5
Conversion Details:
- The conversion was performed using llama.cpp's convert-hf-to-gguf.py script.
- The resulting GGUF file allows the model to run with llama.cpp.
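For reference, a conversion invocation typically looks like the sketch below. The input and output paths are placeholders; adjust them to your local llama.cpp checkout and model directory.

```sh
# Hypothetical paths; run from a llama.cpp checkout with the
# original BAAI/bge-large-en-v1.5 weights downloaded locally.
python convert-hf-to-gguf.py /path/to/bge-large-en-v1.5 \
  --outfile bge-large-en-v1.5-f16.gguf \
  --outtype f16
```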
Usage:
This model can be loaded and used for text embedding tasks using the llama-cpp-python library. Here's an example:
from llama_cpp import Llama

# Load the converted model from a local GGUF file;
# embedding=True enables the embedding endpoint.
model = Llama(model_path="./bge-large-en-v1.5-ggml-f16.gguf", embedding=True)

# Encode some text
text = "This is a sample sentence."
embedding = model.embed(text)
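Embeddings are usually compared with cosine similarity. The helper below is a self-contained sketch (it is not part of llama-cpp-python) that works on any two vectors of equal length, such as those returned by the embed() call above:

```python
import math

def cosine_similarity(a, b):
    # Dot product divided by the product of the vector norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy vectors for illustration; real bge-large-en-v1.5 embeddings
# are 1024-dimensional.
print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # identical direction -> 1.0
```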
Important Notes:
- The converted model may produce slightly different embeddings than the original due to the 16-bit (f16) conversion.
- Ensure you have the llama-cpp-python library installed for this model to function.
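If it is not already installed, the library is available from PyPI:

```sh
pip install llama-cpp-python
```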
License:
The license for this model is inherited from the original BAAI/bge-large-en-v1.5 model (refer to the original model's repository for details).
Contact:
Feel free to create an issue in this repository for any questions or feedback.