# harrier-oss-v1-270m-fastembed

Cleaned ONNX packaging for `fastembed.TextEmbedding`.
## Layout

- `config.json`
- `tokenizer.json`
- `tokenizer_config.json`
- `special_tokens_map.json`
- `model_quantized.onnx`
- `model_quantized.onnx_data`
## Notes

- `model_max_length` in `tokenizer_config.json` is patched to `32768` to avoid the fastembed truncation overflow that occurs with the Hugging Face sentinel value.
- The model is intended to be loaded with `TextEmbedding.add_custom_model(...)`.
- The target repository for this packaging run is `ferrisS/harrier-oss-v1-270m-fastembed`.
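The `model_max_length` patch from the first note can be reproduced locally. A minimal sketch, assuming a downloaded copy of `tokenizer_config.json`; `patch_model_max_length` is a hypothetical helper written for this card, not part of fastembed:

```python
import json
from pathlib import Path


def patch_model_max_length(path: str, limit: int = 32768) -> int:
    """Clamp model_max_length in a tokenizer_config.json to `limit`.

    Hugging Face exports sometimes carry a huge sentinel value
    (or a float) in this field, which overflows fastembed's
    truncation logic; pinning it to a real context length avoids that.
    Returns the value stored after patching.
    """
    p = Path(path)
    cfg = json.loads(p.read_text())
    current = cfg.get("model_max_length", 0)
    # Replace anything that is not a sane integer within the limit.
    if not isinstance(current, int) or current > limit:
        cfg["model_max_length"] = limit
        p.write_text(json.dumps(cfg, indent=2))
    return cfg["model_max_length"]
```

Running this against the repo's `tokenizer_config.json` is a no-op, since the shipped file is already patched.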
## Example

```python
from fastembed import TextEmbedding
from fastembed.common.model_description import ModelSource, PoolingType

# Register the custom ONNX packaging with fastembed.
TextEmbedding.add_custom_model(
    model="ferrisS/harrier-oss-v1-270m-fastembed",
    pooling=PoolingType.MEAN,
    normalization=True,
    sources=ModelSource(hf="ferrisS/harrier-oss-v1-270m-fastembed"),
    dim=640,
    model_file="model_quantized.onnx",
    additional_files=["model_quantized.onnx_data"],
)

model = TextEmbedding(model_name="ferrisS/harrier-oss-v1-270m-fastembed")
vectors = list(model.embed(["hello world"]))
```
## Base model

`microsoft/harrier-oss-v1-270m`