Model Overview
- Model Architecture: Qwen3
- Input: Text
- Output: Embeddings (Vector)
- Maximum Context Length: 8,192 tokens
- Task Type: Embedding (text-to-vector)
- Intended Use Cases: Semantic search, retrieval, and similarity matching. Same as Qwen/Qwen3-Embedding-8B.
- Release Date: 01/20/2026
- Version: v2026.1
- License(s): Apache 2.0 License
- Supported Inference Engine(s): Furiosa LLM
- Supported Hardware Compatibility: FuriosaAI RNGD
- Preferred Operating System(s): Linux
Description:
This model is a pre-compiled version of Qwen/Qwen3-Embedding-8B, an embedding model that generates dense text representations for semantic search and retrieval tasks.
Usage
To run this model with Furiosa-LLM, install Furiosa-LLM and its prerequisites, then run the example below.

```python
from furiosa_llm import LLM

# Load the pre-compiled artifact
llm = LLM.from_artifacts("furiosa-ai/Qwen3-Embedding-8B")

# Embed a batch of input texts; returns one dense vector per input
embeddings = llm.embed(["Hello, world!", "How are you?"])
```
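The returned vectors are typically compared with cosine similarity for semantic search and similarity matching. A minimal sketch of that comparison step, assuming the embeddings behave as plain sequences of floats (the exact return type of `llm.embed` may differ, and the vectors below are hypothetical stand-ins for real model output):

```python
import math

def cosine_similarity(a, b):
    # Dot product divided by the product of the vector norms
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical vectors standing in for llm.embed(...) output
query_vec = [0.1, 0.3, 0.5]
doc_vec = [0.2, 0.1, 0.4]

score = cosine_similarity(query_vec, doc_vec)
```

Scores closer to 1.0 indicate more semantically similar texts; ranking documents by this score against a query vector is the basic retrieval loop.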