cpt-qwen-composer for Ollama

This folder stages the local cpt-qwen-composer LoRA adapter and its base model in GGUF format for Ollama.

  • gguf/Qwen3-8B-Base-q8_0.gguf: local GGUF export of Qwen/Qwen3-8B-Base
  • gguf/cpt-qwen-composer-adapter-f16.gguf: local GGUF export of the cpt-qwen-composer LoRA adapter
  • Modelfile: points Ollama at the GGUF base model plus GGUF adapter
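The Modelfile wiring is minimal. A sketch matching the filenames above (the real Modelfile in this folder may also set TEMPLATE or PARAMETER directives, which are omitted here):

FROM ./gguf/Qwen3-8B-Base-q8_0.gguf
ADAPTER ./gguf/cpt-qwen-composer-adapter-f16.gguf

FROM and ADAPTER are standard Ollama Modelfile directives; ADAPTER layers the LoRA weights on top of the base model at load time.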

Local build:

cd /Users/milwright/clawd/diss/ollama-cpt-qwen-composer
ollama create cpt-qwen-composer-local -f Modelfile
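
Once the create step finishes, the model can be exercised directly (the trailing prompt is optional; without it, ollama run opens an interactive session):

ollama run cpt-qwen-composer-local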

Hugging Face repo:

milwright/cpt-qwen-composer
  • Format: GGUF
  • Adapter size: 46.1M params
  • Architecture: qwen3
  • Quantizations: 8-bit (base), 16-bit (adapter)