cpt-qwen-composer for Ollama
This folder stages the local cpt-qwen-composer adapter in GGUF format for Ollama.
- `gguf/Qwen3-8B-Base-q8_0.gguf`: local GGUF export of `Qwen/Qwen3-8B-Base`
- `gguf/cpt-qwen-composer-adapter-f16.gguf`: local GGUF export of the `cpt-qwen-composer` LoRA adapter
- `Modelfile`: points Ollama at the GGUF base model plus the GGUF adapter
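The Modelfile contents are not reproduced here; a minimal sketch of what such a Modelfile likely looks like, assuming the file names listed above and Ollama's standard `FROM`/`ADAPTER` directives:

```
# Base model in GGUF format (q8_0 quantization)
FROM ./gguf/Qwen3-8B-Base-q8_0.gguf

# LoRA adapter applied on top of the base model
ADAPTER ./gguf/cpt-qwen-composer-adapter-f16.gguf
```

Paths are relative to the directory passed to `ollama create`.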
Local build:
```shell
cd /Users/milwright/clawd/diss/ollama-cpt-qwen-composer
ollama create cpt-qwen-composer-local -f Modelfile
```
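Once created, the model can be queried through Ollama's local HTTP API (default port 11434). A minimal sketch of a non-streaming request body for the `/api/generate` endpoint; the prompt text is a placeholder:

```python
import json

# Body for a POST to http://localhost:11434/api/generate
# Model name matches the `ollama create` invocation above.
payload = {
    "model": "cpt-qwen-composer-local",
    "prompt": "Hello",  # placeholder prompt
    "stream": False,    # return a single JSON object instead of a stream
}

body = json.dumps(payload)
print(body)
```

Sending this body with any HTTP client (e.g. `curl -d @body.json http://localhost:11434/api/generate`) returns the model's completion as JSON.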
Hugging Face repo:
milwright/cpt-qwen-composer