ecu-pilot (GGUF Q8_0)

Q8_0 GGUF quantization of ecu-pilot-fp16, a Qwen3.5-35B-A3B fine-tune for structured tool calling against project metadata via MCP.

Quantization

Source: mach-kernel/ecu-pilot-fp16
Method: Q8_0 via llama.cpp
Size: ~35 GB
Architecture: Mixture of Experts (35B total, 3B active per token); GGUF architecture tag qwen35moe
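The ~35 GB figure can be sanity-checked from the parameter count. A back-of-the-envelope sketch (not the exact GGUF layout, which adds small metadata overhead): llama.cpp's Q8_0 stores weights in blocks of 32 int8 values plus one fp16 scale, i.e. 34 bytes per 32 weights.

```python
# Rough size estimate for a Q8_0 quantization of a 35B-parameter model.
params = 35e9                  # total parameter count (35B)
bytes_per_weight = 34 / 32     # Q8_0 block: 32 int8 weights + 2-byte fp16 scale
size_gib = params * bytes_per_weight / 2**30
print(f"~{size_gib:.1f} GiB")  # prints ~34.6 GiB, matching the ~35 GB above
```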

Usage

Ollama

cat > Modelfile <<'EOF'
FROM ./ecu-pilot-q8_0.gguf
PARAMETER temperature 0.2
PARAMETER num_ctx 8192
PARAMETER stop <|im_end|>
EOF

ollama create ecu-pilot -f Modelfile
ollama run ecu-pilot
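Since the model is tuned for tool calling, you will typically want to pass a tools array when talking to Ollama's /api/chat endpoint. A minimal sketch of such a request body; the tool name and schema (get_project_metadata) are hypothetical stand-ins for whatever your MCP server actually exposes:

```python
import json

# Build an OpenAI-style tool definition for Ollama's /api/chat endpoint.
payload = {
    "model": "ecu-pilot",
    "messages": [
        {"role": "user", "content": "Who owns the billing service?"},
    ],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_project_metadata",  # hypothetical example tool
            "description": "Look up metadata for a project by name",
            "parameters": {
                "type": "object",
                "properties": {"project": {"type": "string"}},
                "required": ["project"],
            },
        },
    }],
    "stream": False,
}
body = json.dumps(payload)
# POST `body` to http://localhost:11434/api/chat (e.g. with curl or requests).
```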

llama.cpp

llama-cli -m ecu-pilot-q8_0.gguf -ngl 99 -cnv
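When driving llama.cpp (or any raw completion endpoint) yourself, you parse tool calls out of the generated text. Qwen-family chat templates wrap each call in <tool_call>…</tool_call> tags containing a JSON object; a minimal parser for that convention, with a fabricated sample response for illustration:

```python
import json
import re

# Fabricated model output in the Qwen <tool_call> convention.
response = (
    'Let me check.\n'
    '<tool_call>\n'
    '{"name": "get_project_metadata", "arguments": {"project": "billing"}}\n'
    '</tool_call>'
)

def extract_tool_calls(text: str) -> list[dict]:
    """Return each <tool_call> payload as a parsed dict."""
    return [
        json.loads(m)
        for m in re.findall(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", text, re.S)
    ]

calls = extract_tool_calls(response)
print(calls[0]["name"])  # prints get_project_metadata
```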

All variants

Format                 Repository                       Size
FP16                   mach-kernel/ecu-pilot-fp16       ~67 GB
GGUF Q4_K_M            mach-kernel/ecu-pilot-q4km       ~20 GB
GGUF Q8_0 (this repo)  mach-kernel/ecu-pilot-q8_0       ~35 GB
LoRA adapter           mach-kernel/ecu-pilot-fp16-lora  ~4 GB

Why "ecu"

No reason. Just liked how it sounded. Definitely not a Caesar cipher of anything. Don't look into it.
