# GreenBitAI/QwQ-32B-layer-mix-bpw-4.0-mlx
This quantized low-bit model, GreenBitAI/QwQ-32B-layer-mix-bpw-4.0-mlx, was converted to MLX format from GreenBitAI/QwQ-32B-layer-mix-bpw-4.0 using gbx-lm version 0.3.9.
Refer to the original model card for more details on the model.
## Use with mlx
```bash
pip install gbx-lm
```
```python
from gbx_lm import load, generate

model, tokenizer = load("GreenBitAI/QwQ-32B-layer-mix-bpw-4.0-mlx")

prompt = "hello"

# Apply the model's chat template when one is available.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
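For longer outputs you can pass generation parameters through to `generate`. The sketch below assumes gbx-lm follows the mlx-lm `generate` API; in particular, the `max_tokens` parameter is that assumption rather than something documented here.

```python
from gbx_lm import load, generate

model, tokenizer = load("GreenBitAI/QwQ-32B-layer-mix-bpw-4.0-mlx")

messages = [{"role": "user", "content": "Explain the Fibonacci sequence briefly."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# QwQ is a reasoning model and tends to emit long chains of thought,
# so a generous token budget is usually needed.
response = generate(
    model,
    tokenizer,
    prompt=prompt,
    max_tokens=1024,  # assumed to mirror the mlx-lm generate API
    verbose=True,
)
print(response)
```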
Model size: 6B params · Tensor types: F16, I16, U32

(The 6B figure counts the packed low-bit tensors rather than the original 32B parameters of the base architecture.)