# spoomplesmaxx-27b-4500-mlx-6bit

MLX 6-bit quantized version of `aimeri/spoomplesmaxx-27b-4500`, converted with mlx-vlm.
## About the base model

`spoomplesmaxx-27b-4500` is a continued-pretraining (CPT) checkpoint of Google's Gemma 3 27B PT, trained for 4,500 steps on text data using Unsloth. The SigLIP vision encoder is preserved from the original Gemma 3 architecture, so the model retains multimodal (text + image) capability. See the base model card for full training details.
## Quantization details
| Parameter | Value |
|---|---|
| Bits | 6 |
| Group size | 64 |
| Mode | Affine |
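
For intuition, the affine, group-size-64 scheme in the table can be sketched in plain NumPy. This is an illustrative sketch only, not MLX's actual kernel: the real implementation packs the 6-bit values into machine words and stores half-precision scales and biases per group.

```python
import numpy as np

def affine_quantize(x, bits=6, group_size=64):
    """Quantize each group of `group_size` values to `bits`-bit integers
    with a per-group scale and offset (affine mode)."""
    g = x.reshape(-1, group_size)
    lo = g.min(axis=1, keepdims=True)
    hi = g.max(axis=1, keepdims=True)
    scale = (hi - lo) / (2**bits - 1)
    scale = np.where(scale == 0, 1.0, scale)  # guard constant groups
    q = np.round((g - lo) / scale).astype(np.uint8)  # values in [0, 63] for 6 bits
    return q, scale, lo

def affine_dequantize(q, scale, lo):
    return q * scale + lo

np.random.seed(0)
x = np.random.randn(2, 128).astype(np.float32)
q, scale, lo = affine_quantize(x)
x_hat = affine_dequantize(q, scale, lo).reshape(x.shape)
print("max reconstruction error:", np.abs(x - x_hat).max())
```

At 6 bits the worst-case per-weight error is half a quantization step (`scale / 2`), which is why 6-bit group-wise quantization is typically near-lossless for generation quality.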
## Usage

```bash
pip install -U mlx-vlm
```
Text + image:

```bash
python -m mlx_vlm.generate \
  --model aimeri/spoomplesmaxx-27b-4500-mlx-6bit \
  --prompt "Describe this image." \
  --image path/to/image.jpg \
  --max-tokens 200
```
Text only:

```bash
python -m mlx_vlm.generate \
  --model aimeri/spoomplesmaxx-27b-4500-mlx-6bit \
  --prompt "Once upon a time" \
  --max-tokens 200
```
Python:

```python
from mlx_vlm import load, generate

model, processor = load("aimeri/spoomplesmaxx-27b-4500-mlx-6bit")
output = generate(
    model,
    processor,
    prompt="Describe this image.",
    image="path/to/image.jpg",
    max_tokens=200,
)
print(output)
```
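
A quick back-of-the-envelope memory estimate helps decide whether the model fits on your machine. The parameter count (~27B) is assumed from the model name, and the 16-bit scale/bias per group of 64 weights is an assumption about the storage format, not a figure from the model card.

```python
# Rough weight-storage estimate for the 6-bit quantization.
params = 27e9                      # assumed from the "27b" in the model name
bits_per_weight = 6 + 2 * 16 / 64  # 6-bit payload + per-group scale/bias (assumed fp16)
size_gb = params * bits_per_weight / 8 / 1e9
print(f"~{size_gb:.0f} GB of weights")  # excludes activations and KV cache
```

In practice you should budget a few extra GB of unified memory on top of this for activations and the KV cache.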
## Model lineage

- Base model: `aimeri/spoomplesmaxx-base-gemma3-27b-4500`
- Finetuned: `aimeri/spoomplesmaxx-27b-4500`
- This model: 6-bit MLX quantization of the above