Jackrong Qwen3.5-9B Claude Reasoning - Abliterated (MXFP8 MLX)
This is an abliterated (uncensored), MXFP8-quantized version of Jackrong/Qwen3.5-9B-Claude-4.6-Opus-Reasoning-Distilled, converted to MLX format for Apple Silicon.
What is this model?
The base model is Qwen3.5-9B fine-tuned on Claude 4.6 Opus reasoning data using Unsloth/LoRA. This version is abliterated (safety refusals removed) and quantized to MXFP8 for reduced memory usage (~8.6 GB vs ~16.6 GB bf16).
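The memory figures above are consistent with the quantization details below. A quick sanity check (illustrative arithmetic using the quoted numbers, not measured values):

```python
# bf16 stores 2 bytes per weight; MXFP8 averages ~8.25 bits per weight.
bf16_gb = 16.6                      # quoted bf16 size in GB
params = bf16_gb * 1e9 / 2          # effective weight count (~8.3B)
mxfp8_gb = params * 8.25 / 8 / 1e9  # 8.25 bits per weight, 8 bits per byte
print(round(mxfp8_gb, 1))           # -> 8.6, matching the quoted ~8.6 GB
```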
Abliteration Details
- Method: lukey03 (1 direction, norm-preserving, 3 refinement passes)
- Post-processing: LoRA compliance fine-tuning (rank=64, 80 iterations)
- Quantization: MXFP8 (8.25 bits per weight)
- Format: MLX (Apple Silicon native)
- Tool: OBLITERATUS
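The "8.25 bits per weight" figure follows from the microscaling layout, assuming the standard OCP MX scheme in which each block of 32 FP8 weights shares one 8-bit exponent scale:

```python
# Amortized storage cost per weight under 32-element MX blocks:
# 8-bit FP8 payload plus one 8-bit shared scale spread over the block.
block_size = 32
bits_per_weight = 8 + 8 / block_size
print(bits_per_weight)  # -> 8.25
```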
Usage
pip install mlx-lm
mlx_lm.generate --model AITRADER/Jackrong-Qwen3.5-9B-Claude-Reasoning-abliterated-mxfp8-MLX --prompt "Explain quantum computing"
Full Precision Version
The bf16 version is available at: AITRADER/Jackrong-Qwen3.5-9B-Claude-Reasoning-abliterated-fp16-MLX
Disclaimer
This model is provided for research purposes. Users are responsible for ensuring their use complies with applicable laws and regulations.
Model tree for AITRADER/Jackrong-Qwen3.5-9B-Claude-Reasoning-abliterated-mxfp8-MLX
- Base model: Qwen/Qwen3.5-9B-Base
- Fine-tuned: Qwen/Qwen3.5-9B