Uploading mixed-precision MLX quantizations of popular open-weight models, produced with oMLX’s oQ (sensitivity-driven bit allocation). Target averages of 2, 3, 4, 6, and 8 bits are provided where feasible; the actual per-layer bit widths vary with each layer’s measured sensitivity. The quantized models are compatible with mlx-lm, mlx-vlm, and oMLX on Apple Silicon. Benchmarks and comparisons are welcome.
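
These uploads load with the standard mlx-lm API. A minimal sketch is below; the repo id is a hypothetical placeholder, so substitute the path of an actual upload:

```python
# Minimal sketch: running one of these quantizations with mlx-lm.
# "your-org/Model-7B-oq-4bit" is a hypothetical placeholder repo id.
from mlx_lm import load, generate

# load() fetches the weights and the quantization config stored alongside
# them (from the Hugging Face Hub or a local path).
model, tokenizer = load("your-org/Model-7B-oq-4bit")

# Generate a short completion to sanity-check the quantized model.
response = generate(
    model,
    tokenizer,
    prompt="Briefly explain mixed-precision quantization.",
    max_tokens=128,
    verbose=True,
)
```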