---
license: other
language:
- en
pipeline_tag: text-generation
tags:
- mlx
- safetensors
- nemotron_h
- text-generation
- conversational
- custom_code
- 4-bit
- heretic
- uncensored
- decensored
- abliterated
base_model: trohrbaugh/Nemotron-Cascade-2-30B-A3B-heretic-uncensored
base_model_relation: quantized
---

# IvanSmit05/Nemotron-Cascade-2-30B-A3B-heretic-mlx-4bit

This model was converted to MLX format from [trohrbaugh/Nemotron-Cascade-2-30B-A3B-heretic-uncensored](https://huggingface.co/trohrbaugh/Nemotron-Cascade-2-30B-A3B-heretic-uncensored) using [mlx-lm](https://github.com/ml-explore/mlx-lm). Refer to the original model card for more details on the model.

## Use with mlx

Install mlx-lm and generate from the command line:

```shell
pip install -U mlx-lm
```

```shell
python -m mlx_lm.generate \
  --model IvanSmit05/Nemotron-Cascade-2-30B-A3B-heretic-mlx-4bit \
  --max-tokens 200 \
  --temp 0.7 \
  --prompt "Write a short story about a rogue AI."
```
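The model can also be used from Python via the mlx-lm API. A minimal sketch, mirroring the CLI invocation above (the `max_tokens` value is carried over from that command; the chat-template branch applies only if the tokenizer ships one):

```python
from mlx_lm import load, generate

# Download (if needed) and load the quantized weights and tokenizer.
model, tokenizer = load("IvanSmit05/Nemotron-Cascade-2-30B-A3B-heretic-mlx-4bit")

prompt = "Write a short story about a rogue AI."

# Wrap the prompt in the model's chat template when one is available.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# Generate up to 200 tokens and print them as they stream.
response = generate(model, tokenizer, prompt=prompt, max_tokens=200, verbose=True)
```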