gemma-4-31B-it-Uncensored-MAX-MLX

gemma-4-31B-it-Uncensored-MAX-MLX is an uncensored variant built on top of google/gemma-4-31B-it. It applies refusal direction analysis and abliteration-based training to significantly reduce internal refusal behaviors while preserving the reasoning and instruction-following strengths of the base model. The result is a 31B-parameter language model optimized for detailed responses and strong instruction adherence.

This model is released for research and learning purposes only. Because its internal refusal behaviors are reduced, any content it generates is used at the user's own risk. The authors and hosting page disclaim any liability for content generated by this model. Users are responsible for ensuring that the model is used in a safe, ethical, and lawful manner.

Key Highlights

  • Advanced Refusal Direction Analysis: Uses targeted activation analysis to identify and mitigate refusal directions within the model’s latent space.
  • Uncensored MAX Training: Fine-tuned to significantly reduce refusal patterns while maintaining coherent and detailed outputs.
  • 31B Parameter Architecture: Built on gemma-4-31B-it, offering stronger reasoning and knowledge capacity.
  • Improved Instruction Adherence: Optimized to follow complex prompts with minimal unnecessary refusals.
  • MLX Optimized Deployment: Adapted for efficient inference using Apple’s MLX framework on Apple Silicon.
  • High-Capability Deployment: Suitable for advanced research experimentation and high-performance inference setups.
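To make the abliteration idea above concrete, here is a toy NumPy sketch of how a "refusal direction" can be estimated from activation differences and projected out of a weight matrix. This is an illustrative assumption, not this model's actual training code: the toy hidden size, the random activations, and all variable names are hypothetical stand-ins for statistics collected from the real model's residual stream.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # toy hidden size (hypothetical; real models use thousands of dims)

# Mean activations over refusal-triggering vs. benign prompts
# (random here; in practice gathered from the model's residual stream).
h_refuse = rng.normal(size=d) + 3.0
h_comply = rng.normal(size=d)

# Refusal direction: normalized difference of the two means.
r = h_refuse - h_comply
r /= np.linalg.norm(r)

# Abliterate: remove the direction from a weight matrix's output space
# by projecting it out of every column of W.
W = rng.normal(size=(d, d))
W_abl = W - np.outer(r, r) @ W

# Any output of the ablated matrix has (near-)zero component along r.
x = rng.normal(size=d)
print(abs(r @ (W_abl @ x)))  # ~0 up to float error
```

The key property is that the projection `I - r rᵀ` zeroes the component along `r`, so the edited layer can no longer write the refusal direction into the residual stream.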

Quick Start with MLX

pip install -U mlx-vlm
python -m mlx_vlm.generate \
  --model prithivMLmods/gemma-4-31B-it-Uncensored-MAX-MLX \
  --max-tokens 100 \
  --temperature 0.0 \
  --prompt "Describe this image." \
  --image <path_to_image>

Intended Use

  • Alignment & Refusal Research: Studying refusal behaviors and activation-level modifications.
  • Red-Teaming Experiments: Evaluating robustness across adversarial or edge-case prompts.
  • High-Capability Local AI Deployment: Running large instruction models on advanced hardware.
  • Research Prototyping: Experimentation with large-scale transformer architectures.

Limitations & Risks

Important Note: This model intentionally reduces built-in refusal mechanisms.

  • Sensitive Output Possibility: The model may generate controversial or explicit responses depending on prompts.
  • User Responsibility: Outputs should be handled responsibly and within legal and ethical boundaries.
  • Compute Requirements: A 31B model requires significant GPU memory or optimized inference strategies such as quantization, tensor parallelism, or MLX acceleration on Apple Silicon.
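As a rough illustration of the quantization strategy mentioned above, the sketch below shows symmetric int8 weight quantization with a single per-tensor scale. It is a minimal assumption for explanatory purposes only; MLX and other runtimes ship their own (more sophisticated, e.g. grouped 4-bit) quantization utilities.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(4, 4)).astype(np.float32)  # toy weight tensor

# One scale for the whole tensor, mapping the largest magnitude to 127.
scale = np.abs(W).max() / 127.0
W_q = np.clip(np.round(W / scale), -127, 127).astype(np.int8)

# Dequantize back to float for use in matmuls.
W_dq = W_q.astype(np.float32) * scale

# Round-to-nearest bounds the per-weight error by half a quantization step.
max_err = np.abs(W - W_dq).max()
```

Int8 storage cuts weight memory by 4x versus FP32 (2x versus the BF16 checkpoint), which is what makes a 31B model feasible on a single high-memory machine.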

Dataset & Acknowledgements

Model Details

  • Format: Safetensors (MLX)
  • Model size: 31B params
  • Tensor type: BF16