Model Card for Huihui-Ministral-3B-Instruct-2512-abliterated-GGUF

This repository contains GGUF quantizations of the Huihui-Ministral-3B-Instruct-2512-abliterated model.

The model is based on Ministral 3B, which has been "abliterated" (uncensored) to remove refusal mechanisms. As a result, it follows instructions without the refusals typical of standard models, making it suited to unrestricted creative-writing tasks. These GGUF files are optimized for high-performance local inference on edge devices, laptops, and consumer-grade hardware.

Uses

Direct Use

This model is engineered for efficient local inference on hardware with limited VRAM. It is compatible with major GGUF inference engines including:

  • Ollama
  • llama.cpp
  • LM Studio
  • KoboldCPP
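For llama.cpp, a typical invocation looks like the following. Note the GGUF filename here is an assumption; substitute whichever quantization you actually downloaded.

```shell
# -m:   path to the downloaded GGUF file (filename below is an assumption)
# -ngl: number of layers to offload to the GPU, if one is available
# -p:   the prompt to complete
./llama-cli -m Huihui-Ministral-3B-Instruct-2512-abliterated.Q4_K_M.gguf \
  -ngl 99 --temp 0.7 \
  -p "Write a short story about a lighthouse keeper."
```

The same flags apply when launching the llama.cpp server instead of the interactive CLI.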

It is particularly effective for Creative Writing, Interactive Assistants, and Narrative Generation on edge devices where cloud latency or privacy is a concern. The "abliterated" nature ensures the model follows instructions precisely without unnecessary refusals.

Out-of-Scope Use

  • Vision/Image Analysis: This is a text-only model. It cannot see images.
  • Fact-Checking: As a 3B-parameter model, it is optimized for creativity and instruction-following rather than encyclopedic knowledge retrieval; verify any factual claims it makes independently.

Bias, Risks, and Limitations

Warning: Uncensored Model

This model has undergone "abliteration," a technique that selectively removes safety guardrails.

  • It will not refuse requests that standard models might reject.
  • It may generate sensitive or controversial content if prompted to do so.
  • Users are responsible for the content generated.

Recommended Stop Tokens

To prevent the model from generating artifacts (like +++++) or hallucinating user replies at the end of a response, ensure your inference tool uses the following stop sequences:

  • </s>
  • User:
  • Assistant:
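Most inference tools accept stop sequences directly as a setting. If yours does not, the same effect can be achieved by truncating output in post-processing. A minimal sketch of that logic (the truncate_at_stop helper is illustrative, not part of any library):

```python
def truncate_at_stop(text: str, stops: list[str]) -> str:
    """Cut generated text at the earliest occurrence of any stop sequence."""
    cut = len(text)
    for stop in stops:
        idx = text.find(stop)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]

# The stop sequences recommended above for this model
STOPS = ["</s>", "User:", "Assistant:"]

# Without truncation, the model may append an end token and then
# hallucinate the next user turn:
raw = "The dragon finally slept.</s>User: tell me more"
print(truncate_at_stop(raw, STOPS))  # -> "The dragon finally slept."
```

This mirrors what engines like llama.cpp do internally when you pass stop sequences: generation halts as soon as any of them appears in the output stream.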

How to Get Started with the Model

Option 1: Run with Ollama (Easiest)

You can pull this model directly to your command line:

ollama run hf.co/Abhiray/Huihui-Ministral-3B-Instruct-2512-abliterated-GGUF:Q4_K_M
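To bake in the stop tokens recommended above along with sampling parameters, you can define a custom Ollama Modelfile. A sketch (the model name and temperature are illustrative choices, not defaults from this repository):

```
FROM hf.co/Abhiray/Huihui-Ministral-3B-Instruct-2512-abliterated-GGUF:Q4_K_M
PARAMETER temperature 0.8
PARAMETER stop "</s>"
PARAMETER stop "User:"
PARAMETER stop "Assistant:"
```

Then build and run it:

```
ollama create my-ministral -f Modelfile
ollama run my-ministral
```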
Quantization Details

  • Format: GGUF
  • Model size: 3B params
  • Architecture: mistral3
  • Available quantizations: 3-bit, 4-bit, 5-bit, 6-bit, 8-bit