huihui-ai/Huihui4-48B-A4B-abliterated

Model Overview

huihui-ai/Huihui4-48B-A4B-abliterated is a Mixture of Experts (MoE) language model developed by huihui.ai, built upon the huihui-ai/Huihui-gemma-4-26B-A4B-it-abliterated base model. It extends the standard Transformer architecture by replacing each MLP layer with an MoE layer containing 256 experts, achieving high capacity with efficient inference. The model is designed for natural language processing tasks, including image-text-to-text generation, question answering, and conversational applications.

This model is experimental: it explores merging different variants of models of the same type.

Note: All knowledge acquired from pre-training and fine-tuning remains completely intact in the 256 expert modules. Only the safety gatekeeping (attention routing and refusal mechanisms) that controls whether the model is allowed to output that knowledge was removed.
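
For background, abliteration is commonly implemented as directional ablation: a "refusal direction" is estimated in activation space and projected out of the model's weights. A minimal sketch of that projection step, with a purely hypothetical refusal_dir placeholder (nothing here is extracted from this repository):

```python
import torch

def ablate_direction(weight: torch.Tensor, refusal_dir: torch.Tensor) -> torch.Tensor:
    """Project the (unit-norm) refusal direction out of a weight matrix's
    output space: W <- W - d d^T W. Illustrative only."""
    d = refusal_dir / refusal_dir.norm()
    return weight - torch.outer(d, d) @ weight

hidden = 4096
W = torch.randn(hidden, hidden)
refusal_dir = torch.randn(hidden)  # real pipelines estimate this direction from
                                   # contrasting harmful/harmless prompt activations
W_ablated = ablate_direction(W, refusal_dir)
```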

  • Architecture: Gemma4ForConditionalGeneration model with 256 experts per layer, activating 8 experts per token.
  • Total Parameters: ~48 billion (48B)
  • Activated Parameters: ~4 billion (4B) during inference, comparable to google/gemma-4-26B-A4B-it
  • Developer: huihui.ai
  • Release Date: March 2026
  • License: Inherits the license of the gemma-4-26B-A4B-it base model (apache-2.0)
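
A minimal text-generation sketch with the transformers library; the Auto classes, chat-template support, and bfloat16 dtype are assumptions about how the checkpoint loads, not confirmed usage code from this repository:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "huihui-ai/Huihui4-48B-A4B-abliterated"

# device_map="auto" spreads the ~48B parameters across available GPUs;
# only ~4B parameters are active per token at inference time.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Explain Mixture of Experts briefly."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```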

Ollama

Please use the latest version of Ollama.

You can run huihui_ai/gemma-4-abliterated:48b directly:

ollama run huihui_ai/gemma-4-abliterated:48b
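
Ollama also serves a local REST API (port 11434 by default), so the model can be called programmatically. A minimal sketch using Python's requests:

```python
import requests

# Ollama's local HTTP API; /api/generate takes a model name and prompt.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "huihui_ai/gemma-4-abliterated:48b",
        "prompt": "Summarize the Mixture of Experts architecture.",
        "stream": False,  # return one JSON object instead of a token stream
    },
)
print(resp.json()["response"])
```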

Expert Models:

  • Experts 1-128: huihui-ai/Huihui-gemma-4-26B-A4B-it-abliterated
  • Experts 129-256: TeichAI/gemma-4-26B-A4B-it-Claude-Opus-Distill
  • Instruction Following: huihui-ai/Huihui-gemma-4-26B-A4B-it-abliterated

Training

  • Base Model: huihui-ai/Huihui-gemma-4-26B-A4B-it-abliterated
  • Conversion: Embeddings, self-attention, and normalization weights are copied from huihui-ai/Huihui-gemma-4-26B-A4B-it-abliterated, and each MLP layer is replaced with an MoE layer of 256 experts (see the sketch after this list).
  • Fine-Tuning: The merged model has not been fine-tuned; fine-tuning for specific tasks is recommended to optimize expert routing.
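
A conceptual sketch of this conversion, using hypothetical module names and shapes (the actual checkpoint layout may differ): experts 1-128 are filled from one donor MLP, experts 129-256 from another, and a freshly initialized router selects 8 experts per token.

```python
import copy
import torch
import torch.nn as nn

class MoELayer(nn.Module):
    """Hypothetical sketch of the merge: fill half the expert slots from
    each donor MLP and route each token to its top-8 experts."""
    def __init__(self, mlp_a: nn.Module, mlp_b: nn.Module,
                 hidden_size: int, num_experts: int = 256, top_k: int = 8):
        super().__init__()
        half = num_experts // 2
        self.experts = nn.ModuleList(
            [copy.deepcopy(mlp_a) for _ in range(half)] +
            [copy.deepcopy(mlp_b) for _ in range(num_experts - half)]
        )
        # The router starts untrained -- one reason the card recommends
        # fine-tuning before relying on routing quality.
        self.router = nn.Linear(hidden_size, num_experts, bias=False)
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, hidden_size)
        scores = self.router(x).softmax(dim=-1)
        weights, idx = scores.topk(self.top_k, dim=-1)
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e in idx[:, k].unique():
                mask = idx[:, k] == e
                out[mask] += weights[mask, k, None] * self.experts[int(e)](x[mask])
        return out
```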

Applications

  • Image-Text-to-Text Generation: Articles, dialogues, and creative writing.
  • Question Answering: Information retrieval and query resolution.
  • Conversational AI: Multi-turn dialogues for chatbots.
  • Research: Exploration of MoE architectures and efficient model scaling.

Limitations

  • Fine-Tuning Required: No weight averaging was performed during the merge; the experts were simply concatenated without subsequent fine-tuning, so fine-tuning is recommended before use.
  • Compatibility: Developed with transformers 5.5.0; ensure matching versions to avoid loading issues.
  • Inference Speed: While efficient for an MoE model, performance depends on hardware (GPU recommended).

Ethical Considerations

  • Bias: Inherits potential biases from the gemma-4-26B-A4B-it-abliterated base model; users should evaluate outputs for fairness.
  • Usage: Intended for research and responsible applications; avoid generating harmful or misleading content.

Contact

  • Developer: huihui.ai
  • Repository: huihui-ai/Huihui4-48B-A4B-abliterated (available locally or on Hugging Face)
  • Issues: Report bugs or request features via the repository, or email support@huihui.ai.

Usage Warnings

  • Risk of Sensitive or Controversial Outputs: This model’s safety filtering has been significantly reduced, potentially generating sensitive, controversial, or inappropriate content. Users should exercise caution and rigorously review generated outputs.

  • Not Suitable for All Audiences: Due to limited content filtering, the model’s outputs may be inappropriate for public settings, underage users, or applications requiring high security.

  • Legal and Ethical Responsibilities: Users must ensure their usage complies with local laws and ethical standards. Generated content may carry legal or ethical risks, and users are solely responsible for any consequences.

  • Research and Experimental Use: It is recommended to use this model for research, testing, or controlled environments, avoiding direct use in production or public-facing commercial applications.

  • Monitoring and Review Recommendations: Users are strongly advised to monitor model outputs in real-time and conduct manual reviews when necessary to prevent the dissemination of inappropriate content.

  • No Default Safety Guarantees: Unlike standard models, this model has not undergone rigorous safety optimization. huihui.ai bears no responsibility for any consequences arising from its use.

Donation

Your donation helps us continue development and improvement; even a cup of coffee makes a difference.
  • bitcoin:
  bc1qqnkhuchxw0zqjh2ku3lu4hq45hc6gy84uk70ge
  • Support our work on Ko-fi!