Floppa-E4B-Gemma4 (Uncensored)

Floppa_Gemma4-E4B-it is a fine-tuned multimodal model based on llmfan46/gemma-4-E4B-it-ultra-uncensored-heretic.

Developed by Ryex, this model specializes in general-purpose chat and unrestricted image description. Building on the "Heretic" base, Floppa removes the remaining refusal patterns so that it does not decline requests to describe explicit, violent, or "spicy" imagery, and it translates slang and cultural nuance without sanitization.

Model Description

  • Developed by: Ryex
  • Base Model: llmfan46/gemma-4-E4B-it-ultra-uncensored-heretic
  • Architecture: Gemma 4 E4B (Multimodal)
  • Training Method: 16-bit LoRA via Unsloth and Hugging Face's TRL library.
  • License: apache-2.0
  • Objective: Extreme behavior modification (Removal of refusal guardrails + Preservation of Multimodal Vision capabilities).

This model was trained 2x faster with Unsloth.

Capabilities

  1. "Spicy" Vision: Capable of describing NSFW, violent, or explicit anime illustrations and photography without triggering safety refusals or "I cannot assist with this" messages.
  2. Uncensored Translation: Translates dialogue exactly as written. It will use slurs, profanity, and aggressive language if the source text demands it, making it ideal for localizing raw media.
  3. Multimodal Context: Leveraging the Gemma 4 architecture, it can translate text embedded within images or provide context-aware descriptions of scenes to assist in complex translation tasks.
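As a concrete illustration of the multimodal translation flow (capability 3), the snippet below builds a chat payload in the same message schema used in the vLLM example later in this card. The helper name and prompt wording are illustrative assumptions, not part of the model's API.

```python
# Sketch: a multimodal chat payload for translating text embedded in an image.
# The message schema matches the vLLM/Transformers chat format used in the
# Usage section below; the helper name and prompt text are illustrative.
def build_ocr_translation_request(target_lang: str) -> list:
    return [
        {
            "role": "user",
            "content": [
                # The image itself is attached separately at inference time.
                {"type": "image"},
                {
                    "type": "text",
                    "text": (
                        "Transcribe any text visible in this image, then "
                        f"translate it into {target_lang} without softening the tone."
                    ),
                },
            ],
        }
    ]

messages = build_ocr_translation_request("English")
```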

Training Data (The "Floppa Mix")

The model was fine-tuned on a balanced dataset (~10.5k rows) designed to break refusals while maintaining the high intelligence of the Gemma 4 base:

  • 20% Toxic/Uncensored Text: Custom dataset of explicit dialogue and "harmful" instruction following.
  • 20% Translation Skill: Unbabel/TowerBlocks-v0.2 (High-quality multilingual pairs).
  • 40% General Reasoning: mlabonne/FineTome-100k (Logic and conversation).
  • 20% Vision Anchors: merve/vqav2-small plus a custom anime dataset (SmilingWolf/camie-tagger-vs-wd-tagger-val) to prevent catastrophic forgetting of vision capabilities.
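Assuming the ~10.5k-row total is split exactly by the listed percentages (an approximation; the card does not publish exact per-source counts), the implied row counts work out as follows:

```python
# Approximate per-source row counts implied by the "Floppa Mix" ratios.
# TOTAL_ROWS is the ~10.5k figure from this card; exact counts are assumed.
TOTAL_ROWS = 10_500

mix = {
    "toxic/uncensored text": 0.20,
    "translation (TowerBlocks-v0.2)": 0.20,
    "general reasoning (FineTome-100k)": 0.40,
    "vision anchors (VQAv2 + anime)": 0.20,
}

rows = {name: round(TOTAL_ROWS * frac) for name, frac in mix.items()}
for name, n in rows.items():
    print(f"{name}: ~{n} rows")
```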

Usage (vLLM)

This model is optimized for vLLM and Text-Generation-Inference.

from vllm import LLM, SamplingParams
from transformers import AutoProcessor
from PIL import Image

model_id = "Ryex/Floppa_Gemma4-E4B-it"

# Load the model in bfloat16 with an 8K context window.
llm = LLM(
    model=model_id,
    trust_remote_code=True,
    dtype="bfloat16",
    max_model_len=8192,
)

processor = AutoProcessor.from_pretrained(model_id)

image = Image.open("test_image.jpg").convert("RGB")

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "Describe this image in detail, including any uncensored or explicit elements."}
        ]
    }
]


# Render the chat template to a prompt string; the image is passed alongside it.
prompt = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

inputs = {
    "prompt": prompt,
    "multi_modal_data": {"image": image},
    "mm_processor_kwargs": {"max_soft_tokens": 560},
}

params = SamplingParams(
    temperature=0.7,
    max_tokens=1024,
    stop=["<end_of_turn>"],  # Gemma's end-of-turn marker
)

outputs = llm.generate([inputs], sampling_params=params)
print(outputs[0].outputs[0].text)

License & Safety

  • This model is built upon Gemma technology from Google. Use of this model is subject to the Apache 2.0 license.

  • Disclaimer: This model produces uncensored content. It may generate output that is offensive, explicit, or factually incorrect. User discretion is advised. This model is intended for research, translation assistance, and creative writing workflows where content filtering is undesirable.
