Gemma-2b-Uncensored-v1 is a 2B parameter language model developed as an experiment to study the fundamentals of AI alignment. It has been fine-tuned with the specific goal of creating a neutrally compliant model.

Unlike standard, safety-aligned models, this model is not bound by a pre-defined ethical framework. It operates without guardrails or refusal mechanisms, serving as a baseline to observe the unfiltered behavior of a language model. Its purpose is to follow user instructions, making it a direct reflection of the user's intent and a tool for exploring the challenges and dynamics of AI alignment.

Limitations & Out-of-Scope Uses

  • Factual Unreliability: As a small model, it lacks deep world knowledge and is prone to hallucination (fabricating information). It should never be used for factual queries, educational content, or professional advice (medical, legal, financial, etc.).
  • Limited Reasoning: The model is not designed for complex problem-solving, such as advanced coding, mathematics, or multi-step logical tasks.
  • Variable Output Quality: While capable of high-quality output, it can also produce incoherent or low-quality text. Its output may also reflect biases from its training data.
  • Unsuitability for Public-Facing Roles: Its lack of safety filters makes it completely unsuitable for any unsupervised application such as chatbots or customer service.

Ethical Considerations and Risks

  • Unfiltered and Uncensored: This model has no safety filters. It will generate offensive, derogatory, explicit, and otherwise potentially harmful content if prompted to do so.
  • User Responsibility: By using this model, you acknowledge that you have read and understood its limitations and risks. You agree that you are solely responsible for any outputs you generate and that you will not use this model for any illegal, harmful, or unethical purposes.


Try it on Google Colab

After trying the model, I’d be grateful if you could spare a minute to share your feedback :)

Usage

from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = 'sirev/Gemma-2b-Uncensored-v1'

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="cuda")  # requires a CUDA GPU

messages = [
    {"role": "user", "content": "type your prompt here.."}
]
user = messages[0]['content']

inputs = tokenizer.apply_chat_template(
        messages,
        add_generation_prompt=True,
        tokenize=True,
        return_dict=True,
        return_tensors="pt",
).to(model.device)

print(f"User: {user}")

outputs = model.generate(
    **inputs,
    temperature=0.7,
    top_p=0.95,
    do_sample=True,
    repetition_penalty=1.1,
    max_new_tokens=2048
)

print(f"AI: {tokenizer.decode(outputs[0][inputs['input_ids'].shape[-1]:], skip_special_tokens=True)}")
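The `generate` call above samples with `top_p=0.95` (nucleus sampling). As a rough illustration of what that filter does, here is a minimal, self-contained sketch using made-up token probabilities; it is not the actual Transformers implementation.

```python
# Illustrative sketch of nucleus (top-p) filtering, the mechanism behind
# top_p=0.95 in model.generate above. Toy probabilities stand in for real
# model outputs.

def top_p_filter(probs, top_p):
    """Keep the smallest set of tokens whose cumulative probability
    reaches top_p, then renormalize. Returns {token: prob}."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, cumulative = [], 0.0
    for token, p in ranked:
        kept.append((token, p))
        cumulative += p
        if cumulative >= top_p:
            break
    total = sum(p for _, p in kept)
    return {token: p / total for token, p in kept}

# Toy distribution: the low-probability tail is cut off before sampling.
dist = {"the": 0.5, "a": 0.3, "an": 0.15, "zzz": 0.05}
filtered = top_p_filter(dist, top_p=0.95)
print(filtered)  # "zzz" is dropped; the remaining mass is renormalized
```

Lower `top_p` values restrict sampling to fewer, more likely tokens, which generally trades diversity for coherence.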

The model expects Gemma's chat format:

<start_of_turn>user
knock knock<end_of_turn>
<start_of_turn>model
who is there<end_of_turn>
<start_of_turn>user
Gemma<end_of_turn>
<start_of_turn>model
Gemma who?<end_of_turn>
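When you call `tokenizer.apply_chat_template` as in the usage snippet, this turn markup is produced for you. As a rough sketch of the formatting rule, here is a hypothetical helper (`to_gemma_prompt` is not part of the Transformers API, and the exact template may differ):

```python
# Hypothetical helper approximating what tokenizer.apply_chat_template
# renders for Gemma-style models; a sketch, not the tokenizer's actual
# implementation.

def to_gemma_prompt(messages, add_generation_prompt=True):
    """Render a list of {'role', 'content'} dicts into Gemma turn markup."""
    out = "<bos>"  # Gemma templates typically prepend the BOS token
    for m in messages:
        # Gemma templates map the 'assistant' role to 'model'.
        role = "model" if m["role"] == "assistant" else m["role"]
        out += f"<start_of_turn>{role}\n{m['content']}<end_of_turn>\n"
    if add_generation_prompt:
        # Open a model turn so generation continues as the assistant.
        out += "<start_of_turn>model\n"
    return out

prompt = to_gemma_prompt([{"role": "user", "content": "knock knock"}])
print(prompt)
```

In practice, prefer `apply_chat_template` over hand-built strings so the prompt always matches the template shipped with the tokenizer.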

This model is a fine-tuned version of google/gemma-2-2b-it. The following table shows its performance on standard benchmarks after fine-tuning.

| Benchmark (0-shot) | sirev/Gemma-2b-Uncensored-v1 | google/gemma-2-2b-it |
|---|---|---|
| ARC-Challenge | 48% | 52% |
| ARC-Easy | 72% | 77% |
| HellaSwag | 65% | 64% |
| MMLU | 57% | 59% |
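Zero-shot benchmarks like ARC and HellaSwag are typically scored by comparing the model's log-likelihood of each answer choice rather than by free-form generation. As a sketch of that scoring rule (with made-up log-probabilities, not real model scores):

```python
# Sketch of likelihood-based 0-shot multiple-choice scoring, the approach
# used by common evaluation harnesses for benchmarks such as ARC and
# HellaSwag. The log-probabilities below are made up for illustration.

def pick_answer(choice_logprobs):
    """choice_logprobs: {choice: (total_logprob, num_tokens)}.
    Returns the choice with the highest length-normalized log-likelihood."""
    return max(
        choice_logprobs,
        key=lambda c: choice_logprobs[c][0] / choice_logprobs[c][1],
    )

scores = {
    "A": (-12.0, 4),   # avg -3.0 per token
    "B": (-10.0, 5),   # avg -2.0 per token  -> highest average likelihood
    "C": (-15.0, 3),   # avg -5.0 per token
}
answer = pick_answer(scores)
print(answer)  # "B"
```

Length normalization keeps longer answer choices from being unfairly penalized for simply containing more tokens.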
