I remember this model now. It's really bad! I messed up the parameters when loading the dataset and ended up with a model that just writes the wrong things. Don't use this! I'm keeping it around to remind me of how far I've progressed since the start.
Old README:
# Oryza-Sirocco-3B
## ⚠️ Warning: Automated Release
This is an automated release. This model has been fine-tuned and merged automatically. It has not been manually tested for quality, coherence, or safety. Use with caution.
## Model Details
Oryza-Sirocco-3B is a fine-tuned version of unsloth/Ministral-3-3B-Base-2512. It was trained using the Unsloth library for faster and more memory-efficient training.
The primary goal of this fine-tune is to reduce refusals, moralizing, and "spineless" behavior, making the model more direct and responsive to instructions, including those that might be perceived as aggressive or "unsafe".
- Base Model: unsloth/Ministral-3-3B-Base-2512
- Architecture: Ministral 3 (Mistral)
- Framework: Unsloth / Transformers / TRL
- Language: English
## Training Data
This model was trained on the following datasets:
- DrRiceIO7/aggressive-instructions: A dataset where responses are "aggressive", "sarcastic", or otherwise more personality-filled than standard LLM responses.
- DrRiceIO7/AlteredDataset: A dataset combining creative writing, humanized responses, and uncensored instruction following.
Note: Specific details about the dataset contents were not publicly available at the time of generation.
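Since the exact data mix was never published, here is a minimal, hypothetical sketch of how the two datasets might be pulled and combined with the `datasets` library. The split name and the assumption that both datasets share the same schema are guesses:

```python
from datasets import load_dataset, concatenate_datasets

# Hypothetical reconstruction of the data mix. Column schemas are unknown,
# and concatenate_datasets requires both datasets to have matching features.
aggressive = load_dataset("DrRiceIO7/aggressive-instructions", split="train")
altered = load_dataset("DrRiceIO7/AlteredDataset", split="train")
combined = concatenate_datasets([aggressive, altered]).shuffle(seed=42)
```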
## Training Details
The model was trained with SFTTrainer using the following hyperparameters (a hedged reconstruction of the setup follows the list):
- Epochs: 1 (approx. 9375 steps)
- Batch Size: 2 per device (Gradient Accumulation: 8) -> Effective Batch Size: 16
- Learning Rate: 2e-5
- Optimizer: AdamW 8-bit
- LoRA Rank (r): 32
- LoRA Alpha: 64
- Max Sequence Length: 2048
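The training script itself was not released. The sketch below shows what the run may have looked like with Unsloth and TRL, plugging in the hyperparameters above; `load_in_4bit`, `target_modules`, the single-dataset choice, and the output directory are all assumptions, not documented facts:

```python
from datasets import load_dataset
from unsloth import FastLanguageModel
from trl import SFTTrainer, SFTConfig

# Assumption: train on one of the listed datasets; the real mix is unknown.
dataset = load_dataset("DrRiceIO7/aggressive-instructions", split="train")

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Ministral-3-3B-Base-2512",
    max_seq_length=2048,  # Max Sequence Length from the list above
    load_in_4bit=True,    # assumption: typical Unsloth QLoRA setup
)

# Attach a LoRA adapter with the reported rank and alpha.
model = FastLanguageModel.get_peft_model(
    model,
    r=32,
    lora_alpha=64,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],  # common default set
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    # Depending on the dataset schema, a dataset_text_field or
    # formatting_func may also be needed here.
    args=SFTConfig(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=8,  # 2 x 8 = effective batch size 16 on one GPU
        num_train_epochs=1,
        learning_rate=2e-5,
        optim="adamw_8bit",
        output_dir="outputs",
    ),
)
trainer.train()
```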
## Usage
You can run this model with the `transformers` library or with Unsloth (a sketch for the latter follows the `transformers` example).
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "DrRiceIO7/Oryza-Sirocco-3B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

messages = [
    {"role": "user", "content": "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"},
]

# Tokenize the chat, generate, and decode only the newly generated tokens.
inputs = tokenizer.apply_chat_template(
    messages, return_tensors="pt", add_generation_prompt=True
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```
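For the Unsloth path, here is a minimal inference sketch using `FastLanguageModel`. It assumes the repo contains merged full weights (as the automated-release note suggests) and that 2048 is an adequate max_seq_length:

```python
from unsloth import FastLanguageModel

# Assumption: the repo holds merged weights, so it can be loaded directly.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="DrRiceIO7/Oryza-Sirocco-3B",
    max_seq_length=2048,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast generation path

messages = [{"role": "user", "content": "Give me your hottest take on time travel."}]
inputs = tokenizer.apply_chat_template(
    messages, return_tensors="pt", add_generation_prompt=True
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```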
Disclaimer: This README was generated by Gemini 3 Pro Preview.