Ecotopia Citizens - Ministral 8B LoRA

A LoRA adapter fine-tuned to generate realistic citizen dialogue and reactions for the Ecotopia political-simulation game. Citizens respond to the mayor's policies with contextual opinions based on their demographic profiles.

Model Details

  • Base Model: mistralai/Ministral-8B-Instruct-2410
  • Method: LoRA (r=16, alpha=32) with 4-bit NF4 quantization (QLoRA)
  • Framework: TRL SFT + PEFT
  • Training: HuggingFace Jobs (A10G GPU), 3 epochs
  • Dataset: 340 examples (272 train / 68 val)
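The training setup above can be sketched as PEFT and bitsandbytes configs. This is a minimal sketch: only r=16, alpha=32, and NF4 quantization come from this card; the compute dtype and target modules are assumptions, not the actual training script.

```python
import torch
from peft import LoraConfig
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization -- matches the card's "4-bit QLoRA (NF4)";
# the compute dtype is an assumption
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# LoRA config -- r and alpha from the card; target_modules are assumed,
# not documented here
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
```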

Usage

This is a LoRA adapter, not a full model. Load it with PEFT on top of the base model:

from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the LoRA adapter on top of it
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Ministral-8B-Instruct-2410", device_map="auto", torch_dtype="auto"
)
model = PeftModel.from_pretrained(base, "mistral-hackaton-2026/ecotopia-citizens-ministral-8b")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Ministral-8B-Instruct-2410")
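Once the model and tokenizer are loaded, a citizen reaction can be generated from a chat-format prompt. The prompt schema below (demographic profile in the system message, policy in the user message) is an assumption for illustration, not the card's documented training format:

```python
# Hypothetical prompt builder: the profile/policy message layout is an
# assumption, not the dataset's actual schema
def build_citizen_prompt(profile: dict, policy: str) -> list:
    """Build a chat-format message list for a citizen reaction."""
    system = (
        f"You are a citizen of Ecotopia: a {profile['age']}-year-old "
        f"{profile['occupation']}. React to the mayor's policy in character."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": policy},
    ]

messages = build_citizen_prompt(
    {"age": 34, "occupation": "farmer"},
    "The mayor bans cars from the city centre.",
)

# With `model` and `tokenizer` loaded as above:
# inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
# output = model.generate(inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
# print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```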

Links

Team

Built for the Mistral Worldwide Hackathon 2026 (London) by Nolan Cacheux and Kyle Kreuter.
