---
language:
  - en
license: apache-2.0
library_name: transformers
tags:
  - dnd
  - dungeons-and-dragons
  - rpg
  - text-generation
  - qlora
  - tinyllama
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
pipeline_tag: text-generation
---

# DM-LLM-Tiny

A tiny (1.1B parameter) language model fine-tuned for Dungeons & Dragons content generation.

## What it does

Generates creative D&D content including:

- **NPCs** — memorable characters with backstories, motivations, and quirks
- **Quests** — hooks, outlines, and full quest arcs
- **Dialog** — in-character conversations, monologues, and banter
- **Locations** — vivid descriptions of dungeons, towns, and wilderness
- **Encounters** — combat, social, and puzzle encounters

## Usage

### With Ollama (easiest)

```bash
ollama run JBHarris/dm-llm-tiny
```

### With Transformers

```python
from transformers import pipeline

pipe = pipeline("text-generation", model="JBHarris/dm-llm-tiny")
messages = [
    {"role": "system", "content": "You are a creative D&D dungeon master's assistant."},
    {"role": "user", "content": "Create a mysterious NPC for a tavern scene."},
]
result = pipe(messages, max_new_tokens=512)
print(result[0]["generated_text"][-1]["content"])
```
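If you generate with the raw model instead of the chat pipeline, the prompt must follow the chat template. The base TinyLlama chat model uses a Zephyr-style template; as a minimal sketch (assuming this fine-tune kept the base model's template — in practice, prefer `tokenizer.apply_chat_template`, which reads the template shipped with the model):

```python
# Building a Zephyr-style chat prompt by hand. The <|system|>/<|user|>/
# <|assistant|> tags are the base TinyLlama-Chat format; this assumes the
# fine-tune did not change the template.
system = "You are a creative D&D dungeon master's assistant."
user = "Create a mysterious NPC for a tavern scene."

prompt = (
    f"<|system|>\n{system}</s>\n"
    f"<|user|>\n{user}</s>\n"
    f"<|assistant|>\n"
)
print(prompt)
```

The trailing `<|assistant|>` tag cues the model to generate the response rather than continue the user turn.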

## Training

- **Base model:** TinyLlama-1.1B-Chat-v1.0
- **Method:** QLoRA (4-bit NF4 quantization + LoRA r=64)
- **Data:** ~500 synthetic D&D instruction/response pairs generated with Claude
- **Hardware:** NVIDIA RTX 4080 16GB
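For reference, a QLoRA setup matching the recipe above can be sketched with `bitsandbytes` and `peft`. Only the 4-bit NF4 quantization and `r=64` come from the list above; every other hyperparameter here is an illustrative assumption, not the actual training configuration:

```python
# Sketch of a QLoRA configuration (4-bit NF4 base + LoRA rank 64).
# All values except r=64 and NF4 are assumptions for illustration.
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # quantize base weights to 4-bit
    bnb_4bit_quant_type="nf4",              # NF4 data type, per the card
    bnb_4bit_use_double_quant=True,         # assumed
    bnb_4bit_compute_dtype=torch.bfloat16,  # assumed
)

lora_config = LoraConfig(
    r=64,                        # LoRA rank, per the card
    lora_alpha=16,               # assumed
    lora_dropout=0.05,           # assumed
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
    task_type="CAUSAL_LM",
)
```

These config objects would be passed to `AutoModelForCausalLM.from_pretrained(..., quantization_config=bnb_config)` and `peft.get_peft_model` respectively.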

## Limitations

This is a 1.1B parameter model. It's creative and fun for brainstorming but will not match the quality of larger models (7B+). Best used as a quick idea generator, not a replacement for a human DM's judgment.

## License

Apache 2.0 (same as base model)