# mysterydungeonGPT

LoRA fine-tuned adapter for Qwen3-0.6B on Mystery Dungeon map generation.

## Model Details

This is a LoRA (Low-Rank Adaptation) adapter fine-tuned from Qwen/Qwen3-0.6B.

- **Fine-tuned on:** Mystery Dungeon map generation data (56x32 maps with 6-12 rooms)
- **Format:** coordinate-based JSON output (walkable tiles as `[x, y]` coordinates)
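To make the coordinate-based format concrete, the sketch below expands a list of `[x, y]` walkable tiles into a 56x32 ASCII grid. The exact JSON schema (the `walkable` field name and the sample output) is an assumption for illustration, not the model's documented schema:

```python
import json

WIDTH, HEIGHT = 56, 32  # map dimensions from the model card

# Hypothetical model output: a JSON object listing walkable [x, y] tiles.
sample_output = '{"walkable": [[10, 5], [11, 5], [12, 5]]}'

def to_grid(raw: str) -> list[list[str]]:
    """Expand coordinate-based JSON into a 2D tile grid ('.' walkable, '#' wall)."""
    coords = json.loads(raw)["walkable"]
    grid = [["#"] * WIDTH for _ in range(HEIGHT)]
    for x, y in coords:
        if 0 <= x < WIDTH and 0 <= y < HEIGHT:  # skip out-of-range tiles
            grid[y][x] = "."
    return grid

grid = to_grid(sample_output)
print("".join(grid[5][9:14]))  # -> "#...#"
```

Generated text may need light cleanup (e.g. stripping prose around the JSON) before parsing.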

## Usage

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

# Load base model
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-0.6B")
base_model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-0.6B",
    torch_dtype=torch.float16,
    device_map="auto",
)

# Load LoRA adapter
model = PeftModel.from_pretrained(base_model, "vishnusm/mysterydungeonGPT")

# Generate map
prompt = "Generate a medium difficulty dungeon with 6 rooms"
messages = [{"role": "user", "content": prompt}]
inputs = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=4000,
    temperature=0.7,
    top_p=0.9,
    do_sample=True,
)
# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```

## Training Details

- **Base Model:** Qwen/Qwen3-0.6B
- **Training Data:** 5000 Mystery Dungeon maps (56x32)
- **Format:** coordinate-based (walkable tiles as coordinates)
- **Room Range:** 6-12 rooms per map
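One way to sanity-check a generated map against the documented 6-12 room range is to count connected components of walkable tiles. This is a hedged sketch, not part of the training pipeline: on maps where corridors join rooms, components merge, so it only approximates a room count on corridor-free layouts:

```python
from collections import deque

def count_components(walkable: set[tuple[int, int]]) -> int:
    """Count 4-connected components among walkable (x, y) tiles via BFS flood fill."""
    seen: set[tuple[int, int]] = set()
    components = 0
    for start in walkable:
        if start in seen:
            continue
        components += 1
        seen.add(start)
        queue = deque([start])
        while queue:
            x, y = queue.popleft()
            for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if nxt in walkable and nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
    return components

# Two disjoint 2x2 "rooms" (illustrative data, not model output):
tiles = {(0, 0), (1, 0), (0, 1), (1, 1), (10, 10), (11, 10), (10, 11), (11, 11)}
print(count_components(tiles))  # -> 2
```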