Mobile Game End-to-End Test Plan Generator

A LoRA fine-tune of Qwen2.5-Coder-7B-Instruct, specialized in writing comprehensive end-to-end test plans and test cases for mobile games.

What it does

Given a mobile game feature description, the model generates a professional QA test plan covering the following (a hypothetical excerpt is shown after this list):

  • Preconditions
  • Step-by-step test steps
  • Expected results
  • Edge cases and boundary conditions
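
For illustration only, here is a hypothetical excerpt of the kind of output the model produces. This is an invented example of the format, not an actual generation from this model:

Test Plan: Weapon Skin Shop Purchase
Preconditions:
  1. Player account is at the main hub with at least 500 gems.
Test steps:
  1. Tap the Shop icon in the main hub.
  2. Open the Weapon Skins tab and select a skin.
  3. Tap Preview to view the skin on the character.
  4. Tap Purchase and confirm.
Expected results:
  - Gem balance decreases by the skin's price; the skin appears in the player's inventory.
Edge cases:
  - Insufficient currency: the purchase button is disabled and a top-up prompt appears.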

Training approach

  • Base model: Qwen/Qwen2.5-Coder-7B-Instruct
  • Method: Supervised Fine-Tuning (SFT) with LoRA
  • LoRA rank: 16
  • LoRA alpha: 32
  • LoRA dropout: 0.1
  • Target modules: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj
  • Learning rate: 3e-4
  • Epochs: 3
  • Batch size: 1 per device (effective batch size 8 via gradient accumulation)
  • Max sequence length: 4096
  • Packing: enabled
  • Loss: computed on assistant turns only
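
For reference, here is a minimal sketch of how these hyperparameters map onto the peft/trl stack. This is an illustrative reconstruction, not the repository's train.py; exact option names vary across trl versions (e.g. max_seq_length vs. max_length, and how assistant-only loss is configured), and the inline dataset is a placeholder:

from datasets import Dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Placeholder sample in the conversational format described below.
dataset = Dataset.from_list([{
    "messages": [
        {"role": "user", "content": "Write an end-to-end test plan for ..."},
        {"role": "assistant", "content": "Preconditions:\n1. ..."},
    ]
}])

peft_config = LoraConfig(
    r=16,                      # LoRA rank
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)

training_args = SFTConfig(
    output_dir="outputs",
    learning_rate=3e-4,
    num_train_epochs=3,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,  # effective batch size 8
    max_seq_length=4096,
    packing=True,
)

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-Coder-7B-Instruct",
    args=training_args,
    train_dataset=dataset,
    peft_config=peft_config,
)
trainer.train()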

Dataset

  • Size: ~720 training samples, ~80 test samples
  • Source: Synthetic dataset covering 20 game genres and 19 feature categories
  • Format: Conversational (messages with user / assistant roles; see the sketch after this list)
  • Coverage: RPG, Puzzle, Strategy, FPS, MOBA, Battle Royale, Runner, Simulation, Sports, Card Game, Tower Defense, Racing, Fighting, Adventure, Survival, MMORPG, Idle, Tycoon, Rhythm, Sandbox
  • Feature categories: shop, combat, quest, guild, gacha, accessibility, notification, monetization, customization, level, account, settings, progression, social, bug, localization, privacy, competitive, story
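
Each sample follows the standard conversational schema. A hypothetical example (the feature text here is invented for illustration):

{
  "messages": [
    {"role": "user", "content": "Write an end-to-end test plan for this mobile Puzzle game feature:\nFeature: Player completes a level and receives a star rating based on remaining moves."},
    {"role": "assistant", "content": "Preconditions:\n1. ...\n\nTest steps:\n1. ...\n\nExpected results:\n..."}
  ]
}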

How to use

Training

pip install transformers trl peft accelerate datasets
python train.py

Inference

from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_id = "randomtravellerai/mobile-game-test-plans-qwen2.5-coder-7b-lora"

# This repository hosts a LoRA adapter; with peft installed, transformers
# loads the base model and applies the adapter automatically.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

messages = [
    {"role": "user", "content": "Write an end-to-end test plan for this mobile RPG game feature:\nFeature: Player opens the shop from the main hub, browses weapon skins, previews them on their character, and purchases using in-game currency."}
]

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
result = pipe(messages, max_new_tokens=1024)

# With chat-format input, "generated_text" holds the full conversation;
# the last message is the assistant's reply.
print(result[0]["generated_text"][-1]["content"])
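
If you prefer to load the adapter explicitly on top of a pinned base model, peft's PeftModel API works as well; a minimal sketch:

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen2.5-Coder-7B-Instruct"
adapter_id = "randomtravellerai/mobile-game-test-plans-qwen2.5-coder-7b-lora"

base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Optionally merge the adapter into the base weights for faster inference:
model = model.merge_and_unload()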

Research basis

Training recipe adapted from:

  • Parameter-Efficient Fine-Tuning of Large Language Models for Unit Test Generation (arXiv:2411.02462) — LoRA hyperparameters
  • Enhancing Large Language Models for Text-to-Testcase Generation (arXiv:2402.11910) — prompt design and task framing

License

Apache-2.0 (matching the base model's license)

Generated by ML Intern

This model repository was generated by ML Intern, an agent for machine learning research and development on the Hugging Face Hub.
