# qwen3-ascii-tui-lora
LoRA adapter for Qwen/Qwen3-0.6B specialized for terminal-first educational content:
- high-quality ASCII diagrams
- TUI-style process maps
- concise conceptual explanations paired with diagram blocks
## Model Details

- Developed by: mr-dee
- Base model: Qwen/Qwen3-0.6B
- Fine-tuning method: QLoRA / PEFT
- Adapter repo: mr-dee/qwen3-ascii-tui-lora
- Source code: https://github.com/dylanler/ascii-tui-qwen3-lab
## Intended Use
Direct use:
- Generate CLI-friendly educational diagrams for science and engineering topics.
- Produce compact text layouts suitable for terminals, docs, and chat interfaces.
Downstream use:
- Learning tools, documentation assistants, terminal tutors, and curriculum prototyping.
Out-of-scope:
- Safety-critical decisions.
- Legal/medical/financial advice.
- Fully automated scientific tutoring without human review.
## Training Data
Dataset characteristics:
- 1,000 synthetic instruction-response rows
- 950 train / 50 eval split
- 40 unique topics
- Required topic coverage includes:
  - lifecycle of a volcano
  - how does gravity work
  - double-slit experiment
Each row includes: `topic`, `instruction`, `response` (multiline explanation + ASCII/TUI diagram), `diagram_style`, `difficulty`, and `tags`.
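To make the schema concrete, here is an illustrative row as a Python dict. The field names come from the card; the values are made up for illustration, not taken from the actual dataset.

```python
# Illustrative dataset row. Field names match the card's schema;
# the values below are invented examples, not real training data.
row = {
    "topic": "double-slit experiment",
    "instruction": "Explain the double-slit experiment with a compact ASCII diagram.",
    "response": (
        "Light passes through two slits and interferes.\n"
        "+--------+    | |    ~ ~ ~\n"
        "| source | -> | | -> ~ ~ ~  (interference pattern)\n"
        "+--------+    | |    ~ ~ ~"
    ),
    "diagram_style": "flow",
    "difficulty": "intermediate",
    "tags": ["physics", "quantum"],
}

# The six fields listed in the card:
expected_keys = {"topic", "instruction", "response", "diagram_style", "difficulty", "tags"}
assert set(row) == expected_keys
```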
## Training Procedure

Hardware:
- 4x NVIDIA A100-SXM4-80GB (DDP via `torchrun --nproc_per_node=4`)
LoRA config:
- rank: `r=32`
- alpha: 64
- dropout: 0.05
- target modules: `q_proj`, `k_proj`, `v_proj`, `o_proj`, `gate_proj`, `up_proj`, `down_proj`
Optimization:
- epochs: 3
- learning rate: 2e-4
- per-device train batch size: 4
- gradient accumulation: 4
- max sequence length: 1024
- mixed precision: bf16
- optimizer: paged AdamW 8-bit
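The effective global batch size follows from the numbers above: 4 GPUs x per-device batch 4 x gradient accumulation 4. A quick sketch of the arithmetic, which also lines up with the 45 logged train points reported below:

```python
import math

# Values from the training configuration above.
num_gpus = 4
per_device_batch = 4
grad_accum = 4
train_rows = 950
epochs = 3

# Effective global batch size per optimizer step.
effective_batch = num_gpus * per_device_batch * grad_accum
print(effective_batch)  # 64

# ceil(950 / 64) = 15 optimizer steps per epoch; over 3 epochs
# that gives the 45 train points in the metrics snapshot.
steps_per_epoch = math.ceil(train_rows / effective_batch)
print(steps_per_epoch * epochs)  # 45
```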
## Metrics Snapshot

From `trainer_state.json` and `train_metrics.json`:
- train points: 45
- eval points: 9
- train loss: 4.12497 -> 0.10813
- eval loss: 2.06111 -> 0.11857
- best checkpoint: `checkpoint-45`
- train runtime: 88.83s
- train run average loss: 0.69445
## Quick Start

### Transformers + PEFT

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE = "Qwen/Qwen3-0.6B"
ADAPTER = "mr-dee/qwen3-ascii-tui-lora"

tokenizer = AutoTokenizer.from_pretrained(BASE, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    BASE,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)
model = PeftModel.from_pretrained(model, ADAPTER)

messages = [
    {"role": "system", "content": "You create terminal-friendly educational ASCII/TUI diagrams."},
    {"role": "user", "content": "Explain the double-slit experiment with a compact ASCII diagram."},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=600, temperature=0.2, top_p=0.9)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```
### vLLM (OpenAI-compatible endpoint)

```shell
vllm serve Qwen/Qwen3-0.6B \
  --enable-lora \
  --lora-modules ascii=mr-dee/qwen3-ascii-tui-lora \
  --tensor-parallel-size 2 \
  --max-model-len 2048
```
Query example:

```shell
curl http://127.0.0.1:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "ascii",
    "messages": [
      {"role":"user","content":"Teach gravity using an ASCII flow diagram and 3 takeaways."}
    ],
    "max_tokens": 500,
    "temperature": 0.2
  }'
```
Important:
- keep `prompt_tokens + max_tokens <= max_model_len`
- if exceeded, vLLM returns an HTTP 400 validation error
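The token-budget rule above can be checked client-side before sending a request. A minimal sketch, assuming the server was launched with `--max-model-len 2048` as shown; the helper name and its return value are my own, not part of vLLM:

```python
def check_token_budget(prompt_tokens: int, max_tokens: int, max_model_len: int = 2048) -> int:
    """Raise if prompt_tokens + max_tokens would exceed the server's
    context window; otherwise return the remaining headroom in tokens.

    Hypothetical client-side helper, not a vLLM API.
    """
    total = prompt_tokens + max_tokens
    if total > max_model_len:
        raise ValueError(
            f"prompt_tokens + max_tokens = {total} exceeds "
            f"max_model_len = {max_model_len}; vLLM would reject this with HTTP 400"
        )
    return max_model_len - total

# A 1200-token prompt plus 500 new tokens fits with 348 tokens to spare.
print(check_token_budget(1200, 500))  # 348
```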
## Risks and Limitations
- May generate plausible but incorrect scientific claims.
- ASCII layout quality can degrade for very long outputs.
- Not guaranteed to follow strict width limits unless prompted.
- Should be reviewed by a human before educational publication.
## Responsible Use
- Verify scientific correctness.
- Validate generated text/code before classroom use.
- Avoid over-reliance in high-stakes domains.
## Framework Versions
- PEFT: 0.18.1
- Transformers: 4.57.6
- TRL: 0.27.2
- PyTorch: 2.9.1
## Citation

```bibtex
@misc{qwen3_ascii_tui_lora_2026,
  title  = {qwen3-ascii-tui-lora},
  author = {mr-dee},
  year   = {2026},
  url    = {https://huggingface.co/mr-dee/qwen3-ascii-tui-lora}
}
```
## Evaluation Results

Self-reported, on synthetic ASCII/TUI educational instructions:
- final logged train loss (step 45): 0.108
- final eval loss (step 45): 0.119
- train run average loss: 0.694