# wave-ui-7b
Fine-tuned from Qwen/Qwen2.5-VL-7B-Instruct on agentsea/wave-ui-25k for UI element grounding: given a screenshot and the name of a UI element (e.g. a button), the model returns the element's bounding-box coordinates.
## Usage
```python
from transformers import Qwen2_5_VLForConditionalGeneration, Qwen2_5_VLProcessor
from qwen_vl_utils import process_vision_info
from PIL import Image
import torch, re

model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "miketes/wave-ui-7b", torch_dtype=torch.bfloat16, device_map="auto"
)
processor = Qwen2_5_VLProcessor.from_pretrained("miketes/wave-ui-7b")

image = Image.open("screenshot.png").convert("RGB")
messages = [{
    "role": "user",
    "content": [
        {"type": "image", "image": image},
        {"type": "text", "text": 'Where is the "login button"? Return the bounding box.'},
    ],
}]

text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, _ = process_vision_info(messages)
inputs = processor(text=[text], images=image_inputs, return_tensors="pt").to(model.device)

with torch.inference_mode():
    out = model.generate(**inputs, max_new_tokens=64, do_sample=False)

# Decode only the newly generated tokens, then take the first four integers as [x1, y1, x2, y2].
result = processor.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
coords = re.findall(r"\d+", result)
bbox = [int(x) for x in coords[:4]] if len(coords) >= 4 else None
print(bbox)  # [678, 99, 772, 138]
```
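Qwen2.5-VL models generally emit box coordinates in the pixel space of the image as the processor fed it to the model, which may differ from your original screenshot if the processor resized it. Below is a minimal rescaling sketch; `scale_bbox` is a hypothetical helper (not part of the model or processor API), and the sizes shown are made-up examples:

```python
def scale_bbox(bbox, model_size, orig_size):
    """Map [x1, y1, x2, y2] from the model's input resolution back to the original image.

    model_size / orig_size are (width, height) tuples; the processed size can be read
    from the processor's image outputs if a resize occurred.
    """
    mw, mh = model_size
    ow, oh = orig_size
    sx, sy = ow / mw, oh / mh
    return [round(bbox[0] * sx), round(bbox[1] * sy),
            round(bbox[2] * sx), round(bbox[3] * sy)]

# e.g. the model saw a 1000x700 resize of a 2000x1400 screenshot
print(scale_bbox([678, 99, 772, 138], (1000, 700), (2000, 1400)))
# [1356, 198, 1544, 276]
```

If the processor did not resize the image, the box can be used as-is.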
## Training details
- Dataset: agentsea/wave-ui-25k (25,000 labeled UI screenshots)
- Split: 80% train / 10% val / 10% test (fixed seed=42)
- Method: QLoRA fine-tuning (4-bit, rank=16)
- Platforms: Web, mobile, desktop UI screenshots
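The fixed-seed split above can be reproduced with a plain index shuffle. This is a minimal sketch under the assumption of a simple shuffle-then-slice procedure with seed 42; the exact split code used in training is not published here:

```python
import random

def split_indices(n, seed=42, train=0.8, val=0.1):
    """Deterministically partition n example indices into train/val/test."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)          # fixed seed -> reproducible split
    n_train = int(n * train)
    n_val = int(n * val)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

train_idx, val_idx, test_idx = split_indices(25_000)
print(len(train_idx), len(val_idx), len(test_idx))  # 20000 2500 2500
```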