# SVG Generation Model Weights

Model weights for the DL Spring 2026 Kaggle Competition — Text-to-SVG Generation.

- **Author:** Ivan Aristy (NYU Tandon, CS-GY 9223 / ECE-GY 7123)
- **Base Model:** Qwen/Qwen2.5-Coder-1.5B-Instruct
- **Best Public Score:** 16.87 / 100
- **GitHub:** ivanearisty/svg-gen
## Models
| Model | Type | Public Score | Description |
|---|---|---|---|
| componly-r32-adapter | LoRA adapter | 16.87 | Best model. Requires merged-1.5b-r16 as base. LoRA r=32, alpha=64, trained on 45k competition-only samples. |
| merged-1.5b-r16 | Full model | – | Qwen2.5-Coder-1.5B-Instruct with the Round 1 LoRA r=16 adapter permanently merged into its weights. Base for the best adapter. |
| refined-7000 | Full model | 16.26 | Full fine-tune from merged base, checkpoint 7000, CE loss 0.308. Standalone model. |
| r16-3epoch | LoRA adapter | 15.47 | Round 1 adapter. LoRA r=16, 3 epochs on 46k competition data. Load on Qwen/Qwen2.5-Coder-1.5B-Instruct. |
| mixed-r32-adapter | LoRA adapter | 14.64 | LoRA r=32, trained on 76k mixed data (competition + external). External data hurt performance. |
| codegen-1.5b | Full model | 12.26 | Code generation experiment — model outputs Python code instead of raw SVG. |
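The best configuration is therefore two pieces: the merged base plus the r=32 adapter loaded on top. A minimal loading sketch, assuming each model lives in a repo subfolder named as in the table (an assumption; adjust paths to the actual repository layout):

```python
# Sketch: load the best model (componly-r32-adapter on merged-1.5b-r16).
# Subfolder names mirror the table above and are assumptions, not confirmed
# paths inside kaleidoscopicwhether/svg-gen-weights.

REPO = "kaleidoscopicwhether/svg-gen-weights"

def load_best_model():
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    # The merged base already has the Round 1 r=16 adapter baked in.
    base = AutoModelForCausalLM.from_pretrained(REPO, subfolder="merged-1.5b-r16")
    tokenizer = AutoTokenizer.from_pretrained(REPO, subfolder="merged-1.5b-r16")
    # Attach the Round 2 r=32 adapter on top of the merged base.
    model = PeftModel.from_pretrained(base, REPO, subfolder="componly-r32-adapter")
    return model, tokenizer
```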
## Training Strategy
We use an iterative merge-and-retrain approach:
- Round 1: LoRA r=16 on Qwen2.5-Coder-1.5B (46k competition data, 3 epochs)
- Merge adapter into base weights
- Round 2: Fresh LoRA r=32 on merged base (45k competition data, 2 epochs) โ best model
- Round 3: Merge Round 2 adapter, full fine-tune on DGX Spark
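The merge step between rounds can be sketched with PEFT's `merge_and_unload`; the function and argument names below are illustrative, not the project's actual training script:

```python
# Sketch of the per-round merge: fold the trained LoRA adapter into the base
# weights so the next round starts a fresh adapter from a clean full model.

def merge_round(base_model_id: str, adapter_id: str, out_dir: str):
    from transformers import AutoModelForCausalLM
    from peft import PeftModel

    base = AutoModelForCausalLM.from_pretrained(base_model_id)
    # Load this round's LoRA adapter and bake it into the weights.
    merged = PeftModel.from_pretrained(base, adapter_id).merge_and_unload()
    merged.save_pretrained(out_dir)  # next round trains a new LoRA on this
```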
## Key Findings
- System prompt is critical: Removing it drops score from 53.8 to 18.8 (local ablation)
- Less is more: Minimal system prompt ("Output valid SVG code only.") outperforms verbose prompts
- LoRA > Full fine-tune: Despite lower CE loss, full fine-tuning scores worse than LoRA
- Competition data only: External datasets (SVGX, OmniSVG) degraded performance
- Greedy decoding optimal: Any sampling or penalty variation hurts structured SVG output
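The prompt and decoding findings above translate into a small inference setup. The system prompt is quoted from the findings; the token budget is an assumption:

```python
# Sketch of the inference configuration implied by the findings: the minimal
# system prompt plus strictly greedy decoding (no sampling, no penalties).

SYSTEM_PROMPT = "Output valid SVG code only."

def build_messages(description: str):
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": description},
    ]

GENERATION_KWARGS = {
    "do_sample": False,         # greedy: deterministic argmax at each step
    "num_beams": 1,             # plain greedy, not beam search
    "repetition_penalty": 1.0,  # penalties hurt structured SVG output
    "max_new_tokens": 1024,     # assumed budget; tune to your SVG length
}
```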
## Hardware
- Training (QLoRA): NVIDIA RTX 2000 Ada (16GB VRAM)
- Training (Full FT): NVIDIA DGX Spark (128GB unified memory)
- Inference: RTX 2000 Ada, 4-bit quantized, ~20s per SVG
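For 16 GB-class GPUs, 4-bit loading keeps the 1.5B model comfortably in VRAM. The quantization settings below are typical QLoRA-style defaults (nf4, bf16 compute), not the author's confirmed configuration:

```python
# Sketch: 4-bit quantized inference loading for a 16 GB GPU.
# Quant settings are assumed defaults, not documented project values.

def load_quantized(model_id: str):
    import torch
    from transformers import AutoModelForCausalLM, BitsAndBytesConfig

    bnb = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.bfloat16,
    )
    return AutoModelForCausalLM.from_pretrained(
        model_id, quantization_config=bnb, device_map="auto"
    )
```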
## Model Tree

Lineage for kaleidoscopicwhether/svg-gen-weights (each step is a fine-tune of the previous):

Qwen/Qwen2.5-1.5B → Qwen/Qwen2.5-Coder-1.5B → Qwen/Qwen2.5-Coder-1.5B-Instruct → svg-gen-weights