SVG Generation Model Weights

Model weights for the DL Spring 2026 Kaggle Competition: Text-to-SVG Generation.

  • Author: Ivan Aristy (NYU Tandon, CS-GY 9223 / ECE-GY 7123)
  • Base Model: Qwen/Qwen2.5-Coder-1.5B-Instruct
  • Best Public Score: 16.87 / 100
  • GitHub: ivanearisty/svg-gen

Models

| Model | Type | Public Score | Description |
|---|---|---|---|
| componly-r32-adapter | LoRA adapter | 16.87 | Best model. Requires merged-1.5b-r16 as base. LoRA r=32, alpha=64, trained on 45k competition-only samples. |
| merged-1.5b-r16 | Full model | - | Qwen2.5-Coder-1.5B with the Round 1 LoRA r=16 adapter permanently merged into the weights. Base for the best adapter. |
| refined-7000 | Full model | 16.26 | Full fine-tune from the merged base, checkpoint 7000, CE loss 0.308. Standalone model. |
| r16-3epoch | LoRA adapter | 15.47 | Round 1 adapter. LoRA r=16, 3 epochs on 46k competition samples. Load on Qwen/Qwen2.5-Coder-1.5B-Instruct. |
| mixed-r32-adapter | LoRA adapter | 14.64 | LoRA r=32, trained on 76k mixed samples (competition + external). External data hurt performance. |
| codegen-1.5b | Full model | 12.26 | Code-generation experiment: the model outputs Python code instead of raw SVG. |
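The best public score comes from stacking componly-r32-adapter on the merged-1.5b-r16 base. A minimal loading sketch with `transformers` and `peft`, assuming the model names in the table above are local directory paths:

```python
# Sketch: load the best-scoring stack (componly-r32-adapter on top of the
# merged-1.5b-r16 base). Directory names follow the table above and are
# assumptions about the local checkout layout.
BEST_STACK = {"base": "merged-1.5b-r16", "adapter": "componly-r32-adapter"}

def load_best_model(base_dir=BEST_STACK["base"], adapter_dir=BEST_STACK["adapter"]):
    # Lazy imports so the module stays importable without the heavy deps installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    tokenizer = AutoTokenizer.from_pretrained(base_dir)
    base = AutoModelForCausalLM.from_pretrained(base_dir, device_map="auto")
    model = PeftModel.from_pretrained(base, adapter_dir)  # attach the LoRA r=32 adapter
    model.eval()
    return tokenizer, model
```

Note that the adapter is useless on the stock Qwen checkpoint; it was trained against the merged Round 1 weights.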

Training Strategy

We use an iterative merge-and-retrain approach:

  1. Round 1: LoRA r=16 on Qwen2.5-Coder-1.5B (46k competition data, 3 epochs)
  2. Merge adapter into base weights
  3. Round 2: Fresh LoRA r=32 on merged base (45k competition data, 2 epochs); this produced the best model
  4. Round 3: Merge Round 2 adapter, full fine-tune on DGX Spark
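The merge step between rounds can be sketched with the `peft` API; the directory names are assumptions taken from the model table, and Round 2 then trains a fresh adapter on the merged output:

```python
# Sketch of steps 1-3 above: fold the Round 1 adapter into the base weights,
# then configure a fresh r=32 LoRA for Round 2. Paths are assumptions.
LORA_RANKS = {"round1": 16, "round2": 32}

def merge_round1(base_id="Qwen/Qwen2.5-Coder-1.5B-Instruct",
                 adapter_dir="r16-3epoch",
                 out_dir="merged-1.5b-r16"):
    from transformers import AutoModelForCausalLM
    from peft import PeftModel

    base = AutoModelForCausalLM.from_pretrained(base_id)
    # merge_and_unload() bakes the LoRA deltas into the base weights permanently.
    merged = PeftModel.from_pretrained(base, adapter_dir).merge_and_unload()
    merged.save_pretrained(out_dir)

def round2_config():
    from peft import LoraConfig
    # Fresh adapter on the merged base; alpha=64 matches the best model's entry.
    return LoraConfig(r=LORA_RANKS["round2"], lora_alpha=64, task_type="CAUSAL_LM")
```

Merging before retraining lets each round start from a plain full-precision checkpoint instead of stacking adapters.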

Key Findings

  • System prompt is critical: Removing it drops score from 53.8 to 18.8 (local ablation)
  • Less is more: Minimal system prompt ("Output valid SVG code only.") outperforms verbose prompts
  • LoRA > Full fine-tune: Despite lower CE loss, full fine-tuning scores worse than LoRA
  • Competition data only: External datasets (SVGX, OmniSVG) degraded performance
  • Greedy decoding optimal: Any sampling or penalty variation hurts structured SVG output
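The prompt and decoding findings reduce to very little code. A sketch of the inference setup implied above (the token budget is an assumption; the system prompt string is the one quoted in the findings):

```python
# Minimal inference settings implied by the findings above: the short system
# prompt and pure greedy decoding (no sampling, no repetition penalty).
SYSTEM_PROMPT = "Output valid SVG code only."

GENERATION_KWARGS = {
    "do_sample": False,      # greedy decoding; any sampling hurt SVG structure
    "max_new_tokens": 2048,  # assumption: enough budget for one SVG
}

def build_messages(description: str):
    # Chat-format messages for Qwen's chat template.
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": description},
    ]
```

These messages would be rendered with the tokenizer's `apply_chat_template` and the kwargs passed straight to `model.generate`.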

Hardware

  • Training (QLoRA): NVIDIA RTX 2000 Ada (16GB VRAM)
  • Training (Full FT): NVIDIA DGX Spark (128GB unified memory)
  • Inference: RTX 2000 Ada, 4-bit quantized, ~20s per SVG
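To fit inference in 16GB of VRAM, the model is loaded 4-bit quantized. A sketch using bitsandbytes through `transformers`; the nf4/bfloat16 settings are assumptions, since the card only states "4-bit":

```python
# Sketch: 4-bit quantized inference on a 16GB GPU (RTX 2000 Ada).
# Quantization details beyond "4-bit" are assumptions.
DEFAULT_MODEL_DIR = "refined-7000"  # any full model from the table above works

def load_quantized(model_dir=DEFAULT_MODEL_DIR):
    import torch
    from transformers import AutoModelForCausalLM, BitsAndBytesConfig

    bnb = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",             # assumed quant type
        bnb_4bit_compute_dtype=torch.bfloat16,  # assumed compute dtype
    )
    return AutoModelForCausalLM.from_pretrained(
        model_dir, quantization_config=bnb, device_map="auto"
    )
```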
