
Sketch-Conditioned Image Generation with Abstraction-Aware Conditioning

Project Overview

A complete research pipeline for sketch-conditioned image generation that:

  1. Establishes a baseline using ControlNet with HED edge conditioning
  2. Implements a controlled abstraction protocol for varying sketch quality levels
  3. Develops an abstraction-aware conditioning strategy that adapts sketch guidance based on estimated sketch quality
  4. Evaluates and compares baseline vs. proposed method across abstraction levels

Research Foundation

Based on literature review of landmark papers:

  • SketchingReality (arXiv:2602.14648) - Freehand sketch-to-image with attention supervision
  • ControlNet (arXiv:2302.05543) - Conditional control with zero-convolution
  • ControlNet++ (arXiv:2404.07987) - Cycle consistency for better controllability
  • T2I-Adapter (arXiv:2302.08453) - Lightweight adapter for conditioning

Controlled Abstraction Protocol

Implements 4 dimensions of sketch quality degradation:

| Dimension      | Description                     | Range     |
|----------------|---------------------------------|-----------|
| Sparsity       | Random stroke removal           | 0.0 → 1.0 |
| Distortion     | Elastic geometric deformation   | 0.0 → 1.0 |
| Incompleteness | Region erasure                  | 0.0 → 1.0 |
| Quality        | Blur, noise, contrast reduction | 0.0 → 1.0 |
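As an illustration of how one of these dimensions can be applied, here is a minimal sketch of a sparsity-style degradation. Note the simplification: the actual protocol removes whole strokes, while this pixel-level version (function name and threshold are ours) only conveys the idea.

```python
import numpy as np

def degrade_sparsity(sketch: np.ndarray, level: float, rng=None) -> np.ndarray:
    """Illustrative sparsity degradation: erase a fraction `level` of the
    dark (stroke) pixels in a grayscale sketch on a white background.
    The real protocol drops entire strokes; this is a pixel-level sketch."""
    rng = np.random.default_rng(rng)
    out = sketch.copy()
    stroke_mask = out < 128                # dark pixels count as strokes
    drop = rng.random(out.shape) < level   # fraction of pixels to erase
    out[stroke_mask & drop] = 255          # erase to white background
    return out
```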

Abstraction Levels

| Level     | Sparsity | Distortion | Incompleteness | Quality |
|-----------|----------|------------|----------------|---------|
| Excellent | 0.0      | 0.0        | 0.0            | 0.0     |
| Good      | 0.2      | 0.15       | 0.1            | 0.15    |
| Moderate  | 0.4      | 0.3        | 0.25           | 0.3     |
| Poor      | 0.6      | 0.45       | 0.4            | 0.5     |
| Very Poor | 0.8      | 0.6        | 0.6            | 0.7     |
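The level presets above can be encoded as a simple lookup table; a sketch follows (the dictionary name and key spellings are ours, not necessarily those used in the repo):

```python
# Per-level degradation parameters, taken from the table above.
ABSTRACTION_LEVELS = {
    "excellent": {"sparsity": 0.0, "distortion": 0.0,  "incompleteness": 0.0,  "quality": 0.0},
    "good":      {"sparsity": 0.2, "distortion": 0.15, "incompleteness": 0.1,  "quality": 0.15},
    "moderate":  {"sparsity": 0.4, "distortion": 0.3,  "incompleteness": 0.25, "quality": 0.3},
    "poor":      {"sparsity": 0.6, "distortion": 0.45, "incompleteness": 0.4,  "quality": 0.5},
    "very_poor": {"sparsity": 0.8, "distortion": 0.6,  "incompleteness": 0.6,  "quality": 0.7},
}
```

Such presets can be passed directly as the `levels` argument shown in the Usage section below.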

Methods

Baseline: ControlNet + HED

  • Base: Stable Diffusion v1.5
  • Conditioning: ControlNet with HED edge maps
  • Fixed conditioning strength (1.0)

Proposed: Abstraction-Aware Adaptive Conditioning

  • Estimates the sketch's abstraction level (a proxy for sketch quality) with a CNN
  • Adapts the conditioning strength: higher-quality sketches receive stronger control
  • Formula: strength = 0.3 + (1 - abstraction) * 0.7
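The adaptive rule can be written as a one-line function (the function name is ours; the 0.3 floor and 1.0 ceiling follow from the formula for abstraction scores in [0, 1]):

```python
def adaptive_strength(abstraction: float) -> float:
    """Map an estimated abstraction score in [0, 1] to a ControlNet
    conditioning strength: cleaner sketches get stronger guidance."""
    abstraction = min(max(abstraction, 0.0), 1.0)  # clamp to valid range
    return 0.3 + (1.0 - abstraction) * 0.7

# A perfect sketch (abstraction 0.0) gets full strength 1.0;
# a fully abstract sketch is floored at strength 0.3.
```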

Dataset

FS-COCO via HuggingFace: xiaoyue1028/fscoco_sketch

  • ~10,000 image-sketch-caption triplets
  • Freehand scene sketches from human annotators

Repository Structure

scripts/
├── abstraction_protocol.py        # Controlled abstraction protocol
├── abstraction_aware_model.py     # Proposed method architecture
├── baseline_pipeline.py           # ControlNet baseline
├── dataset_loader.py              # FS-COCO dataset loading
├── evaluation.py                  # FID, CLIP, LPIPS metrics
├── experiment_runner.py           # Full experiment orchestration
├── gpu_train_controlnet.py        # GPU training script
├── inference_demo.py              # Inference demonstration
├── quick_demo.py                  # Quick demo (no GPU)
├── train_abstraction_aware.py     # Proposed method training
├── train_baseline_controlnet.py   # Baseline training
├── run_full_pipeline.py           # Complete pipeline runner
└── visualize_results.py           # Visualization utilities

Usage

from scripts.abstraction_protocol import SketchAbstractionProtocol

protocol = SketchAbstractionProtocol(resolution=512)

# Generate abstraction levels
results = protocol.generate_abstraction_levels(
    sketch_pil=your_sketch,
    levels=[{'sparsity': 0.5, 'distortion': 0.3, 'incompleteness': 0.2, 'quality': 0.4}]
)

# Compute abstraction score
scores = protocol.compute_abstraction_score(sketch_array)
# Returns: {'sparsity', 'distortion', 'incompleteness', 'quality', 'overall'}

References

  1. Bourouis et al. (2026). "SketchingReality: From Freehand Scene Sketches To Photorealistic Images." arXiv:2602.14648.
  2. Zhang & Agrawala (2023). "Adding Conditional Control to Text-to-Image Diffusion Models." arXiv:2302.05543.
  3. Li et al. (2024). "ControlNet++: Improving Conditional Controls with Efficient Consistency Feedback." arXiv:2404.07987.
  4. Mou et al. (2023). "T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models." arXiv:2302.08453.

License

This research implementation follows the licenses of underlying models and datasets.

  • FS-COCO: CC BY-NC 4.0
  • Stable Diffusion: CreativeML Open RAIL-M
  • ControlNet: Apache-2.0