SketchingReality: From Freehand Scene Sketches To Photorealistic Images
Paper • 2602.14648 • Published
A complete research pipeline for sketch-conditioned image generation.

Based on a literature review of landmark papers, the abstraction protocol implements four dimensions of sketch quality degradation:
| Dimension | Description | Range |
|---|---|---|
| Sparsity | Random stroke removal | 0.0 – 1.0 |
| Distortion | Elastic geometric deformation | 0.0 – 1.0 |
| Incompleteness | Region erasure | 0.0 – 1.0 |
| Quality | Blur, noise, contrast reduction | 0.0 – 1.0 |
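The first two dimensions in the table can be sketched as simple array operations. The functions below are illustrative stand-ins (the names and the pixel-level logic are assumptions; the repo's protocol operates on strokes, not raw pixels):

```python
import numpy as np

def degrade_sparsity(sketch: np.ndarray, level: float, rng=None) -> np.ndarray:
    """Remove a fraction `level` of stroke pixels at random (pixel-level
    stand-in for stroke removal)."""
    rng = rng or np.random.default_rng(0)
    out = sketch.copy()
    stroke_idx = np.flatnonzero(out)  # stroke pixels are nonzero
    drop = rng.choice(stroke_idx, size=int(level * len(stroke_idx)), replace=False)
    out.flat[drop] = 0
    return out

def degrade_incompleteness(sketch: np.ndarray, level: float, rng=None) -> np.ndarray:
    """Erase one square region whose area scales with `level`."""
    rng = rng or np.random.default_rng(0)
    h, w = sketch.shape
    side = int(np.sqrt(level) * min(h, w))
    if side == 0:
        return sketch.copy()
    y = rng.integers(0, h - side + 1)
    x = rng.integers(0, w - side + 1)
    out = sketch.copy()
    out[y:y + side, x:x + side] = 0
    return out

# Toy sketch: a single horizontal stroke
sketch = np.zeros((64, 64), dtype=np.uint8)
sketch[32, :] = 255
sparse = degrade_sparsity(sketch, 0.4)  # 40% of stroke pixels removed
```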
Preset abstraction levels combine the four dimensions:

| Level | Sparsity | Distortion | Incompleteness | Quality |
|---|---|---|---|---|
| Excellent | 0.0 | 0.0 | 0.0 | 0.0 |
| Good | 0.2 | 0.15 | 0.1 | 0.15 |
| Moderate | 0.4 | 0.3 | 0.25 | 0.3 |
| Poor | 0.6 | 0.45 | 0.4 | 0.5 |
| Very Poor | 0.8 | 0.6 | 0.6 | 0.7 |
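As a configuration object, the presets above map directly to per-dimension dictionaries. The names `ABSTRACTION_PRESETS` and `overall_abstraction` (mean of the four dimensions) are assumptions for illustration, not the repo's API:

```python
# Preset values mirror the table above.
ABSTRACTION_PRESETS = {
    "excellent": {"sparsity": 0.0, "distortion": 0.0,  "incompleteness": 0.0,  "quality": 0.0},
    "good":      {"sparsity": 0.2, "distortion": 0.15, "incompleteness": 0.1,  "quality": 0.15},
    "moderate":  {"sparsity": 0.4, "distortion": 0.3,  "incompleteness": 0.25, "quality": 0.3},
    "poor":      {"sparsity": 0.6, "distortion": 0.45, "incompleteness": 0.4,  "quality": 0.5},
    "very_poor": {"sparsity": 0.8, "distortion": 0.6,  "incompleteness": 0.6,  "quality": 0.7},
}

def overall_abstraction(preset: str) -> float:
    """Assumed aggregation: unweighted mean of the four dimensions."""
    vals = ABSTRACTION_PRESETS[preset].values()
    return sum(vals) / len(vals)
```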
```
strength = 0.3 + (1 - abstraction) * 0.7
```

Dataset: FS-COCO via HuggingFace: `xiaoyue1028/fscoco_sketch`
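Assuming `abstraction` is an overall score in [0, 1], the strength formula above reads as a linear schedule: a clean sketch (abstraction 0) gets full conditioning strength 1.0, a fully abstract sketch (abstraction 1) is clamped to 0.3. A minimal sketch:

```python
def conditioning_strength(abstraction: float) -> float:
    """Linear schedule from the formula above: strength in [0.3, 1.0],
    decreasing as the sketch becomes more abstract."""
    return 0.3 + (1 - abstraction) * 0.7

# Clean sketch -> strongest conditioning; very abstract -> floor of 0.3
for level in (0.0, 0.5, 1.0):
    print(level, conditioning_strength(level))
```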
```
scripts/
├── abstraction_protocol.py       # Controlled abstraction protocol
├── abstraction_aware_model.py    # Proposed method architecture
├── baseline_pipeline.py          # ControlNet baseline
├── dataset_loader.py             # FS-COCO dataset loading
├── evaluation.py                 # FID, CLIP, LPIPS metrics
├── experiment_runner.py          # Full experiment orchestration
├── gpu_train_controlnet.py       # GPU training script
├── inference_demo.py             # Inference demonstration
├── quick_demo.py                 # Quick demo (no GPU)
├── train_abstraction_aware.py    # Proposed method training
├── train_baseline_controlnet.py  # Baseline training
├── run_full_pipeline.py          # Complete pipeline runner
└── visualize_results.py          # Visualization utilities
```
```python
from scripts.abstraction_protocol import SketchAbstractionProtocol

protocol = SketchAbstractionProtocol(resolution=512)

# Generate abstraction levels
results = protocol.generate_abstraction_levels(
    sketch_pil=your_sketch,
    levels=[{'sparsity': 0.5, 'distortion': 0.3, 'incompleteness': 0.2, 'quality': 0.4}]
)

# Compute abstraction score
scores = protocol.compute_abstraction_score(sketch_array)
# Returns: {'sparsity', 'distortion', 'incompleteness', 'quality', 'overall'}
```
This research implementation follows the licenses of the underlying models and datasets.