LayoutGPT: Compositional Visual Planning and Generation with Large Language Models
Abstract
LayoutGPT enhances LLMs with visual planning capabilities by generating domain-specific layouts from text, improving text-to-image and 3D scene synthesis accuracy.
Attaining a high degree of user controllability in visual generation often requires intricate, fine-grained inputs such as layouts. However, such inputs impose a substantial burden on users compared to simple text inputs. To address this issue, we study how Large Language Models (LLMs) can serve as visual planners by generating layouts from text conditions and thereby collaborate with visual generative models. We propose LayoutGPT, a method that composes in-context visual demonstrations in style-sheet language to enhance the visual planning skills of LLMs. LayoutGPT can generate plausible layouts in multiple domains, ranging from 2D images to 3D indoor scenes. LayoutGPT also shows superior performance in converting challenging language concepts, such as numerical and spatial relations, into layout arrangements for faithful text-to-image generation. When combined with a downstream image generation model, LayoutGPT outperforms text-to-image models/systems by 20-40% and matches human users in designing visual layouts with correct object counts and spatial relations. Lastly, LayoutGPT achieves performance comparable to supervised methods in 3D indoor scene synthesis, demonstrating its effectiveness and potential across multiple visual domains.
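To make the prompting idea concrete, below is a minimal sketch of how a 2D layout can be serialized as CSS-like rules for an in-context demonstration and parsed back into bounding boxes from an LLM completion. The exact rule format, field names, and example scene are illustrative assumptions based on the abstract's description of style-sheet demonstrations, not the paper's released code.

```python
import re

# Sketch of LayoutGPT-style layout serialization: render (category, x, y, w, h)
# boxes as CSS-like rules for in-context demonstrations, and parse an LLM's
# reply back into bounding boxes. Field names and the demo scene are assumed.

def layout_to_css(caption, boxes):
    """Render a captioned layout as a CSS-like in-context example."""
    lines = [f"Prompt: {caption}", "Layout:"]
    for category, x, y, w, h in boxes:
        lines.append(
            f"{category} {{ left: {x}px; top: {y}px; "
            f"width: {w}px; height: {h}px; }}"
        )
    return "\n".join(lines)

def css_to_boxes(css_text):
    """Parse CSS-like rules back into (category, x, y, w, h) tuples."""
    rule = re.compile(
        r"(\w[\w\s]*?)\s*\{\s*left:\s*(\d+)px;\s*top:\s*(\d+)px;\s*"
        r"width:\s*(\d+)px;\s*height:\s*(\d+)px;\s*\}"
    )
    return [
        (m.group(1).strip(), *map(int, m.group(2, 3, 4, 5)))
        for m in rule.finditer(css_text)
    ]

if __name__ == "__main__":
    demo = layout_to_css(
        "two apples on a wooden table",
        [("apple", 10, 30, 14, 14), ("apple", 30, 32, 13, 13),
         ("wooden table", 0, 40, 64, 24)],
    )
    print(demo)                # inserted into the LLM prompt as a demonstration
    print(css_to_boxes(demo))  # how a generated layout would be read back
```

The parsed boxes can then be passed to a layout-conditioned image generator or a 3D scene synthesizer as the downstream model.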
Community
Librarian Bot (automated): the following similar papers were recommended by the Semantic Scholar API.
- 3D-Layout-R1: Structured Reasoning for Language-Instructed Spatial Editing (2026)
- SpatialReward: Verifiable Spatial Reward Modeling for Fine-Grained Spatial Consistency in Text-to-Image Generation (2026)
- DesignSense: A Human Preference Dataset and Reward Modeling Framework for Graphic Layout Generation (2026)
- SeeThrough3D: Occlusion Aware 3D Control in Text-to-Image Generation (2026)
- Spatial Chain-of-Thought: Bridging Understanding and Generation Models for Spatial Reasoning Generation (2026)
- Tokenization Allows Multimodal Large Language Models to Understand, Generate and Edit Architectural Floor Plans (2026)
- From Pixels to Policies: Reinforcing Spatial Reasoning in Language Models for Content-Aware Layout Design (2026)
Get this paper in your agent:
hf papers read 2305.15393
Don't have the latest CLI? Install it with: curl -LsSf https://hf.co/cli/install.sh | bash