HY-World 2.0: A Multi-Modal World Model for Reconstructing, Generating, and Simulating 3D Worlds Paper • 2604.14268 • Published 3 days ago • 72
CAD-MLLM: Unifying Multimodality-Conditioned CAD Generation With MLLM Paper • 2411.04954 • Published Nov 7, 2024 • 9
FUSION: Fully Integration of Vision-Language Representations for Deep Cross-Modal Understanding Paper • 2504.09925 • Published Apr 14, 2025 • 39
DebSDF: Delving into the Details and Bias of Neural Indoor Scene Reconstruction Paper • 2308.15536 • Published Aug 29, 2023