SpatiaLab: Can Vision-Language Models Perform Spatial Reasoning in the Wild?
Abstract
SpatiaLab presents a comprehensive benchmark for evaluating vision-language models' spatial reasoning capabilities across realistic, diverse scenarios, revealing significant gaps compared to human performance.
Spatial reasoning is a fundamental aspect of human cognition, yet it remains a major challenge for contemporary vision-language models (VLMs). Prior work has largely relied on synthetic or LLM-generated environments with limited task designs and puzzle-like setups, failing to capture the real-world complexity, visual noise, and diverse spatial relationships that VLMs encounter. To address this, we introduce SpatiaLab, a comprehensive benchmark for evaluating VLMs' spatial reasoning in realistic, unconstrained contexts. SpatiaLab comprises 1,400 visual question-answer pairs across six major categories: Relative Positioning, Depth & Occlusion, Orientation, Size & Scale, Spatial Navigation, and 3D Geometry, each with five subcategories, yielding 30 distinct task types. Each subcategory contains at least 25 questions, and each main category includes at least 200 questions, supporting both multiple-choice and open-ended evaluation. Experiments across diverse state-of-the-art VLMs, including open- and closed-source models, reasoning-focused models, and specialized spatial reasoning models, reveal a substantial gap in spatial reasoning capabilities compared with humans. In the multiple-choice setup, InternVL3.5-72B achieves 54.93% accuracy versus 87.57% for humans. In the open-ended setting, all models show a performance drop of around 10-25%, with GPT-5-mini scoring highest at 40.93% versus 64.93% for humans. These results highlight key limitations in handling complex spatial relationships, depth perception, navigation, and 3D geometry. By providing a diverse, real-world evaluation framework, SpatiaLab exposes critical challenges and opportunities for advancing VLMs' spatial reasoning, offering a benchmark to guide future research toward robust, human-aligned spatial understanding. SpatiaLab is available at: https://spatialab-reasoning.github.io/.
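To make the multiple-choice evaluation setup concrete, below is a minimal sketch of how per-category accuracy could be computed over the six categories. The field names (image, question, options, answer, category) and the predict stub are illustrative assumptions for this sketch, not the official SpatiaLab loader or evaluation code.

```python
from collections import defaultdict

def predict(image, question, options):
    """Stand-in for a VLM call; here it trivially picks the first option.
    Replace with your own model inference returning one of the options."""
    return options[0]

def evaluate_mcq(examples):
    """Compute multiple-choice accuracy per category.

    Each example is assumed to be a dict with keys:
    'image', 'question', 'options', 'answer', 'category'.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for ex in examples:
        pred = predict(ex["image"], ex["question"], ex["options"])
        total[ex["category"]] += 1
        if pred == ex["answer"]:
            correct[ex["category"]] += 1
    # Per-category accuracy, e.g. {'Relative Positioning': 0.55, ...}
    return {cat: correct[cat] / total[cat] for cat in total}
```

Reporting accuracy per category (rather than a single aggregate) mirrors the benchmark's breakdown across Relative Positioning, Depth & Occlusion, Orientation, Size & Scale, Spatial Navigation, and 3D Geometry.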
Community
We are excited to share that our paper "SpatiaLab: Can Vision-Language Models Perform Spatial Reasoning in the Wild?" has been accepted to ICLR 2026 (The Fourteenth International Conference on Learning Representations).
SpatiaLab investigates how vision-language models handle spatial reasoning in real-world settings, and we hope it will serve as a useful benchmark and reference for future research in this area.