arxiv:2602.03916

SpatiaLab: Can Vision-Language Models Perform Spatial Reasoning in the Wild?

Published on Feb 3 · Submitted by Azmine Toushik Wasi on Feb 5

Abstract

AI-generated summary

SpatiaLab presents a comprehensive benchmark for evaluating vision-language models' spatial reasoning capabilities across realistic, diverse scenarios, revealing significant gaps compared to human performance.

Spatial reasoning is a fundamental aspect of human cognition, yet it remains a major challenge for contemporary vision-language models (VLMs). Prior work largely relied on synthetic or LLM-generated environments with limited task designs and puzzle-like setups, failing to capture the real-world complexity, visual noise, and diverse spatial relationships that VLMs encounter. To address this, we introduce SpatiaLab, a comprehensive benchmark for evaluating VLMs' spatial reasoning in realistic, unconstrained contexts. SpatiaLab comprises 1,400 visual question-answer pairs across six major categories: Relative Positioning, Depth & Occlusion, Orientation, Size & Scale, Spatial Navigation, and 3D Geometry, each with five subcategories, yielding 30 distinct task types. Each subcategory contains at least 25 questions, and each main category includes at least 200 questions, supporting both multiple-choice and open-ended evaluation. Experiments across diverse state-of-the-art VLMs, including open- and closed-source models, reasoning-focused, and specialized spatial reasoning models, reveal a substantial gap in spatial reasoning capabilities compared with humans. In the multiple-choice setup, InternVL3.5-72B achieves 54.93% accuracy versus 87.57% for humans. In the open-ended setting, all models show a performance drop of around 10-25%, with GPT-5-mini scoring highest at 40.93% versus 64.93% for humans. These results highlight key limitations in handling complex spatial relationships, depth perception, navigation, and 3D geometry. By providing a diverse, real-world evaluation framework, SpatiaLab exposes critical challenges and opportunities for advancing VLMs' spatial reasoning, offering a benchmark to guide future research toward robust, human-aligned spatial understanding. SpatiaLab is available at: https://spatialab-reasoning.github.io/.
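
As a rough illustration of the evaluation protocol the abstract describes (multiple-choice questions grouped into six spatial reasoning categories, scored by accuracy), here is a minimal Python sketch for computing overall and per-category accuracy. The dataset repository id ("spatialab/spatialab-benchmark") and the column names ("category", "answer") are assumptions made for illustration only, not the paper's released API; see the project page and the dataset linked from this page for the actual release.

```python
# Minimal sketch: per-category multiple-choice accuracy for a SpatiaLab-style benchmark.
# Assumptions: the dataset repo id and the column names ("category", "answer") are
# hypothetical placeholders; consult the project page for the actual release format.
from collections import defaultdict

from datasets import load_dataset  # pip install datasets

CATEGORIES = [
    "Relative Positioning", "Depth & Occlusion", "Orientation",
    "Size & Scale", "Spatial Navigation", "3D Geometry",
]

def category_accuracy(examples, predictions):
    """Compare predicted option letters against gold answers, grouped by category."""
    correct, total = defaultdict(int), defaultdict(int)
    for example, predicted in zip(examples, predictions):
        cat = example["category"]
        total[cat] += 1
        if predicted.strip().upper() == example["answer"].strip().upper():
            correct[cat] += 1
    per_category = {c: correct[c] / total[c] for c in total}
    overall = sum(correct.values()) / max(sum(total.values()), 1)
    return overall, per_category

if __name__ == "__main__":
    # Hypothetical repo id; replace with the dataset linked from this page.
    ds = load_dataset("spatialab/spatialab-benchmark", split="test")
    # Trivial baseline: always answer "A"; a real evaluation would query a VLM here.
    preds = ["A"] * len(ds)
    overall, per_cat = category_accuracy(ds, preds)
    print(f"Overall accuracy: {overall:.2%}")
    for cat, acc in sorted(per_cat.items()):
        print(f"  {cat}: {acc:.2%}")
```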

Community

We are excited to share that our paper "SpatiaLab: Can Vision-Language Models Perform Spatial Reasoning in the Wild?" has been accepted to ICLR 2026 (The Fourteenth International Conference on Learning Representations).
SpatiaLab investigates how vision-language models handle spatial reasoning in real-world settings, and we hope it will serve as a useful benchmark and reference for future research in this area.

Models citing this paper 0

Datasets citing this paper 1

Spaces citing this paper 0

Collections including this paper 1