Papers
arxiv:2603.26839

From Pixels to BFS: High Maze Accuracy Does Not Imply Visual Planning

Published on Mar 27

Abstract

Multimodal models solve visual spatial tasks through text-based enumeration rather than genuine planning: a maze benchmark shows their performance collapses without explicit reasoning budgets.

AI-generated summary

How do multimodal models solve visual spatial tasks: through genuine planning, or through brute-force search in token space? We introduce MazeBench, a benchmark of 110 procedurally generated maze images across nine controlled groups, and evaluate 16 model configurations from OpenAI, Anthropic, Google, and Alibaba. GPT-5.4 solves 91% and Gemini 3.1 Pro 79%, but these scores are misleading: models typically translate images into text grids and then enumerate paths step by step, consuming 1,710–22,818 tokens per solve for a task humans do quickly. Without added reasoning budgets, all configurations score only 2–12%; on 20×20 ultra-hard mazes, they hit token limits and fail. Qualitative traces reveal a common two-stage strategy: image-to-grid translation followed by token-level search, effectively BFS in prose. A text-grid ablation shows Claude Sonnet 4.6 rising from 6% on images to 80% when given the correct grid, isolating weak visual extraction from downstream search. When explicitly instructed not to construct a grid or perform graph search, models still revert to the same enumeration strategy. MazeBench therefore shows that high accuracy on visual planning tasks does not imply human-like spatial understanding.
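The two-stage strategy the abstract describes — translate the maze image into a text grid, then search it token by token — amounts to ordinary breadth-first search over grid cells. A minimal sketch of that search stage (illustrative only, not the paper's code; the grid format with `#` for walls and `S`/`G` markers is an assumption for the example):

```python
# Illustrative sketch of the "grid-then-search" strategy: once the maze is a
# text grid, finding a path is plain BFS over cells. '#' marks a wall.
from collections import deque

def bfs_path(grid, start, goal):
    """Return a shortest path from start to goal as a list of (row, col),
    or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}          # visited set + back-pointers
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:        # reconstruct path by walking back-pointers
            path, node = [], goal
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in prev):
                prev[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None

maze = ["S..#",
        ".#.#",
        "...G"]
print(bfs_path(maze, (0, 0), (2, 3)))
```

A few lines like this solve any of the benchmark's mazes exactly; the paper's point is that models re-derive this procedure in prose, spending thousands of tokens to emulate it step by step.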


