ResAdapt: Adaptive Resolution for Efficient Multimodal Reasoning
Abstract
ResAdapt is an input-side adaptation framework that dynamically allocates visual resources to improve multimodal large language models' efficiency in video tasks while maintaining high performance.
Multimodal Large Language Models (MLLMs) achieve stronger visual understanding by scaling input fidelity, yet the resulting growth in visual tokens makes it prohibitive to jointly sustain high spatial resolution and long temporal context. We argue that the bottleneck lies not in how post-encoding representations are compressed but in the volume of pixels the encoder receives, and address it with ResAdapt, an input-side adaptation framework that learns how much visual budget each frame should receive before encoding. ResAdapt couples a lightweight Allocator with an unchanged MLLM backbone, so the backbone retains its native visual-token interface while receiving an operator-transformed input. We formulate allocation as a contextual bandit and train the Allocator with Cost-Aware Policy Optimization (CAPO), which converts sparse rollout feedback into a stable accuracy-cost learning signal. Across budget-controlled video QA, temporal grounding, and image reasoning tasks, ResAdapt improves low-budget operating points and often lies on or near the efficiency-accuracy frontier, with the clearest gains on reasoning-intensive benchmarks under aggressive compression. Notably, ResAdapt supports up to 16× more frames at the same visual budget while delivering over a 15% performance gain. Code is available at https://github.com/Xnhyacinth/ResAdapt.
Community
Handling both high spatial resolution and long temporal context in MLLMs is computationally prohibitive. Instead of compressing tokens after encoding (which often discards fine-grained evidence), this paper introduces ResAdapt, an input-side adaptation framework that dynamically learns how much pixel budget each frame should receive before encoding.
✨ Key Highlights:
- Smart Budget Allocation: Uses a lightweight Allocator trained via Cost-Aware Policy Optimization (CAPO) to assign resolution budgets dynamically, leaving the MLLM backbone architecture completely unchanged.
- Emergent Active Perception: Automatically concentrates high-resolution budget on information-dense events while aggressively compressing redundant frames.
- Extreme Efficiency: Supports up to 16× more frames under the same visual budget, yielding a >15% performance gain on reasoning-intensive benchmarks!
A highly practical route to long-context video reasoning under tight visual budgets.
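The accuracy-cost trade-off behind CAPO can be illustrated with a toy contextual bandit: a softmax policy picks a resolution budget per frame, and a reward combining task accuracy with a pixel-cost penalty drives a REINFORCE-style update. This is a minimal sketch under stated assumptions; the resolution grid, `lambda_cost`, the feature representation, and the update rule are all illustrative stand-ins, not the paper's actual Allocator or CAPO implementation.

```python
import numpy as np

# Candidate per-frame resolutions (assumed grid, not from the paper).
RESOLUTIONS = [64, 128, 256, 448]

def allocate(frame_features, weights, temperature=1.0):
    """Softmax policy over resolution arms given per-frame features."""
    logits = frame_features @ weights / temperature      # (n_frames, n_arms)
    logits -= logits.max(axis=1, keepdims=True)          # numerical stability
    probs = np.exp(logits)
    return probs / probs.sum(axis=1, keepdims=True)

def cost_aware_reward(accuracy, chosen_arms, lambda_cost=1e-3):
    """Task accuracy minus a penalty proportional to total pixels spent."""
    total_pixels = sum(RESOLUTIONS[a] ** 2 for a in chosen_arms)
    return accuracy - lambda_cost * total_pixels / 1e6

rng = np.random.default_rng(0)
n_frames, n_feat, n_arms = 8, 4, len(RESOLUTIONS)
weights = np.zeros((n_feat, n_arms))                     # initially uniform policy
features = rng.normal(size=(n_frames, n_feat))           # stand-in frame features

probs = allocate(features, weights)
arms = [rng.choice(n_arms, p=p) for p in probs]          # sample a budget per frame
reward = cost_aware_reward(accuracy=0.7, chosen_arms=arms)

# REINFORCE-style update: push up the log-prob of chosen arms, scaled by reward.
lr = 0.1
for i, a in enumerate(arms):
    grad = -probs[i]
    grad[a] += 1.0
    weights += lr * reward * np.outer(features[i], grad)
```

In this sketch the cost penalty makes high-resolution choices worth taking only when they raise accuracy enough to pay for their pixels, which is the intuition behind concentrating budget on information-dense frames.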
The following papers were recommended by the Semantic Scholar API
- Dynamic Token Compression for Efficient Video Understanding through Reinforcement Learning (2026)
- ReDiPrune: Relevance-Diversity Pre-Projection Token Pruning for Efficient Multimodal LLMs (2026)
- PIO-FVLM: Rethinking Training-Free Visual Token Reduction for VLM Acceleration from an Inference-Objective Perspective (2026)
- Difference Feedback: Generating Multimodal Process-Level Supervision for VLM Reinforcement Learning (2026)
- Incentivizing Temporal-Awareness in Egocentric Video Understanding Models (2026)
- Improving Visual Reasoning with Iterative Evidence Refinement (2026)
- Mitigating the Reasoning Tax in Vision-Language Fine-Tuning with Input-Adaptive Depth Aggregation (2026)