SafeGround: Know When to Trust GUI Grounding Models via Uncertainty Calibration
Abstract
SafeGround is an uncertainty-aware framework for GUI grounding models that uses distribution-aware uncertainty quantification and calibration to enable risk-aware predictions with controlled false discovery rates.
Graphical User Interface (GUI) grounding aims to translate natural language instructions into executable screen coordinates, enabling automated GUI interaction. However, incorrect grounding can trigger costly, hard-to-reverse actions (e.g., erroneous payment approvals), raising concerns about model reliability. In this paper, we introduce SafeGround, an uncertainty-aware framework for GUI grounding models that enables risk-aware predictions through calibration prior to test time. SafeGround leverages a distribution-aware uncertainty quantification method to capture the spatial dispersion of stochastic samples drawn from the outputs of any given model. Through the subsequent calibration process, SafeGround derives a test-time decision threshold with statistically guaranteed false discovery rate (FDR) control. We apply SafeGround to multiple GUI grounding models on the challenging ScreenSpot-Pro benchmark. Experimental results show that our uncertainty measure consistently outperforms existing baselines in distinguishing correct from incorrect predictions, while the calibrated threshold reliably enables rigorous risk control and unlocks substantial system-level accuracy improvements. Across multiple GUI grounding models, SafeGround improves system-level accuracy by up to 5.38 percentage points over Gemini-only inference.
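The abstract does not include implementation details, but the two-stage pipeline it describes (dispersion-based uncertainty over stochastic samples, then a calibrated acceptance threshold targeting an FDR level) can be sketched roughly as follows. All function names, the centroid-distance dispersion measure, and the plain empirical threshold search below are illustrative assumptions, not the paper's actual method; a rigorous version would use a multiple-testing procedure (e.g., Learn-then-Test-style risk control) to obtain the statistical FDR guarantee.

```python
import numpy as np

def dispersion_uncertainty(samples: np.ndarray) -> float:
    """Illustrative distribution-aware uncertainty: mean distance of
    stochastically sampled click coordinates from their centroid.
    `samples` has shape (n_samples, 2), holding (x, y) screen
    coordinates from repeated stochastic decodes of one instruction."""
    centroid = samples.mean(axis=0)
    return float(np.linalg.norm(samples - centroid, axis=1).mean())

def calibrate_threshold(uncertainties, correct, alpha=0.1):
    """Illustrative calibration: choose the most permissive uncertainty
    threshold whose empirical false discovery rate on a held-out
    calibration set stays below `alpha`. This uses a plain empirical
    estimate; it does NOT by itself give the finite-sample guarantee
    the paper claims."""
    uncertainties = np.asarray(uncertainties)
    correct = np.asarray(correct, dtype=bool)
    best = -np.inf  # default: reject everything if no threshold is safe
    for t in np.sort(uncertainties):
        accepted = uncertainties <= t
        if accepted.any():
            # fraction of accepted predictions that are wrong
            fdr = (~correct[accepted]).mean()
            if fdr <= alpha:
                best = max(best, t)
    return best

# Usage sketch: accept a test-time prediction only when its
# dispersion-based uncertainty falls below the calibrated threshold;
# otherwise defer (e.g., fall back to a stronger model or a human).
```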
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Double-Calibration: Towards Trustworthy LLMs via Calibrating Knowledge and Reasoning Confidence (2026)
- Calibrating LLM Judges: Linear Probes for Fast and Reliable Uncertainty Estimation (2025)
- NAACL: Noise-AwAre Verbal Confidence Calibration for LLMs in RAG Systems (2026)
- Fact-Checking with Large Language Models via Probabilistic Certainty and Consistency (2026)
- EpiCaR: Knowing What You Don't Know Matters for Better Reasoning in LLMs (2026)
- From Passive Metric to Active Signal: The Evolving Role of Uncertainty Quantification in Large Language Models (2026)
- Step-GUI Technical Report (2025)
If you want recommendations for any paper on Hugging Face, check out this Space.
You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend