arxiv:2602.14279

Whom to Query for What: Adaptive Group Elicitation via Multi-Turn LLM Interactions

Published on Feb 15 · Submitted by Ruomeng Ding on Feb 23
Abstract

An adaptive group elicitation framework combines LLM-based information gain scoring with graph neural networks to improve population-level predictions under budget constraints.

AI-generated summary

Eliciting information to reduce uncertainty about latent group-level properties from surveys and other collective assessments requires allocating limited questioning effort under real costs and missing data. Although large language models enable adaptive, multi-turn interactions in natural language, most existing elicitation methods optimize what to ask with a fixed respondent pool, and do not adapt respondent selection or leverage population structure when responses are partial or incomplete. To address this gap, we study adaptive group elicitation, a multi-round setting where an agent adaptively selects both questions and respondents under explicit query and participation budgets. We propose a theoretically grounded framework that combines (i) an LLM-based expected information gain objective for scoring candidate questions with (ii) heterogeneous graph neural network propagation that aggregates observed responses and participant attributes to impute missing responses and guide per-round respondent selection. This closed-loop procedure queries a small, informative subset of individuals while inferring population-level responses via structured similarity. Across three real-world opinion datasets, our method consistently improves population-level response prediction under constrained budgets, including a >12% relative gain on CES at a 10% respondent budget.
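To make the question-scoring step concrete, here is a minimal sketch of an expected-information-gain-style objective. In the paper the predictive answer distributions come from an LLM; here they are hypothetical precomputed distributions, and predictive entropy stands in as a simple proxy for the information gained by asking. All names and numbers below are illustrative, not the paper's implementation.

```python
import math

def entropy(p):
    """Shannon entropy (base 2) of a discrete answer distribution."""
    return -sum(q * math.log2(q) for q in p if q > 0)

def score_questions(pred_dists):
    """Score each candidate question by the mean predictive entropy of
    its answer distributions across the respondent pool; a question the
    model is already confident about contributes little information."""
    return {
        q: sum(entropy(p) for p in dists) / len(dists)
        for q, dists in pred_dists.items()
    }

# Hypothetical per-respondent distributions over a binary answer space
# for two candidate questions.
pred_dists = {
    "q_confident": [[0.95, 0.05], [0.9, 0.1]],  # model already near-certain
    "q_uncertain": [[0.5, 0.5], [0.6, 0.4]],    # model unsure -> informative
}
scores = score_questions(pred_dists)
best = max(scores, key=scores.get)
print(best)  # the high-uncertainty question wins
```

A full EIG objective would additionally weigh how each answer updates the group-level posterior, but the ranking logic (prefer questions whose answers the model cannot yet predict) is the same.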

Community

Paper submitter

🧠 When we try to understand a group’s true preferences, the challenge is not only what to ask, but whom to query for what.

We study a new problem setting:
Adaptive Group Elicitation
Under real costs and missing data, the system must dynamically decide:
👉 ❓ Which question to ask
👉 👥 Which individuals to query
👉 🌐 How to leverage population structure to infer unobserved responses

Most prior work optimizes only question selection,
assuming a fixed respondent pool.
But in practice:
• 📊 Individuals differ in their contribution to uncertainty reduction
• 🔗 Population structure induces correlated responses
• 🧩 Observations are sparse and incomplete

We propose a framework that combines:
🧠 LLM-based expected information gain for scoring candidate questions
🌐 Heterogeneous GNN propagation to aggregate responses and attributes
🎯 Per-round adaptive respondent selection under explicit budgets
By querying a small, informative subset of individuals,
the model infers population-level responses through structured similarity.
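As a toy illustration of this closed loop (not the paper's implementation): each round, pick the most uncertain unqueried respondents within the budget, observe their answers, then fill in the rest from graph neighbors. The similarity graph, uncertainty scores, and neighbor-averaging imputation below are all simplified stand-ins for the heterogeneous-GNN propagation described above.

```python
def select_respondents(uncertainty, budget_k, already_asked):
    """Greedily pick the budget_k unqueried respondents with the
    highest current predictive uncertainty."""
    candidates = [r for r in uncertainty if r not in already_asked]
    return sorted(candidates, key=lambda r: -uncertainty[r])[:budget_k]

def impute_from_neighbors(observed, graph):
    """Fill in missing responses by averaging observed neighbors:
    a crude stand-in for learned GNN propagation."""
    imputed = dict(observed)
    for node, neighbors in graph.items():
        if node in observed:
            continue
        vals = [observed[n] for n in neighbors if n in observed]
        if vals:
            imputed[node] = sum(vals) / len(vals)
    return imputed

# Hypothetical similarity graph over five respondents.
graph = {"a": ["b", "c"], "b": ["a"], "c": ["a", "d"],
         "d": ["c", "e"], "e": ["d"]}
uncertainty = {"a": 0.9, "b": 0.2, "c": 0.8, "d": 0.3, "e": 0.1}

asked = select_respondents(uncertainty, budget_k=2, already_asked=set())
observed = {r: 1.0 for r in asked}  # pretend every queried answer is 1.0
full = impute_from_neighbors(observed, graph)
print(asked)  # the two highest-uncertainty respondents
```

Note that respondent "e" stays unimputed here because none of its neighbors were queried; in the actual framework, multi-hop propagation and participant attributes would still provide a prediction for such nodes.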

Core insight:
Group-level elicitation is a budget-constrained, structure-aware uncertainty reduction problem,
not merely a survey design task.

📄 Paper: https://arxiv.org/pdf/2602.14279
💻 Code: https://github.com/ZDCSlab/Group-Adaptive-Elicitation
