id | url | title | average_rating | average_confidence | ratings | confidences | reviewers_num | keywords | abstract | tldr | primary_area | pdf_url | submission_date | total_reviews | reviews |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
vxkzW4ljeX | https://openreview.net/forum?id=vxkzW4ljeX | A universal compression theory: Lottery ticket hypothesis and superpolynomial scaling laws | 5.5 | 3 | [4, 6, 8, 4] | [3, 3, 2, 4] | 4 | ["Neural scaling law", "model compression", "lottery ticket hypothesis", "deep learning theory"] | When training large-scale models, the performance typically scales with the number of parameters and the dataset size according to a slow power law. A fundamental theoretical and practical question is whether comparable performance can be achieved with significantly smaller models and substantially less data. In this work, we provide a positive and constructive answer. We prove that a generic permutation-invariant function of $d$ objects can be asymptotically compressed into a function of $\operatorname{polylog} d$ objects with vanishing error. This theorem yields two key implications: (Ia) a large neural network can be compressed to polylogarithmic width while preserving its learning dynamics; (Ib) a large dataset can be compressed to polylogarithmic size while leaving the loss landscape of the corresponding model unchanged. (Ia) directly establishes a proof of the \textit{dynamical} lottery ticket hypothesis, which states that any ordinary network can be strongly compressed such that the learning dynamics and result remain unchanged. (Ib) shows that a neural scaling law of the form $L\sim d^{-\alpha}$ can be boosted to an arbitrarily fast power law decay, and ultimately to $\exp(-\alpha' \sqrt[m]{d})$. | We prove that permutation symmetry enables polylogarithmic compression of neural networks and datasets, thus establishing the dynamical lottery ticket hypothesis and boosting neural scaling laws | unsupervised, self-supervised, semi-supervised, and supervised representation learning | https://openreview.net/pdf?id=vxkzW4ljeX | 2025-09-19T05:07:02 | 4 | [{"id": "vvIZ8RIzRX", "forum": "vxkzW4ljeX", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission14172/Reviewer_YxjE", "reviewer_name": "Reviewer_YxjE", "rating": 4, "confidence": 3, "soundness": 3, "contribution": 2, "presentation": 3, "summary": "This ... |
fwCoRzh0Dw | https://openreview.net/forum?id=fwCoRzh0Dw | InfiniteHiP: Extending Language Model Context Up to 3 Million Tokens on a Single GPU | 4 | 3 | [6, 4, 2] | [2, 3, 4] | 3 | ["Sparse Attention", "Efficient Attention", "Context Extrapolation", "KV Cache Offloading"] | In modern large language models (LLMs), handling very long context lengths presents significant challenges as it causes slower inference speeds and increased memory costs. Additionally, most existing pre-trained LLMs fail to generalize beyond their original training sequence lengths. To enable efficient and practical long-context utilization, we introduce \textit{InfiniteHiP}, a novel and practical LLM inference framework that accelerates processing by dynamically eliminating irrelevant context tokens through a modular hierarchical token pruning algorithm. Our method also allows generalization to longer sequences by selectively applying various RoPE adjustment methods according to the internal attention patterns within LLMs. Furthermore, we offload the key-value cache to host memory during inference, significantly reducing GPU memory pressure. As a result, InfiniteHiP enables the processing of up to 3 million tokens on a single L40s 48GB GPU -- 3x larger -- without any permanent loss of context information. Our framework achieves an 18.95x speedup in attention decoding for a 1 million token context without requiring additional training. We implement our method in the SGLang framework and demonstrate its effectiveness and practicality through extensive evaluations. | InfiniteHiP extends the servable model context length beyond VRAM and pretrained model context limitation. | infrastructure, software libraries, hardware, systems, etc. | https://openreview.net/pdf?id=fwCoRzh0Dw | 2025-09-17T09:29:23 | 3 | [{"id": "1VQ0xZHvLL", "forum": "fwCoRzh0Dw", "review_number": 3, "reviewer_id": "ICLR.cc/2026/Conference/Submission8178/Reviewer_SD7R", "reviewer_name": "Reviewer_SD7R", "rating": 6, "confidence": 2, "soundness": 3, "contribution": 3, "presentation": 2, "summary": "Infini... |
5rjSeZCM6l | https://openreview.net/forum?id=5rjSeZCM6l | FedSumUp: Secure Federated Learning Without Client-Side Training for Resource-Constrained Edge Devices | 3.5 | 3.25 | [4, 2, 4, 4] | [3, 3, 3, 4] | 4 | ["Federated Learning", "Data Condensation", "Server-Side Optimization", "Privacy-Preserving", "Edge Devices", "Variational Autoencoder"] | Horizontal Federated Learning (HFL) enables multiple clients with private data to collaboratively train a global model without sharing their local data. As a research branch of HFL, Federated Data Condensation with Distribution Matching (FDCDM) introduces a novel collaborative paradigm where clients upload small synthetic datasets instead of gradients and parameters. FDCDM faces two key challenges: privacy leakage risk, where synthetic data may leak the privacy of real data; and high computational cost on the client side, which limits the deployment capability of FDCDM on resource-constrained devices. To address these challenges, we propose FedSumUp, an improved FDCDM method. The core designs of FedSumUp include: generating initial data templates based on a Variational Autoencoder (VAE); and migrating the entire synthetic data optimization process to the server side, requiring clients only to upload distilled synthetic data and the mean of raw data features without exposing the original data itself. Experimental results on multiple real-world datasets demonstrate that FedSumUp achieves notable advantages in the following aspects: drastically reducing the visual similarity between synthetic and real data, and effectively resisting membership inference attacks; significantly lowering client-side computational overhead, making it deployable on edge devices. FedSumUp is the first work to systematically analyze privacy risks in FDCDM from the perspective of data similarity, providing a new direction for building efficient and privacy-preserving federated learning frameworks. | alignment, fairness, safety, privacy, and societal considerations | https://openreview.net/pdf?id=5rjSeZCM6l | 2025-09-20T12:40:47 | 4 | [{"id": "GcXZTsH254", "forum": "5rjSeZCM6l", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission23402/Reviewer_VTkQ", "reviewer_name": "Reviewer_VTkQ", "rating": 4, "confidence": 3, "soundness": 3, "contribution": 3, "presentation": 3, "summary": "This ... | |
qN0Il4dtGg | https://openreview.net/forum?id=qN0Il4dtGg | HARMAP: Hierarchical Atomic Representation for Materials Property Prediction | 3.5 | 3 | [2, 2, 4, 6] | [4, 3, 3, 2] | 4 | ["AI for Materials", "Atomic Representation", "Material Property Prediction"] | Accurate prediction of material properties is a key step toward rapid materials discovery and cost-effective exploration of vast chemical spaces. Recent advances in machine learning (ML) offer a data-driven alternative that enables fast and scalable property estimation. However, prevailing graph-based pipelines use one-hot or shallow element embeddings and simple distance-based edges, which under-encode element-specific characteristics and cannot faithfully capture bond relations. Thus, we develop HARMAP, a Hierarchical Atomic Representation for Materials Property prediction. First, we build a chemistry-informed Hierarchical Element Knowledge Tree (HEK-Tree) that classifies elements from coarse to fine (e.g., metal vs. non-metal, subgroupings), producing atomic embeddings that preserve unique identities and inter-atomic relations. Second, we map these features into hyperbolic spaces that preserve hierarchical structure, enabling compact separation of levels and smooth similarity across related elements. Finally, we construct a compound graph whose nodes use the learned atomic embeddings and whose edges combine geometric proximity, providing bond-aware connectivity. Across three large public datasets, HARMAP consistently improves over formula-only, structure-only, and standard graph baselines, indicating the effectiveness of HARMAP's unique atomic and bond representations. | A Hierarchical Atomic Representation for Materials Property prediction. | applications to physical sciences (physics, chemistry, biology, etc.) | https://openreview.net/pdf?id=qN0Il4dtGg | 2025-09-10T21:25:01 | 4 | [{"id": "Kr0LTtqs14", "forum": "qN0Il4dtGg", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission3745/Reviewer_CDzq", "reviewer_name": "Reviewer_CDzq", "rating": 2, "confidence": 4, "soundness": 3, "contribution": 2, "presentation": 3, "summary": "This p... |
0hLuQAT3fV | https://openreview.net/forum?id=0hLuQAT3fV | Universal Image Immunization against Diffusion-based Image Editing via Semantic Injection | 5 | 3.5 | [4, 4, 4, 8] | [3, 4, 4, 3] | 4 | ["Diffusion Model", "AI Safety", "Image Immunization", "Adversarial Attack", "Image Editing"] | Recent advances in diffusion models have enabled powerful image editing capabilities guided by natural language prompts, unlocking new creative possibilities. However, they introduce significant ethical and legal risks, such as deepfakes and unauthorized use of copyrighted visual content. To address these risks, image immunization has emerged as a promising defense against AI-driven semantic manipulation. Yet, most existing approaches rely on image-specific adversarial perturbations that require individual optimization for each image, thereby limiting scalability and practicality. In this paper, we propose the first universal image immunization framework that generates a single, broadly applicable adversarial perturbation specifically designed for diffusion-based editing pipelines. Inspired by universal adversarial perturbation (UAP) techniques used in targeted attacks, our method generates a UAP that embeds a semantic target into images to be protected. Simultaneously, it suppresses original content to effectively misdirect the model’s attention during editing. As a result, our approach effectively blocks malicious editing attempts by overwriting the original semantic content in the image via the UAP. Moreover, our method operates effectively even in data-free settings without requiring access to training data or domain knowledge, further enhancing its practicality and broad applicability in real-world scenarios. Extensive experiments show that our method, as the first universal immunization approach, significantly outperforms several baselines in the UAP setting. In addition, despite the inherent difficulty of universal perturbations, our method also achieves performance on par with image-specific methods under a more restricted perturbation budget, while also exhibiting strong black-box transferability across different diffusion models. | alignment, fairness, safety, privacy, and societal considerations | https://openreview.net/pdf?id=0hLuQAT3fV | 2025-09-12T19:50:27 | 4 | [{"id": "Cp6SNqZd08", "forum": "0hLuQAT3fV", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission4421/Reviewer_nGCo", "reviewer_name": "Reviewer_nGCo", "rating": 4, "confidence": 3, "soundness": 2, "contribution": 2, "presentation": 3, "summary": "This p... | |
3sJ4zKToW6 | https://openreview.net/forum?id=3sJ4zKToW6 | Consistent Low-Rank Approximation | 6.666667 | 3.333333 | [4, 8, 8] | [3, 2, 5] | 3 | ["low-rank approximation", "online algorithms", "consistency", "recourse"] | We introduce and study the problem of consistent low-rank approximation, in which rows of an input matrix $\mathbf{A}\in\mathbb{R}^{n\times d}$ arrive sequentially and the goal is to provide a sequence of subspaces that well-approximate the optimal rank-$k$ approximation to the submatrix $\mathbf{A}^{(t)}$ that has arrived at each time $t$, while minimizing the recourse, i.e., the overall change in the sequence of solutions. We first show that when the goal is to achieve a low-rank cost within an additive $\varepsilon\cdot||\mathbf{A}^{(t)}||_F^2$ factor of the optimal cost, roughly $\mathcal{O}\left(\frac{k}{\varepsilon}\log(nd)\right)$ recourse is feasible. For the more challenging goal of achieving a relative $(1+\varepsilon)$-multiplicative approximation of the optimal rank-$k$ cost, we show that a simple upper bound in this setting is $\frac{k^2}{\varepsilon^2}\cdot\text{poly}\log(nd)$ recourse, which we further improve to $\frac{k^{3/2}}{\varepsilon^2}\cdot\text{poly}\log(nd)$ for integer-bounded matrices and $\frac{k}{\varepsilon^2}\cdot\text{poly}\log(nd)$ for data streams with polynomial online condition number. We also show that $\Omega\left(\frac{k}{\varepsilon}\log\frac{n}{k}\right)$ recourse is necessary for any algorithm that maintains a multiplicative $(1+\varepsilon)$-approximation to the optimal low-rank cost, even if the full input is known in advance. Finally, we perform a number of empirical evaluations to complement our theoretical guarantees, demonstrating the efficacy of our algorithms in practice. | optimization | https://openreview.net/pdf?id=3sJ4zKToW6 | 2025-09-19T05:52:21 | 3 | [{"id": "G9M6d2dYmo", "forum": "3sJ4zKToW6", "review_number": 3, "reviewer_id": "ICLR.cc/2026/Conference/Submission14297/Reviewer_ex4U", "reviewer_name": "Reviewer_ex4U", "rating": 4, "confidence": 3, "soundness": 3, "contribution": 2, "presentation": 2, "summary": "This ... | |
OyIJvyyB3R | https://openreview.net/forum?id=OyIJvyyB3R | LLM2Fx-Tools: Tool Calling for Music Post-Production | 5.5 | 3.5 | [4, 8, 6, 4] | [3, 3, 4, 4] | 4 | ["Music Post Production", "Fx Chain Generation", "Tool Calling"] | This paper introduces LLM2Fx-Tools, a multimodal tool-calling framework that generates executable sequences of audio effects (Fx-chain) for music post-production. LLM2Fx-Tools uses a large language model (LLM) to understand audio inputs, select audio effects types, determine their order, and estimate parameters, guided by chain-of-thought (CoT) planning. We also present LP-Fx, a new instruction-following dataset with structured CoT annotations and tool calls for audio effects modules. Experiments show that LLM2Fx-Tools can infer an Fx-chain and its parameters from pairs of unprocessed and processed audio, enabled by autoregressive sequence modeling, tool calling, and CoT reasoning. We further validate the system in a style transfer setting, where audio effects information is transferred from a reference source and applied to new content. Finally, LLM-as-a-judge evaluation demonstrates that our approach generates appropriate CoT reasoning and responses for music production queries. To our knowledge, this is the first work to apply LLM-based tool calling to audio effects modules, enabling interpretable and controllable music production where users can incorporate their own audio plugins. | LLM2Fx-Tools is a framework that uses a multimodal LLM to automatically generate executable audio effect chains (as tools), chain-of-thought reasoning, and natural language responses. | applications to computer vision, audio, language, and other modalities | https://openreview.net/pdf?id=OyIJvyyB3R | 2025-09-19T13:42:11 | 4 | [{"id": "B7fQjc5nan", "forum": "OyIJvyyB3R", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission16141/Reviewer_Rbd9", "reviewer_name": "Reviewer_Rbd9", "rating": 4, "confidence": 3, "soundness": 3, "contribution": 2, "presentation": 3, "summary": "This ... |
rcsZNV9A5j | https://openreview.net/forum?id=rcsZNV9A5j | Flash Multi-Head Feed-Forward Network | 5 | 3.75 | [6, 4, 4, 6] | [3, 4, 4, 4] | 4 | ["Machine Learning Systems", "Machine Learning", "Software-Hardware Codesign", "Natural Language Processing", "Transformer", "Deep Learning", "Model Architecture"] | We explore Multi-Head FFN (MH-FFN) as a replacement of FFN in the Transformer architecture, motivated by the structural similarity between single-head attention and FFN. While multi-head mechanisms enhance expressivity in attention, naively applying them to FFNs faces two challenges: memory consumption scaling with the head count, and an imbalanced ratio between the growing intermediate size and the fixed head dimension as models scale, which degrades scalability and expressive power. To address these challenges, we propose Flash Multi-Head FFN (FlashMHF), with two key innovations: an I/O-aware fused kernel computing outputs online in SRAM akin to FlashAttention, and a design using dynamically weighted parallel sub-networks to maintain a balanced ratio between intermediate and head dimensions. Validated on models from 128M to 1.3B parameters, FlashMHF consistently improves perplexity and downstream task accuracy over SwiGLU FFNs, while reducing peak memory usage by 3-5x and accelerating inference by up to 1.08x. Our work establishes the multi-head design as a superior architectural principle for FFNs, presenting FlashMHF as a powerful, efficient, and scalable alternative to FFNs in Transformers. | We propose a novel multi-head FFN that achieves better transformer model performance while using 3-5x less memory and running 1.00-1.08x faster than standard SwiGLU FFNs. | unsupervised, self-supervised, semi-supervised, and supervised representation learning | https://openreview.net/pdf?id=rcsZNV9A5j | 2025-09-16T16:13:44 | 4 | [{"id": "TygVX9zSRX", "forum": "rcsZNV9A5j", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission7175/Reviewer_i2pJ", "reviewer_name": "Reviewer_i2pJ", "rating": 6, "confidence": 3, "soundness": 3, "contribution": 3, "presentation": 2, "summary": "This p... |
eS4MAmmCHy | https://openreview.net/forum?id=eS4MAmmCHy | PEL-NAS: Search Space Partitioned Architecture Prompt Co-evolutionary LLM-driven Hardware-Aware Neural Architecture Search | 3.5 | 4 | [4, 4, 4, 2] | [4, 4, 4, 4] | 4 | ["Large Language Model", "Hardware-aware", "Neural Architecture Search"] | Hardware-Aware Neural Architecture Search (HW-NAS) requires joint optimization of accuracy and latency under device constraints. Traditional supernet-based methods require multiple GPU days per dataset. Large Language Model (LLM)-driven approaches avoid training a large supernet and can provide quick feedback, but we observe an exploration bias: the LLM repeatedly proposes neural network designs within limited search space and fails to discover architectures across different latency ranges in the whole search space. To address this issue, we propose PEL-NAS: a search space Partitioned, architecture prompt co-Evolutionary and LLM-driven Neural Architecture Search that can generate neural networks with high accuracy and low latency with reduced search cost. Our proposed PEL-NAS has three key components: 1) a complexity-driven partitioning engine that divides the search space by complexity to enforce diversity and mitigate exploration bias; 2) an LLM-powered architecture prompt co-evolution operator, in which the LLM first updates a knowledge base of design heuristics based on results from the previous round, then performs a guided evolution algorithm on architectures with prompts that incorporate this knowledge base. Prompts and designs improve together across rounds which avoid random guesswork and improve efficiency; 3) a zero-cost predictor to avoid training a large number of candidates from scratch. Experimental results show that on HW-NAS-Bench, PEL-NAS can achieve overall higher HV, lower IGD, and up to 54% lower latency than baselines at similar accuracy. Meanwhile, the search cost drops from days to minutes compared with traditional supernet baselines. | infrastructure, software libraries, hardware, systems, etc. | https://openreview.net/pdf?id=eS4MAmmCHy | 2025-09-18T03:16:21 | 4 | [{"id": "r5WN4tP0vh", "forum": "eS4MAmmCHy", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission9721/Reviewer_ygWA", "reviewer_name": "Reviewer_ygWA", "rating": 4, "confidence": 4, "soundness": 3, "contribution": 2, "presentation": 2, "summary": "This p... | |
MgVNhx5uaa | https://openreview.net/forum?id=MgVNhx5uaa | ATOM-Bench: From Atoms to Conclusions in Objective Evaluation of Large Multimodal Models Reasoning | 3 | 3.75 | [2, 4, 4, 2] | [4, 4, 3, 4] | 4 | ["multimodal Large Language Models", "benchmark", "chain of thought"] | Chain-of-Thought (CoT) reasoning has significantly enhanced the ability of Large Multimodal Models (LMMs) to tackle complex image–text tasks, establishing itself as a cornerstone of multimodal learning. Despite significant progress, the impact of CoT on LMMs still lacks objective evaluation and in-depth research. Current CoT evaluation paradigms rely on powerful LLMs as judges of free-form text, but this introduces bias and hallucination from the evaluator itself. Moreover, it may penalize models for stylistic variations rather than genuine reasoning failures, thereby undermining the fairness and reliability of the assessment. To address this gap, we introduce ATOM-Bench, a CoT evaluation framework built on objective atomic questions. ATOM-Bench decomposes complex reasoning tasks into a series of atomic nodes, covering 570 high-resolution real-world images and 2,920 questions across 4 cognitive dimensions and 12 domains, including architecture, text, transportation, culture, climate, and geology. Our benchmark introduces three novel quantitative metrics to objectively analyze reasoning faithfulness, consistency, and robustness. Extensive experiments with 22 LMMs validate the effectiveness of our framework. The results reveal that even the strongest models often exhibit a mismatch between surface-level correctness of final answers and their underlying evidence comprehension, while also exposing cognitive rigidity when faced with objective facts. We believe that ATOM-Bench, as a more objective and diagnostic tool, will advance LMMs toward more reliable and faithful reasoning. | We introduce ATOM-Bench, a diagnostic benchmark for evaluating Chain-of-Thought reasoning in Large Multimodal Models via objective atomic questions, spanning 2,920 QAs over 570 real-world images, to address challenges of reasoning reliability. | datasets and benchmarks | https://openreview.net/pdf?id=MgVNhx5uaa | 2025-09-18T21:58:39 | 4 | [{"id": "qyea8A8FPG", "forum": "MgVNhx5uaa", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission11801/Reviewer_sAqG", "reviewer_name": "Reviewer_sAqG", "rating": 2, "confidence": 4, "soundness": 2, "contribution": 2, "presentation": 3, "summary": "The p... |
wztR0XcNW9 | https://openreview.net/forum?id=wztR0XcNW9 | TopoCore: Unifying Topology Manifolds and Persistent Homology for Data Pruning | 4 | 3 | [4, 2, 6] | [3, 3, 3] | 3 | ["Coreset Selection", "Topological Data Analysis", "Persistent Homology", "Architectural Transferability", "Data-Efficient Learning", "Manifold Learning", "Pretrained Models"] | Geometric coreset selection methods, while practical for leveraging pretrained models, are fundamentally unstable. Their reliance on extrinsic geometric metrics makes them highly sensitive to variations in feature embeddings, leading to poor performance when transferring across different network architectures or when dealing with noisy features. We introduce TopoCore, a novel framework that resolves this challenge by leveraging the principles of topology to capture the intrinsic, stable structure of data. TopoCore operates in two stages, (1) utilizing a _topology-aware manifold approximation_ to establish a global low-dimensional embedding of the dataset. Subsequently, (2) it employs _differentiable persistent homology_ to perform a local topological optimization on the manifold embeddings, scoring samples based on their structural complexity. We show that at high pruning rates (e.g., 90\%), our _dual-scale topological approach_ yields a coreset selection method that boosts accuracy with up to 4$\times$ better precision than existing methods. Furthermore, through the inherent stability properties of topology, TopoCore is (a) exceptionally robust to noise perturbations of the feature embeddings and (b) demonstrates superior architecture transferability, improving both accuracy and stability across diverse network architectures. This study demonstrates a promising avenue towards stable and principled topology-based frameworks for robust data-efficient learning. | learning on graphs and other geometries & topologies | https://openreview.net/pdf?id=wztR0XcNW9 | 2025-09-18T02:54:05 | 3 | [{"id": "p1cclI53pH", "forum": "wztR0XcNW9", "review_number": 3, "reviewer_id": "ICLR.cc/2026/Conference/Submission9698/Reviewer_Sq9q", "reviewer_name": "Reviewer_Sq9q", "rating": 4, "confidence": 3, "soundness": 2, "contribution": 2, "presentation": 3, "summary": "The pa... | |
WnRzN4U8Y8 | https://openreview.net/forum?id=WnRzN4U8Y8 | WIMFRIS: WIndow Mamba Fusion and Parameter Efficient Tuning for Referring Image Segmentation | 5 | 4.5 | [4, 6, 4, 6] | [5, 5, 3, 5] | 4 | ["Referring image segmentation", "parameter efficient tuning", "computer vision"] | Existing Parameter-Efficient Tuning (PET) methods for Referring Image Segmentation (RIS) primarily focus on layer-wise feature alignment, often neglecting the crucial role of a neck module for the intermediate fusion of aggregated multi-scale features, which creates a significant performance bottleneck. To address this limitation, we introduce WIMFRIS, a novel framework that establishes a powerful neck architecture alongside a simple yet effective PET strategy. At its core is our proposed HMF block, which first aggregates multi-scale features and then employs a novel WMF module to perform effective intermediate fusion. This WMF module leverages non-overlapping window partitioning to mitigate the information decay problem inherent in SSMs while ensuring rich local-global context interaction. Furthermore, our PET strategy enhances primary alignment with an MTA for robust textual priors, an MSA for precise vision-language fusion, and learnable emphasis parameters for adaptive stage-wise feature weighting. Extensive experiments demonstrate that WIMFRIS achieves new state-of-the-art performance across all public RIS benchmarks. | This paper introduces WIMFRIS, a framework that achieves state-of-the-art in referring image segmentation by proposing a novel HMF neck module to efficiently fuse text with visual features, overcoming a key performance bottleneck in prior methods. | applications to computer vision, audio, language, and other modalities | https://openreview.net/pdf?id=WnRzN4U8Y8 | 2025-09-20T14:00:25 | 4 | [{"id": "l3NeqmvthW", "forum": "WnRzN4U8Y8", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission23757/Reviewer_N61Y", "reviewer_name": "Reviewer_N61Y", "rating": 4, "confidence": 5, "soundness": 2, "contribution": 2, "presentation": 2, "summary": "The p... |
zDI2G8t0of | https://openreview.net/forum?id=zDI2G8t0of | A Statistical Benchmark for Diffusion Posterior Sampling Algorithms | 5.5 | 4 | [4, 8, 4, 6] | [4, 5, 3, 4] | 4 | ["Diffusion models", "Bayesian inverse problems", "statistical evaluation", "Gibbs sampling"] | We propose a statistical benchmark for diffusion posterior sampling (DPS) algorithms in linear inverse problems. Our test signals are discretized Lévy processes whose posteriors admit efficient Gibbs methods. These Gibbs methods provide gold-standard posterior samples for direct, distribution-level comparisons with DPS algorithms. They also serve as oracle denoisers in the reverse diffusion, which enables the isolation of the error that arises from the approximations to the likelihood score. We instantiate the benchmark with the minimum-mean-squared-error optimality gap and posterior coverage tests and evaluate popular algorithms on the inverse problems of denoising, deconvolution, imputation, and reconstruction from partial Fourier measurements. We release the benchmark code at https://github.com/emblem-saying/dps-benchmark. The repository exposes simple plug-in interfaces, reference scripts, and config-driven runs so that new algorithms can be added and evaluated with minimal effort. We invite the community to contribute and report results. | We built an evaluation pipeline for diffusion posterior sampling algorithms for Bayesian linear inverse problems that relies on constructing problems with known posteriors that we can efficiently sample from. | datasets and benchmarks | https://openreview.net/pdf?id=zDI2G8t0of | 2025-09-19T00:36:58 | 4 | [{"id": "qh8Nh3DeU4", "forum": "zDI2G8t0of", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission13084/Reviewer_tkeZ", "reviewer_name": "Reviewer_tkeZ", "rating": 4, "confidence": 4, "soundness": 2, "contribution": 3, "presentation": 3, "summary": "- The... |
Bq5lSYZl4L | https://openreview.net/forum?id=Bq5lSYZl4L | Conversational Orientation Reasoning: Egocentric-to-Allocentric Navigation with Multimodal Chain-of-Thought | 2 | 2.666667 | [2, 2, 2] | [4, 3, 1] | 3 | ["conversational AI", "multimodal reasoning", "chain-of-thought", "spatial reasoning", "egocentric navigation"] | Conversational agents must translate egocentric utterances (e.g., “on my right”) into allocentric orientations (N/E/S/W). This challenge is particularly critical in indoor or complex facilities where GPS signals are weak and detailed maps are unavailable. While chain-of-thought (CoT) prompting has advanced reasoning in language and vision tasks, its application to multimodal spatial orientation remains underexplored. We introduce Conversational Orientation Reasoning (COR), a new benchmark designed for Traditional Chinese conversational navigation projected from real-world environments, addressing egocentric-to-allocentric reasoning in non-English and ASR-transcribed scenarios. We propose a multimodal chain-of-thought (MCoT) framework, which integrates ASR-transcribed speech with landmark coordinates through a structured three-step reasoning process: (1) extracting spatial relations, (2) mapping coordinates to absolute directions, and (3) inferring user orientation. A curriculum learning strategy progressively builds these capabilities on Taiwan-LLM-13B-v2.0-Chat, a mid-sized model representative of resource-constrained settings. Experiments show that MCoT achieves 100% orientation accuracy on clean transcripts and 98.1% with ASR transcripts, substantially outperforming unimodal and non-structured baselines. Moreover, MCoT demonstrates robustness under noisy conversational conditions, including ASR recognition errors and multilingual code-switching. The model also maintains high accuracy in cross-domain evaluation and resilience to linguistic variation, domain shift, and referential ambiguity. These findings highlight the potential of structured MCoT spatial reasoning as a path toward interpretable and resource-efficient embodied navigation. | We introduce the Conversational Orientation Reasoning (COR) benchmark and propose a multimodal chain-of-thought framework for egocentric-to-allocentric orientation reasoning. | applications to computer vision, audio, language, and other modalities | https://openreview.net/pdf?id=Bq5lSYZl4L | 2025-09-18T16:58:03 | 3 | [{"id": "pd0onPVjy7", "forum": "Bq5lSYZl4L", "review_number": 3, "reviewer_id": "ICLR.cc/2026/Conference/Submission10972/Reviewer_dPzU", "reviewer_name": "Reviewer_dPzU", "rating": 2, "confidence": 4, "soundness": 2, "contribution": 2, "presentation": 1, "summary": "This ... |
Fz0KFsZE6C | https://openreview.net/forum?id=Fz0KFsZE6C | OpenSIR: Open-Ended Self-Improving Reasoner | 4 | 3.75 | [
4,
4,
4,
4
] | [
3,
4,
4,
4
] | 4 | [
"large language model",
"math reasoning",
"self-play",
"reinforcement learning"
] | Recent advances in large language model (LLM) reasoning through reinforcement learning rely on annotated datasets for verifiable rewards, potentially limiting models' ability to exceed human-level performance. While self-play offers a promising alternative, existing approaches depend on external verifiers or cannot learn open-endedly. We present Open-Ended Self-Improving Reasoner (OpenSIR), a self-play framework where an LLM learns to generate and solve novel problems by alternating teacher and student roles without external supervision. To generate novel problems, OpenSIR optimises for both difficulty and diversity, rewarding problems that challenge appropriately while exploring distinct concepts, enabling open-ended mathematical discovery. Starting from a single trivial seed problem, OpenSIR substantially improves instruction models: Llama-3.2-3B-Instruct advances from 73.9 to 78.3 on GSM8K, and from 28.8 to 34.4 on College Math, while Gemma-2-2B-Instruct rises from 38.5 to 58.7 on GSM8K. Our analyses reveal that OpenSIR achieves open-ended learning through co-evolving teacher-student roles that adaptively calibrate difficulty and drive diverse exploration, progressing autonomously from basic to advanced mathematics. | foundation or frontier models, including LLMs | https://openreview.net/pdf?id=Fz0KFsZE6C | 2025-09-19T23:25:06 | 4 | [
{
"id": "k8yimgxXcV",
"forum": "Fz0KFsZE6C",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission19344/Reviewer_LBsx",
"reviewer_name": "Reviewer_LBsx",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This ... | |
QpqBqCTtW4 | https://openreview.net/forum?id=QpqBqCTtW4 | Unifying Stable Optimization and Reference Regularization in RLHF | 5 | 2.75 | [
6,
4,
4,
6
] | [
4,
2,
3,
2
] | 4 | [
"RLHF",
"LLM",
"Alignment"
] | Reinforcement Learning from Human Feedback (RLHF) has advanced alignment capabilities significantly but remains hindered by two core challenges: reward hacking and stable optimization. Current solutions independently address these issues through separate regularization strategies, specifically a KL-divergence penalty against a supervised fine-tuned model ($\pi_0$) to mitigate reward hacking, and policy ratio clipping towards the current policy ($\pi_t$) to promote stable alignment. However, the implicit trade-off arising from simultaneously regularizing towards both $\pi_0$ and $\pi_t$ remains under-explored. In this paper, we introduce a unified regularization approach that explicitly balances the objectives of preventing reward hacking and maintaining stable policy updates. Our simple yet principled alignment objective yields a weighted supervised fine-tuning loss with a superior trade-off, which demonstrably improves both alignment results and implementation complexity. Extensive experiments across diverse benchmarks validate that our method consistently outperforms RLHF and online preference learning methods, achieving enhanced alignment performance and stability. | alignment, fairness, safety, privacy, and societal considerations | https://openreview.net/pdf?id=QpqBqCTtW4 | 2025-09-03T09:45:48 | 4 | [
{
"id": "yqzBpTdgiX",
"forum": "QpqBqCTtW4",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission1200/Reviewer_7Noe",
"reviewer_name": "Reviewer_7Noe",
"rating": 6,
"confidence": 4,
"soundness": 2,
"contribution": 3,
"presentation": 3,
"summary": "The pa... | |
kWl13kRJTQ | https://openreview.net/forum?id=kWl13kRJTQ | AC-Sampler: Accelerate and Correct Diffusion Sampling with Metropolis-Hastings Algorithm | 4.666667 | 3.666667 | [
4,
6,
4
] | [
3,
4,
4
] | 3 | [
"Diffusion model",
"Metropolis-Hastings Algorithm",
"Langevin Dynamics"
] | Diffusion-based generative models have recently achieved state-of-the-art performance in high-fidelity image synthesis. These models learn a sequence of denoising transition kernels that gradually transform a simple prior distribution into a complex data distribution. However, requiring many transitions not only slows down sampling but also accumulates approximation errors.
We introduce the Accelerator-Corrector Sampler (AC-Sampler), which accelerates and corrects diffusion sampling without fine-tuning. It generates samples directly from intermediate timesteps using the Metropolis–Hastings (MH) algorithm while correcting them to target the true data distribution. We derive a tractable density ratio for arbitrary timesteps with a discriminator, enabling computation of MH acceptance probabilities. Theoretically, our method yields samples better aligned with the true data distribution than the original model distribution. Empirically, AC-Sampler achieves FID 2.38 with only 15.8 NFEs, compared to the base sampler’s FID 3.23 with 17 NFEs on unconditional CIFAR-10. On CelebA-HQ 256×256, it attains FID 6.6 with 98.3 NFEs. AC-Sampler can be combined with existing acceleration and correction techniques, demonstrating its flexibility and broad applicability. | Accelerate and Correct Diffusion Sampling with Metropolis-Hastings Algorithm | generative models | https://openreview.net/pdf?id=kWl13kRJTQ | 2025-09-19T09:41:43 | 3 | [
{
"id": "WL3lfb3jFc",
"forum": "kWl13kRJTQ",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission14955/Reviewer_MQvF",
"reviewer_name": "Reviewer_MQvF",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 2,
"presentation": 2,
"summary": "Dear ... |
nRl7D1D3qf | https://openreview.net/forum?id=nRl7D1D3qf | Spatial Sign based Direct Sparse Linear Discriminant Analysis for High Dimensional Data | 3.333333 | 3.666667 | [
2,
4,
4
] | [
4,
3,
4
] | 3 | [
"High dimensional data",
"Linear discriminant analysis",
"Spatial-sign"
] | Robust high-dimensional classification under heavy-tailed distributions without losing efficiency is a central challenge in modern statistics and machine learning. However, most existing linear discriminant analysis (LDA) methods are sensitive to deviations from normality and may suffer from suboptimal performance in heavy-tailed settings. This paper investigates the robust LDA problem with elliptical distributions in high-dimensional data. Our approach constructs stable discriminant directions by leveraging a robust spatial sign-based mean and covariance estimator, which allows accurate estimation even under extreme distributions. We demonstrate that SSLDA achieves an optimal convergence rate in terms of both misclassification rate and estimation error. Our theoretical results are further confirmed by extensive numerical experiments on both simulated and real datasets. Compared with state-of-the-art approaches, the SSLDA method offers superior finite-sample performance and notable robustness against heavy-tailed distributions. | unsupervised, self-supervised, semi-supervised, and supervised representation learning | https://openreview.net/pdf?id=nRl7D1D3qf | 2025-09-18T22:55:01 | 3 | [
{
"id": "iGiMRo6ObX",
"forum": "nRl7D1D3qf",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12368/Reviewer_pWtR",
"reviewer_name": "Reviewer_pWtR",
"rating": 2,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 2,
"summary": "This ... | |
JxmjzC6syB | https://openreview.net/forum?id=JxmjzC6syB | Benchmarking Stochastic Approximation Algorithms for Fairness-Constrained Training of Deep Neural Networks | 3.5 | 3.75 | [
4,
2,
6,
2
] | [
4,
4,
4,
3
] | 4 | [
"Fair Machine Learning",
"stochastic approximation",
"Augmented Lagrangian",
"Sequential Quadratic Programming",
"benchmarking"
] | The ability to train Deep Neural Networks (DNNs) with constraints is instrumental in improving the fairness of modern machine-learning models. Many algorithms have been analysed in recent years, and yet there is no standard, widely accepted method for the constrained training of DNNs. In this paper, we provide a challenging benchmark of real-world large-scale fairness-constrained learning tasks, built on top of the US Census (Folktables, Ding et al., 2021). We point out the theoretical challenges of such tasks and review the main approaches in stochastic approximation algorithms. Finally, we demonstrate the use of the benchmark by implementing and comparing three recently proposed, but as-of-yet unimplemented, algorithms in terms of both optimization performance and fairness improvement. We will release the code of the benchmark as a Python package after peer review. | We provide a benchmark for comparing stochastic approximation algorithms, based on real-world fairness-constrained learning problems. | datasets and benchmarks | https://openreview.net/pdf?id=JxmjzC6syB | 2025-09-20T18:06:49 | 4 | [
{
"id": "FJkAp0M492",
"forum": "JxmjzC6syB",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission24989/Reviewer_1Hbo",
"reviewer_name": "Reviewer_1Hbo",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The p... |
kXhPkDaFbJ | https://openreview.net/forum?id=kXhPkDaFbJ | ProtoKV: Long-context Knowledges Are Already Well-Organized Before Your Query | 5 | 3 | [
4,
4,
6,
6
] | [
3,
2,
3,
4
] | 4 | [
"Large Language Model",
"KV Cache"
] | Modern Large Language Models (LLMs) face fundamental challenges in processing long text sequences due to the quadratic complexity of attention mechanisms. Key-Value (KV) cache retention strategies mitigate this issue by selectively preserving salient KV pairs for autoregressive generation. However, existing methods fail to adequately and efficiently preserve the semantic integrity of the compressed representations. In this paper, we discover a prevalent phenomenon in LLMs: within the key embedding space, while most tokens exhibit similarity with their contextual neighbors (we term these position-determined tokens), a small subset of special tokens serving as semantic anchors consistently shows a local deviation property and forms one or several clusters (we term these semantic-anchored tokens). Motivated by this observation, we propose ProtoKV, which separately processes these two token categories for KV cache compression. Within this framework, we first construct semantic prototypes based on the inherent characteristics of the two token categories, and subsequently form clusters of semantically similar tokens as basic compression units. This approach preserves semantic integrity with high computational efficiency. Experiments on LongBench demonstrate that ProtoKV achieves 2.11% higher accuracy than state-of-the-art methods under matched memory constraints. | We discovered a new paradigm for key distribution in LLMs and used it to guide the KV cache compression strategy. | foundation or frontier models, including LLMs | https://openreview.net/pdf?id=kXhPkDaFbJ | 2025-09-14T16:29:35 | 4 | [
{
"id": "cq17GdgvB8",
"forum": "kXhPkDaFbJ",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission5042/Reviewer_mS4R",
"reviewer_name": "Reviewer_mS4R",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "The au... |
qAfbeMal0m | https://openreview.net/forum?id=qAfbeMal0m | TimeExpert: Boosting Long Time Series Forecasting with Temporal Mix of Experts | 2.5 | 3.75 | [
2,
4,
2,
2
] | [
3,
4,
4,
4
] | 4 | [
"Time-Series",
"Mix of Experts",
"Lag Effects"
] | Transformer-based architectures dominate time series modeling by enabling global attention over all timestamps, yet their rigid “one-size-fits-all” context aggregation fails to address two critical challenges in real-world data: (1) inherent lag effects, where the relevance of historical timestamps to a query varies dynamically; (2) anomalous segments, which introduce noisy signals that degrade forecasting accuracy.
To resolve these problems, we propose the Temporal Mix of Experts (TMOE)—a novel attention-level mechanism that reimagines key-value (K-V) pairs as local experts (each specialized in a distinct temporal context) and performs adaptive expert selection for each query via localized filtering of irrelevant timestamps. Complementing this local adaptation, a shared global expert preserves the Transformer’s strength in capturing long-range dependencies. We then replace the vanilla attention mechanism in popular time-series Transformer frameworks (i.e., PatchTST and Timer) with TMOE, without extra structural modifications, yielding our specific version TimeExpert and general version TimeExpert-G.
Extensive experiments on seven real-world long-term forecasting benchmarks demonstrate that TimeExpert and TimeExpert-G outperform state-of-the-art methods. Code will be released after acceptance. | other topics in machine learning (i.e., none of the above) | https://openreview.net/pdf?id=qAfbeMal0m | 2025-09-01T22:36:11 | 4 | [
{
"id": "mayMhGKz9x",
"forum": "qAfbeMal0m",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission382/Reviewer_uwnJ",
"reviewer_name": "Reviewer_uwnJ",
"rating": 2,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "The aut... | |
TvpaeQVTGQ | https://openreview.net/forum?id=TvpaeQVTGQ | A Fast, Reliable, and Secure Programming Language for LLM Agents with Code Actions | 5.5 | 2.25 | [
6,
6,
4,
6
] | [
2,
2,
4,
1
] | 4 | [
"llm",
"agent",
"code actions",
"code generation"
] | Modern large language models (LLMs) are often deployed as agents, calling external tools adaptively to solve tasks. Rather than directly calling tools, it can be more effective for LLMs to write code to perform the tool calls, enabling them to automatically generate complex control flow such as conditionals and loops. Such code actions are typically provided as Python code, since LLMs are quite proficient at it; however, Python may not be the ideal language due to limited built-in support for performance, security, and reliability. We propose a novel programming language for code actions, called QUASAR, which has several benefits: (1) automated parallelization to improve performance, (2) uncertainty quantification to improve reliability and mitigate hallucinations, and (3) security features enabling the user to validate actions. LLMs can write code in a subset of Python, which is automatically transpiled to QUASAR. We evaluate our approach on the ViperGPT and CaMeL agents, applied to the GQA visual question answering and AgentDojo AI assistant datasets, demonstrating that LLMs with QUASAR actions instead of Python actions retain strong performance, while reducing execution time by up to 56%, improving security by reducing user approvals by up to 53%, and improving reliability by applying conformal prediction to achieve a desired target coverage level. | We propose a new language for LLM agents to use for actions, and we show its benefits over Python in terms of performance, reliability, and security. | foundation or frontier models, including LLMs | https://openreview.net/pdf?id=TvpaeQVTGQ | 2025-09-19T04:23:36 | 4 | [
{
"id": "uwDUZ3rzdg",
"forum": "TvpaeQVTGQ",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission14018/Reviewer_FyqM",
"reviewer_name": "Reviewer_FyqM",
"rating": 6,
"confidence": 2,
"soundness": 4,
"contribution": 3,
"presentation": 3,
"summary": "This ... |
K0idbmzcgc | https://openreview.net/forum?id=K0idbmzcgc | OS-W2S: An Automatic Labeling Engine for Language-Guided Open-Set Aerial Object Detection | 4.8 | 3 | [
8,
2,
4,
6,
4
] | [
3,
3,
2,
3,
4
] | 5 | [
"Open-Set Aerial Object Detection",
"Automatic Label Engine",
"Multi-instance Open-set Aerial Dataset"
] | In recent years, language-guided open-set aerial object detection has gained significant attention due to its better alignment with real-world application needs. However, due to limited datasets, most existing language-guided methods primarily focus on vocabulary-level descriptions, which fail to meet the demands of fine-grained open-world detection. To address this limitation, we propose constructing a large-scale language-guided open-set aerial detection dataset, encompassing three levels of language guidance: from words to phrases, and ultimately to sentences. Centered around an open-source large vision-language model and integrating image-operation-based preprocessing with BERT-based postprocessing, we present the $\textbf{OS-W2S Label Engine}$, an automatic annotation pipeline capable of handling diverse scene annotations for aerial images. Using this label engine, we expand existing aerial detection datasets with rich textual annotations and construct a novel benchmark dataset, called Multi-instance Open-set Aerial Dataset ($\textbf{MI-OAD}$), addressing the limitations of current remote sensing grounding data and enabling effective language-guided open-set aerial detection. Specifically, MI-OAD contains 163,023 images and 2 million image-caption pairs, with multiple instances per caption, approximately 40 times larger than comparable datasets.
To demonstrate the effectiveness and quality of MI-OAD, we evaluate three representative tasks: language-guided open-set aerial detection, open-vocabulary aerial detection (OVAD), and remote sensing visual grounding (RSVG). On language-guided open-set aerial detection, training on MI-OAD lifts Grounding DINO by +31.1 AP$_{50}$ and +34.7 Recall@10 with sentence-level inputs under zero-shot transfer. Moreover, using MI-OAD for pre-training yields state-of-the-art performance on multiple existing OVAD and RSVG benchmarks, validating both the effectiveness of the dataset and the high quality of its OS-W2S annotations. More details are available at \url{https://anonymous.4open.science/r/MI-OAD}. | datasets and benchmarks | https://openreview.net/pdf?id=K0idbmzcgc | 2025-09-19T08:13:11 | 5 | [
{
"id": "jv63TR5pJc",
"forum": "K0idbmzcgc",
"review_number": 5,
"reviewer_id": "ICLR.cc/2026/Conference/Submission14647/Reviewer_6Dr8",
"reviewer_name": "Reviewer_6Dr8",
"rating": 8,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The p... | |
KUn4IBIZC7 | https://openreview.net/forum?id=KUn4IBIZC7 | MotifGrIm: Motif-Based Multi-Granularity Graph-Image Pretraining for Molecular Representation Learning | 2.5 | 4.5 | [
2,
2,
2,
4
] | [
5,
4,
5,
4
] | 4 | [
"Multi-Modal Contrastive Learning",
"Molecular Representation Learning",
"Graph Neural Network"
] | Molecular representation learning is widely considered a crucial task in computer-aided molecular applications and design. Recently, many studies have explored pretraining models on unlabeled data to learn molecular structures and enhance the performance of downstream tasks. However, existing methods mainly focus on graph domains, with limited attention to other modalities, such as images. In addition, most existing methods focus on the atomic or molecular level, which leads to the neglect of high-order connection information or local structure information. In this work, we propose a motif-based multi-granularity graph-image pretraining framework, MotifGrIm, for molecular representation learning. In this framework, we incorporate motifs into the image domain for the first time, by generating distinct background features for different motifs in molecular images, offering a novel approach to enhancing molecular representation. Through contrastive learning within and across modules, we effectively tackle two key challenges in molecular motif pretraining with graph neural networks: (1) the over-smoothing problem, which restricts GNNs to shallow layers and hinders global molecular information capture, and (2) the aggregation of motif nodes, which leads to the loss of connectivity information between motifs. Additionally, to more effectively capture information across different molecular granularities, we propose a multi-granularity prediction pretraining strategy to optimize the model. For downstream tasks, we use only the graph encoders for prediction, reducing both time and memory consumption. We evaluate MotifGrIm on molecular property prediction and long-range benchmarks. Across eight commonly used molecular property prediction datasets, MotifGrIm outperforms state-of-the-art models with an average ROC-AUC improvement of 1.16% and achieves the best results on five of them. On long-range datasets, MotifGrIm improves the performance by at least 14.8%.
| unsupervised, self-supervised, semi-supervised, and supervised representation learning | https://openreview.net/pdf?id=KUn4IBIZC7 | 2025-09-19T13:26:21 | 4 | [
{
"id": "dpFgmibU9f",
"forum": "KUn4IBIZC7",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission16081/Reviewer_dFht",
"reviewer_name": "Reviewer_dFht",
"rating": 2,
"confidence": 5,
"soundness": 2,
"contribution": 1,
"presentation": 2,
"summary": "This ... | |
Bp2VlfYAMc | https://openreview.net/forum?id=Bp2VlfYAMc | TIPS: A Text-Image Pairs Synthesis Framework for Robust Text-based Person Retrieval | 5 | 4 | [
2,
6,
4,
8
] | [
4,
4,
4,
4
] | 4 | [
"Text-based Person Retrieval",
"Text-Image Pairs Synthesis",
"Diffusion Model",
"Identity Preservation",
"Test-Time Augmentation"
] | Text-based Person Retrieval (TPR) faces critical challenges in practical applications, including zero-shot adaptation, few-shot adaptation, and robustness issues. To address these challenges, we propose a Text-Image Pairs Synthesis (TIPS) framework, which is capable of generating high-fidelity and diverse pedestrian text-image pairs in various real-world scenarios. Firstly, two efficient diffusion-model fine-tuning strategies are proposed to develop a Seed Person Image Generator (SPG) and an Identity Preservation Generator (IDPG), thus generating person image sets that preserve the same identity. Secondly, a general TIPS approach utilizing LLM-driven text prompt synthesis is constructed to produce person images in conjunction with SPG and IDPG. Meanwhile, a Multi-modal Large Language Model (MLLM) is employed to filter images to ensure data quality and generate diverse captions. Furthermore, a Test-Time Augmentation (TTA) strategy is introduced, which combines textual and visual features via dual-encoder inference to consistently improve performance without architectural modifications. Extensive experiments conducted on TPR datasets demonstrate consistent performance improvements of three representative TPR methods across zero-shot, few-shot, and generalization settings. | applications to computer vision, audio, language, and other modalities | https://openreview.net/pdf?id=Bp2VlfYAMc | 2025-09-20T01:35:00 | 4 | [
{
"id": "ZigShWA1Ae",
"forum": "Bp2VlfYAMc",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission20172/Reviewer_Ao47",
"reviewer_name": "Reviewer_Ao47",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 1,
"summary": "This ... | |
JPLRtQINNy | https://openreview.net/forum?id=JPLRtQINNy | Domain Bridging: Enabling Adaptation without Peeking at Target Data | 3.333333 | 4 | [
4,
2,
4
] | [
4,
5,
3
] | 3 | [
"domain bridging",
"evaluation-based adaptation",
"zeroth-order optimization",
"proprietary target data"
] | Adapting models to target domains with proprietary data remains a challenging problem. One possible setup to enable adaptation is to allow target domain owners to privately evaluate candidate models on their own data. For example, model providers consider how to adjust models to better fit the unseen target data, relying solely on returned model performance. Existing methods adopt Zeroth-Order (ZO) optimization to refine model parameters or employ a two-stage learning process that first identifies the target-related samples in the source data and then retrains the model. However, we find that these methods struggle to generalize well for the target tasks during inference, primarily because of the failure to account for data-statistical shifts between source and target domains. To address this limitation, we introduce the concept of domain bridging in the context of model adaptation for proprietary target data. The core idea is to bridge the domain gap by learning target-aligned perturbations on source data, enabling the fine-tuned model to achieve better performance on target domains. A natural attempt is to extend ZO optimization to this setting. However, this approach fails to produce reliable perturbations on real datasets. To address this, we design a target-aligned, sample-wise perturbation learner, enabling reliable adaptation from performance-only feedback. We provide theoretical convergence guarantees and demonstrate through experiments on five datasets across image and text modalities that our domain bridging method achieves state-of-the-art performance, improving accuracy by approximately 4\%. | Domain Bridging introduces an efficient framework that learns source data perturbations to bridge domain gaps, enabling effective model fine-tuning for target domains without requiring direct access to proprietary target data. 
| unsupervised, self-supervised, semi-supervised, and supervised representation learning | https://openreview.net/pdf?id=JPLRtQINNy | 2025-09-17T18:39:58 | 3 | [
{
"id": "tSIq7SSmfc",
"forum": "JPLRtQINNy",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8980/Reviewer_RAvJ",
"reviewer_name": "Reviewer_RAvJ",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "The pa... |
jAYHFBdQ0M | https://openreview.net/forum?id=jAYHFBdQ0M | Johnson-Lindenstrauss Transforms in Distributed Optimization | 3.5 | 3 | [
2,
4,
4,
4
] | [
4,
4,
2,
2
] | 4 | [
"optimization",
"distributed optimization",
"communication compresson"
] | Increasing volumes of data and models in machine learning demand efficient methods. Distributed optimization addresses these challenges, for instance, by utilizing compression mechanisms that reduce the number of bits transmitted. One well-known technique that diminishes the dimension of the data is the Johnson-Lindenstrauss (JL) mapping, which benefits from ease of implementation. Unlike usual sparsification techniques, JL mappings preserve the scalar products and distances between vectors, which is beneficial for advanced machine learning problems such as Byzantine-robust learning, personalized and vertical federated learning. In this paper, we close the gap by connecting JL transforms with optimization algorithms and demonstrating that communication messages can be compressed with them. We also validate our theoretical results with experiments. | optimization | https://openreview.net/pdf?id=jAYHFBdQ0M | 2025-09-17T16:49:35 | 4 | [
{
"id": "wYRuKyGPEl",
"forum": "jAYHFBdQ0M",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8812/Reviewer_ecJS",
"reviewer_name": "Reviewer_ecJS",
"rating": 2,
"confidence": 4,
"soundness": 1,
"contribution": 1,
"presentation": 2,
"summary": "The pa... | |
bjNKvuBMqJ | https://openreview.net/forum?id=bjNKvuBMqJ | Solving robust MDPs as a sequence of static RL problems | 3.5 | 3.25 | [
4,
2,
4,
4
] | [
3,
4,
3,
3
] | 4 | [
"Robust reinforcement learning"
] | Designing control policies whose performance level is guaranteed to remain above a given threshold in a span of environments is a critical feature for the adoption of reinforcement learning (RL) in real-world applications. The search for such robust policies is a notoriously difficult problem, related to the so-called dynamic model of transition function uncertainty, where the environment dynamics are allowed to change at each time step. But in practical cases, one is rather interested in robustness to a span of static transition models throughout interaction episodes. The static model is known to be harder to solve than the dynamic one, and seminal algorithms, such as robust value iteration, as well as most recent works on deep robust RL, build upon the dynamic model. In this work, we propose to revisit the static model. We suggest an analysis of why solving the static model under some mild hypotheses is a reasonable endeavor, based on an equivalence with the dynamic model, and formalize the general intuition that robust MDPs can be solved by tackling a series of static problems. We introduce a generic meta-algorithm called IWOCS, which incrementally identifies worst-case transition models so as to guide the search for a robust policy. Discussion on IWOCS sheds light on new ways to decouple policy optimization and adversarial transition functions and opens new perspectives for analysis. We derive a deep RL version of IWOCS and demonstrate it is competitive with state-of-the-art algorithms on classical benchmarks. | We propose IWOCS, a method for robust MDPs that finds worst-case transitions, separates policy optimization from adversarial dynamics, and matches state-of-the-art deep RL performance. | reinforcement learning | https://openreview.net/pdf?id=bjNKvuBMqJ | 2025-09-19T15:48:19 | 4 | [
{
"id": "fLHf4O2p5e",
"forum": "bjNKvuBMqJ",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission16724/Reviewer_E1gk",
"reviewer_name": "Reviewer_E1gk",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This ... |
2Aj7sA2vbb | https://openreview.net/forum?id=2Aj7sA2vbb | MADGen:Minority Attribute Discovery in Text-to-Image Generative Models | 4 | 3.666667 | [
6,
4,
2
] | [
3,
4,
4
] | 3 | [
"Bias identification",
"Bias mitigation",
"Fairness",
"Diffusion models"
] | Text-to-image diffusion models achieve impressive generation quality but also inherit and amplify biases from training data, resulting in biased coverage of semantic attributes. Prior work addresses this in two ways. Closed-set approaches mitigate biases in predefined fairness categories (e.g., gender, race), assuming socially salient minority attributes are known a priori. Open-set approaches frame the task as bias identification, highlighting majority attributes that dominate outputs. Both overlook a complementary task: uncovering minority features underrepresented in the data distribution (social, cultural, or stylistic) yet still encoded in model representations. We introduce MADGen, the first framework, to our knowledge, for discovering minority attributes in diffusion models. Our method leverages Matryoshka Sparse Autoencoders and introduces a minority metric that integrates neuron activation frequency with semantic distinctiveness, enabling the unsupervised identification of rare attributes. Specifically, MADGen identifies a set of neurons whose behavior can be directly interpreted through their top-activating images, which correspond to underrepresented semantic attributes in the model. Quantitative and qualitative experiments demonstrate that MADGen uncovers attributes beyond fairness categories, supports systematic auditing of architectures such as Stable Diffusion 1.5, 2, and XL, and enables amplification of minority attributes during generation. | A framework to identify minority or underrepresented attributes in the intermediate representations of diffusion models. | alignment, fairness, safety, privacy, and societal considerations | https://openreview.net/pdf?id=2Aj7sA2vbb | 2025-09-14T17:38:00 | 4 | [
{
"id": "7NFMIySpiC",
"forum": "2Aj7sA2vbb",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission5074/Reviewer_Z4F1",
"reviewer_name": "Reviewer_Z4F1",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 4,
"summary": "This w... |
zq40cmz1JD | https://openreview.net/forum?id=zq40cmz1JD | When Speculation Spills Secrets: Side Channels via Speculative Decoding in LLMs | 5 | 3.5 | [
6,
6,
4,
4
] | [
4,
4,
3,
3
] | 4 | [
"Large Language Models",
"Speculative Decoding",
"Side Channel Attack",
"Privacy"
] | Deployed large language models (LLMs) often rely on speculative decoding, a technique that generates and verifies multiple candidate tokens in parallel, to improve throughput and latency. In this work, we reveal a new side-channel whereby input-dependent patterns of correct and incorrect speculations can be inferred by monitoring per-iteration token counts or packet sizes. We demonstrate that an adversary observing these patterns can fingerprint user queries with >90% accuracy across four speculative-decoding schemes, REST (∼100%), LADE (up to 92%), BiLD (up to 95%), and EAGLE (up to 77.6%) and leak confidential datastore contents used for prediction at rates exceeding 25 tokens/sec. We evaluate the side-channel attacks in both research prototypes as well as the production-grade vLLM serving framework. To defend against these, we propose and evaluate a suite of mitigations, including packet padding and iteration-wise token aggregation. | We develop a side channel attack leaking private user inputs by exploiting speculative decoding optimizations in LLM inference. | infrastructure, software libraries, hardware, systems, etc. | https://openreview.net/pdf?id=zq40cmz1JD | 2025-09-19T04:10:50 | 4 | [
{
"id": "YAO1FTCAHu",
"forum": "zq40cmz1JD",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission13973/Reviewer_mQbv",
"reviewer_name": "Reviewer_mQbv",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 2,
"summary": "This ... |
if1Ndb6RWD | https://openreview.net/forum?id=if1Ndb6RWD | Information-based Value Iteration Networks for Decision Making Under Uncertainty | 3.5 | 3 | [
2,
6,
2,
4
] | [
4,
4,
2,
2
] | 4 | [
"Reinforcement Learning",
"value iteration networks",
"planning under uncertainty"
] | Deep neural networks that incorporate classic reinforcement learning methods, such as value iteration, into their structure significantly outperform randomly structured networks in learning and generalization. These networks, however, are mostly limited to environments with no or very low amounts of uncertainty. In this paper, we propose a new planning module architecture, the VI$^2$N (Value Iteration with Value of Information Network), that learns to act in novel environments with a high amount of perceptual ambiguity. This architecture over-emphasizes reducing uncertainty before exploiting the reward. VI$^2$N can also utilize factorization in environments with mixed observability to decrease the computational complexity of calculating the policy and facilitate learning. Tested on a diverse set of domains, each containing various types of environments, our network outperforms other deep architectures. Moreover, VI$^2$N generates interpretable cognitive maps highlighting both rewarding and informative locations. These maps highlight the key states the agent must visit to achieve its goal. | We proposed a novel deep architecture for decision making under uncertainty based on planning for reward maximization and information gathering. | reinforcement learning | https://openreview.net/pdf?id=if1Ndb6RWD | 2025-09-19T04:03:42 | 4 | [
{
"id": "QkxtFCaSOb",
"forum": "if1Ndb6RWD",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission13955/Reviewer_wGbv",
"reviewer_name": "Reviewer_wGbv",
"rating": 2,
"confidence": 4,
"soundness": 1,
"contribution": 2,
"presentation": 3,
"summary": "This ... |
dlaNQM6YbZ | https://openreview.net/forum?id=dlaNQM6YbZ | The Flaw of Averages: Quantifying Uniformity of Performance on Benchmarks | 4.5 | 3.25 | [
6,
6,
2,
4
] | [
3,
3,
4,
3
] | 4 | [
"Benchmark reliability",
"meta-evaluation of benchmarks",
"evaluation reliability",
"diagnostic evaluation"
] | Benchmarks shape scientific conclusions about model capabilities and steer model development. This creates a feedback loop: stronger benchmarks drive better models, and better models demand more discriminative benchmarks. Ensuring benchmark reliability is therefore essential for trustworthy evaluation and meaningful progress. In this work, we study benchmark reliability from a \emph{distributional} perspective and introduce benchmark harmony, which measures \textit{how uniformly a model's performance is distributed across the subdomains of a benchmark}. We posit that high harmony is a desirable benchmark property, indicating that the aggregate metric reflects uniform competence across subdomains. Across 19 multiple-choice benchmarks and five model families, we map each benchmark onto a mean-variance plane of harmony computed across models, where high mean and low variance signal more reliable evaluation. Our analysis shows that less harmonious benchmarks can give misleading results, since overall accuracy may be disproportionately influenced by specific subdomains. For instance, \emph{ARC-Easy} is overwhelmed by questions on \emph{Biological Concepts}, overshadowing other critical subdomains such as Geography, Physics, Chemistry, and Environmental Science. By recommending that harmony should be reported alongside accuracy, we reframe evaluation from
simple performance averages to a more robust, distributionally reliable measurement of performance. | datasets and benchmarks | https://openreview.net/pdf?id=dlaNQM6YbZ | 2025-09-19T08:12:50 | 4 | [
{
"id": "9tq7VP8KiW",
"forum": "dlaNQM6YbZ",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission14643/Reviewer_F1zp",
"reviewer_name": "Reviewer_F1zp",
"rating": 6,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "Bench... | |
ErED2dvR7Z | https://openreview.net/forum?id=ErED2dvR7Z | Cascaded Flow Matching for Heterogeneous Tabular Data with Mixed-Type Features | 2.5 | 3.5 | [
2,
2,
2,
4
] | [
4,
2,
4,
4
] | 4 | [
"tabular data",
"flow matching",
"generative modeling",
"synthetic data"
] | Advances in generative modeling have recently been adapted to heterogeneous tabular data. However, generating mixed-type features that combine discrete values with an otherwise continuous distribution remains challenging.
We advance the state-of-the-art in diffusion-based generative models for heterogeneous tabular data with a cascaded approach.
As such, we conceptualize categorical variables and numerical features as low- and high-resolution representations of a tabular data row. We derive a feature-wise low-resolution representation of numerical features that allows the direct incorporation of mixed-type features including missing values or discrete outcomes with non-zero probability mass.
This coarse information is leveraged to guide the high-resolution flow matching model via a novel conditional probability path.
We prove that this lowers the transport costs of the flow matching model.
The results illustrate that our cascaded pipeline generates more realistic samples and learns the details of distributions more accurately. | A cascaded flow matching framework that generates details in tabular data conditioned on low-resolution features. | applications to computer vision, audio, language, and other modalities | https://openreview.net/pdf?id=ErED2dvR7Z | 2025-09-19T21:57:07 | 4 | [
{
"id": "BXQ9NHnc2f",
"forum": "ErED2dvR7Z",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission18697/Reviewer_5EfX",
"reviewer_name": "Reviewer_5EfX",
"rating": 2,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 1,
"summary": "Mixed... |
9PpLnRAZjN | https://openreview.net/forum?id=9PpLnRAZjN | End-to-End One Step Flow Matching via Flow Fitting | 4 | 4.25 | [
2,
6,
4,
4
] | [
5,
5,
4,
3
] | 4 | [
"Flow matching",
"Single step generative models"
] | Diffusion and flow-matching models have demonstrated impressive performance in generating diverse, high-fidelity images by learning transformations from noise to data. However, their reliance on multi-step sampling requires repeated neural network evaluations, leading to high computational cost. We propose FlowFit, a family of generative models that enables high-quality sample generation through both single-phase training and single-step inference. FlowFit learns to approximate the continuous flow trajectory between latent noise $x_0$ and data $x_1$ by fitting a basis of functions parameterized over time $t \in [0, 1]$ during training. At inference time, sampling is performed by simply evaluating the flow only at the terminal time $t = 1$, avoiding iterative denoising or numerical integration. Empirically, FlowFit outperforms prior diffusion-based single-phase training methods, achieving superior sample quality. | generative models | https://openreview.net/pdf?id=9PpLnRAZjN | 2025-09-19T15:39:30 | 4 | [
{
"id": "EsTXV9su34",
"forum": "9PpLnRAZjN",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission16679/Reviewer_me4z",
"reviewer_name": "Reviewer_me4z",
"rating": 2,
"confidence": 5,
"soundness": 1,
"contribution": 2,
"presentation": 1,
"summary": "This ... | |
wcInjlUp8V | https://openreview.net/forum?id=wcInjlUp8V | CoTabBench: A Real-World Benchmark for Question Answering over Weakly-Structured and Heterogeneous Tables | 4 | 4 | [
2,
4,
4,
6
] | [
4,
4,
4,
4
] | 4 | [
"Table Question Answering",
"Large Language Models",
"Benchmark",
"Real-World Data"
] | Recent advancements in Large Language Models (LLMs) have significantly propelled their capabilities in table-based question answering. However, existing benchmarks predominantly feature well-structured tables, failing to address the complexities of real-world data, which is often weakly-structured and contains highly heterogeneous content. This discrepancy limits the evaluation of model robustness on diverse and challenging formats, such as tables with intricate layouts and varied data types found in scientific papers or financial reports. To bridge this gap, we introduce CoTabBench, a large-scale, multi-domain, and intricate benchmark featuring over 2,700 real-world, weakly-structured tables and more than 8,600 question-answer pairs spanning 10 distinct domains. We further propose a novel complexity assessment framework, which quantitatively validates the inherent structural and content-based challenges within CoTabBench. Furthermore, we introduce CoTabInstruct, a large-scale training corpus with over 11,000 tables, and present CoTabLLM, a 7B model trained on it that outperforms even leading models like GPT-4.1 on our benchmark. Extensive experiments reveal a significant performance degradation for state-of-the-art models on CoTabBench, highlighting its critical role in advancing robust, real-world table understanding. | To address the fact that LLMs fail on complex, real-world tables, we created CoTabBench: a comprehensive benchmark and dataset designed to push models beyond simple structured data and foster more robust table understanding. | datasets and benchmarks | https://openreview.net/pdf?id=wcInjlUp8V | 2025-09-17T16:36:52 | 4 | [
{
"id": "m7pRuiRqVr",
"forum": "wcInjlUp8V",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8787/Reviewer_TtPK",
"reviewer_name": "Reviewer_TtPK",
"rating": 2,
"confidence": 4,
"soundness": 1,
"contribution": 1,
"presentation": 3,
"summary": "This p... |
Pxd5mjwznl | https://openreview.net/forum?id=Pxd5mjwznl | Difference back propagation with inverse sigmoid function | 0 | 4.666667 | [
0,
0,
0
] | [
4,
5,
5
] | 3 | [
"Machine Learning",
"AI",
"Algorithm",
"Back Propagation"
] | Since the proposal of neural networks, the derivative-based back propagation algorithm has been the default setting. However, the derivative of a non-linear function is an approximation of the difference of the function values, and it would be more precise to do back propagation using the difference directly instead of the derivative. While the back propagation algorithm has been the rule-of-thumb for neural networks, it has become one of the bottlenecks in modern large deep learning models. With the explosion of big data and large-scale deep learning models, a tiny change in the back propagation could lead to a huge difference. Here we propose a new back propagation algorithm based on the inverse sigmoid function to calculate the difference instead of the derivative, and verify its effectiveness with basic examples. | We propose a new back propagation algorithm that calculates the back propagation updates using the difference instead of the derivative from the activation function | optimization | https://openreview.net/pdf?id=Pxd5mjwznl | 2025-09-19T11:05:56 | 3 | [
{
"id": "dI3B1VuSWi",
"forum": "Pxd5mjwznl",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission15419/Reviewer_ZEzY",
"reviewer_name": "Reviewer_ZEzY",
"rating": 0,
"confidence": 4,
"soundness": 1,
"contribution": 1,
"presentation": 2,
"summary": "The s... |
sJI2JCggyD | https://openreview.net/forum?id=sJI2JCggyD | Delta Activations: A Representation for Finetuned Large Language Models | 3.333333 | 3.666667 | [
2,
4,
4
] | [
4,
4,
3
] | 3 | [
"Representation",
"LLM",
"post-training",
"finetuning"
] | The success of powerful open source Large Language Models (LLMs) has enabled the community to create a vast collection of post-trained models adapted to specific tasks and domains. However, navigating and understanding these models remains challenging due to inconsistent metadata and unstructured repositories. We introduce Delta Activations, a method to represent finetuned models as vector embeddings by measuring shifts in their internal activations relative to a base model. Clustering analysis shows that Delta Activations achieve strong separation of finetuned domains, significantly outperforming baselines such as flattened weights, salient parameter masks, and output embeddings, while being more lightweight and computationally efficient. Delta Activations also demonstrate desirable properties: it is robust across finetuning settings and exhibits an additive property when finetuning datasets are mixed. We also explore extensions of Delta Activations: it can represent tasks via few-shot finetuning for reliable model retrieval and guide model selection for merging by quantifying similarity between models. Furthermore, activations can be substituted with other representation extraction methods, demonstrating the flexibility of the broader Delta-X framework.
We hope Delta Activations can facilitate the practice of reusing publicly available models. | foundation or frontier models, including LLMs | https://openreview.net/pdf?id=sJI2JCggyD | 2025-09-08T22:33:52 | 3 | [
{
"id": "TP1UBBYsj8",
"forum": "sJI2JCggyD",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission3142/Reviewer_KPrH",
"reviewer_name": "Reviewer_KPrH",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 4,
"summary": "This p... | |
uS2FiaAkCz | https://openreview.net/forum?id=uS2FiaAkCz | Towards Monotonic Improvement in In-Context Reinforcement Learning | 3 | 3.5 | [
6,
4,
0,
2
] | [
3,
3,
4,
4
] | 4 | [
"Reinforcement Learning",
"Meta-RL",
"In-context Reinforcement Learning",
"Transformers",
"Learning to Learn"
] | In-Context Reinforcement Learning (ICRL) has emerged as a promising paradigm for developing agents that can rapidly adapt to new tasks by leveraging past experiences as context, without updating their parameters. Recent approaches train large sequence models on monotonic policy improvement data from online RL, aiming to sustain continued performance improvement at test time. However, our experimental analysis reveals a critical flaw: these models cannot sustain the continual improvement exhibited in the training data during testing. Theoretically, we identify this phenomenon as *contextual ambiguity*, where the model's own stochastic actions can generate an interaction history that misleadingly resembles that of a sub-optimal policy from the training data, initiating a vicious cycle of poor action selection. To resolve the contextual ambiguity, we introduce *Context Value* into the training phase and propose **Context Value Informed ICRL** (CV-ICRL). CV-ICRL uses Context Value as an explicit signal representing the ideal performance theoretically achievable by a policy given the current context. As the context expands, Context Value could include more task-relevant information, and therefore the ideal performance should be non-decreasing. We prove that the Context Value tightens the lower bound on the performance gap relative to an ideal, monotonically improving policy. We further propose two methods for estimating Context Value at both training and testing time. Experiments conducted on the Dark Room and MiniGrid testbeds demonstrate that CV-ICRL effectively mitigates performance degradation and improves overall ICRL abilities across various tasks and environments. The source code and data of this paper are available at https://anonymous.4open.science/r/towards_monotonic_improvement-E72F. | reinforcement learning | https://openreview.net/pdf?id=uS2FiaAkCz | 2025-09-16T22:52:04 | 4 | [
{
"id": "7RMkPfkIZB",
"forum": "uS2FiaAkCz",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission7740/Reviewer_8tXY",
"reviewer_name": "Reviewer_8tXY",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 2,
"summary": "This p... | |
xindJJLSr1 | https://openreview.net/forum?id=xindJJLSr1 | ReWatch-R1: Boosting Complex Video Reasoning in Large Vision-Language Models through Agentic Data Synthesis | 5 | 3.833333 | [
6,
6,
6,
4,
4,
4
] | [
4,
3,
4,
5,
3,
4
] | 6 | [
"Video Reasoning",
"Large Vision-Language Models (LVLMs)",
"Agentic Data Synthesis",
"Multi-Agent ReAct",
"Reinforcement Learning with Verifiable Reward (RLVR)",
"Chain-of-Thought (CoT)"
] | While Reinforcement Learning with Verifiable Reward (RLVR) significantly advances image reasoning in Large Vision-Language Models (LVLMs), its application to complex video reasoning remains underdeveloped. This gap stems primarily from a critical data bottleneck: existing datasets lack the challenging, multi-hop questions and high-quality, video-grounded Chain-of-Thought (CoT) data necessary to effectively bootstrap RLVR. To address this, we introduce ReWatch, a large-scale dataset built to foster advanced video reasoning. We propose a novel multi-stage synthesis pipeline to synthesize its three components: ReWatch-Caption, ReWatch-QA, and ReWatch-CoT. A core innovation is our Multi-Agent ReAct framework for CoT synthesis, which simulates a human-like "re-watching" process to generate video-grounded reasoning traces by explicitly modeling information retrieval and verification. Building on this dataset, we develop ReWatch-R1 by post-training a strong baseline LVLM with Supervised Fine-Tuning (SFT) and our RLVR framework. This framework incorporates a novel Observation \& Reasoning (O\&R) reward mechanism that evaluates both the final answer's correctness and the reasoning's alignment with video content, directly penalizing hallucination. Our experiments show that ReWatch-R1 achieves state-of-the-art average performance on five challenging video reasoning benchmarks, substantially outperforming models trained on all other open-source datasets. We also provide crucial insights into the training dynamics of SFT and RL for complex video reasoning. | We introduce an agent-based pipeline to synthesize a high-quality video reasoning dataset (ReWatch) and a novel reinforcement learning reward (O&R) to train LVLMs, achieving state-of-the-art performance. | applications to computer vision, audio, language, and other modalities | https://openreview.net/pdf?id=xindJJLSr1 | 2025-09-19T20:00:47 | 6 | [
{
"id": "gSNBwjmODT",
"forum": "xindJJLSr1",
"review_number": 6,
"reviewer_id": "ICLR.cc/2026/Conference/Submission18045/Reviewer_tQx8",
"reviewer_name": "Reviewer_tQx8",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This ... |
mjLMdY0xul | https://openreview.net/forum?id=mjLMdY0xul | Efficient Self-Evaluation for Diffusion Language Models via Sequence Regeneration | 3.333333 | 4 | [
4,
2,
4
] | [
4,
4,
4
] | 3 | [
"Diffusion Large Language Models"
] | Diffusion large language models (dLLMs) have recently attracted significant attention for their ability to enhance diversity, controllability, and parallelism. However, their non-sequential, bidirectionally masked generation makes quality assessment difficult, underscoring the need for effective self-evaluation. In this work, we propose DiSE, a simple yet effective self-evaluation confidence quantification method for dLLMs. DiSE quantifies confidence by computing the probability of regenerating the tokens in the entire generated sequence, given the full context. This method enables more efficient and reliable quality assessment by leveraging token regeneration probabilities, facilitating both likelihood estimation and robust uncertainty quantification. Building upon DiSE, we further introduce a flexible-length generation framework, which adaptively controls the sequence length based on the model’s self-assessment of its own output. Experiments demonstrate that DiSE consistently improves performance across multiple datasets, increasing likelihood evaluation by $4.0$\% and uncertainty evaluation by $6.4$\%, while achieving up to a $32\times$ speedup over Monte Carlo simulation baseline, and additionally improving flexible-length generation accuracy. These results establish DiSE as an efficient and versatile self-evaluation framework for diffusion-based language models. | We propose a simple yet effective self-evaluation confidence quantification method for diffusion large language models (dLLMs), and introduce a flexible-length dLLM generation framework based on it. | foundation or frontier models, including LLMs | https://openreview.net/pdf?id=mjLMdY0xul | 2025-09-04T21:14:57 | 3 | [
{
"id": "yP2WOCnO6x",
"forum": "mjLMdY0xul",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission2115/Reviewer_Ya5E",
"reviewer_name": "Reviewer_Ya5E",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "This p... |
qVadFFSfrI | https://openreview.net/forum?id=qVadFFSfrI | Diagnosing and Remedying Knowledge Deficiencies in LLMs via Label-free Curricular Meaningful Learning | 6 | 3.5 | [
4,
6,
6,
8
] | [
3,
4,
4,
3
] | 4 | [
"Deficiency Diagnosis",
"Data Synthesis",
"LLMs Reasoning"
] | Large Language Models (LLMs) have demonstrated impressive generalization ability by learning from extensive unlabeled text. However, they still exhibit reasoning mistakes, which can affect their trustworthiness and reliability. Although users can interact with LLMs and provide diverse and comprehensive queries to expose the flaws of LLMs, obtaining sufficient and effective feedback is demanding. Furthermore, comprehensively evaluating LLMs with limited labeled samples is difficult. These factors make it challenging to diagnose and remedy the deficiencies in LLMs through rich label-free user queries. To tackle this challenge, and considering that LLMs' reasoning mistakes often stem from knowledge deficiencies, we propose label-free curricular meaningful learning (LaMer), which first employs relative entropy to diagnose and quantify knowledge deficiencies of LLMs in a label-free setting. Then, LaMer adaptively synthesizes augmentation data based on deficiency severity and progressively remedies them with a curricular remedy strategy. Experiments show that LaMer effectively diagnoses and remedies knowledge deficiencies in LLMs, improving various LLMs across seven out-of-distribution (OOD) reasoning benchmarks, achieving comparable results to baselines with only 40% training data. LaMer even surpasses methods that rely on labeled data for deficiency diagnosis. In application, LaMer offers a diagnostic tool for efficient LLM development. | Diagnose the knowledge deficiencies of LLMs and remedy them with a novel approach. | foundation or frontier models, including LLMs | https://openreview.net/pdf?id=qVadFFSfrI | 2025-09-19T23:56:45 | 4 | [
{
"id": "RXsidSYRgJ",
"forum": "qVadFFSfrI",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission19580/Reviewer_jb7t",
"reviewer_name": "Reviewer_jb7t",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This ... |
7FfZc9MePg | https://openreview.net/forum?id=7FfZc9MePg | PersonBias: A Lightweight Framework for Personalized Bias Mitigation in Large Language Models | 3 | 3.5 | [
2,
4,
4,
2
] | [
3,
4,
3,
4
] | 4 | [
"Personalized Debiasing",
"Dynamic Intervention",
"Large Language Models",
"Bias-Utility Trade-off"
] | Social bias in large language model (LLM) outputs has emerged as a critical challenge in artificial intelligence. While existing bias detection methods pursue comprehensive identification and elimination of implicit biases, this \textit{one-size-fits-all} approach presents significant limitations. Excessive bias correction causes responses to deviate from user query intent, comprehensive detection demands extensive human annotation and computational resources, and critically, user heterogeneity dictates that different individuals with diverse backgrounds and personality traits exhibit varying sensitivities toward different bias types. To address these challenges, we propose PersonBias, a lightweight, personalized debiasing framework that balances bias mitigation with response quality optimization. Our approach leverages LLMs to automatically extract user personality features from conversational contexts, eliminating the need for explicit demographic data collection. We develop a dual-tower encoder architecture with cross-attention mechanisms to model user-specific bias sensitivities, employing parameter-efficient fine-tuning that freezes encoder parameters while optimizing only projection layers and attention mechanisms. Rather than requiring model-specific fine-tuning, PersonBias operates through real-time intervention during generation, dynamically evaluating and adjusting outputs at fixed token intervals to prevent bias accumulation while maintaining relevance and utility. Experiments on multi-turn dialogue datasets demonstrate that PersonBias achieves superior bias reduction and utility preservation compared to prompt-based and fine-tuning baselines, offering a practical and adaptive solution for personalized fairness in LLMs.
| We introduce PersonBias, a plug-and-play module that detects and mitigates social biases in LLM outputs by dynamically adapting to individual user preferences, balancing fairness with response quality. | alignment, fairness, safety, privacy, and societal considerations | https://openreview.net/pdf?id=7FfZc9MePg | 2025-09-17T14:57:48 | 4 | [
{
"id": "W4XPLxbMIt",
"forum": "7FfZc9MePg",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8599/Reviewer_5vGB",
"reviewer_name": "Reviewer_5vGB",
"rating": 2,
"confidence": 3,
"soundness": 2,
"contribution": 1,
"presentation": 3,
"summary": "This p... |
l7Vb3yxmuz | https://openreview.net/forum?id=l7Vb3yxmuz | WINA: Weight Informed Neuron Activation for Accelerating Large Language Model Inference | 5.666667 | 3 | [
6,
6,
6,
6,
4,
6
] | [
2,
3,
3,
3,
3,
4
] | 6 | [
"Sparse Activation",
"Efficient Inference"
] | The ever-increasing computational demands of large language models (LLMs) make efficient inference a central challenge. While recent advances leverage specialized architectures or selective activation, they typically require (re)training or architectural modifications, limiting their broad applicability. Training-free sparse activation, in contrast, offers a plug-and-play pathway to efficiency; however, existing methods often rely solely on hidden state magnitudes, leading to significant approximation error and performance degradation. To address this, we introduce WINA (Weight-Informed Neuron Activation): a simple framework for training-free sparse activation that incorporates both hidden state magnitudes and weight matrix structure. By also leveraging the ℓ2-norm of the model’s weight matrices, WINA yields a principled sparsification strategy with provably optimal approximation error bounds, offering better and tighter theoretical guarantees than prior state-of-the-art approaches. Overall, WINA also empirically outperforms many previous training-free methods across diverse LLM architectures and datasets: not only matching or exceeding their accuracy at comparable sparsity levels, but also sustaining performance better at more extreme sparsity levels. Together, these results position WINA as a practical, theoretically grounded, and broadly deployable solution for efficient inference. Our source code is anonymously available at https://anonymous.4open.science/r/wina-F704/README.md. | foundation or frontier models, including LLMs | https://openreview.net/pdf?id=l7Vb3yxmuz | 2025-09-17T05:11:00 | 6 | [
{
"id": "liI8CjRvSr",
"forum": "l7Vb3yxmuz",
"review_number": 6,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8065/Reviewer_zwra",
"reviewer_name": "Reviewer_zwra",
"rating": 6,
"confidence": 2,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The pa... | |
qrKymA0zuY | https://openreview.net/forum?id=qrKymA0zuY | Explaining Multimodal LLMs via Intra-Modal Token Interactions | 4 | 3.5 | [
4,
6,
4,
2
] | [
4,
4,
3,
3
] | 4 | [
"XAI",
"Multimodal LLM"
] | Multimodal Large Language Models (MLLMs) have achieved remarkable success across diverse vision-language tasks, yet their internal decision-making mechanisms remain insufficiently understood. Existing interpretability research has primarily focused on cross-modal attribution, identifying which image regions the model attends to during output generation. However, these approaches often overlook intra-modal dependencies. In the visual modality, attributing importance to isolated image patches ignores spatial context due to limited receptive fields, resulting in fragmented and noisy explanations. In the textual modality, reliance on preceding tokens introduces spurious activations. Failing to effectively mitigate this interference compromises attribution fidelity. To address these limitations, we propose enhancing interpretability by leveraging intra-modal interaction. For the visual branch, we introduce Multi-Scale Explanation Aggregation (MSEA), which aggregates attributions over multi-scale inputs to dynamically adjust receptive fields, producing more holistic and spatially coherent visual explanations. For the textual branch, we propose Activation Ranking Correlation (ARC), which measures the relevance of contextual tokens to the current token via alignment of their top-$k$ prediction rankings. ARC leverages this relevance to suppress spurious activations from irrelevant contexts while preserving semantically coherent ones. Extensive experiments across state-of-the-art MLLMs and benchmark datasets demonstrate that our approach consistently outperforms existing interpretability methods, yielding more faithful and fine-grained explanations of model behavior. | interpretability and explainable AI | https://openreview.net/pdf?id=qrKymA0zuY | 2025-09-20T07:30:34 | 4 | [
{
"id": "Ys70HRJ3G3",
"forum": "qrKymA0zuY",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission22000/Reviewer_UEeF",
"reviewer_name": "Reviewer_UEeF",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This ... | |
VPju7xAxb1 | https://openreview.net/forum?id=VPju7xAxb1 | Comprehend and Talk: Text to Speech Synthesis via Dual Language Modeling | 2 | 4.75 | [
2,
2,
2,
2
] | [
4,
5,
5,
5
] | 4 | [
"Text to Speech; Speech Signal Processing; Speech Language Modeling; Audio Language Models"
] | Existing Large Language Model (LLM) based autoregressive (AR) text-to-speech (TTS) systems, while achieving state-of-the-art quality, still face critical challenges. The foundation of this LLM-based paradigm is the discretization of the continuous speech waveform into a sequence of discrete tokens by neural audio codec. However, single codebook modeling is well suited to text LLMs, but suffers from significant information loss; hierarchical acoustic tokens, typically generated via Residual Vector Quantization (RVQ), often lack explicit semantic structure, placing a heavy learning burden on the model. Furthermore, the autoregressive process is inherently susceptible to error accumulation, which can degrade generation stability. To address these limitations, we propose CaT-TTS, a novel framework for robust and semantically-grounded zero-shot synthesis. First, we introduce S3Codec, a split RVQ codec that injects explicit linguistic features into its primary codebook via semantic distillation from a state-of-the-art ASR model, providing a structured representation that simplifies the learning task. Second, we propose an ``Understand-then-Generate'' dual-Transformer architecture that decouples comprehension from rendering. An initial ``Understanding'' Transformer models the cross-modal relationship between text and the prompt's semantic tokens to form a high-level utterance plan. A subsequent ``Generation'' Transformer then executes this plan, autoregressively synthesizing hierarchical acoustic tokens. Finally, to enhance generation stability, we introduce Masked Audio Parallel Inference (MAPI), a nearly parameter-free inference strategy that dynamically guides the decoding process to mitigate local errors. 
Extensive experiments demonstrate that the synergy of our principled architecture and semantically-aware codec allows CaT-TTS to achieve new state-of-the-art performance in zero-shot voice cloning, with MAPI providing a measurable boost in generation robustness on benchmark datasets. Project page: \href{https://anonymous.4open.science/r/CaT-TTS-66A1/}{https://anonymous.4open.science/r/CaT-TTS-66A1}. | Propose a two stage method for audio language modeling | applications to computer vision, audio, language, and other modalities | https://openreview.net/pdf?id=VPju7xAxb1 | 2025-09-14T20:08:12 | 5 | [
{
"id": "oiVs7XYTj7",
"forum": "VPju7xAxb1",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission5123/Reviewer_9G9x",
"reviewer_name": "Reviewer_9G9x",
"rating": 2,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This p... |
vJBMYahZY5 | https://openreview.net/forum?id=vJBMYahZY5 | MSearcher: Self-Reflective Search Agent Empowered by Monte Carlo Tree Search Based Data Synthesis | 4.5 | 3.75 | [
4,
4,
4,
6
] | [
4,
4,
4,
3
] | 4 | [
"Data Construction",
"Monte Carlo Tree Search",
"Post Training",
"Reinforcement Learning",
"Question Answering"
] | Recent advances in reinforcement learning (RL) have enabled large language models (LLMs) to perform multi-turn chain-of-thought (CoT) reasoning with tool use, where web search serves as the most critical tool for answering complex questions. However, most existing methods apply RL directly to off-the-shelf models without a supervised fine-tuning (SFT) cold start, resulting in unstable training and limited tool invocations. This difficulty is exacerbated by the high cost of curating long reasoning trajectories, which are expensive to annotate and prone to factual drift. We propose MSearcher, a two-stage trained search agent that combines reflective thinking with robust tool use for complex reasoning. A central contribution is an efficient data construction framework based on Monte Carlo Tree Search (MCTS), which produces self-reflective reasoning trajectories for the SFT cold start. This framework leverages both correct and flawed rollouts to generate natural and diverse reasoning data. We adopt a two-stage pipeline, first applying SFT with our constructed data and then further training the model with RL, achieving substantial improvements on multi-hop question answering: 67.6\% on HotpotQA and 52.0\% on Frames. These results highlight the importance of high-quality SFT in stabilizing RL and equipping LLMs with robust long-horizon reasoning capabilities. | reinforcement learning | https://openreview.net/pdf?id=vJBMYahZY5 | 2025-09-20T18:20:21 | 4 | [
{
"id": "3zB7qa4SOb",
"forum": "vJBMYahZY5",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission25063/Reviewer_RG1c",
"reviewer_name": "Reviewer_RG1c",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This ... | |
Eqbay04527 | https://openreview.net/forum?id=Eqbay04527 | HICO-GT: Hidden Community Based Tokenized Graph Transformer for Node Classification | 3.5 | 3.75 | [
2,
4,
4,
4
] | [
3,
4,
4,
4
] | 4 | [
"graph Transformer",
"node classification",
"hidden community detection"
] | Graph Transformers have been proven to be effective for the node classification task, of which tokenized graph Transformer is one of the most powerful approaches. When constructing tokens, existing methods focus on collecting multi-view node information as the Transformer's input. However, if a type of tokens only includes nodes having relations with a target node from one perspective, it will not provide sufficient evidence for predicting unknown labels. Directly applying self-attention to all tokens may also produce contradictory information as they are selected by distinct rules. Meanwhile, as an emerging concept on graphs, hidden communities refer to those with relatively weaker structures and being obscured by stronger ones. In this paper, inspired by the hidden community clustering method, we design a new multi-view graph Transformer called HICO-GT. We first reconstruct the input graph by merging the original topology and attribute information. Through an iterative process of weakening dominant and hidden communities in turn, we obtain two subgraphs both containing node information of topological relation and attributed similarity, and then generate two token sequences correspondingly. Along with another neighborhood sequence produced on the original graph, they are separately fed into the Transformer and fused afterwards to form the final representations. Experimental results on various datasets verify the performance of the proposed model, surpassing existing graph Transformers. | learning on graphs and other geometries & topologies | https://openreview.net/pdf?id=Eqbay04527 | 2025-09-19T16:34:10 | 4 | [
{
"id": "wThICaGWDK",
"forum": "Eqbay04527",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission16978/Reviewer_8rpR",
"reviewer_name": "Reviewer_8rpR",
"rating": 2,
"confidence": 3,
"soundness": 2,
"contribution": 1,
"presentation": 3,
"summary": "This ... | |
WtbXgc9GVA | https://openreview.net/forum?id=WtbXgc9GVA | LoRA meets Riemannion: Muon Optimizer for Parametrization-independent Low-Rank Adapters | 4 | 3.6 | [
4,
6,
2,
4,
4
] | [
4,
3,
4,
3,
4
] | 5 | [
"Low-rank Adaption",
"Fine-tuning",
"Smooth manifolds",
"Riemannian optimization",
"Fixed matrix rank manifold",
"LLM",
"Diffusion Models"
] | This work presents a novel, fully Riemannian framework for Low-Rank Adaptation (LoRA) that geometrically treats low-rank adapters by optimizing them directly on the fixed-rank manifold. This formulation eliminates the parametrization ambiguity present in standard Euclidean optimizers. Our framework integrates three key components to achieve this: (1) we derive **Riemannion**, a new Riemannian optimizer on the fixed-rank matrix manifold that generalizes the recently proposed Muon optimizer; (2) we develop a Riemannian gradient-informed LoRA initialization, and (3) we provide an efficient implementation without prominent overhead that uses automatic differentiation to compute arising geometric operations while adhering to best practices in numerical linear algebra. Comprehensive experimental results on both LLM and diffusion model architectures demonstrate that our approach yields consistent and noticeable improvements in convergence speed and final task performance over both standard LoRA and its state-of-the-art modifications. | generative models | https://openreview.net/pdf?id=WtbXgc9GVA | 2025-09-20T02:34:41 | 5 | [
{
"id": "hdoJDubxke",
"forum": "WtbXgc9GVA",
"review_number": 5,
"reviewer_id": "ICLR.cc/2026/Conference/Submission20503/Reviewer_wX7P",
"reviewer_name": "Reviewer_wX7P",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This ... | |
0fcVDzkGK2 | https://openreview.net/forum?id=0fcVDzkGK2 | Divide-and-Denoise: A Game Theoretic Method for Fairly Composing Diffusion Models | 2.666667 | 3.333333 | [
0,
4,
4
] | [
3,
4,
3
] | 3 | [
"Diffusion Models",
"Fair Composition",
"Game-Theoretic",
"Text-to-Image"
] | The widespread availability of large-scale pre-trained generative models raises a question: how can we best leverage them beyond their original training distributions? Two strategies provide partial answers. Composition combines multiple diffusion models, typically through linear averaging of their predictions, to produce out-of-distribution samples. Guidance steers a single model by biasing its generation with rewards or classifier scores. We unify these perspectives with Divide-and-Denoise, a game-theoretic approach to compositional sampling from multiple pre-trained diffusion models, coordinated through an allocation flow. At each denoising step, we alternate between (i) partitioning the sample into regions assigned to distinct models for denoising (composition) and (ii) aligning the sample with this division (guidance). The partition is determined by solving a fair allocation problem under a shared alignment objective. We evaluate our method on text-to-image generation. Using models conditioned on different prompts, Divide-and-Denoise reliably generates images that capture the semantics of each prompt, even surpassing joint-prompt conditioning. On the GenEval benchmark, it further outperforms energy-based composition and joint prompting baselines, resolving common issues such as missing objects and attribute mismatches. | a game-theoretic approach to compositional sampling from multiple pre-trained diffusion models | generative models | https://openreview.net/pdf?id=0fcVDzkGK2 | 2025-09-18T16:22:23 | 3 | [
{
"id": "5zG3M4XOSY",
"forum": "0fcVDzkGK2",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission10873/Reviewer_nmYV",
"reviewer_name": "Reviewer_nmYV",
"rating": 0,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 1,
"summary": "This ... |
vEh1ceS154 | https://openreview.net/forum?id=vEh1ceS154 | Partition Generative Modeling: Masked Modeling Without Masks | 7 | 3 | [
6,
6,
8,
8
] | [
3,
2,
4,
3
] | 4 | [
"masked generative modeling",
"discrete diffusion",
"masked diffusion language modeling",
"diffusion language modeling"
] | Masked generative models (MGMs) are widely used to capture complex data and enable faster generation than autoregressive models (AR) through parallel decoding.
However, MGMs typically operate on fixed-length inputs, which can be inefficient: early in sampling, most tokens are masked and carry little information, leading to wasted computation. In contrast, AR models process only tokens generated previously, making early iterations faster.
In this work, we introduce the ``Partition Generative Model'' (PGM), a novel approach that combines the strengths of AR and MGMs. Rather than masking, PGM partitions tokens into two groups and employs sparse attention to block information flow between them.
Since there is no information flow between partitions, the model can process the previously-generated tokens only during sampling, while retaining the ability to generate tokens in parallel and in any order.
On OpenWebText, PGMs offer at least $5\times$ improvements in sampling latency and throughput, while producing samples with superior generative perplexity, compared to Masked Diffusion Language Models. In the ImageNet dataset, PGMs achieve up to $7\times$ better throughput compared to MaskGIT with only a small change in FID. Finally, we show that PGMs are compatible with distillation methods for MGMs, enabling further inference speedups. | We show that it is possible to train masked generative models without using MASK tokens, resulting in efficiency gains at inference. | generative models | https://openreview.net/pdf?id=vEh1ceS154 | 2025-09-17T01:36:32 | 4 | [
{
"id": "LabNEsk09h",
"forum": "vEh1ceS154",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission7931/Reviewer_YDst",
"reviewer_name": "Reviewer_YDst",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The au... |
wSGle6ag5I | https://openreview.net/forum?id=wSGle6ag5I | Improving Diffusion Models for Class-imbalanced Training Data via Capacity Manipulation | 6 | 3.5 | [
6,
6,
6,
6
] | [
4,
3,
3,
4
] | 4 | [
"Imbalance",
"Diffusion Models"
] | While diffusion models have achieved remarkable performance in image generation, they often struggle with the imbalanced datasets frequently encountered in real-world applications, resulting in significant performance degradation on minority classes. In this paper, we identify model capacity allocation as a key and previously underexplored factor contributing to this issue, providing a perspective that is orthogonal to existing research. Our empirical experiments and theoretical analysis reveal that majority classes monopolize an unnecessarily large portion of the model's capacity, thereby restricting the representation of minority classes. To address this, we propose Capacity Manipulation (CM), which explicitly reserves model capacity for minority classes. Our approach leverages a low-rank decomposition of model parameters and introduces a capacity manipulation loss to allocate appropriate capacity for capturing minority knowledge, thus enhancing minority class representation. Extensive experiments demonstrate that CM consistently and significantly improves the robustness of diffusion models on imbalanced datasets, and when combined with existing methods, further boosts overall performance. | unsupervised, self-supervised, semi-supervised, and supervised representation learning | https://openreview.net/pdf?id=wSGle6ag5I | 2025-09-05T11:15:04 | 4 | [
{
"id": "y03pIN2wNq",
"forum": "wSGle6ag5I",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission2255/Reviewer_NJJB",
"reviewer_name": "Reviewer_NJJB",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This p... | |
XGODWn7HeJ | https://openreview.net/forum?id=XGODWn7HeJ | Toward Principled Flexible Scaling for Self-Gated Neural Activation | 6.666667 | 4 | [
8,
6,
6
] | [
4,
4,
4
] | 3 | [
"Neural Activation Functions",
"Principled Neural Activation Modeling",
"Neural Activation Interpretation",
"Non-local Information Modeling"
] | Neural networks necessitate nonlinearities to achieve universal approximability.
Traditional activation functions introduce nonlinearities through rigid feature rectifications.
Recent self-gated variants improve traditional methods in fitting flexibility by incorporating learnable content-aware factors and non-local dependencies, enabling dynamic adjustments to activation curves via adaptive translation and scaling.
While SOTA approaches achieve notable gains in conventional CNN layers, they struggle to enhance Transformer layers, where fine-grained context is inherently modeled, severely reducing the effectiveness of non-local dependencies leveraged in activation processes.
We refer to this critical yet unexplored challenge as the non-local tension of activation.
Drawing on a decision-making perspective, we systematically analyze the origins of the non-local tension problem and explore the initial solution to foster a more discriminative and generalizable neural activation methodology.
This is achieved by rethinking how non-local cues are encoded and transformed into adaptive scaling coefficients, which in turn recalibrate the contributions of features to filter updates through neural activation.
Grounded in these insights, we present FleS, a novel self-gated activation model for discriminative pattern recognition.
Extensive experiments on various popular benchmarks validate our interpretable methodology for improving neural activation modeling. | We identify, elucidate, and address the underexplored non-local tension problem and introduce FleS, a self-gated activation function that enhances discriminative visual recognition through adaptive scaling. | unsupervised, self-supervised, semi-supervised, and supervised representation learning | https://openreview.net/pdf?id=XGODWn7HeJ | 2025-09-19T20:16:30 | 3 | [
{
"id": "tkYg6DEAsm",
"forum": "XGODWn7HeJ",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission18129/Reviewer_rpDy",
"reviewer_name": "Reviewer_rpDy",
"rating": 8,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 4,
"summary": "The p... |
vIcqXbhU0Y | https://openreview.net/forum?id=vIcqXbhU0Y | Coherent Local Explanations for Mathematical Optimization | 3.333333 | 4 | [
4,
4,
2
] | [
4,
4,
4
] | 3 | [
"Optimization",
"Explainability",
"Interpretability",
"Sensitivity Analysis",
"Regression"
] | The surge of explainable artificial intelligence methods seeks to enhance transparency and explainability in machine learning models. At the same time, there is a growing demand for explaining decisions taken through complex algorithms used in mathematical optimization. However, current explanation methods do not take into account the structure of the underlying optimization problem, leading to unreliable outcomes. In response to this need, we introduce Coherent Local Explanations for Mathematical Optimization (CLEMO). CLEMO provides explanations for multiple components of optimization models, the objective value and decision variables, which are coherent with the underlying model structure. Our sampling-based procedure can provide explanations for the behavior of exact and heuristic solution algorithms. The effectiveness of CLEMO is illustrated by experiments for the shortest path problem, the knapsack problem, and the vehicle routing problem. | optimization | https://openreview.net/pdf?id=vIcqXbhU0Y | 2025-09-19T17:14:41 | 3 | [
{
"id": "DbMQxAveE3",
"forum": "vIcqXbhU0Y",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission17192/Reviewer_LWrw",
"reviewer_name": "Reviewer_LWrw",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 2,
"summary": "This ... | |
3CLscEAR9X | https://openreview.net/forum?id=3CLscEAR9X | ArtAug: Iterative Enhancement of Text-to-Image Models via Synthesis–Understanding Interaction | 4.5 | 3.5 | [
2,
6,
6,
4
] | [
4,
3,
3,
4
] | 4 | [
"Diffusion models",
"alignment",
"image synthesis"
] | The emergence of diffusion models has significantly advanced image synthesis. Recent studies of model interaction and self-corrective reasoning approaches in large language models offer new insights for enhancing text-to-image models. Inspired by these studies, we propose a novel method called ArtAug for enhancing text-to-image models via model interactions with understanding models. In the interactions, we leverage human preferences implicitly learned by image understanding models to provide fine-grained suggestions for image generation models. The interactions can modify the image content to make it aesthetically pleasing, such as adjusting exposure, changing shooting angles, and adding atmospheric effects. The enhancements brought by the interaction are iteratively fused into the generation model itself through an additional enhancement module. This enables the generation model to produce aesthetically pleasing images directly with no additional inference cost. In the experiments, we verify the effectiveness of ArtAug on advanced models such as FLUX, Stable Diffusion 3.5 and Qwen2-VL, with extensive evaluations in metrics of image quality, human evaluation, and ethics. The source code and models will be released publicly. | A paper on enhancement methods for text-to-image models. | generative models | https://openreview.net/pdf?id=3CLscEAR9X | 2025-09-18T10:03:37 | 4 | [
{
"id": "7d5GHfTHbh",
"forum": "3CLscEAR9X",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission10112/Reviewer_seWm",
"reviewer_name": "Reviewer_seWm",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "Inspi... |
RnNqSYqEcm | https://openreview.net/forum?id=RnNqSYqEcm | Online Multi-objective Convex Optimization: A Unified Framework and Joint Gradient Descent | 3 | 3.5 | [
2,
4,
2,
4
] | [
4,
3,
3,
4
] | 4 | [
"online multi-objective convex optimization",
"Pareto front",
"primal-dual method"
] | Online Convex Optimization (OCO) usually addresses the learning task with a single objective; however, in real-world applications, multiple conflicting objectives often need to be optimized simultaneously. In this paper, we present an Online Multi-objective Convex Optimization (OMCO) framework with a novel multi-objective regret. We prove that, when the number of objectives in OMCO decreases to one, the regret is equal to the regret in OCO, thus unifying the OCO and OMCO frameworks. To facilitate the analysis of the proposed novel regret, we derive its equivalent form using the strong duality theory of convex optimization. Moreover, we propose an Online Joint Gradient Descent algorithm and prove that it achieves a sublinear multi-objective regret according to the equivalent regret form. Experimental results on several real-world datasets validate the effectiveness of our proposed algorithm. | optimization | https://openreview.net/pdf?id=RnNqSYqEcm | 2025-09-04T17:10:44 | 4 | [
{
"id": "EMfLr82i5c",
"forum": "RnNqSYqEcm",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission2016/Reviewer_Ggst",
"reviewer_name": "Reviewer_Ggst",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 1,
"presentation": 2,
"summary": "The pa... | |
q1Waov7fd2 | https://openreview.net/forum?id=q1Waov7fd2 | Normalized Matching Transformer | 2 | 3.75 | [
2,
2,
2,
2
] | [
4,
3,
4,
4
] | 4 | [
"Keypoint Matching",
"Graph Matching",
"Normalized Transformer",
"Hyperspherical Learning"
] | We introduce the Normalized Matching Transformer (NMT), a deep learning approach for efficient and accurate sparse keypoint matching between image pairs. NMT consists of a strong visual backbone, geometric feature refinement via SplineCNN, followed by a normalized transformer for computing matching features. Central to NMT is our hyperspherical normalization strategy: we enforce unit-norm embeddings at every transformer layer and train with a combined contrastive InfoNCE and hyperspherical uniformity loss to yield more discriminative keypoint representations. This novel architecture/loss combination encourages close alignment of matching image features and large distance between non-matching ones not only at the output level, but for each layer. Despite its architectural simplicity, NMT sets a new state-of-the-art performance on PascalVOC and SPair-71k, outperforming BBGM (Rolínek et al. 2020), ASAR (Ren et al. 2022), COMMON (Lin et al. 2023) and GMTR (Guo et al. 2024) by 5.1% and 2.2%, respectively, while converging in at least 1.7× fewer epochs compared to other state-of-the-art baselines. These results underscore the power of combining pervasive normalization with hyperspherical learning for geometric matching tasks. | unsupervised, self-supervised, semi-supervised, and supervised representation learning | https://openreview.net/pdf?id=q1Waov7fd2 | 2025-09-17T17:55:07 | 4 | [
{
"id": "YCKIg78f08",
"forum": "q1Waov7fd2",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8931/Reviewer_Ygxx",
"reviewer_name": "Reviewer_Ygxx",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 1,
"presentation": 2,
"summary": "This p... | |
TRM3GP3u2O | https://openreview.net/forum?id=TRM3GP3u2O | PSRT: Accelerating LRM-based Guard Models via Prefilled Safe Reasoning Traces | 4 | 3.75 | [
4,
6,
4,
2
] | [
5,
3,
4,
3
] | 4 | [
"AI Safety",
"LRM",
"Inference acceleration",
"Guard Model"
] | Large Reasoning Models (LRMs) have demonstrated remarkable performance on tasks such as mathematics and code generation. Motivated by these strengths, recent work has empirically demonstrated the effectiveness of LRMs as guard models in improving harmful query detection. However, LRMs typically generate long reasoning traces during inference, causing substantial computational overhead.
In this paper, we introduce $\textbf{PSRT}$, a method that replaces the model's reasoning process with a $\textbf{P}$refilled $\textbf{S}$afe $\textbf{R}$easoning $\textbf{T}$race, thereby significantly reducing the inference cost of LRMs. Concretely, PSRT prefills "safe reasoning virtual tokens" from a constructed dataset and learns over their continuous embeddings. With the aid of indicator tokens, PSRT enables harmful-query detection in a single forward pass while preserving the classification effectiveness of LRMs.
We evaluate PSRT on 7 models, 13 datasets, and 8 jailbreak methods. In terms of efficiency, PSRT completely removes the overhead of generating reasoning tokens during inference. In terms of classification performance, PSRT achieves nearly identical accuracy, with only a minor average F1 drop of 0.015 across 7 models and 5 datasets. | We replace the LRM-based guard model’s reasoning process with a prefilled safe reasoning trace, thereby preserving its capability while significantly reducing the computational overhead. | alignment, fairness, safety, privacy, and societal considerations | https://openreview.net/pdf?id=TRM3GP3u2O | 2025-09-17T11:35:47 | 4 | [
{
"id": "v56KOxMeMh",
"forum": "TRM3GP3u2O",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8365/Reviewer_1p4m",
"reviewer_name": "Reviewer_1p4m",
"rating": 4,
"confidence": 5,
"soundness": 3,
"contribution": 3,
"presentation": 4,
"summary": "The to... |
FcuJY1dK7s | https://openreview.net/forum?id=FcuJY1dK7s | Reasoning Scaffolding: Distilling the Flow of Thought from LLMs | 5.5 | 3.75 | [
6,
6,
6,
4
] | [
3,
4,
4,
4
] | 4 | [
"LLM Reasoning Distillation",
"Large Reasoning Model",
"Reasoning Scaffolding",
"Semantic Signals"
] | The prevailing approach to distilling reasoning from Large Language Models (LLMs)—behavioral cloning from textual rationales—is fundamentally limited. It teaches Small Language Models (SLMs) to mimic surface-level patterns rather than the underlying algorithmic structure of thought, resulting in a critical lack of logical robustness. We argue that instead of cloning text, distillation should transfer this algorithmic structure directly. We introduce Reasoning Scaffolding, a framework that reframes reasoning as a structured generation process. Our method first abstracts the teacher's thought process into a sequence of discrete, interpretable semantic signals (e.g., Contrast, Addition) that act as a scaffold. The student model is then trained via a multi-task objective to both (1) predict the next semantic signal, anticipating the reasoning flow, and (2) generate the corresponding step, conditioned on that signal. This multi-task scheme acts as a powerful regularizer, compelling the student to internalize the computational patterns of coherent reasoning. On a suite of challenging reasoning benchmarks, our method significantly outperforms state-of-the-art distillation in both accuracy and logical consistency, providing a path towards creating smaller models that are genuine reasoners, not just fluent mimics. | We introduce Reasoning Scaffolding, a new reasoning distillation framework that transfers reasoning patterns—not just text—from large to small language models, resulting in stronger small reasoning models. | foundation or frontier models, including LLMs | https://openreview.net/pdf?id=FcuJY1dK7s | 2025-09-18T17:04:36 | 4 | [
{
"id": "W5UVveaKeR",
"forum": "FcuJY1dK7s",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission10987/Reviewer_qrY1",
"reviewer_name": "Reviewer_qrY1",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 2,
"summary": "This ... |
eHxQc2Q0aw | https://openreview.net/forum?id=eHxQc2Q0aw | Stability and Generalization for Bellman Residuals | 4 | 3.25 | [
2,
2,
6,
6
] | [
4,
3,
2,
4
] | 4 | [
"statistical learning theory",
"algorithmic stability",
"generalization analysis",
"offline reinforcement learning",
"inverse reinforcement learning"
] | Offline reinforcement learning and offline inverse reinforcement learning aim to recover near–optimal value functions or reward models from a fixed batch of logged trajectories, yet current practice still struggles to enforce Bellman consistency. Bellman residual minimization (BRM) has emerged as an attractive remedy, as a globally convergent stochastic gradient descent–ascent based method for BRM has been recently discovered. However, its statistical behavior in the offline setting remains largely unexplored. In this paper, we close this statistical gap. Our analysis introduces a single Lyapunov potential that couples SGDA runs on neighbouring datasets and yields an $\mathcal{O}(1/n)$ on-average argument-stability bound—doubling the best known sample-complexity exponent for convex–concave saddle problems. The same stability constant translates into the $\mathcal{O}(1/n)$ excess risk bound for BRM, without variance reduction, extra regularization, or restrictive independence assumptions on minibatch sampling. The results hold for standard neural-network parameterizations and minibatch SGD. | Our analysis yields an $\mathcal{O}(1/n)$ on-average argument-stability bound for Bellman residual minimization—doubling the best known sample-complexity exponent for convex–concave saddle problems. | learning theory | https://openreview.net/pdf?id=eHxQc2Q0aw | 2025-09-14T17:17:33 | 5 | [
{
"id": "dVNEQM20Wb",
"forum": "eHxQc2Q0aw",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission5065/Reviewer_MpYr",
"reviewer_name": "Reviewer_MpYr",
"rating": 2,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "The pa... |
0miO9v1jeC | https://openreview.net/forum?id=0miO9v1jeC | TAR: Token Adaptive Routing Framework for LLMs Token-level Semantic Correction Inspired by Neuro-Linguistic Pathways | 3 | 3 | [
2,
2,
4,
4
] | [
4,
3,
3,
2
] | 4 | [
"large language models; math reasoning; brain-inspired; adaptive routing; token semantic correction"
] | Large language models (LLMs) often suffer from cascading errors in math reasoning due to token-level semantic defects. A key limitation is that the reliance on unidirectional feedforward pathways makes LLMs unable to dynamically correct token-level defects during reasoning. In contrast, neuro-linguistic pathways in the human brain—centered on Broca’s and Wernicke’s areas—operate as a closed loop, integrating semantics through feedforward pathways while leveraging feedback circuit for error correction and signal adaptation. The loop involves conflict detection in the anterior cingulate cortex (ACC), cross-regional error transmission via the arcuate fasciculus/IFOF, and compensatory reprocessing in the DLPFC–Broca circuit. Inspired by the functional architecture of neuro-linguistic pathways, we propose a Token Adaptive Routing (TAR) framework that establishes a brain-inspired self-correcting loop in LLMs without requiring parameter fine-tuning. TAR comprises three components: (1) \textbf{Semantic Defect Monitor}, analogous to the anterior cingulate cortex (ACC) for identifying tokens with semantic defects; (2) \textbf{Adaptive Router}, resembling the arcuate fasciculus/IFOF for routing defective tokens to the most compatible LLM functional block; and (3) Feedback-based Re-representation, inspired by the DLPFC–Broca circuit for correcting semantic defects. Experiments show that TAR improves accuracy and reduces the number of inference tokens. On the challenging AIME25 benchmark, TAR improves the accuracy of Qwen3-1.7B by +3.36% while reducing inference tokens by 13.7%. Furthermore, we reveal that maintaining high token confidence is essential for reasoning performance, and deeper blocks in LLMs play a crucial role in shortening reasoning depth. 
Our code is available at https://anonymous.4open.science/r/warehouse-25F5 | We propose a brain-inspired Token Adaptive Routing framework that enables LLMs to self-correct token-level semantic errors, improving reasoning accuracy while reducing inference tokens. | unsupervised, self-supervised, semi-supervised, and supervised representation learning | https://openreview.net/pdf?id=0miO9v1jeC | 2025-09-20T16:22:51 | 4 | [
{
"id": "voMGPoVWiW",
"forum": "0miO9v1jeC",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission24412/Reviewer_JgFs",
"reviewer_name": "Reviewer_JgFs",
"rating": 2,
"confidence": 4,
"soundness": 1,
"contribution": 2,
"presentation": 1,
"summary": "The p... |
WzLjwv8KAn | https://openreview.net/forum?id=WzLjwv8KAn | Which Cultural Lens Do Models Adopt? On Cultural Positioning Bias and Agentic Mitigation in LLMs | 2.5 | 3.5 | [
2,
2,
4,
2
] | [
3,
4,
3,
4
] | 4 | [
"Bias",
"Culture",
"LLM",
"Generation",
"Agent"
] | Large language models (LLMs) have unlocked a wide range of downstream generative applications.
However, we found that they also risk perpetuating subtle fairness issues tied to culture, positioning their generations from the perspectives of the mainstream US culture while demonstrating salient externality towards non-mainstream ones.
In this work, we identify and systematically investigate this novel **culture positioning bias**, in which an LLM’s default generative stance aligns with a mainstream view and treats other cultures as "outsiders".
We propose the ***CultureLens*** benchmark with 4,000 generation prompts and 3 evaluation metrics for quantifying this bias through the lens of a *culturally situated interview script generation* task, in which an LLM is positioned as an on-site reporter interviewing local people across 10 diverse cultures.
Empirical evaluation on 5 state-of-the-art LLMs reveals a stark pattern: while models adopt insider tones in over 88\% of US-contexted scripts on average, they disproportionately adopt outsider stances for less dominant cultures.
To resolve these biases, we propose *2 inference-time mitigation methods*: a baseline prompt-based **Fairness Intervention Pillars (FIP)** method, and a structured **Mitigation via Fairness Agents (MFA)** framework consisting of 2 pipelines:
(1) **MFA-SA (Single-Agent)** introduces a self-reflection and rewriting loop based on fairness guidelines.
(2) **MFA-MA (Multi-Agent)** structures the process into a hierarchy of specialized agents: a Planner Agent (initial script generation), a Critique Agent (evaluates the initial script against fairness pillars), and a Refinement Agent (incorporates feedback to produce a polished, unbiased script).
Empirical results demonstrate that agent-based MFA methods achieve outstanding and robust performance in mitigating the culture positioning bias:
For instance, on the CAG metric, *MFA-SA reduces bias in the Llama model by 89.70\% and MFA-MA mitigates bias in Qwen by 82.55\%*.
These findings showcase the effectiveness of agent-based methods as a promising direction for mitigating biases in generative LLMs. | alignment, fairness, safety, privacy, and societal considerations | https://openreview.net/pdf?id=WzLjwv8KAn | 2025-09-20T15:05:18 | 5 | [
{
"id": "Ks50sMgkwg",
"forum": "WzLjwv8KAn",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission24027/Reviewer_N7qu",
"reviewer_name": "Reviewer_N7qu",
"rating": 2,
"confidence": 3,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This ... | |
avdPTUXdPG | https://openreview.net/forum?id=avdPTUXdPG | Dissecting Demystifying Region-Based Representations in MLLMs | 3 | 3 | [
4,
4,
2,
2
] | [
3,
3,
3,
3
] | 4 | [
"Vision Language Models",
"Multimodal Models"
] | Multimodal Large Language Models (MLLMs) typically process visual information as a flat sequence of image patch tokens, which is computationally expensive and lacks explicit semantic structure. This paper provides a systematic, vision-centric analysis of region-based representations, which group patches into semantically meaningful regions, as a more efficient and interpretable alternative. Our investigation is grounded in a key finding: MLLM performance is surprisingly robust to the input order of patch tokens, as the visual encoder already encodes spatial information within the patches. This insight provides a foundational justification for reorganizing patches into semantically coherent regions. We further identify that the success of region-based methods depends on the quality of the visual features, particularly their smoothness and locality. We systematically evaluate how to enhance these properties through vision backbone selection, feature normalization, and hybrid partitioning strategies. Through comprehensive evaluations, we demonstrate that optimized region-based representations are a competitive alternative to patch-based ones, offering a compelling path towards more efficient, interpretable, and performant MLLMs. | Dissecting Demystifying Region-Based Representations in MLLMs | applications to computer vision, audio, language, and other modalities | https://openreview.net/pdf?id=avdPTUXdPG | 2025-09-19T22:59:05 | 4 | [
{
"id": "PY4oNc5j0b",
"forum": "avdPTUXdPG",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission19156/Reviewer_xyPG",
"reviewer_name": "Reviewer_xyPG",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This ... |
vK6iDcs8SM | https://openreview.net/forum?id=vK6iDcs8SM | BulletGen: Improving 4D Reconstruction with Bullet-Time Generation | 4 | 3.75 | [
4,
6,
4,
2
] | [
4,
4,
3,
4
] | 4 | [
"4D reconstruction",
"bullet-time",
"generative models"
] | Transforming casually captured, monocular videos into fully immersive dynamic experiences is a highly ill-posed task, and comes with significant challenges, e.g., reconstructing unseen regions, and dealing with the ambiguity in monocular depth estimation. In this work we introduce BulletGen, an approach that takes advantage of generative models to correct errors and complete missing information in a Gaussian-based dynamic scene representation. This is done by aligning the output of a diffusion-based video generation model with the 4D reconstruction at a single frozen "bullet-time" step. The generated frames are then used to supervise the optimization of the 4D Gaussian model. Our method seamlessly blends generative content with both static and dynamic scene components, achieving state-of-the-art results on both novel-view synthesis, and 2D/3D tracking tasks. | We improve 4D reconstruction from monocular videos by augmenting with bullet-time reconstructions from a generative model. | applications to computer vision, audio, language, and other modalities | https://openreview.net/pdf?id=vK6iDcs8SM | 2025-09-18T23:08:10 | 4 | [
{
"id": "oHmENQ7Rpb",
"forum": "vK6iDcs8SM",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12477/Reviewer_q93b",
"reviewer_name": "Reviewer_q93b",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This ... |
lNcc1TypMd | https://openreview.net/forum?id=lNcc1TypMd | Beyond Log Likelihood: Probability-Based Objectives for Supervised Fine-Tuning across the Model Capability Continuum | 5 | 3.75 | [
4,
6,
6,
4
] | [
3,
4,
4,
4
] | 4 | [
"Post-Training",
"SFT",
"training objectives"
] | Supervised fine-tuning (SFT) is the standard approach for post-training large language models (LLMs), yet it often shows limited generalization. We trace this limitation to its default training objective: negative log likelihood (NLL). While NLL is classically optimal when training from scratch, post-training operates in a different paradigm and could violate its optimality assumptions, where models already encode task-relevant priors and supervision can be long and noisy. To this end, we study a general family of probability-based objectives and characterize their effectiveness under different conditions. Through comprehensive experiments and extensive ablation studies across 7 model backbones, 14 benchmarks, and 3 domains, we uncover a critical dimension that governs objective behavior: the *model-capability continuum*. Near the *model-strong* end, prior-leaning objectives that downweight low-probability tokens (*e.g.,* $-p$, $-p^{10}$, thresholded variants) consistently outperform NLL; toward the *model-weak* end, NLL dominates; in between, no single objective prevails. Our theoretical analysis further elucidates how objectives trade places across the continuum, providing a principled foundation for adapting objectives to model capability. | We revisit supervised fine-tuning (SFT) for large language models, introducing a model-capability continuum that shows negative log-likelihood is not universally optimal and characterizes when alternative objectives succeed or fail. | foundation or frontier models, including LLMs | https://openreview.net/pdf?id=lNcc1TypMd | 2025-09-19T08:22:36 | 4 | [
{
"id": "ZOgxijcz1o",
"forum": "lNcc1TypMd",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission14671/Reviewer_FZpg",
"reviewer_name": "Reviewer_FZpg",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "LLMs ... |
BZ1vutP53o | https://openreview.net/forum?id=BZ1vutP53o | TEN-DM: Topology-Enhanced Diffusion Model for Spatio-Temporal Event Prediction | 4 | 3.666667 | [
6,
2,
4
] | [
3,
4,
4
] | 3 | [
"Spatio-temporal point process",
"Diffusion model",
"Topological data analysis"
] | Spatio-temporal point process (STPP) data appear in many domains. A natural way to model them is to describe how the instantaneous event rate varies over space and time given the observed history, which enables interpretation, interaction detection, and forecasting. Traditional parametric kernel-based models, while historically dominant, struggle to capture complex nonlinear patterns. In contrast, deep learning methods leverage the representational power of neural networks to aggregate historical events and integrate spatio-temporal point processes. However, existing deep learning methods often process space and time independently, overlooking the spatio-temporal dependencies. To address this limitation, we propose a novel method called Topology-ENhanced Diffusion Model (TEN-DM), with two key components, namely spatio-temporal graph construction and multimodal topological feature representation learning. Further, we use a temporal query technique to effectively capture periodic temporal patterns for learning effective temporal representations. Extensive experiments show the effectiveness of TEN-DM on multiple STPP datasets compared to state-of-the-art methods. | learning on time series and dynamical systems | https://openreview.net/pdf?id=BZ1vutP53o | 2025-09-19T14:14:20 | 3 | [
{
"id": "kl72bsgome",
"forum": "BZ1vutP53o",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission16261/Reviewer_rwpn",
"reviewer_name": "Reviewer_rwpn",
"rating": 6,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "The p... | |
yirunib8l8 | https://openreview.net/forum?id=yirunib8l8 | Depth Anything 3: Recovering the Visual Space from Any Views | 7 | 3.5 | [
8,
8,
6,
6
] | [
3,
4,
4,
3
] | 4 | [
"Depth Estimation"
] | We present Depth Anything 3 (DA3), a model that predicts spatially consistent geometry from an arbitrary number of visual inputs, with or without known camera poses.
In pursuit of minimal modeling, DA3 yields two key insights:
a single plain transformer (e.g., vanilla DINOv2 encoder) is sufficient as a backbone without architectural specialization, and a singular depth-ray prediction target obviates the need for complex multi-task learning. Through our teacher-student training paradigm, the model achieves a level of detail and generalization on par with Depth Anything 2 (DA2).
We establish a new visual geometry benchmark covering camera pose estimation, any-view geometry and visual rendering. On this benchmark, DA3 sets a new state-of-the-art across all tasks, surpassing prior SOTA VGGT by an average of 35.7\% in camera pose accuracy and 23.6\% in geometric accuracy. Moreover, it outperforms DA2 in monocular depth estimation. All models are trained exclusively on public academic datasets. | Depth Anything 3 uses a single vanilla DINOv2 transformer to take arbitrary input views and outputs consistent depth and ray maps, delivering leading pose, geometry, and visual rendering performance. | applications to computer vision, audio, language, and other modalities | https://openreview.net/pdf?id=yirunib8l8 | 2025-09-12T02:22:07 | 4 | [
{
"id": "88WiRwkmUt",
"forum": "yirunib8l8",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission4157/Reviewer_xgar",
"reviewer_name": "Reviewer_xgar",
"rating": 8,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The pa... |
DDaaA4Uldp | https://openreview.net/forum?id=DDaaA4Uldp | XTransfer: Modality-Agnostic Few-Shot Model Transfer for Human Sensing at the Edge | 4 | 3.5 | [
4,
2,
6,
4
] | [
3,
3,
4,
4
] | 4 | [
"Human Sensing",
"Cross-Modality Few-Shot Model Transfer",
"Edge AI"
] | Deep learning for human sensing on edge systems presents significant potential for smart applications. However, its training and development are hindered by the limited availability of sensor data and resource constraints of edge systems. While transferring pre-trained models to different sensing applications is promising, existing methods often require extensive sensor data and computational resources, resulting in high costs and poor adaptability in practice. In this paper, we propose XTransfer, a first-of-its-kind method enabling modality-agnostic, few-shot model transfer with resource-efficient design. XTransfer flexibly uses single or multiple pre-trained models and transfers knowledge across different modalities by (i) model repairing that safely mitigates modality shift by adapting pre-trained layers with only few sensor data, and (ii) layer recombining that efficiently searches and recombines layers of interest from source models in a layer-wise manner to create compact models. We benchmark various baselines across diverse human sensing datasets spanning different modalities. Comprehensive results demonstrate that XTransfer achieves state-of-the-art performance while significantly reducing the costs of sensor data collection, model training, and edge deployment. | This paper proposes a pioneering and scalable method that enables modality-agnostic few-shot model transfer for advancing human sensing on edge systems. | transfer learning, meta learning, and lifelong learning | https://openreview.net/pdf?id=DDaaA4Uldp | 2025-09-18T22:42:35 | 4 | [
{
"id": "R0qel6PfsU",
"forum": "DDaaA4Uldp",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12243/Reviewer_BTLu",
"reviewer_name": "Reviewer_BTLu",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This ... |
YM3SskmtCE | https://openreview.net/forum?id=YM3SskmtCE | ATTS: Asynchronous Test-Time Scaling via Conformal Prediction | 6 | 3 | [
8,
8,
2
] | [
3,
2,
4
] | 3 | [
"Conformal Prediction",
"Test-Time Scaling",
"Speculative Decoding"
] | Large language models (LLMs) benefit from test-time scaling but are often hampered by high inference latency. Speculative decoding is a natural way to accelerate the scaling process; however, scaling along both the parallel and sequential dimensions poses significant challenges, including substantial memory-bound execution and synchronization overhead. We introduce *ATTS* (Asynchronous Test-Time Scaling), a statistically guaranteed adaptive scaling framework that follows the hypothesis testing process to address these challenges. By revisiting arithmetic intensity, *ATTS* identifies synchronization as the primary bottleneck. It enables asynchronous inference through online calibration and proposes an ordinal classification algorithm that supports a three-stage rejection sampling pipeline, scaling along both the sequential and parallel axes. Across experiments on the MATH, AMC23, AIME24, and AIME25 datasets and across multiple draft–target model families, we show that *ATTS* delivers up to *56.7x* speedup in test-time scaling and a *4.14x* throughput improvement, while maintaining accurate control of the rejection rate, reducing latency and memory overhead, and incurring no accuracy loss. By scaling both in parallel and sequential dimensions, we enable the 1.5B/70B draft/target model combination to achieve the performance of the state-of-the-art reasoning model o3-mini (high) on the AIME dataset. We submit the anonymous repository: anonymous.4open.science/r/Asynchronous-Test-Time-Scaling-5940. | applications to computer vision, audio, language, and other modalities | https://openreview.net/pdf?id=YM3SskmtCE | 2025-09-18T14:52:30 | 10 | [
{
"id": "HqbK5DjFzW",
"forum": "YM3SskmtCE",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission10643/Reviewer_LsxV",
"reviewer_name": "Reviewer_LsxV",
"rating": 8,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The a... | |
MnQD69han5 | https://openreview.net/forum?id=MnQD69han5 | DFVEdit: Conditional Delta Flow Vector for Zero-shot Video Editing | 4 | 3.5 | [
4,
4,
4,
4
] | [
3,
4,
3,
4
] | 4 | [
"zeroshot",
"video editing",
  "training free",
"video transformer"
] | The advent of Video Diffusion Transformers (Video DiTs) marks a milestone in video generation. However, directly applying existing video editing methods to Video DiTs often incurs substantial computational overhead, due to resource-intensive attention modification or fine-tuning. To alleviate this problem, we present DFVEdit, an efficient zero-shot video editing method tailored for Video DiTs. DFVEdit eliminates the need for both attention engineering and fine-tuning by directly operating on clean latents via flow transformation. To be more specific, we observe that editing and sampling can be unified under the continuous flow perspective. Building upon this foundation, we propose the Conditional Delta Flow Vector (CDFV) -- a theoretically unbiased estimation of DFV -- and integrate Implicit Cross Attention (ICA) guidance as well as Embedding Reinforcement (ER) to further enhance editing quality. DFVEdit excels in practical efficiency, offering at least 20x inference speed-up and 85% memory reduction on Video DiTs compared to attention-engineering-based editing methods. Extensive quantitative and qualitative experiments demonstrate that DFVEdit can be seamlessly applied to popular Video DiTs (\emph{e.g.}, CogVideoX and Wan2.1), attaining state-of-the-art performance on structural fidelity, spatial-temporal consistency, and editing quality. | applications to computer vision, audio, language, and other modalities | https://openreview.net/pdf?id=MnQD69han5 | 2025-09-19T10:03:48 | 4 | [
{
"id": "igcn3kldv7",
"forum": "MnQD69han5",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission15070/Reviewer_eh2p",
"reviewer_name": "Reviewer_eh2p",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "The a... | |
bl9hFm04Lc | https://openreview.net/forum?id=bl9hFm04Lc | Can AI Truly Represent Your Voice in Deliberations? A Comprehensive Study of Large-Scale Opinion Aggregation with LLMs | 5 | 3.5 | [
6,
4,
4,
6
] | [
4,
4,
3,
3
] | 4 | [
"Human Study; Reliable LLM; Public Deliberation; Computational Social Science; Large-Scale Evaluation"
] | Large-scale public deliberations generate thousands of free-form contributions that must be synthesized into representative and neutral summaries for policy use. While LLMs have been shown to be a promising tool to generate summaries for large-scale deliberations, they also risk underrepresenting minority perspectives and exhibiting bias with respect to the input order, raising fairness concerns in high-stakes contexts. Studying and fixing these issues requires a comprehensive evaluation at a large scale, yet current practice often relies on LLMs as judges, which show weak alignment with human judgments. To address this, we present DeliberationBank, a large-scale human-grounded dataset with (1) opinion data spanning ten deliberation questions created by 3,000 participants and (2) summary judgment data annotated by 4,500 participants across four dimensions (representativeness, informativeness, neutrality, policy approval). Using these datasets, we train DeliberationJudge, a fine-tuned DeBERTa model that can rate deliberation summaries from individual perspectives. DeliberationJudge is more efficient and more aligned with human judgments compared to a wide range of LLM judges. With DeliberationJudge, we evaluate 15+ LLMs and reveal persistent weaknesses in deliberation summarization, especially underrepresentation of minority positions. Our framework provides a scalable and reliable way to evaluate deliberation summarization, helping ensure AI systems are more representative and equitable for policymaking. | alignment, fairness, safety, privacy, and societal considerations | https://openreview.net/pdf?id=bl9hFm04Lc | 2025-09-04T00:20:50 | 4 | [
{
"id": "RKsweVwgMJ",
"forum": "bl9hFm04Lc",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission1768/Reviewer_GiWi",
"reviewer_name": "Reviewer_GiWi",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The au... | |
DzecbBEmud | https://openreview.net/forum?id=DzecbBEmud | Differentially and Integrally Attentive Convolutional-based Photoplethysmography Signal Quality Classification | 2.5 | 4 | [
2,
2,
4,
2
] | [
4,
5,
3,
4
] | 4 | [
"Differential Attention",
  "Differential Integral Attention",
"Signal Quality",
"Photoplethysmography",
"Wearables"
] | Photoplethysmography (PPG) is a non-intrusive and cost-effective optical technology that detects changes in blood volume within tissues, providing insights into the body’s physiological dynamics over time. By analyzing PPG data as a time series, valuable information about cardiovascular health and other physiological parameters such as Heart Rate Variability (HRV), Peripheral Oxygen Saturation (SpO2), and sleep status can be estimated. With the ever increasing user adoption of wearable devices like smartwatches, Health Monitoring Applications (HMA) are gaining popularity due to their ability to track various health metrics, including sleep patterns, heart rate, and activity tracking, by making use of PPG sensors to monitor different aspects of an individual’s health and wellness. However, reliable
health indicators require high-quality PPG signals, which are often contaminated with noise and artifacts caused by movement when using wearables. Hence, Signal Quality Assessment (SQA) is crucial in determining the trustworthiness of PPG data for HMA applications. We present a new PPG SQA approach that leverages recent advances in differential and integral attention-based strategies, coupled with a two-stage procedure that promptly discards highly anomalous segments, to enhance the performance of Convolutional Neural Network (CNN)-based SQA classifiers, balancing storage size against classifier accuracy and yielding models that are robust across PPG signals from different devices. Our methods achieve F1-scores between 0.9194 and 0.9865 across four expert-annotated datasets from different wearable devices. | Improving signal quality classification in photoplethysmography-based health applications using differential and integral attention | other topics in machine learning (i.e., none of the above) | https://openreview.net/pdf?id=DzecbBEmud | 2025-09-16T02:18:58 | 4 | [
{
"id": "g59Nl6Pi7G",
"forum": "DzecbBEmud",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission6230/Reviewer_7LK8",
"reviewer_name": "Reviewer_7LK8",
"rating": 2,
"confidence": 4,
"soundness": 1,
"contribution": 2,
"presentation": 3,
"summary": "This p... |
bppDDqbO3V | https://openreview.net/forum?id=bppDDqbO3V | Dissecting the Role of Positional Encoding in Length Generalization | 4.5 | 3 | [
2,
4,
6,
6
] | [
4,
2,
3,
3
] | 4 | [
"Mechanistic Interpretation",
"Positional Encoding",
"Length Generalization",
"Iteration Head",
"Reasoning Tasks."
] | Length generalization (LG) is a persistent challenge for Transformers. Despite recent studies improving the models' LG capability, its underlying mechanisms are still underexplored. To better understand LG, we propose that LG requires alignment of the model’s inductive bias with the task’s computational structure, and validate this view with experiments on Transformers. Focusing on iterative tasks (e.g., Polynomial Iteration, Parity, Binary Copy), we systematically analyze different positional encodings (PEs) and find that the misalignment persists for Transformers: the structural bias of softmax attention and computational biases from PEs destabilize LG under extrapolation. Notably, Transformers without positional encoding (NoPE) could show partial LG capability, potentially because implicit position encoding through hidden-state statistics and contextual token distributions preserves the consistent computation in extrapolation, though these signals decay with length, leaving the encoding misaligned with the task. Building on this mechanistic analysis, we introduce a lightweight enhancement—value-side relative coding with logit rescaling—that better aligns inductive bias with task structure. This sustains iterative computation and improves LG, offering insights for future PE design. | Exploring the mechanism of Positional Encoding in Length Generalization on Reasoning Tasks | interpretability and explainable AI | https://openreview.net/pdf?id=bppDDqbO3V | 2025-09-19T19:05:25 | 4 | [
{
"id": "yvMFEH7FbZ",
"forum": "bppDDqbO3V",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission17734/Reviewer_nP5W",
"reviewer_name": "Reviewer_nP5W",
"rating": 2,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This ... |
ehtVTpcjES | https://openreview.net/forum?id=ehtVTpcjES | T³: Test-Time Model Merging in VLMs for Zero-Shot Medical Imaging Analysis | 3.5 | 3.5 | [
2,
4,
2,
6
] | [
4,
3,
3,
4
] | 4 | [
"medical imaging",
"vision language models",
"zero-shot generalization",
"model merging",
"healthcare"
] | In medical imaging, vision-language models face a critical duality: \textit{pretrained} networks offer broad robustness but lack subtle, modality-specific characteristics, while fine-tuned \textit{expert} models achieve high in-distribution accuracy yet falter under modality shift. Existing model-merging techniques, designed for natural-image benchmarks, are simple and efficient but fail to deliver consistent gains across diverse medical modalities; their static interpolation limits reliability in varied clinical tasks.
To address this, we introduce \textbf{T}est-\textbf{T}ime \textbf{T}ask adaptive merging ($\mathbb{T^{3}}$), a backpropagation-free framework that computes \textit{per-sample} interpolation coefficients via the Jensen–Shannon divergence between the two models’ output distributions. $\mathbb{T^{3}}$ dynamically preserves local precision when models agree and defers to generalist robustness under drift. To overcome the inference costs of sample-wise merging, we further propose a batch-wise extension, $\mathbb{T^{3}}_{\mathcal{B}}$, that computes a merging coefficient across a batch of samples, dramatically reducing this computational bottleneck.
Recognizing the lack of a standardized medical-merging benchmark, we present a rigorous cross-evaluation protocol spanning in-domain, base-to-novel, and corruption settings across four modalities. Empirically, $\mathbb{T^{3}}$ sets a new state of the art in Top-1 accuracy and error reduction, outperforming strong baselines while maintaining efficiency, paving the way for adaptive MVLM deployment in clinical settings.
{
"id": "NMIjszVuSH",
"forum": "ehtVTpcjES",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12886/Reviewer_VAaC",
"reviewer_name": "Reviewer_VAaC",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This ... |
XNk56rmmiy | https://openreview.net/forum?id=XNk56rmmiy | Towards Adaptive ML Benchmarks: Web-Agent-Driven Construction, Domain Expansion, and Metric Optimization | 3.333333 | 3.333333 | [
2,
2,
6
] | [
4,
4,
2
] | 3 | [
"Benchmark",
"Large Language Models",
"Language Agents",
"End-to-End Machine Learning",
"Evaluation Framework",
"Data Science Automation"
] | Recent advances in large language models (LLMs) have enabled the emergence of general-purpose agents for automating end-to-end machine learning (ML) workflows, including data analysis, feature engineering, model training, and competition solving. However, existing benchmarks remain limited in task coverage, domain diversity, difficulty modeling, and evaluation rigor, failing to capture the full capabilities of such agents in realistic settings.
We present TAM Bench, a diverse, realistic, and structured benchmark for evaluating LLM-based agents on end-to-end ML tasks. TAM Bench features three key innovations:
(1) A browser automation and LLM-based task acquisition system that automatically collects and structures ML challenges from platforms such as Kaggle, AIcrowd, and Biendata, spanning multiple task types and data modalities (e.g., tabular, text, image, graph, audio);
(2) A leaderboard-driven difficulty modeling mechanism that estimates task complexity using participant counts and score dispersion, enabling scalable and objective task calibration;
(3) A multi-dimensional evaluation framework incorporating performance, format compliance, constraint adherence, and task generalization.
Based on 150 curated AutoML tasks, we construct three benchmark subsets of different sizes—Lite, Medium, and Full—designed for varying evaluation scenarios. The Lite version, with 18 tasks and balanced coverage across modalities and difficulty levels, serves as a practical testbed for daily benchmarking and comparative studies. | datasets and benchmarks | https://openreview.net/pdf?id=XNk56rmmiy | 2025-09-18T21:19:00 | 3 | [
{
"id": "IoNKb2h43G",
"forum": "XNk56rmmiy",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission11551/Reviewer_HQoU",
"reviewer_name": "Reviewer_HQoU",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 1,
"summary": "The p... | |
4T9ncuf08p | https://openreview.net/forum?id=4T9ncuf08p | Dataset Regeneration for Cross Domain Recommendation | 6 | 3 | [
6,
4,
8
] | [
3,
3,
3
] | 3 | [
"Recommender System",
"Cross-domain recommendation",
"Dataset Regeneration"
] | Cross-domain recommendation (CDR) has emerged as an effective strategy to mitigate data sparsity and cold-start challenges by transferring knowledge from a source domain to a target domain. Despite recent progress, two key issues remain: (i) Sparse overlap. In real-world datasets such as Amazon, the proportion of users active in both domains is extremely low, significantly limiting the effectiveness of many state-of-the-art CDR approaches. (ii) Negative transfer. Existing methods primarily address this problem at the model level, often assuming that logged interactions are unbiased and noise-free. In practice, however, recommender data contain numerous spurious correlations, and this issue is exacerbated in CDR due to domain heterogeneity.
To address these challenges, we propose a dataset regeneration framework. First, we leverage a prediction model to generate a pool of high-confidence candidate interactions to link non-overlapping target-domain users and source-domain items. Second, inspired by causal inference, we introduce a filtering process designed to prune spurious interactions. This process identifies and removes not only noisy edges created during generation but also those from the original dataset, retaining only the interactions that have a positive causal effect on the target-domain performance. Through these two processes, we can regenerate a source-domain dataset that exhibits a tighter coupling and a more explicit causal connection with the target domain.
By integrating our method with three representative recommendation backbones—LightGCN, BiTGCF, and CUT—we show that it significantly boosts their predictive accuracy on the target domain, achieving substantial gains of up to 23.81\% in Recall@10 and 22.22\% in NDCG@10. | applications to computer vision, audio, language, and other modalities | https://openreview.net/pdf?id=4T9ncuf08p | 2025-09-19T14:22:37 | 3 | [
{
"id": "LCnihkLKE7",
"forum": "4T9ncuf08p",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission16305/Reviewer_p3jn",
"reviewer_name": "Reviewer_p3jn",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 2,
"summary": "This ... | |
xFdT63wm5e | https://openreview.net/forum?id=xFdT63wm5e | Unified Continuous Generative Models for Denoising-based Diffusion | 5.5 | 3.5 | [
4,
6,
6,
6
] | [
3,
3,
3,
5
] | 4 | [
"generative modeling",
"denoising diffusion",
"consistency model",
"image generation"
] | Recent advances in continuous generative models, encompassing multi-step processes such as diffusion and flow matching (typically requiring $8$-$1000$ steps) and few-step methods such as consistency models (typically $1$-$8$ steps), have yielded impressive generative performance.
However, existing work often treats these approaches as distinct paradigms, leading to disparate training and sampling methodologies.
We propose a unified framework for the training, sampling, and analysis of diffusion, flow matching, and consistency models.
Within this framework, we derive a surrogate unified objective that, for the first time, theoretically shows that the few-step objective can be viewed as the multi-step objective plus a regularization term.
Building on this framework, we introduce the **U**nified **C**ontinuous **G**enerative **M**odels **T**rainer and **S**ampler (**UCGM**), which enables efficient and stable training of both multi-step and few-step models.
Empirically, our framework achieves state-of-the-art results.
On ImageNet $256\times256$ with a $675\text{M}$ diffusion transformer, UCGM-T trains a multi-step model achieving $1.30$ FID in $20$ steps, and a few-step model achieving $1.42$ FID in only $2$ steps.
Moreover, applying UCGM-S to REPA-E improves its FID from $1.26$ (at $250$ steps) to $1.06$ in only $40$ steps, without additional cost. | generative models | https://openreview.net/pdf?id=xFdT63wm5e | 2025-09-20T17:57:38 | 4 | [
{
"id": "c2rOfFPh8s",
"forum": "xFdT63wm5e",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission24943/Reviewer_iL3K",
"reviewer_name": "Reviewer_iL3K",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This ... | |
RDAhLHEHDm | https://openreview.net/forum?id=RDAhLHEHDm | Lost in Tokenization: Context as the Key to Unlocking Biomolecular Understanding in Scientific LLMs | 6.5 | 3.5 | [
6,
6,
6,
8
] | [
3,
3,
4,
4
] | 4 | [
"Biomolecular learning",
"Protein sequence"
] | Scientific Large Language Models (Sci-LLMs) have emerged as a promising frontier for accelerating biological discovery. However, these models face a fundamental challenge when processing raw biomolecular sequences: the tokenization dilemma. Whether treating sequences as a specialized language, risking the loss of functional motif information, or as a separate modality, introducing formidable alignment challenges, current strategies fundamentally limit their reasoning capacity. We challenge this sequence-centric paradigm by positing that a more effective strategy is to provide Sci-LLMs with high-level structured context derived from established bioinformatics tools, thereby bypassing the need to interpret low-level noisy sequence data directly. Through a systematic comparison of leading Sci-LLMs on biological reasoning tasks, we tested three input modes: sequence-only, context-only, and a combination of both. Our findings are striking: the context-only approach consistently and substantially outperforms all other modes. Even more revealing, the inclusion of the raw sequence alongside its high-level context consistently degrades performance, indicating that raw sequences act as informational noise, even for models with specialized tokenization schemes. These results suggest that the primary strength of existing Sci-LLMs lies not in their nascent ability to interpret biomolecular syntax from scratch, but in their profound capacity for reasoning over structured, human-readable knowledge. Therefore, we argue for reframing Sci-LLMs not as sequence decoders, but as powerful reasoning engines over expert knowledge. This work lays the foundation for a new class of hybrid scientific AI agents, repositioning the developmental focus from direct sequence interpretation towards high-level knowledge synthesis. | applications to physical sciences (physics, chemistry, biology, etc.) | https://openreview.net/pdf?id=RDAhLHEHDm | 2025-09-16T23:39:46 | 4 | [
{
"id": "mP2ddusOo8",
"forum": "RDAhLHEHDm",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission7811/Reviewer_oqHE",
"reviewer_name": "Reviewer_oqHE",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This p... | |
q6kXd8Gpfj | https://openreview.net/forum?id=q6kXd8Gpfj | LearNAT: Learning NL2SQL with AST-guided Task Decomposition for Large Language Models | 6 | 4.333333 | [
4,
6,
8
] | [
5,
4,
4
] | 3 | [
"Large Language Model",
"Text-to-SQL"
] | Natural Language to SQL (NL2SQL) aims to translate natural language queries into executable SQL statements, offering non-expert users intuitive access to databases. While recent approaches leveraging large-scale private LLMs such as GPT-4 have achieved state-of-the-art results, they face two critical challenges: the lack of openness and reproducibility, and the prohibitive computational cost of test-time scaling. To address these issues, we explore improving the model-level performance of small-scale public LLMs in NL2SQL under resource-constrained settings. Our exploratory experiments reveal the potential of task decomposition for enhancing NL2SQL performance, but also highlight the difficulty of enabling LLMs to decompose queries effectively. Motivated by these findings, we propose LearNAT, a novel framework designed to enhance LLMs’ decomposition capabilities. LearNAT introduces (1) a Decomposition Synthesis Procedure, which leverages AST-guided search with pruning strategies to generate verifiable and efficient decompositions, and (2) Margin-Aware Reinforcement Learning, which provides fine-grained preference optimization for multi-step reasoning beyond standard DPO. Extensive experiments on benchmark datasets demonstrate that LearNAT significantly improves the performance of small-scale LLMs, achieving results comparable to GPT-4 with only a 7B parameter model. These results validate the effectiveness of verifiable decomposition and fine-grained preference learning in advancing NL2SQL towards openness, transparency, and efficiency.
Our code is publicly available at https://anonymous.4open.science/r/LearNAT. | applications to computer vision, audio, language, and other modalities | https://openreview.net/pdf?id=q6kXd8Gpfj | 2025-09-20T10:39:26 | 3 | [
{
"id": "SLzmpRkouv",
"forum": "q6kXd8Gpfj",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission22830/Reviewer_kPT4",
"reviewer_name": "Reviewer_kPT4",
"rating": 4,
"confidence": 5,
"soundness": 3,
"contribution": 2,
"presentation": 2,
"summary": "This ... | |
EbgCEd8gyN | https://openreview.net/forum?id=EbgCEd8gyN | Sysformer: Safeguarding Frozen Large Language Models with Adaptive System Prompts | 5 | 3.25 | [
6,
4,
6,
4
] | [
3,
3,
3,
4
] | 4 | [
"Large Language Models",
"AI Safety",
"Jailbreaks",
"Guardrails",
"Frozen Model adaptation"
] | As large language models (LLMs) are deployed in safety-critical settings, it is essential to ensure that their responses comply with safety standards. Prior research has revealed that LLMs often fail to grasp the notion of safe behaviors, resulting in either unjustified refusals to harmless prompts or the generation of harmful content. While substantial efforts have been made to improve their robustness, existing defenses often rely on costly fine-tuning of model parameters or employ suboptimal heuristic techniques. In this work, we take a novel approach to safeguard LLMs by learning to adapt the system prompts in instruction-tuned LLMs. While LLMs are typically pre-trained to follow a fixed system prompt, we investigate the impact of tailoring the system prompt to each specific user input on the safety of the responses. To this end, we propose Sysformer, a transformer model that updates an initial system prompt to a more robust system prompt in the LLM input embedding space while attending to the user prompt. While keeping the LLM parameters frozen, the Sysformer is trained to refuse to respond to a set of harmful prompts while responding ideally to a set of safe ones. Through extensive experiments on 5 LLMs from different families and 2 recent benchmarks, we demonstrate that Sysformer can significantly enhance the robustness of LLMs, leading to up to 80% gain in the refusal rate on harmful prompts while enhancing the compliance with the safe prompts by up to 90%. Results also generalize well to sophisticated jailbreaking attacks, making LLMs up to 100% more robust against different attack strategies. We hope our findings lead to cheaper safeguarding of LLMs and motivate future investigations into designing variable system prompts. | We present Sysformer, a transformer-based mechanism to adapt the system prompt based on the user prompt to boost the robustness of LLMs. 
| alignment, fairness, safety, privacy, and societal considerations | https://openreview.net/pdf?id=EbgCEd8gyN | 2025-09-18T23:43:05 | 4 | [
{
"id": "DnGSgQPPsM",
"forum": "EbgCEd8gyN",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12772/Reviewer_BNqo",
"reviewer_name": "Reviewer_BNqo",
"rating": 6,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "This ... |
VYQuICALXj | https://openreview.net/forum?id=VYQuICALXj | Cross-Modal Redundancy and the Geometry of Vision–Language Embeddings | 5 | 3.5 | [
8,
4,
6,
2
] | [
3,
3,
3,
5
] | 4 | [
"multimodal",
"concepts",
"sparse autoencoder",
"modality gap",
"applications of interpretability"
] | Vision–language models (VLMs) align images and text with remarkable success, yet the geometry of their shared embedding space remains poorly understood.
To probe this geometry, we begin from the Iso-Energy Assumption, which exploits cross-modal redundancy: a concept that is truly shared should exhibit the same average energy across modalities.
We operationalize this assumption with an Aligned Sparse Autoencoder (SAE) that encourages energy consistency during training while preserving reconstruction.
We find that this inductive bias changes the SAE solution without harming reconstruction, giving us a representation that serves as a tool for geometric analysis.
Sanity checks on controlled data with known ground truth confirm that alignment improves when Iso-Energy holds and remains neutral when it does not.
Applied to foundational VLMs, our framework reveals a clear structure with practical consequences:
**(*i*)** sparse *bimodal* atoms carry the entire *cross-modal* alignment signal;
**(*ii*)** *unimodal* atoms act as *modality-specific* biases and fully explain the modality gap;
**(*iii*)** removing unimodal atoms collapses the gap without harming performance;
**(*iv*)** restricting vector arithmetic to the bimodal subspace yields in-distribution edits and improved retrieval.
These findings suggest that the right inductive bias can both preserve model fidelity and render the latent geometry interpretable and actionable. | Understanding the geometry of multimodality through a concept-based approach, leading to applications like semantic vector arithmetic and modality gap free embeddings. | interpretability and explainable AI | https://openreview.net/pdf?id=VYQuICALXj | 2025-09-18T18:16:51 | 4 | [
{
"id": "P2FGxaMJlL",
"forum": "VYQuICALXj",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission11140/Reviewer_U7cA",
"reviewer_name": "Reviewer_U7cA",
"rating": 8,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 4,
"summary": "This ... |
32QQlzm9ft | https://openreview.net/forum?id=32QQlzm9ft | REFLEX-Med: Reinforcement for Label-Free Explainability in Unified Medical Reasoning | 3.666667 | 3.5 | [
4,
4,
2,
6,
2,
4
] | [
4,
4,
4,
2,
3,
4
] | 6 | [
"medical reasoning",
"large vision-language models",
"explainability"
] | Clinicians urgently need explanations they can audit, not merely fluent chains. Yet prevailing practices conflate interpretability with subjective human/LLM rationales, with post-hoc visuals loosely aligned to answers, or with answer–rationale consistency. These proxies are annotation-hungry, bias-prone, and crucially do not certify process verifiability: where the model looked and why it looked there. Meanwhile, reinforcement learning from feedback excels at answer verifiability but offers little support for constraining the provenance of attention or penalizing visually ungrounded reasoning. We introduce REFLEX-Med, a reinforcement framework that instantiates label-free explainability through two verifiable prerequisites: (i) faithful visual grounding, that is, text-conditioned localization in the image, and (ii) bi-directional cross-modal provenance, that is, a cycle of mutual traceability across image-text and text-text semantics. REFLEX-Med couples curriculum GRPO with two frozen rewards computed by a medical vision-language encoder: a visual fidelity reward aligning text-conditioned saliency between the model's own answer and an anchor text, and a bi-modal provenance reward enforcing image-text and text-text consistency in embedding space. Together with standard format and semantic-matching rewards, REFLEX-Med resists large VLM hallucination and attention-think drift, improving both answer quality and auditable faithfulness on unified medical reasoning (open- and close-ended VQA), all without human or LLM rationale annotations. | alignment, fairness, safety, privacy, and societal considerations | https://openreview.net/pdf?id=32QQlzm9ft | 2025-09-13T17:07:44 | 6 | [
{
"id": "7tRBIbg4Sf",
"forum": "32QQlzm9ft",
"review_number": 9,
"reviewer_id": "ICLR.cc/2026/Conference/Submission4734/Reviewer_vYv4",
"reviewer_name": "Reviewer_vYv4",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This p... | |
wWkyL8D9xd | https://openreview.net/forum?id=wWkyL8D9xd | FastFlow: Accelerating The Generative Flow Matching Models with Bandit Inference | 5.5 | 3.5 | [
4,
6,
6,
6
] | [
4,
3,
3,
4
] | 4 | [
"generative modelling",
"faster inference."
] | Flow-matching models deliver state-of-the-art fidelity in image and video generation, but the inherent sequential denoising process renders them slower. Existing acceleration methods like distillation, trajectory truncation, and consistency approaches are static, require retraining, and often fail to generalize across tasks. We propose FastFlow, a plug-and-play adaptive inference framework that accelerates generation in flow matching models. FastFlow identifies denoising steps that produce only minor adjustments to the denoising path and approximates them without using the full neural network models used for velocity predictions. The approximation utilizes finite-difference velocity estimates from prior predictions to efficiently extrapolate future states, enabling faster advancements along the denoising path at zero compute cost. This enables skipping computation at intermediary steps. We model the decision of how many steps to safely skip before requiring a full model computation as a multi-armed bandit problem. The bandit learns the optimal skips to balance speed with performance. FastFlow integrates seamlessly with existing pipelines and generalizes across image generation, video generation, and editing tasks. Experiments demonstrate a speedup of over $2.6\times$ while maintaining high-quality outputs. | Adaptive inference method for accelerating flow matching based visual generation. | generative models | https://openreview.net/pdf?id=wWkyL8D9xd | 2025-09-20T18:17:51 | 4 | [
{
"id": "AGajkCDJso",
"forum": "wWkyL8D9xd",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission25044/Reviewer_D8Ff",
"reviewer_name": "Reviewer_D8Ff",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 2,
"summary": "This ... |
vv8EcCoBfr | https://openreview.net/forum?id=vv8EcCoBfr | Bilateral Information-aware Test-time Adaptation for Vision-Language Models | 4.333333 | 4.166667 | [
6,
6,
4,
4,
4,
2
] | [
3,
5,
5,
4,
4,
4
] | 6 | [
"Test-time Adaptation",
"Vision Language Model"
] | Test-time adaptation (TTA) fine-tunes models using new data encountered during inference, which enables vision-language models to handle test data with covariate shifts. Unlike training-time adaptation, TTA does not require a test-distributed validation set or consider the worst-case distribution within a given tolerance. However, previous methods primarily focused on adaptation-objective design, while the data tend to be fully utilized or simply filtered through a fixed low-entropy selection criterion. In this paper, we analyze the weaknesses of previous selection criteria and find that selecting only a fixed proportion of low-entropy samples fails to ensure optimal performance across various datasets and can lead the model to become over-confident in wrongly classified samples, showing unexpected overfitting to atypical features and compromising effective adaptation. To improve upon them, we propose Bilateral Information-aware Test-Time Adaptation (BITTA), which simultaneously leverages two distinct parts of the test inputs during adaptation. Specifically, a dynamic proportion of low-entropy samples is used to learn the core representation under covariate shifts, while high-entropy samples are adopted to unlearn atypical features. This dual approach prevents the model from undesired memorization and ensures extensive optimal performance. Comprehensive experiments validate the effectiveness across various datasets and model architectures. | unsupervised, self-supervised, semi-supervised, and supervised representation learning | https://openreview.net/pdf?id=vv8EcCoBfr | 2025-09-17T09:29:50 | 6 | [
{
"id": "dSxbk5YjnL",
"forum": "vv8EcCoBfr",
"review_number": 6,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8180/Reviewer_kaur",
"reviewer_name": "Reviewer_kaur",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 4,
"summary": "The au... | |
1AYy3T3Xjk | https://openreview.net/forum?id=1AYy3T3Xjk | A Process-Level Method for Creativity Evaluation in LLM-Assisted Learning | 2.5 | 3.5 | [
2,
2,
4,
2
] | [
4,
3,
3,
4
] | 4 | [
"LLM",
"Creativity assessment",
"Process-level evaluation"
] | Interpretable creativity assessment remains challenging, and the adoption of large language models (LLMs) in education amplifies issues of subjectivity and opacity. This study presents a process-level evaluation approach for LLM-assisted learning that attributes learner-versus-model contributions from multi-turn student–LLM dialogues and scores four expert-elicited dimensions with rationale texts. Using 1,273 cleaned dialogues from 81 undergraduates across multiple domains, an auditable attribution protocol and an instruction-tuned evaluator are introduced to produce process-linked, interpretable rationales. Empirical evaluation against expert assessments indicates alignment with expert judgments. Claims are explicitly scoped to the studied tasks and domains, and code and evaluation scripts will be released for reproducibility. | other topics in machine learning (i.e., none of the above) | https://openreview.net/pdf?id=1AYy3T3Xjk | 2025-09-20T07:12:15 | 4 | [
{
"id": "WtYVSc2PGG",
"forum": "1AYy3T3Xjk",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission21916/Reviewer_PNVN",
"reviewer_name": "Reviewer_PNVN",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "The p... | |
IsOMU137M3 | https://openreview.net/forum?id=IsOMU137M3 | scCMIA: Self-supervised Dual Model for Mitigating Information Loss in Single-cell Cross-Modal Alignment | 3 | 3.75 | [
4,
2,
2,
4
] | [
3,
4,
4,
4
] | 4 | [
"Single-cell",
"Self-supervised",
"Alignment",
"Reconstruction",
"scRNA",
"scATAC"
] | Recent technological advances in single-cell sequencing have enabled simultaneous profiling of multiple omics modalities within individual cells. Despite these advancements, challenges such as high noise levels and information loss during computational integration persist. While existing methods align different modalities, they often struggle to balance alignment accuracy with the preservation of modality-specific information needed for downstream biological discovery. In this paper, we introduce scCMIA, a novel framework guided by Mutual Information (MI) principles that leverages a VQ-VAE architecture. scCMIA achieves robust cross-modal alignment in a unified discrete latent space while enabling high-fidelity reconstruction of the original data modalities. Crucially, our framework transforms the learned discrete representations into a tool for tangible biological discovery, allowing for the quantification of regulatory programs and cross-modal relationships. Our extensive experiments demonstrate that scCMIA achieves state-of-the-art performance across multiple datasets. Our code is available at: https://anonymous.4open.science/r/scCMIA-77E3. | applications to physical sciences (physics, chemistry, biology, etc.) | https://openreview.net/pdf?id=IsOMU137M3 | 2025-09-19T00:19:32 | 4 | [
{
"id": "jW5h2MOgxY",
"forum": "IsOMU137M3",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12979/Reviewer_JfMn",
"reviewer_name": "Reviewer_JfMn",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 2,
"summary": "This ... | |
LwjUKEWAvt | https://openreview.net/forum?id=LwjUKEWAvt | SafetyChat: Learning to Generate Physical Safety Warnings in Instructional Assistants | 4 | 3.5 | [
4,
4,
6,
2
] | [
3,
4,
3,
4
] | 4 | [
"Physical Safety",
"Instructional AI Assistant",
"LLM"
] | While large language models (LLMs) excel in language generation and conversational abilities, their broader utility hinges on meeting additional requirements to ensure reliability and safety. Recent research has explored areas such as minimizing hallucinations, grounding outputs in credible sources, and safeguarding user privacy. However, the critical aspect of physical safety has received limited attention—an oversight that becomes increasingly important as LLMs are integrated into multimodal voice assistants (e.g., smart glasses) that are capable of guiding users through complex, safety-critical tasks such as automotive repair. In this work, we investigate the limitations of current LLMs in generating effective and contextually appropriate safety warnings in the context of complex repair tasks. We introduce SafetyChat, a multi-domain dataset that can evaluate LLMs’ ability to model and prioritize safety awareness. We further enhance model alignment by post-training on this data, comparing the performance of various techniques. Through this process, we identify key challenges and establish robust baselines, paving the way for future research on integrating physical safety considerations into LLM-driven instructional systems. We will release data and code to reproduce our results upon publication. | A new physical safety task for LLM chat assistants, a new dataset, and strong alignment results. | alignment, fairness, safety, privacy, and societal considerations | https://openreview.net/pdf?id=LwjUKEWAvt | 2025-09-19T21:35:40 | 4 | [
{
"id": "zyVNBiJZ57",
"forum": "LwjUKEWAvt",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission18539/Reviewer_6je4",
"reviewer_name": "Reviewer_6je4",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This ... |
6L3yCjx9s3 | https://openreview.net/forum?id=6L3yCjx9s3 | Dimension-Adaptive MCTS: Optimal Sample Complexity for Continuous Action Planning | 4.5 | 3 | [
6,
4,
4,
4
] | [
3,
4,
3,
2
] | 4 | [
"Monte-Carlo Tree Search; Continuous Reinforcement Learning Planning"
] | We study continuous-action Monte Carlo Tree Search (MCTS) in a $d$-dimensional action space when the
optimal action-value function $Q^*(s,\cdot)$ is $\beta$-Hölder continuous with constant~$L$. We show that a
dimension-adaptive $\varepsilon$-net schedule combined with power-mean backups and a polynomial exploration
bonus finds an $\varepsilon$-optimal action in $ \tilde{O}\left(\sigma^2 L^{d/\beta} \varepsilon^{-(d/\beta+2)}\right) $
simulations, matching standard continuum-armed lower bounds up to logs while remaining practical
via on-demand, capped random nets. We further demonstrate that our method significantly outperforms
baseline methods on continuous control planning problems. Our work bridges the gap between theoretical
reinforcement learning and practical planning algorithms, providing a principled approach to
high-dimensional continuous action space exploration. | reinforcement learning | https://openreview.net/pdf?id=6L3yCjx9s3 | 2025-09-19T23:10:22 | 4 | [
{
"id": "7HT5wVygqT",
"forum": "6L3yCjx9s3",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission19231/Reviewer_pCZE",
"reviewer_name": "Reviewer_pCZE",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This ... | |
PRHNKeaZpP | https://openreview.net/forum?id=PRHNKeaZpP | Human-in-the-Loop Policy Optimization for Preference-Based Multi-Objective Reinforcement Learning | 4 | 3.75 | [
4,
4,
4,
4
] | [
4,
4,
3,
4
] | 4 | [
"Multi-objective reinforcement learning",
"human-in-the-loop",
"preference learning"
] | Multi-objective reinforcement learning (MORL) seeks policies that effectively balance conflicting objectives. However, presenting many diverse policies without accounting for the decision maker’s (DM’s) preferences can overwhelm the decision-making process. On the other hand, accurately specifying preferences in advance is often unrealistic. To address these challenges, we introduce a human-in-the-loop MORL framework that interactively discovers preferred policies during optimization. Our approach proactively learns the DM’s implicit preferences in real time, requiring no a priori knowledge. Importantly, we integrate this preference learning directly into a parallel optimization framework, balancing exploration and exploitation to identify high-quality policies aligned with the DM's preferences. Evaluations on a complex quadrupedal robot simulation environment demonstrate that, with only
interactions, our proposed method can identify policies aligned with human preferences, e.g., running like a dog. Further experiments on seven MuJoCo tasks and a multi-microgrid system design task against eight state-of-the-art MORL algorithms fully demonstrate the effectiveness of our proposed framework. Demonstrations and full experiments are available at https://sites.google.com/view/pbmorl/home. | We propose PBMORL, a human-in-the-loop MORL framework that learns preferences from limited feedback to efficiently discover high-quality, preference-aligned policies. | reinforcement learning | https://openreview.net/pdf?id=PRHNKeaZpP | 2025-09-18T22:48:56 | 4 | [
{
"id": "6bI8VtVUo8",
"forum": "PRHNKeaZpP",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12302/Reviewer_yZQN",
"reviewer_name": "Reviewer_yZQN",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 2,
"summary": "This ... |
G3uNHQpP7J | https://openreview.net/forum?id=G3uNHQpP7J | Multi-Domain Transferable Graph Gluing for Building Graph Foundation Models | 6 | 3.5 | [
4,
8,
6,
6
] | [
2,
4,
4,
4
] | 4 | [
"Multi-domain graph pre-training",
"graph neural network",
"graph foundation model",
"Riemannian geometry"
] | Multi-domain graph pre-training integrates knowledge from diverse domains to enhance performance in the target domains, which is crucial for building graph foundation models. Despite initial success, existing solutions often fall short of answering a fundamental question: how is knowledge integrated or transferred across domains? This theoretical limitation motivates us to rethink the consistency and transferability between the pre-trained model and target domains. In this paper, we propose a fresh differential geometry perspective, whose core idea is to merge any graph dataset into a unified, smooth Riemannian manifold, enabling a systematic understanding of knowledge integration and transfer. To achieve this, our key contribution is the theoretical establishment of neural manifold gluing,
which first characterizes local geometry using an adaptive orthogonal frame and then “glues” the local pieces together into a coherent whole. Building on this theory, we present the GraphGlue framework, which supports batched pre-training with EMA prototyping and provides a transferability measure based on geometric consistency. Extensive experiments demonstrate its superior performance across diverse graph domains. Moreover, we empirically validate GraphGlue’s geometric scaling law, showing that a larger number of datasets improves model transferability by producing a smoother manifold. | From a differential geometry perspective, we present a novel framework that merges multi-domain graphs into a unified, smooth manifold with geometric consistency, enabling quantifiable transferability and geometric scaling behavior. | learning on graphs and other geometries & topologies | https://openreview.net/pdf?id=G3uNHQpP7J | 2025-09-15T23:32:49 | 4 | [
{
"id": "Zw6xvN1iuH",
"forum": "G3uNHQpP7J",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission6004/Reviewer_w9Kq",
"reviewer_name": "Reviewer_w9Kq",
"rating": 4,
"confidence": 2,
"soundness": 2,
"contribution": 3,
"presentation": 3,
"summary": "This p... |
7AXP2RYw2N | https://openreview.net/forum?id=7AXP2RYw2N | Video-MTR: Reinforced Multi-Turn Reasoning for Long Video Understanding | 4.666667 | 4 | [
6,
4,
4
] | [
3,
5,
4
] | 3 | [
"Long-form video understanding;MLLM; multi-turn reasoning"
] | Long-form video understanding, characterized by long-range temporal dependencies and multiple events, remains a challenge. Existing methods often rely on static reasoning or external visual-language models (VLMs), which face issues like complexity and sub-optimal performance due to the lack of end-to-end training. In this paper, we propose Video-MTR, a reinforced multi-turn reasoning framework designed to enable iterative key video segment selection and question comprehension. Unlike traditional video reasoning pipeline, which generate predictions in a single turn, Video-MTR performs reasoning in multiple turns, selecting video segments progressively based on the evolving understanding of previously processed segments and the current question. This iterative process allows for a more refined and contextually aware analysis of the video. To ensure intermediate reasoning process, we introduce a novel gated bi-level reward system, combining trajectory-level rewards based on answer correctness and turn-level rewards emphasizing frame-query relevance. This system optimizes both video segment selection and question comprehension, eliminating the need for external VLMs and allowing end-to-end training. Extensive experiments on benchmarks like VideoMME, MLVU, and EgoSchema demonstrate that Video-MTR outperforms existing methods in both accuracy and efficiency, advancing the state-of-the-art in long-form video understanding. | leveraging end-to-end RL to enable MLLMs to perform multi-turn reasoning. | applications to computer vision, audio, language, and other modalities | https://openreview.net/pdf?id=7AXP2RYw2N | 2025-09-17T11:49:28 | 3 | [
{
"id": "oVe9T3jgzW",
"forum": "7AXP2RYw2N",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8384/Reviewer_StqD",
"reviewer_name": "Reviewer_StqD",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The pa... |
khBHJz2wcV | https://openreview.net/forum?id=khBHJz2wcV | Physics-Constrained Fine-Tuning of Flow-Matching Models for Generation and Inverse Problems | 3 | 3.75 | [
4,
6,
2,
0
] | [
4,
3,
4,
4
] | 4 | [
"Generative Modeling",
"Physics-Informed Machine Learning",
"Inverse Problems",
"Parameter Identification"
] | We present a framework for fine-tuning flow-matching generative models to enforce physical constraints and solve inverse problems in scientific systems. Starting from a model trained on low-fidelity or observational data, we apply a differentiable post-training procedure that minimizes weak-form residuals of governing partial differential equations (PDEs), promoting physical consistency and adherence to boundary conditions without distorting the underlying learned distribution. To infer unknown physical inputs, such as source terms, material parameters, or boundary data, we augment the generative process with a learnable latent parameter predictor and propose a joint optimization strategy. The resulting model produces physically valid field solutions alongside plausible estimates of hidden parameters, effectively addressing ill-posed inverse problems in a data-driven yet physics-aware manner. We validate our method on canonical PDE problems, demonstrating improved satisfaction of physical constraints and accurate recovery of latent coefficients. Further, we confirm cross-domain utility through fine-tuning of natural-image models. Our approach bridges generative modelling and scientific inference, opening new avenues for simulation-augmented discovery and data-efficient modelling of physical systems. | generative models | https://openreview.net/pdf?id=khBHJz2wcV | 2025-09-19T19:19:43 | 4 | [
{
"id": "150xxQEo77",
"forum": "khBHJz2wcV",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission17809/Reviewer_Mbyw",
"reviewer_name": "Reviewer_Mbyw",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "The p... | |
O4Oy7NsSwG | https://openreview.net/forum?id=O4Oy7NsSwG | Topology and geometry of the learning space of ReLU networks: connectivity and singularities | 5.5 | 3.25 | [
4,
6,
6,
6
] | [
4,
3,
4,
2
] | 4 | [
"learning dynamics",
"topology",
"neural networks",
"ReLU networks",
"geometry",
"symmetry",
"loss landscape",
"gradient",
"singularity",
"connectedness"
] | Understanding the properties of the parameter space in feed-forward ReLU networks is critical for effectively analyzing and guiding training dynamics. After initialization, training under gradient flow decisively restricts the parameter space to an algebraic variety that emerges from the homogeneous nature of the ReLU activation function. In this study, we examine two key challenges associated with feed-forward ReLU networks built on general directed acyclic graph (DAG) architectures: the (dis)connectedness of the parameter space and the existence of singularities within it. We extend previous results by providing a thorough characterization of connectedness, highlighting the roles of bottleneck nodes and balance conditions associated with specific subsets of the network. Our findings clearly demonstrate that singularities are intricately connected to the topology of the underlying DAG and its induced sub-networks. We discuss the reachability of these singularities and establish a principled connection with differentiable pruning. We validate our theory with simple numerical experiments. | learning theory | https://openreview.net/pdf?id=O4Oy7NsSwG | 2025-09-13T19:39:01 | 4 | [
{
"id": "QDl0LSwCbp",
"forum": "O4Oy7NsSwG",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission4772/Reviewer_pzFJ",
"reviewer_name": "Reviewer_pzFJ",
"rating": 4,
"confidence": 4,
"soundness": 4,
"contribution": 2,
"presentation": 4,
"summary": "The pa... | |
iITycdPaOd | https://openreview.net/forum?id=iITycdPaOd | Structure before the Machine: Input Space is the Prerequisite for Concepts | 3 | 3.5 | [
4,
2,
4,
2
] | [
3,
4,
4,
3
] | 4 | [
"Spectral Principal Paths",
"Linear Representation Hypothesis",
"Representation Learning"
] | High-level representations have become a central focus in enhancing AI transparency and control, shifting attention from individual neurons or circuits to structured semantic directions that align with human-interpretable concepts. Motivated by the Linear Representation Hypothesis (LRH), we propose the Input-Space Linearity Hypothesis (ISLH), which posits that concept-aligned directions originate in the input space and are selectively amplified with increasing depth. We then introduce the Spectral Principal Path (SPP) framework, which formalizes how deep networks progressively distill linear representations along a small set of dominant spectral directions. Building on this framework, we further demonstrate the multimodal robustness of these representations in Vision-Language Models (VLMs). By bridging theoretical insights with empirical validation, this work advances a structured theory of representation formation in deep networks, paving the way for improving AI robustness, fairness, and transparency. | unsupervised, self-supervised, semi-supervised, and supervised representation learning | https://openreview.net/pdf?id=iITycdPaOd | 2025-09-19T02:08:45 | 4 | [
{
"id": "3KtAkpoZn3",
"forum": "iITycdPaOd",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission13528/Reviewer_UAP6",
"reviewer_name": "Reviewer_UAP6",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 3,
"presentation": 3,
"summary": "The p... | |
oRmo4p1KEE | https://openreview.net/forum?id=oRmo4p1KEE | QuadGPT: Native Quadrilateral Mesh Generation with Autoregressive Models | 5.5 | 3.75 | [
8,
4,
4,
6
] | [
4,
3,
4,
4
] | 4 | [
"Autoregressive Quad Mesh Generation",
"Reinforcement Learning",
"Topology Optimization"
] | The generation of quadrilateral-dominant meshes is a cornerstone of professional 3D content creation.
However, existing generative models produce quad meshes by first generating triangle meshes and then merging triangles into quadrilaterals according to specific rules, which typically yields quad meshes with poor topology.
In this paper, we introduce QuadGPT, the first autoregressive framework for generating quadrilateral meshes in an end-to-end manner.
QuadGPT formulates this as a sequence prediction paradigm, distinguished by two key innovations: a unified tokenization method to handle mixed topologies of triangles and quadrilaterals, and a specialized Reinforcement Learning fine-tuning method tDPO for better generation quality.
Extensive experiments demonstrate that QuadGPT significantly surpasses previous triangle-to-quad conversion pipelines in both geometric accuracy and topological quality.
Our work establishes a new benchmark for native quad-mesh generation and showcases the power of combining large-scale autoregressive models with topology-aware RL refinement for creating structured 3D assets. | A novel method that directly generates quad-dominant meshes with superior topology, overcoming the limitations of conversion-based approaches. | applications to computer vision, audio, language, and other modalities | https://openreview.net/pdf?id=oRmo4p1KEE | 2025-09-01T20:54:56 | 4 | [
{
"id": "w9Icudi0Iw",
"forum": "oRmo4p1KEE",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission213/Reviewer_87rP",
"reviewer_name": "Reviewer_87rP",
"rating": 8,
"confidence": 4,
"soundness": 3,
"contribution": 4,
"presentation": 3,
"summary": "This pa... |
0aBAAS0rRT | https://openreview.net/forum?id=0aBAAS0rRT | Map as a Prompt: Learning Multi-Modal Spatial-Signal Foundation Models for Cross-scenario Wireless Localization | 5.333333 | 2.666667 | [
6,
4,
6
] | [
2,
3,
3
] | 3 | [
"Wireless Localization",
"Foundation Models",
"Self-Supervised Learning",
"Fine-Tuning",
"6G Networks"
] | Accurate and robust wireless localization is a critical enabler for emerging 5G/6G applications, including autonomous driving, extended reality, and smart manufacturing. Despite its importance, achieving precise localization across diverse environments remains challenging due to the complex nature of wireless signals and their sensitivity to environmental changes. Existing data-driven approaches often suffer from limited generalization capability, requiring extensive labeled data and struggling to adapt to new scenarios. To address these limitations, we propose SigMap, a multimodal foundation model that introduces two key innovations: (1) A cycle-adaptive masking strategy that dynamically adjusts masking patterns based on channel periodicity characteristics to learn robust wireless representations; (2) A novel "map-as-prompt" framework that integrates 3D geographic information through lightweight soft prompts for effective cross-scenario adaptation. Extensive experiments demonstrate that our model achieves state-of-the-art performance across multiple localization tasks while exhibiting strong zero-shot generalization in unseen environments, significantly outperforming both supervised and self-supervised baselines by considerable margins. | We propose SigMap, a foundation model that uses self-supervised learning with cycle-adaptive masking and map-conditioned prompting to achieve accurate and generalizable wireless localization across diverse scenarios. | applications to physical sciences (physics, chemistry, biology, etc.) | https://openreview.net/pdf?id=0aBAAS0rRT | 2025-09-17T17:40:02 | 3 | [
{
"id": "otZhJeUNq0",
"forum": "0aBAAS0rRT",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8908/Reviewer_ct8k",
"reviewer_name": "Reviewer_ct8k",
"rating": 6,
"confidence": 2,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This p... |
Zz2gtWX8wn | https://openreview.net/forum?id=Zz2gtWX8wn | ReviewScore: Misinformed Peer Review Detection with Large Language Models | 4.5 | 3 | [
8,
2,
4,
4
] | [
3,
3,
3,
3
] | 4 | [
"Peer Review Evaluation",
"Argument Evaluation",
"Critical Thinking",
"Logic",
"Large Language Models",
"Neurosymbolic Approaches"
] | Peer review serves as a backbone of academic research, but in most AI conferences, the review quality is degrading as the number of submissions explodes. To reliably detect low-quality reviews, we define misinformed review points as either "weaknesses" in a review that contain incorrect premises, or "questions" in a review that can already be answered by the paper. We verify that 15.2% of weaknesses and 26.4% of questions are misinformed and introduce ReviewScore indicating if a review point is misinformed. To evaluate the factuality of each premise of weaknesses, we propose an automated engine that reconstructs every explicit and implicit premise from a weakness. We build a human expert-annotated ReviewScore dataset to check the ability of LLMs to automate ReviewScore evaluation. Then, we measure human-model agreements on ReviewScore using eight current state-of-the-art LLMs and verify moderate agreements. We also prove that evaluating premise-level factuality shows significantly higher agreements than evaluating weakness-level factuality. A thorough disagreement analysis further supports the potential of fully automated ReviewScore evaluation. | We introduce ReviewScore, a new evaluation of peer review quality, focusing on detecting misinformed review points. | datasets and benchmarks | https://openreview.net/pdf?id=Zz2gtWX8wn | 2025-09-18T06:55:12 | 4 | [
{
"id": "42tS1loj06",
"forum": "Zz2gtWX8wn",
"review_number": 5,
"reviewer_id": "ICLR.cc/2026/Conference/Submission9929/Reviewer_GJmX",
"reviewer_name": "Reviewer_GJmX",
"rating": 8,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The pa... |
zwLpUxiqSE | https://openreview.net/forum?id=zwLpUxiqSE | Space Filling Curves as Spatial Priors for Small or Data-Scarce Vision Transformers | 4.5 | 3.5 | [
6,
6,
2,
4
] | [
3,
4,
4,
3
] | 4 | [
"space filling curves",
"ViT",
"spatial priors"
] | Vision Transformers (ViTs) have become a dominant backbone in computer vision, yet their attention mechanism lacks inherent spatial inductive biases, which are especially crucial in small models and low-data regimes. Inspired by the masking in Linear Transformers and the scanning patterns of Vision SSMs, we propose VIOLIN, a lightweight masked attention mechanism that integrates Space Filling Curves (SFCs) to enhance spatial awareness with negligible computational overhead. VIOLIN scans the input image with multiple SFCs to build curve-specific decay masks, which are averaged and multiplied with the attention matrix to encode spatial relationships. It yields notable gains in data-scarce settings: when fine-tuning on VTAB-1K, VIOLIN improves accuracy by up to 8.7% on the Structured group, and it can be combined with parameter-efficient tuning methods such as LoRA. Beyond fine-tuning, VIOLIN consistently improves various tiny- or small-scale ViT architectures (e.g., DeiT, DINO) during pretraining on ImageNet-1K, achieving gains of up to 0.9% on ImageNet-1K and 7.2% on pixel-level CIFAR-100. Overall, VIOLIN offers a computationally efficient yet effective way to inject spatial inductive bias into ViTs, particularly benefiting small models and data-scarce scenarios. | A new attention mechanism for vision backbones using Space Filling Curves, improving both fine-tuning and pre-training of ViTs. | other topics in machine learning (i.e., none of the above) | https://openreview.net/pdf?id=zwLpUxiqSE | 2025-09-20T01:56:33 | 4 | [
{
"id": "HX5jH4abzP",
"forum": "zwLpUxiqSE",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission20303/Reviewer_8zFr",
"reviewer_name": "Reviewer_8zFr",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "The m... |
jeTiBeW3iZ | https://openreview.net/forum?id=jeTiBeW3iZ | Memorization Through the Lens of Sample Gradients | 5 | 3.75 | [
6,
6,
2,
6
] | [
3,
3,
5,
4
] | 4 | [
"Memorization",
"Sample Gradients"
] | Deep neural networks are known to often memorize underrepresented, hard examples, with implications for generalization and privacy. Feldman & Zhang (2020) defined a rigorous notion of memorization.
However, it is prohibitively expensive to compute at scale because it requires training models both with and without the data point of interest in order to calculate the memorization score.
We observe that samples that are less memorized tend to be learned earlier in training, whereas highly memorized samples are learned later.
Motivated by this observation, we introduce Cumulative Sample Gradient (CSG), a computationally efficient proxy for memorization. CSG is the gradient of the loss with respect to input samples, accumulated over the course of training.
The advantage of using input gradients is that per-sample gradients can be obtained with negligible overhead during training. The accumulation over training also reduces per-epoch variance and enables a formal link to memorization. Theoretically, we show that CSG is bounded by memorization and by learning time.
Tracking these gradients during training reveals a characteristic rise–peak–decline trajectory whose timing is mirrored by the model’s weight norm. This yields an early-stopping criterion that does not require a validation set: stop at the peak of the weight norm. This early stopping also enables our memorization proxy, CSG, to be up to five orders of magnitude more efficient than the memorization score from Feldman & Zhang (2020). It is also approximately 140$\times$ and 10$\times$ faster than the prior state-of-the-art memorization proxies, input curvature and cumulative sample loss, while still aligning closely with the memorization score, exhibiting high correlation. Further, we develop Sample Gradient Assisted Loss (SGAL), a proxy that further improves alignment with memorization and is highly efficient to compute. Finally, we show that CSG attains state-of-the-art performance on practical dataset diagnostics, such as mislabeled-sample detection, and enables bias discovery, providing a theoretically grounded toolbox for studying memorization in deep networks. | alignment, fairness, safety, privacy, and societal considerations | https://openreview.net/pdf?id=jeTiBeW3iZ | 2025-09-18T23:24:19 | 4 | [
{
"id": "z3GZqYPWjF",
"forum": "jeTiBeW3iZ",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12621/Reviewer_P7vQ",
"reviewer_name": "Reviewer_P7vQ",
"rating": 6,
"confidence": 3,
"soundness": 2,
"contribution": 3,
"presentation": 3,
"summary": "The p... | |
eWBu4tY9ta | https://openreview.net/forum?id=eWBu4tY9ta | Safeguarding Multimodal Knowledge Copyright in the RAG-as-a-Service Environment | 4.666667 | 3.333333 | [
4,
4,
6
] | [
3,
3,
4
] | 3 | [
"Watermark",
"VLM",
"Dataset Copyright Protection"
] | As Retrieval-Augmented Generation (RAG) evolves into service-oriented platforms (RAG-as-a-Service) with shared knowledge bases, protecting the copyright of contributed data becomes essential. Existing watermarking methods in RAG focus solely on textual knowledge, leaving image knowledge unprotected. In this work, we propose \textit{AQUA}, the first watermark framework for image knowledge protection in Multimodal RAG systems. \textit{AQUA} embeds semantic signals into synthetic images using two complementary methods: acronym-based triggers and spatial relationship cues. These techniques ensure that watermark signals survive indirect propagation from the image retriever to the textual generator while remaining efficient, effective, and imperceptible. Experiments across diverse models and datasets show that \textit{AQUA} enables robust, stealthy, and reliable copyright tracing, filling a key gap in multimodal RAG protection. | An effective watermarking framework for protecting the copyright of multimodal knowledge, especially image knowledge, in RaaS. | alignment, fairness, safety, privacy, and societal considerations | https://openreview.net/pdf?id=eWBu4tY9ta | 2025-09-19T16:34:38 | 3 | [
{
"id": "38wcTQEGqV",
"forum": "eWBu4tY9ta",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission16979/Reviewer_k6AT",
"reviewer_name": "Reviewer_k6AT",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 3,
"presentation": 2,
"summary": "This ... |
T9ikO8tXfY | https://openreview.net/forum?id=T9ikO8tXfY | Breaking Down and Building Up: Mixture of Skill-Based Vision-and-Language Navigation Agents | 4 | 4 | [
4,
4,
4
] | [
4,
3,
5
] | 3 | [
"Vision-and-Language Navigation",
"Skill-Based Agents",
"Mixture-of-Experts"
] | Vision-and-Language Navigation (VLN) poses significant challenges for agents to interpret natural language instructions and navigate complex 3D environments. While recent progress has been driven by large-scale pre-training and data augmentation, current methods still struggle to generalize to unseen scenarios, particularly when complex spatial and temporal reasoning is required. In this work, we propose SkillNav, a modular framework that introduces structured, skill-based reasoning into Transformer-based VLN agents. Our method decomposes navigation into a set of interpretable atomic skills (e.g., Vertical Movement, Area and Region Identification, Stop and Pause), each handled by a specialized agent. To support targeted skill training without manual data annotation, we construct a synthetic dataset pipeline that generates diverse, linguistically natural, skill-specific instruction-trajectory pairs. We then introduce a novel training-free Vision-Language Model (VLM)-based router, which dynamically selects the most suitable agent at each time step by aligning sub-goals with visual observations and historical actions. SkillNav obtains competitive results on commonly-used benchmarks, and establishes state-of-the-art generalization to the GSA-R2R, a benchmark with novel instruction styles and unseen environments. | We propose SkillNav, a modular framework that decomposes navigation into interpretable atomic skills and uses a vision-language model router to achieve state-of-the-art generalization in vision-and-language navigation. | applications to computer vision, audio, language, and other modalities | https://openreview.net/pdf?id=T9ikO8tXfY | 2025-09-19T07:48:19 | 3 | [
{
"id": "nXQ34dRGRC",
"forum": "T9ikO8tXfY",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission14576/Reviewer_WcPc",
"reviewer_name": "Reviewer_WcPc",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "The p... |