Daily Papers

by AK and the research community

Jan 2

Deep view of the intracluster light in the Coma cluster of galaxies

Detecting and studying the intracluster light in rich clusters of galaxies has long been both a challenge and a subject of great interest. Using the lowest surface brightness images of the Coma cluster of galaxies in the g and r bands, from the Halos and Environment of Nearby Galaxies (HERON) Coma Cluster Project, we obtained the most extensive image of intracluster light (ICL) in a single cluster to date, extending over 1.5 Mpc from the cluster core. The unprecedented wealth of spectroscopic data made publicly available by the Dark Energy Spectroscopic Instrument (DESI) Early Data Release, complemented with a compilation from the NASA/IPAC Extragalactic Database and the literature, enabled the identification of 2,157 galaxy members within Coma, from which 42 distinct groups were identified. The synergy between these high-quality data allowed us to: 1) calculate ICL fractions of 19.9 ± 0.5% and 19.6 ± 0.6% in the g and r bands, respectively, consistent with a dynamically active cluster, 2) unveil Coma's faintest tidal features, and 3) provide a comprehensive picture of the dynamics and interactions within this complex system. Our findings indicate that the ICL connects several of these groups in a filamentous network, from which we infer the ongoing dynamical processes. In particular, we identified a faint stellar bridge linking the core of Coma with the galaxy NGC 4839, providing compelling evidence that this galaxy has already traversed the central region of the cluster.

  • 9 authors · Dec 19, 2024
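
The ICL fractions quoted above follow the standard definition: the diffuse intracluster flux divided by the total cluster flux in a given band. A minimal sketch of that bookkeeping is shown below; the masking and surface-brightness treatment used in the HERON analysis are not described in the abstract, so the simple galaxy-mask approach here is only illustrative.

```python
# Illustrative ICL-fraction bookkeeping (not the HERON pipeline):
# f_ICL = F_ICL / F_total, with galaxy light removed via a pixel mask.
import numpy as np

def icl_fraction(image: np.ndarray, galaxy_mask: np.ndarray, background: float = 0.0) -> float:
    """Fraction of cluster light not assigned to individual galaxies.

    image       : 2-D flux map of the cluster field
    galaxy_mask : boolean array, True where pixels belong to member galaxies
    background  : residual sky level per pixel (assumed already estimated)
    """
    flux = image - background
    total_flux = flux.sum()
    galaxy_flux = flux[galaxy_mask].sum()
    return (total_flux - galaxy_flux) / total_flux

# Toy field: a flat diffuse component plus one bright "galaxy" patch,
# arranged so that roughly 20% of the total light is diffuse.
img = np.full((100, 100), 0.2)
mask = np.zeros_like(img, dtype=bool)
mask[40:60, 40:60] = True
img[mask] += 19.0
print(f"ICL fraction ~ {icl_fraction(img, mask):.2f}")  # ~0.20
```

In practice, measurements also have to separate the ICL from the extended halos of bright cluster galaxies, which is where different ICL methods diverge the most.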

True Multimodal In-Context Learning Needs Attention to the Visual Context

Multimodal Large Language Models (MLLMs), built on powerful language backbones, have enabled Multimodal In-Context Learning (MICL): adapting to new tasks from a few multimodal demonstrations consisting of images, questions, and answers. Despite showing noticeable improvement on standard vision-language datasets, current MLLMs struggle to leverage visual information in the demonstrations. Specifically, they tend to neglect visual cues and over-rely on textual patterns, leading to mere text imitation rather than genuine multimodal adaptation. This behavior makes MICL effectively unimodal and largely restricts its practical utility. More importantly, this limitation is often concealed by improved performance on tasks that do not require understanding the visual context. As a result, how to effectively enhance MICL ability and reliably evaluate MICL performance remains underexplored. To address these issues, we first introduce Dynamic Attention Reallocation (DARA), an efficient fine-tuning strategy that encourages models to attend to the visual context by rebalancing attention across visual and textual tokens. In addition, we present TrueMICL, an MICL-dedicated dataset with both support and test sets that explicitly requires the integration of multimodal information, particularly visual content, for correct task completion. Extensive experiments demonstrate the effectiveness of our holistic solution, showcasing substantial improvements in true multimodal in-context learning capabilities. Code and datasets are available at https://chenxshuo.github.io/true-micl-colm.

  • 8 authors · Jul 21, 2025
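
The abstract describes DARA only at a high level (rebalancing attention across visual and textual tokens); a hedged sketch of that general idea follows. The learnable per-head bias on the logits of visual tokens is an illustrative assumption, not the paper's exact formulation.

```python
# Sketch of attention rebalancing between visual and textual tokens.
# The per-head bias on visual-token logits is an assumption for illustration;
# it is not claimed to be the exact DARA mechanism.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReweightedAttention(nn.Module):
    def __init__(self, dim: int, num_heads: int):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.qkv = nn.Linear(dim, 3 * dim)
        self.proj = nn.Linear(dim, dim)
        # Learnable per-head bias added to the attention logits of visual tokens.
        self.visual_bias = nn.Parameter(torch.zeros(num_heads))

    def forward(self, x: torch.Tensor, visual_mask: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, dim); visual_mask: (batch, seq), True for image tokens.
        b, n, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        shape = (b, n, self.num_heads, self.head_dim)
        q, k, v = (t.view(shape).transpose(1, 2) for t in (q, k, v))

        logits = q @ k.transpose(-2, -1) / self.head_dim ** 0.5      # (b, h, n, n)
        # Shift the logits of columns that attend *to* visual tokens.
        bias = self.visual_bias.view(1, -1, 1, 1) * visual_mask.float().view(b, 1, 1, n)
        attn = F.softmax(logits + bias, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, n, self.num_heads * self.head_dim)
        return self.proj(out)
```

In a parameter-efficient variant, the backbone could be frozen and only the bias parameters trained, which would match the "efficient fine-tuning" framing, although the abstract does not say which parameters DARA actually updates.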

Chain-of-Evidence Multimodal Reasoning for Few-shot Temporal Action Localization

Traditional temporal action localization (TAL) methods rely on large amounts of detailed annotated data, whereas few-shot TAL reduces this dependence by using only a few training samples to identify unseen action categories. However, existing few-shot TAL methods typically focus solely on video-level information and neglect textual information, which can provide valuable semantic support for the localization task. To address these issues, we propose a new few-shot temporal action localization method based on Chain-of-Evidence multimodal reasoning to improve localization performance. Specifically, we design a novel few-shot learning framework to capture action commonalities and variations, which includes a semantic-aware text-visual alignment module that aligns the query and support videos at different levels. Meanwhile, to better express the temporal dependencies and causal relationships between actions at the textual level, we design a Chain-of-Evidence (CoE) reasoning method that progressively guides the Vision Language Model (VLM) and Large Language Model (LLM) to generate CoE text descriptions for videos. The generated texts capture more of the variation in an action than visual features alone. We conduct extensive experiments on the publicly available ActivityNet 1.3 and THUMOS14 benchmarks and our newly collected Human-related Anomaly Localization Dataset. The experimental results demonstrate that our proposed method significantly outperforms existing methods in both single-instance and multi-instance scenarios. Our source code and data are available at https://github.com/MICLAB-BUPT/VAL-VLM.

  • 5 authors · Apr 18, 2025
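
The abstract only states that the VLM and LLM are guided "progressively" to produce Chain-of-Evidence text; below is a hedged two-stage sketch of that idea. The `vlm` and `llm` callables and the prompts are hypothetical placeholders, not the paper's prompts or models.

```python
# Hedged sketch of a two-stage prompting pipeline in the spirit of the
# Chain-of-Evidence (CoE) description generation the abstract outlines.
# The prompts and the `vlm`/`llm` callables are illustrative assumptions.
from typing import Callable, List

def chain_of_evidence_description(
    frames: List[str],                      # paths or identifiers of sampled frames
    vlm: Callable[[str, str], str],         # (frame, prompt) -> caption  [hypothetical]
    llm: Callable[[str], str],              # prompt -> text              [hypothetical]
) -> str:
    # Stage 1: the VLM turns each sampled frame into a piece of visual evidence.
    evidence = [
        vlm(frame, "Describe the action-relevant content of this frame.")
        for frame in frames
    ]

    # Stage 2: the LLM links the evidence into an ordered, causal description,
    # i.e. the kind of text that can later be aligned with support/query videos.
    prompt = (
        "Given the following frame-level observations in temporal order:\n"
        + "\n".join(f"{i + 1}. {e}" for i, e in enumerate(evidence))
        + "\nDescribe the action step by step, stating what each observation "
          "implies and how the steps depend on one another."
    )
    return llm(prompt)
```

Plugging any captioning model in as `vlm` and any instruction-tuned model in as `llm` yields a per-video description that could then be aligned with visual features, along the lines of the semantic-aware alignment module mentioned in the abstract.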