Illusions of Confidence? Diagnosing LLM Truthfulness via Neighborhood Consistency Paper • 2601.05905 • Published 14 days ago • 18
How Do Large Language Models Learn Concepts During Continual Pre-Training? Paper • 2601.03570 • Published 17 days ago • 4
Aligning Agentic World Models via Knowledgeable Experience Learning Paper • 2601.13247 • Published 4 days ago • 14
Unveiling the Pitfalls of Knowledge Editing for Large Language Models Paper • 2310.02129 • Published Oct 3, 2023
Survey on Factuality in Large Language Models: Knowledge, Retrieval and Domain-Specificity Paper • 2310.07521 • Published Oct 11, 2023
LLMs for Knowledge Graph Construction and Reasoning: Recent Capabilities and Future Opportunities Paper • 2305.13168 • Published May 22, 2023
Editing Large Language Models: Problems, Methods, and Opportunities Paper • 2305.13172 • Published May 22, 2023 • 1
KnowPrompt: Knowledge-aware Prompt-tuning with Synergistic Optimization for Relation Extraction Paper • 2104.07650 • Published Apr 15, 2021 • 2
A Comprehensive Study of Knowledge Editing for Large Language Models Paper • 2401.01286 • Published Jan 2, 2024 • 21
EasyEdit: An Easy-to-use Knowledge Editing Framework for Large Language Models Paper • 2308.07269 • Published Aug 14, 2023 • 1
DeepKE: A Deep Learning Based Knowledge Extraction Toolkit for Knowledge Base Population Paper • 2201.03335 • Published Jan 10, 2022 • 1
Editing Conceptual Knowledge for Large Language Models Paper • 2403.06259 • Published Mar 10, 2024 • 1
Detoxifying Large Language Models via Knowledge Editing Paper • 2403.14472 • Published Mar 21, 2024 • 3
WISE: Rethinking the Knowledge Memory for Lifelong Model Editing of Large Language Models Paper • 2405.14768 • Published May 23, 2024 • 1
Exploring Model Kinship for Merging Large Language Models Paper • 2410.12613 • Published Oct 16, 2024 • 21
Two Experts Are All You Need for Steering Thinking: Reinforcing Cognitive Effort in MoE Reasoning Models Without Additional Training Paper • 2505.14681 • Published May 20, 2025 • 10