Abstract
Software logging is essential for maintaining and debugging complex systems, yet it remains unclear how AI coding agents handle this non-functional requirement. While prior work characterizes human logging practices, the behaviors of AI coding agents and the efficacy of natural language instructions in governing them are unexplored. To address this gap, we conduct an empirical study of 4,550 agentic pull requests across 81 open-source repositories. We compare agent logging patterns against human baselines and analyze the impact of explicit logging instructions. We find that agents change logging less often than humans in 58.4% of repositories, though they exhibit higher log density when they do. Furthermore, explicit logging instructions are rare (4.7%) and ineffective, as agents fail to comply with constructive requests 67% of the time. Finally, we observe that humans perform 72.5% of post-generation log repairs, acting as "silent janitors" who fix logging and observability issues without explicit review feedback. These findings indicate a dual failure in natural language instruction (i.e., scarcity of logging instructions and low agent compliance), suggesting that deterministic guardrails might be necessary to ensure consistent logging practices.
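The abstract's core measurements (whether a pull request touches logging at all, and how dense the logging changes are) can be approximated with a simple diff-scanning heuristic. The sketch below is an illustration under assumptions, not the paper's actual methodology: the regex of logging calls and the definition of `log_density` (logging lines as a share of changed lines) are both hypothetical.

```python
import re

# Hypothetical patterns for common logging calls (Python/JS style).
# The paper's actual detection rules are not specified in the abstract.
LOG_CALL = re.compile(
    r"\b(?:logger?|logging|console)\s*\.\s*"
    r"(?:debug|info|warn(?:ing)?|error|critical|exception|log)\s*\(",
    re.IGNORECASE,
)

def classify_diff(diff_lines):
    """Count changed lines in a unified diff and how many touch logging.

    `diff_lines` is an iterable of unified-diff lines; added/removed
    lines start with '+' or '-'. Returns (changed, log_changed).
    """
    changed = log_changed = 0
    for line in diff_lines:
        if line.startswith(("+++", "---")):  # file headers, not edits
            continue
        if line.startswith(("+", "-")):
            changed += 1
            if LOG_CALL.search(line):
                log_changed += 1
    return changed, log_changed

def log_density(diff_lines):
    """Share of changed lines that are logging statements (0.0 if none)."""
    changed, log_changed = classify_diff(list(diff_lines))
    return log_changed / changed if changed else 0.0
```

Applied per PR, `classify_diff` yields the "does this change logging?" flag used to compare agent and human change frequency, while `log_density` captures how heavily a given change is instrumented.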
Community
- Agents Change Logs Less Frequently, But Instrument Small Tasks More Densely
- Humans Rarely Ask for Logs, and Agents Usually Ignore Them
- Humans Act as "Silent Janitors" for Agentic Logs
The following papers, recommended by the Semantic Scholar API via Librarian Bot, are similar to this paper:
- Safer Builders, Risky Maintainers: A Comparative Study of Breaking Changes in Human vs Agentic PRs (2026)
- AgenticFlict: A Large-Scale Dataset of Merge Conflicts in AI Coding Agent Pull Requests on GitHub (2026)
- Debt Behind the AI Boom: A Large-Scale Empirical Study of AI-Generated Code in the Wild (2026)
- Human-AI Synergy in Agentic Code Review (2026)
- Humans Integrate, Agents Fix: How Agent-Authored Pull Requests Are Referenced in Practice (2026)
- When is Generated Code Difficult to Comprehend? Assessing AI Agent Python Code Proficiency in the Wild (2026)
- A Large-Scale Empirical Study of AI-Generated Code in Real-World Repositories (2026)