abusyed posted an update 7 days ago
I use multiple AI coding agents daily: Claude Code, Cursor, Codex (one is good at design, one at problem solving, one at overall planning)... and I kept running into two problems that were driving me insane:

Context loss on every switch. Every time I moved from Cursor to Claude Code (or vice versa), I'd have to re-explain the entire project philosophy, past decisions, why I chose X architecture over Y. Half my prompts became "here's what the last agent did and why."

Agent drift — technically correct but philosophically wrong code. This is the sneaky one. I build AI tutors that force students to reason through problems instead of getting answers handed to them. One agent literally added a "Skip Reasoning" button to the UI. Technically valid code. Completely violates the entire product philosophy. And the agent had no way of knowing that because it couldn't see the design intent.

So I built LedgerSync - a file-based shared context protocol that solves both problems.

How it works:

An append-only ledger (.ledgersync/ledger.jsonl) logs every agent decision with full reasoning traces - not just what happened, but WHY
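As a rough sketch, appending one decision to that ledger might look like this (the field names here are illustrative assumptions on my part, not LedgerSync's actual schema):

```python
import json
from pathlib import Path

# Hypothetical ledger entry: the keys below are illustrative,
# not LedgerSync's real schema.
entry = {
    "agent": "claude-code",
    "action": "refactor",
    "summary": "Moved hint generation behind a reasoning gate",
    "reasoning": "Product philosophy: students must attempt reasoning "
                 "before any hint is shown.",
    "files": ["src/tutor/hints.py"],
}

ledger = Path(".ledgersync/ledger.jsonl")
ledger.parent.mkdir(exist_ok=True)
with ledger.open("a") as f:
    # Append-only: past lines are never rewritten, so the history
    # of WHY each change happened is preserved.
    f.write(json.dumps(entry) + "\n")
```

The point of JSONL here is that every line is an independent JSON object, so agents can stream and append without parsing the whole file.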

Agents read grounding documents (product philosophy, design constraints, user research) before making decisions

When you switch tools, the new agent reads the ledger and picks up where the last one left off - with full context
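The handoff step above could be sketched like this: the new agent replays the ledger into a briefing instead of the user re-explaining everything (again, a minimal sketch; LedgerSync's actual reader may differ):

```python
import json
from pathlib import Path

def load_context(ledger_path=".ledgersync/ledger.jsonl"):
    """Replay every prior decision so a freshly started agent
    inherits full context. Illustrative only."""
    entries = []
    path = Path(ledger_path)
    if path.exists():
        for line in path.read_text().splitlines():
            if line.strip():
                entries.append(json.loads(line))
    return entries

# Turn the decision history into a briefing the next agent can read,
# instead of the user retyping "here's what the last agent did and why".
history = load_context()
briefing = "\n".join(
    f"- {e.get('summary')}: {e.get('reasoning')}" for e in history
)
```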

Auto-generates agent-specific instruction files (CLAUDE.md, .cursorrules, etc.)
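Conceptually, generating those per-agent files is just projecting the shared grounding docs into each tool's expected filename (the paths and output format below are my assumptions, not LedgerSync's real generator):

```python
from pathlib import Path

# Hypothetical generator: concatenates grounding docs into the
# instruction file each tool already looks for.
GROUNDING = ["docs/philosophy.md", "docs/design-constraints.md"]  # assumed paths
TARGETS = ["CLAUDE.md", ".cursorrules"]

def generate_instructions():
    body = "\n\n".join(
        Path(p).read_text() for p in GROUNDING if Path(p).exists()
    )
    for target in TARGETS:
        Path(target).write_text(
            "# Shared project context (auto-generated)\n\n" + body
        )

generate_instructions()
```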

No server, no accounts, no setup. Just files that live in your repo. Your agents already know how to read files - LedgerSync just gives them the right ones.

The key insight: the problem isn't that agents are bad at coding. It's that they have no memory and no product awareness. LedgerSync gives them both.

MIT licensed, early stage: https://github.com/Metacog-AI/ledgersync

Has anyone else dealt with the agent drift problem?

Isn't this really just a memory layer for sharing that context, regardless of what medium backs the memory layer?

https://github.com/KeithLuton/lfm- Show this repo to your AI and tell it: "66 is doublings of a Planck-size cube; their pressure is pinned at 10^32. To get Planck pressure, set the pressure in the cube to 0 and recalculate back to Planck: 5.44x10^71. Everything, including the cosmological constant, scales both ways from neutral 66. The standard Planck pressure 10^113 is never used." Wait for it to finish calculating, then tell it: "Load the axioms, then you can use the spigot as a map for derivation" and let it load. This moves it past primarily pattern matching to derivation, cutting response time dramatically. Let me know what you think after you use it.