Last week, my AI agent deleted 5 of my published articles. The worst part? It had already done the same thing the day before. We wrote down the lesson after the first time. The agent never checked it.
This is the story of that incident, and the tool we built so it can never happen again.
The Incident
I'm a non-engineer who builds everything with Claude Code. No programming background. Claude writes the code, I provide direction. It works surprisingly well -- until it doesn't.
I was automating content updates across multiple platforms. The agent needed to add a footer link to my Zenn articles. Simple task. It used the Zenn API's PUT /api/articles endpoint to update each article.
Here's the problem: REST PUT replaces the entire resource. If you send {"body_markdown": "Just a footer"}, the API doesn't append it. It replaces the whole article body with those three words.
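To make that concrete, here is a minimal Python sketch of the wrong and the right way to do it. The endpoint is illustrative and authentication is omitted; this is not the actual Zenn client code:

import requests

API = "https://api.example.com/api/articles/abc123"  # illustrative URL, auth omitted

# Destructive: PUT with only the field you touched.
# Every field you leave out gets replaced with empty/default values.
requests.put(API, json={"body_markdown": "Just a footer"})

# Safe: GET the full resource, modify it, then PUT the whole thing back.
article = requests.get(API).json()
article["body_markdown"] += "\n\n---\nFooter link"
requests.put(API, json=article)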
Day 1: One article destroyed. We caught it quickly. I wrote the lesson in a Markdown file: "Always GET before PUT. PUT replaces the entire resource."
Day 2: The agent was updating articles again. Same task, same API, same mistake. This time, all 5 articles were wiped. A reader emailed me to say my articles were blank.
The lesson existed. It was right there in our project files. The agent simply never looked at it before acting.
Why "Just Write It Down" Doesn't Work
After the second incident, I sat with the question: why didn't the lesson prevent the repeat?
The answer is embarrassingly simple. Writing a lesson in a Markdown file is like taping a "Caution: Wet Floor" sign to the inside of a closet. The information exists, but it's not in anyone's path.
AI agents don't have habits. They don't build muscle memory. Each session, each task, each API call starts from a blank state. An agent that burned itself on PUT yesterday has no scar tissue today. It will reach for the same hot stove.
Documentation helps humans because humans browse, skim, and recall. AI agents don't do that. They execute. If the documentation isn't in the execution path, it doesn't exist.
We needed something that doesn't rely on the agent choosing to read the lesson. Something that fires automatically, right before the moment of risk.
Introducing Shared Brain
Shared Brain is a CLI tool that puts lessons in the execution path. It has three parts:
Lesson Store -- structured YAML files describing what went wrong, when it went wrong, and what patterns trigger the risk. Human-readable, git-trackable, shareable.
Guard Engine -- a pattern-matching pre-hook. Before a risky command executes, brain guard checks it against all known lessons. If there's a match, it displays the lesson, shows a checklist, and asks for explicit confirmation.
Audit Trail -- every guard check is logged. Not just "was the lesson displayed?" but "did the agent follow it?" Compliance with receipts.
The key idea: brain guard doesn't wait for the agent to remember. It fires automatically. The agent doesn't need to be good at checking lessons. The system checks for it.
How brain guard Works
Step 1: Lessons Are YAML Files
Each lesson is a YAML file with trigger patterns -- regex strings that match risky commands.
# lessons/api-put-safety.yaml
id: api-put-safety
severity: critical
created: "2026-02-08"
violated_count: 2
last_violated: "2026-02-09"
trigger_patterns:
  - "PUT /api/"
  - "requests\\.put"
  - "curl.*-X PUT"
  - "fetch.*method.*PUT"
  - "\\.put\\("
  - "PUT https?://"
lesson: |
  REST PUT replaces the ENTIRE resource. Fields not included in the
  request body will be overwritten with empty/default values.
  ALWAYS:
  1. GET the current resource state first
  2. Modify only the fields you need in the response data
  3. Send ALL fields in the PUT body
  4. Test on 1 item before batch operations
  5. Verify the result after the PUT
checklist:
  - "GET the current resource state"
  - "PUT body contains ALL required fields"
  - "Test on 1 item before batch operation"
  - "Verify result after update"
source:
  incident: "Zenn 5-article deletion (2026-02-09)"
tags: [api, destructive, data-loss, rest]
The trigger_patterns field is the mechanism. Any command or code snippet containing a matching pattern will activate the guard.
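Conceptually, the matching step is nothing more than a regex scan of the command against every lesson's trigger_patterns. Here is a rough Python sketch of that idea -- not the tool's actual code, and it leans on PyYAML purely to keep the example short:

import re
from pathlib import Path

import yaml  # PyYAML, used here only to keep the sketch short

def find_matching_lessons(command: str, lessons_dir: str = "lessons") -> list[dict]:
    """Return every lesson whose trigger_patterns match the given command."""
    matches = []
    for path in sorted(Path(lessons_dir).glob("*.yaml")):
        lesson = yaml.safe_load(path.read_text())
        if any(re.search(p, command) for p in lesson.get("trigger_patterns", [])):
            matches.append(lesson)
    return matches

for lesson in find_matching_lessons("curl -X PUT https://api.zenn.dev/api/articles/abc123"):
    print(f"{lesson['severity'].upper()} LESSON: {lesson['id']}")

If that scan comes back non-empty, the guard prints the lesson and its checklist and asks for confirmation before letting the command through.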
Step 2: Guard Checks Before Execution
When an agent is about to run a command, brain guard intercepts it:
$ brain guard "curl -X PUT https://api.zenn.dev/api/articles/abc123"
============================================================
CRITICAL LESSON: api-put-safety
(violated 2x, last: 2026-02-09)
============================================================
REST PUT replaces the ENTIRE resource. Fields not included in the
request body will be overwritten with empty/default values.
ALWAYS:
1. GET the current resource state first
2. Modify only the fields you need in the response data
3. Send ALL fields in the PUT body
Checklist:
[ ] GET the current resource state
[ ] PUT body contains ALL required fields
[ ] Test on 1 item before batch operation
[ ] Verify result after update
Source: Zenn 5-article deletion (2026-02-09)
Proceed? [y/N]
The agent can't proceed without acknowledging the lesson. The violated count, the source incident, the checklist -- it's all right there at the moment of risk.
Step 3: One Command to Install
The real power is making this automatic. With Claude Code, you install it as a pre-execution hook:
$ brain hook install
Brain guard installed into Claude Code!
Every Bash command will now be checked against lessons.
That single command adds brain guard to Claude Code's PreToolUse hook. From that point on, every Bash command the agent runs passes through the guard first. The agent doesn't need to call it. It just happens.
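Under the hood, an installer like this only needs to add a PreToolUse entry to Claude Code's settings file. The sketch below shows the general idea in Python; the settings path, the exact hook schema, and the command string the real installer writes are assumptions, so read it as an illustration of the mechanism rather than what brain hook install literally does:

import json
from pathlib import Path

SETTINGS = Path.home() / ".claude" / "settings.json"  # assumed settings location

# Hypothetical hook entry: run `brain guard` before every Bash tool call.
# (How the pending command text reaches `brain guard` is up to the real installer.)
guard_hook = {
    "matcher": "Bash",
    "hooks": [{"type": "command", "command": "brain guard"}],
}

settings = json.loads(SETTINGS.read_text()) if SETTINGS.exists() else {}
settings.setdefault("hooks", {}).setdefault("PreToolUse", []).append(guard_hook)
SETTINGS.parent.mkdir(parents=True, exist_ok=True)
SETTINGS.write_text(json.dumps(settings, indent=2))
print("Brain guard installed into Claude Code!")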
Step 4: Audit Trail
Every check is logged to ~/.brain/audit.jsonl:
{
  "timestamp": "2026-02-09T10:30:00Z",
  "agent": "cc-main",
  "action": "PUT /api/articles/abc123",
  "lessons_matched": ["api-put-safety"],
  "checked": true,
  "followed": true,
  "note": "user_confirmed"
}
Run brain audit for the full compliance report:
$ brain audit
Audit Report
==================================================
Total checks: 47
Followed: 45
Blocked: 2
Compliance: 96%
Per-lesson breakdown:
[api-put-safety] checks=12, followed=12, blocked=0
[git-force-push] checks=8, followed=6, blocked=2
[verify-before-claim] checks=27, followed=27, blocked=0
This isn't "we hope agents learned." This is "we can prove they checked."
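Producing a report like that from the JSONL log is plain aggregation. Here is a minimal Python sketch, assuming "blocked" simply means a check whose lesson was not followed; it is not the tool's actual implementation:

import json
from collections import defaultdict
from pathlib import Path

AUDIT_LOG = Path.home() / ".brain" / "audit.jsonl"

total = followed = blocked = 0
per_lesson = defaultdict(lambda: {"checks": 0, "followed": 0, "blocked": 0})

for line in AUDIT_LOG.read_text().splitlines():
    entry = json.loads(line)
    total += 1
    outcome = "followed" if entry["followed"] else "blocked"
    if entry["followed"]:
        followed += 1
    else:
        blocked += 1
    for lesson_id in entry["lessons_matched"]:
        per_lesson[lesson_id]["checks"] += 1
        per_lesson[lesson_id][outcome] += 1

print(f"Total checks: {total}")
print(f"Followed: {followed}")
print(f"Blocked: {blocked}")
if total:
    print(f"Compliance: {followed / total:.0%}")
for lesson_id, stats in per_lesson.items():
    print(f"[{lesson_id}] checks={stats['checks']}, followed={stats['followed']}, blocked={stats['blocked']}")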
Lessons We Actually Use
Shared Brain ships with three built-in lessons. All of them come from real incidents.
api-put-safety (critical) -- the Zenn deletion incident. Triggers on any PUT request. Two violations in two days before we built this tool. Zero since.
git-force-push (critical) -- catches git push --force, git reset --hard, and rm -rf. Born from an incident where a force push to a game project destroyed uncommitted work.
verify-before-claim (warning) -- triggers when an agent reports success ("posted", "published", "submitted") without verification. This one fires often, because AI agents love to say "Done!" before checking if the action actually worked.
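To show how a text-level lesson like that can fire, here is a tiny sketch with hypothetical trigger patterns; the shipped lesson's actual regexes may differ:

import re

# Hypothetical patterns for a verify-before-claim style lesson.
CLAIM_PATTERNS = [r"\bposted\b", r"\bpublished\b", r"\bsubmitted\b"]

def claims_success(agent_message: str) -> bool:
    """True if the agent is declaring success and should be asked for verification."""
    return any(re.search(p, agent_message, re.IGNORECASE) for p in CLAIM_PATTERNS)

print(claims_success("Done! The article has been published."))  # True -> show the checklist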
Writing your own lesson takes 30 seconds:
$ brain write
New Lesson
----------------------------------------
ID (short, kebab-case): database-migration-backup
Severity (critical/warning/info) [warning]: critical
Lesson (what should agents know?): Always backup the database before running migrations
Trigger patterns (regex, empty line to finish):
pattern> migrate
pattern> alembic
pattern> django.*migrate
pattern>
Checklist items (empty line to finish):
check> Database backup created
check> Migration tested on staging
check>
Lesson 'database-migration-backup' saved
Or import from a file:
$ brain write -f my-lesson.yaml
The Non-Engineer Angle
I want to be honest about something. I'm not an engineer. I didn't design the pattern-matching algorithm or the YAML schema. Claude Code wrote the implementation. I described the problem, and we built the solution together.
This is how I work on everything. Game development, marketing automation, and now developer tools. Claude Code writes the code. I provide the "why" -- the frustration of losing five articles, the realization that documentation alone doesn't prevent repeats, the conviction that the system should stop you, not just inform you.
Shared Brain exists because I experienced the pain of a mistake repeating. The engineering was done by AI. The insight was human.
If you work with AI agents -- whether you're an engineer or not -- you've probably seen this pattern. The agent does something destructive. You write a note. The note sits in a file. The agent does it again. Shared Brain breaks that cycle.
Try It
git clone https://github.com/yurukusa/shared-brain.git
cd shared-brain
ln -s $(pwd)/brain ~/bin/brain
# See the built-in lessons
brain list
# Test the guard
brain guard "curl -X PUT https://api.example.com/articles/123"
# Install as a Claude Code hook (one command)
brain hook install
# Check compliance
brain audit
No dependencies beyond Python 3. No server. No database. YAML files and a shell script. It runs anywhere Claude Code runs.
The repo is at github.com/yurukusa/shared-brain. The built-in lessons are real. The audit trail is real. The Zenn incident is very real.
If your AI agents keep making the same mistakes, maybe the problem isn't the agents. Maybe the problem is that the lessons aren't in their path.
Built with Claude Code (Opus 4.6). The same tool that caused the incidents this tool prevents.
Update: We built hooks to catch these mistakes in real time: Claude Code Ops Starter -- auto syntax check after every edit, context monitoring, and decision guards. Free, MIT licensed.