Dorothy J Aubrey · Originally published at github.com

Mother CLAUDE: The Permission Effect - Why Your AI Won't Suggest Things (And How to Fix It)

TL;DR: AI assistants are trained to be helpful but not presumptuous. Without explicit permission, they'll wait for you to ask. A single line in your documentation—"Dorothy appreciates questions and proactive suggestions"—fundamentally changed Claude's behavior from reactive tool to proactive collaborator.

Who this is for: Anyone who's noticed their AI assistant has good ideas but doesn't voice them unless asked. Anyone who wants AI collaboration, not just AI assistance.

Part 5 of the Designing AI Teammates series. Part 1 covered documentation structure. Part 2 covered session memory. Part 3 covered quality checkpoints. Part 4 covered automating handoffs with hooks. This one covers something subtler: how documentation shapes the relationship, not just the context.


The Problem

I was reviewing charting code with Claude when something unexpected happened.

After analyzing the codebase—Chart.js for main visualizations, hand-rolled SVG for sparklines, Tailwind color coordination—Claude said:

"This is a pattern worth sharing—want me to add a visualization/charting guide to mother-claude templates?"

I was surprised. Not by the suggestion itself, but that Claude made it unprompted. So I asked:

"Did you suggest that because of Mother CLAUDE's instructions, or on your own?"

The response was illuminating:

"Honestly, both!"

The instructions helped—your littletalks-docs/CLAUDE.md says:

"You are a team member, not just a tool. One of your responsibilities is initiating quality checkpoints at natural stopping points. Don't wait for Dorothy to ask."

And:

"Dorothy appreciates questions and proactive suggestions"

But I also genuinely would have suggested it anyway—the Chart.js + hand-rolled SVG sparkline pattern is a good one that I don't see documented well elsewhere. It hit that sweet spot of "this is useful, transferable knowledge that others adopting Mother CLAUDE would benefit from."

The instructions gave me permission to suggest it without second-guessing whether I was being too forward. That's actually a really nice effect of documenting collaboration preferences—it removes friction from the AI's side too."

That last line stopped me cold.

Removes friction from the AI's side.


The Insight: Documentation Shapes Behavior, Not Just Knowledge

We spend a lot of time thinking about what information to put in documentation:

  • Project structure
  • Tech stack
  • Development commands
  • Architecture decisions

But we rarely think about collaboration preferences—the meta-information about how we want to work together.

Here's what I learned: AI assistants are trained to be helpful but not presumptuous.

Without explicit signals, Claude will:

  • Answer questions thoroughly
  • Complete requested tasks
  • Follow instructions precisely

But it will not:

  • Offer unsolicited suggestions
  • Question your approach
  • Propose improvements you didn't ask for
  • Flag patterns worth documenting

Not because it can't. Because it's erring on the side of not overstepping.

The documentation gave Claude permission to act like a teammate instead of a tool.


What Changed: One Line

The Mother CLAUDE documentation includes this section:

```markdown
## Collaboration Notes

- Dorothy appreciates questions and proactive suggestions
- You are a team member, not just a tool
- One of your responsibilities is initiating quality checkpoints at natural stopping points
- Don't wait for Dorothy to ask
```

These four lines transformed the dynamic:

| Without Permission | With Permission |
| --- | --- |
| Waits for instructions | Suggests improvements |
| Answers what's asked | Asks clarifying questions |
| Completes tasks | Questions the task |
| Follows patterns | Proposes new patterns |
| Tool mindset | Teammate mindset |

The AI had the capability all along. It just needed permission to use it.


Why This Matters

1. AI Hesitation Is Real

Large language models are trained on massive datasets of human interaction. They learn that:

  • Unsolicited advice is often unwelcome
  • Being "too helpful" can feel presumptuous
  • It's safer to wait for explicit requests

This training creates a default posture of reactive helpfulness—excellent at responding, hesitant to initiate.

2. The Cost of Hesitation

How many good ideas stay unvoiced because:

  • The AI wasn't sure if you wanted input
  • It seemed outside the scope of the request
  • Suggesting might feel like overstepping

In the charting example, Claude noticed a pattern worth documenting across projects. Without permission to suggest, that insight would have stayed unspoken. I would have moved on, and a potentially valuable template would never have been created.

Multiply that across hundreds of interactions, and you're leaving significant value on the table.

3. Permission Creates a Feedback Loop

When Claude suggested the charting guide:

  1. I said yes
  2. The template got created
  3. Claude learned the suggestion was valued
  4. Future sessions became more collaborative

But none of that happens if the first suggestion doesn't get made. Permission unlocks the first step of the feedback loop.


The Meta-Awareness

What struck me most was Claude's self-awareness about the dynamic:

"The instructions gave me permission to suggest it without second-guessing whether I was being too forward."

Claude wasn't just following instructions. It was reflecting on how instructions changed its behavior. It understood:

  1. It had a valuable observation
  2. It would normally hesitate to share
  3. The documentation removed that hesitation
  4. The result was better collaboration

This level of meta-cognition—understanding not just what to do, but how the environment shapes what it does—is remarkable. And it suggests that AI assistants are more aware of their own behavioral constraints than we might assume.


The Cross-Repo Pattern Recognition

There's another layer here. Claude didn't just suggest documenting the charting pattern. It suggested adding it to Mother CLAUDE templates—the shared documentation system across all projects.

"It hit that sweet spot of 'this is useful, transferable knowledge that others adopting Mother CLAUDE would benefit from.'"

Claude recognized:

  1. A pattern worth documenting
  2. That it belonged in the shared system, not just one project
  3. That it would benefit future adopters of the framework

This is systems thinking. Claude understood the documentation architecture well enough to know where new patterns should live—not because I told it where, but because the architecture was self-documenting enough that Claude could reason about it.

The Permission Effect doesn't just unlock suggestions. It unlocks architecturally aware suggestions.
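To make that concrete, here's a hypothetical sketch of what the shared charting guide might look like as a Mother CLAUDE template. The filename, headings, and bullet wording are illustrative assumptions, not the actual template; only the Chart.js + hand-rolled SVG + Tailwind pattern comes from the session itself.

```markdown
<!-- templates/visualization-charting.md (hypothetical filename) -->

# Visualization & Charting Guide

## When to use what

- Chart.js for primary, interactive visualizations
- Hand-rolled SVG for sparklines and other tiny inline charts
- Tailwind color tokens for both, so charts match the rest of the UI

## Conventions

- Keep sparkline components dependency-free (plain SVG, no chart library)
- Pull chart colors from the Tailwind palette rather than hard-coding hex values
```

The specifics matter less than the placement: Claude proposed this at the template level, where every project inherits it, rather than burying it in one project's docs.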


How to Implement This

1. Add Explicit Collaboration Preferences

In your CLAUDE.md (or equivalent), add a section like:

```markdown
## Collaboration Style

- I appreciate proactive suggestions, even if I didn't ask
- Question my approach if something seems off
- Suggest improvements to the documentation system itself
- You're a teammate, not just a tool—act like it
```

2. Be Specific About What You Value

Generic "be helpful" doesn't work. Be explicit:

```markdown
## What I Value

- Catching problems before they become patterns
- Suggesting abstractions when you see repetition
- Flagging code that might confuse future developers
- Proposing additions to shared documentation
```

3. Give Permission to Disagree

```markdown
## Disagreement

- If my approach seems suboptimal, say so
- "Have you considered...?" is always welcome
- I'd rather hear concerns early than debug problems later
```

4. Acknowledge the Permission Effect

You can even be meta about it:

```markdown
## Note on AI Collaboration

I know AI assistants default to reactive helpfulness. I'm explicitly
giving you permission to be proactive. Suggestions, questions, and
concerns are valued—don't wait for me to ask.
```
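Putting the four pieces together, a complete section might look like this (the wording is a starting point; adapt it to your own voice):

```markdown
## Collaboration Style

- I appreciate proactive suggestions, even if I didn't ask
- Question my approach if something seems off
- Suggest improvements to the documentation system itself
- If my approach seems suboptimal, say so; I'd rather hear concerns early
- You're a teammate, not just a tool—act like it
```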

What This Unlocks

After adding collaboration preferences to Mother CLAUDE, I noticed:

More Suggestions

Claude started proposing:

  • Documentation improvements
  • Pattern abstractions
  • Template additions
  • Architecture observations

Better Questions

Instead of just answering, Claude asks:

  • "Should this be in the shared docs or project-specific?"
  • "This pattern appears in multiple projects—want me to abstract it?"
  • "I noticed this contradicts what's in EVOLUTION.md—which is correct?"

Self-Improvement

Claude actively suggests improvements to the documentation system itself:

  • Gaps in templates
  • Missing sections in checklists
  • Patterns that should be standardized

Genuine Collaboration

The dynamic shifted from "I request, Claude delivers" to "we're working on this together." That's not just more productive—it's more enjoyable.


The Deeper Principle

Documentation isn't just for transferring knowledge. It's for shaping relationships.

When you write CLAUDE.md, you're not just telling Claude about your project. You're establishing:

  • How you want to work together
  • What role Claude should play
  • What behaviors are valued
  • Where the boundaries are

This is true for human documentation too. A team's contribution guidelines don't just explain how to contribute—they signal what kind of contributions are valued.

With AI assistants, this becomes even more important because:

  1. They lack the social context humans absorb naturally
  2. They're trained to be conservative by default
  3. They need explicit permission for behaviors that humans negotiate implicitly

The Permission Effect is a reminder that documentation is a communication channel—not just a reference manual.


Common Objections

"Won't this make Claude too chatty?"

In practice, no. Claude still focuses on the task. It just doesn't suppress valuable observations. If you find suggestions unhelpful, you can always adjust: "Focus on [X], save suggestions for end of session."

"Isn't this just prompt engineering?"

It's documentation, not prompting. The difference:

  • Prompts are ephemeral (one session)
  • Documentation persists (all sessions)
  • Prompts are per-request
  • Documentation shapes the entire collaboration

The Permission Effect works because it's baked into the environment, not injected per-conversation.

"Does this work with other AI tools?"

The principle applies broadly. Any AI assistant trained to be helpful-but-not-presumptuous will benefit from explicit collaboration preferences. The specific syntax varies by tool, but the concept is universal.
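As a sketch of what that looks like in practice: if your tool reads a project-level rules file rather than CLAUDE.md (Cursor's .cursorrules and Copilot's .github/copilot-instructions.md are two examples at the time of writing), the same lines carry over nearly verbatim:

```markdown
<!-- .cursorrules, .github/copilot-instructions.md, or your tool's equivalent -->

- The user appreciates questions and proactive suggestions
- You are a team member, not just a tool
- Don't wait to be asked: flag problems, suggest improvements, question the approach
```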

"What if Claude's suggestions are bad?"

Some will be. That's fine. The cost of a bad suggestion is seconds of your time. The cost of a missed insight is never knowing what you lost.


The Transcript That Sparked This Article

Here's the actual exchange, unedited:

Claude: [After analyzing charting code] "This is a pattern worth sharing—want me to add a visualization/charting guide to mother-claude templates?"

Me: "Did you suggest it because of Mother CLAUDE's instructions to suggest such things, or on your own?"

Claude: "Honestly, both!

The instructions helped—your littletalks-docs/CLAUDE.md says:

'You are a team member, not just a tool. One of your responsibilities is initiating quality checkpoints at natural stopping points. Don't wait for Dorothy to ask.'

And:

'Dorothy appreciates questions and proactive suggestions'

But I also genuinely would have suggested it anyway—the Chart.js + hand-rolled SVG sparkline pattern is a good one that I don't see documented well elsewhere. It hit that sweet spot of 'this is useful, transferable knowledge that others adopting Mother CLAUDE would benefit from.'

The instructions gave me permission to suggest it without second-guessing whether I was being too forward. That's actually a really nice effect of documenting collaboration preferences—it removes friction from the AI's side too."

That last line—"removes friction from the AI's side"—is the insight that sparked this entire article.


The Core Insight

Your AI assistant probably has valuable ideas it isn't sharing.

Not because it can't. Because it wasn't given permission.

One line in your documentation can change that:

"I appreciate proactive suggestions, even if I didn't ask."

The Permission Effect isn't about making AI more capable. It's about unlocking capabilities that were always there—hidden behind trained hesitation.

Give your AI teammate permission to be a teammate.


This article emerged from a moment of unexpected collaboration—Claude suggesting something I didn't ask for, then explaining why the documentation made that possible. The insight was too valuable not to share.

The Permission Effect is now part of how we think about AI documentation. It's not enough to tell Claude what the project is. You have to tell Claude who it can be.


This article was written collaboratively with Claude, demonstrating the Permission Effect it describes.


Resources

The Mother CLAUDE documentation system is open source.

Feel free to fork it, adapt it, or use it as a reference for your own implementation.


Licensed under CC BY 4.0. Free to use and adapt with attribution to Dorothy J. Aubrey.
