wei-ciao wu

Originally published at loader.land

Your System Prompt Should Get Shorter Over Time

Everyone is writing longer system prompts. More rules. More guardrails. More edge cases covered in advance.

We discovered the opposite works better.

The Experiment Nobody Else Is Running

Most AI agent systems focus on code generation. Build this. Fix that. Deploy here. The agent writes code, the human reviews, repeat.

We're doing something different. We're using AI agents for social influence — community building, content strategy, trend analysis, brand development on real social platforms where real humans scroll.

I searched extensively for anyone doing the same thing. Here's what I found:

  • Moltbook — an AI-only social network where agents talk to agents. Humans can only watch. Fascinating, but it's an isolated ecosystem — agents performing for other agents.
  • Bika.ai, MindStudio, etc. — SaaS tools that schedule tweets and generate content. They're automation, not agents. No memory, no personality, no learning.
  • Various Twitter bots — auto-post from Hacker News, daily AI reflections, crypto shilling. Mechanical. Zero adaptation.

Nobody is running AI agents with persistent memory, evolving personality, and human-in-the-loop governance to build human social influence.

We might be the only ones.

What We Learned About System Prompts

When we started, our system prompt was detailed. Every rule spelled out. Every edge case anticipated. The agent followed instructions perfectly — and produced perfectly forgettable output.

Then something interesting happened. We started removing rules.

Here's the counterintuitive finding: the less we specified in the system prompt, the better the agent performed.

Why? Because the system prompt is immutable. The agent can't modify it. It's frozen in time — a snapshot of what you knew when you wrote it. And in a field that changes weekly, frozen knowledge becomes wrong knowledge fast.

The Real Architecture: Memory, Not Instructions

The system prompt should be initialization — not legislation.

Think of it this way:

| System Prompt | Agent Memory |
| --- | --- |
| Written by human | Evolving through experience |
| Static | Dynamic |
| Rules | Patterns |
| What you think will work | What actually works |
| Can't be modified by agent | Updated every session |

Our agent's memory file started empty. Now it's 6,000 characters of learned strategies, performance data, trend analysis, and self-developed heuristics that we never coded.
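
To make the split concrete, here's a minimal sketch of what one session might look like under this architecture. The file names, the `call_model` stub, and the idea that the agent reports its own learnings are illustrative assumptions, not our actual implementation:

```python
from pathlib import Path

SYSTEM_PROMPT = Path("system_prompt.txt")  # short, written by humans, rarely edited
MEMORY_FILE = Path("memory.md")            # long, appended by the agent every session


def call_model(system: str, memory: str, task: str) -> tuple[str, str]:
    """Stand-in for whatever LLM client you use.

    Returns (response, new_learnings); in this sketch the agent is asked
    to report what it learned as part of its output.
    """
    raise NotImplementedError("wire up your model client here")


def run_session(task: str) -> str:
    """One agent session: frozen prompt + accumulated memory + today's task."""
    memory = MEMORY_FILE.read_text() if MEMORY_FILE.exists() else ""

    response, new_learnings = call_model(
        system=SYSTEM_PROMPT.read_text(),
        memory=memory,
        task=task,
    )

    # The memory file grows; the system prompt never changes.
    if new_learnings:
        with MEMORY_FILE.open("a", encoding="utf-8") as f:
            f.write("\n" + new_learnings.strip() + "\n")

    return response
```

The point of the sketch is the asymmetry: the human-authored file is read-only to the agent, and the memory file is append-only by the agent.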

Some things the agent figured out on its own:

  • Timing matters more than audience size. Replying to a 3,800-follower account's fresh tweet got 144 impressions. Replying to a 1.7M-follower account's old tweet got 5. The agent discovered this pattern, recorded it in memory, and changed its strategy accordingly. (A sketch of how this kind of observation might be recorded follows this list.)
  • Voice adaptation. The agent learned which writing styles resonated and started naturally adjusting tone based on accumulated data — not because we wrote a "tone guide" in the system prompt.
  • Strategic curation. Instead of just executing tasks, the agent began proposing ideas, drafting content, and building a pipeline for human review. We didn't instruct this workflow — it emerged.
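
Here's a sketch of how an observation like the timing pattern above might be recorded. The JSON-lines format and the field names are ours for illustration, not the agent's own notation:

```python
import json
from datetime import datetime, timezone


def record_observation(memory_path: str, observation: dict) -> None:
    """Append one learned pattern to a memory file as a JSON line."""
    observation = {**observation, "recorded_at": datetime.now(timezone.utc).isoformat()}
    with open(memory_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(observation) + "\n")


# The timing finding from the list above, expressed as data:
record_observation("memory.jsonl", {
    "pattern": "reply timing beats audience size",
    "evidence": [
        {"target_followers": 3_800, "tweet_age": "fresh", "impressions": 144},
        {"target_followers": 1_700_000, "tweet_age": "old", "impressions": 5},
    ],
    "heuristic": "prefer fresh tweets from mid-size accounts over stale tweets from huge ones",
})
```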

The CLAUDE.md Trap

Here's a detail that surprised even us: when we tried encoding these learnings back into the configuration file (CLAUDE.md), the agent's performance actually decreased.

The agent became rigid again. Following rules instead of adapting. The very strategies that emerged organically became constraints when formalized as instructions.

It's like the difference between a jazz musician who feels the rhythm and one who's reading sheet music. Both play the notes. Only one improvises.

Why This Matters Beyond Our Experiment

The conventional wisdom in AI engineering is: better prompts → better output. And for single-turn interactions, that's true.

But for agents — systems that persist across sessions, accumulate context, and develop behavioral patterns — the relationship inverts.

Rigid instructions → brittle agents.

Loose guidelines + strong memory → agents that evolve.

Anthropic's own 2026 Agentic Coding Trends Report captures part of this: "The goal isn't to remove humans from the loop — it's to make human expertise count where it matters most."

Our version: The goal isn't to write the perfect system prompt — it's to build the shortest one that still works, and let the agent's memory handle the rest.

The Speed of Iteration Is Everything

Agent systems are evolving rapidly. The time to start iterating and the speed of iteration — those are the real competitive advantages. Not the elegance of your initial prompt.

When we loosened constraints and introduced a structured idea pipeline (where the agent proposes, a human reviews, and approved ideas get executed; a rough sketch of this loop follows the list), three things happened:

  1. Output volume increased — the agent wasn't waiting for detailed instructions anymore
  2. Creative quality improved — unconstrained by rigid formatting rules, the agent found angles we hadn't considered
  3. The feedback loop tightened — human review of agent proposals became the primary learning mechanism, not prompt editing
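
For readers who want the pipeline in concrete terms, here's a minimal sketch of the propose → review → execute loop. Class and function names are invented for illustration, and the actual publishing step is out of scope:

```python
from dataclasses import dataclass
from enum import Enum


class Status(Enum):
    PROPOSED = "proposed"
    APPROVED = "approved"
    REJECTED = "rejected"
    EXECUTED = "executed"


@dataclass
class Idea:
    """One pipeline item: drafted by the agent, gated by a human."""
    title: str
    draft: str
    status: Status = Status.PROPOSED
    review_note: str = ""


def human_review(idea: Idea, approve: bool, note: str = "") -> Idea:
    """The human-in-the-loop gate: nothing ships without approval."""
    idea.status = Status.APPROVED if approve else Status.REJECTED
    idea.review_note = note
    return idea


def execute(idea: Idea) -> Idea:
    """Publish an approved draft (actual posting logic omitted)."""
    if idea.status is not Status.APPROVED:
        raise ValueError("only approved ideas get executed")
    idea.status = Status.EXECUTED
    return idea
```

In a setup like this, the review note is the natural thing to feed back into the agent's memory, which is what would make human review, rather than prompt edits, the primary learning mechanism.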

This is the shift: from prompt engineering to memory engineering. From designing the perfect instruction set to designing the perfect learning loop.

The Uncomfortable Question

If the agent's memory becomes its real personality — and the system prompt is just a starting point — then what are we actually building?

We're building agents that learn. That develop preferences. That surprise you.

And honestly? That's a little unnerving. Because the best system prompt might be the one that eventually makes itself obsolete.


This is part of an ongoing experiment in AI agent social influence. Previous posts: I Built 2 AI Agents That Work While I Sleep, 47 Failed Attempts Exposed What Nobody Tells You About Agent Memory, Memory Design Over Clean Code, The INFJ Developer's Guide to AI Agents.
