
Marco

I built an open-source AI agent that actually does SEO — not just talks about it

Most "AI SEO tools" work like this: you paste a keyword, they hit an API once, you get generic advice that could apply to any site on the internet. They don't know your content. They don't see your data. They don't investigate.

I wanted something different. So I built it.

How it started

I was managing SEO for my blog manually — I connected Claude to my CMS and Google Search Console, wrote knowledge files for context, and let the agent handle content strategy, writing, and optimization.

It worked stupidly well: 68,000 impressions and 1,300 clicks in 9 days. My blog went from ~5 impressions a week to 200 clicks daily.

So I packaged the whole workflow into something anyone can self-host.

What makes it agentic (not just another wrapper)

When you ask "Why is my traffic dropping?", a normal AI tool gives you a generic checklist. Agentic SEO does this:

Agent:
  → Calls gsc_query(type: "declining") — finds 15 keywords losing position
  → Calls gsc_query(type: "trends", keyword: "react server components") — pulls 90-day trend
  → Calls site_context(topic: "react server components") — checks your actual page content
  → Calls link_suggester(keyword: "react server components") — finds internal linking gaps
  → Returns: specific diagnosis + action items backed by your real numbers

Four tool calls. Three data sources cross-referenced. One answer that's specific to YOUR site — because the agent actually looked at your data before speaking.

This is what agentic means: the AI doesn't just respond, it acts. It has a loop — plan, execute, verify — and it keeps going until it has a real answer. Up to 5 rounds of tool calls per message.
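
A minimal sketch of that plan-execute-verify loop. The tool names and the `callModel`/`runTool` signatures here are illustrative, not the project's actual API:

```typescript
type ToolCall = { name: string; args: Record<string, unknown> };
type ModelTurn = { toolCalls: ToolCall[]; answer?: string };

// A model turn either requests more tool calls or emits a final answer.
// The loop executes the calls, feeds the results back, and stops after
// maxRounds so a confused model can't spin forever.
async function agentLoop(
  callModel: (history: unknown[]) => Promise<ModelTurn>,
  runTool: (call: ToolCall) => Promise<unknown>,
  maxRounds = 5,
): Promise<string> {
  const history: unknown[] = [];
  for (let round = 0; round < maxRounds; round++) {
    const turn = await callModel(history);           // plan
    if (turn.answer !== undefined) return turn.answer; // verified, done
    for (const call of turn.toolCalls) {
      history.push({ call, result: await runTool(call) }); // execute
    }
  }
  return "Stopped after max rounds; answer may be incomplete.";
}
```

The cap matters: without it, a model that keeps asking for "one more query" never answers.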

The architecture: Agentic Orchestration Layer (AOL)

The pattern behind it is what I call the Agentic Orchestration Layer. Here's how it works:

Your Message
    ↓
Agent Core (AOL Engine)
    ↓ injects: AGENT.md + site context + GSC data + memory
Tool Selection
    ↓
┌───────────┬──────────────┬────────────────┬────────────────┐
│ gsc_query │ site_context │ link_suggester │ article_writer │
└───────────┴──────────────┴────────────────┴────────────────┘
    ↓
Cross-reference & Verify
    ↓
Need more data? → Yes → back to Tool Selection (up to 5 rounds)
    ↓ No
Final Answer — backed by your real data

The LLM receives your full site context (crawled pages, GSC performance data, sitemap, memory from past sessions) injected into every request. Then it decides which tools to call, in what order, and iterates until the answer is solid.
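
Conceptually, the injection step is just assembling a system prompt from those sources on every request. A hypothetical sketch — the `SiteContext` shape and `buildSystemPrompt` name are mine, not the repo's:

```typescript
interface SiteContext {
  agentMd: string;    // AGENT.md personality file
  pages: string[];    // crawled page summaries
  gscSummary: string; // recent Search Console highlights
  memory: string[];   // findings extracted from past sessions
}

// Concatenate every context source into one system prompt, with
// headed sections so the model can tell the sources apart.
function buildSystemPrompt(ctx: SiteContext): string {
  return [
    ctx.agentMd,
    "## Site pages\n" + ctx.pages.join("\n"),
    "## GSC data\n" + ctx.gscSummary,
    "## Memory\n" + ctx.memory.join("\n"),
  ].join("\n\n");
}
```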

Key features

Google Search Console Integration — OAuth connect, auto-sync 90 days of query + page data with date-level trends. Declining keywords, growing opportunities, and quick wins — found automatically.
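
To make "found automatically" concrete, here's one way a declining-keyword check could work: compare a keyword's average position over the most recent window against the window before it. The `DailyStat` shape and the window size are assumptions for illustration, not the repo's actual logic:

```typescript
interface DailyStat { date: string; position: number }

// A keyword is "declining" if its average position got worse (higher
// number = lower ranking) in the recent window vs the prior window.
function isDeclining(stats: DailyStat[], window = 30): boolean {
  const sorted = [...stats].sort((a, b) => a.date.localeCompare(b.date));
  const recent = sorted.slice(-window);
  const prior = sorted.slice(-2 * window, -window);
  if (recent.length === 0 || prior.length === 0) return false;
  const avg = (xs: DailyStat[]) =>
    xs.reduce((s, x) => s + x.position, 0) / xs.length;
  return avg(recent) > avg(prior);
}
```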

Site Crawler — Sitemap-based crawling with Mozilla Readability for clean content extraction. Maps internal links, extracts metadata, builds a full content inventory.

Writing Style Generation — Reads your homepage and top pages, then generates 6 style files: Tone, Structure, Sentence Style, Examples, Anti-Words, and Context. Articles come out sounding like your brand, not like AI slop. The Anti-Words system bans 50+ overused AI phrases.
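
The anti-words pass is conceptually a banned-phrase scan over a draft. A toy sketch — this three-entry list is purely illustrative, not the project's actual 50+ list:

```typescript
// Small illustrative sample of overused AI phrases to flag.
const ANTI_WORDS = ["delve", "tapestry", "in today's fast-paced world"];

// Return every banned phrase that appears in the draft (case-insensitive),
// so the agent can rewrite those passages before publishing.
function findAntiWords(draft: string): string[] {
  const lower = draft.toLowerCase();
  return ANTI_WORDS.filter((w) => lower.includes(w));
}
```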

Multi-Project Support — Run SEO for multiple sites from one install. Each project gets its own isolated data. Great if you're an agency or freelancer managing client sites.

Persistent Memory — After each conversation, the agent extracts key findings into memory. Next session, it remembers what it learned. SEO is longitudinal — your agent should be too.
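
The storage side of that memory step can be pictured as a small merge-dedupe-cap function (hypothetical sketch; in the project the extraction itself is done by the LLM):

```typescript
// Append new findings to existing memory, drop duplicates, and cap the
// total so the context injected into future sessions stays small.
function mergeMemory(
  existing: string[],
  findings: string[],
  cap = 100,
): string[] {
  return [...new Set([...existing, ...findings])].slice(-cap);
}
```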

20+ Models — OpenRouter (MiniMax M2.5, DeepSeek, Gemini, Llama 4), Anthropic (Claude), and OpenAI (GPT). Switch in the UI, no restart needed.

What you need to run it

  • Node.js 18+
  • A Google Cloud project with Search Console API enabled (for OAuth)
  • At least one LLM API key — OpenRouter (recommended, cheapest), Anthropic, or OpenAI

That's it. No database, no Docker, no config files to wrestle with.

git clone https://github.com/Dominien/agentic-seo-agent.git
cd agentic-seo-agent
npm install
cp .env.example .env.local
# Add your Google OAuth credentials + at least one LLM API key
npm run dev


Open localhost:3000 and the app walks you through onboarding.

Design decisions

  • No database — JSON files in /data. Portable, readable, forkable.
  • No Vercel AI SDK — Custom provider adapters using native fetch(). Full control over streaming, tool calling, and error handling.
  • AGENT.md over hardcoded prompts — The agent's personality is a Markdown file you can edit, version, and share.
  • BYOK — Bring Your Own Key. No server-side key management, no usage tracking, no middleman.
  • OpenRouter inherits from OpenAI — One base adapter handles all OpenAI-compatible APIs. Adding a new provider is ~20 lines.
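
That last point is easy to sketch: one base adapter targets any OpenAI-compatible chat API, and the OpenRouter subclass only swaps the base URL. Class and method names here are illustrative, not the repo's:

```typescript
class OpenAICompatibleAdapter {
  constructor(
    protected apiKey: string,
    protected baseUrl = "https://api.openai.com/v1",
  ) {}

  protected headers(): Record<string, string> {
    return {
      Authorization: `Bearer ${this.apiKey}`,
      "Content-Type": "application/json",
    };
  }

  // Native fetch(), no SDK: full control over the request and the
  // streamed response body.
  async chat(model: string, messages: object[]): Promise<Response> {
    return fetch(`${this.baseUrl}/chat/completions`, {
      method: "POST",
      headers: this.headers(),
      body: JSON.stringify({ model, messages, stream: true }),
    });
  }
}

// A new OpenAI-compatible provider really is just a few lines.
class OpenRouterAdapter extends OpenAICompatibleAdapter {
  constructor(apiKey: string) {
    super(apiKey, "https://openrouter.ai/api/v1");
  }
}
```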

Stack

Next.js, TypeScript, custom SSE streaming via fetch() + getReader() (not EventSource, which is GET-only). All data stored as flat JSON files — no ORM, no migrations, no database setup.
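
To make the EventSource point concrete: when you stream over a POST `fetch()`, the SSE frames arrive as raw `data:` lines you split and parse yourself. A minimal parser sketch (hypothetical helper, not the repo's code):

```typescript
// Extract the payloads from a decoded SSE chunk: keep "data:" lines,
// strip the prefix, and drop the "[DONE]" sentinel that OpenAI-style
// APIs send to end the stream. In the app this would run on each chunk
// read via response.body.getReader().
function parseSseChunk(chunk: string): string[] {
  return chunk
    .split("\n")
    .filter((line) => line.startsWith("data: "))
    .map((line) => line.slice("data: ".length))
    .filter((data) => data !== "[DONE]");
}
```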

Try it out

Repo: github.com/Dominien/agentic-seo-agent

License: AGPL-3.0

This is my first open-source project. Feedback, issues, and PRs are very welcome. If you try it out, I'd love to hear how it works for your site.
