DEV Community

Lars

I got tired of copy-pasting messages from RabbitMQ to validate them, so I built an MCP server

If you've worked on projects where teams communicate through message brokers, you know the drill. A message lands in a queue with a broken payload. Maybe a field is the wrong type, maybe a required property is missing, maybe someone changed the schema without telling anyone.

The debugging loop looks like this:

  1. Open the RabbitMQ management UI
  2. Navigate to the queue
  3. Click "Get messages"
  4. Copy the payload
  5. Find the JSON Schema for that message type
  6. Paste it into some online validator
  7. Read the error
  8. Fix the publisher
  9. Repeat

I did this dozens of times per week. So I built an MCP server that does it in one sentence.

What is MCP?

Quick context if you haven't seen it yet: MCP (Model Context Protocol) is a standard that lets AI assistants call external tools. If you use Claude Code, Cursor, VS Code Copilot, or Windsurf, you can add MCP servers that give your assistant new capabilities.

Think of it like plugins, but standardized across clients.

The idea

What if I could just ask my AI assistant:

"Inspect the orders queue and check if all messages are valid"

And it would connect to my broker, peek at the messages (without consuming them), validate each one against the right JSON Schema, and tell me exactly what's broken?

That's what Queue Pilot does.

An MCP server for RabbitMQ and Kafka

How it works

You define your message contracts as JSON Schema files. These are the schemas your teams already agreed on (or should agree on). Each schema has an $id that matches the type field on your messages.

// schemas/order.created.json
{
  "$id": "order.created",
  "$schema": "http://json-schema.org/draft-07/schema#",
  "title": "Order Created",
  "type": "object",
  "required": ["orderId", "amount"],
  "properties": {
    "orderId": { "type": "string" },
    "amount": { "type": "number", "minimum": 0 }
  }
}

Then you add Queue Pilot to your MCP client. One command generates the config:

npx queue-pilot init --schemas ./schemas --client claude-code
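For context, MCP clients read server entries from a JSON config file (Claude Code uses `.mcp.json`). The entry that `init` writes should look roughly like this — the exact `args` below are my guess at the shape, not verified output of the command:

```json
{
  "mcpServers": {
    "queue-pilot": {
      "command": "npx",
      "args": ["queue-pilot", "--schemas", "./schemas"]
    }
  }
}
```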

That's it. Your assistant now has access to your broker.

What you can actually do with it

Here's where it gets practical. These are real prompts I use daily:

Debugging a broken consumer:

"Show me the messages in the dead-letter queue and validate them"

Queue Pilot peeks at the messages, matches each one to its schema by the type field, and returns validation errors. No more copy-pasting.

Before deploying a publisher change:

"Publish this to the events exchange: { "type": "order.created", "orderId": 123, "amount": "fifty" }"

The publish_message tool validates against the schema first. This message would be rejected: orderId is a number where the schema expects a string, and amount is a string where it expects a number. Invalid messages never hit the broker.

Checking queue health across the board:

"List all queues and show me which ones have backed-up messages"

Setting up test infrastructure:

"Create a queue called test-orders, bind it to the events exchange with routing key order.*, and publish 3 test messages"

The inspect_queue tool

This is the one I use most. It combines peeking and validation in a single call.

When you ask your assistant to inspect a queue, Queue Pilot:

  1. Fetches messages from the queue without consuming them
  2. Looks at each message's type property
  3. Finds the matching schema by $id
  4. Validates the payload against that schema
  5. Returns a report: which messages are valid, which aren't, and exactly why
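The loop above can be sketched in a few dozen lines. This is a dependency-free toy (the required/type check stands in for full JSON Schema validation, and the message array stands in for a broker peek), but the schema-matching-by-`$id` logic is the same idea:

```typescript
// Minimal sketch of inspect_queue's match-and-validate loop.
// Toy validation: checks required fields and primitive types only.
type Schema = {
  $id: string;
  required?: string[];
  properties?: Record<string, { type: string }>;
};

type Report =
  | { index: number; status: "valid"; schema: string }
  | { index: number; status: "invalid"; schema: string; errors: string[] }
  | { index: number; status: "no-schema"; type: string };

function validateAgainst(schema: Schema, msg: Record<string, unknown>): string[] {
  const errors: string[] = [];
  for (const field of schema.required ?? []) {
    if (!(field in msg)) errors.push(`/${field}: required property missing`);
  }
  for (const [key, spec] of Object.entries(schema.properties ?? {})) {
    if (key in msg && typeof msg[key] !== spec.type) {
      errors.push(`/${key}: must be ${spec.type}, got ${typeof msg[key]}`);
    }
  }
  return errors;
}

function inspect(messages: Record<string, unknown>[], schemas: Schema[]): Report[] {
  // Match each message to a schema by its `type` field vs. the schema's $id.
  const byId = new Map(schemas.map((s) => [s.$id, s] as [string, Schema]));
  return messages.map((msg, index) => {
    const type = String(msg.type);
    const schema = byId.get(type);
    if (!schema) return { index, status: "no-schema", type };
    const errors = validateAgainst(schema, msg);
    return errors.length === 0
      ? { index, status: "valid", schema: type }
      : { index, status: "invalid", schema: type, errors };
  });
}
```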

For a queue with 5 messages where 2 have issues, you'd get something like:

Messages 1, 2, 4: Valid (order.created)
Message 3: Invalid (order.created)
  - /amount: must be number, got string
Message 5: No matching schema for type "order.updated"

No browser tabs. No copy-pasting. No context switching.

Kafka support

Queue Pilot also supports Apache Kafka through a unified adapter interface. The same tools work for both brokers, plus Kafka-specific ones like list_consumer_groups, describe_consumer_group, list_partitions, and get_offsets.

npx queue-pilot init --schemas ./schemas --broker kafka --client claude-code

The Kafka adapter uses the Confluent JavaScript client and supports SASL authentication.
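For a secured cluster, the Confluent client takes librdkafka-style configuration properties. A sketch of what a SASL connection config carries — the property names are standard librdkafka keys, the values are placeholders, and how Queue Pilot actually accepts these settings isn't shown in this post:

```typescript
// librdkafka-style client settings for a SASL_SSL cluster.
// Values are placeholders, not working credentials.
const kafkaConfig = {
  "bootstrap.servers": "broker-1:9092,broker-2:9092",
  "security.protocol": "sasl_ssl",
  "sasl.mechanisms": "PLAIN", // or SCRAM-SHA-256 / SCRAM-SHA-512
  "sasl.username": "my-user",
  "sasl.password": "my-secret",
};
```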

Setup in 2 minutes

1. Create your schemas directory

Put your JSON Schema files in a folder. One file per message type.

2. Generate config for your MCP client

# Claude Code
npx queue-pilot init --schemas /path/to/schemas --client claude-code

# Cursor
npx queue-pilot init --schemas /path/to/schemas --client cursor

# VS Code
npx queue-pilot init --schemas /path/to/schemas --client vscode

3. Start using it

Open your editor and ask your assistant about your queues. It has access to 14+ tools for inspecting, validating, publishing, and managing your message infrastructure.

The full tool list

Universal (all brokers): list_schemas, get_schema, validate_message, list_queues, peek_messages, inspect_queue, get_overview, check_health, get_queue, list_consumers, publish_message, purge_queue, create_queue, delete_queue

RabbitMQ-specific: list_exchanges, create_exchange, delete_exchange, list_bindings, create_binding, delete_binding, list_connections

Kafka-specific: list_consumer_groups, describe_consumer_group, list_partitions, get_offsets

When is this useful?

Queue Pilot is designed for development and testing, not production monitoring. It shines when:

  • Multiple teams publish/consume messages and schemas drift over time
  • You're debugging why a consumer is failing on certain messages
  • You want to validate message contracts before merging a PR
  • You need to quickly set up queues, bindings, and test data for local development
  • You want to catch schema violations before they reach a test environment

What's next

The project is at v0.5.0. Kafka support is newer and I'm looking for feedback from people who work with multi-team message contracts. If you have ideas or run into issues, open an issue on GitHub.

If you work with message queues and MCP-compatible editors, I'd love to hear how you handle schema validation in your workflow. Drop a comment or open an issue.
