Nathan Sportsman

We Replaced Our Bash Scripts and Hydra With a Single Go Binary for Credential Testing

Every few months, I watch one of our engineers burn an hour on an engagement trying to get THC Hydra compiled on a stripped-down jump box. Missing libssh-dev. Wrong version of libmysqlclient-dev. Package headers that don't exist on whatever minimal container they're working from. And that's before they've tested a single credential.

Then they test credentials, get results in Hydra's human-readable terminal output, and spend another chunk of time writing a parsing script to get that data into a format the rest of the pipeline can use. On the next engagement, they write the same script again, slightly different, because the output changed or the use case shifted.

This has been the state of credential testing tooling for years. It works, but it's held together with duct tape. We finally decided to build something better.

The Tool Tax

If you've done any kind of security assessment work — or even just managed infrastructure at scale — you've probably dealt with some version of this problem. Your reconnaissance tools output JSON. Your reporting tools expect JSON. But the tool in the middle speaks its own format and requires you to translate in both directions.

Modern recon workflows are built around tools like naabu for port scanning and fingerprintx for service identification. They chain together cleanly because they share a common data format. Credential testing has been the gap in that pipeline — the step where you drop out of structured data and into ad hoc scripts.

That translation layer between tools isn't just annoying. It's where mistakes happen. Hosts get dropped. Formats get misread. Results get lost. On a network with 700,000 live hosts and thousands of identified services, "good enough" glue scripts have real consequences.

Why Go

The language choice was deliberate, and it comes down to one thing: distribution.

When your tool needs to run on whatever box you land on during an engagement — a hardened jump host, a minimal container, a client workstation with nothing installed — your dependency story matters more than almost any other technical decision.

Go gives us a statically compiled binary. No runtime. No shared libraries. No package manager on the target. Download the binary, run it. That's the entire setup process.

This isn't a theoretical benefit. THC Hydra's protocol support comes from linking against system libraries: libssh for SSH, libmysqlclient for MySQL, libpq for PostgreSQL. Each library is a potential compilation failure on a system that wasn't set up for building C projects. Go's SSH support (the golang.org/x/crypto/ssh package) is pure Go, and so are the database drivers. Everything compiles into one artifact.

The concurrency model is the other half of the equation. Credential testing is embarrassingly parallel — you're making thousands of independent authentication attempts. Goroutines and channels map onto this problem naturally without the overhead of managing thread pools or process spawning.
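
The shape of that in code is worth a quick sketch. This is illustrative rather than lifted from Brutus; the Target type, the tryAuth stub, and the pool size are all assumptions:

package main

import (
	"fmt"
	"sync"
)

// Target is an illustrative stand-in for one credential attempt:
// a host:port plus a username and password to try.
type Target struct {
	Host, User, Pass string
}

// tryAuth would hold the real protocol logic; here it is a stub.
func tryAuth(t Target) bool {
	return false // real code would dial and authenticate
}

func main() {
	targets := make(chan Target)
	var wg sync.WaitGroup

	// A fixed pool of workers drains the channel concurrently.
	// Each goroutine costs kilobytes, not megabytes, so the pool
	// can be sized in the hundreds with no thread-pool machinery.
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for t := range targets {
				if tryAuth(t) {
					fmt.Printf("valid: %s@%s\n", t.User, t.Host)
				}
			}
		}()
	}

	targets <- Target{Host: "10.0.0.5:22", User: "root", Pass: "toor"}
	close(targets)
	wg.Wait()
}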

Plugin Architecture in a Single Binary

One design constraint we set early: Brutus ships as a single binary, but adding a new protocol shouldn't require understanding the whole codebase. These goals are in tension, and the plugin architecture is how we resolved it.

Each protocol — SSH, MySQL, FTP, HTTP Basic, etc. — is a self-contained plugin that implements a common interface. The plugin registers itself, declares what service identifiers it handles (matching fingerprintx output), and implements the authentication logic. The core engine handles concurrency, input parsing, output formatting, and retry logic. Plugins just authenticate.
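
In Go terms, that contract can be a single small interface. The following is a hypothetical sketch of what it might look like, not the actual Brutus source; every name here is an assumption:

package plugin

import "context"

// Plugin is a hypothetical version of the contract each protocol
// implements. The engine never sees protocol details; it hands a
// plugin a target and credentials and asks for a verdict.
type Plugin interface {
	// Name returns the fingerprintx service identifier this
	// plugin handles, e.g. "ssh" or "mysql".
	Name() string

	// Authenticate attempts one credential pair against one
	// host:port and reports whether it succeeded.
	Authenticate(ctx context.Context, addr, user, pass string) (bool, error)
}

// registry maps service identifiers to plugins. Plugins add
// themselves at init time, so linking a file into the build is
// all the "configuration" a new protocol needs.
var registry = map[string]Plugin{}

func Register(p Plugin) { registry[p.Name()] = p }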

This means contributing a new protocol looks roughly like this (a sketch of such a plugin follows below):

  1. Create a new file in the plugins directory
  2. Implement the authentication interface
  3. Register the plugin with the service identifier it handles
  4. Compile

The new protocol is immediately available in the pipeline. No configuration files, no dynamic loading, no plugin directories to manage. It compiles into the same single binary.
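
Here is what such a plugin file might look like, continuing the hypothetical interface sketched above; the FTP logic is reduced to its shape, with the real handshake left as a comment:

package plugin

import (
	"context"
	"net"
	"time"
)

// ftpPlugin is a hypothetical minimal plugin: one file, one type,
// one init() that registers it. Nothing else in the codebase
// needs to change.
type ftpPlugin struct{}

func (ftpPlugin) Name() string { return "ftp" }

func (ftpPlugin) Authenticate(ctx context.Context, addr, user, pass string) (bool, error) {
	d := net.Dialer{Timeout: 5 * time.Second}
	conn, err := d.DialContext(ctx, "tcp", addr)
	if err != nil {
		return false, err
	}
	defer conn.Close()
	// Real code would speak the FTP handshake (USER/PASS) here
	// and check for a 230 "logged in" response.
	return false, nil
}

func init() { Register(ftpPlugin{}) }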

Compiling Known-Bad SSH Keys Into the Binary

This is probably the most unusual design decision in Brutus, and the one I think has the most practical value.

The security community has catalogued a large number of publicly known, compromised SSH keys. Rapid7 maintains the ssh-badkeys repository. HashiCorp's Vagrant ships with a well-known insecure key. Various appliance vendors — F5 BIG-IP, ExaGrid, Ceragon FibeAir — have shipped products with embedded keys that are now public.

Testing for these keys across an environment is something that should be trivial but traditionally isn't. You need to find the key collections, download them, write a script to iterate through them, handle SSH connection logic and timeouts, and keep track of which key you're testing. It's not complex work; it's just tedious enough that it gets done incompletely.

Brutus embeds all of these key collections directly into the binary using Go's embed package. When it encounters an SSH service, it automatically tests every known-bad key. Each key carries metadata: the expected default username (root for F5, vagrant for Vagrant, mateidu for Ceragon) and context about which vulnerability it represents.
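
The embedding mechanism is standard Go. Here is a sketch of the pattern; the directory layout and the KnownKey fields are assumptions about structure, not the actual repo contents:

package keys

import "embed"

// Embed every key file under badkeys/ into the binary at compile
// time. Nothing to download or install at runtime; the keys
// travel inside the same artifact as the code.
//
//go:embed badkeys/*
var badKeys embed.FS

// KnownKey pairs a private key with the context needed to turn a
// match into an actionable finding.
type KnownKey struct {
	PEM      []byte // the compromised private key
	User     string // expected default username, e.g. "vagrant"
	Advisory string // CVE or vendor advisory it maps to
}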

The output doesn't just say "this key authenticated." It tells you which known-compromised key matched, what product it's associated with, and what CVE or advisory applies. That's the difference between a finding and an actionable finding.

# Test every SSH service in your naabu/fingerprintx output
# against every known-compromised key, automatically
cat recon_output.json | brutus

No flags needed. If the service is SSH, bad keys get tested.

The Pipeline

The core workflow Brutus was designed for looks like this:

# Port scan → Service identification → Credential testing
naabu -host 10.0.0.0/8 -p 22,3306,5432,8080 -silent | \
  fingerprintx --json | \
  brutus -u admin -p password123

Each tool reads from stdin and writes to stdout. JSON in, JSON out. No intermediate files, no format conversion, no glue scripts.
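
For anyone building a tool that sits in this kind of pipeline, the consuming side is a few lines of Go. The field names below approximate fingerprintx's per-line JSON; treat the exact schema as an assumption:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// service is a hypothetical subset of one fingerprintx JSON line.
type service struct {
	IP       string `json:"ip"`
	Port     int    `json:"port"`
	Protocol string `json:"protocol"`
}

func main() {
	// One JSON object per line on stdin: the pipeline contract.
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var s service
		if err := json.Unmarshal(sc.Bytes(), &s); err != nil {
			continue // skip lines that aren't service records
		}
		fmt.Printf("would test %s on %s:%d\n", s.Protocol, s.IP, s.Port)
	}
}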

For more targeted work — say you recovered a private key from a compromised system and need to find everywhere it grants access:

naabu -host 10.1.0.0/24 -p 22 -silent | \
  fingerprintx --json | \
  brutus -u nessus -k /path/to/recovered_key

The output is structured JSON. Every valid credential, the host it worked against, the protocol, the timestamp. You can query it with jq, pipe it into your reporting toolchain, or feed it into whatever comes next.

Experimental: LLM-Powered Credential Discovery

This is the part I want to be upfront about: these features are experimental. They work, they're useful in the scenarios we've tested, and they represent something I think is genuinely interesting from an engineering perspective. But they depend on external API services, they add latency and cost, and LLMs are non-deterministic. Treat them accordingly.

The problem: You land on an internal network. You scan it. You find dozens of HTTP services on non-standard ports — management interfaces for switches, storage appliances, monitoring tools, IPMI consoles, printer admin panels. Each one probably has default credentials, but you'd need to identify the product first, then look up its defaults. Manually, across dozens of services, this is painfully slow.

Approach one — response analysis: Brutus captures the HTTP response (headers, page content, server signatures) and sends it to an LLM. The model identifies the application and suggests vendor-specific default credentials. Those get tested first, with fallback to generic wordlists.
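
Reduced to a sketch, approach one looks something like this; llmSuggest is a placeholder for whichever model API sits behind it, and everything here is illustrative rather than the shipping code:

package main

import (
	"fmt"
	"io"
	"net/http"
)

// llmSuggest stands in for the actual model call; in the real
// feature this would hit an external LLM API and return candidate
// vendor-default credentials.
func llmSuggest(prompt string) []string {
	return nil // placeholder
}

func main() {
	// Capture the raw response from the unidentified service.
	resp, err := http.Get("http://10.0.0.9:8443/")
	if err != nil {
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(io.LimitReader(resp.Body, 4096))

	// Headers plus a bounded slice of the page are usually enough
	// for a model to name the product behind the login form.
	prompt := fmt.Sprintf(
		"Identify this web application and list its default credentials.\nServer: %s\nBody:\n%s",
		resp.Header.Get("Server"), body)

	for _, cred := range llmSuggest(prompt) {
		fmt.Println("candidate:", cred) // tried before generic wordlists
	}
}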

Approach two — visual authentication: Some login pages are JavaScript-rendered with CSRF tokens, multi-step flows, or non-standard field names. Brutus uses headless Chrome to render the page, takes a screenshot, sends it to Claude's vision API for identification, then fills and submits the form. It compares page state before and after to determine success.

Both of these are solving real workflow problems. Whether LLMs are the right long-term solution or a stepping stone to something more deterministic is an open question. But right now, they work better than the alternative of doing it manually.

What We're Looking For

Brutus is open source under Apache 2.0. The things that would make the biggest impact:

New protocol plugins. If there's a service you test credentials against that isn't supported, the plugin interface is designed to make this straightforward.

Bad key collections. If you've encountered default or embedded SSH keys in appliances, IoT devices, or vendor products that aren't in our current collection, adding them benefits everyone.

Real-world feedback. We've battle-tested this on our own engagements, but every environment is different.

Repo: github.com/praetorian-inc/brutus

Blog with full technical details: praetorian.com/blog/brutus


I'm Nathan Sportsman. I run Praetorian, an offensive security company. We build tools like this because we use them on real engagements and got tired of the workarounds. If you have questions about the architecture, the Go implementation decisions, or the AI features, I'm happy to discuss in the comments.
