Why I Built Human-in-the-Loop Instead of Full Automation

Chudi Nnorukam

Originally published at chudi.dev


I could have built BugBountyBot to submit findings automatically. The technical barrier isn't high: once validation passes, it's an API call to HackerOne.

I didn't build it that way. Here's why.

The Temptation of Full Automation

Full automation is seductive:

  • Speed: Submit findings as fast as you find them
  • Scale: Hunt 24/7 without human bottlenecks
  • Ego: "I built a system that hunts bugs while I sleep"

Every conversation about BugBountyBot eventually hits this question: "Why not just auto-submit?"

The False Positive Problem

In security research, reputation is everything. A single bad submission can:

  • Get your report closed as "Not Applicable"
  • Add a negative signal to your profile
  • Cost you access to private programs
  • Waste triager time (they remember)

The math is brutal: one false positive can undo five true positives in terms of reputation impact.

Automated systems optimize for recall: finding everything possible. But bug bounty rewards precision. A 90% precision rate sounds good until you realize that means 1 in 10 submissions is garbage.
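To make that trade-off concrete, here's a back-of-the-envelope sketch in Python. The +1/-5 weights are illustrative, lifted from the "one false positive can undo five true positives" rule above, not from any platform's actual scoring:

```python
# Illustrative reputation weights, per the "one false positive can
# undo five true positives" rule of thumb. Platforms don't publish
# exact numbers; these are assumptions for the sketch.
ACCEPT_WEIGHT = 1.0
FALSE_POSITIVE_WEIGHT = -5.0

def expected_reputation(precision: float) -> float:
    """Expected reputation change from a single submission."""
    return precision * ACCEPT_WEIGHT + (1 - precision) * FALSE_POSITIVE_WEIGHT

for p in (0.95, 0.90, 0.70):
    print(f"precision {p:.0%}: {expected_reputation(p):+.2f} per submission")

# precision 95%: +0.70 per submission
# precision 90%: +0.40 per submission
# precision 70%: -0.80 per submission
```

Under these weights, below roughly 83% precision every submission is expected to cost you reputation, and higher volume only digs the hole faster.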

A researcher I know ran automated submissions for a month. Their valid-submission rate dropped to 60%. Three private programs revoked access. It took six months of manual, high-quality reports to recover.

Platform Requirements

This isn't just my opinion; platforms explicitly require human oversight.

From HackerOne's Automation Policy:

"Automated tools must have human review before submission. Fully automated submission systems are prohibited."

From Intigriti's Terms:

"Researchers are responsible for the quality and accuracy of all submissions, including those assisted by automated tools."

From Bugcrowd:

"Automated scanning that results in excessive false positives may result in account suspension."

Build full automation, and you're violating ToS. Not a gray area.

The Liability Question

When your bot submits a finding, who's responsible?

  • If it's valid: You get credit
  • If it's invalid: You get blamed
  • If it causes harm: You're liable

There's no "my AI did it" defense. The liability is asymmetric: the downside is yours, and "scale" just multiplies it.

Compare this to human-in-the-loop:

  • You review each finding before submission
  • You apply judgment about timing and context
  • You own the decision, not just the consequence

What Humans Do Better

Automation excels at:

  • Pattern matching at scale
  • Consistent testing methodology
  • 24/7 availability
  • Memory across sessions

Humans excel at:

  • Context understanding - Is this behavior intentional?
  • Impact assessment - Is this actually a security issue?
  • Communication - Can I explain this clearly?
  • Timing judgment - Is now the right time to submit?

The optimal system uses AI for the first set and humans for the second.

The Human-in-the-Loop Architecture

BugBountyBot's design:

[Recon Agent] → [Testing Agent] → [Validator Agent]
                                         ↓
                              [Confidence ≥ 0.85?]
                                    ↓         ↓
                                  Yes        No
                                   ↓          ↓
                         [Queue for Review] [Log & Learn]
                                   ↓
                           [Human Review]
                                   ↓
                         [Approve/Reject/Edit]
                                   ↓
                          [Reporter Agent]

The human checkpoint sits after validation but before submission. You're not reviewing raw signals; you're reviewing high-confidence findings with full evidence.

This is the leverage point: AI handles the 80% grind, humans handle the 20% that requires judgment.
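Here's a minimal sketch of that checkpoint in Python. The Finding shape, queue names, and human_review helper are hypothetical, not BugBountyBot's actual API; the 0.85 threshold comes from the diagram above:

```python
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.85  # gate value from the diagram above

@dataclass
class Finding:
    title: str
    evidence: list[str]   # full evidence chain for the reviewer
    confidence: float     # validator's score, 0.0-1.0

review_queue: list[Finding] = []  # high-confidence findings awaiting a human
training_log: list[Finding] = []  # low-confidence signals kept for tuning

def route(finding: Finding) -> None:
    """Route validator output to a human queue or the learning log, never straight to submission."""
    if finding.confidence >= CONFIDENCE_THRESHOLD:
        review_queue.append(finding)
    else:
        training_log.append(finding)

def human_review(finding: Finding) -> str:
    """The interactive step: a person approves, rejects, or edits each finding."""
    choice = input(f"{finding.title} -- [a]pprove / [r]eject / [e]dit: ").strip().lower()
    return {"a": "approve", "r": "reject", "e": "edit"}.get(choice, "reject")

def drain_queue(submit) -> None:
    """Only explicitly approved findings ever reach the Reporter agent."""
    while review_queue:
        finding = review_queue.pop(0)
        if human_review(finding) == "approve":
            submit(finding)  # hand off to the Reporter agent
```

The design choice that matters: submit is only ever called from inside drain_queue, so there is no code path from the validator to the platform that doesn't pass through a person.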

The Numbers That Matter

Metric              Full Automation   Human-in-the-Loop
Submissions/day     High              Medium
Precision           ~70%              ~95%
Reputation trend    Declining         Stable/Growing
Platform standing   At risk           Solid
Sustainable?        No                Yes

Optimizing for submissions per day is the wrong metric. Optimize for accepted findings per month, reputation over time, and access to better programs.
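To see why, plug the table's precision figures into some hypothetical volumes (the 40 and 8 submissions/day are made up for illustration) and reuse the illustrative +1/-5 reputation weights from earlier:

```python
DAYS = 30
# Precision figures from the table above; daily volumes are assumptions.
scenarios = {
    "full automation":   {"per_day": 40, "precision": 0.70},
    "human-in-the-loop": {"per_day": 8,  "precision": 0.95},
}

for name, s in scenarios.items():
    submissions = s["per_day"] * DAYS
    accepted = submissions * s["precision"]
    # Same illustrative +1 accept / -5 false-positive weighting as before.
    reputation = submissions * (s["precision"] * 1.0 + (1 - s["precision"]) * -5.0)
    print(f"{name}: {accepted:.0f} accepted/month, reputation {reputation:+.0f}")

# full automation:   840 accepted/month, reputation -960
# human-in-the-loop: 228 accepted/month, reputation +168
```

Raw accepts favor volume, but the reputation column is the one that decides whether you keep program access at all; once private programs revoke access, the full-automation accept count goes to zero anyway.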

When Full Automation Makes Sense

There are legitimate use cases:

  • Internal security testing - Your own infrastructure, no reputation at stake
  • Private engagements - Client agreed to automated testing
  • Research environments - Sandboxed, no real submissions

But for public bug bounty programs? Human-in-the-loop is the only sustainable architecture.

The Deeper Point

The goal isn't maximum automation. The goal is maximum valuable output with acceptable risk.

Human-in-the-loop is how you get there. It's not a compromise; it's the architecture that lets you scale without catastrophic failure modes.

Build for sustainability. Your future self will thank you.


Related: Building a Semi-Autonomous Bug Bounty System | Portfolio: BugBountyBot
