Chudi Nnorukam


Code Verification Over Confidence

Originally published at chudi.dev


"Should work."

The AI said it. I believed it. Six hours later, I was still debugging a fundamental error that existed from line one.

Evidence-based completion for AI code means blocking confidence phrases and requiring proof before any task is marked done. Not "should work"--actual build output. Not "looks good"--actual test results. The psychology is simple: confidence without evidence is gambling. Forced evaluation achieves 84% compliance because it makes evidence the only path forward.

Why Do We Skip Verification?

The pattern is universal. You describe what you want. The AI generates code. It looks reasonable. You paste it in.

That moment of hesitation--the one where you could run the build, could write a test, could verify the output--gets skipped. The code looks right. The AI sounds confident. What could go wrong?

That specific shame of shipping broken code--the kind where you have to message the team "actually, there's an issue"--became my recurring experience.

I trust AI completely. That's why I verify everything.

The paradox makes sense once you've been burned enough times.

What Makes "Should Work" Psychologically Dangerous?

The phrase creates false confidence through three mechanisms:

1. Authority Transfer

The AI presents its answer with confidence. We transfer that confidence to the code itself, as if certainty of delivery equaled certainty of correctness.

2. Completion Illusion

"Should work" feels like a finished state. The task feels done. Moving to verification feels like extra work rather than essential work.

3. Optimism Bias

We want it to work. We've invested time. Verification risks discovering problems we'd rather not face.

I thought I was being thorough. Well, it's more like... I was being thorough at the wrong stage. Careful prompting, careless verification.

What Phrases Trigger the Red Flag System?

Here's the list that gets blocked:

  • "Should work"
  • "Probably fine"
  • "I'm confident"
  • "Looks good"
  • "Seems correct"

Each of these phrases indicates a claim without evidence. They're not wrong to think--they're wrong to accept as completion.
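
A simple post-response check can enforce the block automatically. Below is a minimal sketch in Python: the pattern list mirrors the phrases above, and the function names are illustrative rather than part of any particular tool.

```python
import re

# Blocked confidence phrases; extend with whatever your team keeps accepting.
RED_FLAG_PATTERNS = [
    r"\bshould work\b",
    r"\bprobably fine\b",
    r"\bi'?m confident\b",
    r"\blooks good\b",
    r"\bseems correct\b",
]

def find_red_flags(text: str) -> list[str]:
    """Return every blocked phrase that appears in an AI response."""
    lowered = text.lower()
    return [p for p in RED_FLAG_PATTERNS if re.search(p, lowered)]

def gate_completion(response: str) -> None:
    """Refuse to mark a task done while the response leans on confidence claims."""
    flags = find_red_flags(response)
    if flags:
        raise ValueError(
            f"Red flag phrases detected: {flags}. "
            "Provide build output or test results instead."
        )

if __name__ == "__main__":
    gate_completion("Refactor done. Should work on mobile too.")  # raises ValueError
```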

What Evidence Replaces Confidence Claims?

The replacement is specific, verifiable proof:

Build Evidence

Build completed successfully:
- Exit code: 0
- Duration: 9.51s
- Client bundle: 352KB
- No errors, 2 warnings (acceptable)

Test Evidence

Tests passing: 47/47
- Unit tests: 32/32
- Integration tests: 15/15
- Coverage: 78%

Visual Evidence

Screenshots captured:
- Mobile (375px): layout correct
- Tablet (768px): responsive breakpoint working
- Desktop (1440px): full layout verified
- Dark mode: all components themed

Performance Evidence

Lighthouse scores:
- Performance: 94
- Accessibility: 98
- Best Practices: 100
- SEO: 100
Bundle size: 287KB (-3KB from previous)

That hollow confidence of claiming something works--replaced with facts that prove it.
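
None of this evidence has to be typed by hand. A short script can run the build and test commands and record the facts directly; the sketch below assumes an npm-style project, and the command names and JSON fields are placeholders for whatever your stack actually uses.

```python
import json
import subprocess
import time

def run_and_record(name: str, command: list[str]) -> dict:
    """Run one verification step and capture evidence instead of impressions."""
    start = time.monotonic()
    result = subprocess.run(command, capture_output=True, text=True)
    return {
        "step": name,
        "command": " ".join(command),
        "exit_code": result.returncode,
        "duration_s": round(time.monotonic() - start, 2),
        "output_tail": result.stdout[-500:],  # last lines of output as proof
    }

if __name__ == "__main__":
    # Assumed commands for an npm project; swap in your own build and test steps.
    evidence = [
        run_and_record("build", ["npm", "run", "build"]),
        run_and_record("tests", ["npm", "test"]),
    ]
    print(json.dumps(evidence, indent=2))
    if any(step["exit_code"] != 0 for step in evidence):
        raise SystemExit("Not complete: evidence shows a non-zero exit code.")
```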

How Does Forced Evaluation Achieve 84% Compliance?

Research on skill activation showed a stark difference:

  • Passive suggestions: 20% actually followed
  • Forced evaluation: 84% actually followed

The mechanism is a 3-step mandatory protocol:

Step 1: EVALUATE

For each potentially relevant skill:

- master-debugging: YES - error pattern detected
- frontend-guidelines: NO - not UI work
- test-patterns: YES - need verification

Step 2: ACTIVATE

For every YES answer:

Activating: master-debugging
Activating: test-patterns

Step 3: IMPLEMENT

Only after evaluation and activation complete:

Proceeding with implementation...

The psychology works because evaluation creates commitment. Writing "YES - need verification" makes you accountable to the claim. Skipping feels like breaking a promise to yourself.
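
The protocol is also easy to encode so that implementation literally cannot start before the evaluation exists. A minimal sketch: the `SkillDecision` shape is illustrative, and the skill names come from the example above.

```python
from dataclasses import dataclass

@dataclass
class SkillDecision:
    name: str
    relevant: bool  # the explicit YES/NO
    reason: str     # the written justification that creates commitment

def activate(decisions: list[SkillDecision]) -> list[str]:
    """Step 2: every YES must be activated; unjustified evaluations block progress."""
    unjustified = [d.name for d in decisions if not d.reason.strip()]
    if unjustified:
        raise ValueError(f"Evaluation incomplete, no reasoning for: {unjustified}")
    return [d.name for d in decisions if d.relevant]

def implement(decisions: list[SkillDecision]) -> None:
    """Step 3: only runs after evaluation and activation are complete."""
    for skill in activate(decisions):
        print(f"Activating: {skill}")
    print("Proceeding with implementation...")

if __name__ == "__main__":
    # Step 1: EVALUATE each potentially relevant skill, in writing.
    implement([
        SkillDecision("master-debugging", True, "error pattern detected"),
        SkillDecision("frontend-guidelines", False, "not UI work"),
        SkillDecision("test-patterns", True, "need verification"),
    ])
```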

What Are the 4 Pillars of Quality Gates?

Every verification maps to one of four pillars--the evidence categories shown above: build, test, visual, and performance. A completion claim that can't point to at least one of them isn't verified.

How Does Self-Review Automation Work?

The system includes prompts that make the AI review its own work:

Primary Self-Review Prompts

  1. "Review your own architecture for issues"
  2. "Explain the end-to-end data flow"
  3. "Predict how this could break in production"

The Pattern

1. Generate solution
2. Self-review with prompts
3. Fix identified issues
4. Re-review
5. Only then mark complete

Self-review catches issues before they become bugs. The AI is good at finding problems in code--including its own code, when asked explicitly.
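
The loop is only a few lines once you have a way to call the model. In the sketch below, `ask_model` is a stand-in for whatever client function you use, the prompts are the three listed above, and the stopping rule is deliberately naive.

```python
SELF_REVIEW_PROMPTS = [
    "Review your own architecture for issues",
    "Explain the end-to-end data flow",
    "Predict how this could break in production",
]

def self_review(solution: str, ask_model, max_rounds: int = 2) -> str:
    """Generate -> self-review -> fix -> re-review before marking anything complete."""
    for _ in range(max_rounds):
        issues = []
        for prompt in SELF_REVIEW_PROMPTS:
            reply = ask_model(f"{prompt}\n\n{solution}")
            if "no issues" not in reply.lower():
                issues.append(reply)
        if not issues:
            return solution  # only now is the task allowed to be marked complete
        solution = ask_model(
            "Fix the issues below, then return the corrected solution in full:\n\n"
            + "\n\n".join(issues)
        )
    return solution  # rounds exhausted without a clean review; treat as not complete
```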

What Happens When Verification Fails?

The system handles failures through structured escalation:

Level 1: Soft Block

Red flag phrase detected. Request clarification:
"You mentioned 'should work'. What specific evidence supports this? Please provide build output or test results."

Level 2: Hard Block

Completion claimed without evidence. Block the completion:
"Task cannot be marked complete. Required: build output showing success OR test results passing."

Level 3: Rollback Trigger

Critical functionality broken after completion:
"Verification failed post-completion. Initiating rollback to last known good state."

The escalation makes cutting corners progressively harder. Evidence is the only path through.
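
All three levels fit into a single gate function. The sketch below is self-contained but simplified: the red-flag list is repeated inline, the evidence check looks only at an exit code, and the rollback decision is reduced to a boolean.

```python
from enum import Enum

RED_FLAGS = ("should work", "probably fine", "i'm confident", "looks good", "seems correct")

class GateResult(Enum):
    PASS = "pass"
    SOFT_BLOCK = "soft_block"  # Level 1: ask what evidence supports the claim
    HARD_BLOCK = "hard_block"  # Level 2: refuse to mark the task complete
    ROLLBACK = "rollback"      # Level 3: revert to the last known good state

def completion_gate(response: str, evidence: dict | None, post_checks_ok: bool = True) -> GateResult:
    """Escalate from clarification to block to rollback as verification fails."""
    if any(flag in response.lower() for flag in RED_FLAGS):
        return GateResult.SOFT_BLOCK
    if not evidence or evidence.get("exit_code") != 0:
        return GateResult.HARD_BLOCK
    if not post_checks_ok:  # critical functionality broke after completion
        return GateResult.ROLLBACK
    return GateResult.PASS

if __name__ == "__main__":
    print(completion_gate("Should work now.", evidence=None))                          # SOFT_BLOCK
    print(completion_gate("Build verified.", evidence={"exit_code": 0}))               # PASS
    print(completion_gate("Build verified.", {"exit_code": 0}, post_checks_ok=False))  # ROLLBACK
```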

FAQ: Implementing Verification Gates

Why is 'should work' dangerous in AI development?
It indicates a claim without evidence. The AI (or developer) is expressing confidence without verification. This confidence often masks untested assumptions, missing edge cases, or fundamental errors.

What is forced evaluation mode?
A mandatory 3-step protocol: evaluate each skill (YES/NO with reasoning), activate every YES, then implement. Research shows 84% compliance vs 20% with passive suggestions. The commitment mechanism creates follow-through.

What phrases indicate unverified AI code?
Red flags include: 'Should work', 'Probably fine', 'I'm confident', 'Looks good', 'Seems correct'. These all express certainty without evidence of testing, building, or verification.

What evidence should replace confidence claims?
Specific proof: 'Build completed: exit code 0', 'Tests passing: 47/47', 'Screenshot at 375px shows correct layout', 'Bundle size: 287KB'. Facts, not feelings.

How do I implement verification gates for AI code?
Add hooks that run after AI responses. Check for red flag phrases and reject them. Require build output, test results, or screenshots before marking tasks complete. Make evidence the only path forward.


I thought the problem was AI accuracy. Well, it's more like... the problem was my verification laziness. The AI generates good code most of the time. But "most of the time" isn't good enough for production.

Maybe the goal isn't trusting AI less. Maybe it's trusting evidence more--and building systems that make "should work" impossible to accept.


Related Reading

This is part of the Complete Claude Code Guide.
