DEV Community

Matthew Hou


How I Do Code Reviews: A Checklist That Catches Real Bugs, Not Style Nitpicks

I used to write code review comments like: "Could you add a blank line here?" and "Nit: I'd rename this variable."

Then a production incident happened because I approved a PR with a race condition I didn't notice — I was too busy suggesting better variable names.

That incident changed how I review code. Here's my current process.

The 3-Pass Method

I read every PR three times. Each pass has a different focus. This sounds slow, but it's faster than reading once and trying to catch everything simultaneously.

Pass 1: The "What and Why" (2 minutes)

Don't read the code yet. Read:

  1. The PR title and description — what is this supposed to do?
  2. The linked ticket/issue — what's the context?
  3. The file list — which parts of the codebase are touched?

After this pass, I should be able to answer: "If I were building this feature, would I touch the same files?"

If the answer is no, that's a design discussion — not a code review. Catch architectural misalignment before line-by-line review, not after.

Pass 2: The "Correctness" Pass (10-20 minutes)

Now read the code. But only look for things that matter:

1. Logic Errors

  • Off-by-one errors in loops and pagination
  • Missing null/undefined checks
  • Wrong comparison operators (= vs ==, < vs <=)
  • Incorrect boolean logic (De Morgan's law violations)
// Spot the bug:
if (!user.isAdmin || !user.isActive) {
  throw new ForbiddenError();
}
// Bug: with ||, this blocks inactive admins AND all non-admins, active or not.
// If the intent was "block only users who are neither admin nor active",
// it should be: if (!user.isAdmin && !user.isActive)
// Wait — actually, it depends on requirements. Are inactive admins allowed?
// This is the kind of question worth asking.
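When I suspect a De Morgan mix-up, I enumerate all four cases. A quick sketch (the `user` shapes are hypothetical, mirroring the snippet above):

```javascript
// The condition from the snippet above, wrapped for testing.
function isBlocked(user) {
  return !user.isAdmin || !user.isActive;
}

// By De Morgan's law, (!A || !B) === !(A && B), so this must behave identically:
function isBlockedDeMorgan(user) {
  return !(user.isAdmin && user.isActive);
}

// Enumerate all four cases to see exactly who gets blocked.
const cases = [
  { isAdmin: true,  isActive: true  }, // active admin       -> allowed
  { isAdmin: true,  isActive: false }, // inactive admin     -> blocked (intended?)
  { isAdmin: false, isActive: true  }, // active non-admin   -> blocked
  { isAdmin: false, isActive: false }, // inactive non-admin -> blocked
];
for (const u of cases) {
  console.assert(isBlocked(u) === isBlockedDeMorgan(u));
}
```

Writing the table out takes a minute and turns a vague "this condition feels off" into a concrete question for the author.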

2. Error Handling

  • What happens when the external API is down?
  • What happens when the database query returns no results?
  • Are errors caught at the right level? (Not too early, not too late)
  • Are error messages helpful for debugging?
// Red flag: swallowed error
try {
  await processPayment(order);
} catch (err) {
  console.log('Payment failed');  // Which payment? What error? 
  return { success: false };       // Caller has no idea what happened
}
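A version that keeps the context is barely more code. A sketch, assuming `processPayment` throws on failure (the stub here is hypothetical, just so the example runs):

```javascript
// Hypothetical stand-in for the real payment call, so the sketch is runnable.
async function processPayment(order) {
  throw new Error('card declined');
}

async function chargeOrder(order) {
  try {
    return await processPayment(order);
  } catch (err) {
    // Log WHICH payment failed and WHY, then rethrow so the caller can decide.
    console.error(`Payment failed for order ${order.id}: ${err.message}`);
    throw new Error(`Payment failed for order ${order.id}: ${err.message}`, { cause: err });
  }
}
```

Whether to rethrow or return a result object depends on the caller's needs, but either way the error should carry the order id and the original failure reason.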

3. Concurrency & Race Conditions

  • Check-then-act patterns without locking
  • Shared mutable state accessed from multiple threads/requests
  • Missing database transactions for multi-step operations
// Classic race condition:
const count = await db.query('SELECT count FROM inventory WHERE sku = ?', [sku]);
if (count > 0) {
  await db.query('UPDATE inventory SET count = count - 1 WHERE sku = ?', [sku]);
}
// Two concurrent requests both see count = 1, both decrement. Count goes to -1.
// Fix: Use a single atomic query with WHERE count > 0
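The atomic fix can be sketched like this. The in-memory "driver" below is a hypothetical stand-in (not a real database API) that models the single-statement `UPDATE ... WHERE count > 0`, just to make the idea concrete:

```javascript
// Hypothetical in-memory stand-in for a SQL driver, to make the sketch runnable.
function makeInventory(initialCount) {
  let count = initialCount;
  return {
    // Models: UPDATE inventory SET count = count - 1 WHERE sku = ? AND count > 0
    // The database applies the condition and the decrement atomically.
    async decrementIfAvailable(sku) {
      if (count > 0) {
        count -= 1;
        return { affectedRows: 1 };
      }
      return { affectedRows: 0 };
    },
    get count() { return count; },
  };
}

// The fix: one atomic statement instead of check-then-act.
async function reserveOne(inventory, sku) {
  const result = await inventory.decrementIfAvailable(sku);
  return result.affectedRows === 1; // false means "out of stock"; count never goes negative
}
```

The key property: the check and the decrement happen in one statement, so two concurrent requests can't both observe `count = 1`.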

4. Security

  • SQL injection (string concatenation in queries)
  • XSS (unescaped user input in HTML)
  • IDOR (accessing resources without ownership check)
  • Secrets in code or logs
  • Missing authentication/authorization checks
// IDOR: anyone can view any user's data
app.get('/users/:id/settings', authMiddleware, async (req, res) => {
  const settings = await db.getUserSettings(req.params.id);
  // Missing: verify req.user.id === req.params.id
  res.json(settings);
});
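The fix is an ownership check before the query. A sketch — note that Express route params like `req.params.id` arrive as strings, so normalize before comparing; the admin escape hatch is a hypothetical policy, not something from the original handler:

```javascript
// Ownership check extracted as a plain function so it's easy to test.
function canAccessSettings(requestUser, targetUserId) {
  // Route params are strings; normalize both sides before comparing.
  return String(requestUser.id) === String(targetUserId) || requestUser.isAdmin === true;
}

// In the handler, before the query:
// if (!canAccessSettings(req.user, req.params.id)) {
//   return res.status(403).json({ error: 'forbidden' });
// }
```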

5. Data Integrity

  • Missing database constraints (unique, not null, foreign key)
  • Missing validation on input (trusting the client)
  • Inconsistent state possible if process crashes mid-operation
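Server-side validation is the backstop for the "trusting the client" bullet. A minimal hand-rolled sketch (field names and rules are made up for illustration; in practice a schema library does this better):

```javascript
// Minimal server-side validation — never trust that the client sent sane data.
function validateNewUser(input) {
  const errors = [];
  if (typeof input.email !== 'string' || !input.email.includes('@')) {
    errors.push('email must be a string containing "@"');
  }
  if (!Number.isInteger(input.age) || input.age < 0) {
    errors.push('age must be a non-negative integer');
  }
  return { ok: errors.length === 0, errors };
}
```

Validation like this belongs alongside database constraints, not instead of them: the constraint is what holds when some other code path forgets to validate.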

Pass 3: The "Maintainability" Pass (5 minutes)

Only after correctness. This is where style comments live:

  • Is the code readable? Could a new team member understand it?
  • Are there obvious simplifications?
  • Is there unnecessary duplication?
  • Are the tests testing the right things?

Important: Prefix style comments with "Nit:" so the author knows they're optional. Don't block a PR over naming conventions.

My Review Comment Format

I prefix every comment to set expectations:

  • 🔴 Bug: This will break in production. Must fix.
  • 🟡 Issue: This won't break today, but will cause problems. Should fix.
  • 🟢 Nit: Style or naming suggestion. Optional.
  • 💡 Idea: Alternative approach to consider. Not a request.
  • ❓ Question: I don't understand this. Can you explain?

Example:

🔴 Bug: This query is vulnerable to SQL injection. 
`req.query.search` is concatenated directly into the query string.

Fix: Use parameterized query:
  db.query('SELECT * FROM products WHERE name LIKE $1', [`%${req.query.search}%`])
🟢 Nit: `processData()` is pretty generic. Maybe `normalizeUserImport()`?
Not blocking.

What I Don't Do Anymore

1. Block PRs Over Style

If it works, it's correct, and it's reasonably readable — approve it. The author's style preferences are valid even if they differ from mine. That's what linters are for.

2. Rewrite PRs in Comments

If the changes I want are that extensive, I have a conversation first. "Hey, I think the approach might need to change. Can we chat for 10 minutes?" A 10-minute call replaces 20 back-and-forth comments.

3. Delay Reviews

A PR waiting for review is a PR blocking someone's work. On my team I aim for a first response within 4 hours. That doesn't have to be a full review: a first pass plus a flag on any blocking concerns is enough.

4. Review Without Running the Code

For non-trivial changes, I pull the branch and run it. Reading code on GitHub catches logic errors. Running code catches integration errors, UX issues, and "it compiles but doesn't work" problems.

The Checklist (Copy This)

For every PR, I mentally check:

Before Reading Code:

  • [ ] I understand WHAT this PR does
  • [ ] I understand WHY this change is needed
  • [ ] The scope feels right (not too big, not mixing concerns)

Correctness:

  • [ ] No logic errors in conditionals and loops
  • [ ] Error cases handled explicitly
  • [ ] No race conditions in concurrent/async code
  • [ ] No security vulnerabilities (injection, IDOR, auth bypass)
  • [ ] Database constraints enforce data integrity
  • [ ] Edge cases considered (empty inputs, null, concurrent access)

Tests:

  • [ ] Tests exist for the new behavior
  • [ ] Tests cover error/edge cases, not just happy path
  • [ ] Tests are testing behavior, not mocking their own setup

After Review:

  • [ ] My comments are actionable (not just "this feels wrong")
  • [ ] Blockers are clearly marked as 🔴
  • [ ] I've said something positive if the code is good

That last point matters. Code review shouldn't be an experience people dread. If the PR is good, say so. "Clean implementation, nice catch on the edge case in line 47" costs you 5 seconds and makes review culture healthier for everyone.


How do you approach code reviews? I'm especially curious about teams that have transitioned to AI-assisted code review — does it change the human reviewer's role? Let me know in the comments.
