"If you use AI, you're not a real developer."
Same energy as every gatekeeping panic before it:
- Stack Overflow? Not a real programmer.
- Frameworks? Not a real programmer.
- High-level languages? Not a real programmer.
- IDEs with autocomplete? Not a real programmer.
The tools change. The panic doesn't.
This time though, there's something worth worrying about. Just not what people think.
The Shakespeare problem
February 2026. AI detectors flagging Shakespeare's sonnets as 99% AI-generated. ZeroGPT declaring the US Constitution was written by ChatGPT. These tools claim 99% accuracy, yet they can't tell the framers in 1787 from GPT-4 in 2024.
If we can't detect AI text when we know the ground truth, how are we supposed to audit codebases?
The answer: the same way we always should have. Judgment. Accountability. Not detection tools that think the Founding Fathers had transformers.
Boundaries that keep moving
Writer Elnathan John on AI and poetry:
"People speak as though AI were a finished object, not a moving frontier. They present limits as laws. They mistake a snapshot for a constitution."
This is what's happening in software development. "AI can never write production code" has become dogma, repeated with certainty and very little historical memory.
But history keeps proving certainty wrong.
We flew. Transplanted organs. Edited genes—rewrote the code of life in ways that would've been inconceivable centuries ago. Every time we cross a line we said was uncrossable, we move the goalposts.
Software follows the same pattern:
| What we said was impossible | What actually happened |
|---|---|
| High-level languages will never match assembly | They exceeded it |
| Garbage collection will never work for real systems | It powers most production code |
| You need to understand pointers to program | Most developers never touch them |
| Copying from Stack Overflow isn't real programming | It's how we all work |
| Frameworks mean you don't know fundamentals | Frameworks won |
Every time, gatekeepers panic. Every time, boundaries move.
The question isn't whether AI can write code—it already does, and improves every quarter.
The question is: what does this expose about what we've been pretending?
What AI actually threatens
Elnathan again, because he names it precisely:
"What AI threatens for many people is not writing itself, but the social architecture around writing: scarcity, gatekeeping, credentialed access, institutional permission, and inherited prestige."
Replace "writing" with "programming." That's the panic explained.
If anyone can ship code, who gets to be a "real developer"?
If a designer can prompt a working prototype, what makes a frontend developer special?
If a PM can generate a complete API with Claude Code, why do we need backend teams?
If GPT-5 can refactor legacy code in an afternoon, what's ten years of experience worth?
These questions expose what gatekeeping actually protected: not code quality, but social hierarchy.
Scarcity of programming skill created economic value. Gatekeeping—CS degrees, whiteboard interviews, "culture fit," YoE requirements—controlled access. Credentials decided who got to call themselves "real engineers."
AI doesn't threaten programming. It threatens the architecture that made programming a protected class.
And that architecture was never about quality.
Better questions
Instead of "is this code AI-generated?" ask:
Who's accountable when it breaks?
Not who wrote it. Who gets paged at 3am? Who explains to customers why their data's gone? Who faces consequences if this was wrong?
If the answer is "nobody," you have a problem that existed before AI.
What judgment shaped this architecture?
Not what tool was used. What tradeoffs were made? Why this approach over alternatives? What's the blast radius if this fails? What assumptions could invalidate this design?
If nobody on the team can answer these, the tool doesn't matter.
What context is missing?
AI doesn't know why past decisions were made. It doesn't know your company's unwritten rules. It doesn't remember that time the same approach melted the database in production.
If your team doesn't have this memory, you'll repeat mistakes faster.
Can anyone fix this without the original author?
If not, you have a maintenance problem. AI makes it worse—it generates code that works but nobody fully understands.
What responsibility does the author accept?
Not the AI. The human who merged it. The human who approved it. The human whose name is on the commit.
These were always the right questions. We just didn't ask them because code review theater was easier than accountability.
The real threat
The panic is misdirected. Here's what matters:
Not "AI is writing our code"
But "who's accountable when AI code fails in production?"
Not "AI will replace developers"
But "we're eliminating juniors who'd develop judgment"
Not "AI code looks like human code"
But "we can't review code faster than AI generates it"
Not "AI doesn't understand our codebase"
But "neither do most developers, and AI makes that obvious"
Not "we need better AI detectors"
But "we need accountability frameworks"
Different problems. Different solutions.
What this means
If you're worried about AI replacing developers, you're asking the wrong question.
The developers who'll thrive won't be the ones who prompt better. They'll be the ones who:
- Ask better questions
- Make better tradeoffs
- Exercise better judgment
- Take accountability for outcomes
AI can write code. It can't know what's worth building.
That gap matters more than ever.
Top comments (93)
The real threat is the race to the bottom.
Companies have to lower their prices because business people expect software development to go faster with AI. I was talking to someone who runs a company, and he said that before AI, when people submitted offers well below the market price, you knew the quality would be lackluster. But with AI, those prices are becoming the norm.
The problem I see with that scenario is: how will customers know whether the quality of the software is good or bad? It could be a company that works seriously with AI, but it could also be a vibe prompter.
The other problem with lowering prices is that IT companies will not be able to pay their people as much as they used to. People with skills will leave when they are not paid what they're worth. So it's not only no new people coming in; it's going to be an intelligence drain from the sector too.
This feels like the days when people asked developers to make custom websites for the exposure. Exposure doesn't pay the bills.
When someone does work on your house, nobody thinks of paying them by putting up a sign.
@xwero This is a brutally honest extension. Thanks for bringing the economic reality front and center.
The race to the bottom on pricing is the part that scares me most too. AI enables "good enough" slop at rock-bottom rates, clients can't reliably spot the difference (no more credential signals), and skilled people bail when pay doesn't match the value they deliver. It's not just fewer juniors entering; it's experienced talent draining out, leaving even less mentorship and judgment in the ecosystem.
Your "exposure" parallel hits home. We've seen this undervaluation cycle before, and it never ends well for quality or sustainability.
Have you seen agencies or freelancers starting to push back? Or is it still full steam ahead on the price war?
I don't know where it is going. I'm not on the business side.
I brought it up to make a point.
I think this is really well-put.
"Race to the bottom" issues should be resolved by market dynamics in the long run where the bottom turns out to not be feasible — but is there a tight enough feedback loop to avoid hitting rock bottom before it's too late?
@ben Thanks, appreciate you jumping in.
I agree markets should punish "rock bottom" quality eventually (clients notice outages, tech debt, vendor churn), but the feedback loop is worryingly slow in software. Cheap AI slop can ship fast, look good in demos, and rack up users before the cracks show; by then the damage (burned trust, talent exodus) is already done.
The question is whether we can accelerate that loop somehow: better transparency tools, reputation signals, or just more public "this failed because of unchecked AI" stories. What do you think could tighten the feedback before too many hit bottom?
The junior pipeline problem is the one that keeps me up at night. I'm a solo dev shipping two SaaS products right now, and I use AI constantly — but I can only use it effectively because I spent years debugging garbage code at 2am, reading stack traces nobody else wanted to touch, and learning why certain architectural decisions blow up at scale.
If we skip that painful apprenticeship phase for the next generation, we're basically training pilots who've never experienced turbulence. They'll be fine until they're not, and when they're not, nobody will know how to land the plane.
The accountability framing is spot on though. "Who gets paged at 3am" is a much better filter than "who wrote this code." The tool doesn't matter — the ownership does.
@egedev This hits hard. The junior pipeline / apprenticeship skip is what keeps me up too. Your "pilots who've never experienced turbulence" metaphor is perfect. AI can generate smooth flights, but when turbulence hits, judgment from hard-earned scars is what lands the plane.
The 3am page filter ("who owns it?") over "who wrote it" is the accountability shift we need.
As someone shipping with AI daily, how are you thinking about mentoring or onboarding the next wave without the old grind? Or is the model fundamentally changing?
Oh my God. You wrote the post with AI, and you're replying to comments with AI too? 😫
The thing is that the AI system now knows how to deal with turbulence. The pilots have experienced turbulence, but when it happens, they tell the AI to deal with it, and the AI does, most of the time.
That gives the pilot enough experience using the AI to debug problems. Then when something really crazy comes up, the pilot will know how to use AI to debug it.
I'm not sure, but it seems to me that we're getting hung up on the wrong aspect of AI in software development. Instead of worrying about it replacing our jobs, we should be looking at how it's changing the landscape of what we do. I've always been fascinated by the traditional apprenticeship model, and the idea that AI might eliminate that - and the chance for junior developers to hone their skills - has me really thinking.
@itsugo Thanks, you're hitting the exact thing that keeps me up at night.
The traditional apprenticeship (grinding through bugs, reading stack traces at 2am, learning why decisions blow up at scale) is how judgment gets built. If AI lets juniors ship fast without that grind, we risk a generation that can prompt but can't debug or trade off under real pressure.
I'm not saying AI is all bad; it can accelerate learning if used right, but skipping the painful "why" phase feels like a massive loss. That's why I'm writing the next piece on rebuilding the ladder so juniors still get those scars, just differently.
What part of the apprenticeship model do you think is hardest to replace or recreate with AI in the mix?
I've been thinking a lot about the tradeoff between speed and quality in coding, and I'm starting to see a difference in ideologies between getting code out the door quickly (even if it doesn't work as expected) and truly understanding how it works. When I was an apprentice, I spent a lot of time breaking things and trying to figure out why they wouldn't work, which is where I picked up the most valuable lessons: how to fix them, and what to do differently next time. Also, if I can't explain how I built something in simple words, then I can't take credit for building it.
The key part of the apprenticeship model that's hard to replicate is the chance to make the kinds of mistakes people used to make without AI guidance. Now that AI is rapidly improving, I think the mistakes we make will be different, and the skills we need to develop will be distinct as well.
@itsugo Thanks, the apprenticeship grind (breaking things, fixing them, explaining simply) is exactly where the real lessons live. AI shortcuts the "make mistakes without guidance" phase, so the mistakes we do make will be different: probably subtler and harder to spot.
That shift in what skills we need to build is huge. The next piece is digging into how we recreate that learning loop in an AI world, so juniors still get the scars, just not the old way.
What do you think is the one apprenticeship lesson that's hardest to replicate now?
The point about "we can't review code faster than AI generates it" is the one that deserves its own article. That's the actual operational crisis nobody's staffing for. I've been benchmarking AI-generated code for security vulnerabilities and the volume problem is real — when 65-75% of AI-generated functions ship with security issues, the bottleneck isn't writing code, it's the judgment layer between generation and merge. Accountability frameworks won't work if the people accountable can't actually evaluate what they're approving.
@ofri-peretz Thanks, the "can't review faster than it generates" point is the operational crisis in plain sight.
65–75% of AI-generated functions shipping with security issues is a brutal stat. The bottleneck isn't generation anymore; it's judgment and evaluation before merge. Accountability breaks if the accountable people can't actually assess what's being approved.
I've seen similar volume problems in smaller teams; it's not sustainable without new rituals or tools. What approaches are you experimenting with to make the judgment layer scale when volume explodes?
The table of what we said was impossible vs what happened is a great reference. The accountability framing is the key insight here. I'm building AI-powered tools and the hardest problem isn't getting the AI to generate good code — it's designing systems where humans stay meaningfully in the loop. When AI writes 80% of a PR, the review process needs to fundamentally change, not just speed up. Great piece.
@vibeyclaw Thanks, glad the table and accountability framing landed for you.
You're exactly right: generation is the easy(ish) part now. The hard engineering is redesigning the loop so humans aren't just rubber-stamping 80% AI PRs. We need review processes that force real interrogation, catch confident hallucinations, and preserve ownership.
When you've got AI writing most of a PR, what changes have you made (or are experimenting with) to keep humans meaningfully in control? More structured checklists, mandatory "why this decision" notes, separate verification passes? Curious what actually works in practice.
Great question. Here's what's actually worked for us:
Mandatory "intent annotation" — before every AI-generated PR, the developer writes a 2-3 sentence explanation of why this change exists and what tradeoffs were considered. Forces you to think beyond "the AI suggested it."
Differential review — instead of reviewing the full PR, we diff what the AI generated against what a human would have written (even just mentally). The gaps are where bugs hide.
"Explain this line" challenges — during review, randomly pick 3-4 lines and ask the PR author to explain them without looking at the AI conversation. If they can't, that's a red flag.
The biggest insight: the review process needs to be adversarial toward confident-looking code, not just syntactically correct code. AI writes very convincing wrong answers.
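If you want to enforce the intent annotation automatically, something like this could work as a CI gate. This is a minimal sketch, assuming GitHub Actions; the "Intent / Tradeoff / Watch in prod" headings are just a hypothetical convention, not a standard:

```python
# Minimal sketch: fail CI when a pull request description is missing the
# intent annotation sections. Assumes GitHub Actions, which writes the
# webhook payload to the file named by GITHUB_EVENT_PATH.
import json
import os
import sys

# Hypothetical convention for the 2-3 sentence annotation.
REQUIRED_SECTIONS = ["Intent:", "Tradeoff:", "Watch in prod:"]


def pr_body() -> str:
    """Read the pull request description from the Actions event payload."""
    event_path = os.environ.get("GITHUB_EVENT_PATH", "")
    if not event_path or not os.path.exists(event_path):
        return ""
    with open(event_path, encoding="utf-8") as f:
        event = json.load(f)
    return (event.get("pull_request") or {}).get("body") or ""


def main() -> int:
    body = pr_body()
    missing = [section for section in REQUIRED_SECTIONS if section not in body]
    if missing:
        print("PR description is missing: " + ", ".join(missing))
        print("Add 2-3 sentences: the intent, the tradeoff you accepted, "
              "and what you'd watch for in production.")
        return 1
    print("Intent annotation found.")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

Wired up as a required status check, something like this makes the annotation hard to skip without being deliberate about it.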
@vibeyclaw This is fantastic. Thanks for sharing what actually works.
"Intent annotation" forcing the 2-3 sentence "why + tradeoffs" is brilliant. It turns passive acceptance into active thinking. The "explain this line" challenge is ruthless in the best way; if they can't defend it without the AI log, it's not owned.
The adversarial mindset toward confident code is the killer insight. AI excels at plausible answers; humans have to be the skeptics.
Have you seen any pushback from devs on these rituals (e.g., "too much ceremony"), or do they buy in once they see the bugs it catches?
Great question. Honestly, some initial eye-rolling at the "ceremony" — especially from senior devs who feel it slows them down. What flipped the mindset was showing them their own bug rate data: the devs who adopted intent annotation had 40% fewer production incidents over 3 months. Hard to argue with that.
The key was making it lightweight. We don't require a novel — just 2-3 sentences: what's the intent, what tradeoff did you accept, what would you watch for in prod. Takes 30 seconds per PR. The "explain this line" challenge we only do in code reviews, not every commit, so it doesn't feel like a tax on velocity.
Biggest win was reframing it as "this protects you when the AI-generated code breaks at 3am and someone asks why we shipped it." Self-interest is a powerful motivator.
When you say these things, are you suggesting that backend teams aren't needed because a single project manager can do it all, and that ten years of experience is now worthless? Do you really feel that AI writes code at the level of senior engineers?
And the conclusion here is that, deep down, we all believe this as well? And so the only way to protect ourselves is by coming out and saying AI isn't as good as a senior, trying to convince people it's worse than it really is? Hence, we're gatekeeping?
There is another explanation - maybe AI really isn't at that level yet. What it can do is impressive and many people find it helpful to integrate it into their workflow, but it's not senior level yet. Which means it's not gatekeeping for people to avoid an over-reliance on AI code, or for open source projects to want to avoid a flood of low-quality AI-generated PRs; it's just them being realistic about how things are today. Perhaps things will change in the future, like you said, but for now, AI just isn't there. Or, at the very least, it would be good to accept that people honestly believe AI isn't there, so when they limit the use of AI, it's not because they're trying to be dishonest and gatekeep, but because they honestly feel that code quality would be worse without those limits.
@thescottyjam Good pushback. I agree: a lot of the caution around AI PRs or over-reliance isn't gatekeeping; it's realism about current limits. Claude can generate solid code, but it's not refactoring legacy systems with deep context or making tradeoffs under real constraints yet.
I'm not saying AI is already senior-level (it's not), just that the panic often focuses on "it'll replace us" instead of "it shifts what we gatekeep." The honest belief that "it's not there yet" is valid. It's why accountability and judgment stay human for now.
What limits have you seen most clearly in practice that keep it from senior territory?
To be honest, I don't love comparing LLMs to Junior or Senior programmers, since it's extremely different in capabilities from either. It knows a lot about a huge range of topics, more than any single programmer could ever hope to know, and it knows how to solve straightforward problems or problems that have been solved many times over much faster than any individual programmer could ever hope to do.
But that's about it.
It's really bad at assessing the pros and cons of different approaches in the context of the company's goals, problem solving, coming up with alternative approaches, asking for clarification, learning, prioritizing what's most important, coming up with ideas to improve the product, isolating nasty bugs, and so forth.
Sure, AI can do some of this stuff to a small extent when explicitly prompted, but seniors tend to be capable of doing most or all of it fairly well. Juniors of course won't be as skilled in these areas, but they still generally do better than AI - a junior's more likely than AI to be able to figure out how to isolate a nasty bug, come up with good ideas to improve the product, and so forth.
So an AI's programming skill is probably at junior level (maybe a little higher), assuming the junior is aware of a really wide range of topics, but the other important skills just aren't there yet.
@thescottyjam I agree, AI's breadth is insane for known/solved problems, but it falls flat on context-aware tradeoffs, bug isolation, prioritization, and product ideas. That's why I see it as a force multiplier for engineers who already have those skills, not a replacement for them.
The "junior-level on syntax, but missing senior judgment" framing makes a lot of sense. it explains why the panic feels overblown to some and terrifying to others. The gap in those "soft" engineering skills is what keeps humans essential.
What do you see as the biggest current weakness in AI for those contextual/prioritization areas?
Really thoughtful piece. The distinction between 'programming' and the 'social architecture' around it is a masterclass in naming the problem. We aren't mourning the loss of writing code; some are mourning the loss of the exclusivity of being a coder.
I’m particularly interested in your questions about accountability. If 'code review theater' is dead, we’re forced into a world where we have to be much more honest about our technical debt. AI makes it incredibly easy to build a 'house of cards' that looks like a mansion. My big worry: as the volume of code explodes, will our 'judgment' be able to keep up, or will we just end up with 'AI reviewing AI' until the whole system becomes a black box? The gap you mentioned—knowing what's worth building—is the only high ground left.
@shalinibhavi525sudo Thanks, "mourning the exclusivity" is exactly it. The social architecture was never just about code quality; it was about who gets to claim expertise.
Your house-of-cards point is sharp. AI makes beautiful mansions easy, but debt explodes, volume overwhelms, and "AI reviewing AI" risks a black-box mess. The only defensible ground left is judgment on "what's worth building" and honest ownership of the mess.
How do you see teams avoiding that black-box trap right now? More human review layers, or something else?
The 'AI reviewing AI' black-box trap is exactly the nightmare scenario. It’s like having two people who don't speak the language trying to proofread each other’s translation—eventually, you just get a beautiful-looking hallucination.
From what I'm seeing, the teams avoiding the 'house of cards' trap aren't just adding more human review; they're actually changing the level of the review. Instead of nitpicking lines of code (which is where we get overwhelmed), the focus is shifting to Architectural Audits.
We’re seeing a move toward Spec-Driven Development—where a senior dev’s job is to ruthlessly define the constraints, the why, and the edge cases before the AI ever touches a key. If the human owns the blueprint and the accountability, the AI just becomes a high-speed bricklayer.
It’s a massive shift for juniors, though. We have to figure out how to teach them judgment when they aren't spending years in the syntax trenches anymore. I suspect the defensible ground isn't just knowing what to build, but being the person who can spot when the mansion the AI built doesn't actually have a foundation.
It’s a wild time to be in the room, isn't it? I’m still hopeful, but I’m keeping my accountability hat on tight!
@shalinibhavi525sudo This is brilliant. The "two people proofreading translations in a language they don't speak" analogy nails the black-box nightmare.
Shifting to architectural audits + spec-driven development makes total sense: human owns the blueprint (constraints, why, edge cases), AI lays bricks fast. That keeps the foundation solid instead of building pretty houses of cards.
The junior piece is exactly about recreating judgment without the old syntax trenches: how do we teach spotting "no foundation" when AI makes the mansion look flawless?
It's definitely a wild time. Keeping the accountability hat on tight feels like the only sane move right now.
I totally agree with your point, but the real problem is that juniors are using AI to write all the code without understanding the logic behind it. They aren't aware of the consequences they'll face if they don't learn the fundamentals. This will create a significant gap between junior and senior developers, and a junior may never become as skilled as a senior over time. The question is: who will fill the gap?
@rohit_giri Thanks, yeah, that's the core junior trap I'm worried about too. AI lets them ship CRUD fast, but skips the "why this fails at scale" scars that turn juniors into seniors.
The gap widens if we don't force fundamentals somehow. My next piece is digging into exactly that: how do we rebuild the ladder so juniors still learn logic and consequences, not just prompts?
What do you think the first "must-learn" fundamentals are that AI can't shortcut effectively?
This piece cuts through a lot of noise, and I appreciate that it refuses the easy villain narrative.
What AI seems to threaten isn’t “software development” so much as unexamined comfort. The parts of the job built on repetition, ceremony, and accidental gatekeeping feel exposed—not because AI is magical, but because they were never the essence of the craft to begin with.
The irony is that good engineers already work with abstraction, automation, and leverage. AI is just another layer—louder, faster, and more visible—forcing the question many avoided: what value do I actually add beyond syntax and recall?
Gatekeeping panic often reads less like fear of replacement and more like fear of re-evaluation. That’s uncomfortable, but not new. Every major shift in tooling has asked the same thing.
Strong article. Clear-eyed without being dismissive.
@canabady Thanks, "unexamined comfort" is a perfect way to put it.
AI isn't threatening the craft itself; it's exposing the parts that were never the core: repetition, ceremony, accidental prestige. Good engineers have always lived on abstraction and leverage; this is just a louder version forcing the "what do I actually add?" question.
The panic often feels like fear of that re-evaluation more than fear of replacement. Glad the piece cut through the noise for you.
This perspective exposes a critical shift in our industry. AI tools indeed challenge our perception of what constitutes a "real developer". The real questions revolve around accountability and oversight when using AI. If we trust AI to generate code, who is responsible for its maintenance, especially in production? As we move towards increasingly integrated AI systems, we must prioritize strong architectural decisions and robust practices. This isn't just about maintaining control; it’s about ensuring our systems remain reliable and comprehensible as we embrace these new technologies. 🤔
@theminimalcreator Thanks, exactly: the shift isn't about generation; it's about keeping humans meaningfully responsible for reliability and comprehension.
When AI generates 80% of the code, the architectural decisions and oversight become the only thing keeping systems from becoming unmaintainable black boxes. Prioritizing those "strong practices" is the real unlock.
What's one oversight pattern you've found most effective when integrating AI-generated code into production systems?