We are currently living through the greatest inflation of software in history.
With the AI tools we have available in 2026, a Junior Developer can generate more lines of code in an afternoon than a Senior Developer used to write in a month. We have lowered the barrier to entry for creation to almost zero.
But we have not lowered the cost of maintenance.
If anything, we have created a crisis. We are drowning in "good enough" code, boilerplate, and features that "might be useful later."
Code is not an asset. It is a liability.
I used to measure my productivity by how many green squares I had on my GitHub contribution graph. I thought that writing more meant I was building more value.
I was wrong.
Every line of code you write is a commitment. It is something that needs to be:
- Tested
- Debugged
- Secured
- Updated when dependencies break
- Read by the next person (or yourself in 6 months)
The Hoarder Mindset
I recently realized that my codebase looked exactly like my "Read Later" list—a graveyard of good intentions.
I had features I built "just in case." I had abstractions that were "future-proof" (for a future that never arrived). I had utility functions that were used once and then forgotten.
It was digital hoarding. And just like hoarding physical objects, it creates a mental load that paralyzes you.
The Era of the Code Janitor
The best developers I know right now are not the ones spinning up 10 microservices in a weekend. They are the ones walking into a project and saying:
"We can delete this module."
"We don't need this library."
"We can solve this without code."
They are not Architects. They are Janitors. And I mean that with the highest possible respect.
The Art of Subtraction
I spent this past weekend doing nothing but deleting.
I removed a feature that only 2% of users touched but caused 50% of the support tickets.
I ripped out a complex state management library and replaced it with standard React hooks.
I hard-coded variables that I had made dynamic "just in case."
The result? The bundle size dropped. The build time was cut in half. But most importantly, my mental model of the system became clear again.
Conclusion
In a world where AI can write infinite code, the value of writing code approaches zero. The value shifts entirely to curation.
Your job is no longer to build the mountain. Your job is to carve the sculpture out of the rock.
If you want to be a Senior Engineer in this new era, stop asking "What can I add?" and start asking "What can I remove?"
Go look at your PRs from last week. Did you add complexity, or did you remove it?
The most satisfying commit message is not "Feat: Added X".
It is "Refactor: Deleted 2,000 lines".


Top comments (56)
I decided to refactor my code of 5,327 lines in main.py + 344 + 665 additional scripts + 7 + 358 + 226 + 378 lines of shaders. I feel like I haven't slept in a week. Seriously, I just tried using __init__.py and I don't even understand its purpose, other than burning my patience...?!
Refactoring is an art. And apparently, the way I format the structure, it's safe to say I'm a bad architect. No. As Paolo Veronese said, "I'm an artist, I see it that way." And I'll say, "I'm a vibe coder, and I don't need architectural skills, I just need it to work." Maybe that's bad, but at least it gives me impetus to create something bigger (I hope).
5,327 lines in main.py? That is not a script, that is a novel. Respect.
I love the term "Vibe Coder". Honestly, getting it to work is the hard part. Architecture is just what we do later when we are tired of scrolling up and down 5,000 lines to find a variable.
Don't let __init__.py break you. Python packaging confuses people with 10 years of experience. Go get some sleep.
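If it helps, here's a tiny throwaway sketch of what __init__.py is actually for: it marks a directory as a package, runs once on first import, and defines the package's public surface. (The package name mypkg and the solve function are made up for illustration.)

```python
import importlib
import sys
import tempfile
from pathlib import Path

# Build a throwaway package on disk so the example is self-contained.
root = Path(tempfile.mkdtemp())
pkg = root / "mypkg"
pkg.mkdir()

# A module with the real logic.
(pkg / "core.py").write_text("def solve():\n    return 42\n")

# __init__.py: marks mypkg as a package and re-exports core.solve,
# so callers can write mypkg.solve instead of mypkg.core.solve.
(pkg / "__init__.py").write_text("from .core import solve\n")

sys.path.insert(0, str(root))
mypkg = importlib.import_module("mypkg")
print(mypkg.solve())  # prints 42 via the re-export in __init__.py
```

An empty __init__.py is also fine; it just means "this directory is a package" with no curated public surface.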
Thanks!
@embernoglow Vibe coding builds prototypes. Deletion builds cathedrals.
You're living the post exactly, and it's great to see you finally pruned your codebase. BTW your sdf repo is just gold :) Would love to see more open source contributions like this.
Thank you!
This is not a 2026 skill; writing as little code as possible has always been the goal of programming.
Libraries and frameworks should be a considered solution in the application, not a starting point.
With AI, libraries and frameworks could become a thing of the past, because it can produce custom code faster than you can set up a framework.
In the AI experiments I'm doing, I'm using a router as a front controller (the base of most web frameworks) to check how far I can push AI.
A router can come with one or more ways to identify routes: config, a builder pattern, attributes. But an application will have a single way of identifying routes, to keep it predictable.
So you don't need the other options, which means you can remove the abstractions that exist only to make those other options possible.
And because the base isn't a framework anymore, the application doesn't force a framework's opinions on you.
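To make that concrete, here's a rough sketch of the idea: a front controller with exactly one route-identification mechanism (a plain dict) and none of the multi-option abstractions a framework would ship. The handlers and routes are invented for illustration.

```python
# One way to identify routes: a dict. No builder pattern, no attribute
# scanning, no config loader - those abstractions only exist to support
# options this application will never use.

def home(_params):
    return "200 OK: home"

def user(params):
    return f"200 OK: user {params['id']}"

ROUTES = {
    ("GET", "/"): home,
    ("GET", "/users/{id}"): user,
}

def dispatch(method, path):
    # Exact match first, then a single, predictable placeholder match.
    handler = ROUTES.get((method, path))
    if handler:
        return handler({})
    for (m, pattern), h in ROUTES.items():
        if m != method or "{" not in pattern:
            continue
        parts = path.strip("/").split("/")
        pat_parts = pattern.strip("/").split("/")
        if len(parts) != len(pat_parts):
            continue
        params = {}
        for got, want in zip(parts, pat_parts):
            if want.startswith("{"):
                params[want[1:-1]] = got  # capture the placeholder value
            elif want != got:
                break
        else:
            return h(params)
    return "404 Not Found"

print(dispatch("GET", "/"))         # 200 OK: home
print(dispatch("GET", "/users/7"))  # 200 OK: user 7
print(dispatch("GET", "/missing"))  # 404 Not Found
```

The whole dispatcher fits on one screen, which is the point: everything a framework adds on top of this exists to support choices a single application has already made.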
The question is not whether you can delete lines; line count is the worst metric you can use to measure code quality.
The question is how much code are you willing to maintain.
You nailed it with your last sentence. How much code are we willing to maintain is the only metric that really matters.
Your point about AI replacing frameworks is incredibly interesting. We used to accept framework bloat because we needed the development speed. Now that AI can write a custom router just as fast, we do not need to import all that extra baggage anymore.
The challenge now is just making sure the AI does not invent its own bloated abstractions when we ask it to build those custom solutions. Keeping it minimal is definitely a timeless skill.
You can instruct AI to write the simplest code, and the output always has to be reviewed.
What kills frameworks in the AI era is that they are opinionated, and everyone has their own opinion.
Like you mentioned we accepted the framework opinion, because it brought us speed.
Now we can start with libraries and custom code, and let AI create the glue code in the same time it takes to set up a framework.
That is why I took the router component as a base to experiment with AI. Does it reliably alter the router? Does it do weird things, like using FFI code for the route matching? Does it create the base for a maintainable application? Does it write tests and documentation that require only minor changes? And so on.
I'm late starting these experiments, but the tools we have today are easier to use and more diverse, and we also understand the pros and cons of AI use better. Most of the time it pays off not to dive in from the start.
The maintenance cost point is the one that doesn't get enough attention. We benchmarked AI-generated code recently and found that 65-75% of functions had security vulnerabilities — so it's not just that AI creates more code to maintain, it creates more risk to maintain. Every line you don't delete is a line you're implicitly agreeing to secure, and most teams aren't doing that audit. The "code janitor" framing is spot on. @the_nortern_dev
That 65-75% stat is terrifying, but honestly not surprising.
I absolutely love how you phrased this: "Every line you don't delete is a line you're implicitly agreeing to secure." That should be printed on every CTO's wall.
It reframes the "Code Janitor" from someone who just cleans up messes to someone who is actively reducing the attack surface. In 2026, deleting code is arguably the most effective security patch you can apply.
Thanks for adding that data point to the discussion.
Honestly, I don’t really like the vibe of "vibe coding". It’s great for scaffolding and getting something off the ground fast. But once you start leaning on it too heavily, it almost feels like the AI senses the dependency and starts churning out slop.
That is a perfect description. "Vibe coding" feels like borrowing time from the future at a predatory interest rate.
I have noticed that exact degradation too. Once the context gets complex, the AI starts guessing. If you do not have the deep knowledge to audit and delete that "slop" immediately, you end up with a codebase that is technically working but impossible to maintain.
This is an excellent consideration to bring forward this year. The ability to simplify and create lean code that does more with less signifies a deeper understanding of the codebase and should be rewarded as such.
Spot on, Julien. I think the challenge for engineering leaders in 2026 is exactly that: figuring out how to reward the "negative space" in a project.
It’s easy to measure a feature launch, but it's much harder to measure the value of a bug that never happened because the code was simplified. We need to move away from "lines of code" as a metric and start looking at "reduced complexity" as the true sign of seniority.
Have you seen any teams successfully implementing metrics or cultures that actually incentivize this kind of subtraction?
In one of my previous teams, it would be taken as a positive to simplify/remove code whenever possible. However implemented as an actual metric no, it would be interesting to explore that further.
Making it a formal KPI is definitely risky. Goodhart's law kicks in fast and people might start deleting safety checks just to hit a number.
We just highlighted negative lines of code during sprint reviews. It was enough to signal that cleanup mattered without creating perverse incentives
Good point! Goodhart's law indeed!
This resonates a lot with me.
As someone building AI-powered and full-stack products, I’ve realized that shipping fast is easy in 2026 — especially with AI — but maintaining clarity is the real engineering skill. Every abstraction, every dependency, every “future-proof” config adds cognitive load to the system.
In one of my recent projects, removing an unnecessary state library and simplifying the data flow improved performance more than any “new feature” I added. The code became easier to reason about — and that’s real productivity.
Writing code is creation.
Deleting code is judgment.
And judgment is what separates developers from engineers.
"Writing code is creation. Deleting code is judgment." That is a brilliant way to frame it. I might have to quote you on that in the future.
State management is the absolute perfect example. We always reach for the heavy libraries on day one, convinced the app will be massive. Ripping that global state out later and realizing standard data flow works fine is the best feeling.
Keystrokes are cheap now. Judgment is the actual bottleneck.
That really means a lot — feel free to quote it 😄
You’re spot on: premature architecture is just overengineering in disguise. AI made keystrokes cheap, but judgment, restraint, and knowing when not to abstract — that’s the real senior skill.
Consider the quote officially stolen.
Fighting the urge to build for scale on day one is definitely the hardest habit to break. Thanks for a great back-and-forth. See you in the next thread.
YES. I wrote about this same topic this week because it's so underappreciated.
The hardest part isn't finding code to delete — it's convincing the team. Nobody wants to be responsible for deleting something that "might be needed someday." I've found that framing it as risk reduction works better than "code cleanup":
"This unused auth module has 3 unpatched CVEs in its dependencies. Deleting it eliminates the attack surface and saves us from maintaining code nobody uses."
Security risk + maintenance cost > "it's messy." Managers respond to the first, not the second.
My biggest deletion win: 8,200 lines of a deprecated auth system. Saved $340/month in Redis costs for a cluster that only the dead code was using.
8,200 lines and killing a useless Redis cluster is the absolute dream.
You are completely right about the framing. "Code cleanup" sounds like a low-priority chore to management, but "attack surface reduction" sounds like an urgent necessity.
Tying deletion directly to infrastructure costs and CVEs is a brilliant way to get buy-in from the people holding the budget.
Spot on for 2026!
Pruning AI-generated cruft exposes the real system. (I once refactored a 10k-line LLM service to 2k—same perf, zero tech debt)
The new interview: "Delete 50% of this codebase. Explain why" :)
Codebase value concentrates as code shrinks! 🚀
I bet the maintainability of that service went through the roof.
I absolutely love that interview concept. We spend so much time testing candidates on their ability to write new algorithms, but almost zero time testing their ability to read and simplify existing ones.
If a candidate can look at a module and confidently say "we can delete this half because it is just legacy bloat," that is an instant hire for me.
True! I'm starting to think that from now on I'm going to keep this as an evaluation parameter for candidates while hiring.
This is exactly the mental shift that separates seniors from juniors in 2026. Everyone can generate code now — the differentiator is knowing which generated code to keep and which to throw away.
I've seen teams drown in AI-generated boilerplate because they treated every suggestion as gospel. The real skill is asking "does this actually solve my problem, or did the model just throw its most common pattern at me?"
The "Code Janitor" framing is perfect. Deletion isn't just cleanup — it's curation. In a world of infinite generation, the engineer who can say "no, we don't need this" is more valuable than the one who can write it.
Your digital hoarding analogy hit hard. My "read later" list and my codebase have a lot in common — both are full of things I thought I'd need someday but never touched.
Great piece!
The parallel between a "read later" list and a codebase is spot on. We are just hoarding text in different formats.
You hit the nail on the head regarding AI boilerplate. The models are designed to be helpful, which usually means they over-deliver. They will hand you a massive factory pattern when all you needed was a simple function.
Being the person who can look at 50 lines of perfectly generated code and just say "no thanks" is absolutely the new senior skill.
I think the key is controlling that thin boundary between the layers where you make architectural decisions and where you hand things off to generative AI.
It’s really a system of abstraction levels. At the business level, you could say “build me a program that makes a million dollars” and delegate everything below. Or you can consciously design each layer yourself and use AI as a tool that translates your thinking into working, understandable components that you can combine into a product.
Right now, I’ve chosen to use AI to generate mostly isolated modules — things I could have written myself and fully understand. Then I treat them as reusable building blocks.
I was already working this way before AI, but now it feels like the optimal approach. Fixing, debugging, or vibe-checking a module is limited to its local context, so cognitive load doesn’t explode — and I don’t feel tempted to outsource all decisions to the model
Owning the boundaries is exactly the right approach.
If you let the model architect the entire system, you are basically just a passenger. Generating isolated modules that you actually understand is the absolute sweet spot right now. It keeps the cognitive load entirely local.
The moment you ask the AI to start wiring all those modules together is when the technical debt really starts compounding. Treating them as reusable, human-verified building blocks is the only way to stay sane.
The "digital hoarding" analogy really hit home. I spent a month building an elaborate plugin system for a personal project, convinced I would need it "someday." Eventually I ripped the whole thing out and replaced it with a 40-line script that did exactly what I needed. The relief was immediate — not just in the codebase, but mentally.
Your point about code being a liability rather than an asset is especially true now that AI can regenerate boilerplate in seconds. The real skill is knowing what NOT to build in the first place. I have started asking myself "can I solve this with configuration instead of code?" before writing anything, and it is surprising how often the answer is yes.
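As a trivial made-up illustration of that "configuration instead of code" question (the tier names and rates are invented):

```python
# Before: the decision is baked into code - every new tier means
# another branch, another deploy, another thing to test.
def discount_before(tier):
    if tier == "gold":
        return 0.20
    elif tier == "silver":
        return 0.10
    return 0.0

# After: the same decision expressed as configuration. Adding a tier
# is a data change, and this dict could just as well live in a JSON
# or YAML file owned by whoever owns the pricing.
DISCOUNTS = {"gold": 0.20, "silver": 0.10}

def discount_after(tier):
    return DISCOUNTS.get(tier, 0.0)

# Both versions agree, but only one grows when the business changes.
assert discount_before("gold") == discount_after("gold") == 0.20
assert discount_before("bronze") == discount_after("bronze") == 0.0
```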
Replacing a month of architecture with a 40-line script is painful but also the best feeling ever.
You nailed the psychological part. Code isn't just bytes, it is mental load. Every abstraction is just one more thing you have to remember later.
I love the configuration vs code rule. I am definitely stealing that idea.