
Nikolai Noskov

I Haven't Written Code by Hand for a Year. And I'm Not Going Back.

Note: This article was originally written in Russian and adapted for an international audience. The original version is on Habr.

Hey dev.to! I'm a freelance developer from Russia. This is my take on AI-assisted development — not theory, but a year of real production experience.

TL;DR: Freelance developer, 2 years of experience, no CS degree. Grew income 6x in two years using LLM-first development. 100% of my code is now generated by AI. Projects in production, clients happy, team growing. Here's how it actually works — and why most skeptics haven't even tried it.


1. Numbers That Won't Convince Anyone

Two years ago I started freelancing at around $400/month. Now — $2,500/month. 6x growth. For context — that's solid money for Russia, roughly equivalent to a senior developer salary in Moscow.

I don't have a CS degree — just free online courses. Never worked at FAANG (or Yandex, our local equivalent), never wrote compilers, never contributed to major open source projects (though I'm itching to).

I build Telegram bots, Mini Apps, marketplace seller assistants, RAG systems, business automation. Small to medium projects for SMBs. Not enterprise — I've had a couple enterprise clients approach me, but we couldn't agree on terms. Probably for the best.

A year ago I stopped writing code by hand. Completely. And I don't recommend my interns do it either.

At first it was just iterative fixes — letting LLM handle unclear parts. Then I started providing more project context to chat-based models. Then came Cursor, Claude Code, Codex CLI. I wrote about my methodology in detail elsewhere — this post is more about the philosophy.

There was fear. Not doubt about the approach — actual fear. Like being afraid to forget how to walk. But it turned out — you need to stop walking to learn how to fly.


2. Concrete Cases

Let me be specific:

Marketplace Seller Assistant — $4,500. Bot that handles inventory, sales tracking, and reviews for Wildberries (think Russian Amazon). Sends notifications for new orders, pickups, cancellations, returns. Tracks critical stock levels relative to sales dynamics. Filters review notifications by star rating. Includes warehouse statistics, GPT-powered analytics with graphs over custom time windows. Competitor parsing, semantic core extraction from product descriptions for SEO. And the cherry on top — RAG for database queries via Postgres vectorization.

This isn't a landing page. It's an inventory management system with analytics, integrations, and AI features.
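The RAG part is less exotic than it sounds: rows are embedded once, stored as vectors, and a question is answered by retrieving the nearest rows for the model to read. Here is a minimal sketch of the retrieval step in plain Python; the `embed` stub and the table contents are illustrative assumptions, and in production the same similarity ranking runs inside Postgres via the pgvector extension rather than in application code:

```python
import math

def embed(text: str) -> list[float]:
    # Toy stand-in for a real embedding model: a character-frequency vector.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[(ord(ch) - ord("a")) % 26] += 1.0
    return vec

def cosine_sim(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical product rows; in the real bot these live in a Postgres table.
rows = {
    "SKU-1": "winter jacket, low stock, 3 left",
    "SKU-2": "summer t-shirt, 120 in stock",
}
index = {sku: embed(desc) for sku, desc in rows.items()}

def retrieve(question: str, k: int = 1) -> list[str]:
    # Rank stored rows by similarity to the question and return the top-k SKUs.
    q = embed(question)
    ranked = sorted(index, key=lambda sku: cosine_sim(q, index[sku]), reverse=True)
    return ranked[:k]
```

The retrieved rows then get pasted into the model's prompt as context. With pgvector, the whole ranking step collapses into one SQL query using its cosine-distance operator (`ORDER BY embedding <=> $1 LIMIT k`).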

Virtual Try-On Bot — $1,300. Bot accesses product catalog with photos. User uploads their photo, GPT validates it, then we use an AI try-on service to show the product on them. User gets direct marketplace links plus size recommendations based on their measurements.

Online Backgammon in Telegram Mini App — $4,000. With betting in TON cryptocurrency via smart contracts — escrow contract that holds the bet pool and sends winnings to the winner minus platform commission. Yes, the kind of project where "boldness and madness win," as one commenter put it.

Prototypes for each stage — often with mocks — delivered in 1-2 weeks. Not months.

These are projects from the last 3 months. 100% of the code generated by LLM. All in production. Clients satisfied. Ongoing maintenance and expansion.


3. Honest About Failures

There was one serious failure that affected production. But the problem wasn't the LLM — it was the process.

Client showed me requirements — I quoted $5,000. Too expensive, they said. Can we do cheaper? So I cut everything to a narrow MVP (removed smart contracts, monitoring, switched backend from Go to TypeScript, bunch of other stuff). New price: $1,800.

I didn't set clear boundaries in the spec. The client started requesting things that weren't mentioned initially, and I kept agreeing — yeah, I was being an idiot. We lost context; documentation became outdated because we were building new features on top of an unfinished core. The core itself started falling apart and was hard to restore.

My mistake was agreeing without setting boundaries. I explained to the client that we shouldn't deviate from the spec like that. We resolved it, and I learned my lesson.

There were also cases where I missed deadlines because of wrong instructions to the AI — for example, not specifying in the documentation that we strictly don't touch old code, only add new. Without this constraint, old logic started breaking and error cycles began.

Key insight: don't let the AI endlessly fix its own errors. You need to inject new context yourself and test hypotheses. That's the difference — a non-developer can't even guess where the bug is buried.
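To make that concrete, constraints like "don't touch old code" live in the project documentation the model reads on every run. A hypothetical excerpt from such a rules file (the wording here is illustrative, not my actual spec):

```markdown
## Constraints for AI-generated changes

- Do NOT modify existing modules. Add new code in new files and wire it in
  at clearly marked extension points.
- Before fixing a failing test, state a hypothesis about the root cause.
  Never retry the same fix twice; ask for new context instead.
- If a change seems to require touching old code, stop and ask a human;
  do not proceed on your own.
```

Tools like Cursor and Claude Code can be pointed at a file of this kind so the rules apply to every session, not just one prompt.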


4. What Skeptics Say

Under my posts, and in comments on other LLM-development articles, the same arguments come up every time. Let me address specific quotes.

"Not checking code after LLM — that's bold. No documentation will save you from hallucinations"

Hallucinations exist. Bugs exist. But do humans not have bugs? The difference is that I now spend 10% of my time on code generation and 90% on testing and debugging — and that 10% used to take not hours, but weeks. Documentation isn't protection from bugs — it's a way to maintain context and avoid loops. Without it, the LLM starts circling through the same swamp of its own half-digested output.

"Obviously these are projects where error cost is negligible. Better not vibe-code an inventory management system"

But we do build inventory management systems via LLM — with statistics, analytics, RAG. Everything works. A question for the skeptics: have you actually tried building such projects with an LLM yourselves? From real experience?

"I would never trust LLM with a critical project. Errors can surface from unexpected angles. The owner will suffer serious losses"

I've been working this way for a year. There were complaints — but all were resolved quickly, well within the normal range of development errors. In terms of time and financial results, I see only positives. But I won't forget the warnings.

"If you're carefully checking the code — you spend more time than writing it yourself. You'll be slower than an experienced programmer"

Not true. I have experience with manual development — I completed many projects by hand before smoothly transitioning to LLMs. By every metric — time and financial results — I see only positives. A bot in 3 hours instead of 3-4 days. A landing page with a backend in 3 hours instead of a week. This isn't theory — it's my reality.

"When the team has no awareness of the codebase, all decisions are 'fix it now', and there's no architecture in sight — things will go south"

Agreed. That's exactly why AI-driven doesn't mean without humans. An engineer is essential. An LLM won't replace architectural understanding — but it will become the primary tool. Look at the dynamics: in 2023-24 I couldn't even imagine things like Cursor and Claude Code. What will exist in another two years?

"This won't last. We'll forget how to write code quickly by hand"

What does "won't last" mean? What's the practical point of coding by hand? It's slower — and clients value speed above all. Speed is the main competitive advantage, and they're willing to pay more for it.

At first I was tormented by thoughts that this is wrong, that I'll forget how to read and write code. But then I realized: it's analogous to how wooden abacuses were replaced by calculators, and calculators by Excel. This is progress. Resisting it costs you — in every sense.


5. About Interns

I hire beginners with minimal experience — primarily because their minds aren't cluttered with patterns or rigid frameworks. These are people for whom this isn't their main job — I can't yet afford to pay them full salaries, so we agree on terms of no more than 8 hours per week.

I get on calls and show them specifically on my project examples, on concrete tasks — live — how to manage AI. Then I give them real tasks. Always real, never toy projects. I get results, review, identify errors — and explain how to avoid them. Again, using LLM.

Two months is enough for people to become independent, so they barely need my attention anymore. The ROI is excellent.

One intern initially hid their LLM use — they were embarrassed about it. But once I showed them how to work with documentation and tests, things took off. Now they think like an engineer and architect, not a code-typing machine.

I have a student intern studying CS — her university aggressively insists that code must be written strictly by hand and that any project suspected of being generated will be rejected. So at first she also hid the fact and wrote slowly — until I said clearly: LLM only.

Now my intern builds a landing page with backend integration in 3 hours. I just explained the methodology. Used to be a week of work. She has free time for studies and even side freelance projects — how she works elsewhere is her business, but on our projects everything goes through AI.

When needed — I pay for interns' Cursor subscriptions, or CLI tools — Codex or Claude Code + show them how to set up Gemini CLI for free. I personally run on Claude Code, sometimes add Codex — that's enough.

Interesting observation: monthly LLM spending correlates with income. Even so, I spend no more than the cost of two subscriptions per month.


6. About Fear and Comfort Zone

Here's what really interests me: why do people with 10-15 years of experience, who've lived through changes of technology, frameworks, and paradigms, suddenly dig in their heels right here?

Simple answer: fear.

Not fear of technology. Fear that skills will depreciate — that years spent honing syntax, patterns, algorithms — will turn out not so important. That some guy without a degree from free online courses will do the same thing faster.

It's uncomfortable and painful. I understand. But it's not a reason to deny reality. Those who do everything through LLM will only grow in numbers.

I read comments: "yeah, probably cool, but I won't risk it on my projects." Or "too expensive." Or "let's wait until the technology matures." Or just "watching with curiosity, eating popcorn" (these are all real quotes from Russian dev community).

This isn't risk analysis. It's fear rationalization. Armchair analytics from those who haven't tried but already know it won't work.

I had no choice. No degree, no connections, no safety net of a salary. I just had to do it. Take it and do it — and it worked.

People develop linearly: one year of experience, two, ten — gradual, predictable, on track. Setting a goal to read one more book per month than last year, or to increase income by $500 a year — these are examples of linear development. The thing is, linear development hits physical limits. That's when it's time to change approaches and systems — to increase not quantity, but quality.

The world develops exponentially. Three years ago ChatGPT wrote one-line Python functions. Two years ago — edited large files and found bugs. Now — Claude Code, Cursor, Codex. Entire systems in hours.

Think about this dynamic. Three years. Not thirty, not ten. Three.

Programmers used to be the ones driving this exponential growth — people who created technologies that changed the world. Now a technology has appeared that we ourselves risk falling behind. It's not a threat — it's a paradigm shift.

A person developing linearly in an exponential world — degrades relative to that world. Not because they're stupid. The world just accelerates faster than they can keep up.


7. What Remains for the Engineer

Important point, so there are no illusions.

LLM doesn't replace understanding. You need to know what client-server architecture is. What HTTP requests are. How databases work. What microservices are. Why one solution is better than another. Without this — no Claude will help you. It's an amplifier, not a replacement. An army of juniors that needs an architect.

Expertise hasn't gone anywhere. It's shifted. Now the main skill is context management, documentation, design. Ability to hold the whole system in your head and decompose it into manageable pieces.

Honestly — this is harder than writing code by hand.

Seriously. It requires constant concentration, control, and understanding of what's happening under the hood: not giving in to the temptation to let error resolution run on autopilot, and always specifying clear constraints for the specific project. As my friend says — "a head like a house of advisors." This becomes the main requirement.

AI-driven doesn't mean "without humans." An engineer is essential; the LLM just becomes the primary tool. It's a paradigm shift in tooling — not the singularity.

There won't be magic wands. There won't be AGI that does everything for you (not in the near future anyway). But there will be a tool that's already necessary to master — because it's the primary tool of an already-arrived future.

Last year the question wasn't this urgent. Now — it is.

If you don't get on this train now, you're already late. You can sit in the comments and explain why it won't work — eat popcorn and watch. Or you can try.

I'm not evangelizing. Not selling courses. Not saying my path is the only one.

I just do. Show results. Write about it.

The world doesn't wait. It never did.


If you're interested in my methodology — I wrote in detail about documentation, TDD, and context management. Happy to share in comments.

Ready for tomatoes. As practice shows — it only adds views.

Feel free to ask questions — I'll answer what I can about working from Russia, LLM tools availability, or anything else.
