Roberto B.
Programming in the Age of AI: From Code to Intent

I was building an AI agent in PHP, wiring up tools, feeding it project documentation, defining how it should interact with the codebase, when a thought stopped me mid-keystroke.

I was writing instructions for an AI, in a language designed for humans, using conventions invented so human brains could follow the logic.

And I thought: why are we still doing this?

Programming languages were designed for us humans:

  • Clear instructions: if, while, for.
  • Indentation for our eyes.
  • Mnemonics for our memory.
  • Clean APIs so we can reason about behavior.
  • Design patterns so we don’t forget how systems are structured.

But now AI is writing the code.

AI doesn’t need mnemonics. It doesn’t get confused by nested brackets. It doesn’t get fatigued by verbosity. It can hold an entire codebase in context.

So the question feels inevitable: "Should we create a programming language optimized for AI instead of for humans?"
I think the answer is yes.

But not in the way most people expect.

The Pattern We Keep Repeating

Look at the history of how we talk to machines:

  • 1950s: Humans write machine code. Raw numbers. Only a handful of experts can do it.
  • 1960s: Assembly arrives. Still low-level, but now we see MOV and ADD instead of opcodes.
  • 1980s–90s: C, Java, Python. We stop thinking about registers and memory addresses. We write for user in users and let the compiler handle the rest.
  • Today: We write PHP, Python, TypeScript. AI assists us. But we still write code line by line.

Notice the pattern?

Each generation moves humans further from the machine and closer to pure intent.

Nobody writes assembly today. Not because it stopped working; it's still there under everything you use. We just stopped reading it. It became an intermediate representation.

Code is about to become the next assembly.

The Three Layers of the Future

Here’s what I think is emerging.

Layer 1: Humans Write Intent

I expect we won't write code; we'll write specifications. Constraints. Business rules. Performance requirements. Security guarantees.

Instead of:

if ($user->isAuthenticated()) { ... }

We define something like:

Authenticated users can save preferences.
Maximum 100 preferences per user.
Response time under 200ms.
Backwards compatible with the v2 API.

This looks less like code and more like a contract.

Programming becomes the act of defining what must be true, not how to make it true.
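To make this concrete, here is a minimal sketch of what "spec as contract" could look like in practice. Everything here is hypothetical (there is no real `Constraint` schema or tooling behind it); the point is only that once intent is structured data rather than prose, a generator or verifier can consume it mechanically.

```python
# Hypothetical sketch: the contract above expressed as structured data.
from dataclasses import dataclass

@dataclass(frozen=True)
class Constraint:
    name: str
    rule: str

PREFERENCES_SPEC = [
    Constraint("auth", "Authenticated users can save preferences."),
    Constraint("limit", "Maximum 100 preferences per user."),
    Constraint("latency", "Response time under 200ms."),
    Constraint("compat", "Backwards compatible with the v2 API."),
]

def max_preferences(spec: list[Constraint]) -> int:
    """Extract the numeric limit so a checker can enforce it mechanically."""
    for c in spec:
        if c.name == "limit":
            return int("".join(ch for ch in c.rule if ch.isdigit()))
    raise ValueError("spec has no limit constraint")

print(max_preferences(PREFERENCES_SPEC))  # 100
```

Nothing about this is a programming language in the traditional sense; it is a contract that both humans and machines can read.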

Layer 2: AI Generates Code

PHP, Python, Rust: they don't disappear.
The ecosystems are too valuable. The runtimes, the libraries, the decades of optimization. AI writes in these languages because that’s where the infrastructure lives.

But humans read this code less and less.

Just like you don’t read the assembly output of your C compiler, you won’t read every line of AI-generated PHP.

Code becomes an implementation detail.

Layer 3: Machines Verify Correctness

This is the critical piece: if humans stop reviewing every line, how do we trust the output?
The answer isn’t “trust the AI more.”

It’s verification.

  • Strong type systems
  • Contract checking
  • Property-based testing
  • Static analysis
  • Formal verification tools
  • Automated proofs that the implementation matches the specification

The AI-optimized “language” isn’t a new syntax for writing loops.

It’s a rigorous way to describe intent, and to mathematically or systematically verify that generated code satisfies it.

The future of programming isn’t about generating code faster. It’s about proving code correct automatically.
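One of the cheapest forms of contract checking already works today with nothing but plain Python. The sketch below (illustrative names, not a real API) wraps an implementation with postconditions derived from the spec: it doesn't matter whether a human or an AI wrote the function body, the wrapper verifies it against intent on every call.

```python
# Minimal sketch of contract checking: the spec ("max 100 preferences per
# user") becomes machine-enforced postconditions around any implementation.
import functools

MAX_PREFERENCES = 100

def enforce_preference_limit(func):
    @functools.wraps(func)
    def wrapper(prefs: list[str], new_pref: str) -> list[str]:
        result = func(prefs, new_pref)
        # Postcondition from the spec: never exceed the limit.
        assert len(result) <= MAX_PREFERENCES, "spec violated: too many preferences"
        # Postcondition: existing preferences are never lost.
        assert all(p in result for p in prefs), "spec violated: preferences lost"
        return result
    return wrapper

@enforce_preference_limit
def save_preference(prefs: list[str], new_pref: str) -> list[str]:
    # Imagine this body was AI-generated; the wrapper checks it regardless.
    if len(prefs) >= MAX_PREFERENCES:
        return prefs  # one possible policy: reject silently when full
    return prefs + [new_pref]

print(save_preference(["dark-mode"], "compact-view"))  # ['dark-mode', 'compact-view']
```

Property-based testing and formal verification take the same idea much further, but the direction is identical: the human states what must be true, and the machine checks it.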

Why This Matters Now

We’re in an awkward transition period.

AI writes code, but humans still review it line by line.
That’s like reviewing assembly output in 1995. Technically possible. Not where the value is.

The developers who will thrive aren’t the ones who type the fastest.

They’re the ones who:

  • Define precise specifications
  • Express constraints clearly
  • Build strong verification systems
  • Design architectures that are easy to validate

The skill is shifting:

  • From “How do I implement this algorithm?” → to “How do I define what correct means?”
  • From “Let me debug this code.” → to “Let me define constraints so bugs can’t exist.”
  • From writing instructions for machines → to writing intent for AI.

What This Means for Junior Developers (and People Starting Today)

There’s an uncomfortable question hidden in all this:

If AI writes most of the code in the future, is learning programming still worth it?

Yes. More than ever.

Understanding programming logic is not about memorizing syntax. It’s about understanding:

  • Control flow
  • Data structures
  • State
  • Side effects
  • Performance trade-offs
  • System design
  • Failure modes

If you understand how machines think, you can describe intent more precisely.

You know what the machine expects.
You know what can go wrong.
You know where ambiguity hides.

And that makes you dramatically more effective when collaborating with AI.
The abstraction layer may rise. But understanding what’s underneath remains a superpower.

There’s another practical reason this matters today.

If the future depends on verification and coherence, then the habits we adopt now become critical:

  • Use strict typing.
  • Write automated tests.
  • Use static analysis tools.
  • Enforce consistent architecture.
  • Keep code uniform and predictable.

Why?

Because coherent, well-structured codebases are easier for AI to understand, extend, and reason about.

If tomorrow AI generates 100% of the code, consistency will reduce:

  • Errors
  • Misunderstandings
  • Hidden edge cases
  • Bias introduced by unclear patterns

Clean, strictly typed, well-tested systems are not just “good engineering.”
They are AI-ready systems.
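A tiny sketch of those habits in one place: strict types, an explicit invariant, and an automated property-style check that holds for any input rather than a single example. The function is trivial on purpose; the point is that its contract is mechanically checkable.

```python
# Strictly typed, pure, predictable: easy for a human or an AI to extend.
def add_preference(prefs: tuple[str, ...], new: str, limit: int = 100) -> tuple[str, ...]:
    """Add a preference unless it's a duplicate or the limit is reached."""
    if new in prefs or len(prefs) >= limit:
        return prefs
    return prefs + (new,)

# Property-style checks: invariants that hold for every input size.
for n in range(150):
    prefs = tuple(f"p{i}" for i in range(n))
    out = add_preference(prefs, "extra")
    assert len(out) <= max(n, 100)   # never exceeds the limit...
    assert set(prefs) <= set(out)    # ...and never loses existing data
print("all properties hold")
```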

Learning programming today is not obsolete. It’s training for a higher level of abstraction tomorrow.

The Real Singularity in Software

The singularity isn’t AI writing code.
That’s already happening.
The real shift is the moment when code becomes an intermediate representation that nobody reads, just like assembly before it.
When that happens, “programming” won’t mean writing PHP or Python.

It will mean writing intent.

The language changes. The skill doesn’t.

It has always been about turning human intent into working machines.

We’re just removing one more layer of translation.

If you’re already using AI tools such as Copilot, Claude, Cursor, or others, have you already noticed the shift?

Less mental energy on syntax. More on defining what you actually want.

And the uncomfortable question:

When was the last time you read every line of AI-generated code before shipping it?

Are we ready to let code become the next assembly?

Or are we holding on because we’re not ready to trust what we can’t read?

Top comments (2)

david duymelinck • Edited

I have been thinking about the same thing. Why are we forcing AI to use programming languages?

If AI can write perfect code, preferably machine code, all we need to do is get readable tests that we can review and execute.
The problem is AI does not write perfect code. Even with agents and fine-grained contexts, AI is not perfect. Neither are we, but what we have over AI is responsibility.

Because LLMs are based on probability, they are never going to write perfect code a hundred percent of the time. Do you trust a compiler that doesn't compile the code the same way every time?

I do think developers should switch to the most performant languages. And I say that as someone who has PHP as a comfort language. With AI it makes less sense to use languages that give the application a performance disadvantage. The only scripting language we are going to keep using is JavaScript, because that is the only main language most browsers support.

So for the thing we call AI at the moment, we are still going to choose our languages and understand them well. Intent alone leads to failure.

Osama Alghanmi • Edited

I hate to drop my own work in someone else's thread, but this felt too relevant to stay quiet about. I've been building something called Almadar that lands almost exactly on the problem you're describing.

Your three layers turned out to be real. We accidentally stumbled into all three at once.

Layer 1 (intent) became a JSON schema: entities, state machines, business rules. Not code. Closer to your "authenticated users can save preferences, max 100, under 200ms" than to any programming language.

Layer 2 (AI generates code) flipped for us. We stopped having the LLM write Python or TypeScript directly. Instead the LLM generates the schema once — the intent layer — and a deterministic compiler produces the code. The LLM runs once. The runtime runs forever. Same abstraction shift you're describing, but the intermediate representation is the schema, not the AI prompt.

Layer 3 (verification) is where it gets interesting. Because the schema is structured data — valid JSON — we can formally validate it before a single line of code is generated. The validator catches impossible state transitions, missing event handlers, circuit violations. You get the guarantees without the human reading the generated TypeScript.

The uncomfortable thing we found: once the schema validates, the generated code is basically assembly. We stopped reading it.

"The abstraction layer may rise. But understanding what's underneath remains a superpower."

That line is going on our internal docs.