Eddy BEHIH

Your LLMs don't do real OOP, and it's structural.

Generative AIs write code every day: classes, services, models, controllers. At first glance, everything looks correct. It compiles, it passes tests and it "does the job."

And yet, there's a recurring problem:
code generated by LLMs is often poorly encapsulated.

Not "a little."
Structurally poorly encapsulated.

Classes filled with getters and setters, little to no behavior, business logic scattered everywhere. In short: data-oriented code, not object-oriented.

Why?
And more importantly: how to do better when using an AI?


What OOP originally meant (and what we forgot)

When we talk about object-oriented programming today, we often think of:

  • classes
  • private properties
  • getters / setters
  • interfaces

But this is not the original vision.

For Alan Kay, considered one of the fathers of OOP, the central idea wasn't the class, but the message.

His definition is famous:

"OOP to me means only messaging, local retention and protection and hiding of state-process, and extreme late-binding of all things."

In other words:

  • objects communicate
  • they keep their state to themselves
  • they hide their internal logic
  • they are loosely coupled

The analogy he used was biological:
autonomous cells that interact without exposing their internal organs.


What LLMs generate instead

Let's take a typical example generated by an AI:

public class User {
    private String email;

    public String getEmail() {
        return email;
    }

    public void setEmail(String email) {
        this.email = email;
    }
}

It's clean.
It's "best practice" according to many tutorials.
But it's not encapsulation.

Why?

Because:

  • internal state is exposed
  • internal type is fixed
  • validation is absent
  • business logic is pushed outside

Result:
behavior ends up in services, controllers, or worse… duplicated everywhere.

We call this an anemic class:
a simple bag of data with accessors.
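To make "scattered everywhere" concrete, here's a hedged sketch (the service names are invented for illustration) of what call sites end up doing with the anemic User above:

```java
// The anemic User from above, minimally reproduced.
class User {
    private String email;
    String getEmail() { return email; }
    void setEmail(String email) { this.email = email; }
}

// Hypothetical consumers: each one re-implements the same rule,
// because User itself offers no behavior to ask.
class NotificationService {
    boolean notifyUser(User user) {
        if (user.getEmail() == null) {   // rule, copy #1
            return false;
        }
        // ... send the email
        return true;
    }
}

class ReportService {
    boolean includeInMailingReport(User user) {
        return user.getEmail() != null;  // rule, copy #2, already free to drift
    }
}
```

Two copies of one rule today; after a few sprints, a dozen, each slightly different.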


The false sense of security of getters / setters

Getters and setters give the illusion of encapsulation, but in reality:

  • they expose internal structure
  • they create strong coupling
  • they freeze implementation decisions

Change a field's name, type, or logic, and the breakage quickly spreads to every caller.

In OOP, exposing state is almost always an abstraction leak.
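A hedged illustration of that freezing effect (Account and FeeCalculator are made up for the example): once a getter publishes the internal representation, every call site depends on it, and changing the field's type later means rewriting them all.

```java
class Account {
    private double balance;                    // we might prefer BigDecimal for money...
    public double getBalance() { return balance; }
    public void deposit(double amount) { balance += amount; }
}

class FeeCalculator {
    // This call site is welded to the double representation.
    // Switching Account to BigDecimal would break code like this everywhere.
    double feeFor(Account account) {
        return account.getBalance() * 0.01;
    }
}
```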


A better question to ask an object

Instead of asking:

if (user.getEmail() == null) {
    // logic here
}

Ask:

if (user.canBeContacted()) {
    // logic here
}
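A sketch of the User behind that call, assuming (purely for illustration) that "can be contacted" means "has a non-blank email"; the real rule belongs to your domain:

```java
class User {
    private String email;   // internal representation, free to change

    User(String email) { this.email = email; }

    // The business rule lives in exactly one place. Tomorrow it could
    // also check opt-out flags or bounce history without touching callers.
    boolean canBeContacted() {
        return email != null && !email.isBlank();
    }
}
```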

This is already progress:

  • behavior is localized
  • business rule is in the object
  • implementation can evolve

But we can go even further.


The message and event approach

In Alan Kay's vision, an object doesn't say what it is; it responds to what it's asked.

Instead of reading state:

  • you send an intention
  • the object decides
  • state remains internal

An event-driven or message-oriented model allows exactly this:

  • internal state transitions
  • strong decoupling
  • logic concentrated in one place

It's not "more complex."
It's more explicit.
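A minimal sketch of that style (Subscription and its rules are invented for the example): callers send intentions and ask questions; the status field is never read or set from outside.

```java
class Subscription {
    private enum Status { ACTIVE, CANCELLED }
    private Status status = Status.ACTIVE;       // state stays internal

    // A message: the caller states an intention; the object decides
    // how (and whether) its state transitions.
    void cancel() {
        if (status == Status.CANCELLED) {
            return;                              // idempotent by design
        }
        status = Status.CANCELLED;
    }

    // A question, not a state dump.
    boolean isBillable() {
        return status == Status.ACTIVE;
    }
}
```

Notice there is no `setStatus`: the only way to change the object is to send it a message it understands.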


Why LLMs struggle so much with real encapsulation

It's not because AIs are "bad."

It's structural.

  1. They learn from existing code
    And GitHub is filled with CRUDs, DTOs, anemic classes.

  2. Getters / setters are statistically dominant
    So they're "probable," therefore generated.

  3. Business behavior is contextual
Yet LLMs excel at local patterns and struggle with global consistency.

  4. Message-oriented code is less verbose but more conceptual
    And therefore harder to infer without explicit intention.

The AI doesn't understand your domain.
It extrapolates patterns.


How to better use an AI to write OOP code

The solution isn't to stop using AI.
The solution is to guide it better.

When you generate a class, ask yourself (and ask it) these questions:

  • Does this class do something, or does it just transport data?
  • Do I ask the object, or do I read its state?
  • Is behavior localized or scattered?
  • Can I change the implementation without breaking callers?

If the answers lean toward transporting data and reading state, it's probably not real OOP.


The real problem isn't the AI

The problem is that:

  • we've normalized anemic OOP
  • we've confused encapsulation with visibility
  • we've replaced behavior with data structures

LLMs merely reproduce what we've produced for years.


Conclusion

Encapsulation is not:

  • private fields
  • public getters
  • passive models

Encapsulation is:

  • objects responsible for their state
  • localized business rules
  • messages rather than direct access
  • minimal coupling

AI can help.
But it will never replace good modeling.


