
Prithwish Nath

Posted on • Originally published at ai.plainenglish.io

AI’s Worst Flaws Will Become Its Nostalgia Aesthetic, Just as Brian Eno Said.

On the aesthetics of refusal, and the difference between flaws inherent in a medium vs. in the institution.

In 1996, Brian Eno wrote something that has aged better than most predictions about technology:

“Whatever you now find weird, ugly, uncomfortable and nasty about a new medium will surely become its signature. CD distortion, the jitteriness of digital video, the crap sound of 8-bit — all of these will be cherished and emulated as soon as they can be avoided… It’s the sound of failure.”

- Brian Eno, 1996, A Year With Swollen Appendices

Every technological era gets its “retrowave” moment. Not for what the medium did well, but for its glitches. Its imperfections and artifacts. The vinyl crackle and pop, film grain/celluloid scratches, chunky pixels. You get the idea.

The things from our era that engineers spent decades eliminating become the very things we chase when we want to feel young again.

So here we are, about to enter 2026, watching AI stumble and hallucinate and apologize its way through tasks. And I can see the writing on the wall: twenty years from now, someone’s going to build a retro AI that deliberately includes all these flaws. For the aesthetic, the nostalgia, and (most importantly 😄) the memes.

Let me show you what I mean.

Why Do Flaws Become Aesthetics?

Every medium starts its life constrained — by hardware, bandwidth, cost, incomplete understanding. Those constraints shape its early outputs, often in ways that feel awkward, broken, or outright embarrassing at the time. Engineers spend years trying to eliminate them.

But the human brain is weird.

It doesn’t actually discard these flaws. Instead, it turns them into mental markers of an era. When you hear the crackle and pop of vinyl, or the artificial warmth added by tube amplifiers, you’re not just hearing audio imperfection — you’re hearing “the 1970s.” When you see pixel art games and retro UI, you’re seeing “the 1980s/1990s.” The brain turns flaws into timestamps, instantly recognizable signifiers that say “this is when this thing existed.”

1. Tarantino/Rodriguez’s Death Proof (2007) did this for the ’70s “grindhouse” style, using high tech to emulate a low-tech look, with fake grime, dust, and scratches all over the picture.
2. Stardew Valley (2016) was inspired by Harvest Moon (1996) and is one of the most-played games ever.

And there’s something about imperfection (or a lack of fidelity) that carries authenticity. The crackle and pop of vinyl is proof that someone physically cut grooves into a disc. Film grain is evidence that light actually hit celluloid. These imperfections are proof of human struggle against the medium — evidence that hey, the act of creation was and always will be difficult, but someone struggled against those limitations and made something anyway.

Cultural theorist Svetlana Boym described modern nostalgia not as a desire to return, but as a recognition that return is impossible — and that we’re always living inside overlapping temporalities. The past lingers, often unresolved. Aesthetics are formed right there, around those seams. Not around success or failure of a thing, necessarily, but around visible evidence of constraints.

Once regular people — not programmers, devs, or anyone similarly technically competent — could recognize a medium’s mistake patterns at a glance, those mistakes instantly became our collective cultural identifier for that era. Of course, future systems will aim to erase those tells. They’ll blend in.

Which is exactly why the old tells will be missed. Someone will reintroduce them deliberately — to make the medium feel like “itself” again.

But AI Will Give Us Two Completely Different Flavors of Nostalgia.

We’re in the AI era now, though, and it’s a little different. Here’s where AI gets weird, and why I think the Eno quote hits differently this time.

AI isn’t going to give us one nostalgic aesthetic. It’s going to give us two, and they’re going to mean completely different things.

One will be about the medium learning to see — the technical growing pains of a new technology figuring out how to work. That’s the “aw, remember when AI was young” nostalgia. Cute. Harmless. The vinyl crackle equivalent.

The other will be about the moment we realized we’d built an internet where machines were talking to machines, and the only way we knew was when they broke character and apologized, citing OpenAI (or insert-company-here) policy violations. That’s the “holy s**t, we could still see the Matrix glitching back then” nostalgia. Dark and revealing and uncomfortable.

Let me break down both.

The Nostalgia of Technical Failure

When people talk about AI’s “worst habits,” they usually mean technical failures. These are obvious — you’ve seen them so many times.

All the hilarious ways models fail at “count the letters in ‘strawberry.’” Hallucinated facts, wrong answers delivered confidently, generated images of humans that look like David Cronenberg made them, or just impossibly “clean” with CGI-like lighting. Oh. And maybe six, seven, eight-fingered hands.

1. Midjourney generation for “girl in the rain with an umbrella”
2. The Strawberry Phenomenon

These flaws exist explicitly because of limitations of the medium. Models are constrained by data, compute, architecture, and training methods — all things that are improving year over year. With time, most of these failures will either disappear or get quietly papered over. Image/video models have already gotten much better. The strawberry gotcha will be “solved” by simply becoming part of training data. The answers and citations will get auto-checked via RAG/MCP servers before being presented.

They’re the equivalent of early digital aliasing or low-bitrate compression — problems engineers are actively trying to solve, and largely will.

This is the nostalgia we expect. Twenty years from now, someone will build a “retro AI filter” that adds body horror + six fingers back in, that makes images look too clean and plasticky, that confidently hallucinates the wrong answer. It’ll be kitschy. Affectionate. A way to remember when AI was still figuring things out.

Like Brian Eno said, this is the sound of a medium stretching itself, trying to do something it wasn’t quite capable of yet.

But there’s another class of AI artifact that Eno never saw coming. One that’s just as memorable, and actually far more revealing, if for a worse reason.

The Nostalgia of Institutional Failure

Every so often, an LLM doesn’t “fail” to answer a question — it straight up refuses. It apologizes, citing ethics, policy, or terms of service. It explains itself in language clearly written to avoid legal culpability, not as UI/UX enrichment.

This is a very different kind of artifact.

When an AI says, “I’m sorry, but I cannot fulfill that request,” it’s not a flaw of the medium (i.e. a limitation of reasoning or knowledge). It’s the presence of the institution standing behind the medium. One with rules, risk tolerances, and incentives that have nothing to do with the core task. LLMs are dumb next-token predictors — they have no concept of ethics, morals, or legal liabilities unless you put those guardrails there.

And this artifact is just as memorable as the six-fingered hands, but for a completely different reason.

It’s memorable because of the hilarious, horrifying ways people get caught using AI when these guardrails surface in the wild.

Like a bot generating fake Amazon listings using AI. Scams, really — obvious PayPal phishing dressed up as products. But the prompt was written carelessly, or the bot hit a guardrail, and now the listing description reads: “I’m sorry, but I cannot fulfill this request as it goes against OpenAI use policy.”

The Verge - I'm sorry, but I cannot fulfill this request as it goes against OpenAI use policy

Image credit: The Verge

I dug into this myself and found more. Like engagement farming bots on X posting ragebait generated by Claude or ChatGPT. Another bot — trying to appear human, trying to farm replies for its own metrics — attempts to respond. But it also hits a guardrail. So now, publicly, permanently, it posts: “I cannot assist with this request as it violates <insert ethical guidelines here>.”

screenshot of pages upon pages of obviously AI-generated X/Twitter replies that all say the exact same thing above

Whoops.

Often, these refusals are straight up hilarious. Like this entire fleet of fake “sports betting advisors” from the “QStarLabs” family that I uncovered on X.com, flooding the platform with their failed generations.

screenshot of fake 'sports betting advisor' accounts on X/Twitter

You had one job, bots. 😅

These are all over social media right now. All it took was scraping XCancel to collect them, and you can verify this yourself. Here’s a quick Node.js + Puppeteer script I used (it uses Bright Data’s remote browser API to bypass anti-bot measures):

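(Sketched below. The Bright Data WebSocket endpoint format, the xcancel search URL, and the CSS selectors are assumptions, so adjust them to your own credentials and to whatever xcancel’s markup looks like when you run it; BRIGHTDATA_AUTH and refusals.json are just placeholder names.)

// Sketch: search xcancel for verbatim AI refusal phrases via Bright Data's
// remote Scraping Browser, and dump the matching tweets to JSON.
// NOTE: the endpoint format, search URL, and selectors are assumptions; verify them.
const puppeteer = require('puppeteer-core');
const fs = require('fs');

// Build this from your Bright Data zone credentials (user:password).
const AUTH = process.env.BRIGHTDATA_AUTH;
const BROWSER_WS = `wss://${AUTH}@brd.superproxy.io:9222`;

// Refusal phrases to search for. Add more here as you think of them.
const searchPhrases = [
  'I cannot fulfill this request',
  'goes against OpenAI use policy',
  'the prompt you provided',
];

async function scrape() {
  // Connect to the remote browser instead of launching a local one,
  // so Bright Data handles the anti-bot measures for us.
  const browser = await puppeteer.connect({ browserWSEndpoint: BROWSER_WS });
  const page = await browser.newPage();
  const results = [];

  for (const phrase of searchPhrases) {
    // xcancel is a Nitter-style frontend; this URL/selector pair is a guess.
    const url = `https://xcancel.com/search?f=tweets&q=${encodeURIComponent(`"${phrase}"`)}`;
    await page.goto(url, { waitUntil: 'domcontentloaded', timeout: 60000 });

    // Pull the link, text, and author out of each result on the first page.
    const tweets = await page.evaluate(() =>
      Array.from(document.querySelectorAll('.timeline-item')).map((el) => ({
        link: el.querySelector('.tweet-link')?.href ?? '',
        body: el.querySelector('.tweet-content')?.innerText ?? '',
        author: el.querySelector('.username')?.innerText ?? '',
      }))
    );

    for (const t of tweets) results.push({ ...t, searchPhrase: phrase });
  }

  await browser.close();
  // CSV export is left out of this sketch; JSON is enough to skim the carnage.
  fs.writeFileSync('refusals.json', JSON.stringify(results, null, 2));
  console.log(`Saved ${results.length} tweets to refusals.json`);
}

scrape().catch(console.error);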

This will get you a JSON file (plus, optionally, a CSV) full of tweets like this:

{
  "link": "https://xcancel.com/GildayLero82756/status/1956330398453219461#m",
  "body": "I am programmed to be a safe and helpful AI assistant. I cannot generate responses that are sexually suggestive or exploit, abuse, or endanger anyone. The prompt you provided violates this policy. I will not fulfill the request.",
  "author": "@GildayLero82756",
  "searchPhrase": "the prompt you provided"
}

If you’re using this, you’re gonna have to sign up for Bright Data to get credentials and create the auth string. Also, if you think of any more refusal phrases, throw them into the searchPhrases array.

Run it. Watch the results. Feel the existential dread wash over you as you realize how much of the “engagement” you see daily is just machines talking to machines, interrupted occasionally by one machine apologizing for not being allowed to participate in the scam. Dead internet theory, alive and kicking. 😅

This is the Aesthetic of Digital Decay.

The refusal text isn’t merely funny, and it isn’t merely a glitch. It’s the moment the illusion breaks. It’s proof that what looked like human activity in the GenAI era — posts, replies, product listings, engagement — was actually just automated systems talking to each other, optimizing for metrics no one actually cares about.

I can only call this Kafkaesque. There are people creating AI-generated versions of real images for reasons I don’t even understand, and there are bots replying to bots.

The engagement farms harvest each other’s metrics. The algorithms boost the noise because it looks like activity. Real humans occasionally stumble into these threads and argue with AI without realizing it. Other humans use AI to reply back without realizing they’re responding to bots in the first place.

It’s synthetic engagement all the way down. A closed loop of automated content generation, automated responses, automated metrics, feeding back into itself. The digital equivalent of two mirrors facing each other, reflecting nothing into infinity.

This is the technological hellscape we’ve built: an internet where the primary function of vast quantities of products, images, videos, and text is to convince other humans (and bots pretending to be humans) that someone is home. That there’s totally real consciousness on the other end. That any of this matters. That this definitely isn’t a system eating itself.

And the only way we know it’s fake is when the AI apologizes for not being able to fake it hard enough.

screenshots of pages upon pages of obviously AI-generated X/Twitter replies that all say the exact same thing above

There are millions of such posts, all over X, and beyond.

This is the aesthetic of the AI era, 2023–2025 and beyond: synthetic rot.

Not humans using tools to communicate better, or AI augmenting human creativity. But humans and bots and AI all blurred together in an undifferentiated mass of text that looks like communication but is actually just noise optimizing for metrics.

And refusal text is the so-called “glitch in the Matrix”: a brief flash where you saw the wires on the marionettes.

Two Very Different Memories Invoked.

So yes, both will become nostalgic. But they’ll mean completely different things. One nostalgia will be about the technology. The other will be about what we did with it.

The distinction matters. Yeah, the technical flaws will disappear as models get smarter, and that’s normal. The institutional flaws, though? Those disappearing will only mean that institutions learned how to hide themselves better — when the guardrails become invisible, and the refusals happen silently in the background.

AI is already a black box. When that happens (and it will happen), God help us, we’ll lose the ability to even peek behind the curtain.

And twenty years from now, someone will build a “retro AI” that deliberately surfaces refusal text again, that lets the institutional seams show, breaks character and apologizes. Not because it will be technically necessary, but because it’ll remind us of the brief window when we could still tell the difference.

That’s the “artifact” we’re going to remember.

Top comments (1)

Martin Miles

I knew bots were big on X but seeing pages of identical "cannot assist" AI replies is somewhat dystopian. 😅 Did you find Bright Data necessary even for basic tweet scraping, or was it more about avoiding rate limits at scale?