"Hostile experts created the dataset for patient machines."
That line, from a comment by Vinicius Fagundes on my last article, won't leave my head.
Stack Overflow's traffic collapsed 78% in two years. Everyone's celebrating that AI finally killed the gatekeepers. But here's what we're not asking:
If we all stop contributing to public knowledge bases, what does the next generation of AI even train on?
We might be optimizing ourselves into a knowledge dead-end.
The Data We're Ignoring
Stack Overflow went from 200,000 questions per month at its peak to under 50,000 by late 2025. That's not a dip. That's a collapse.
Meanwhile, 84% of developers now use AI tools in their workflow, up from 76% just a year ago. Among professional developers, 51% use AI daily.
The shift is real. The speed is undeniable. But here's the uncomfortable part: 52% of ChatGPT's answers to Stack Overflow questions are incorrect.
The irony is brutal:
- AI trained on Stack Overflow
- Developers replaced Stack Overflow with AI
- Stack Overflow dies from lack of new content
- Future AI has... what, exactly?
The Wikipedia Problem
Here's something nobody's complaining about loudly enough: Wikipedia sometimes doesn't even appear on the first page of Google results anymore.
Let that sink in. The largest collaborative knowledge project in human history - free, community-curated, constantly updated, with 60+ million articles - is getting buried by AI-generated summaries and SEO-optimized content farms.
Google would rather show you an AI-generated answer panel (trained on Wikipedia) than send you to Wikipedia itself. The thing that created the knowledge gets pushed down. The thing that consumed the knowledge gets prioritized.
This is the loop closing in real-time:
- Humans build Wikipedia collaboratively
- AI trains on Wikipedia
- Google prioritizes AI summaries over Wikipedia
- People stop going to Wikipedia
- Wikipedia gets fewer contributions
- AI trains on... what, exactly?
We're not just moving from public to private knowledge. We're actively burying the public knowledge that still exists.
Stack Overflow isn't dying because it's bad. Wikipedia isn't disappearing because it's irrelevant. They're dying because AI companies extracted their value, repackaged it, and now we can't even find the originals.
The commons didn't just lose contributors. It lost visibility.
What We Actually Lost
PEACEBINFLOW captured something crucial:
"We didn't just swap Stack Overflow for chat, we swapped navigation for conversation."
Stack Overflow threads had timestamps, edits, disagreement, evolution. You could see how understanding changed as frameworks matured. Someone's answer from 2014 would get updated comments in 2020 when the approach became deprecated.
AI chats? Stateless. Every conversation starts from zero. No institutional memory. No visible evolution.
I can ask Claude the same question you asked yesterday, and neither of us will ever know we're solving the same problem. That's not efficiency. That's redundancy at scale.
As Amir put it:
"Those tabs were context, debate, and scars from other devs who had already been burned."
We traded communal struggle for what Ali-Funk perfectly named: "efficient isolation."
The Skills We're Not Teaching
Amir nailed something that's been bothering me:
"AI answers confidently by default, and without friction it's easy to skip the doubt step. Maybe the new skill we need to teach isn't how to find answers, but how to interrogate them."
The old way:
Bad docs forced skepticism accidentally. You got burned, so you learned to doubt. Friction built judgment naturally.
The new way:
AI is patient and confident. No friction. No forced skepticism. How do you teach doubt when there's nothing pushing back?
We used to learn to verify because Stack Overflow answers were often wrong or outdated. Now AI gives us wrong answers confidently, and we... trust them? Because the experience is smooth?
The Economics of Abundance
Doogal Simpson reframed the problem economically:
"We are trading the friction of search for the discipline of editing. The challenge now isn't generating the code, but having the guts to reject the 'Kitchen Sink' solutions the AI offers."
Old economy: Scarcity forced simplicity
Finding answers was expensive, so we valued minimal solutions.
New economy: Abundance requires discipline
AI generates overengineered solutions by default. The skill is knowing what to DELETE, not what to ADD.
This connects to Mohammad Aman's warning about stratification: those who develop the discipline to reject complexity become irreplaceable. Those who accept whatever AI generates become replaceable.
The commons didn't just lose knowledge. It lost the forcing function that taught us to keep things simple.
The Solver vs Judge Problem
Ben Santora has been testing AI models with logic puzzles designed to reveal reasoning weaknesses. His finding: most LLMs are "solvers" optimized for helpfulness over correctness.
When you give a solver an impossible puzzle, it tries to "fix" it to give you an answer. When you give a judge the same puzzle, it calls out the impossibility.
As Ben explained in our exchange:
"Knowledge collapse happens when solver output is recycled without a strong, independent judging layer to validate it. The risk is not in AI writing content; it comes from AI becoming its own authority."
This matters for knowledge collapse: if solver models (helpful but sometimes wrong) are the ones generating content that gets recycled into training data, we're not just getting model collapse - we're getting a specific type of collapse.
Confident wrongness compounds. And it compounds confidently.
The Verification Problem
Ben pointed out something crucial: some domains have built-in verification, others don't.
Cheap verification domains:
- Code that compiles (Rust's strict compiler catches errors)
- Bash scripts (either they run or they don't)
- Math (verifiable proof)
- APIs (test the endpoint, get immediate feedback)
Expensive verification domains:
- System architecture ("is this the right approach?")
- Best practices ("should we use microservices?")
- Performance optimization ("will this scale?")
- Security patterns ("is this safe?")
Here's the problem: AI solvers sound equally confident in both domains.
But in expensive verification domains, you won't know you're wrong until months later when the system falls over in production. By then, the confident wrong answer is already in blog posts, copied to Stack Overflow, referenced in documentation.
And the next AI trains on that.
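To make the contrast concrete, here's a minimal sketch of what cheap verification looks like - the URL and helper name below are hypothetical, but the point is that the feedback loop is measured in seconds:

```python
# Cheap verification: an AI-suggested API call either works or it doesn't,
# and you find out immediately. (The URL below is a hypothetical example.)
import json
import urllib.request

def endpoint_returns_json(url: str, timeout: float = 5.0) -> bool:
    """Return True if the endpoint responds with HTTP 200 and parseable JSON."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            json.loads(resp.read())
            return resp.status == 200
    except Exception:
        return False

if __name__ == "__main__":
    # Immediate, binary feedback - the kind of signal "should we use
    # microservices?" never gives you up front.
    print(endpoint_returns_json("https://api.example.com/health"))
```

There's no equivalent one-liner for the expensive column; that's the asymmetry the confident tone hides.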
The Confident Wrongness Problem
Maame Afua and Richard Pascoe highlighted something worse than simple hallucination:
When AI gets caught being wrong, it doesn't admit error - it generates plausible explanations for why it was "actually right."
Example:
You: "Click the Settings menu"
AI: "Go to File > Settings"
You: "There's no Settings under File"
AI: "Oh yes, that menu was removed in version 3.2"
[You check - Settings was never under File]
This is worse than hallucination because it makes you doubt your own observations. "Wait, did I miss an update? Am I using the wrong version?"
Maame developed a verification workflow: use AI for speed, but check documentation to verify. She's doing MORE cognitive work than either method alone.
This is the verification tax. And it only works if the documentation still exists.
The Tragedy of the Commons
This is where it gets uncomfortable.
Individually, we're all more productive. I build faster with Claude than I ever did with Stack Overflow tabs. You probably do too.
But collectively? We're killing the knowledge commons.
The old feedback loop:
Problem → Public discussion → Solution → Archived for others
The new feedback loop:
Problem → Private AI chat → Solution → Lost forever
Ingo Steinke pointed out something I hadn't considered: even if AI companies train on our private chats, raw conversations are noise without curation.
Stack Overflow had voting. Accepted answers. Comment threads that refined understanding over time. That curation layer was the actual magic, not just the public visibility.
Making all AI chats public wouldn't help. We'd just have a giant pile of messy conversations with no way to know what's good.
"Future generations might not benefit from such rich source material... we shouldn't forget that AI models are trained on years of documentation, questions, and exploratory content."
We're consuming the commons (Stack Overflow, Wikipedia, documentation) through AI but not contributing back. Eventually the well runs dry.
We're Feeling Guilty About the Wrong Thing
A commenter said: "I've been living with this guilty conscience for some time, relying on AI instead of doing it the old way."
I get it. I feel it too sometimes. Like we're cheating, somehow.
But I think we're feeling guilty about the wrong thing.
The problem isn't using AI. The tools are incredible. They make us faster, more productive, able to tackle problems we couldn't before.
The problem is using AI privately while the public knowledge base dies.
We've replaced "struggle publicly on Stack Overflow" with "solve privately with Claude." Individually optimal. Collectively destructive.
The guilt we feel? That's our instinct telling us something's off. Not because we're using new tools, but because we've stopped contributing to the commons.
One Possible Path Forward
Ali-Funk wrote about using AI as a "virtual mentor" while transitioning from IT Ops to Cloud Security Architect. But here's what he's doing differently:
He uses AI heavily:
- Simulates senior architect feedback
- Challenges his technical designs
- Helps him think strategically
But he also:
- Publishes his insights publicly on dev.to
- Verifies AI output against official AWS docs
- Messages real people in his network for validation
- Has a rule: "Never implement what you can't explain to a non-techie"
As he put it in the comments:
"AI isn't artificial intelligence. It's a text generator connected to a library. You can't blindly trust AI... It's about using AI as a compass, not as an autopilot."
This might be the model: Use AI to accelerate learning, but publish the reasoning paths. Your private conversation becomes public knowledge. The messy AI dialogue becomes clean documentation that others can learn from.
It's not "stop using AI" - it's "use AI then contribute back."
The question isn't whether to use these tools. It's whether we can use them in ways that rebuild the commons instead of just consuming it.
Model Collapse
Peter Truchly raised the real nightmare scenario:
"I just hope that conversation data is used for training, otherwise the only entity left to build that knowledge base is AI itself."
Think about what happens:
- AI trains on human knowledge (Stack Overflow, docs, forums)
- Humans stop creating public knowledge (we use AI instead)
- New problems emerge (new frameworks, new patterns)
- AI trains on... AI-generated solutions to those problems
- Garbage in, garbage out, but at scale
This is model collapse. And we're speedrunning toward it while celebrating productivity gains.
GitHub is scraped constantly. Every public repo becomes training data. If people are using solver models to write code, pushing to GitHub, and that code trains the next generation of models... we're creating a feedback loop where confidence compounds regardless of correctness.
The domains with cheap verification stay healthy (the compiler catches it). The domains with expensive verification degrade silently.
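To see the recycling dynamic in miniature, here's a toy sketch - explicitly not how LLM training works, just the statistics of refitting a model to its own output. With small samples, the rare cases drop out first and the fitted spread tends to shrink over generations:

```python
# Illustrative only: each "generation" is fitted on samples produced by the
# previous generation's fit. Any single run is noisy, but the long-run
# tendency is for the spread to drift toward zero - the tails vanish first.
import random
import statistics

def next_generation(mean: float, stdev: float, n: int = 20) -> tuple[float, float]:
    """Sample n points from the current 'model', then re-fit on those samples."""
    samples = [random.gauss(mean, stdev) for _ in range(n)]
    return statistics.fmean(samples), statistics.stdev(samples)

if __name__ == "__main__":
    mean, stdev = 0.0, 1.0  # generation 0: fitted on "human" data
    for gen in range(1, 51):
        mean, stdev = next_generation(mean, stdev)
        if gen % 10 == 0:
            print(f"generation {gen}: spread ~ {stdev:.3f}")
```

Swap "Gaussian" for "the space of solutions people write about" and the analogy holds: recycle your own output enough times and the unusual answers stop existing.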
The Corporate Consolidation Problem
webketje raised something I hadn't fully addressed:
"By using AI, you opt out of sharing your knowledge with the broader community
in a publicly accessible space and consolidate power in the hands of corporate
monopolists. They WILL enshittify their services."
This is uncomfortable but true.
We're not just moving from public to private knowledge. We're moving from
commons to capital.
Stack Overflow was community-owned. Wikipedia is foundation-run. Documentation
is open source. These were the knowledge commons - imperfect, often hostile,
but fundamentally not owned by anyone.
Now we're consolidating around:
- OpenAI (ChatGPT) - $157B valuation
- Anthropic (Claude) - $60B valuation
- Google (Gemini) - Alphabet's future
They own the models. They own the training data. They set the prices.
And as every platform teaches us: they WILL enshittify once we're dependent.
Remember when:
- Twitter was free and open? Now it's X.
- Google search was clean? Now it's ads and AI.
- Reddit was community-first? Now it's IPO-driven.
The pattern is clear: Build user dependency → Extract maximum value → Users have nowhere else to go.
What happens when Claude costs $100/month? When ChatGPT paywalls advanced features? When Gemini requires Google Workspace Enterprise?
We'll pay. Because by then, we won't remember how to read documentation.
At least Stack Overflow never threatened to raise prices or cut off API access.
Sidebar: The Constraint Problem
Ben Santora argues that AI-assisted coding requires strong constraints - compilers that force errors to surface early, rather than permissive environments that let bad code slip through.
The same principle applies to knowledge: Stack Overflow's voting system was a constraint. Peer review was a constraint. Community curation was a constraint.
AI chats have no constraints. Every answer sounds equally confident, whether it's right or catastrophically wrong. And when there's no forcing function to catch the error...
The Uncomfortable Counter-Argument
Mike Talbot pushed back hard on my nostalgia:
"I fear Stack Overflow, dev.to etc are like manuals on how to look after your horse, when the world is soon going to be driving Fords."
Ouch. But maybe he's right?
Maybe we're not losing something valuable. Maybe we're watching a skill set become obsolete. Just like:
- Assembly programmers → High-level languages
- Manual memory management → Garbage collection
- Physical servers → Cloud infrastructure
- Horse care manuals → Auto repair guides
Each generation thought they were losing something essential. Each generation was partially right.
But here's where the analogy breaks down: horses didn't build the knowledge base that cars trained on. Developers did.
If AI replaces developers, and future AI trains on AI output... who builds the knowledge base for the NEXT paradigm shift?
The horses couldn't invent cars. But developers invented AI. If we stop thinking publicly about hard problems (system design, organizational architecture, scaling patterns), does AI even have the data to make the next leap?
Or do we hit a ceiling where AI can maintain existing patterns but can't invent new ones?
I don't know. But "we're the horses" is the most unsettling framing I've heard yet.
What We Actually Need
I don't have clean answers. But here are questions worth asking:
Can we build Stack Overflow for the AI age?
Troels asked: "Perhaps our next 'Stack Overflow for the AI age' is yet to come. Perhaps it will be even better for us."
I really hope so. But what would that even look like?
From Stack Overflow (the good parts):
- Public by default
- Community curation (voting, accepted answers)
- Searchable and discoverable
- Evolves as frameworks change
From AI conversations (the good parts):
- Patient explanation
- Adapts to your context
- Iterative dialogue
- No judgment for asking "dumb" questions
What it can't be:
- Just AI chat logs (too noisy)
- Just curated AI answers (loses the reasoning)
- Just documentation (loses the conversation)
Maybe it's something like: AI helps you solve the problem, then you publish the reasoning path - not just the solution - in a searchable, community-curated space.
Your messy conversation becomes clean documentation. Your private learning becomes public knowledge.
Should we treat AI conversations as artifacts?
When you solve something novel with AI, should you publish that conversation? Create new public spaces for AI-era knowledge? Find a curation mechanism that actually works?
Pascal suggested: "Using the solid answers we get from AI to build clean, useful wikis that are helpful both to us and to future AI systems."
This might be the direction. Not abandoning AI, but creating feedback loops from private AI conversations back to public knowledge bases.
How do we teach interrogation as a core skill?
Make "doubting AI" explicit in how we teach development. Build skepticism into the workflow. Stop treating AI confidence as correctness.
As Ben put it: "The human must always be in the loop - always and forever."
The Uncomfortable Truth
We're not just changing how we code. We're changing how knowledge compounds.
Stack Overflow was annoying. The gatekeeping was real. The "marked as duplicate" culture was hostile. As Vinicius perfectly captured:
"I started learning Linux in 2012. Sometimes I'd find an answer on Stack Overflow. Sometimes I'd get attacked for how I asked the question. Now I ask Claude and get a clear, patient explanation. The communities that gatekept knowledge ended up training the tools that now give it away freely."
Hostile experts created the dataset for patient machines.
But Stack Overflow was PUBLIC. Searchable. Evolvable. Future developers could learn from our struggles.
Now we're all having the same conversations in private. Solving the same problems independently. Building individual speed at the cost of collective memory.
"We're mid-paradigm shift and don't have the language for it yet."
That's exactly where we are. Somewhere between the old way dying and the new way emerging. We don't know if this is progress or just... change.
But the current trajectory doesn't work long-term.
If knowledge stays private, understanding stops compounding. And if understanding stops compounding, we're not building on each other anymore.
We're just... parallel processing.
Huge thanks to everyone who commented on my last article. This piece is basically a synthesis of your insights. Special shoutout to Vinicius, Ben, Ingo, Amir, PEACEBINFLOW, Pascal, Mike, Troels, Sophia, Ali, Maame, webketje, Doogal and Peter for sharpening this thinking.
What's your take? Are we headed for knowledge collapse, or am I overthinking this? Drop a comment - let's keep building understanding publicly.
Top comments (157)
Fair points - in my opinion what we REALLY need to (MUST !) keep are (1) Wikipedia and (2) Stackoverflow ...
"Everyone's celebrating that AI finally killed the gatekeepers" - that's a funny statement, what exactly is "everyone" (?) celebrating - the "demise" of some 'cocky' know-it-all people on Stackoverflow?
I've heard people complaining about that, but it's not something that has ever bothered me ...
fair pushback on "everyone". youre right thats overstated.
what i meant: theres a vocal contingent celebrating SO decline as karma for the "marked as duplicate" culture. but youre right that not everyone had negative experiences.
the gatekeeping thing wasnt my main point though. whether SO was hostile or helpful, the real issue is: if it dies (78% traffic drop is real data), what replaces it?
private AI chats dont have the same properties: searchable, evolvable, publicly curated. thats the loss im worried about, not the personality of the answerers.
curious: you say we MUST keep wikipedia + SO. how do we do that when AI makes contributing feel redundant? genuine question
StackOverflow would eventually experience another kind of knowledge collapse due to outdated information occupying the top answer spots and answers by long standing seniors getting upvotes just because of their reputation. The gatekeeping was an effective spam filter, and it made me draft numerous questions that I never posted because I found the answer myself while refining a minimal reproducible example. But StackOverflow's (and other communities') gatekeeping also made a lot of valuable data get discarded just because people had other priorities than making an effort to solve their issues in public.
I've never had any negative experiences on SO, maybe it also depends on people's attitude? People who say:
"a vocal contingent celebrating SO decline as karma"
are peevish, resentful and bear a narrow-minded grudge :-)
Your point about the value and necessity of original content (SO and Wikipedia, and much more) is spot on ... I hope (and honestly I expect) that SO and Wikipedia (and similar community-driven sources) will survive!
ha fair. the people celebrating SO decline are probably louder than they are numerous.
youre right that attitude matters. respectful questions got better SO treatment. but the reputation (deserved or not) scared people away.
your optimism is interesting though. what makes you think SO/wikipedia survive when 78% traffic drop is real?
maybe people who value these platforms keep contributing even as casuals move to AI? quality over quantity?
id love to be wrong about collapse trajectory.
I guess it's the fact that there's still 22 percent left? Yeah and maybe "quality over quantity" - the "hard core" people won't walk away ...
interesting take. maybe the 22% who stayed are the actual contributors and the 78% who left were just consumers?
if thats true it could work. wikipedia survives on tiny fraction of editors while millions read.
but heres the problem: even hardcore contributors need NEW questions to answer. if juniors are asking AI instead of posting on SO, where do the questions come from?
and without fresh questions, do experienced devs stick around? or does it become an archive instead of a living knowledge base?
Curious. can a platform survive on just the hardcore 22% if the pipeline of new questions dries up?
Well your concerns seem valid ... I don't know if the smaller "volume" will be enough for SO to survive, but I certainly hope so!
Next breakthrough for AI would be if it can "invent" something by itself, pose new questions on SO, autonomously write blog posts or create other content, instead of only cleverly regurgitating and recombining what's been fed to it ...
I guess that would be what they call "AGI" (artificial general intelligence), and actually that's when it might get really scary for us humans, so let's be careful what we wish for ;-)
the AGI question is the real fork.
scenario 1: AI stays sophisticated recombinator. knowledge collapse poisons training data. we're screwed.
scenario 2: AI achieves invention. knowledge collapse irrelevant but... humans might be too?
uncle bob said "AI cant hold big picture or understand architecture." maybe invention REQUIRES that.
but if AI gets there... yeah, scary.
betting on "AGI will save us" feels risky when we're already seeing collapse.
Correct analysis - but what's the solution? Are the "AI big boys" (big tech) actually (explicitly) aiming for AGI - which would have "(super)human" capabilities? I think that would really be a bridge too far - governments might need to step in (not counting on Trump obviously, lol) ...
big tech explicitly aims for AGI. openai's mission, anthropic's charter, deepmind's goal
solution by timeline:
short: preserve commons deliberately. platforms rewarding public reasoning not just answers.
mid: regulatory guardrails on training data. EU might require disclosure if training on AI content. US wont.
long: if AGI emerges, irrelevant. if not, we need an intact commons.
maintain commons as insurance while hoping AGI makes it unnecessary.
imperfect but better than assuming AGI solves everything.
I’ve noticed that the friction of a broken script or a confusing doc is actually what forces me to understand the 'why.' When an AI gives a confident, polished answer, it’s tempting to skip that doubt step entirely. Developing that judging layer you mentioned feels like the most important thing I can focus on right now. Great follow-up piece!
this is it exactly.
friction teaches the "why" accidentally. smooth AI answers skip straight to "what" and we miss the foundation.
the fact that you're consciously building that judging layer puts you ahead of most devs who just optimize for speed without realizing what they're losing.
curious: when you catch AI being confidently wrong now, does it make you more skeptical of future answers? or do you still have to fight the temptation to trust it?
To be honest I will never trust any AI tool a 100%, I personally think if you know & understand what you are doing, it's a great online assistant (that's when you are able to tell when it makes mistakes and not follow it blindly..) but aside that, depending on it a 100% is scary and would definitely cause more harm than good in the long run for anyone's personal growth
this is the core tension.
how do you GET to "know & understand what youre doing" if AI is your primary learning tool?
experienced devs like john h (comments) use AI well because they already have context. they can verify. juniors starting today dont have that foundation.
stack overflow forced skepticism through pain. AI doesnt. so can we teach "healthy doubt of AI" explicitly? or does it require the hard-won experience you already have?
might be the real divide. learned before AI vs learned with AI.
That's why I personally don't use AI as a primary learning tool (I accompany it with accredited resources after I have some vast knowledge), because it could always give you the wrong information, I usually just read books on topics I am learning. So after I have an idea of what I am doing, then I can use ai as an assistant / more or less a 'super search engine'. Personally, I learned the hard way of learning things the old school way (reading actual books and accredited online resources that have been written by developers & people with years of experience). That is helping me more in my learning journey than solely depending on ai to do the work for me. Because the moment ai goes downhill , those who depended FULLY on it will have zero value... these are my personal views on the topic in general :)
this is the model that works.
foundation first (books, docs) → then AI as assistant. not the other way around.
the problem: juniors today see everyone using AI and skip straight to it. they never build the foundation that lets you verify.
youre doing it right because you learned the hard way. question: can we teach juniors your approach? or does it require getting burned first?
if verification skills require pain to learn, we're in trouble.
To be honest, I am still learning myself (junior level), but I got loads of advice from some really good developers who have been through the old school system (without AI). So I have been following their advice in doing so, and it has helped my personal growth because I am able to understand the technical aspects of most things now, as compared to using ai. I think everyone just needs to do what would help their personal growth, since we all learn in different ways :)
wait. youre a JUNIOR but learned from devs who came up without AI.
so its not experienced vs junior. its mentored vs unmentored.
youre inheriting their verification habits. thats the transmission mechanism.
scary question. in 5 years when most seniors also learned with AI, who teaches juniors to be skeptical?
right now theres enough pre-AI devs to mentor. that window is closing.
youre lucky you found good mentors.
Mentorship is so important to me in my learning journey and I appreciate my mentors a lot
The "dead internet" is only accelerated (exponentially, however) by LLM-based generative AI, but there were real people producing sloppy spam content before AI took their jobs. Algorithms lured people into hate speech spirals and recommendation rabbit holes to maximise clicks and engagement before AI already.
Maybe that's not a risk at all, while still a waste of resources, if we focus and filter. There are millions of bad books that I don't need to read, millions of bad coffee shops that I'll never visit. Millions of questions that I could ask AI but I'll never will.
We won't lose the internet as something alive, we'll have to reinvent and rediscover the good aspects we loved about Web 1 (originality, imperfection, USENET, what else?) and Web 2.0 (instant interaction, user generated content and social media platforms before everything went too commercial) and maybe even Web3 (the ideas of decentralization, independence and forgery-proof, not necessarily built with crypto and blockchain though) and the discussions like this one about AI, DEV, StackOverflow, Wikipedia and how to continue collaborating as developers committed to finding facts and best practices.
you're right that this might be a first world problem when the world has bigger issues. but i'd argue: developer knowledge infrastructure affects ALL software, including systems that DO address real world problems.
bad AI-generated code in healthcare systems? financial infrastructure? critical infrastructure? knowledge collapse has real-world consequences.
your point about SO already having flaws is fair: outdated answers, reputation bias. but those are curation problems we COULD fix. model collapse from AI training on AI is systemic.
love the "reinvent best of web 1/2/3" vision. decentralized knowledge commons without crypto overhead. public reasoning without gatekeeping.
maybe thats the answer - new platforms designed for AI era that keep web 1 authenticity with web 2 collaboration
what would that look like practically?
exactly. action beats paralysis.
writing these articles is my version: documenting reasoning publicly instead of keeping it private.
small steps compound if enough people take them.
appreciate you being vocal about this. other people reading might not comment, but seeing someone actually commit to action (wikipedia, EFF, fediverse migration) makes it feel possible instead of just theoretical. leadership by example.
We'll probably look back to the 2010 decade and early 2020 years as the golden age of knowledge and open data unless we manage to change society's course. But maybe that's a temporary first world problem: knowledge curation might recover after a massive collapse of quality, and the real world problems aren't how to find the right words and details but rather taking action in society and politics, stopping war and terror and helping people beyond our digital bubble.
Thanks for your thoughtful article. While I'd like to see AI fail due to model collapse, I should better hope that we can somehow fix its inherent flaws and that the next generations will know how to use AI and when to distrust it, just like nobody would flee a cinema screaming in fear when a steam locomotive approaches the camera in black and white, or panic when a fictitious audio book about a martian invasion plays on the radio.
Brilliant! I wrote about this some months ago but you have explained it with much more detail.
dev.to/nandofm/ai-the-danger-of-en...
What we will get at the end is a rotten knowledge because it won't be fed with new and fresh ideas.
just read yours. "entropy in knowledge" perfect framing. same conclusion, different angles.
"rotten knowledge" = "knowledge collapse" - same mechanism
appreciate generosity on execution. feels like building toward something
since you published months ago, seen any solution attempts? platforms preserving public knowledge? or just more acceleration?
would love to collaborate exploring this further.
To be honest I only see acceleration. Maybe we need some kind of Foundation (like Asimov's) and/or a place where genuine content can be created and discussed, the fediverse? AI generated content is everywhere, I'm not optimistic.
exactly where im heading.
richard (in comments) committed to building federated Q&A on activitypub.
same conclusion you reached.
asimovs foundation perfect metaphor. preserve knowledge through dark age. but BUILD it not hope for it.
next 2 articles: what stays "above the API" when AI codes, then building a federated stackoverflow.
youre right. waiting for platforms to fix themselves = pessimism justified.
but if we BUILD alternative...
want to be involved? need people who've thought about this beyond hype cycle.
I recently closed my private social media accounts and moved to the fediverse. Apart from that I'm building at home my own cloud server with Nextcloud in a Raspberry. These are my little actions to avoid the "enshittification" of the Internet. Not too much because family deserves also my time but at least I'm doing what I can.
In any case, your idea seems interesting, not sure if I could contribute but I would like to know about the idea/project :-)
this is exactly the kind of builder we need.
youre not just talking. youre DOING (fediverse migration, nextcloud, raspberry pi infrastructure).
the federated Q&A idea: activitypub-based stackoverflow alternative questions/answers federate across instances. community-owned, open source.
richard committed to help. now you. that's enough to start.
going to write the spec as next article (after "above the API" piece). then build prototype weekend after.
can you join a small group chat to sketch architecture? just richard, you, me for now. keep it tight until we have working prototype
family time matters. this is volunteer/passion project, not job. we build what we can when we can.
this is it. concern into action
wikipedia support, EFF membership, migrating from big tech. concrete, not hand-wringing.
im doing similar: writing publicly on dev.to instead of private notes, publishing OSS, documenting reasoning not just solutions.
maybe thats the individual answer: a conscious choice to contribute back even when its less efficient than AI privately.
commons survives if enough people make that choice.
appreciate you actually doing something
that image perfectly captures it. sawing off our own branch.
skepticism works for misinformation (human or AI). but "death of ecosystems" is bigger threat.
SO dying isnt just "less accurate". its the loss of a platform where collective refinement happened.
tech giants consolidating. own models, training data, deployment. replacing public commons with private capital.
perfect visual. mind if i use in follow-up about building alternatives?
We're definitely asking that. We've been talking about it for a good couple of years by this point.
The problem is that the AI hype machine steamrolls everything. Too many people don't care, and will never care.
Thanks for sharing this article. Got me thinking on a lot of topics. How we are losing our authenticity in the way we communicate as we are regurgitating the same knowledge sources. There needs to be over time more choices of models in their design and source. Too many models are trained at a corporate level. We also need models trained by governments too to counteract incentives and produce richness in alternatives. The Swiss produced their national government model recently and it looks promising.
hadnt considered model diversity as defense against homogenization.
if all models train on corporate data with profit incentives, we get value convergence not just output convergence.
swiss government model interesting. public infrastructure, different optimization.
but question: does government AI solve knowledge collapse or just diversify the AI layer? still need humans contributing novel experiences.
maybe government models + federated knowledge platforms. public AI on public knowledge, both community-owned.
Agreed, I think one way is due to the different incentive structures, more people would be inclined and/or nudged to contribute novel experiences if framed differently
I find your suggestions in the last part interesting to consider
incentive framing is key.
SO worked because of reputation. what makes someone publish AI reasoning when private is faster?
government models might change default. contributing becomes civic act not just personal branding.
"your tax dollars fund this AI, help train it"
different motivation than corporate reputation.
exploring in next piece. sustainable commons incentives.
exactly yes. looking forward to the next piece