OpenClaw has gained noticeable traction in developer and infrastructure-focused communities. It is an open-source, self-hosted platform desig...
OpenClaw breaks because it has no security, and it cannot build security, not even in theory: it needs to be built into the programming language. To understand why, realise I've announced I'll give people $100 if they can hack this endpoint.
This post is a much-needed cold shower for the "AI automation" hype train.
We’ve reached a weird point in the industry where people see a Docker-compose file and think "set it and forget it," but as you pointed out, OpenClaw isn't just a script—it's a stateful gateway. The moment you give an LLM the keys to your messaging channels and internal APIs, you’ve essentially deployed a junior dev with no supervision and an infinite—yet hallucination-prone—memory.
The distinction between "Prompt Logic" and "Business Logic" is the real kicker here. I’ve seen teams build incredibly complex routing based on a system prompt, only to have the whole thing collapse because a model update slightly changed how it interprets "priority." Without that deterministic validation layer you mentioned, you're not building a system; you're building a high-stakes guessing game.
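To make that concrete, here's a minimal sketch of the kind of deterministic validation layer I mean: the model's output is treated as an untrusted proposal and checked against an explicit schema and allowlist before anything executes. All the names here (`route_ticket`, `ALLOWED_PRIORITIES`, and so on) are hypothetical, not OpenClaw APIs.

```python
# Minimal sketch of a deterministic validation layer (hypothetical names,
# not OpenClaw's actual API). The LLM proposes an action as JSON; nothing
# runs unless the proposal passes explicit, versioned business rules.
import json

ALLOWED_ACTIONS = {"route_ticket", "post_summary"}   # explicit allowlist
ALLOWED_PRIORITIES = {"low", "normal", "high"}       # business logic, not prompt logic

def validate_proposal(raw: str) -> dict:
    """Reject anything the model 'invents' outside the contract."""
    proposal = json.loads(raw)                       # raises on malformed output
    action = proposal.get("action")
    if action not in ALLOWED_ACTIONS:
        raise ValueError(f"unknown action: {action!r}")
    if action == "route_ticket":
        priority = proposal.get("priority")
        if priority not in ALLOWED_PRIORITIES:
            # A model update that reinterprets "priority" fails loudly here
            # instead of silently re-routing tickets.
            raise ValueError(f"invalid priority: {priority!r}")
    return proposal

# Example: the model drifted and emitted "urgent" instead of "high".
try:
    validate_proposal('{"action": "route_ticket", "priority": "urgent"}')
except ValueError as err:
    print(f"blocked: {err}")   # blocked: invalid priority: 'urgent'
```

The point isn't the ten lines of code, it's that the contract lives outside the prompt, so a model update can't quietly rewrite your routing rules.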
I also think the "Session and Context Management" point is undervalued. Everyone wants "long-term memory" until the assistant starts bringing up stale assumptions from three weeks ago during a live production incident. Treating it as infrastructure means we need to start talking about context pruning and state TTLs the same way we talk about database maintenance.
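To illustrate the state-TTL idea (a sketch only, none of this is OpenClaw's real session API): treat conversational memory like any other cache, with an expiry and an explicit pruning pass.

```python
# Sketch of context pruning with a TTL, treating assistant memory like a
# cache that needs maintenance. Names are illustrative, not OpenClaw's API.
import time

CONTEXT_TTL_SECONDS = 7 * 24 * 3600   # assumption: week-old context is stale

class SessionContext:
    """Toy long-term memory with an explicit TTL."""
    def __init__(self):
        self._facts: list[tuple[float, str]] = []    # (timestamp, fact)

    def remember(self, fact: str, ts: float | None = None) -> None:
        self._facts.append((ts if ts is not None else time.time(), fact))

    def active_facts(self) -> list[str]:
        # Prune on read, the same way a cache expires entries.
        cutoff = time.time() - CONTEXT_TTL_SECONDS
        self._facts = [(ts, f) for ts, f in self._facts if ts >= cutoff]
        return [f for _, f in self._facts]

ctx = SessionContext()
three_weeks_ago = time.time() - 21 * 24 * 3600
ctx.remember("db-primary is in maintenance mode", ts=three_weeks_ago)  # stale
ctx.remember("incident INC-123 is open")                               # fresh
print(ctx.active_facts())   # ['incident INC-123 is open']
```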
Really solid analysis—it's the difference between playing with a toy and managing a production surface.
Interesting take, but this feels like blaming the user instead of the tool. If OpenClaw is that easy to misuse, isn’t that a design flaw in itself?
That’s a fair challenge. I’m not saying the platform is blameless. What I’m saying is that OpenClaw exposes infrastructure-level capabilities behind an interface that looks deceptively simple. That gap creates misuse.
The platform optimizes for flexibility and speed. It does not enforce architectural discipline by default. That’s a conscious trade-off, but one that becomes dangerous when teams treat it like a low-risk automation layer instead of long-lived infrastructure.
This is one of the more grounded takes I’ve seen on OpenClaw. Appreciate the technical framing.
Thanks. That was exactly the goal.
You mention security risks but don’t go deep. Is this mainly about prompt injection or something else?
Prompt injection is just the visible symptom. The deeper issue is capability exposure without governance.
Once OpenClaw has credentials, channel access, and execution privileges, the real risks are:
• over-scoped tokens
• missing audit trails
• unclear ownership of assistant behavior
• inability to prove why an action happened
Prompt injection matters, but lack of observability and control is what turns incidents into real operational damage.
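For what it's worth, the "prove why an action happened" part doesn't need anything exotic. A sketch (a hypothetical wrapper, not a feature OpenClaw ships): every outbound action gets logged with the triggering message and a correlation ID before it executes.

```python
# Sketch of an audit trail for assistant actions (hypothetical wrapper,
# not something OpenClaw provides). Every action is logged with the
# message that triggered it and a correlation ID, *before* it runs, so
# incident review can answer "why did the bot do that?"
import json
import logging
import uuid

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit = logging.getLogger("assistant.audit")

def execute_with_audit(action: str, params: dict, triggering_message: str) -> str:
    correlation_id = str(uuid.uuid4())
    audit.info(json.dumps({
        "correlation_id": correlation_id,
        "action": action,
        "params": params,
        "triggered_by": triggering_message,
    }))
    # ... dispatch the action here; the log line exists even if it fails.
    return correlation_id

execute_with_audit(
    action="restart_service",
    params={"service": "billing-worker"},
    triggering_message="user@ops: can you bounce billing?",
)
```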
Most of what you describe sounds like standard SRE concerns. Why single out OpenClaw specifically?
You’re right, these are classic SRE problems. OpenClaw isn’t unique in that sense.
What is specific to OpenClaw is the combination of long-lived state, conversational AI, and external execution in one surface. That combination makes it very easy to accidentally cross from “assistant” into “actor” without redesigning the system. Traditional services usually force that separation earlier.
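That assistant-vs-actor line can even be made mechanical. A sketch under assumed names: read-only capabilities are granted by default, and anything that mutates the outside world requires an explicit, separately issued grant, so you can't drift from "assistant" into "actor" just by editing a prompt.

```python
# Sketch of enforcing the assistant/actor separation in code (illustrative
# names, not OpenClaw's permission model). Read-only tools are the default;
# acting on the world needs an explicit grant.
READ_ONLY_TOOLS = {"search_docs", "summarize_thread"}
ACTOR_TOOLS = {"merge_pr", "restart_service", "send_announcement"}

def authorize(tool: str, grants: set[str]) -> None:
    if tool in READ_ONLY_TOOLS:
        return                                   # assistants can always read
    if tool in ACTOR_TOOLS and tool in grants:
        return                                   # acting requires a named grant
    raise PermissionError(f"{tool!r} requires an explicit actor grant")

grants = {"send_announcement"}                   # issued per-deployment, audited
authorize("summarize_thread", grants)            # fine: read-only
authorize("send_announcement", grants)           # fine: explicitly granted
try:
    authorize("restart_service", grants)
except PermissionError as err:
    print(err)   # 'restart_service' requires an explicit actor grant
```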
We’re running OpenClaw in production for internal support bots and it’s been stable so far. This article sounds a bit alarmist to me.