Yanko Alexandrov

Running a Low Power AI Server 24/7 — My Setup Under 15W

I've been running a personal AI assistant 24/7 for the past few months. It handles my Telegram messages, automates browser tasks, manages my calendar, and runs local inference for privacy-sensitive queries. And it draws less power than a laptop charger.

Here's how I built a low power AI server that stays on around the clock — and what it actually costs to run.

Why "Always On" Matters

An AI assistant that you have to manually start isn't really an assistant. It's a tool. The difference is like having a butler versus owning a Swiss Army knife — one is ready when you need it, the other is ready when you remember it exists.

For an always-on AI assistant to make sense, it needs to:

  1. Cost almost nothing to run (electricity)
  2. Make zero noise (it lives in your home/office)
  3. Be reliable (no crashes, no overheating)
  4. Actually be capable (not just a glorified Raspberry Pi sitting idle)

That last point is where most low-power setups fall apart. Sure, a Pi 5 sips power — but it can't run local AI models. And a beefy desktop GPU server can run anything — but at 300W, you're paying €30+/month just in electricity.

My Setup: ClawBox (Jetson Orin Nano)

I landed on the ClawBox, which is an NVIDIA Jetson Orin Nano with a 512GB SSD and OpenClaw pre-installed. Here are the specs that matter for this article:

  • TDP: 15W (adjustable down to 7W in low-power mode; see the command sketch after this list)
  • No fan — completely passive cooling
  • 67 TOPS of AI compute via NVIDIA's GPU
  • Always on via systemd services, auto-starts on power recovery (unit sketch below)
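
That 7W low-power mode is a one-command switch using NVIDIA's nvpmodel tool. A quick sketch; the mode IDs are defined in /etc/nvpmodel.conf and differ between modules and JetPack releases, so check yours first:

```bash
# Show the currently active power mode
sudo nvpmodel -q

# Switch to the 7W profile (mode ID 1 is typical for the Orin Nano,
# but verify against /etc/nvpmodel.conf on your unit)
sudo nvpmodel -m 1
```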

I have it sitting on a shelf next to my router. No noise, no heat you'd notice, no blinking RGB. Just a small box doing its thing.
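
The "always on" part is just systemd doing its job. Here's a minimal unit sketch; the service name, user, and paths are my assumptions, not necessarily what ships on the ClawBox:

```ini
# /etc/systemd/system/openclaw.service (hypothetical name and paths)
[Unit]
Description=OpenClaw assistant
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
User=openclaw
WorkingDirectory=/opt/openclaw
ExecStart=/usr/bin/node index.js
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
```

Enable it once with `sudo systemctl enable --now openclaw.service` and it survives reboots; since the board powers back on when mains returns, the stack recovers from an outage without me touching anything.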

The Electricity Math

Let's get specific. I'm in Europe, paying roughly €0.25/kWh (varies by country — could be €0.15 in France or €0.35 in Germany).

Cost Per Device Running 24/7

| Device | Typical Wattage | kWh/month | Cost/month (€0.25/kWh) | Cost/year |
|---|---|---|---|---|
| Raspberry Pi 5 | 5-8W | 3.6-5.8 | €0.90-1.44 | €10.80-17.28 |
| ClawBox (Jetson) | 12-15W | 8.6-10.8 | €2.16-2.70 | €25.92-32.40 |
| Intel N100 Mini PC | 15-25W | 10.8-18.0 | €2.70-4.50 | €32.40-54.00 |
| Mac Mini M4 (idle) | 5-7W | 3.6-5.0 | €0.90-1.26 | €10.80-15.12 |
| Mac Mini M4 (load) | 20-40W | 14.4-28.8 | €3.60-7.20 | €43.20-86.40 |
| Old laptop/desktop | 40-80W | 28.8-57.6 | €7.20-14.40 | €86.40-172.80 |
| Desktop GPU server | 150-350W | 108-252 | €27.00-63.00 | €324-756 |

My ClawBox running 24/7 costs me roughly €2.50/month. That's a cup of coffee. For a fully functional AI assistant with GPU-accelerated inference.

Compare that to running an old laptop (€10+/month) or a GPU server (€30-60/month). The savings compound — over a year, a low-power setup saves you hundreds.
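
If you want to plug in your own tariff and wattage, the arithmetic behind the table is trivial. A quick sketch (720 hours is roughly one month):

```python
# Monthly kWh and running cost for an always-on device.
def running_cost(watts: float, eur_per_kwh: float = 0.25, hours: float = 720):
    kwh = watts * hours / 1000            # e.g. 15 W * 720 h = 10.8 kWh
    monthly = kwh * eur_per_kwh
    return kwh, monthly, monthly * 12

for name, watts in [("ClawBox (15W)", 15), ("Desktop GPU server (350W)", 350)]:
    kwh, month, year = running_cost(watts)
    print(f"{name}: {kwh:.1f} kWh/month, €{month:.2f}/month, €{year:.0f}/year")
```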

The Noise Factor

This is wildly underrated. I tried running OpenClaw on an Intel N100 mini PC first. It worked, but the tiny fan would spin up during browser automation tasks. At 2 AM, in a quiet apartment, you hear it.

The ClawBox is fanless. Zero noise. This sounds like a small thing until you've lived with a server in your home for a month. Silent operation isn't a nice-to-have — it's a requirement.

Noise comparison:

| Device | Noise Level | Notes |
|---|---|---|
| Raspberry Pi 5 | 0 dB (fanless) | Silent, but limited capability |
| ClawBox (Jetson) | 0 dB (fanless) | Silent + GPU acceleration |
| N100 Mini PC | 20-35 dB | Fan spins under load |
| Mac Mini M4 | 0-15 dB | Mostly silent, fan rare |
| Desktop tower | 25-45 dB | Always audible |

What Actually Runs on 15W

People assume "low power" means "low performance." Here's what my 15W ClawBox handles simultaneously:

  • OpenClaw core — Node.js orchestration engine
  • Telegram + WhatsApp + Discord bots — always connected
  • Browser automation — Chromium with Playwright for web tasks
  • Local LLM inference — quantized models via CUDA on the Jetson GPU (sketch at the end of this section)
  • PostgreSQL — conversation history and memory
  • Nginx — reverse proxy for webhooks

All of this, concurrently, under 15W. The Jetson's GPU handles the AI inference heavy lifting while the ARM CPU manages orchestration. It's genuinely impressive what modern ARM + GPU silicon can do within a tiny power envelope.
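
To make the local inference bullet concrete, here's one way to run a quantized model with CUDA offload on a Jetson using llama-cpp-python. This is an illustration of the approach, not a claim about what OpenClaw uses internally, and the model path is a placeholder:

```python
# Sketch: load a quantized GGUF model and offload layers to the Jetson GPU.
# Requires llama-cpp-python built with CUDA enabled; the model path is hypothetical.
from llama_cpp import Llama

llm = Llama(
    model_path="/models/llama-3.2-3b-instruct-q4_k_m.gguf",  # placeholder
    n_gpu_layers=-1,   # offload every layer that fits onto the GPU
    n_ctx=4096,
)

result = llm("Write one sentence about low-power servers.", max_tokens=64)
print(result["choices"][0]["text"])
```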

Thermal Management Without a Fan

The ClawBox uses a passive aluminum heatsink design. In my testing:

  • Idle: ~38°C
  • Normal load (chat + browser automation): ~52°C
  • Heavy inference: ~65°C
  • Ambient temp: ~23°C (indoor)

The Jetson throttles at 85°C, which I've never come close to hitting in normal use. Even during sustained local model inference, temperatures stay well within safe ranges.

One tip: don't put it in an enclosed cabinet. Give it a few centimeters of breathing room on all sides and you'll be fine.

Comparing to Raspberry Pi

A lot of people ask: "Why not just use a Raspberry Pi 5? It's cheaper and uses less power."

Fair question. The Pi 5 uses ~5-8W versus the Jetson's ~12-15W. That's a €1-2/month difference. But here's what you lose:

  • No GPU acceleration for AI — local models fall back to the CPU, which is painfully slow for anything beyond tiny quantized models
  • 8GB RAM ceiling — tight for OpenClaw + browser automation + database
  • SD card reliability — not ideal for 24/7 write-heavy workloads
  • No CUDA — lose access to the entire NVIDIA AI ecosystem

For the full comparison, I put together a detailed Raspberry Pi vs Jetson breakdown that covers benchmarks, real-world performance, and total cost of ownership.

The Pi is great for learning and light tasks. But for an always-on AI assistant that can actually think locally, the extra 7-10W is worth every milliwatt.

Tips for Running Any Low Power AI Server

Regardless of what hardware you choose:

  1. Use an SSD, not an SD card. Write endurance matters for 24/7 operation.
  2. Set up auto-restart on power failure. BIOS setting + systemd services.
  3. Monitor temperatures. A simple cron job logging thermal_zone values works (see the sketch after this list).
  4. Use a UPS or at least a surge protector. Cheap insurance for your always-on server.
  5. Optimize your services. Disable what you don't use. Every watt counts when multiplied by 8,760 hours.
  6. Put it on a separate VLAN if you're security-conscious. An always-on device is an always-on attack surface.
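
Tip 3 in practice: a few lines of Python plus a cron entry is all the thermal monitoring an always-on box needs. Paths and the schedule below are assumptions; adjust to taste:

```python
#!/usr/bin/env python3
# Append timestamped thermal-zone readings to a log file.
# Example cron entry (every 5 minutes):
#   */5 * * * * /usr/bin/python3 /opt/scripts/log_temps.py
import glob
import time

LOG_PATH = "/var/log/thermal.log"  # pick somewhere your cron user can write

with open(LOG_PATH, "a") as log:
    stamp = time.strftime("%Y-%m-%d %H:%M:%S")
    for zone in sorted(glob.glob("/sys/class/thermal/thermal_zone*/temp")):
        with open(zone) as f:
            millideg = int(f.read().strip())  # sysfs reports millidegrees Celsius
        log.write(f"{stamp} {zone} {millideg / 1000:.1f}C\n")
```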

The Bottom Line

Running a low power AI server isn't about compromise — it's about right-sizing. I don't need a 350W GPU server to manage my messages, automate web tasks, and occasionally run local inference. I need a quiet, efficient box that costs less per month than a streaming subscription.

At 15W and €2.50/month, the ClawBox is the setup I'd recommend to anyone who wants an always-on AI assistant without the noise, heat, or electricity bill of traditional server hardware.

The future of personal AI isn't in the cloud. It's on your shelf, drawing less power than an old incandescent lightbulb.


Want to build your own low-power AI setup? Check out the hardware comparison guide for detailed benchmarks and buying recommendations.
