Zil Norvilis
Why Your Next Rails App Doesn't Need Nginx or Apache

The "Nginx Tax"

For the last 15 years, deploying a Rails app to production has involved a mandatory ritual: configuring Nginx.

We accept it as a fact of life. Puma (the Rails application server) is great at running Ruby, but it has historically been poor at serving static files (images, CSS, JS), and slow clients can tie up its worker threads.

So, we put Nginx in front of it.
But then we have to configure Nginx.

  • "How do I set up Gzip compression?"
  • "What is the Regex for cache headers on assets?"
  • "Why is try_files returning a 404?"
  • "How do I configure X-Sendfile?"

For a solo developer using Docker, this means managing two containers (App + Web Server) or building a complex Dockerfile that runs both processes.

Rails 8 just killed this requirement.
With the introduction of Thruster and Propshaft, you can now expose your Rails container directly to the internet, with production-grade performance and zero configuration.

Part 1: Propshaft (The Asset Pipeline, Simplified)

First, we need to talk about where the files come from.
For years, Sprockets was the standard. It was powerful, but complex. It tried to do too much (transpiling CoffeeScript, compressing images, bundling JS).

Propshaft is the new default asset pipeline for the modern Rails stack.
It is incredibly dumb, in the best way possible.

  1. It takes files from your app/assets folders.
  2. It copies them to public/assets.
  3. It adds a "fingerprint" to the filename for caching (e.g., application-d8a8...css).

That's it. It leaves the "transpiling" to dedicated tools (like Importmaps or Tailwind) and focuses solely on digest stamping.
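The core idea is small enough to sketch in plain Ruby. This mimics the behavior for illustration; it is not Propshaft's actual implementation:

```ruby
require "digest"
require "fileutils"

# Copy a source asset into an output dir, stamping a content digest
# into the filename -- the same idea Propshaft applies to app/assets.
def stamp_asset(source_path, output_dir)
  content = File.binread(source_path)
  digest  = Digest::SHA1.hexdigest(content)   # e.g. "d8a8..."
  ext     = File.extname(source_path)         # ".css"
  base    = File.basename(source_path, ext)   # "application"

  FileUtils.mkdir_p(output_dir)
  fingerprinted = "#{base}-#{digest}#{ext}"
  File.binwrite(File.join(output_dir, fingerprinted), content)
  fingerprinted
end

File.write("application.css", "body { color: red; }")
puts stamp_asset("application.css", "public/assets")
# prints something like "application-<40 hex chars>.css"
```

Because the digest is derived from the file's content, the filename changes whenever the content does, which is exactly what makes aggressive caching safe later on.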

Part 2: Thruster (The Go Proxy)

Now that we have the files, how do we serve them fast?
Enter Thruster.

Thruster is a minimal HTTP/2 proxy written in Go. It is not a Ruby gem that runs inside your app; it is a binary that wraps your app.
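The gem is just a distribution vehicle for that binary. New Rails 8 apps already include it; for an existing app, the Gemfile line is:

```ruby
# Gemfile -- the thruster gem ships the precompiled Go binary
# and provides the bin/thrust launcher
gem "thruster", require: false
```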

In your Dockerfile or Procfile, instead of running:

```dockerfile
CMD ["bin/rails", "server"]
```

You run:

```dockerfile
CMD ["bin/thrust", "bin/rails", "server"]
```

What does this magic wrapper do?

Thruster sits between the internet and Puma. It intercepts every request.

  1. Static File Serving: If a request comes in for /assets/style.css, Thruster sees it exists on the disk and serves it immediately using Go's high-performance file server. It never wakes up Puma. This saves your Ruby workers for actual business logic.
  2. Compression: It automatically Gzips or Brotli compresses your responses. No config needed.
  3. Caching: Because Propshaft adds fingerprints to filenames, Thruster knows these files never change. It automatically adds Cache-Control: public, max-age=31536000, immutable headers.
  4. X-Sendfile Support: If your Rails controller wants to let a user download a large PDF, Rails can tell Thruster "You handle the download." Thruster streams the file to the client, freeing up the Ruby worker instantly.
  5. HTTP/2: Puma speaks HTTP/1.1. Thruster accepts HTTP/2 from clients and proxies it down, allowing multiplexed requests (loading CSS, JS, and images in parallel).
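The caching decision in point 3 is simple enough to illustrate with a toy Ruby function. This mirrors the idea, not Thruster's actual Go code:

```ruby
# A fingerprinted filename embeds a hex digest, so its content can
# never change without the name changing -- safe to cache forever.
FINGERPRINT = /-[0-9a-f]{7,64}\.\w+\z/

def cache_control_for(path)
  if path.match?(FINGERPRINT)
    "public, max-age=31536000, immutable"   # one year, never revalidate
  else
    "no-cache"                              # HTML etc. must be revalidated
  end
end

puts cache_control_for("/assets/application-d8a8f0b2c1.css")
puts cache_control_for("/index.html")
```

The proxy never has to be told which files are cacheable; the naming convention carries that information by itself.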

Why this is a Game Changer for Solo Devs

If you are using Kamal to deploy (which you should be), this simplifies your architecture drastically.

Before:
You needed a "Sidecar" Nginx container in your deploy.yml, or you had to use a PaaS that provided a load balancer.

Now:
Your single Docker container is a self-contained, production-ready unit.

  • It serves its own assets.
  • It compresses its own traffic.
  • It caches its own headers.

You can deploy this container to a raw VPS, and it performs as if it were behind a professionally tuned Nginx setup.
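The resulting Kamal config can be as small as this sketch (service name, image, IP, and host are placeholders; note the absence of any Nginx accessory):

```yaml
# config/deploy.yml -- minimal sketch, values are placeholders
service: myapp
image: myuser/myapp

servers:
  web:
    - 203.0.113.10

proxy:
  ssl: true
  host: myapp.example.com
```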

Summary

The "One-Person Framework" philosophy is about removing friction.
Configuring Nginx location blocks is friction.
Debugging Gzip headers is friction.

Thruster creates a "Drop-in Production" environment. You don't configure it. You just wrap your command in it, and suddenly your app is enterprise-grade fast.

Nginx will always have a place for complex routing and load balancing at the edge. But for the application layer? It's time to retire the config file.


Are you still writing Nginx configs by hand? Let me know in the comments! 👇
