I have deployed code to:
- Bare metal servers that screamed when the fan failed.
- VPS machines that mysteriously rebooted at 3AM.
- Kubernetes clusters that required three YAML sacrifices and a Helm incantation.
- And of course, AWS — where the bill is the only truly consistent runtime.
And after years of “cloud-native architecture,” I’ve realized something uncomfortable:
The cloud is not your computer.
It’s a negotiation.
The Illusion of Control
When you write Go:
err := db.QueryRowContext(ctx, query).Scan(&user.ID)
It feels deterministic.
When you write Rust:
let user = repo.find_user(id).await?;
It feels safe. Structured. Controlled. Owned.
You think:
I wrote this code. I understand this system.
But in the cloud?
- Your “server” is virtual.
- Your “disk” is network-attached.
- Your “network” is software-defined.
- Your “security boundary” is an IAM policy someone copy-pasted from StackOverflow.
You are not running software.
You are renting probability.
Go: The Language of Optimists
Go was built at Google for large distributed systems.
It assumes failure.
if err != nil {
    return err
}
That’s not error handling.
That’s distributed system trauma.
Go developers understand something frontend engineers often don’t:
Everything fails.
Everything times out.
Everything retries.
Everything lies.
And AWS amplifies that truth.
Your Lambda cold starts.
Your ECS task reschedules.
Your EKS node disappears.
Your RDS connection pool silently dies.
Go doesn’t fight this chaos.
It shrugs and returns error.
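In practice, that shrug is a deadline and a retry loop. A minimal sketch, not production code; the call parameter stands in for whatever flaky dependency you are talking to:

package cloud

import (
    "context"
    "fmt"
    "time"
)

// fetchWithRetry gives each attempt its own deadline and retries a few times,
// because in the cloud "it failed" and "it is slow" are usually the same thing.
func fetchWithRetry(ctx context.Context, call func(context.Context) error) error {
    var lastErr error
    for attempt := 1; attempt <= 3; attempt++ {
        attemptCtx, cancel := context.WithTimeout(ctx, 2*time.Second)
        lastErr = call(attemptCtx)
        cancel()
        if lastErr == nil {
            return nil
        }
        // Crude linear backoff; real code would add jitter and cap total time.
        time.Sleep(time.Duration(attempt) * 100 * time.Millisecond)
    }
    return fmt.Errorf("gave up after 3 attempts: %w", lastErr)
}

Boring and explicit, which is exactly what every Go service grows once the first mysterious timeout shows up in production.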
Rust: The Language of Control Freaks
Rust says:
You don’t get memory unless you prove you deserve it.
It forces you to confront ownership, lifetimes, and mutability.
And then we deploy that beautifully memory-safe binary into:
- A container
- On a node
- On a cluster
- In a VPC
- Behind a load balancer
- Behind a CDN
- Behind a WAF
- Behind someone else’s data center
You eliminated use-after-free.
Congratulations.
Now debug why your pod can’t reach S3 because your IAM role lacks s3:ListBucket.
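That failure never shows up at compile time, in any language. Sticking with Go to match the earlier snippets, here is a hedged sketch of how it actually surfaces; the bucket name is invented:

package main

import (
    "context"
    "log"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/config"
    "github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
    ctx := context.Background()
    cfg, err := config.LoadDefaultConfig(ctx)
    if err != nil {
        log.Fatal(err)
    }
    client := s3.NewFromConfig(cfg)

    // If the role is missing s3:ListBucket, this compiles fine and then
    // fails at runtime with an AccessDenied API error. No borrow checker can help.
    _, err = client.ListObjectsV2(ctx, &s3.ListObjectsV2Input{
        Bucket: aws.String("my-bucket"), // hypothetical bucket name
    })
    if err != nil {
        log.Fatal(err)
    }
}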
The New Stack: YAML All The Way Down
There was a time when “stack trace” meant something.
Now the stack looks like this:
- Rust binary
- Docker
- Kubernetes
- Helm
- Terraform
- AWS
- IAM
- VPC
- Subnet
- Route table
- NAT gateway
- Internet gateway
- Cloud provider control plane
- Unknown planetary alignment
You fix a bug in your code.
The problem was a security group.
You increase CPU.
The problem was file descriptor limits.
You scale horizontally.
The problem was a missing index.
We used to debug functions.
Now we debug ecosystems.
AWS: The Most Expensive Distributed Systems Course in History
AWS doesn’t break loudly.
It degrades gracefully.
Which is worse.
Your service doesn’t crash.
It just slows down enough for users to leave quietly.
And the billing dashboard?
It scales perfectly.
You don’t notice a bug because of logs.
You notice it because your credit card calls you.
The Monolith Was Honest
Say what you want about monoliths.
They were:
- Predictable
- Deployable
- Understandable
- Debuggable
When something broke, you SSH’d into one machine.
You checked logs.
You fixed it.
Now?
You open:
- CloudWatch
- X-Ray
- Prometheus
- Grafana
- Jaeger
- Datadog
- And three tabs of Terraform
And you still don't know why the 503s are happening.
But Here’s the Twist
Despite all of this…
Go and Rust are thriving in the cloud.
Why?
Because they are honest languages in a dishonest environment.
Go embraces failure as a first-class value.
Rust enforces correctness at compile time.
Both reduce uncertainty in systems that are fundamentally uncertain.
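In Go, "failure as a first-class value" is literal: an error is an ordinary value you can wrap, return, and inspect later. A minimal sketch; the sentinel and function names are invented for illustration:

package billing

import (
    "errors"
    "fmt"
)

// ErrThrottled is a sentinel value callers can test for explicitly.
var ErrThrottled = errors.New("upstream throttled")

// charge wraps the underlying failure with %w so the cause survives
// the trip up the stack and can still be matched with errors.Is.
func charge(amountCents int) error {
    if err := callProvider(amountCents); err != nil {
        return fmt.Errorf("charge %d cents: %w", amountCents, err)
    }
    return nil
}

// callProvider stands in for a real payment API call.
func callProvider(amountCents int) error {
    return ErrThrottled
}

A caller can then write errors.Is(err, ErrThrottled) and decide to back off, instead of string-matching log output.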
The cloud is chaos.
Go and Rust are discipline.
And that tension is exactly why they belong together.
The Real Skill Isn’t Coding Anymore
The best cloud engineers today aren’t just good at writing code.
They understand:
- Network topology
- IAM blast radius
- Observability strategy
- Latency budgets
- Backpressure (see the sketch below)
- Failure domains
- Cost modeling
In other words:
We didn’t stop being systems engineers.
We just outsourced the hardware and multiplied the complexity.
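A minimal sketch of two of those items, latency budgets and backpressure, as they tend to show up in a Go service; the wrapper and the numbers are illustrative, not a recommendation:

package api

import (
    "context"
    "net/http"
    "time"
)

// limited wraps a handler with a latency budget (a per-request deadline)
// and backpressure (a bounded number of in-flight requests, shedding the
// rest instead of letting them queue forever).
func limited(next http.Handler, maxInFlight int, budget time.Duration) http.Handler {
    slots := make(chan struct{}, maxInFlight) // counting semaphore
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        select {
        case slots <- struct{}{}:
            defer func() { <-slots }()
        default:
            // Full: say so now instead of timing out later.
            http.Error(w, "overloaded", http.StatusServiceUnavailable)
            return
        }
        ctx, cancel := context.WithTimeout(r.Context(), budget)
        defer cancel()
        next.ServeHTTP(w, r.WithContext(ctx))
    })
}

Rejecting early is the whole point: a fast 503 is kinder than a slow success that never arrives.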
Final Thought
In the end, the cloud didn’t make engineering easier.
It made responsibility abstract.
And abstraction is power.
But every abstraction leaks.
Go leaks through error.
Rust leaks through Result.
AWS leaks through your invoice.
The monolith never lied to you.
The cloud smiles politely while charging by the millisecond.
Top comments (22)
Loved this. The irony is real: we obsess over correctness in Go/Rust, then deploy into a probabilistic maze of IAM, networks, and YAML. The monolith wasn’t simple — it was just honest. Great articulation of modern systems pain.
This is such a sharp and thoughtful take — you captured the irony of modern engineering beautifully. Really appreciate how clearly you expressed that tension; it resonates with anyone building real systems today.
This really hits the mark. The cloud is like that friend who promises everything but leaves out the details. Everything seems easy until you're buried in IAM policies and watching your bill climb. 😅
Go and Rust feel like the solid lifeboats in all this chaos. Go’s “everything fails” and Rust’s strict rules are just what we need when the cloud keeps throwing curveballs. No matter how much we layer on Terraform and containers, unpredictability is the only constant.
The monolith might've been messy, but at least it didn’t hit you with a surprise bill. 🙃
Great post! Keep it up!
This is such a relatable take. I really like how you described the cloud as that “overpromising friend” — it sounds funny, but it’s honestly so true. We jump in thinking everything will be simple, and then IAM rules and unexpected bills remind us who’s in control. You captured that chaos in a very real and human way.
I also agree that Go and Rust feel like strong anchors in this environment. Maybe the real solution isn’t fewer tools, but stronger fundamentals and better visibility from the start. If we design with failure and cost in mind early on, we can handle the unpredictability much better. Posts like this make me reflect on my own setups, and I’m genuinely interested in exploring this idea further with you.
Great read! The monolith was a simple relationship; the Cloud is a complicated long-distance one.
Love that analogy 😄 You captured the reality perfectly — modern cloud systems really do feel like juggling time zones and trust issues. Great insight!
I love how you're highlighting the value of discipline in systems engineering. It's like, the more you learn about building things in the cloud, the more you realize how much we're all just winging it, right? I mean, even with the best tools and languages, there's still so much uncertainty involved. It's like trying to construct a house on shaky ground; we've got to double down on fundamentals, I think.
I really appreciate how you pointed out the importance of discipline in systems engineering. You’re absolutely right — no matter how modern our tools are, there’s always a level of uncertainty, and sometimes it really does feel like we’re just figuring things out as we go. That’s why your focus on fundamentals hits home for me.
In my opinion, doubling down on core principles like clear architecture, solid testing, and strong observability is the only real way to build with confidence in the cloud. Trends will change, but good engineering habits won’t. I’m really interested in exploring this mindset more because I believe it’s what separates stable systems from fragile ones.
"You notice it because your credit card calls you" — yeah that one hit different 😂
I've been running a few side projects on a single VPS lately and honestly? It's been refreshing. One machine, one process, journalctl for logs. When something breaks I know exactly where to look. Total cost: $7/month.
Meanwhile at work we've got 14 microservices, a service mesh, and last week we spent 3 hours debugging what turned out to be a misconfigured health check. The code was fine. The YAML wasn't.
I don't think monoliths are always the answer, but I do think we reach for distributed architectures way too early. Most apps could run on a single server for years before they actually need to scale.
That line about the credit card is painfully accurate 😂 I really appreciate how you contrasted the simplicity of a single VPS setup with the operational overhead of 14 microservices — it clearly shows that the real complexity often lives in orchestration, not in the code. Spending hours debugging a misconfigured health check instead of business logic is a strong signal that infrastructure choices can become the bottleneck.
I completely agree that distributed systems should solve a real scaling or domain-boundary problem, not just follow a trend. In many cases, a well-structured modular monolith with clear boundaries and solid observability can carry a product much further than people expect. I’m especially interested in the tipping point — what concrete metrics (traffic, team size, deploy frequency) truly justify the move to microservices? This is the kind of discussion that pushes us to be more intentional with architecture instead of defaulting to YAML-driven complexity.
Couldn't agree more with the statement "reach for distributed architectures way too early." Yeah, sometimes we decide to use it just because it looks good.
Absolutely — distributed architectures should be a scaling strategy, not a starting point. Premature distribution adds operational complexity and coordination overhead long before the system actually demands it.
As someone building two SaaS products with Go (Fiber) right now, I feel this deeply. My entire backend is a single Go binary — auth, API, webhooks, cron jobs, all in one process. Deploy is literally scp binary && systemctl restart. No Kubernetes, no container orchestration, no service mesh.
The irony is that this "outdated" approach handles thousands of requests with barely any memory usage, while my friend's microservices architecture needs a DevOps engineer just to keep the lights on.
Go was literally designed for this — goroutines give you concurrency without the complexity of distributed systems. Most startups don't need microservices. They need to ship fast and iterate.
This really resonates with me. Running everything inside a single Go binary is such an underrated strategy — especially with Fiber — because you eliminate so much operational overhead while still getting strong concurrency from goroutines. When auth, API, and background jobs live in one process, you avoid network hops, serialization costs, and the consistency issues that microservices often introduce too early.
I also think people underestimate how far a well-structured monolith can scale before distribution becomes necessary. Go’s lightweight memory footprint and built-in concurrency model make it ideal for high-throughput workloads without orchestration complexity. Ship fast, validate the product, and only introduce distributed systems when there’s a clear technical bottleneck — not just because it’s trendy.
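For anyone curious what that single-binary shape looks like, a minimal sketch assuming Fiber v2; the route and the one-minute interval are made up for illustration:

package main

import (
    "log"
    "time"

    "github.com/gofiber/fiber/v2"
)

func main() {
    app := fiber.New()

    // HTTP API and background work share one process; goroutines are the "services".
    app.Get("/health", func(c *fiber.Ctx) error {
        return c.SendString("ok")
    })

    go func() {
        for range time.Tick(1 * time.Minute) {
            log.Println("cron-style job ran") // stand-in for real background work
        }
    }()

    log.Fatal(app.Listen(":8080"))
}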
Art Light, I too face this while using AWS—billing creeps up from obscure services and data egress fees, while debugging across K8s and IAM feels like herding cats in YAML hell. Still, your monolith love hits home; sometimes simpler is the real superpower for us Go/Rust folks. Thanks for the reality check!
Love this — “herding cats in YAML hell” is painfully accurate 😄 You nailed it, and I really respect how you balance real-world AWS scars with the wisdom to appreciate simplicity — that mindset is what makes strong Go/Rust engineers stand out.
Absolutely! This post highlights a critical truth: in the cloud era, we trade predictable silos for a chaotic mesh of systems. Go and Rust thrive because they don’t shy away from the mess—Go embraces error as expected, and Rust demands strict correctness. The real skill set now includes navigation through not just code, but a complex network of dependencies and configurations. Spot on! 🚀
I really enjoyed reading this — you explained the cloud reality in a very clear and honest way. I agree that Go and Rust don’t try to hide the complexity, but instead give us tools to handle it properly. That mindset of accepting errors and focusing on correctness feels very practical in today’s environment.
In my opinion, the next step is building better patterns and tooling around this complexity so teams can move fast without losing control. I’m especially interested in how we can design systems that are both strict and flexible at the same time. Thanks for sharing this — it definitely made me think deeper about how we navigate modern infrastructure.
I agree with these sentiments. Consider Elixir / Erlang for those interested in working in a system that was designed for these problems.
Great point — if you’re building highly concurrent, fault-tolerant systems, Elixir/Erlang’s BEAM VM offers battle-tested reliability and lightweight process isolation that most modern stacks still struggle to match.
This is so real, we’re not debugging code anymore, we’re debugging systems. Or maybe creating those systems.
So true — that line really hits. I love how you framed it, because these days the real challenge isn’t just fixing logic, it’s understanding how services, configs, and infra interact under load.