Static IP Addresses for GKE Outbound Traffic: A Practical Guide to Cloud NAT

TL;DR

To get a fixed public IP for your GKE cluster's outbound traffic:

  1. Reserve a regional static IP
  2. Create a Cloud Router in the same region
  3. Configure Cloud NAT with Manual IP assignment using that reserved IP

Done! All outbound traffic from your pods will always exit through the same IP.

The problem:

Your application running on Google Kubernetes Engine (GKE) needs to connect to an external database that requires IP whitelisting. But outbound traffic from your pods exits through whichever node they happen to run on, and those node IPs are ephemeral: they change as nodes are recreated or the cluster scales. The solution? Cloud NAT with manual static IP assignment.

Why is this necessary?

In modern microservice architectures, it's common for Kubernetes applications to need access to:

  • Managed databases in other GCP projects
  • Third-party APIs with strict firewall policies
  • Legacy services that only allow access from known IPs

The challenge: GKE nodes either have ephemeral public IPs or, in private clusters, no public IPs at all, making it impossible to maintain a stable whitelist.

The solution: Cloud NAT with manual assignment

Cloud NAT (Network Address Translation) acts as a gateway that translates your cluster's internal private addresses to a fixed, predictable public IP address.

Step-by-step implementation:

Step 1: Reserve a static IP address

First, reserve a regional IP that we'll use as the public "face" of our cluster:

gcloud compute addresses create nat-static-ip \
  --region=us-central1

Important note: The IP must be in the same region as your GKE cluster.
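
To hand that address to whoever manages the destination's firewall, you can read back the value you just reserved:

gcloud compute addresses describe nat-static-ip \
  --region=us-central1 \
  --format="value(address)"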

Step 2: Create a Cloud Router

Cloud NAT requires a Cloud Router, which acts as the control plane for NAT configuration:

gcloud compute routers create nat-router \
  --network=my-vpc \
  --region=us-central1
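
If you want to sanity-check the router before moving on, a quick describe confirms it landed in the right VPC and region:

gcloud compute routers describe nat-router \
  --region=us-central1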

Step 3: Configure Cloud NAT with manual assignment

This is the critical step. You must choose manual assignment (not automatic) to ensure the IP remains fixed:

gcloud compute routers nats create nat-config \
  --router=nat-router \
  --region=us-central1 \
  --nat-external-ip-pool=nat-static-ip \
  --nat-all-subnet-ip-ranges

The --nat-external-ip-pool flag specifies the static IP we reserved in step 1.
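
To confirm the gateway is actually holding the reserved address, inspect the NAT configuration and the router's live status; the static IP should appear in both:

gcloud compute routers nats describe nat-config \
  --router=nat-router \
  --region=us-central1

gcloud compute routers get-status nat-router \
  --region=us-central1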

Step 4: Add the IP to your destination's whitelist

Once Cloud NAT is configured, all outbound traffic from your cluster will use the static IP. You can now confidently add it to your database or external service's firewall.
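
If the destination happens to live in another GCP project, that whitelist entry is just an ingress firewall rule whose source range is your reserved address. A minimal sketch, assuming a Postgres database on port 5432 (the project, network, rule name, and IP below are placeholders):

# Hypothetical example: run in the destination project, with your real NAT IP
gcloud compute firewall-rules create allow-gke-nat \
  --project=destination-project \
  --network=destination-vpc \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:5432 \
  --source-ranges=203.0.113.10/32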

Key benefits

Persistence: The IP won't change even if the cluster restarts or nodes are recreated.

Security: Your GKE nodes can remain in private subnets without public IPs, reducing your attack surface.

Scalability: Cloud NAT is a managed service that scales automatically without impacting performance.

No application changes: If you use GitOps with ArgoCD, you don't need to modify your deployments. Configuration is entirely at the infrastructure level.

Important considerations

Capacity management: In manual assignment mode, you're responsible for calculating how many IPs/ports you need. If your cluster grows significantly, you might experience OUT_OF_RESOURCES errors.
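
If NAT logs start showing drops, the usual first lever is the per-node port allocation (and, beyond that, adding a second reserved IP to the pool). A minimal sketch using the resources created above:

# Raise the minimum ports reserved per node (the default is 64)
gcloud compute routers nats update nat-config \
  --router=nat-router \
  --region=us-central1 \
  --min-ports-per-vm=128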

Monitoring: Set up alerts for NAT port utilization to detect issues before they impact production.
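
Cloud NAT can also write logs to Cloud Logging, which makes dropped connections much easier to spot; restricting the filter to errors keeps the volume manageable:

gcloud compute routers nats update nat-config \
  --router=nat-router \
  --region=us-central1 \
  --enable-logging \
  --log-filter=ERRORS_ONLY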

Alternatives: For very specific use cases (such as custom NAT logic or complex firewall requirements), consider whether a self-managed NAT instance (a VM you run and patch yourself) might be more appropriate, though this increases operational overhead.

When to use this solution?

✅ You need to communicate with services requiring IP whitelisting

✅ You run private GKE clusters

✅ You want a scalable, managed solution

✅ You need compliance and centralized auditing of outbound traffic

❌ You have extremely custom NAT logic

❌ You need granular control the managed service doesn't offer

How to verify it's working

Once configured, you can easily test it:

# Create a temporary pod and check your public IP
kubectl run curl-test --image=radial/busyboxplus:curl --rm -it -- \
  curl -s ifconfig.me

# Or run continuous checks to confirm the IP stays consistent
kubectl run -i --tty curl-test --image=radial/busyboxplus:curl --rm -- \
  sh -c "while true; do curl -s ifconfig.me; echo; sleep 2; done"

You should see your reserved static IP returned consistently.

Common issues and how to fix them

  • IP keeps changing → Double-check that you selected "Manual" (not "Automatic") in your Cloud NAT configuration.
  • Reserved IP in wrong region → The static IP and Cloud NAT must be in the same region as your GKE cluster.
  • Pods still using dynamic IPs → Ensure the NAT is applied to the subnetwork where your GKE cluster runs (NAT configuration → "Selected subnetworks"); the mapping check after this list shows exactly which ranges are covered.
  • Using GKE Autopilot → It works exactly the same. No special configuration needed.
  • No traffic showing in NAT → Wait 2-3 minutes after applying changes (Cloud NAT takes a moment to propagate).
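
For the subnetwork and propagation issues above, the per-VM NAT mappings show exactly which node IP ranges are covered and which external IP and port ranges they were assigned:

gcloud compute routers get-nat-mapping-info nat-router \
  --region=us-central1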

Conclusion

Cloud NAT with manual IP assignment is GCP's standard solution for this common use case. It's reliable, scalable, and relatively simple to configure. Most importantly: it allows you to keep your resources secure in private networks while maintaining controlled connectivity to the outside world.

Have you implemented Cloud NAT in your infrastructure? What challenges did you encounter?
