A few years ago I downsized from a full-size rack to... just my ISP router. Practical? Sure. Boring? Absolutely.
Now I'm getting back into homelabbing, this time with mini PCs and SBCs running a multi-node Kubernetes cluster to sharpen my skills. But before I could spin up workloads, I needed to rebuild my home network with proper segmentation, dual-site connectivity to a second location that doubles as my backup lab, and secure VPN access when traveling.
This three-part series documents that rebuild:
Part 1 - Got Lazy With My Home Network—So I Rebuilt It Properly
Part 2 - Turning One LAN Into Five Networks: VLANs + Wi‑Fi Segmentation at Home
Part 3 (this post) - Making It Two Locations: A Routed WireGuard Tunnel Between My Labs
Recap and what Part 3 covers
In Part 1, I explained why I tore down my “it mostly works” home network and rebuilt it with clear boundaries: VLAN segmentation, predictable addressing, and a design that could eventually stretch across two locations. In Part 2, I implemented that design at Site A (my house): the GL.iNet Brume 2 runs OpenWrt with VLAN interfaces, the Netgear managed switch carries those VLANs end-to-end, and the Omada AP maps SSIDs to VLAN IDs so Wi‑Fi clients land in the right networks.
Part 3 is where the lab becomes truly dual-site. I turn my parents’ house into “Site B” using a Xiaomi router flashed with OpenWrt, then connect Site A and Site B using a routed site‑to‑site WireGuard tunnel (Brume 2 as the hub/server, Xiaomi as the spoke/client). From there, the focus is on the practical glue that makes this usable in real life: which subnets are routed over the tunnel, what firewall forwardings are allowed (and which are explicitly not), and how I can still reach both sites securely when traveling without poking holes in the segmentation work from Part 2.
Why the Xiaomi, and how the tunnel starts
The router I used at Site B (my parents’ house) started life as a basic Wi‑Fi extender, but it turned out to be the ideal “cheap lab edge” once I realized it could be flashed to OpenWrt. With OpenWrt on the Xiaomi, I can run WireGuard and have it establish a site‑to‑site tunnel back to my Site A router (the GL.iNet Brume 2), which turns that second location into a routable extension of my homelab instead of an isolated island.
On the Site A side, I kept the initial WireGuard setup intentionally simple by using the Brume 2’s native GL.iNet UI rather than building everything manually in LuCI. I chose WireGuard because it’s lightweight and a good performance fit for small routers, and I configured the VPN server with a dedicated tunnel IP—10.11.255.1 in my case—so the VPN has a clean “transit” network that stays separate from my LAN subnets.
To bootstrap the site‑to‑site link, I created a new WireGuard client/profile in the GL.iNet UI, and the Brume 2 generated a ready-to-import configuration file for the other end. That file is the starting point on the Xiaomi side: import it, bring up the tunnel, and then move from “the interface is up” to “the sites can actually route to each other” by aligning AllowedIPs, routes, and firewall forwardings in the next sections.
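For reference, the exported file is a standard WireGuard configuration. A minimal sketch of what it looks like, with placeholder keys and endpoint (the exact addresses depend on your setup):

[Interface]
# Tunnel address the Brume 2 assigned to this peer (illustrative)
Address = 10.11.255.2/24
PrivateKey = <client-private-key>

[Peer]
PublicKey = <brume2-public-key>
# Public endpoint of Site A (placeholder hostname)
Endpoint = site-a.example.com:51820
# The default export routes everything; this gets narrowed to the
# Site A subnets once we switch to site-to-site routing
AllowedIPs = 0.0.0.0/0
PersistentKeepalive = 25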
Site B: Xiaomi client setup (WireGuard-focused)
Site A is already prepared to accept a site‑to‑site peer, so the next job is turning the Xiaomi at Site B into a clean “WireGuard client that routes a subnet” device. In this post I’m deliberately not going deep into the Site B homelab itself; the point is getting a stable tunnel that can carry traffic between a Site B lab subnet (10.22.30.0/24 in my scheme) and the networks at Site A.
Flashing the Xiaomi to OpenWrt (high level)
The Xiaomi firmware flash is a one-time prerequisite, and I’m not going to replicate the steps here because they vary by model and hardware revision. The key takeaway is that once the router is running OpenWrt (and you can access LuCI), it stops being “just an extender” and becomes a proper edge router that can run a routed WireGuard tunnel.
Installing WireGuard support on OpenWrt
On a fresh OpenWrt install, WireGuard typically isn’t fully available in the UI until you install the kernel module and userland tools (and optionally the LuCI protocol helper). On the Xiaomi I installed the WireGuard packages first and rebooted, so the WireGuard protocol shows up cleanly when creating interfaces and peers.
Example (SSH):
opkg update
opkg install wireguard-tools kmod-wireguard luci-proto-wireguard
reboot
A practical pitfall here is storage: some routers have limited free flash, so it’s worth checking before installing extra packages.
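Both checks are standard OpenWrt commands, so a quick sanity pass looks like this:

# Free space on the writable overlay partition
df -h /overlay
# Confirm the WireGuard packages actually landed
opkg list-installed | grep -i wireguard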
Importing the profile and bringing up the tunnel
With the Site A configuration exported and ready, the Xiaomi side becomes mostly an OpenWrt exercise: install WireGuard support, create a WireGuard interface, and then copy/import the peer settings so the Xiaomi knows how to reach the Brume 2. In my case I ended up with an interface named wg_site_a carrying the Xiaomi’s tunnel address, and a peer entry pointing at the Brume 2 endpoint with PersistentKeepalive enabled, since Site B sits behind NAT.
The detail that turns this from “remote access” into “site‑to‑site routing” is AllowedIPs plus route creation on the WireGuard client.
On the Xiaomi, with route_allowed_ips enabled, OpenWrt automatically installs routes for the peer’s AllowedIPs, so traffic destined for Site A subnets is forwarded into the WireGuard interface instead of the regular WAN.
Also, remember AllowedIPs isn’t just a routing convenience: it’s WireGuard’s traffic selector—outbound destinations matching AllowedIPs go to that peer, and inbound packets from that peer are only accepted if their source IPs are within that peer’s AllowedIPs.
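Concretely, a hedged sketch of the resulting /etc/config/network entries on the Xiaomi (keys, endpoint, and the exact Site A subnets are placeholders following my addressing scheme):

config interface 'wg_site_a'
	option proto 'wireguard'
	option private_key '<xiaomi-private-key>'
	# Tunnel address for this end (illustrative)
	list addresses '10.11.255.2/24'

config wireguard_wg_site_a
	option description 'Brume 2 at Site A'
	option public_key '<brume2-public-key>'
	option endpoint_host 'site-a.example.com'
	option endpoint_port '51820'
	# Install routes for the allowed_ips below automatically
	option route_allowed_ips '1'
	# Keep the NAT mapping alive, since Site B sits behind NAT
	option persistent_keepalive '25'
	# Site A networks reachable over the tunnel (illustrative)
	list allowed_ips '10.11.10.0/24'
	list allowed_ips '10.11.30.0/24'
	list allowed_ips '10.11.255.0/24'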
Minimal firewall glue (so it routes, but stays contained)
To keep Site B intentionally small and safe, I didn’t open the tunnel to every local network on the Xiaomi. Instead, I put the WireGuard interface into its own firewall zone and allowed forwarding only between that zone and my “lab” interface at Site B (the one I use for homelab devices), which gives me bidirectional routing between the two labs without making the entire Site B router a wide-open bridge into everything.
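In /etc/config/firewall terms, that containment looks roughly like this (zone and interface names are my own; “lab” is the Site B homelab interface):

config zone
	option name 'wg'
	# ACCEPT input so the router itself can be managed over the tunnel;
	# tighten to REJECT if you don't need that
	option input 'ACCEPT'
	option output 'ACCEPT'
	option forward 'REJECT'
	list network 'wg_site_a'

config forwarding
	option src 'wg'
	option dest 'lab'

config forwarding
	option src 'lab'
	option dest 'wg'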
Brume 2: firewall forwardings for site‑to‑site routing
At this point the tunnel can be “up” and still be useless if the firewall won’t let traffic cross it, so on the Brume 2 my work was almost entirely firewall plumbing. I didn’t create any WireGuard devices or interfaces manually on the Brume 2: configuring the WireGuard server in the GL.iNet UI took care of the server-side interface and peer scaffolding for me.
Treat the tunnel as its own zone
On the Brume 2 I keep the WireGuard server side grouped into its own firewall zone (I use wgserver), so it’s easy to reason about paths like “Management VLAN → VPN” or “Homelab VLAN → VPN.” This keeps the mental model clean: VLAN zones are internal networks, wgserver is the inter-site transit, and forwardings are the only doors between them.
Allow only the VLANs that should reach Site B
To keep the site‑to‑site link useful without flattening my segmentation, I only allow the tunnel to talk to two VLANs at Site A: VLAN 10 (management) and VLAN 30 (homelab). Concretely, that means adding forwardings in both directions:
wgserver → vlan10, vlan10 → wgserver, wgserver → vlan30, and vlan30 → wgserver.
The practical effect is exactly what I wanted: management clients in VLAN 10 can administer across sites, and lab workloads in VLAN 30 can reach lab services at Site B, while the other VLANs remain isolated from the tunnel by default.
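Whether you add them in LuCI or over SSH, the end state is four forwarding sections; in UCI form they look like this (zone names follow my setup):

config forwarding
	option src 'wgserver'
	option dest 'vlan10'

config forwarding
	option src 'vlan10'
	option dest 'wgserver'

config forwarding
	option src 'wgserver'
	option dest 'vlan30'

config forwarding
	option src 'vlan30'
	option dest 'wgserver'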
Common pitfall: “handshake works, but nothing routes”
If you ever see a healthy WireGuard handshake but can’t ping anything across sites, the firewall forwardings are one of the first things to double-check. With OpenWrt’s zone model, “the interface exists” doesn’t imply “traffic is permitted”—those inter-zone forwardings are the difference between a connected tunnel and a routed inter-site network.
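My usual triage order, using standard WireGuard and iproute2 tooling (10.22.30.1 stands in for any host in the Site B lab subnet):

# 1. Is the handshake recent, and are transfer counters moving?
wg show
# 2. Does a route into the tunnel exist for the remote subnet?
ip route get 10.22.30.1
# 3. If the route is right but this still fails, suspect zone forwardings
ping -c 3 10.22.30.1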
Travel access: two “road‑warrior” profiles on top of the site‑to‑site VPN
With the site‑to‑site tunnel in place, the last piece I wanted was the ability to reach everything when I’m away from home—without exposing any management services to the public internet. Instead of changing the site‑to‑site design, I simply added two more WireGuard profiles on the Brume 2: one for my GL.iNet Beryl AX travel router, and one for my daily-driver laptop, for the times I’m not using the travel router.
Why two profiles
The Beryl AX profile gives me a “VPN bubble” I can carry anywhere, and in my case its WAN comes from USB phone tethering (not hotel Wi‑Fi). That means I plug my phone into the Beryl, the Beryl treats the phone as its upstream internet, and then it brings up WireGuard back to the Brume 2—anything behind the Beryl can use the VPN without needing individual client configs.
The laptop profile is the fallback for those times I’m traveling light, or when I only need access from a single device. It’s also useful at home when I want to join via VPN from the laptop directly instead of routing through the travel router.
What these profiles need to reach
These travel profiles are classic road‑warrior configs: they should be able to reach my trusted admin and lab networks at Site A, and they should also include the Site B lab subnet so I can manage the backup location through the existing site‑to‑site tunnel. The key idea is that the Brume 2 remains the hub: once a road‑warrior client is connected to it, the Brume 2 can route the client into the allowed Site A VLANs (per firewall policy) and onward to Site B across the site‑to‑site link.
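On the client side, that translates into an AllowedIPs list covering the trusted Site A subnets plus the Site B lab. A hedged sketch of the peer section (the Site A subnets follow my addressing scheme and are illustrative):

[Peer]
PublicKey = <brume2-public-key>
Endpoint = site-a.example.com:51820
# Site A management VLAN, homelab VLAN, VPN transit net,
# and the Site B lab subnet reached via the site-to-site tunnel
AllowedIPs = 10.11.10.0/24, 10.11.30.0/24, 10.11.255.0/24, 10.22.30.0/24
PersistentKeepalive = 25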
Keeping access intentional (not “everything everywhere”)
I treat these travel profiles as privileged access, so I keep their reach aligned with my segmentation model: the goal is seamless admin and homelab access, not letting an untrusted travel network have a path into every VLAN. In practice, that means the travel profiles are designed to reach only the VLANs and subnets I explicitly manage (plus the Site B lab subnet), while everything else stays isolated unless I make a conscious decision to open it.
Lessons learned and future Kubernetes plans
This rebuild delivered what I wanted at the start: a network with strong segmentation at Site A, a real routed extension at Site B, and secure remote access when I’m away. The best part is that it’s predictable now—predictable addressing, predictable boundaries, and no more “I wonder what network this device is on.”
What worked better than my old “mostly works” setup
The biggest win was making the firewall the source of truth for segmentation. VLANs create the separate networks, but the forwardings between zones are what make (or break) isolation, so keeping those paths intentional gave me a clean “default deny between VLANs” posture while still allowing my management VLAN to reach what it needs.
The second win was treating the VPN the same way: not as a magic private backdoor, but as another zone with explicit forwardings. On the Brume 2 I leaned on the GL.iNet UI to handle the WireGuard server setup, and then I focused my effort where it matters: which VLANs are allowed to forward into the tunnel (and back).
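Under the hood, both wins rest on the same OpenWrt primitive: each internal zone’s forward policy stays at REJECT, so explicit forwarding sections are the only doors between networks. A sketch of one such zone (names illustrative):

config zone
	option name 'vlan30'
	option input 'ACCEPT'
	option output 'ACCEPT'
	# Nothing leaves this zone unless a 'config forwarding'
	# section explicitly allows it
	option forward 'REJECT'
	list network 'vlan30'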
Practical lessons I’m keeping for next time
Simple scaling beats cleverness. The “site is the second octet, VLAN is the third octet” scheme makes it simple to identify where an IP belongs (which site, which network) when you’re looking at routes, firewall rules, or client addresses.
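A few examples of how an address decodes under that scheme (Site A subnets illustrative; 10.22.30.0/24 is the Site B lab subnet from earlier):

10.11.10.25 → Site A (11), management VLAN 10
10.11.30.50 → Site A (11), homelab VLAN 30
10.22.30.50 → Site B (22), lab network 30
10.11.255.1 → Site A (11), WireGuard transit (255)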
Where Kubernetes fits next
This network redesign was always in service of the next phase: running a small multi-node Kubernetes cluster on low-power machines without turning the network into the bottleneck. I’ve got a pile of hardware waiting for that job—HP G3 mini PCs, a Raspberry Pi, and a few other older systems—and while I plan to run separate clusters per site, having the option to connect things across sites later (cleanly, without renumbering) is a nice capability to have.
The immediate plan is to start simple: get a cluster running on the Site A homelab VLAN, then treat Site B as its own independent lab environment. The nice thing is that the “network foundation” work is done—adding nodes should now feel like plugging machines into the right VLAN (or subnet) and letting routing and VPN do their job.