Building a Single-Page Homelab Dashboard That Shows CPU, GPU, RAM, Storage and Temps Across Multiple Proxmox VMs

When your GPU lives in one VM, your temps live on the hypervisor, and your services are spread across three hosts — no single tool shows the full picture. Here's a zero-dependency dashboard that stitches it all together.


The Problem

I run a Proxmox homelab on a mini PC with GPU passthrough to a dedicated VM. The GPU (NVIDIA RTX 5060 Ti) lives in one VM for AI/ML workloads, while general Docker services run in another VM, and the hypervisor itself handles CPU, RAM, and storage. Monitoring means SSH-ing into three different hosts and running separate commands — not ideal.

I wanted a single browser dashboard that shows everything at a glance: CPU usage with actual temps, GPU utilization and VRAM, host and VM memory, NVMe storage usage and temps, and load averages across all hosts. No heavyweight monitoring stack (Grafana + Prometheus + node_exporter), no databases, no agents to maintain — just a static HTML page that polls REST APIs.

Architecture Overview

┌──────────────────────────────────────────────────┐
│                Proxmox VE Host                    │
│                (bare metal)                       │
│  ┌──────────────┐  ┌──────────────┐              │
│  │ Glances :61208│  │ Storage API  │              │
│  │ (systemd)    │  │ :61209       │              │
│  │              │  │ (LVM stats)  │              │
│  └──────┬───────┘  └──────┬───────┘              │
│         │                 │                       │
│  ┌──────┴─────────────────┴──────┐               │
│  │        VM: Docker (general)    │               │
│  │  Glances :61208 (load only)   │               │
│  │  Services: n8n, Portainer...  │               │
│  └────────────────────────────────┘               │
│  ┌────────────────────────────────┐               │
│  │    VM: Docker-GPU (AI/ML)      │               │
│  │  Glances :61208 (GPU + mem)   │               │
│  │  RTX 5060 Ti via VFIO/PCIe   │               │
│  │  Services: Ollama, Immich...  │               │
│  └────────────────────────────────┘               │
│  ┌────────────────────────────────┐               │
│  │      VM: Devbox                │               │
│  │  Static HTML dashboard        │               │
│  │  Served via Caddy :80         │               │
│  └────────────────────────────────┘               │
└──────────────────────────────────────────────────┘
         ↑                    ↑
    LAN browser          Caddy reverse
    fetches APIs         proxy (*.lan)

Key components:

| Component | Role |
| --- | --- |
| Proxmox VE | Bare metal hypervisor — only host with hardware sensors (CPU temp, DIMM temp, NVMe temp) |
| Docker VM | General services (no GPU) — provides load average via Glances |
| Docker-GPU VM | GPU passthrough VM — provides GPU metrics, VM memory, load average via Glances |
| Devbox VM | Serves the static dashboard HTML via Caddy |
| Caddy LXC | Reverse proxy — monitor.lan routes to devbox |
| MikroTik router | Wildcard DNS *.lan → Caddy for all internal services |

Glances Setup

Glances is a cross-platform monitoring tool written in Python. Critically, it includes a REST API out of the box when run in web mode (-w). This is what makes the whole dashboard possible — no agents, no exporters, just HTTP GET requests.

Installing Glances

On each host:

# Install with web dependencies (FastAPI + uvicorn)
# --break-system-packages is needed on distros that mark Python as externally managed
sudo pip install "glances[web]" --break-system-packages

# If pip still complains about a distro-owned typing_extensions package:
sudo pip install "glances[web]" --break-system-packages --ignore-installed typing_extensions

Glances Configuration

Create /etc/glances/glances.conf — disable everything you don't need to keep the API responses lean:

Hypervisor (full monitoring):

[global]
refresh=2
check_update=false

# Disable noisy plugins
[processlist]
disable=True
[network]
disable=True
[diskio]
disable=True
[wifi]
disable=True
[ports]
disable=True
[folders]
disable=True
[cloud]
disable=True
[containers]
disable=True

This leaves the important plugins enabled: cpu, mem, gpu, load, sensors, fs, system, uptime.
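A quick way to check what a given instance is actually serving is the plugins list endpoint (part of the Glances 4 REST API; whether disabled plugins still appear in the list can vary by version, so spot-checking an individual endpoint is the surer test):

curl -s http://localhost:61208/api/4/pluginslist | jq
# Returns a JSON array of plugin names exposed by this instance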

VM (minimal — load average only):

[global]
refresh=2
check_update=false

# Disable everything except cpu, mem, load, system
[processlist]
disable=True
[network]
disable=True
[diskio]
disable=True
[fs]
disable=True
[sensors]
disable=True
[gpu]
disable=True
[wifi]
disable=True
[ports]
disable=True
[folders]
disable=True
[cloud]
disable=True
[containers]
disable=True

Systemd Service

Create /etc/systemd/system/glances.service:

[Unit]
Description=Glances system monitoring
After=network.target

[Service]
ExecStart=/usr/local/bin/glances -w -B 0.0.0.0
Restart=on-failure
RestartSec=10

[Install]
WantedBy=multi-user.target
Then reload systemd and enable the service:
sudo systemctl daemon-reload
sudo systemctl enable --now glances

Glances now exposes its REST API on port 61208. Test it:

curl -s http://localhost:61208/api/4/cpu | jq
# Returns: { "total": 2.3, "cpucore": 32, ... }

curl -s http://localhost:61208/api/4/sensors | jq
# Returns: [{ "label": "Tctl", "value": 55.0, "unit": "C" }, ...]

Key API Endpoints

| Endpoint | Returns |
| --- | --- |
| /api/4/cpu | total (usage %), cpucore (core count) |
| /api/4/mem | used, total, percent |
| /api/4/gpu | Array of GPUs: name, proc, mem, temperature, fan_speed |
| /api/4/load | min1, min5, min15 |
| /api/4/sensors | Array: label, value, unit (CPU temp, DIMM temp, NVMe temp) |

Important: Only bare metal hosts return useful sensor data. VMs have no hardware sensors — /api/4/sensors returns an empty array.
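For reference, here's roughly what the GPU endpoint returns. The field names match what the dashboard below reads; the values are illustrative, and your Glances and driver versions may add extra fields:

curl -s http://YOUR-GPU-VM-IP:61208/api/4/gpu | jq
# [
#   {
#     "gpu_id": 0,
#     "name": "NVIDIA GeForce RTX 5060 Ti",
#     "proc": 0.0,
#     "mem": 3.7,
#     "temperature": 30,
#     "fan_speed": 0
#   }
# ]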

CORS

Glances automatically adds Access-Control-Allow-Origin: * when the browser sends an Origin header. No configuration needed — cross-origin fetch from your dashboard just works.
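You can confirm this from the command line by sending an Origin header yourself and checking the response headers (the origin value doesn't matter for the test):

curl -s -i -H "Origin: http://monitor.lan" http://YOUR-PVE-IP:61208/api/4/cpu | grep -i access-control
# Should include: access-control-allow-origin: *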

The LVM Thin Pool Problem

Proxmox uses LVM thin provisioning for VM disks. The Glances fs plugin only reports mounted filesystems (like / or /boot), not the thin pool itself. To show how much of the 2TB NVMe is actually used by VMs and containers, I needed a different approach.
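The numbers do exist on the hypervisor: lvs reports the thin pool's size and fill percentage directly. This is exactly the command the API below wraps (sample output is illustrative, matching the 1754 GB / 34.2% pool used elsewhere in this post):

lvs --noheadings --nosuffix --units g -o lv_size,data_percent pve/data
#   1754.00 34.25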

Solution: A 30-Line Python HTTP Server

Create a tiny API on the hypervisor that wraps the lvs command:

#!/usr/bin/env python3
"""Tiny HTTP server exposing LVM thin pool stats."""
from http.server import HTTPServer, BaseHTTPRequestHandler
import subprocess, json

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        try:
            result = subprocess.run(
                ["lvs", "--noheadings", "--nosuffix", "--units", "g",
                 "-o", "lv_size,data_percent", "pve/data"],
                capture_output=True, text=True, timeout=5
            )
            parts = result.stdout.strip().split()
            size_gb = float(parts[0])
            pct = float(parts[1])
            used_gb = size_gb * pct / 100
            data = {
                "size_gb": round(size_gb, 1),
                "used_gb": round(used_gb, 1),
                "percent": round(pct, 1)
            }
        except Exception as e:
            data = {"error": str(e)}
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Access-Control-Allow-Origin", "*")
        self.end_headers()
        self.wfile.write(json.dumps(data).encode())

    def log_message(self, *args):
        pass  # Suppress request logging

HTTPServer(("0.0.0.0", 61209), Handler).serve_forever()

Deploy as a systemd service on port 61209:

[Unit]
Description=PVE Storage API (LVM thin pool stats)
After=network.target

[Service]
ExecStart=/usr/bin/python3 /usr/local/bin/pve-storage-api.py
Restart=on-failure

[Install]
WantedBy=multi-user.target
Enable it the same way, then test:
curl -s http://your-pve-ip:61209/ | jq
# { "size_gb": 1754.0, "used_gb": 600.7, "percent": 34.2 }

The Dashboard

The entire dashboard is a single HTML file — no build tools, no npm, no framework. It uses fetch() with AbortSignal.timeout(3000) for resilience and polls every 2 seconds.

Data Flow

Browser (on LAN)
  ├── fetch → PVE:61208/api/4/cpu,mem,load,sensors
  ├── fetch → Docker:61208/api/4/load
  ├── fetch → Docker-GPU:61208/api/4/cpu,mem,gpu,load
  └── fetch → PVE:61209/ (LVM thin pool)
       ↓
  Render unified layout:
  ┌─────────────────────────┐
  │ CPU — 32 cores          │
  │ Usage: 1.5%  ████░░░░░░ │
  │ Temp:              55°C  │
  ├─────────────────────────┤
  │ GPU                      │
  │ NVIDIA RTX 5060 Ti       │
  │ Proc:  0.0%  ░░░░░░░░░░ │
  │ VRAM:  3.7%  █░░░░░░░░░ │
  │ Temp:              30°C  │
  │ Fan:                 0%  │
  ├─────────────────────────┤
  │ RAM                      │
  │ pve   65/92 GB   71.1%  │
  │ gpu    5/31 GB   14.3%  │
  │ DIMM Temp:         41°C  │
  ├─────────────────────────┤
  │ SSD                      │
  │ Pool  601/1754 GB 34.2% │
  │ NVMe Temp:         42°C  │
  ├─────────────────────────┤
  │ Load Average             │
  │ pve    0.16/0.25/0.32   │
  │ docker 0.02/0.08/0.09   │
  │ gpu    0.01/0.12/0.06   │
  └─────────────────────────┘

Status Indicators

Three colored dots in the header show host connectivity:

  • Green: API responded within 3 seconds
  • Red: Connection failed or timed out

Color Coding

| Metric | Green | Yellow | Red |
| --- | --- | --- | --- |
| Usage bars | < 50% | 50-80% | > 80% |
| Temperatures | < 45°C | 45-65°C | > 65°C |

Complete Dashboard Source

<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Homelab Monitor</title>
<style>
* { margin: 0; padding: 0; box-sizing: border-box; }
body { background: #0a0a0a; color: #ccc; font-family: 'JetBrains Mono', 'Fira Code', 'Consolas', monospace; font-size: 13px; }
.container { max-width: 560px; margin: 0 auto; padding: 8px; }
.header { display: flex; align-items: center; gap: 8px; padding: 6px 0 10px; margin-bottom: 8px; }
.header h1 { font-size: 15px; font-weight: 600; color: #e0e0e0; }
.status-dots { display: flex; gap: 10px; margin-left: auto; }
.status-item { display: flex; align-items: center; gap: 4px; font-size: 10px; color: #666; }
.dot { width: 6px; height: 6px; border-radius: 50%; }
.dot.ok { background: #00e676; box-shadow: 0 0 3px #00e676; }
.dot.err { background: #ff1744; box-shadow: 0 0 3px #ff1744; }

.section { background: #111; border: 1px solid #262626; border-radius: 4px; padding: 8px 12px; margin-bottom: 5px; }
.section-title { font-size: 9px; color: #555; text-transform: uppercase; letter-spacing: 1.2px; margin-bottom: 5px; }
.row { display: flex; justify-content: space-between; align-items: center; padding: 1.5px 0; gap: 8px; }
.label { color: #888; white-space: nowrap; font-size: 12px; }
.value { font-weight: 600; white-space: nowrap; }
.bar-container { width: 100px; height: 5px; background: #1a1a1a; border-radius: 3px; margin-left: 6px; flex-shrink: 0; }
.bar { height: 100%; border-radius: 3px; transition: width 0.5s ease; }
.bar-row { display: flex; align-items: center; justify-content: flex-end; gap: 4px; flex-shrink: 0; }

.green { color: #00e676; }
.yellow { color: #ffc107; }
.red { color: #ff1744; }
.bar.green { background: #00e676; }
.bar.yellow { background: #ffc107; }
.bar.red { background: #ff1744; }
.dim { color: #444; }

.gpu-name { color: #76ff03; font-size: 10px; margin-bottom: 3px; }
.load-vals { color: #b0bec5; white-space: nowrap; font-size: 12px; }
.sub-label { color: #555; font-size: 10px; }

.refresh-info { text-align: center; padding: 4px; font-size: 9px; color: #333; }

@media (min-width: 600px) {
  body { font-size: 14px; }
  .container { max-width: 620px; padding: 12px; }
  .header h1 { font-size: 16px; }
  .section { padding: 10px 14px; margin-bottom: 6px; }
  .bar-container { width: 130px; }
  .label { font-size: 13px; }
  .load-vals { font-size: 13px; }
}
</style>
</head>
<body>

<div class="container">
  <div class="header">
    <h1>Homelab Monitor</h1>
    <div class="status-dots">
      <div class="status-item"><div class="dot" id="dot-pve"></div>pve</div>
      <div class="status-item"><div class="dot" id="dot-docker"></div>docker</div>
      <div class="status-item"><div class="dot" id="dot-gpu"></div>gpu</div>
    </div>
  </div>
  <div id="content"></div>
  <div class="refresh-info">auto-refresh 2s &middot; <span id="last-update"></span></div>
</div>

<script>
// ── CONFIGURE THESE ─────────────────────────────────────────
const PVE     = 'http://YOUR-PVE-IP:61208';        // Proxmox host Glances
const DOCKER  = 'http://YOUR-DOCKER-IP:61208';      // General Docker VM Glances
const GPU     = 'http://YOUR-GPU-VM-IP:61208';      // GPU VM Glances
const STORAGE = 'http://YOUR-PVE-IP:61209';          // LVM thin pool API
// ─────────────────────────────────────────────────────────────

function cc(pct) { return pct < 50 ? 'green' : pct < 80 ? 'yellow' : 'red'; }
function tc(t) { return t < 45 ? 'green' : t < 65 ? 'yellow' : 'red'; }
function fmtGB(b) { const g = b / (1024**3); return g >= 100 ? g.toFixed(0) : g.toFixed(1); }
function fmtL(v) { return v != null ? v.toFixed(2) : '-'; }

function barHTML(pct, label) {
  const c = pct != null ? cc(pct) : 'dim';
  const v = pct != null ? pct.toFixed(1) + '%' : '-';
  return `<div class="row"><span class="label">${label}</span>
    <div class="bar-row"><span class="value ${c}">${v}</span>
    <div class="bar-container"><div class="bar ${c}" style="width:${pct||0}%"></div></div></div></div>`;
}

function tempRow(label, val) {
  if (val == null) return '';
  return `<div class="row"><span class="label">${label}</span><span class="value ${tc(val)}">${val}&deg;C</span></div>`;
}

async function fj(url) {
  return (await fetch(url, { signal: AbortSignal.timeout(3000) })).json();
}

async function fa(base, eps) {
  const r = {};
  await Promise.all(eps.map(async e => {
    try { r[e] = await fj(`${base}/api/4/${e}`); } catch { r[e] = null; }
  }));
  return r;
}

async function poll() {
  const [pve, docker, gpu, storage] = await Promise.all([
    fa(PVE, ['cpu','mem','load','sensors']).catch(() => ({})),
    fa(DOCKER, ['load']).catch(() => ({})),
    fa(GPU, ['cpu','mem','gpu','load']).catch(() => ({})),
    fj(STORAGE).catch(() => null)
  ]);

  // Status dots
  document.getElementById('dot-pve').className = `dot ${pve.cpu ? 'ok' : 'err'}`;
  document.getElementById('dot-docker').className = `dot ${docker.load ? 'ok' : 'err'}`;
  document.getElementById('dot-gpu').className = `dot ${gpu.cpu ? 'ok' : 'err'}`;

  // Extract sensor data (bare metal only)
  let cpuTemp=null, dimmTemp=null, nvmeTemp=null;
  if (pve.sensors && Array.isArray(pve.sensors)) {
    const s = pve.sensors;
    // Adjust these labels to match YOUR hardware:
    const t = s.find(x => x.label==='Tctl');         // AMD CPU temp
    if (t) cpuTemp = t.value;
    const d = s.filter(x => x.label && x.label.startsWith('spd5118'));  // DDR5 DIMM temps
    if (d.length) dimmTemp = Math.max(...d.map(x => x.value));
    const n = s.find(x => x.label==='Composite');     // NVMe temp
    if (n) nvmeTemp = n.value;
  }

  const cpuPct = pve.cpu ? pve.cpu.total : null;
  const cores = pve.cpu ? pve.cpu.cpucore : '';
  const gpus = Array.isArray(gpu.gpu) ? gpu.gpu : [];
  const pm = pve.mem || {};
  const pmPct = pm.percent || 0;
  const gm = gpu.mem || {};
  const gmPct = gm.percent || 0;

  let h = '';

  // CPU section
  h += `<div class="section"><div class="section-title">CPU${cores ? ' &mdash; ' + cores + ' cores' : ''}</div>
    ${barHTML(cpuPct, 'Usage')}${tempRow('Temp', cpuTemp)}</div>`;

  // GPU section
  if (gpus.length) {
    h += `<div class="section"><div class="section-title">GPU</div>`;
    for (const g of gpus) {
      h += `<div class="gpu-name">${g.name||g.gpu_id}</div>`;
      h += barHTML(g.proc, 'Proc');
      h += barHTML(g.mem, 'VRAM');
      h += tempRow('Temp', g.temperature);
      if (g.fan_speed != null)
        h += `<div class="row"><span class="label">Fan</span><span class="value">${g.fan_speed}%</span></div>`;
    }
    h += `</div>`;
  }

  // RAM section
  h += `<div class="section"><div class="section-title">RAM</div>
    <div class="row"><span class="label">pve <span class="sub-label">${fmtGB(pm.used||0)} / ${fmtGB(pm.total||0)} GB</span></span>
      <div class="bar-row"><span class="value ${cc(pmPct)}">${pmPct.toFixed(1)}%</span>
      <div class="bar-container"><div class="bar ${cc(pmPct)}" style="width:${pmPct}%"></div></div></div></div>
    <div class="row"><span class="label">docker-gpu <span class="sub-label">${fmtGB(gm.used||0)} / ${fmtGB(gm.total||0)} GB</span></span>
      <div class="bar-row"><span class="value ${cc(gmPct)}">${gmPct.toFixed(1)}%</span>
      <div class="bar-container"><div class="bar ${cc(gmPct)}" style="width:${gmPct}%"></div></div></div></div>
    ${tempRow('DIMM Temp', dimmTemp)}</div>`;

  // SSD section
  let ssdUsage = '';
  if (storage && storage.size_gb) {
    ssdUsage = `<div class="row"><span class="label">Thin Pool <span class="sub-label">${storage.used_gb} / ${storage.size_gb} GB</span></span>
      <div class="bar-row"><span class="value ${cc(storage.percent)}">${storage.percent}%</span>
      <div class="bar-container"><div class="bar ${cc(storage.percent)}" style="width:${storage.percent}%"></div></div></div></div>`;
  }
  h += `<div class="section"><div class="section-title">SSD</div>
    ${ssdUsage}${tempRow('NVMe Temp', nvmeTemp)}</div>`;

  // Load Average section
  const pl = pve.load||{}, dl = docker.load||{}, gl = gpu.load||{};
  h += `<div class="section"><div class="section-title">Load Average</div>
    <div class="row"><span class="label">pve</span><span class="load-vals">${fmtL(pl.min1)} / ${fmtL(pl.min5)} / ${fmtL(pl.min15)}</span></div>
    <div class="row"><span class="label">docker</span><span class="load-vals">${fmtL(dl.min1)} / ${fmtL(dl.min5)} / ${fmtL(dl.min15)}</span></div>
    <div class="row"><span class="label">docker-gpu</span><span class="load-vals">${fmtL(gl.min1)} / ${fmtL(gl.min5)} / ${fmtL(gl.min15)}</span></div>
  </div>`;

  document.getElementById('content').innerHTML = h;
  document.getElementById('last-update').textContent = new Date().toLocaleTimeString();
}

poll();
setInterval(poll, 2000);
</script>
</body>
</html>

DNS & Reverse Proxy Setup

The .lan domain trick makes this seamless:

  1. Router: A single static DNS entry — regex .*\.lan$ resolves to the Caddy reverse proxy LXC
  2. Caddy: Routes monitor.lan to the devbox VM serving the static HTML
# Inside the :80 server block
@monitor host monitor.lan
handle @monitor {
    reverse_proxy YOUR-DEVBOX-IP:80
}

Any device on the LAN can now open http://monitor.lan — no IP addresses to remember, no port numbers.
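On a MikroTik router, the wildcard is a single static DNS entry with a regexp. The command below is a sketch of the RouterOS syntax (parameter names and escaping may need adjusting for your RouterOS version; the address is the Caddy LXC's IP):

# RouterOS terminal — one static DNS record catching every *.lan name
/ip dns static add regexp=".*\\.lan" address=YOUR-CADDY-LXC-IP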

GPU Passthrough Context

The reason we need metrics from multiple hosts is GPU passthrough. In a typical Proxmox setup:

  • The hypervisor sees CPU, RAM, NVMe, and motherboard sensors
  • The GPU VM sees the GPU (via VFIO passthrough) and its own allocated memory
  • Other VMs see their own memory and load but nothing about hardware

No single host has the complete picture. The dashboard solves this by aggregating all three perspectives.

What VFIO Passthrough Looks Like

# On the GPU VM
nvidia-smi
# Shows: RTX 5060 Ti 16GB, driver 580.x, CUDA 13.0

# On the hypervisor
lspci | grep -i nvidia
# Shows: IOMMU group, but no driver loaded (VFIO owns it)

The GPU VM runs Glances with the gpu plugin, which reads from nvidia-smi internally. The hypervisor's Glances can't see the GPU at all — it's passed through.

Lessons Learned

Sensor Labels Vary by Hardware

The Glances /api/4/sensors endpoint returns labels that depend on your specific hardware:

| Sensor | Label | Hardware |
| --- | --- | --- |
| AMD CPU temp | Tctl | AMD Ryzen (k10temp driver) |
| Intel CPU temp | Package id 0 | Intel Core (coretemp driver) |
| DDR5 DIMM temp | spd5118 0, spd5118 1 | DDR5 with SPD5118 hub |
| NVMe temp | Composite | Most NVMe drives |

Check your own labels first: curl -s http://YOUR-PVE:61208/api/4/sensors | jq '.[].label'

LVM Thin Pools Need Special Handling

Glances' fs plugin reports mounted filesystems, not LVM thin pools. If you use Proxmox's default storage layout (LVM thin pve/data), you need the custom storage API or a different approach (like parsing pvesm status output).
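If you'd rather avoid a custom service, pvesm status is the data source you'd parse instead. The output below is illustrative (sizes in KiB); the lvmthin row carries the same usage percentage the storage API reports:

pvesm status
# Name           Type     Status         Total          Used     Available        %
# local           dir     active      98497780      12345678      81054312   12.53%
# local-lvm   lvmthin     active    1839202304     629007188    1210195116   34.20%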

VMs Have No Hardware Sensors

This is obvious in retrospect but easy to forget. A VM's /api/4/sensors always returns []. All temperature data must come from the bare metal host.
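A quick check against any VM makes the point:

curl -s http://YOUR-VM-IP:61208/api/4/sensors
# []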

CORS Just Works

Glances automatically adds CORS headers when it detects an Origin header in the request. No configuration needed — your browser dashboard can fetch from any Glances instance on the LAN.

The iGPU Gotcha

If your Proxmox host has an integrated GPU (like AMD's Radeon iGPU), Glances will report it in /api/4/gpu. The dashboard sidesteps this by only querying the GPU endpoint on the passthrough VM, since the discrete NVIDIA card is the only one we care about. If you do pull GPU data from the hypervisor, check for and filter any unwanted entries.
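Before wiring up a host's GPU data, it's worth listing what its /api/4/gpu endpoint actually reports so you know whether an iGPU entry would sneak in:

curl -s http://YOUR-PVE-IP:61208/api/4/gpu | jq '.[].name'
# e.g. "AMD Radeon Graphics"  <- integrated GPU you probably want to ignore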

Adapting This for Your Setup

To use this dashboard in your homelab:

  1. Install Glances on each host you want to monitor (pip install "glances[web]")
  2. Create the systemd service on each host (same unit file everywhere)
  3. Deploy the storage API on your Proxmox host if you want LVM thin pool stats
  4. Edit the 4 URL constants at the top of the HTML file
  5. Update sensor labels in the JavaScript to match your hardware
  6. Serve the HTML from any web server on your LAN

The dashboard is entirely client-side — there's no backend, no state, no database. It runs anywhere: a Docker container with nginx, a Caddy file server, even python3 -m http.server.

If you have fewer hosts (say, just one Proxmox machine with no VMs), simplify by removing the multi-host fetches. If you have more hosts, add more fa() calls and extend the layout.

Final Result

A dark-themed, monospace-font dashboard that refreshes every 2 seconds showing:

  • CPU: Usage bar + core count + AMD Tctl temperature
  • GPU: NVIDIA RTX 5060 Ti — processor %, VRAM %, temperature, fan speed
  • RAM: Hypervisor memory + GPU VM memory + DIMM temperature
  • SSD: LVM thin pool usage bar (how much of the NVMe is allocated) + NVMe temperature
  • Load Average: 1/5/15 min for all three hosts

All from a single index.html with zero dependencies. Total infrastructure: three Glances services + one 30-line Python script.


Built on Proxmox VE with a Minisforum mini PC, RTX 5060 Ti via OCuLink eGPU, and way too much time spent staring at terminal fonts.
