A quantitative comparison of edge Kubernetes deployment topologies for large distributed infrastructures — built around a simulator you can drive with your own parameters.
A distributed edge deployment — thousands of small sites, each with a handful of servers, coordinated by a central management cluster — forces one question first: where does the Kubernetes control plane live? Five topologies are credible, each distributing control-plane components differently between the edge sites and the central cluster.
The trade-offs between robustness, latency, cost, and operational complexity aren't reducible to a single number — but they can be made quantitative. This page formalises the parameters, defines the metrics, and provides a live simulator. The simulator is the centerpiece: drive the parameters, watch the metrics shift.
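As a sketch of what "drive the parameters" means, the simulator's inputs can be modelled roughly as below; the names and fields are illustrative assumptions, not the simulator's actual API:

```ts
// Hypothetical shape of the simulator's input parameters.
interface ScenarioParams {
  sites: number;          // number of edge sites
  serversPerSite: number; // compute servers at each site
  mastersPerSite: number; // control-plane replicas where a site runs its own
  fCp: number;            // fraction of a shared server a co-located master consumes
  wanRttMs: number;       // round-trip time from a site to the central cluster
  localRttMs: number;     // round-trip time on the site's own LAN
}
```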
**(1) Fully autonomous.** Each edge site is a self-contained Kubernetes cluster with its own redundant control plane on dedicated master servers, plus workers. Maximum robustness, maximum overhead.

**(2) Shared autonomous.** Each site is self-contained, but master and worker components are co-located on the same servers. Same robustness, less hardware waste, identical operational complexity.

**(3) Headless (Kamaji).** Each tenant control plane runs as pods inside a central management cluster. Edge sites contain workers only. The hyperscaler-style hosted control plane.

**(4) Distributed autonomous.** A hybrid: one master runs locally as a fallback while the rest of the control plane runs remotely. Mixes patterns in a way that fails to maintain control-plane quorum during a partition.

**(5) Gigantic cluster.** A single Kubernetes cluster with masters in the cloud and every edge server as a worker. Operationally simple but hits scalability ceilings and exposes the maximum blast radius.
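The five topologies can be captured as data that the metric functions switch on. A minimal sketch, with trait values distilled from the descriptions above (the three-replica counts are assumptions consistent with the figures in the table below):

```ts
// Hypothetical per-topology traits driving the metrics.
type Topology =
  | "fully-autonomous"
  | "shared-autonomous"
  | "headless-kamaji"
  | "distributed-autonomous"
  | "gigantic-cluster";

interface TopologyTraits {
  dedicatedMastersPerSite: number; // m_ded: servers reserved for masters at each site
  sharedMastersPerSite: number;    // m_shared: servers acting as both master and worker
  survivesWanPartition: boolean;   // site keeps a writable control plane when cut off
}

const TRAITS: Record<Topology, TopologyTraits> = {
  "fully-autonomous":       { dedicatedMastersPerSite: 3, sharedMastersPerSite: 0, survivesWanPartition: true },
  "shared-autonomous":      { dedicatedMastersPerSite: 0, sharedMastersPerSite: 3, survivesWanPartition: true },
  "headless-kamaji":        { dedicatedMastersPerSite: 0, sharedMastersPerSite: 0, survivesWanPartition: false },
  "distributed-autonomous": { dedicatedMastersPerSite: 1, sharedMastersPerSite: 0, survivesWanPartition: false },
  "gigantic-cluster":       { dedicatedMastersPerSite: 0, sharedMastersPerSite: 0, survivesWanPartition: false },
};
```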
Sliders update the metrics in real time. Cells turn warning-coloured or danger-coloured when a model crosses a meaningful threshold for the chosen deployment scale.
(Values shown for a 200-site deployment with 8 servers per site.)

| Model | CP % | Fail-ok | WAN-ok | Lat (ms) | Blast (sites) | Max nodes/cluster | OpEx/y | Energy/y |
|---|---|---|---|---|---|---|---|---|
| (1) Fully autonomous | 37.6 | 1 | ✓ | 0.5 | 0 | 8 | €3.0M | 1.1 GWh |
| (2) Shared autonomous | 7.7 | 1 | ✓ | 0.5 | 0 | 8 | €2.6M | 215 MWh |
| (3) Headless (Kamaji) | 3.8 | 1 | ✗ | 20.5 | 200 | 8 | €1.0M | 110 MWh |
| (4) Distributed autonomous | 15.8 | 0 | ✗ | 6.5 | 200 | 8 | €1.2M | 461 MWh |
| (5) Gigantic cluster | 0.2 | 1 | ✗ | 20.5 | 200 | 1,600 | €30k | 5 MWh |
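The warning/danger colouring described above reduces to a threshold check per metric. A minimal sketch; the threshold numbers are illustrative, not the simulator's actual settings:

```ts
type Severity = "ok" | "warning" | "danger";

// Classify a metric value against its warning and danger thresholds.
function classify(value: number, warnAt: number, dangerAt: number): Severity {
  if (value >= dangerAt) return "danger";
  if (value >= warnAt) return "warning";
  return "ok";
}

// Example: flag cluster size against Kubernetes' ~5,000-node practical ceiling.
const nodeCountSeverity = classify(1_600, 2_500, 5_000); // "ok" at the defaults shown
```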
**CP %.** Total master footprint across the deployment. $m_{\text{ded}}$ is the number of servers dedicated to master nodes, while $m_{\text{shared}}$ is the count of servers acting as both master and worker. $M_e$ is the footprint per edge site (co-located masters count at $f_{cp} = 0.2$). $M_{\text{hosting}}$ is the number of central servers needed to host tenant control planes.
Master server-equivalents as a percentage of all compute servers. Note: the central cluster is assumed to contain only its own master nodes plus the workers needed to run tenant control planes, with no additional static workers.
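As a formula (a reconstruction consistent with the definitions above; $N$ for the number of edge sites and $S$ for servers per site are introduced here, and $M_{\text{central}}$ lumps the central cluster's masters together with $M_{\text{hosting}}$):

$$
M_e = m_{\text{ded}} + f_{cp}\,m_{\text{shared}},
\qquad
\mathrm{CP\,\%} = \frac{N \cdot M_e + M_{\text{central}}}{N \cdot S + M_{\text{central}}} \times 100
$$

As a sanity check against the table's defaults (which imply $N = 200$, $S = 8$): three dedicated masters per site plus a three-server central cluster give $603/1603 \approx 37.6\,\%$, matching row (1); three co-located masters at $f_{cp} = 0.2$ give $123/1603 \approx 7.7\,\%$, matching row (2).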
**Fail-ok.** Server failures the site can absorb before its control plane loses write availability.

**WAN-ok.** Whether the edge site keeps a writable control plane during a WAN partition.

**Lat (ms).** Round-trip to a control-plane API server, accounting for an optional local cache.
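One plausible formalisation, assuming reads are served by a local cache with hit ratio $h$ (the split and the value of $h$ are assumptions introduced here, not taken from the simulator):

$$
L = h \cdot L_{\text{local}} + (1 - h) \cdot L_{\text{WAN}}
$$

With $L_{\text{local}} = 0.5$ ms, $L_{\text{WAN}} = 20.5$ ms and $h = 0.7$, this reproduces the 6.5 ms shown for model (4).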
**Blast (sites).** How many edge sites lose control-plane writes if the central cluster fails.

**Max nodes/cluster.** Node count of the largest single cluster; Kubernetes hits a practical scalability ceiling around 5,000 nodes.

**OpEx/y.** Yearly operating cost, driven by per-cluster ops scales that encode operational difficulty: Kamaji-managed clusters are homogeneous (0.5–0.6), bespoke clusters are heavy (1.3–1.5), and the one gigantic cluster is the heaviest of all (3.0).

**Energy/y.** Energy consumed by control-plane infrastructure alone (excludes worker load).
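To make the ops-scale idea concrete, here is a minimal sketch of how yearly OpEx could fall out of it; the per-unit cost constant and the shape of the function are assumptions introduced here for illustration:

```ts
// Hypothetical OpEx model: clusters operated x per-cluster ops scale x unit cost.
function yearlyOpEx(
  clusters: number,       // clusters to operate: 200 edge clusters, or 1 gigantic one
  opsScale: number,       // 0.5-0.6 Kamaji-managed, 1.3-1.5 bespoke, 3.0 gigantic
  costPerOpsUnit: number, // euros per ops-scale unit per cluster per year (assumed)
): number {
  return clusters * opsScale * costPerOpsUnit;
}

yearlyOpEx(200, 0.6, 8_500); // 1,020,000: 200 Kamaji clusters, near row (3)'s €1.0M
yearlyOpEx(1, 3.0, 8_500);   // 25,500: one gigantic cluster, in the ballpark of row (5)'s €30k
```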
The model assumes uniform edge sites (real deployments rarely are uniform), ignores correlated failures (a single fibre cut can take down many sites at once), doesn't model storage, and has no concept of Availability Zones, which usually apply to cloud deployments.