Blueprints
Validated operating patterns with explicit environment posture, guardrails, and versioned implementation surfaces.
A blueprint is not a one-off environment file. It composes stable module contracts with policy profiles, environment state, and replaceable implementation packs so the same operating pattern can be reused cleanly across shared foundations, drills, staging, and production lanes.
Operating posture
Blueprints stay stable; environments, guardrails, and packaging surfaces change around them.
Blueprint refs remain stable while state, secrets, approvals, and runtime context are isolated per environment.
- The same blueprint can drive shared, dev, drill, staging, prod, QA, or customer-specific lanes.
- Live and drill cutovers stay separated by environment state instead of branching the blueprint itself.
- Environment naming is not limited to dev, staging, and prod.
Execution policy is applied around the blueprint contract, not hardcoded into every implementation path.
- Naming, backend selection, timeouts, and connectivity expectations are policy-driven.
- Manual gates, cost controls, and validation depth can vary by environment without changing the blueprint ref.
- Every run still resolves through the same contract chain: module, driver, profile, pack, and probes.
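As a hedged sketch of how that contract chain might resolve for one run (the field names, pack version, and profile keys here are illustrative, not the actual schema):

```yaml
# Hypothetical run resolution: field names are illustrative, not the real schema.
blueprint: onprem/postgresql-ha@v1   # stable blueprint ref, shared across lanes
environment: drill                   # selects isolated state, secrets, runtime context
profile:                             # policy applied around the contract, per environment
  naming_prefix: drill-
  backend: local
  timeouts: { apply: 30m }
  manual_gates: [promote]
  validation_depth: full
resolution:                          # the contract chain every run passes through
  module: platform/onprem/postgresql-ha
  driver: terraform
  pack: org-default@v3               # replaceable implementation pack (hypothetical version)
  probes: [replication_lag, checksum]
```

Changing the environment or profile changes policy and state, but the blueprint ref and the module it resolves to stay the same.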
Blueprints remain the operating pattern while implementation assets are versioned on the surfaces they belong to.
- HybridOps Core ships the runtime, module contracts, and blueprint definitions.
- Terraform module sources can be consumed from registry or Git-backed module repos.
- Ansible collections are packaged for Galaxy distribution rather than kept as local ad hoc scripts.
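A collection packaged for Galaxy carries its metadata in `galaxy.yml`; as a sketch, with a hypothetical namespace and collection name:

```yaml
# galaxy.yml: namespace, name, and dependency pin are hypothetical examples
namespace: hybridops
name: onprem_foundation
version: 1.0.0
readme: README.md
authors:
  - Platform Team
description: Roles and modules consumed by blueprint implementation packs
dependencies:
  community.general: ">=8.0.0"
```

Versioning the collection on Galaxy keeps implementation assets on their own release surface instead of pinning them to a single blueprint checkout.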
Featured Blueprints
Complete, end-to-end blueprint paths with walkthroughs and run records.
NetBox-backed IPAM, Proxmox SDN, and foundation services brought up through one controlled path before higher-order services are layered on top.
Hetzner VyOS edge pair, on-prem site extension, and GCP hub routing combined into one reusable WAN control plane.
Backup-driven PostgreSQL HA failover to GCP and controlled failback on-prem with isolated drill lanes and checksum validation.
Managed Cloud SQL standby, controlled promote, and isolated failback for teams that want a lower-operations cloud DR lane.
On-prem RKE2 control plane and worker pool with clean kubeconfig handoff, ready for Argo CD and workload layering.
Reference Blueprints
Current reference blueprints organized by deployment scope.
On-prem
- onprem/bootstrap-netbox@v1: SDN + NetBox Bootstrap
- onprem/authoritative-foundation@v1: Authoritative Foundation
- onprem/postgresql-ha@v1: PostgreSQL HA (Patroni + etcd)
- onprem/rke2@v1: RKE2 Cluster
- onprem/rke2-workloads@v1: RKE2 + ArgoCD Workloads
- onprem/netbox-ha-cutover@v1: NetBox DB HA Cutover
- onprem/eve-ng@v1: EVE-NG Lab Platform
Networking and GCP
- networking/wan-hub-edge@v1: WAN Hub to Edge (GCP + Hetzner)
- networking/edge-control-plane@v1: Edge Control Plane
- networking/hetzner-vyos-edge@v1: Hetzner VyOS Edge
- networking/onprem-vyos-edge@v1: On-Prem VyOS Edge
- networking/onprem-site-extension@v1: On-Prem Site Extension
- networking/gcp-ops-runner@v1: GCP Ops Runner
- networking/onprem-ops-runner@v1: On-Prem Ops Runner
- networking/powerdns-shared-primary@v1: PowerDNS Shared Primary
- networking/powerdns-onprem-secondary@v1: PowerDNS On-Prem Secondary
- gcp/gke-burst@v1: GKE Burst Cluster
- gcp/eve-ng@v1: EVE-NG on GCP
Disaster recovery
- dr/postgresql-ha-backup-gcp@v1: PostgreSQL Backup to GCS
- dr/postgresql-ha-failover-gcp@v1: PostgreSQL Failover to GCP
- dr/postgresql-ha-failback-onprem@v1: PostgreSQL Failback to On-Prem
- dr/postgresql-cloudsql-standby-gcp@v1: Cloud SQL Standby in GCP
- dr/postgresql-cloudsql-promote-gcp@v1: Cloud SQL Promote in GCP
- dr/postgresql-cloudsql-failback-onprem@v1: Cloud SQL Failback to On-Prem
Full blueprint step sequences, module compositions, and runbook links are in the blueprint index.
Module Catalog
Core module families behind the current validated platform paths.
SDN, virtual networks, WAN hubs, edge foundations, and image lifecycle across on-prem, GCP, Azure, AWS, and Hetzner.
- core/onprem/network-sdn
- core/azure/vnet
- org/gcp/wan-hub-network
- org/hetzner/vyos-edge-foundation
- org/gcp/wan-vpn-to-edge
- core/onprem/template-image
PostgreSQL HA, RKE2 Kubernetes, ArgoCD, NetBox IPAM, edge observability, and DNS routing.
- platform/onprem/postgresql-ha
- platform/onprem/rke2-cluster
- platform/k8s/argocd-bootstrap
- platform/onprem/netbox
- platform/network/edge-observability
- platform/network/decision-service
pgBackRest to GCS/S3, Cloud SQL replication, object storage repos, and DNS-based failover.
- platform/onprem/postgresql-ha-backup
- org/gcp/cloudsql-postgresql
- org/gcp/cloudsql-external-replica
- org/gcp/object-repo
- platform/network/dns-routing
Full module contracts, lifecycle runbooks, and input/output references are in the module index. Implementation packaging lives on the appropriate surfaces: source in GitHub, Terraform modules in registry or Git-backed module repos, and Ansible collections through Galaxy.
Governed execution model
The same blueprint can operate across environments because policy, state, and execution are separated cleanly.
Blueprints bind to isolated environment state, not a fixed dev/staging/prod assumption.
Profiles apply policy for naming, approvals, connectivity, cost, and validation depth.
Drivers isolate workdirs and produce structured logs, state, and operator-facing verification outputs.
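A driver's structured outputs might look like the following run record; this is a sketch with illustrative field names and paths, not the actual format:

```yaml
# Hypothetical structured run record; real field names and paths may differ.
run_id: a1b2c3                        # hypothetical identifier
blueprint: dr/postgresql-ha-failover-gcp@v1
environment: drill
workdir: .runs/drill/a1b2c3/          # isolated per run, never shared across lanes
status: succeeded
artifacts:
  logs: .runs/drill/a1b2c3/run.log
  state: .runs/drill/a1b2c3/terraform.tfstate
verification:                         # operator-facing probe results
  - probe: replication_lag
    result: pass
  - probe: checksum
    result: pass
```

Keeping workdir, logs, and state under one isolated run path is what lets drill and live runs of the same blueprint coexist without interference.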
Full topology: on-prem primary runtime, always-on edge decisioning, event-driven cloud burst and DR.
Prometheus scrapes on-prem cluster metrics and remote-writes them to the Thanos edge receiver for a global view.
The Decision service evaluates policy rules against aggregated Thanos metrics. If thresholds breach, it emits action signals.
DNS cutover module executes the traffic shift. Structured run records are written to external object storage.
Cloud target cluster activates (warm or cold), DR data promotes, and failover ingress begins receiving traffic.
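The failover sequence above could be expressed as a decision-service policy rule along these lines; the rule schema, metric names, and signal names are assumptions for illustration:

```yaml
# Hypothetical decision-service rule; schema, metric, and signal names are illustrative.
rule: onprem-primary-unhealthy
query: avg_over_time(up{job="onprem-ingress"}[5m])  # evaluated against aggregated Thanos metrics
condition: "< 0.5"
for: 10m                        # threshold must breach for a sustained window before acting
actions:
  - signal: dns-cutover         # consumed by the DNS cutover module
    target: failover-ingress
  - signal: activate-cloud-target
    mode: warm                  # warm or cold activation of the cloud target cluster
records:
  sink: object-storage          # structured run records written externally
```

Expressing the thresholds as policy keeps the trigger logic reviewable and per-environment tunable, while the modules it signals stay unchanged.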
For detailed signal and control mapping, see the ADR overview. The execution model page explains the contract chain in detail.
WAN topology
Hetzner edge pair, BGP peering to GCP hub, and HA VPN tunnels — as deployed by the networking blueprints.
Three-zone WAN topology connecting on-prem workloads through a Hetzner edge pair to the GCP cloud hub. BGP route exchange and HA VPN tunnels provide redundant, automatically converging connectivity.
Workload hosts, management network, and VLAN segments on the on-prem site. Routes are advertised via eBGP to the Hetzner edge pair for onward transit.
Primary/secondary edge pair with floating IP for automatic failover. Terminates IPsec and WireGuard VPN tunnels to GCP. BGP sessions maintained across both tunnels.
Cloud Router peers with the edge pair via eBGP over HA VPN. Dynamic routes propagate on-prem prefixes into the VPC. Cloud DNS handles failover routing policy.
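As a hedged illustration of what the networking blueprints parameterize across the three zones, a variables sketch might look like this; every ASN, prefix, address, and name below is a placeholder, not a deployed value:

```yaml
# Placeholder values throughout: ASNs, prefixes, addresses, and names are illustrative only.
edge_pair:
  nodes: [vyos-edge-1, vyos-edge-2]
  floating_ip: 203.0.113.10        # failover VIP (documentation address range)
  asn: 65010                       # private ASN for the edge pair
onprem_site:
  asn: 65020
  advertised_prefixes: [10.10.0.0/16]  # workload, management, and VLAN segments
gcp_hub:
  cloud_router_asn: 64514          # hypothetical private ASN for Cloud Router
  tunnels:                         # redundant tunnels carrying the BGP sessions
    - { name: tunnel-0, transport: ipsec }
    - { name: tunnel-1, transport: wireguard }
```

With both BGP sessions up, losing one tunnel or one edge node converges automatically; the floating IP covers the remaining single-node failure case.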