Community edition now available

Deterministic Automation for Hybrid Infrastructure

Validated operating paths across Proxmox, Hetzner, and GCP: authoritative on-prem foundation, hybrid WAN, GitOps-ready Kubernetes, and both self-managed and managed PostgreSQL DR.

NetBox-backed IPAM • PostgreSQL HA • RKE2 • VyOS edge • Cloud SQL DR

The public site stays concise. Walkthroughs, runbooks, and detailed run paths live on the docs site.

HybridOps execution model

Module contracts coordinate execution via driver, profile, and pack primitives.

  • Modules: declarative intent
  • Driver: execution engine
  • Profile: policy defaults
  • Pack: tool plan bundle

Module contracts stay stable. Profiles carry policy. Packs stay replaceable.
5 • Current validated platform walkthroughs
2 • PostgreSQL DR lanes proven end to end
54 • Runtime modules in the current core tree
3 • Proven surfaces: on-prem, edge, and cloud

Already proven

These are the platform stories the site leads with. Each links to its full walkthrough on the docs site.

01
Authoritative on-prem foundation

Stand up NetBox-backed IPAM, Proxmox SDN, and foundation services from a clean on-prem environment with deterministic state and repeatable rebuilds.

02
PostgreSQL HA with rehearsed failover and failback

Run PostgreSQL HA on-prem with Patroni and pgBackRest, restore the cluster into GCP during a DR event, and fail back on-prem with checksum-verified application data.

03
Hybrid WAN edge and site extension

Use a Hetzner-hosted VyOS edge pair as the public WAN anchor, extend on-prem routes through site-extension tunnels, and exchange prefixes with a GCP hub over redundant BGP sessions.

04
GitOps-ready Kubernetes foundation

Bring up an on-prem RKE2 control plane and worker pool with a clean kubeconfig handoff, then layer workloads and GitOps on top without changing the underlying execution model.

05
Managed PostgreSQL DR with Cloud SQL

Establish a managed Cloud SQL standby from the on-prem PostgreSQL HA source, promote it under control, and fail back into an isolated on-prem lane without touching the live service path.

Capabilities

A compact operating surface that stays stable as environments, policy, and tooling evolve.

DR drills
Rehearsed failover and failback with structured run records, redacted logs, and a clear verification chain.
Controlled burst
Provision burst capacity only when signals and policy allow. Keep cloud spend event-driven.
Policy profiles
Centralize backends, approvals, connectivity expectations, and toolchain policy without leaking any of it into module intent.
Replaceable packs
Swap Terraform (via Terragrunt), Ansible, or Packer plans without breaking the module contract.
Validation
Deterministic input merge with module-owned validators and preflight checks before execution runs.
Run records
Every run persists deterministic results, redacted logs, and published outputs that can back an incident review or audit.
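The Validation capability above can be sketched as a layered merge: later layers override earlier ones in a fixed precedence order, then a module-owned validator fails fast before any execution runs. The layer names, the precedence order (profile defaults, then module defaults, then runtime inputs), and the validation rules below are illustrative assumptions, not the actual hyops implementation.

```python
# Illustrative sketch of a deterministic input merge.
# Assumed precedence (lowest to highest):
# profile defaults < module defaults < runtime inputs.

def merge_inputs(*layers: dict) -> dict:
    """Merge dict layers left to right; later layers win on key conflicts."""
    merged: dict = {}
    for layer in layers:
        merged.update(layer)
    return merged

def validate(inputs: dict) -> None:
    """Module-owned validator: fail fast, before anything executes."""
    if inputs.get("postgresql_version") not in (15, 16):
        raise ValueError("unsupported postgresql_version")
    if inputs.get("inventory_requires_ipam") and not inputs.get("ipam_backend"):
        raise ValueError("IPAM required but no ipam_backend configured")

profile_defaults = {"apply_mode": "manual", "ipam_backend": "netbox"}
module_defaults = {
    "apply_mode": "auto",
    "patroni_cluster_name": "postgres-ha",
    "postgresql_version": 16,
    "inventory_requires_ipam": True,
}
runtime_inputs = {"patroni_cluster_name": "postgres-ha-lab"}

resolved = merge_inputs(profile_defaults, module_defaults, runtime_inputs)
validate(resolved)
print(resolved["apply_mode"], resolved["patroni_cluster_name"])
# → auto postgres-ha-lab
```

Because the merge is a pure function of its ordered inputs, the same layers always resolve to the same effective configuration, which is what makes the published run record reproducible.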
Representative runtime record

Drivers run in isolated workdirs and persist reviewable output under the runtime root. Redaction is on by default.

<runtime-root>/logs/module/<module_id>/<run_id>/
  driver_meta.json
  driver_result.json
  inputs_runtime.json
  workspace_policy.json
  backend_binding.json
  terragrunt.log
  stdout.redacted
  stderr.redacted

Docs and runbooks describe the wiring. The operating surface stays stable even when the toolchain behind it changes.
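The stdout.redacted and stderr.redacted entries in the listing above imply that log streams are scrubbed before they are persisted. A minimal sketch of that idea follows; the secret patterns and the "[REDACTED]" token are assumptions for illustration, not hyops' actual redaction rules.

```python
import re

# Illustrative redaction pass: scrub secret-bearing text before a log
# stream is persisted as stdout.redacted / stderr.redacted.
# Patterns and replacement token are assumptions, not hyops' rules.
SECRET_PATTERNS = [
    re.compile(r"(?i)(password|token|secret|api[_-]?key)\s*[:=]\s*\S+"),
    re.compile(r"(?i)bearer\s+[a-z0-9._-]+"),
]

def redact(stream: str) -> str:
    """Replace every match of a known secret pattern with a fixed token."""
    for pat in SECRET_PATTERNS:
        stream = pat.sub("[REDACTED]", stream)
    return stream

raw = "connecting...\npassword=s3cret\nAuthorization: Bearer abc.def\n"
print(redact(raw))
```

Redacting at write time, rather than at review time, means the raw secrets never reach the run record at all, so the record can be shared for incident review or audit as-is.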

Execution model

A strict boundary between intent, policy, and implementation.

ModuleSpec stays clean

Intent only. No backend logic, tool flags, or secret material.

api_version: hybridops/v1
kind: ModuleSpec
module_ref: platform/onprem/postgresql-ha

inputs:
  defaults:
    apply_mode: auto
    patroni_cluster_name: postgres-ha
    postgresql_version: 16
    dcs_type: etcd
    inventory_requires_ipam: true
requirements:
  credentials: []
execution:
  driver: config/ansible
  profile: onprem-linux@v1.0
  pack_ref:
    id: onprem/common/platform/35-postgresql-ha@v1.0
outputs:
  publish:
    - pg_host
    - cluster_vip
    - endpoint_dns_name
    - apps
    - cap.db.postgresql_ha

Profiles carry policy. Packs carry implementation. ModuleSpec remains tool-agnostic.
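The boundary described here, intent only with no backend logic or tool flags, is the kind of contract that can be checked mechanically. A sketch of such a check, assuming the spec has already been parsed into a dict; the required-key set mirrors the example above, while the forbidden-key heuristic is purely illustrative.

```python
# Illustrative contract check over a parsed ModuleSpec. Required keys
# mirror the example spec; the FORBIDDEN set is an assumed heuristic
# for keeping implementation details out of intent, not hyops' rules.
REQUIRED = {"api_version", "kind", "module_ref", "execution"}
FORBIDDEN = {"backend", "tool_flags", "secrets"}

def check_modulespec(spec: dict) -> list[str]:
    """Return a list of contract violations; an empty list means OK."""
    errors = []
    for key in REQUIRED - spec.keys():
        errors.append(f"missing required key: {key}")
    for key in FORBIDDEN & spec.keys():
        errors.append(f"implementation detail leaked into intent: {key}")
    if spec.get("kind") != "ModuleSpec":
        errors.append("kind must be ModuleSpec")
    return errors

spec = {
    "api_version": "hybridops/v1",
    "kind": "ModuleSpec",
    "module_ref": "platform/onprem/postgresql-ha",
    "execution": {
        "driver": "config/ansible",
        "profile": "onprem-linux@v1.0",
        "pack_ref": {"id": "onprem/common/platform/35-postgresql-ha@v1.0"},
    },
}
print(check_modulespec(spec))  # an empty list means the contract holds
```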

Drivers produce run records

01 ModuleSpec: intent contract (inputs, constraints, probes)
02 Driver: execution engine and isolation boundary
03 Profile: policy and defaults (governance)
04 Pack: replaceable tool plan bundle
05 Evidence: deterministic outputs and provenance

Inputs: merged with deterministic precedence, then validated.
Workdir: packs are copied into an isolated workdir under the runtime root.
Run record: stable paths with redaction by default and deterministic published outputs.
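The workdir and run-record steps can be sketched end to end: copy the pack into a disposable, isolated workdir, execute, then persist results under stable paths. The directory layout mirrors the earlier listing; the function name, record fields, and sample ids are illustrative assumptions.

```python
import json
import shutil
import tempfile
from pathlib import Path

# Illustrative driver run: isolate the pack in a throwaway workdir, then
# persist a run record under <runtime-root>/logs/module/<module_id>/<run_id>/.
# Path layout mirrors the listing shown earlier; record fields are assumed.
def run_module(runtime_root: Path, pack_dir: Path, module_id: str, run_id: str) -> Path:
    workdir = Path(tempfile.mkdtemp(prefix="hyops-"))
    shutil.copytree(pack_dir, workdir / "pack", dirs_exist_ok=True)

    record_dir = runtime_root / "logs" / "module" / module_id / run_id
    record_dir.mkdir(parents=True, exist_ok=True)
    (record_dir / "driver_result.json").write_text(
        json.dumps({"status": "ok", "run_id": run_id}, sort_keys=True)
    )
    (record_dir / "stdout.redacted").write_text("output scrubbed before persist\n")
    shutil.rmtree(workdir)  # the isolated workdir is disposable
    return record_dir

root = Path(tempfile.mkdtemp())
pack = root / "packs" / "demo"
pack.mkdir(parents=True)
(pack / "plan.yaml").write_text("kind: pack\n")
record = run_module(root, pack, "postgresql-ha", "run-0001")
print(sorted(p.name for p in record.iterdir()))
# → ['driver_result.json', 'stdout.redacted']
```

Keeping the workdir disposable while the record paths stay stable is what lets the pack be swapped out without changing where reviewers look for evidence.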

hyops apply is the stable entry point. Docs describe the wiring.

Pick the edition that fits your risk profile

Start with self-serve evaluation. Add governance, onboarding, and support as your risk profile grows.

Self-serve
Community

For: Teams validating the workflow in a lab or low-risk environment

  • Reference modules, docs, and demos
  • Structured run records with redacted output
  • Lab profiles and practical runbooks
  • On-demand cloud — no always-on workload spend
Guided rollout • Most common fit
SME

For: Lean IT teams and schools moving from evaluation to adoption

  • Everything in Community
  • Paid Copilot and full documentation access
  • Academy training for guided implementation tracks
  • Onboarding, DR drill review, and rollout guidance
  • Cost-guarded burst patterns with practical support
Program engagement
Enterprise

For: Regulated, multi-team, or multi-environment organizations

  • Everything in SME
  • Governance profiles and hardened rollout standards
  • Audit-ready evidence and operating model alignment
  • Structured training, enablement, and stakeholder support
  • Commercial planning for broader adoption
Start here

Run your first DR drill in minutes.

Start with the self-serve quickstart to evaluate the workflow directly. For a guided walkthrough or delivery model mapping, use the demo option at the top of the page.

  • Install the Community edition — no always-on cloud required
  • Pick a reference module and run hyops apply
  • Inspect the run record: redacted logs, published outputs, and provenance
  • Read the runbook to go further