HybridOps Architecture Overview

HybridOps is structured around four abstractions — module, driver, profile, and pack — that separate intent from execution, governance from implementation, and keep the operational model coherent across environments.

Most infrastructure projects are easier to describe by their tools than by their structure. “We use Terraform and Ansible” is a common answer to “how does your platform work?” It is accurate, but it does not explain how the pieces fit together, who makes decisions, or what happens when something goes wrong.

Tool fluency is not the same as system understanding. This distinction matters most when you are trying to operate infrastructure reliably, hand it off to someone else, or work out why it is not behaving the way you expected.

HybridOps is structured around four abstractions. They are not unique to this platform — most well-designed systems have something similar — but making them explicit changes how the platform is built, documented, and operated.


The four abstractions

Module — intent without tooling

A module is a declarative contract. It specifies what should happen: what inputs are required, what constraints apply, what a successful outcome looks like, what credentials are needed, and what a run record should contain. It does not say how any of that happens.

This separation is deliberate. A module that declares “provision a PostgreSQL HA cluster with Patroni, two replicas, and pgBackRest backup configured” should not contain Terraform resource definitions. That is implementation detail. The module states the intent. Something else handles the execution.

The practical effect is that the same module can be run against different execution strategies without changing the contract. The operational interface stays stable even when the implementation changes.
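As a sketch, a module contract for the PostgreSQL example above could look like the following. The schema and field names here are illustrative assumptions, not the platform's actual module format:

```yaml
# Hypothetical module contract -- field names are illustrative,
# not HybridOps' actual schema.
module: postgres-ha-cluster
inputs:
  ha_manager: patroni
  replicas: 2
  backup_tool: pgbackrest
credentials:
  - postgres-admin
  - backup-storage
outcome:
  cluster_state: healthy
  replicas_streaming: 2
run_record:
  include: [inputs, tool_output, outcome]
```

Note what is absent: no provider blocks, no resource definitions, no playbook tasks. Everything in the contract describes intent and evidence, not mechanism.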

Driver — execution without opinion

A driver is the execution engine. It takes a module, resolves the appropriate pack, renders an isolated working directory, invokes the tooling (Terraform, Ansible, Packer, or whichever tool the pack requires), and captures a redacted run record.

The driver has no opinion about infrastructure topology or operational policy. It knows how to execute. It does not know whether this is the right moment to execute, or whether the environment is in the right state to receive the operation. Those constraints live elsewhere.

Profile — governance without implementation

A profile is the governance layer. It defines the backend strategy (where state lives, which credentials are available), naming rules, timeout constraints, tool version requirements, and environment-level policy.

The same module behaves differently under a lab profile than under a production-safe profile: different backends, tighter constraints, potentially different approval requirements. The module does not change. The profile controls what it is safe to do in that environment. For example:

```yaml
profile: production-safe
backend:
  type: terraform-cloud
  workspace_prefix: prod-
constraints:
  max_runtime_minutes: 60
  require_plan_approval: true
tool_versions:
  terraform: "~> 1.7"
  ansible: "~> 9.0"
```
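
For contrast, a lab profile for the same module might relax those constraints. This is an illustrative sketch rather than a real profile from the platform:

```yaml
# Hypothetical lab profile -- illustrative values only.
profile: lab
backend:
  type: local
  workspace_prefix: lab-
constraints:
  max_runtime_minutes: 20
  require_plan_approval: false
tool_versions:
  terraform: "~> 1.7"
```

Same module, same contract; only the governance around it changes.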

This is where governance lives in the architecture. Not in individual scripts or in the judgement of whoever happens to be running the command.

Pack — implementation without permanence

A pack is the replaceable implementation bundle. It is what the driver actually executes: a Terragrunt stack, a Terraform entrypoint, a Packer template, an Ansible playbook collection, a GitOps manifest set. It is the thing that knows about specific cloud APIs, specific resource types, specific tool invocation patterns.

Packs are designed to be swapped. If the implementation of a module needs to change — different Terraform provider, different provisioning strategy, different cloud target — the pack changes and the module contract stays the same. Operators working through the module interface do not see the change.
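One way to picture a pack is as a bundle with a small manifest that tells the driver which module contract it satisfies and what to invoke. The layout and field names below are assumptions for illustration, not the platform's actual manifest format:

```yaml
# Hypothetical pack manifest -- names are illustrative.
pack: postgres-ha-cluster/terraform-proxmox
implements: postgres-ha-cluster   # the module contract this pack satisfies
tool: terraform
entrypoint: stacks/proxmox/main.tf
requires:
  terraform: "~> 1.7"
```

Swapping implementations then means publishing a new pack with the same `implements` value; the module contract and operator workflow stay untouched.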


Why this structure exists

The four-layer model is a response to a specific failure mode: infrastructure that is hard to reason about because the intent, the execution, the governance, and the implementation are all tangled together.

When everything lives in a single Terraform module or a flat playbook directory, changing one thing requires understanding all the other things it touches. When someone new joins the team, they need to understand the entire system before they can safely operate any part of it. When something breaks, tracing the cause requires reading implementation code rather than examining a structured record of what ran.

Separating the layers does not eliminate complexity. It moves it to the right places. Intent lives in modules. Execution mechanics live in drivers. Governance lives in profiles. Implementation lives in packs. Each of those can be reasoned about independently.


What this looks like in practice

When an operator runs a module, the flow is roughly this:

1. The module declares what should happen.
2. The driver selects the right pack.
3. The profile validates that the operation is permitted in this environment.
4. The pack executes against the real target.
5. A structured run record is written containing the inputs, tool output, and outcome.

The run record is not optional. It is part of the architecture. Every operation should produce evidence of what ran, against what, with what result. That evidence is what makes the platform auditable, what feeds incident investigation, and what makes it possible to verify that the environment matches the declared intent.
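A run record could be as simple as a structured document capturing those elements. The fields and values below are a hypothetical sketch, not the platform's actual record format:

```yaml
# Hypothetical run record -- illustrative fields only.
run_id: 2024-06-03T14-12-09Z-postgres-ha-cluster
module: postgres-ha-cluster
profile: production-safe
pack: postgres-ha-cluster/terraform-proxmox
inputs:
  replicas: 2
outcome: success
tool_output: artifacts/terraform.log   # redacted before storage
```

Because the record names the module, profile, and pack, an investigator can reconstruct what intent was declared, what governance applied, and which implementation ran without reading any implementation code.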


The broader picture

HybridOps operates across on-prem (Proxmox), cloud (Azure, GCP), and edge (Hetzner) targets. The same module contract applies across all of them. The driver and pack handle the surface-specific differences.

This is the architectural claim: the operational model should not change based on where the infrastructure lives. The interface stays stable. The implementation adapts.

Whether that claim holds in all environments is the question the platform is designed to answer.