Xferity Architecture — Control Plane, Runtime Engine, and State Layer

Xferity is organized as a control plane, a runtime plane, and a shared state and service layer.

This page describes the current implementation architecture: what runs, how flows execute, where policy and crypto decisions live, and which capabilities depend on the selected backend.

```mermaid
flowchart TD
    subgraph ControlPlane[Control Plane]
        UI[Operator UI]
        API[Authenticated API]
        AUTH[Auth Service]
        CRYPTO[Crypto Inventory]
        POSTURE[Posture Engine]
        SUPPRESS[Suppression Manager]
        NOTIFY[Notification Manager]
    end
    subgraph RuntimePlane[Runtime Plane]
        RUNNER[Flow Runner]
        PREFLIGHT[Runtime Preflight]
        RESOLUTION[Crypto Resolver]
        TRANSPORT[Transport Connectors]
    end
    subgraph DataPlane[State and Data Layer]
        BACKEND[File Backend or Postgres Backend]
        VAULT[Vault and Secret Providers]
        AUDIT[Audit Logs]
        SNAP[Posture Snapshots]
        SUPPDB[Suppressions]
    end
    UI --> API
    API --> AUTH
    API --> CRYPTO
    API --> POSTURE
    API --> SUPPRESS
    API --> NOTIFY
    API --> RUNNER
    RUNNER --> PREFLIGHT
    RUNNER --> RESOLUTION
    RUNNER --> TRANSPORT
    API --> BACKEND
    RUNNER --> BACKEND
    CRYPTO --> BACKEND
    POSTURE --> SNAP
    SUPPRESS --> SUPPDB
    RUNNER --> AUDIT
    RESOLUTION --> VAULT
```

The control plane includes the operator-facing and policy-facing parts of the product:

  • operator UI
  • authenticated API
  • auth service
  • Certificate and PGP Key inventory
  • Partner Crypto Policy views
  • Flow Crypto Requirements views
  • posture engine
  • suppression manager
  • notification manager

This is where operators review configuration, inventory, posture, and runtime history.

The runtime plane is responsible for executing flows and resolving the concrete transport and crypto choices needed at run time.

It includes:

  • flow execution
  • runtime preflight validation
  • runtime crypto resolution
  • transfer and protocol execution
  • run and job history updates

The shared state and service layer includes:

  • file backend or Postgres backend
  • auth/session state where supported
  • local vault or external secret providers
  • audit logging
  • posture snapshots and suppressions
  • licensing state where enabled

FlowRoleSpecs() is the canonical definition for flow PGP role requirements.

It is intentionally reused by:

  • validation
  • UI policy rendering
  • runtime preflight
  • runtime crypto resolution

This is one of the most important architecture rules in the current system because it prevents role-definition drift between configuration review and execution.
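This rule can be sketched in Python. FlowRoleSpecs() is the only name taken from this page; the role names, data shapes, and consumer functions below are illustrative assumptions, not Xferity's actual code:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RoleSpec:
    role: str       # e.g. "encrypt_to" or "sign_with" (illustrative names)
    required: bool

def flow_role_specs() -> tuple:
    """Single source of truth for flow PGP role requirements."""
    return (
        RoleSpec("encrypt_to", required=True),
        RoleSpec("sign_with", required=False),
    )

# Validation and UI rendering both consume the same definition, so a change
# to flow_role_specs() propagates to every consumer at once.
def validate_flow(configured_roles: set) -> list:
    """Return the required roles missing from a flow's configuration."""
    return [s.role for s in flow_role_specs()
            if s.required and s.role not in configured_roles]

def render_policy_rows() -> list:
    """Rows for a policy view, derived from the same canonical definition."""
    return [{"role": s.role, "required": s.required} for s in flow_role_specs()]
```

Because neither consumer carries its own copy of the role list, the UI cannot render a requirement the runtime would not enforce.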

Several design decisions in the current architecture are worth calling out explicitly, because they determine operational behavior — not just aesthetics.

Because validation, UI policy rendering, runtime preflight, and runtime crypto resolution all consume the same FlowRoleSpecs() definition, a role-definition change propagates equally to every consumer; there is no way for the UI to show a different policy than what the runtime enforces.

The configuration loader rejects unknown YAML fields. Misspelled configuration keys cause a startup failure rather than silent misconfiguration. In file-based MFT deployments, silent misconfiguration has historically been a significant source of undetected behavioral drift.
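A minimal sketch of this fail-fast behavior, assuming a parsed mapping (as yaml.safe_load() would produce) and invented field names:

```python
# Any key outside the schema aborts startup instead of being silently ignored.
# KNOWN_FIELDS is illustrative, not Xferity's actual schema.
KNOWN_FIELDS = {"backend", "security", "listen_addr"}

def load_config(parsed: dict) -> dict:
    """Reject unknown top-level keys so a typo fails loudly at startup."""
    unknown = sorted(set(parsed) - KNOWN_FIELDS)
    if unknown:
        raise SystemExit(f"unknown configuration keys: {unknown}")
    return parsed
```

A misspelled key such as `bakcend` is caught here at startup rather than surviving as an ignored field.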

When GnuPG is used, each operation gets a temporary isolated GnuPG home. This eliminates cross-flow key contamination, agent side effects, and shared-keyring race conditions that have historically been a problem in script-based PGP automation.
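The isolation pattern can be sketched as follows; the command shape uses GnuPG's standard `--homedir` option, but the function names and directory prefix are assumptions:

```python
import os
import subprocess
import tempfile

def gpg_command(op_args: list, homedir: str) -> list:
    # --homedir points GnuPG (and the agent it spawns) at the isolated
    # directory, so no shared keyring or trust database is touched.
    return ["gpg", "--batch", "--homedir", homedir] + op_args

def run_isolated(op_args: list) -> None:
    """Run one GnuPG operation in a throwaway, private home directory."""
    with tempfile.TemporaryDirectory(prefix="pgp-op-") as home:
        os.chmod(home, 0o700)  # GnuPG warns unless the homedir is private
        subprocess.run(gpg_command(op_args, home), check=True)
    # the homedir, including any imported keys and agent state, is deleted here
```

Because each operation imports only the keys it needs into a fresh home, two concurrent flows can never observe each other's keyrings or agent state.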

The audit writer produces an append-only JSONL file with optional SHA-256 hash-chain linkage. An operator can verify chain continuity with standard tooling. This provides a local tamper-evidence foundation that scripts producing plain log files do not.
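The chaining and verification logic amounts to the following sketch; the record field names are assumptions, not Xferity's actual audit schema:

```python
import hashlib
import json

def append_event(log: list, event: dict) -> None:
    """Append a JSONL record carrying the SHA-256 hash of the previous line."""
    prev = hashlib.sha256(log[-1].encode()).hexdigest() if log else "0" * 64
    record = dict(event, prev_hash=prev)
    log.append(json.dumps(record, sort_keys=True))

def verify_chain(log: list) -> bool:
    """Walk the chain; any edited, removed, or reordered line breaks it."""
    prev = "0" * 64
    for line in log:
        if json.loads(line)["prev_hash"] != prev:
            return False
        prev = hashlib.sha256(line.encode()).hexdigest()
    return True
```

Tampering with any record changes its hash, so the next record's `prev_hash` no longer matches and verification fails from that point on.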

Hardened mode — startup enforcement, not runtime warnings


Setting security.hardened_mode: true causes Xferity to refuse startup if any security rule is violated. This turns configuration best practices into a hard gate rather than advice that teams may overlook during deployment.
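The gate reduces to a simple sketch; the rule strings and function name here are invented for illustration:

```python
# In hardened mode any violated security rule aborts startup; otherwise
# violations are surfaced as warnings and startup continues.
def enforce_startup_rules(hardened_mode: bool, violations: list) -> list:
    if hardened_mode and violations:
        raise SystemExit("hardened mode: refusing startup: " + "; ".join(violations))
    return violations  # non-hardened deployments log these and continue
```

The important property is that the same rule evaluation runs in both modes; only the consequence differs.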


At a high level, startup proceeds through these phases:

  • load global configuration
  • initialize logging
  • validate startup configuration and hardened-mode restrictions
  • initialize the selected backend
  • initialize crypto, notifications, auth, posture, and transport-related services
  • load partners and flows
  • start workers when enabled and supported
  • start the UI and API surfaces when enabled
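The ordering above matters because each phase depends on its predecessors. A minimal sketch of the sequencing, with placeholder phase callables:

```python
# Phases run strictly in order; a failure in any phase aborts startup before
# later phases run. The phase names and callables here are placeholders.
def run_startup(phases: list) -> list:
    completed = []
    for name, fn in phases:
        fn()  # any exception propagates and aborts startup
        completed.append(name)
    return completed
```

So a hardened-mode violation raised during the validation phase stops startup before the backend or any service is initialized.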

The backend choice changes what the control plane can do.

The file backend is suitable for simpler single-node operation. It provides:

  • local state
  • local history and idempotency
  • basic flow execution

It does not provide the full production-oriented control-plane feature set.

The Postgres backend is the reference backend for mature production deployments. It enables:

  • auth and session persistence
  • queued jobs and workers
  • Certificate and PGP Key inventory
  • suppressions
  • posture snapshots
  • regression alerts based on snapshot history
  • richer shared-state operation

Xferity supports these transport or exchange families:

  • SFTP
  • FTPS
  • S3-compatible storage
  • AS2

SFTP, FTPS, and S3-compatible storage operate as transfer transports. AS2 is a message-oriented exchange path with certificate-role handling and MDN processing.
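That distinction suggests two separate connector interfaces rather than one. A hypothetical sketch using structural typing; these names are illustrative, not Xferity's actual API:

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class TransferTransport(Protocol):
    """File-transfer transports: SFTP, FTPS, S3-compatible storage."""
    def put(self, local_path: str, remote_path: str) -> None: ...
    def get(self, remote_path: str, local_path: str) -> None: ...

@runtime_checkable
class As2Exchange(Protocol):
    """Message-oriented exchange: send a payload, receive an MDN disposition."""
    def send_message(self, payload: bytes) -> str: ...
```

Modeling AS2 separately keeps certificate-role handling and MDN processing out of the file-transfer path, where they have no meaning.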

The posture engine is part of the control plane, but it depends on shared state for its richer features.

It evaluates these domains:

  • crypto
  • secrets
  • transport
  • auth
  • flow drift

Across these scopes:

  • platform
  • partner
  • flow

The posture engine produces Active Findings, Suppressed Findings, hourly snapshots, posture trends, and regression alerts when security state worsens.
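A finding is therefore keyed by a domain and a scope, and suppression decides which view it lands in. A simplified sketch; the `rule` field and suppression mechanism shown here are assumptions:

```python
from dataclasses import dataclass

# The domains and scopes listed above.
DOMAINS = {"crypto", "secrets", "transport", "auth", "flow_drift"}
SCOPES = {"platform", "partner", "flow"}

@dataclass(frozen=True)
class Finding:
    domain: str
    scope: str
    rule: str

def partition_findings(findings: list, suppressed_rules: set):
    """Split findings into Active and Suppressed, as the posture views do."""
    active = [f for f in findings if f.rule not in suppressed_rules]
    suppressed = [f for f in findings if f.rule in suppressed_rules]
    return active, suppressed
```

Snapshots and regression alerts then operate on the active set over time, which is why they need the persistence described below.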

In file-backed deployments, the posture engine evaluates the current state but cannot persist snapshots or produce regression comparisons across time.

In Postgres-backed deployments, the posture engine gains access to full snapshot history, trend data, suppression records, and regression alert delivery through configured notification channels.

The main trust boundaries are:

  • the Xferity runtime host
  • the selected state backend
  • local filesystems for config, keys, staging, logs, and audit files
  • secret providers used for runtime credential resolution
  • the operator access surface: CLI, Web UI, HTTP API

Security-relevant decisions happen at these boundaries, including SSH host verification, TLS certificate validation, AS2 certificate role use, secret resolution, suppression handling, and audit event creation.

Xferity focuses on transfer workflow security and traceability. It does not replace network security controls, identity systems, infrastructure hardening, or external evidence retention.

  • Postgres-backed workers are not the same as full clustering or automatic HA.
  • The control plane is not a generic workflow orchestration platform.
  • Audit logging improves evidence quality, but it is not a substitute for external immutable evidence retention.
  • Xferity does not implement a Kubernetes operator, distributed control-plane coordination, or automatic failover.