Xferity Architecture — Control Plane, Runtime Engine, and State Layer
Architecture
Xferity is organized as a control plane, a runtime plane, and a shared state and service layer.
This page describes the current implementation architecture: what runs, how flows execute, where policy and crypto decisions live, and which capabilities depend on the selected backend.
Architecture diagram
```mermaid
flowchart TD
  subgraph ControlPlane[Control Plane]
    UI[Operator UI]
    API[Authenticated API]
    AUTH[Auth Service]
    CRYPTO[Crypto Inventory]
    POSTURE[Posture Engine]
    SUPPRESS[Suppression Manager]
    NOTIFY[Notification Manager]
  end
  subgraph RuntimePlane[Runtime Plane]
    RUNNER[Flow Runner]
    PREFLIGHT[Runtime Preflight]
    RESOLUTION[Crypto Resolver]
    TRANSPORT[Transport Connectors]
  end
  subgraph DataPlane[State and Data Layer]
    BACKEND[File Backend or Postgres Backend]
    VAULT[Vault and Secret Providers]
    AUDIT[Audit Logs]
    SNAP[Posture Snapshots]
    SUPPDB[Suppressions]
  end
  UI --> API
  API --> AUTH
  API --> CRYPTO
  API --> POSTURE
  API --> SUPPRESS
  API --> NOTIFY
  API --> RUNNER
  RUNNER --> PREFLIGHT
  RUNNER --> RESOLUTION
  RUNNER --> TRANSPORT
  API --> BACKEND
  RUNNER --> BACKEND
  CRYPTO --> BACKEND
  POSTURE --> SNAP
  SUPPRESS --> SUPPDB
  RUNNER --> AUDIT
  RESOLUTION --> VAULT
```
Control plane
The control plane includes the operator-facing and policy-facing parts of the product:
- operator UI
- authenticated API
- auth service
- Certificate and PGP Key inventory
- Partner Crypto Policy views
- Flow Crypto Requirements views
- posture engine
- suppression manager
- notification manager
This is where operators review configuration, inventory, posture, and runtime history.
Runtime plane
The runtime plane is responsible for executing flows and resolving the concrete transport and crypto choices needed at run time.
It includes:
- flow execution
- runtime preflight validation
- runtime crypto resolution
- transfer and protocol execution
- run and job history updates
Shared state and service layer
The shared state and service layer includes:
- file backend or Postgres backend
- auth/session state where supported
- local vault or external secret providers
- audit logging
- posture snapshots and suppressions
- licensing state where enabled
Canonical crypto-requirement model
`FlowRoleSpecs()` is the canonical definition for flow PGP role requirements.
It is intentionally reused by:
- validation
- UI policy rendering
- runtime preflight
- runtime crypto resolution
This is one of the most important architecture rules in the current system because it prevents role-definition drift between configuration review and execution.
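The single-spec pattern can be sketched as one function consumed by every layer. The names `FlowRoleSpec` and `flow_role_specs`, and the role strings below, are illustrative stand-ins rather than Xferity's actual API; only the idea of one shared definition is taken from this page.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class FlowRoleSpec:
    """One PGP role a flow may require (names here are hypothetical)."""
    role: str          # e.g. "encrypt-to", "sign-with"
    required: bool


def flow_role_specs(flow_type: str) -> list[FlowRoleSpec]:
    """Single canonical definition of PGP role requirements per flow type."""
    specs = {
        "outbound-pgp": [
            FlowRoleSpec("encrypt-to", required=True),
            FlowRoleSpec("sign-with", required=False),
        ],
    }
    return specs.get(flow_type, [])


def validate(flow_type: str, configured_roles) -> list[str]:
    """Config-time validation: report required roles that are missing."""
    return [s.role for s in flow_role_specs(flow_type)
            if s.required and s.role not in configured_roles]


def preflight(flow_type: str, resolved_keys: dict) -> list[str]:
    """Runtime preflight reuses the identical spec source, so it can
    never disagree with what validation or the UI reported."""
    return validate(flow_type, resolved_keys.keys())
```

Because `validate` and `preflight` both call `flow_role_specs`, a change to a role definition reaches configuration review and execution in the same commit.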
Architecture strengths
Several design decisions in the current architecture are worth calling out explicitly, because they determine operational behavior, not just aesthetics.
Single canonical role model
`FlowRoleSpecs()` is the canonical definition for flow PGP role requirements. It is intentionally reused across validation, UI policy rendering, runtime preflight, and runtime crypto resolution. A role definition change therefore propagates equally to all consumers; the UI cannot show a different policy than the one the runtime enforces.
Strict YAML parsing
The configuration loader rejects unknown YAML fields. Misspelled configuration keys cause a startup failure rather than silent misconfiguration. In file-based MFT deployments, silent misconfiguration has historically been a significant source of undetected behavioral drift.
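A minimal sketch of reject-unknown-keys validation, assuming a flat two-level schema; the key names are placeholders, not Xferity's documented configuration schema.

```python
# Hypothetical schema: only these keys are recognized.
KNOWN_KEYS = {"security", "backend", "logging"}
KNOWN_SECURITY_KEYS = {"hardened_mode"}


def validate_config(cfg: dict) -> None:
    """Abort startup on any unrecognized key instead of ignoring it."""
    unknown = set(cfg) - KNOWN_KEYS
    if unknown:
        raise SystemExit(f"unknown config keys: {sorted(unknown)}")
    unknown_sec = set(cfg.get("security", {})) - KNOWN_SECURITY_KEYS
    if unknown_sec:
        raise SystemExit(f"unknown security keys: {sorted(unknown_sec)}")


# A misspelled key fails at startup instead of silently doing nothing:
# validate_config({"security": {"hardend_mode": True}})  # raises SystemExit
```

The contrast is with permissive loaders, where `hardend_mode` would parse cleanly and the intended setting would simply never take effect.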
Isolated crypto execution
When GnuPG is used, each operation gets a temporary isolated GnuPG home. This eliminates cross-flow key contamination, agent side effects, and shared-keyring race conditions that have historically been a problem in script-based PGP automation.
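The isolation pattern can be sketched as a throwaway `GNUPGHOME` per operation. This relies on GnuPG's standard behavior of honoring the `GNUPGHOME` environment variable; the helper name is illustrative, not Xferity's internal API.

```python
import os
import shutil
import tempfile
from contextlib import contextmanager


@contextmanager
def isolated_gnupg_home():
    """Yield an environment with a fresh, private GnuPG home that is
    destroyed afterwards, so no keyring state leaks between operations."""
    home = tempfile.mkdtemp(prefix="xferity-gpg-")
    os.chmod(home, 0o700)  # GnuPG refuses homedirs with loose permissions
    env = dict(os.environ, GNUPGHOME=home)
    try:
        # e.g. subprocess.run(["gpg", "--import", key_path], env=env)
        yield env
    finally:
        shutil.rmtree(home, ignore_errors=True)
```

Each flow run importing keys into its own disposable home is what rules out cross-flow contamination and shared-keyring races by construction.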
Tamper-evident audit by default
The audit writer produces an append-only JSONL file with optional SHA-256 hash-chain linkage. An operator can verify chain continuity with standard tooling. This provides a local tamper-evidence foundation that scripts producing plain log files do not.
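A hash chain of this shape can be sketched as follows; the record fields (`prev`, `hash`) are illustrative, not Xferity's actual audit record layout.

```python
import hashlib
import json


def append_event(log: list, event: dict) -> None:
    """Append a JSONL record whose hash covers the event plus the
    previous record's hash, linking the file into a chain."""
    prev = json.loads(log[-1])["hash"] if log else "0" * 64
    body = {"event": event, "prev": prev}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    body["hash"] = digest
    log.append(json.dumps(body, sort_keys=True))


def verify_chain(log: list) -> bool:
    """Recompute every hash and check each record points at its predecessor."""
    prev = "0" * 64
    for line in log:
        rec = json.loads(line)
        body = {"event": rec["event"], "prev": rec["prev"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != digest:
            return False
        prev = rec["hash"]
    return True
```

Editing or deleting any record breaks either its own hash or the next record's `prev` link, which is what makes the log tamper-evident rather than merely append-only.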
Hardened mode — startup enforcement, not runtime warnings
Setting `security.hardened_mode: true` causes Xferity to refuse startup if any security rule is violated. This turns configuration best practices into a hard gate rather than advice that teams may overlook during deployment.
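For reference, the setting looks like this in configuration. Only `security.hardened_mode` is taken from this page; any surrounding keys would follow the deployment's actual schema.

```yaml
security:
  hardened_mode: true   # any violated security rule aborts startup
```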
Startup sequence
At a high level, startup proceeds through these phases:
- load global configuration
- initialize logging
- validate startup configuration and hardened-mode restrictions
- initialize the selected backend
- initialize crypto, notifications, auth, posture, and transport-related services
- load partners and flows
- start workers when enabled and supported
- start the UI and API surfaces when enabled
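The fail-fast character of this sequence can be sketched as ordered phases where any raised error stops startup before later phases run; phase and function names are illustrative. This ordering is what lets hardened-mode validation block startup before the backend or any service is initialized.

```python
def start(phases: list) -> list:
    """Run (name, fn) startup phases in order; any exception aborts
    startup before later phases run."""
    completed = []
    for name, fn in phases:
        fn()                    # raises -> startup stops here
        completed.append(name)
    return completed
```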
Backend-dependent behavior
The backend choice changes what the control plane can do.
File backend
The file backend is suitable for simpler single-node operation. It provides:
- local state
- local history and idempotency
- basic flow execution
It does not provide the full production-oriented control-plane feature set.
Postgres backend
The Postgres backend is the reference backend for mature production deployments. It enables:
- auth and session persistence
- queued jobs and workers
- Certificate and PGP Key inventory
- suppressions
- posture snapshots
- regression alerts based on snapshot history
- richer shared-state operation
Protocol execution paths
Xferity supports these transport or exchange families:
- SFTP
- FTPS
- S3-compatible storage
- AS2
SFTP, FTPS, and S3-compatible storage operate as transfer transports. AS2 is a message-oriented exchange path with certificate-role handling and MDN processing.
Posture engine in the architecture
The posture engine is part of the control plane, but it depends on shared state for its richer features.
It evaluates these domains:
- crypto
- secrets
- transport
- auth
- flow drift
Across these scopes:
- platform
- partner
- flow
The posture engine produces Active Findings, Suppressed Findings, hourly snapshots, posture trends, and regression alerts when security state worsens.
In file-backed deployments, the posture engine evaluates the current state but cannot persist snapshots or produce regression comparisons across time.
In Postgres-backed deployments, the posture engine gains access to full snapshot history, trend data, suppression records, and regression alert delivery through configured notification channels.
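Regression detection over snapshot history can be sketched as a diff of active-finding counts between two snapshots; the snapshot shape (domain mapped to a count) is an assumption for illustration, not Xferity's stored format.

```python
def regressions(previous: dict, current: dict) -> dict:
    """Return domains whose active-finding count increased since the
    previous snapshot, with the size of the increase."""
    return {domain: current[domain] - previous.get(domain, 0)
            for domain in current
            if current[domain] > previous.get(domain, 0)}
```

A non-empty result is the condition under which a Postgres-backed deployment would raise a regression alert through its notification channels; a file-backed deployment has no prior snapshot to diff against.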
Trust boundaries
The main trust boundaries are:
- the Xferity runtime host
- the selected state backend
- local filesystems for config, keys, staging, logs, and audit files
- secret providers used for runtime credential resolution
- the operator access surface: CLI, Web UI, HTTP API
Security-relevant decisions happen at these boundaries, including SSH host verification, TLS certificate validation, AS2 certificate role use, secret resolution, suppression handling, and audit event creation.
Xferity focuses on transfer workflow security and traceability. It does not replace network security controls, identity systems, infrastructure hardening, or external evidence retention.
What the architecture is not
- Postgres-backed workers are not the same as full clustering or automatic HA.
- The control plane is not a generic workflow orchestration platform.
- Audit logging improves evidence quality, but it is not a substitute for external immutable evidence retention.
- Xferity does not implement a Kubernetes operator, distributed control-plane coordination, or automatic failover.