# PostgreSQL Backend — Durable State and Workers for Production Xferity

## Postgres and Workers

Postgres-backed deployment is the reference mode for production Xferity installations. It adds durable shared state, worker-based job execution, authenticated sessions, certificate inventory, and posture snapshots.
This page covers what Postgres adds to the deployment, how to set it up, and what you need to operate it.
## What Postgres enables

Compared to file-backed mode, Postgres-backed deployment adds:
| Feature | File backend | Postgres backend |
|---|---|---|
| Flow history and run records | local files | database |
| Auth and session persistence | none | full |
| Certificate and PGP Key inventory | none | full |
| AS2 message persistence | none | full |
| Posture snapshots | none | hourly |
| Regression alerts | none | supported |
| Suppression management | none | supported |
| Local encrypted vault secrets | none | supported |
| Durable job queue for workers | none | supported |
## Prerequisites

- PostgreSQL 13 or later
- A database and user with `CREATE TABLE`, `INSERT`, `UPDATE`, `DELETE`, and `SELECT` permissions
- Network connectivity from the Xferity host to the Postgres instance
- `sslmode=require` or stronger (the default) unless the database is on a private, trusted network
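As an illustrative bootstrap, the database and user can be created ahead of time. The role name, database name, and password below are placeholders, not values Xferity requires:

```sql
-- Illustrative only: names and password are placeholders.
CREATE ROLE xferity LOGIN PASSWORD 'change-me';
CREATE DATABASE xferity OWNER xferity;
```

Making the application role the database owner covers the migration-phase DDL requirements; see the minimum-permissions section below for tightening afterwards.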
## Configuration

Set `state.backend=postgres` and provide the connection details:

```yaml
state:
  backend: postgres
  postgres:
    host: postgres.internal
    port: 5432
    user: xferity
    password: env:POSTGRES_PASSWORD
    dbname: xferity
    sslmode: require
```

Or use a DSN:

```yaml
state:
  backend: postgres
  postgres:
    dsn: postgres://xferity:password@postgres.internal:5432/xferity?sslmode=require
```

The DSN or password should reference a secret, not use a plaintext value.
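When assembling a DSN from parts, special characters in the password must be percent-encoded or the URL breaks. A minimal sketch, assuming nothing about Xferity's own parsing (the helper below is illustrative):

```python
import os
from urllib.parse import quote

def build_dsn(host, port, user, password, dbname, sslmode="require"):
    # Percent-encode the password so characters like '@' or '/' don't
    # corrupt the URL structure.
    return (
        f"postgres://{user}:{quote(password, safe='')}"
        f"@{host}:{port}/{dbname}?sslmode={sslmode}"
    )

# Read the password from the environment rather than hard-coding it;
# the fallback here is only for demonstration.
dsn = build_dsn(
    "postgres.internal", 5432, "xferity",
    os.environ.get("POSTGRES_PASSWORD", "p@ss/word"),
    "xferity",
)
print(dsn)
```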
## Migrations

On startup, Xferity applies migrations from the embedded migration set. The process:
- connects to Postgres
- verifies or creates a migration tracking table
- applies any pending migrations in order
Migration failures abort startup; if startup fails during the database phase, check the logs for the failing migration.
The database user must have `CREATE TABLE` and other DDL permissions for the initial migration. After setup, you can optionally reduce the user to DML-only permissions.
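The startup steps above can be sketched as follows, using SQLite in place of Postgres. The tracking-table name and the migrations themselves are illustrative, not Xferity's actual schema:

```python
import sqlite3

# Illustrative migration set; real migrations ship embedded in the binary.
MIGRATIONS = [
    ("0001_create_jobs", "CREATE TABLE jobs (id INTEGER PRIMARY KEY, status TEXT)"),
    ("0002_add_queue", "ALTER TABLE jobs ADD COLUMN queue TEXT"),
]

def migrate(conn):
    # Verify or create the migration tracking table.
    conn.execute("CREATE TABLE IF NOT EXISTS schema_migrations (name TEXT PRIMARY KEY)")
    applied = {row[0] for row in conn.execute("SELECT name FROM schema_migrations")}
    # Apply any pending migrations in order; an exception here would
    # propagate up and abort startup.
    for name, sql in MIGRATIONS:
        if name not in applied:
            conn.execute(sql)
            conn.execute("INSERT INTO schema_migrations (name) VALUES (?)", (name,))
    conn.commit()

conn = sqlite3.connect(":memory:")
migrate(conn)
migrate(conn)  # re-running is a no-op: already-applied migrations are skipped
```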
## Worker configuration

Enable workers in the global configuration:

```yaml
worker:
  enabled: true
  poll_interval: 5s
  job_execution_timeout: 300s
  max_concurrent_jobs: 4
  max_attempts: 3
  retry_backoff_base: 5s
  retry_backoff_cap: 60s
```

| Field | Default | Description |
|---|---|---|
| `enabled` | `false` | Enable the Postgres worker polling loop. |
| `poll_interval` | `5s` | How often workers poll for new jobs. |
| `job_execution_timeout` | `300s` | Maximum time a single job may run before timeout. |
| `max_concurrent_jobs` | — | Maximum jobs a single worker process runs in parallel. |
| `max_attempts` | `3` | Maximum retry attempts per job before marking it failed. |
| `retry_backoff_base` | `5s` | Initial retry backoff. |
| `retry_backoff_cap` | `60s` | Maximum retry backoff cap. |
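With the defaults above, a capped exponential backoff produces delays like the sketch below. The doubling curve is an assumption for illustration; the exact backoff formula Xferity uses is not specified here:

```python
def retry_delays(max_attempts: int = 3, base: float = 5.0, cap: float = 60.0) -> list:
    # Hypothetical capped exponential backoff: the delay before retry n+1
    # is base * 2**n, clamped to cap. One delay precedes each retry, so
    # max_attempts attempts yield max_attempts - 1 delays.
    return [min(base * 2 ** n, cap) for n in range(max_attempts - 1)]

print(retry_delays())   # [5.0, 10.0]
print(retry_delays(6))  # [5.0, 10.0, 20.0, 40.0, 60.0]
```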
## Shared state and multiple processes

In Postgres-backed mode, the shared job queue supports multiple worker processes:
- each worker polls the same queue independently
- jobs are claimed with a select-for-update mechanism to prevent double-execution
- workers can run on separate hosts as long as they share the same Postgres database and configuration
This is not clustering or HA. It is shared-queue workers. Each worker still needs access to the same filesystem paths (config, flow definitions, partner definitions, key material) unless you replicate those explicitly.
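The single-claim semantics can be illustrated with a compare-and-set update. This sketch uses SQLite and a made-up `jobs` table; real Postgres workers would use `SELECT ... FOR UPDATE` (typically with `SKIP LOCKED`) as the doc describes:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY, status TEXT)")
conn.executemany("INSERT INTO jobs (status) VALUES (?)", [("pending",)] * 3)

def claim_job(conn):
    # Pick the oldest pending job, then flip it to 'running' only if it is
    # still pending. If another worker won the race, rowcount is 0 and we
    # claimed nothing -- this guard is what prevents double-execution.
    row = conn.execute(
        "SELECT id FROM jobs WHERE status = 'pending' ORDER BY id LIMIT 1"
    ).fetchone()
    if row is None:
        return None
    cur = conn.execute(
        "UPDATE jobs SET status = 'running' WHERE id = ? AND status = 'pending'",
        (row[0],),
    )
    return row[0] if cur.rowcount == 1 else None

print([claim_job(conn) for _ in range(4)])  # [1, 2, 3, None]
```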
## What the worker healthcheck reports

`GET /health/worker` returns worker readiness without authentication. It checks:
- whether the worker polling loop is active
- whether the database connection is reachable
Use this endpoint for container or load balancer health probes.
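For example, a Kubernetes probe might target the endpoint like this. The port and timing values are illustrative, not Xferity defaults:

```yaml
livenessProbe:
  httpGet:
    path: /health/worker
    port: 8080        # illustrative; use the port Xferity actually listens on
  periodSeconds: 15
  failureThreshold: 3
```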
## Operating considerations

- monitor queue depth — a growing queue that doesn’t drain usually means a worker problem or database connectivity problem
- back up the Postgres database regularly
- plan for schema migration during upgrades — Xferity applies migrations automatically at startup
- use least-privilege database credentials for the application user
- test that the application user can apply migrations and that migrations do not fail on version upgrades
## Postgres minimum permissions

For initial deployment (migration phase):

- `CREATE`, `ALTER`, `DROP`, `INSERT`, `UPDATE`, `DELETE`, `SELECT` on the target database

For ongoing operation (post-migration), a reduced permission set is acceptable:

- `INSERT`, `UPDATE`, `DELETE`, `SELECT` on all Xferity tables
- `USAGE` on sequences
The exact minimum depends on which features are enabled.
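Assuming Xferity's tables live in the `public` schema (an assumption; adjust to your schema), the post-migration reduction might look like:

```sql
-- Illustrative: role and schema names are placeholders.
GRANT INSERT, UPDATE, DELETE, SELECT ON ALL TABLES IN SCHEMA public TO xferity_app;
GRANT USAGE ON ALL SEQUENCES IN SCHEMA public TO xferity_app;
```

Note that these grants cover existing objects only; tables created by future migrations need fresh grants (or `ALTER DEFAULT PRIVILEGES` configured ahead of time).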
## Limits

- Postgres workers do not implement automatic HA or failover
- workers do not replicate filesystem state — each worker must access the same config, flow, and key files
- stopping all workers does not drain the queue — pending jobs remain and resume when workers restart