
Postgres Workers — Durable Job Queue for Production Xferity Deployments

Postgres workers are the execution model used when Xferity runs with Postgres-backed durable state and queued jobs.

Teams use this mode when they need more than a single-process, file-backed runtime.

In Postgres-backed deployments, supported triggers can enqueue durable jobs instead of executing everything immediately in-process.

Workers then:

  • claim queued work
  • execute the relevant transfer or AS2 workflow
  • update status and history
  • apply retry behavior where allowed
  • mark work completed or failed
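The lifecycle above can be sketched as a small in-memory state machine. This is an illustration only: the names (`Job`, `claim_next`, `run_job`, `MAX_ATTEMPTS`) are assumptions, not Xferity's actual worker code, which persists this state in Postgres rather than in process memory.

```python
# Hypothetical in-memory sketch of the worker lifecycle: claim queued
# work, execute it, record history, retry where allowed, and mark the
# job completed or failed. All names here are illustrative assumptions.
from dataclasses import dataclass, field

MAX_ATTEMPTS = 3  # assumed retry limit; the real limit is configurable


@dataclass
class Job:
    id: int
    status: str = "queued"  # queued -> running -> completed | failed
    attempts: int = 0
    history: list = field(default_factory=list)


def claim_next(queue):
    """Claim the oldest queued job (stand-in for a locking DB query)."""
    for job in queue:
        if job.status == "queued":
            job.status = "running"
            return job
    return None


def run_job(job, workflow):
    """Execute the workflow, update status and history, apply retries."""
    job.attempts += 1
    try:
        workflow(job)
        job.status = "completed"
    except Exception as exc:
        job.history.append(str(exc))
        # re-queue while retries remain, otherwise mark the job failed
        job.status = "queued" if job.attempts < MAX_ATTEMPTS else "failed"
    job.history.append(job.status)
    return job
```

A job whose workflow keeps raising is re-queued until `MAX_ATTEMPTS` is reached, then marked failed with its error history retained for inspection.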

Teams move to Postgres workers when they need:

  • durable job handling
  • shared state across processes
  • richer UI and API operating workflows
  • AS2 persistence and certificate records
  • a clearer queue-based operational model

At a high level:

  1. configuration selects state.backend=postgres
  2. the runtime initializes the Postgres backend and migrations
  3. a trigger creates durable job records for supported work
  4. workers poll the queue and claim jobs
  5. execution results are persisted for later inspection
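Steps 1 and 2 hinge on the backend selection. A minimal configuration sketch might look like the following; only the `state.backend=postgres` key comes from this page, and the file layout, section names, and connection key are assumptions:

```toml
# Hypothetical configuration fragment; only state.backend is documented above.
[state]
backend = "postgres"

[postgres]
# assumed connection setting; use least-privilege credentials
dsn = "postgres://xferity_worker@db:5432/xferity"
```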
```sh
xferity run-service payroll-upload --interval-seconds 300
```

That command is still part of the operator surface, but in a Postgres-backed deployment the surrounding runtime can use durable job semantics instead of relying only on local in-process execution.

Postgres workers add real operational dependencies:

  • database connectivity
  • migration success
  • worker readiness
  • queue monitoring
  • backup and restore planning
  • least-privilege database credentials
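Several of these dependencies are the kind of thing a deployment verifies before letting workers claim jobs. A minimal readiness-gate sketch, under the assumption that each dependency can be expressed as a zero-argument check (the `ready` function and check names are illustrative, not part of Xferity):

```python
# Hypothetical readiness gate: run each named check and report which
# failed. A worker would only start claiming jobs once all checks pass.
def ready(checks):
    """Return (ok, failed_names) for a dict of name -> zero-arg check."""
    failed = []
    for name, check in checks.items():
        try:
            if not check():
                failed.append(name)
        except Exception:
            # a check that raises (e.g. no DB connection) counts as failed
            failed.append(name)
    return (not failed, failed)
```

For example, `ready({"db": check_db, "migrations": check_migrations})` would gate startup on database connectivity and migration success before the worker begins polling.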

Postgres workers do not automatically mean:

  • clustering
  • HA coordination
  • automatic failover orchestration
  • distributed control-plane semantics beyond shared durable state

They are best described as a richer shared-state execution model, not a clustering or failover solution.