
Automated File Transfer Pipelines — Multi-Step Workflows with Xferity

This use case describes how to build automated, multi-step file transfer pipelines with Xferity.

A file transfer pipeline is a sequence of operations that moves, transforms, and delivers files as part of a repeatable workflow. Examples:

  • source system exports files → encrypt with PGP → upload to partner via SFTP
  • download from partner SFTP → decrypt → stage to S3 → notify downstream system
  • receive AS2 from trading partner → store → trigger processing workflow
  • payroll system generates payslips → encrypt per employee → deliver via SFTP or AS2

These pipelines are often built as shell scripts, but scripts accumulate failure modes over time. Xferity models each stage as a flow with explicit reliability guarantees.


A common outbound pattern:

  1. Source system writes files to a local staging path
  2. Xferity flow picks up files matching a glob pattern
  3. Xferity encrypts each file with OpenPGP before upload
  4. Xferity uploads to partner SFTP (or AS2, or FTPS, or S3)
  5. Audit events written per file
  6. On success: local file deleted or archived
  7. On failure: file moved to dead-letter path after retries are exhausted

Flow configuration defines all of this — no script required.
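As an illustration of what such a flow replaces, the outbound steps can be sketched in plain Python. The glob pattern, the encrypt callback, and the retry count below are assumptions made for the sketch, not Xferity configuration:

```python
import shutil
from pathlib import Path

def run_outbound(staging: Path, archive: Path, dead_letter: Path,
                 encrypt, upload, max_retries: int = 3) -> dict:
    """Sketch of the outbound pattern: pick up, encrypt, upload, then
    archive on success or dead-letter once retries are exhausted."""
    results = {}
    for path in sorted(staging.glob("*.csv")):        # step 2: glob pickup
        payload = encrypt(path.read_bytes())          # step 3: encrypt first
        for attempt in range(1, max_retries + 1):
            try:
                upload(path.name + ".pgp", payload)   # step 4: deliver
                shutil.move(str(path), str(archive / path.name))  # step 6
                results[path.name] = "delivered"
                break
            except OSError:
                if attempt == max_retries:            # step 7: dead-letter
                    shutil.move(str(path), str(dead_letter / path.name))
                    results[path.name] = "dead-letter"
    return results
```

The point of the flow model is that steps 5 to 7 (audit, archive, dead-letter) come built in instead of being hand-rolled like this.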

A common inbound pattern:

  1. Xferity polls partner SFTP on a cron schedule
  2. Downloads files matching glob pattern
  3. Verifies and decrypts each file with OpenPGP
  4. Stages decrypted output to a local path
  5. Downstream system picks up files from staging path
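Step 1's polling has to avoid downloading files the partner is still uploading. A minimal sketch of the size-stability heuristic (the poll interval and poll cap here are assumptions, not Xferity settings):

```python
import time

def wait_until_stable(size_of, name: str, interval: float = 2.0,
                      max_polls: int = 30, sleep=time.sleep) -> bool:
    """Treat a remote file as safe to download once its reported size is
    unchanged across two consecutive polls (a common SFTP heuristic)."""
    last = size_of(name)
    for _ in range(max_polls):
        sleep(interval)
        current = size_of(name)
        if current == last:
            return True          # writer appears to have finished
        last = current
    return False                 # still growing after max_polls
```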

For internal pipeline handoffs via object storage:

  1. Upstream system uploads to S3 bucket prefix A
  2. Xferity downloads from S3 prefix A
  3. Xferity processes each object (decrypting it so it is transform-ready) and uploads to prefix B
  4. Downstream system reads from prefix B
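The prefix handoff in steps 2 and 3 reduces to rewriting object keys from prefix A to prefix B. A sketch with illustrative prefix names (Xferity's actual configuration keys are not shown here):

```python
def handoff_key(key: str, src_prefix: str = "incoming/",
                dst_prefix: str = "processed/") -> str:
    """Map an S3 object key from the upstream prefix (A) to the
    downstream prefix (B), preserving everything after the prefix."""
    if not key.startswith(src_prefix):
        raise ValueError(f"key {key!r} is outside the source prefix")
    return dst_prefix + key[len(src_prefix):]
```

Keeping the two prefixes in the same bucket means only prefix-level permissions separate the upstream writer from the downstream reader.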

For EDI or structured B2B data:

  1. Trading partner sends AS2 message to Xferity inbound endpoint
  2. Xferity decrypts and verifies signature
  3. Xferity sends MDN receipt to partner
  4. Payload staged to local or S3 path
  5. Processing system consumes from staging path

Each stage in a Xferity flow has built-in reliability:

  • Remote file stability checks (SFTP) — wait for files still being written
  • Glob include/exclude matching
  • SHA-256 idempotency — files already processed are skipped
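The SHA-256 idempotency check above can be sketched as follows; the in-memory `seen` set stands in for whatever durable ledger the real implementation uses:

```python
import hashlib

def select_new(files: dict[str, bytes], seen: set[str]) -> list[str]:
    """Return only files whose content digest has not been processed
    before, recording each new digest so re-delivered copies are skipped."""
    fresh = []
    for name, data in files.items():
        digest = hashlib.sha256(data).hexdigest()
        if digest in seen:
            continue            # already processed, even under a new name
        seen.add(digest)
        fresh.append(name)
    return fresh
```

Hashing content rather than tracking filenames means a partner re-uploading the same file under a new name is still skipped.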

Process files (encrypt, decrypt, sign, verify)

  • OpenPGP operations run in isolated GnuPG homes per file
  • Crypto failure is permanent — not retried (prevents re-encrypting with wrong key)
  • Provider observability fields logged on every crypto operation
  • Exponential backoff retry on transient transport failures
  • Distributed flow lock prevents two instances running concurrently
  • Job survives process restart (Postgres-backed mode)
  • Delete, archive, or dead-letter based on outcome
  • Audit event written per file per stage
  • xferity trace <filename> shows the full pipeline history for a file
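The retry behavior above (exponential backoff for transient transport failures, no retry at all for permanent crypto failures) can be sketched like this; the base delay and cap values are assumptions:

```python
import time

class PermanentError(Exception):
    """E.g. a crypto failure: retrying cannot help and may make it worse."""

def with_backoff(op, max_attempts: int = 5, base: float = 0.5,
                 cap: float = 30.0, sleep=time.sleep):
    """Retry an operation on transient (OSError) failures with capped
    exponential backoff; permanent errors propagate immediately."""
    for attempt in range(max_attempts):
        try:
            return op()
        except PermanentError:
            raise
        except OSError:
            if attempt == max_attempts - 1:
                raise
            sleep(min(cap, base * (2 ** attempt)))
```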

Xferity supports multiple scheduling models:

  • Six-field cron: schedule_cron: "0 */5 * * * *" (every 5 minutes)
  • Interval polling: xferity run-service flow-name --interval-seconds 300
  • On-demand: xferity run flow-name
  • API-triggered: POST /api/flows/<flow>/run

For pipelines with dependencies between flows, trigger downstream flows via the HTTP API after upstream flow completion.
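A downstream trigger then reduces to a POST against the run endpoint. A sketch that builds (but does not send) the request; the bearer-token auth scheme and host are assumptions, while the /api/flows/<flow>/run path comes from the scheduling list above:

```python
from urllib import request

def build_trigger(base_url: str, flow: str, token: str) -> request.Request:
    """Build the POST that triggers a downstream flow once the
    upstream flow has completed."""
    return request.Request(
        url=f"{base_url}/api/flows/{flow}/run",
        method="POST",
        headers={"Authorization": f"Bearer {token}"},
    )
```

In a real pipeline the upstream flow's success handler would send this request via urllib.request.urlopen (or any HTTP client).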


For each pipeline, monitor:

  • Flow run success/failure rates (Prometheus)
  • Dead-letter accumulation (Prometheus + CLI)
  • Worker queue depth (Prometheus) in Postgres-backed mode
  • Certificate expiry for any flows using FTPS, AS2, or PGP (posture engine)
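Dead-letter accumulation can also be spot-checked directly on disk. A sketch, where the path layout and threshold are assumptions and the Prometheus metric remains the production signal:

```python
from pathlib import Path

def dead_letter_report(path: Path, threshold: int = 0) -> tuple[int, bool]:
    """Count files in a flow's dead-letter path and flag when the
    count exceeds a threshold worth alerting on."""
    count = sum(1 for p in path.iterdir() if p.is_file())
    return count, count > threshold
```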

Alerts can be delivered via Email, Slack, Webhook, Ntfy, Gotify, or Pushover.


Related capabilities:

  • AS2 (with MDN)
  • SFTP / FTPS
  • OpenPGP + CMS
  • Durable job execution
  • Retry and resume
  • Air-gapped deployment