Automated File Transfer Pipelines — Multi-Step Workflows with Xferity
This use case describes how to build automated, multi-step file transfer pipelines with Xferity.
What is a file transfer pipeline
A file transfer pipeline is a sequence of operations that moves, transforms, and delivers files as part of a repeatable workflow. Examples:
- source system exports files → encrypt with PGP → upload to partner via SFTP
- download from partner SFTP → decrypt → stage to S3 → notify downstream system
- receive AS2 from trading partner → store → trigger processing workflow
- payroll system generates payslips → encrypt per employee → deliver via SFTP or AS2
These pipelines are often built as shell scripts, but ad-hoc scripts accumulate failure modes over time: missed retries, duplicate processing, no audit trail. Xferity models each stage as a flow with explicit reliability guarantees.
Building a pipeline with Xferity flows
Section titled “Building a pipeline with Xferity flows”Pattern 1: Encrypt-then-upload
A common outbound pattern:
- Source system writes files to a local staging path
- Xferity flow picks up files matching a glob pattern
- Xferity encrypts each file with OpenPGP before upload
- Xferity uploads to partner SFTP (or AS2, or FTPS, or S3)
- Audit events written per file
- On success: local file deleted or archived
- On failure: file moved to dead-letter path after retries exhausted
Flow configuration defines all of this — no script required.
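As an illustration, the encrypt-then-upload pattern might be expressed as a flow configuration like the sketch below. Only `schedule_cron` is taken from this page; every other field name and value is assumed for illustration and will differ from the actual flow schema.

```yaml
# Illustrative flow config — field names other than schedule_cron are assumed
name: partner-outbound
schedule_cron: "0 */5 * * * *"       # every 5 minutes (six-field cron)
source:
  type: local
  path: /var/staging/outbound
  include: "*.csv"
process:
  - pgp_encrypt:
      recipient: partner-public-key
destination:
  type: sftp
  host: sftp.partner.example.com
  remote_path: /inbound
on_success: archive                  # or: delete
on_failure: dead_letter              # after retries are exhausted
```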
Pattern 2: Download-then-decrypt
A common inbound pattern:
- Xferity polls partner SFTP on a cron schedule
- Downloads files matching glob pattern
- Verifies and decrypts each file with OpenPGP
- Stages decrypted output to a local path
- Downstream system picks up files from staging path
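The inbound direction inverts the same structure: the partner endpoint becomes the source and a local staging path becomes the destination. As above, this sketch uses an assumed schema; only `schedule_cron` is documented here.

```yaml
# Illustrative inbound flow — field names other than schedule_cron are assumed
name: partner-inbound
schedule_cron: "0 0 * * * *"         # hourly
source:
  type: sftp
  host: sftp.partner.example.com
  remote_path: /outbound
  include: "*.pgp"
process:
  - pgp_decrypt:
      verify_signature: true
destination:
  type: local
  path: /var/staging/inbound
```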
Pattern 3: S3 staging handoff
For internal pipeline handoffs via object storage:
- Upstream system uploads to S3 bucket prefix A
- Xferity downloads from S3 prefix A
- Xferity processes each file (for example, decrypts it so the output is ready for transformation) and uploads to prefix B
- Downstream system reads from prefix B
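A prefix-to-prefix handoff could look like the following sketch. The bucket, prefixes, and field names are all illustrative assumptions, not documented configuration.

```yaml
# Illustrative S3 handoff flow — schema assumed for illustration
name: s3-handoff
source:
  type: s3
  bucket: pipeline-bucket
  prefix: stage-a/
process:
  - pgp_decrypt: {}
destination:
  type: s3
  bucket: pipeline-bucket
  prefix: stage-b/
```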
Pattern 4: AS2 receive and stage
For EDI or structured B2B data:
- Trading partner sends AS2 message to Xferity inbound endpoint
- Xferity decrypts and verifies signature
- Xferity sends MDN receipt to partner
- Payload staged to local or S3 path
- Processing system consumes from staging path
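Unlike the polling patterns, AS2 receive is driven by the partner's inbound message rather than a schedule. A hypothetical flow for it might look like this; the partner name, options, and field names are assumptions for illustration.

```yaml
# Illustrative AS2 receive flow — schema assumed for illustration
name: as2-receive
source:
  type: as2
  partner: acme-trading
  require_signature: true
  mdn: signed                  # return a signed MDN receipt to the partner
destination:
  type: local
  path: /var/staging/edi-inbound
```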
Reliability at each pipeline stage
Each stage in an Xferity flow has built-in reliability:
Get files (download or local pickup)
- Remote file stability checks (SFTP) — wait for files still being written
- Glob include/exclude matching
- SHA-256 idempotency — files already processed are skipped
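The SHA-256 idempotency idea can be sketched in a few lines: hash each candidate file's contents and skip anything whose digest has already been processed, so re-delivered or re-exported files are not handled twice. This is a minimal standalone sketch of the technique, not Xferity's implementation; the function names and the in-memory `seen_digests` set are illustrative (a real system would persist the digests).

```python
import hashlib
from pathlib import Path

def file_digest(path: Path) -> str:
    """SHA-256 of a file's contents, read in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def select_new_files(paths, seen_digests):
    """Return only files whose content hash has not been processed before."""
    fresh = []
    for p in paths:
        d = file_digest(p)
        if d not in seen_digests:
            seen_digests.add(d)
            fresh.append(p)
    return fresh
```

Keying on content rather than filename means a re-uploaded file with a new name is still skipped, while a changed file with the same name is picked up again.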
Process files (encrypt, decrypt, sign, verify)
- OpenPGP operations run in isolated GnuPG homes per file
- Crypto failures are permanent — not retried (prevents re-encrypting with the wrong key)
- Provider observability fields logged on every crypto operation
Deliver files (upload or send)
- Exponential backoff retry on transient transport failures
- A distributed flow lock prevents two instances from running the same flow concurrently
- Job survives process restart (Postgres-backed mode)
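Exponential backoff retry is the standard pattern behind the first bullet: on a transient failure, wait, double the wait, and try again up to a limit. A minimal sketch of the idea (not Xferity's internals — the function name and `ConnectionError` as the "transient" signal are assumptions):

```python
import time

def retry_with_backoff(operation, max_attempts=5, base_delay=1.0,
                       sleep=time.sleep):
    """Run `operation`, retrying transient failures with exponential backoff.

    Delays grow as base_delay * 2**attempt (1s, 2s, 4s, ...). The `sleep`
    hook is injectable so tests can run without real waiting.
    """
    for attempt in range(max_attempts):
        try:
            return operation()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise  # retries exhausted: surface the failure
            sleep(base_delay * (2 ** attempt))
```

In a pipeline, exhausting `max_attempts` is what routes the file to the dead-letter path rather than looping forever.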
Post-delivery
- Delete, archive, or dead-letter based on outcome
- Audit event written per file per stage
xferity trace <filename> shows the full pipeline history for a file.
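The post-delivery decision — delete, archive, or dead-letter — reduces to a small routing function. This sketch shows the logic only; the directory locations and function name are illustrative, not Xferity defaults.

```python
from pathlib import PurePosixPath

def post_delivery_target(path, outcome, policy="archive",
                         archive_dir="/var/archive",
                         dead_letter_dir="/var/dead-letter"):
    """Decide where a file goes after a delivery attempt.

    Returns the destination path, or None when the policy is to delete.
    Directory locations are illustrative assumptions.
    """
    name = PurePosixPath(path).name
    if outcome == "success":
        if policy == "delete":
            return None
        return str(PurePosixPath(archive_dir) / name)
    # retries exhausted: quarantine for manual inspection
    return str(PurePosixPath(dead_letter_dir) / name)
```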
Scheduling pipelines
Xferity supports multiple scheduling models:
- Six-field cron — schedule_cron: "0 */5 * * * *" (every 5 minutes)
- Interval polling — xferity run-service flow-name --interval-seconds 300
- On-demand — xferity run flow-name
- API-triggered — POST /api/flows/<flow>/run
For pipelines with dependencies between flows, trigger downstream flows via the HTTP API after upstream flow completion.
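A downstream trigger only needs to POST to the documented route. The sketch below builds that request with the standard library; the endpoint path comes from this page, but the base URL is an assumption, and any authentication your deployment requires would need to be added.

```python
from urllib.parse import quote
from urllib.request import Request, urlopen

def flow_run_request(base_url, flow_name):
    """Build the POST request that triggers a flow run.

    The path follows the documented POST /api/flows/<flow>/run route;
    base_url and auth headers depend on your deployment.
    """
    url = f"{base_url.rstrip('/')}/api/flows/{quote(flow_name)}/run"
    return Request(url, method="POST")

def trigger_downstream(base_url, flow_name):
    """Fire the downstream flow and return the HTTP status code."""
    with urlopen(flow_run_request(base_url, flow_name)) as resp:
        return resp.status
```

An upstream flow's completion hook (or the monitoring system) calls `trigger_downstream` so the dependent flow starts only after the handoff files exist.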
Monitoring pipelines
For each pipeline, monitor:
- Flow run success/failure rates (Prometheus)
- Dead-letter accumulation (Prometheus + CLI)
- Worker queue depth (Prometheus) in Postgres-backed mode
- Certificate expiry for any flows using FTPS, AS2, or PGP (posture engine)
Alerts can be delivered via email, Slack, webhook, ntfy, Gotify, or Pushover.
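For the Prometheus side, dead-letter accumulation and flow failures translate into standard alerting rules. The metric names below are assumptions for illustration — substitute the names Xferity actually exports.

```yaml
# Illustrative Prometheus alerting rules — metric names are assumed,
# not taken from Xferity's documented metric set.
groups:
  - name: xferity-pipelines
    rules:
      - alert: XferityDeadLetterGrowth
        expr: increase(xferity_dead_letter_files_total[1h]) > 0
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Files are accumulating in the dead-letter path"
      - alert: XferityFlowFailures
        expr: rate(xferity_flow_runs_failed_total[15m]) > 0
        labels:
          severity: critical
        annotations:
          summary: "Flow runs are failing"
```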
Xferity supports
- AS2 (with MDN)
- SFTP / FTPS
- OpenPGP + CMS
- Durable job execution
- Retry and resume
- Air-gapped deployment