
Replace SFTP Scripts with Xferity — Structured MFT Instead of Bash and Cron

This use case describes how Xferity replaces fragile SFTP scripts, cron-driven jobs, and WinSCP-style automation with a structured, auditable managed file transfer model.

Teams that migrate to Xferity are usually running one of these patterns:

  • Shell scripts with sftp or scp scheduled via cron on a Linux server
  • WinSCP scripting running on a Windows task scheduler
  • PowerShell scripts using WinSCP COM objects or SSH libraries
  • Python scripts wrapping paramiko or fabric
  • Perl or Ruby scripts from legacy integration projects

These scripts can move files, but they typically leave key operational problems unsolved.


| Problem | In scripts | With Xferity |
| --- | --- | --- |
| SSH host key verification | `StrictHostKeyChecking=no` or trusted manually | SHA-256 fingerprint required; hardened mode rejects insecure settings |
| Duplicate transfers after crash | No protection | SHA-256 content-hash idempotency |
| Retry on transient failure | Manual or fragile loops | Exponential backoff with jitter; permanent vs. transient distinction |
| Concurrent execution | Race conditions, no locking | Distributed flow locking with stale-lock takeover |
| Transfer evidence | Text log files or nothing | Structured JSONL audit log with SHA-256 hash chain |
| Dead files after failure | Lost or silently dropped | Dead-letter directory with inventory |
| Recovery after crash | Re-run from scratch, risk of duplicates | `xferity resume` from last committed state |
| Secret management | Hardcoded in script or env file | 7 secret providers; hardened mode rejects plaintext |
| Monitoring | Cron output to email, or no alerting | Prometheus metrics, Slack/Email notifications |
| Config review | Read the script | Reviewable YAML flows under version control |

A typical SFTP upload script does:

  1. connect to remote host
  2. upload matching files
  3. maybe delete local files after upload
  4. write something to a log
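For concreteness, such a script often looks something like the sketch below. This is a generic illustration, not taken from any particular deployment; the host, paths, and file pattern are placeholders:

```shell
#!/usr/bin/env bash
# upload-orders.sh -- a typical cron-driven SFTP upload (illustrative only)
# crontab entry: */5 * * * * /opt/scripts/upload-orders.sh >> /var/log/upload.log 2>&1
set -euo pipefail

HOST="sftp.example.com"     # placeholder endpoint
SRC="/data/outbound"
LOG="/var/log/upload.log"

for f in "$SRC"/*.csv; do
  [ -e "$f" ] || continue   # no matching files this run
  # batch-mode sftp: one put per file -- no retry, no idempotency check,
  # no host key pinning beyond whatever known_hosts happens to contain
  sftp -b - "user@$HOST" <<EOF
put $f /inbound/
EOF
  rm -- "$f"                # delete the local copy after upload
  echo "$(date -Is) uploaded $f" >> "$LOG"
done
```

Note what happens if the script dies mid-loop: some files are uploaded and deleted, some are not, nothing records which, and the next cron run may re-upload or skip files unpredictably.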

An Xferity flow does the same, and adds:

  • SSH host key verification (required)
  • Remote file stability check (wait for files still being written)
  • SHA-256 content-hash idempotency check (skip already-processed files)
  • Exponential backoff retry on transient failures
  • Distributed flow lock (prevents concurrent runs)
  • Structured JSONL audit event per file (with hash-chain tamper evidence)
  • Dead-letter directory for files that exhaust retries
  • xferity resume for safe rerun after crash
  • Prometheus counter increments per run outcome
  • Slack/Email notification on failure
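To illustrate the audit model, a JSONL event might look like the line below. The field names here are invented for the sketch, not Xferity's documented schema; the point is that each event carries the file's SHA-256 plus the hash of the previous event, forming a tamper-evident chain:

```json
{"ts": "2024-05-02T10:15:04Z", "flow": "my-sftp-flow", "event": "file_uploaded", "file": "orders-0412.csv", "sha256": "9f86d0...", "prev_event_hash": "c3ab8f...", "event_hash": "e2fc71...", "outcome": "success"}
```

Altering or deleting any earlier line breaks the chain, because each `event_hash` covers the previous one.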

All of this is defined in a YAML flow — not embedded in a script.


Step 1: model your current scripts as flows


For each existing SFTP script, create:

  • a partner file capturing the endpoint, hostname, auth, and trust material
  • a flow file defining direction, file matching, schedule, cleanup
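A sketch of what this pair of files might look like. The key names below are illustrative assumptions, not Xferity's documented schema; check the flow reference for the real field names:

```yaml
# partners/acme.yaml -- endpoint, auth, and trust material (illustrative keys)
partner: acme
protocol: sftp
host: sftp.acme.example
port: 22
username: xfer
host_key_sha256: "SHA256:3vRl..."    # pinned fingerprint, placeholder value
auth:
  private_key_secret: acme-sftp-key  # resolved via a secret provider, never inline

# flows/my-sftp-flow.yaml -- direction, matching, cleanup (illustrative keys)
flow: my-sftp-flow
partner: acme
direction: upload
local_dir: /data/outbound
remote_dir: /inbound
match: "*.csv"
delete_after_upload: true
```

The point of the split is review: trust material and credentials live with the partner, transfer behavior lives with the flow, and both sit in version control.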
First, validate the configuration and run diagnostics against the endpoint:

xferity validate
xferity diag my-sftp-flow

Dry-run the flow, then run it for real:

xferity run my-sftp-flow --dry-run
xferity run my-sftp-flow

Inspect the run history and trace individual files:

xferity flow history my-sftp-flow
xferity trace <filename>

Replace the cron entry with:

xferity run-service my-sftp-flow --interval-seconds 300

Or use schedule_cron in the flow file for cron-style scheduling.
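For example, a run every five minutes (equivalent to `--interval-seconds 300` on a cron grid) might look like this; `schedule_cron` is the key named above, while the surrounding keys are illustrative:

```yaml
flow: my-sftp-flow
schedule_cron: "*/5 * * * *"
```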


The operational footprint stays small:

  • No new infrastructure for basic SFTP flows (file-backed mode works)
  • No database for simple deployments
  • No container runtime — single binary
  • No vendor lock-in — YAML flows under version control

For production multi-flow deployments, SFTP + PGP workflows, or AS2 exchange, use Postgres-backed mode for the full feature set.


Xferity also covers:

  • SFTP / FTPS
  • AS2 (with MDN)
  • OpenPGP + CMS
  • Durable job execution
  • Retry and resume
  • Air-gapped deployment