Designing Auditable File Transfer Systems — From Logs to Evidence

“Auditable” is not the same as “logged.” Application logs tell you what the software did. An audit record tells you what happened to the file and who was involved.

The difference matters when you need to answer a partner inquiry, review a compliance sample, or investigate an incident.

An auditable transfer system produces records that answer:

  • which file moved
  • when it moved
  • what the outcome was
  • which partner endpoint was involved
  • what the idempotency key was (to correlate reruns)
  • whether any error occurred and what kind

These answers should be available without reading raw application logs.

Write one event per meaningful action: file started, file completed, file failed, run started, run completed. Not one log line at the end.
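One way to sketch this event-per-action idea (field names and the helper here are illustrative assumptions, not Xferity's actual schema):

```python
import json
import time

def audit_event(action, file=None, outcome=None, partner=None,
                idempotency_key=None, error=None, correlation_id=None):
    """Build one structured audit event per meaningful action
    (file_started, file_completed, file_failed, run_started, run_completed)."""
    event = {
        "ts": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "action": action,
        "correlation_id": correlation_id,
        "file": file,
        "partner": partner,
        "idempotency_key": idempotency_key,
        "outcome": outcome,
        "error": error,
    }
    # Drop unset fields so each JSONL line stays compact.
    return {k: v for k, v in event.items() if v is not None}

def append_event(path, event):
    # One JSON object per line (JSONL): append-only and grep-friendly.
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(event, sort_keys=True) + "\n")
```

Emitting an event at every transition, rather than one summary at the end, means a run that dies halfway still leaves a usable trail.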

Include a correlation ID: each run gets a unique correlation ID. Every event within that run carries it. Querying by correlation ID reconstructs the complete run.
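Reconstructing a run then becomes a single filter over the audit log. A minimal sketch, assuming JSONL events that each carry a `correlation_id` field:

```python
import json

def reconstruct_run(audit_path, correlation_id):
    """Return every audit event belonging to one run, in emission order."""
    events = []
    with open(audit_path, encoding="utf-8") as f:
        for line in f:
            event = json.loads(line)
            if event.get("correlation_id") == correlation_id:
                events.append(event)
    return events
```

Because the ID is stamped on every event, the reconstruction needs no joins and no timestamps heuristics: membership in the run is explicit.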

Separate the audit log from the application log: the application log tells operators about service behavior. The audit log tells operators about file and run outcomes. Mixing them makes both harder to use.
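The separation can be as simple as two independently configured loggers. A sketch using Python's standard `logging` module (paths and logger names are illustrative):

```python
import logging

def make_loggers(app_path, audit_path):
    """Configure two independent streams: app_path gets operational
    messages, audit_path gets raw JSONL audit events, never mixed."""
    app = logging.getLogger("app")
    app.handlers.clear()
    h = logging.FileHandler(app_path)
    h.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
    app.addHandler(h)
    app.setLevel(logging.INFO)
    app.propagate = False

    audit = logging.getLogger("audit")
    audit.handlers.clear()
    audit.addHandler(logging.FileHandler(audit_path))  # bare JSONL, no prefix
    audit.setLevel(logging.INFO)
    audit.propagate = False  # never leak audit events into other logs
    return app, audit
```

The audit stream deliberately has no formatter: each record is already a complete JSON line, so machine consumers never have to strip a human-oriented prefix.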

Make file lookup fast: operators investigate by file name. If the audit lookup is “grep the entire log file,” it does not scale. A sidecar index keyed by filename brings lookup from O(n) to O(1).
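One way to sketch such a sidecar index (the on-disk format here is an illustrative assumption, not Xferity's actual layout): a single linear pass records the byte offset of every event for each filename, and lookups then seek straight to those offsets.

```python
import json

def build_index(audit_path, index_path):
    """One O(n) pass maps filename -> byte offsets of its events."""
    index = {}
    with open(audit_path, "rb") as f:
        offset = 0
        for line in f:
            name = json.loads(line).get("file")
            if name:
                index.setdefault(name, []).append(offset)
            offset += len(line)
    with open(index_path, "w", encoding="utf-8") as f:
        json.dump(index, f)

def trace_file(audit_path, index_path, filename):
    """Seek directly to a file's events instead of scanning the whole log."""
    with open(index_path, encoding="utf-8") as f:
        offsets = json.load(f).get(filename, [])
    events = []
    with open(audit_path, "rb") as f:
        for off in offsets:
            f.seek(off)
            events.append(json.loads(f.readline()))
    return events
```

The index is derived data: if it is lost or stale, it can always be rebuilt from the audit log itself.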

Plan for external retention: an audit log that lives only on the Xferity host is better than nothing. An audit log that also ships to an immutable external system is significantly better.
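A shipping step can be sketched as a cursor over the local audit log: forward everything past the cursor, and only advance it after the external write succeeds. Everything here (the cursor file, the `ship` callable standing in for, say, an HTTP POST to a WORM bucket) is an illustrative assumption:

```python
import json

def ship_new_events(audit_path, cursor_path, ship):
    """Forward not-yet-shipped audit events to an external store.

    'ship' is any callable that durably accepts a batch. The cursor file
    records the byte offset shipped so far, so reruns are safe."""
    try:
        with open(cursor_path, encoding="utf-8") as f:
            cursor = int(f.read() or 0)
    except FileNotFoundError:
        cursor = 0
    with open(audit_path, "rb") as f:
        f.seek(cursor)
        batch = [json.loads(line) for line in f]
        new_cursor = f.tell()
    if batch:
        ship(batch)  # only advance the cursor after a durable write
        with open(cursor_path, "w", encoding="utf-8") as f:
            f.write(str(new_cursor))
    return len(batch)
```

Advancing the cursor only after `ship` returns gives at-least-once delivery; the receiving store can deduplicate on event content if exactly-once matters.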

In Xferity, this design shows up as:

  • structured JSONL audit events per meaningful action
  • per-run correlation IDs
  • sidecar index for O(1) file lookups
  • xferity trace <filename> for instant file lifecycle queries
  • optional tamper-evidence hash chaining
  • configurable retention with day-based rotation
  • GET /api/audit?file=<basename> for API-based lookup
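The hash-chaining bullet above can be sketched as follows. Each event carries a hash over its own body plus the previous event's hash, so any edit, deletion, or reordering breaks every later link. This is an illustrative scheme, not Xferity's actual implementation:

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash before the first event

def chain_events(events):
    """Return events annotated with 'prev' and 'hash' chain fields."""
    prev = GENESIS
    chained = []
    for event in events:
        body = json.dumps(event, sort_keys=True)
        digest = hashlib.sha256((prev + body).encode()).hexdigest()
        chained.append({**event, "prev": prev, "hash": digest})
        prev = digest
    return chained

def verify_chain(chained):
    """Recompute every link; return False on any tampering or gap."""
    prev = GENESIS
    for event in chained:
        body = {k: v for k, v in event.items() if k not in ("prev", "hash")}
        expected = hashlib.sha256(
            (prev + json.dumps(body, sort_keys=True)).encode()
        ).hexdigest()
        if event["prev"] != prev or event["hash"] != expected:
            return False
        prev = event["hash"]
    return True
```

This makes the log tamper-evident, not tamper-proof: an attacker who can rewrite the whole file can re-chain it, which is why anchoring the latest hash in an external immutable store completes the picture.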