
Docker Deployment — Running Xferity in a Container with Docker Compose

Xferity ships with a multi-stage Dockerfile and a Docker Compose configuration. This page covers how they work, what to configure, and what to review before using them in production.

The repository Dockerfile is a multi-stage build:

  1. Build stage: compiles the Xferity binary from Go source
  2. Runtime stage: copies the binary into a minimal Alpine-based image with a non-root user

The runtime image:

  • runs as a non-root user
  • creates runtime directories under /app
  • exposes port 8080
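The repository's Dockerfile is authoritative; purely as an illustration of the two stages described above, such a build might look like the sketch below (the stage names, Go version, source layout, and user UID here are assumptions, not the repo's actual values):

```dockerfile
# --- Build stage: compile the Xferity binary from Go source (illustrative)
FROM golang:1.22-alpine AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /out/xferity .

# --- Runtime stage: minimal Alpine image, non-root user, runtime dirs under /app
FROM alpine:3.19
RUN adduser -D -u 10001 xferity \
    && mkdir -p /app/config /app/flows /app/partners /app/keys \
                /app/state /app/logs /app/audit /app/storage \
    && chown -R xferity:xferity /app
USER xferity
WORKDIR /app
COPY --from=build /out/xferity /usr/local/bin/xferity
EXPOSE 8080
CMD ["xferity", "run-service", "nav_incoming", "--interval-seconds", "300"]
```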

The default Docker CMD is:

```sh
run-service nav_incoming --interval-seconds 300
```

This is a placeholder that works for local testing. Review and change this to match your actual flow name before production use. Do not assume the default command will work for your deployment.
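If you deploy with Compose, overriding the default CMD is a one-line change; a minimal sketch, assuming your flow is named payroll-upload as in the examples later on this page:

```yaml
services:
  xferity:
    command: run-service payroll-upload --interval-seconds 300
```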

Build the image:

```sh
docker build -t xferity:latest .
```

Or with a specific version tag:

```sh
docker build -t xferity:1.0.0 .
```

Basic run with mounted config:

```sh
docker run -d \
  --name xferity \
  -p 8080:8080 \
  -v /etc/xferity/config:/app/config \
  -v /etc/xferity/flows:/app/flows \
  -v /etc/xferity/partners:/app/partners \
  -v /var/xferity/state:/app/state \
  -v /var/xferity/logs:/app/logs \
  -v /var/xferity/audit:/app/audit \
  -v /var/xferity/storage:/app/storage \
  -v /etc/xferity/keys:/app/keys \
  xferity:latest \
  run-service payroll-upload --interval-seconds 300
```

The repository includes a docker-compose.yml that mounts these paths and publishes port 8080.

The Compose service name in the repository's docker-compose.yml is xferity. Use commands like:

```sh
# Start the service
docker compose up -d xferity

# View logs
docker compose logs -f xferity

# Run validation
docker compose exec xferity xferity validate

# Run diagnostics
docker compose exec xferity xferity diag payroll-upload
```

Plan these mounts carefully before production use:

| Local path | Container path | Contents |
| --- | --- | --- |
| /etc/xferity/config | /app/config | Global config YAML |
| /etc/xferity/flows | /app/flows | Flow definition files |
| /etc/xferity/partners | /app/partners | Partner definition files |
| /etc/xferity/keys | /app/keys | Key and certificate files |
| /var/xferity/state | /app/state | State files (file backend) |
| /var/xferity/logs | /app/logs | Runtime logs |
| /var/xferity/audit | /app/audit | Audit JSONL files |
| /var/xferity/storage | /app/storage | Landing and staging paths |

All of these paths are part of the trust boundary. Apply appropriate filesystem permissions.
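One way to prepare the host-side mount points is sketched below. It is shown under a demo root so it can run unprivileged; on a real host you would create the directories under /, with ownership tightened to the UID the container runs as (that UID is deployment-specific, so it is not hard-coded here):

```shell
#!/bin/sh
# Illustrative only: create the host-side mount points with restrictive modes.
# Set ROOT=/ on a real host; the demo root lets this run without privileges.
ROOT="${ROOT:-./xferity-demo-root}"

mkdir -p "$ROOT/etc/xferity/config" "$ROOT/etc/xferity/flows" \
         "$ROOT/etc/xferity/partners" "$ROOT/etc/xferity/keys" \
         "$ROOT/var/xferity/state" "$ROOT/var/xferity/logs" \
         "$ROOT/var/xferity/audit" "$ROOT/var/xferity/storage"

chmod 700 "$ROOT/etc/xferity/keys"                            # key material: owner-only
chmod 750 "$ROOT/var/xferity/state" "$ROOT/var/xferity/audit" # no world access

ls -ld "$ROOT/etc/xferity/keys"
```

On a real host, follow this with a chown to the container user so the non-root process can write to the state, logs, audit, and storage paths.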

For non-sensitive config values, use environment variables with the FTO_ prefix or pass them through the config file.

For secrets, use environment variables (env: references in config) or mount secret files into the container.
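For example, a partner definition can reference a secret indirectly rather than embedding it. In this sketch the surrounding field names are hypothetical; only the env: indirection style comes from the paragraph above:

```yaml
# Hypothetical partner config fragment: field names are illustrative,
# but the env: reference resolves the value from the container environment.
sftp:
  username: acme-payroll
  password: env:PARTNER_SFTP_PASSWORD
```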

Example with environment variable secrets:

```sh
docker run -d \
  --name xferity \
  -e PARTNER_SFTP_PASSWORD=... \
  -e BANK_PGP_PASSPHRASE=... \
  ... \
  xferity:latest
```

In production, prefer a secrets management solution (Vault, AWS Secrets Manager, Kubernetes secrets, Docker secrets) over passing secrets directly as environment variables.
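As a sketch of the Docker secrets route: the secret file is mounted into the container at Docker's /run/secrets convention rather than exposed in the environment. The file path and secret name here are illustrative, and how Xferity reads the mounted file depends on your config:

```yaml
services:
  xferity:
    image: xferity:latest
    secrets:
      - partner_sftp_password   # appears in-container as /run/secrets/partner_sftp_password
secrets:
  partner_sftp_password:
    file: /etc/xferity/secrets/partner_sftp_password   # host-side file, keep permissions tight
```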

For Postgres-backed deployment, provide a DSN or connection fields:

```yaml
services:
  xferity:
    image: xferity:latest
    environment:
      - POSTGRES_DSN=postgres://xferity:password@postgres:5432/xferity?sslmode=require
    command: ui # or run-service <flow>
    depends_on:
      - postgres
  postgres:
    image: postgres:15
    environment:
      POSTGRES_DB: xferity
      POSTGRES_USER: xferity
      POSTGRES_PASSWORD: password
```

Configure a healthcheck for container orchestration:

```yaml
healthcheck:
  test: ["CMD", "wget", "-qO-", "http://localhost:8080/health/worker"]
  interval: 30s
  timeout: 10s
  retries: 3
  start_period: 10s
```

/health/worker is the unauthenticated readiness endpoint.

Security hardening checklist:

  • run as non-root (the Dockerfile creates a non-root user; ensure there is no --user root override)
  • avoid --privileged mode
  • mount only the paths you actually need
  • restrict port exposure to internal networks when possible
  • use Docker secrets or a vault integration for credentials rather than plain environment variables

Docker packaging does not add HA, clustering, or automatic failover to Xferity. If you need worker-level redundancy, run multiple worker processes with a shared Postgres backend.
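For worker-level redundancy as described above, a Compose sketch with two worker containers sharing one Postgres backend might look like this (the service names, flow name, and credentials are illustrative):

```yaml
services:
  worker-a:
    image: xferity:latest
    command: run-service payroll-upload --interval-seconds 300
    environment:
      - POSTGRES_DSN=postgres://xferity:password@postgres:5432/xferity?sslmode=require
    depends_on:
      - postgres
  worker-b:
    image: xferity:latest
    command: run-service payroll-upload --interval-seconds 300
    environment:
      - POSTGRES_DSN=postgres://xferity:password@postgres:5432/xferity?sslmode=require
    depends_on:
      - postgres
  postgres:
    image: postgres:15
    environment:
      POSTGRES_DB: xferity
      POSTGRES_USER: xferity
      POSTGRES_PASSWORD: password
```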