Installation

Adding remote hosts

How to initialize a new remote host for fsbackup and add it to targets.yml.

Each machine that fsbackup backs up needs a backup user with the fsbackup SSH public key installed, and rsync available on that machine.

Initialize the remote host

Copy remote/fsbackup_remote_init.sh to the target machine and run it as root:

scp /opt/fsbackup/remote/fsbackup_remote_init.sh root@<hostname>:/tmp/
ssh root@<hostname> bash /tmp/fsbackup_remote_init.sh \
  --pubkey "$(cat /var/lib/fsbackup/.ssh/id_ed25519_backup.pub)"

This script:

  • Creates the backup user (system user, no login shell)
  • Installs the SSH public key in ~backup/.ssh/authorized_keys
  • Restricts SSH to rsync-only via command= in authorized_keys
  • Installs rsync if not present
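
The rsync-only restriction is a forced command in ~backup/.ssh/authorized_keys. The exact entry the script writes is not reproduced here; one common pattern uses rrsync (a restricted-rsync wrapper that ships with rsync; its installed path varies by distribution):

```
command="/usr/bin/rrsync -ro /",restrict ssh-ed25519 AAAA...backup-key... fsbackup
```

Here `restrict` (OpenSSH 7.2+) disables PTY allocation, port forwarding, and agent forwarding in one option, and `-ro /` limits the key to read-only rsync transfers. Treat this as an illustration of the mechanism, not the literal line fsbackup_remote_init.sh emits.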

Trust the host key

Back on the backup server:

sudo /opt/fsbackup/utils/fs-trust-host.sh <hostname>
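
This pins the remote host key so later unattended rsync-over-SSH runs don't stop at an interactive host-key prompt. A minimal sketch of the core step such a script likely performs (the known_hosts path and key type here are assumptions, not fsbackup's documented behavior):

```shell
# Sketch: append the remote host's SSH key to the fsbackup user's
# known_hosts so backup runs are non-interactive.
# The default known_hosts path and the ed25519 key type are assumptions.
trust_host() {
  local host="$1"
  local known_hosts="${2:-/var/lib/fsbackup/.ssh/known_hosts}"
  ssh-keyscan -t ed25519 "$host" >> "$known_hosts"
}
```

Note that ssh-keyscan trusts whatever key the host presents on first contact; verifying the fingerprint out of band is good practice before relying on it.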

Add targets to targets.yml

Edit /etc/fsbackup/targets.yml to add targets for the new host. See Targets configuration for the full format.

Example — backing up /etc/nginx from a host called rp:

class2:
  - id: rp.nginx.config
    host: rp
    source: /etc/nginx
    type: dir
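
Once targets are defined they can be scripted over by id. As a convenience (not part of fsbackup), a minimal sketch that lists every target id, assuming the two-space-indented `- id:` layout shown above; a YAML-aware tool such as yq would be more robust:

```shell
# Print the id of every target in a targets.yml.
# Relies on the "- id: <value>" layout shown above; this is a
# line-oriented match, not a real YAML parse.
list_target_ids() {
  awk '/^ *- id:/ {print $NF}' "$1"
}
```

For example, `list_target_ids /etc/fsbackup/targets.yml` would include rp.nginx.config after the edit above.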

Create ZFS datasets for new targets

After editing targets.yml, provision ZFS datasets for any new targets:

sudo /opt/fsbackup/bin/fs-provision.sh

This is idempotent — it skips targets that already have datasets.
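
Conceptually, the idempotent step amounts to a create-if-missing check per target. A hedged sketch, assuming a pool/fsbackup/&lt;target id&gt; dataset naming scheme (an assumption, not fsbackup's documented layout):

```shell
# Sketch of the per-target idempotent step: create the ZFS dataset
# only if it does not already exist. The dataset naming scheme
# (backup/fsbackup/<id>) is an assumption for illustration.
provision_dataset() {
  local dataset="$1"
  if zfs list "$dataset" >/dev/null 2>&1; then
    echo "skip ${dataset} (already exists)"
  else
    zfs create "$dataset"
    echo "created ${dataset}"
  fi
}
```

Re-running it is safe because existing datasets hit the skip branch, which is what makes repeated fs-provision.sh runs harmless.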

Verify

sudo -u fsbackup /opt/fsbackup/bin/fs-doctor.sh --class class2

The new target should show OK. If it shows FAIL, check SSH connectivity:

sudo -u fsbackup ssh backup@rp echo ok

Local paths (same machine)

For paths on the backup server itself, use host: localhost in the target definition:

class1:
  - id: myapp.data
    host: localhost
    source: /docker/volumes/myapp_data
    type: dir

Then grant the fsbackup user read access:

sudo /opt/fsbackup/bin/fs-fix-permissions.sh

Running fs-fix-permissions.sh on Docker volume paths grants the fsbackup user read access to all files in those directories, including application secrets. Review each path carefully before applying.