Backup-data: multiple schedules and backends

It works like this; I will try to explain with an example.
Let’s configure a backup named duplicifs and create a file named /etc/backup-data/duplicifs.include.
The backup will contain only the files listed inside /etc/backup-data/duplicifs.include, minus the global excludes.
If we also create a file named /etc/backup-data/duplicifs.exclude, the list of excluded files is read only from this file.

Why this implementation? Because it’s very flexible and allows the creation of backups with a limited data set.
You can keep a single backup that saves everything, and add a new backup named mailbackup which backs up only the mail every hour.
To achieve this you need to create two files (a shell sketch follows the list):

  1. /etc/backup-data/mailbackup.include, which contains only:
    /var/lib/nethserver/vmail
    
  2. an empty /etc/backup-data/mailbackup.exclude
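
As a minimal sketch, assuming the new backup instance already exists and is named mailbackup as above (the names are just examples):

  echo '/var/lib/nethserver/vmail' > /etc/backup-data/mailbackup.include   # back up only the mail store
  : > /etc/backup-data/mailbackup.exclude                                  # empty file: overrides the global exclude list
  backup-data -b mailbackup                                                # run the new backup by hand once to test it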

If something is still unclear, please ask! :slight_smile:

OK, I didn’t get the idea from the docs, but now I understand. Thanks.

Just added the same example to the administrator manual.

After some tests, we also implemented the restic Prune option, executed once a week.
On our backup, a daily prune takes around 2 hours; if it’s executed once a week it takes around 6 hours.
If the prune is executed less often (like once a month), the process will be really slow and could clash with the next backup job. (/cc @pike @m.traeumner)
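
For reference, a manual prune with plain restic looks roughly like the sketch below; the repository path and password file are placeholders, not necessarily what backup-data uses internally:

  export RESTIC_PASSWORD_FILE=/root/.restic-password        # placeholder password file
  restic -r /mnt/backup-restic/myrepo prune                 # reclaim space left by removed snapshots
  # or apply a retention policy and prune in a single pass:
  restic -r /mnt/backup-restic/myrepo forget --keep-daily 7 --keep-weekly 4 --prune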

test case 4 (a-c) (duplicity, restic; cifs, nfs): OK
test case 4 (a-c) (rsync; nfs): OK
test case 5 (restic, rsync; sftp): OK

If I want to save an NFS/Samba backup to an RDX drive that has 5 different cartridges, all showing up as /dev/sdc1, I would have to create 5 jobs, which is not possible because only one cartridge can be loaded per day. Would I have to do a full backup every time?

Sorry, tape is not supported, but I know @filippo_carletti would like to add a custom script for it.
Just try to add your tar-based script inside the post-backup event.
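
As a rough sketch only (the script name, mount point and source path below are my assumptions, not a documented layout), a tar-based copy could be dropped into the post-backup-data event as an executable file, e.g. /etc/e-smith/events/post-backup-data/S95copy-to-rdx:

  #!/bin/bash
  # Hypothetical action script: copy the latest backup set to the RDX cartridge
  mount /dev/sdc1 /mnt/rdx || exit 1
  tar -czf "/mnt/rdx/backup-$(date +%F).tar.gz" -C /mnt/backup-data .
  umount /mnt/rdx
  eject /dev/sdc

Remember to make it executable (chmod +x); it will then run after every backup-data job.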

RDX is not a real tape but a USB drive with removable disks; it is only recognized as a tape by some backup software. I have now solved it with a cron job that performs an rsync, and with the SOGo tool I make a backup to the medium.

RDX should be configured as USB.

@kunstlust does RDX support eject of the “cartridge” from shell?

Yes: eject /dev/sdx

Hey @giacomo,

I just configured a simple (not multiple) rsync backup to an NFS share, launched backup-data -b mybackup a few times, and noticed these lines at the beginning of the /root/.rsync_tmbackup/xxx.log file :slight_smile:

2018/09/17 21:40:47 [14466] building file list
2018/09/17 21:40:47 [14466] rsync: chown "/mnt/backup-TimeMachine/cloud/2018-09-17-214047/root" failed: Operation not permitted (1)
2018/09/17 21:40:47 [14466] rsync: chown "/mnt/backup-TimeMachine/cloud/2018-09-17-214047/root/.byobu" failed: Operation not permitted (1)

And so on for all folders already existing on the NFS share.

I don’t know what to think. Do you?

It could depend on the permission mapping on the NFS server.

Try to mount the filesystem and check with ‘stat’ what the file permissions are.

stat /mnt/backup-backup-data/cloud/2018-09-18-201709/root/LOG_imapsync/2018_09_17_15_44_44_olivia\@lebrass.be.txt 
  File: ‘/mnt/backup-backup-data/cloud/2018-09-18-201709/root/LOG_imapsync/2018_09_17_15_44_44_olivia@lebrass.be.txt’
  Size: 11076           Blocks: 23         IO Block: 262144 regular file
Device: 29h/41d Inode: 326420      Links: 1
Access: (0600/-rw-------)  Uid: (  100/ UNKNOWN)   Gid: (  100/   users)
Access: 1970-01-01 07:10:45.000000000 +0100
Modify: 2018-09-18 20:17:21.492797411 +0200
Change: 2018-09-18 20:17:21.504797569 +0200
 Birth: -

Reading this https://serverfault.com/questions/212178/chown-on-a-mounted-nfs-partition-gives-operation-not-permitted it looks like this is the result of so-called “root squashing”, which prevents the client from performing operations on the NFS server as root.
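
For illustration only (this is not my provider’s real configuration), the behaviour is controlled by the export options on the server side, e.g. in /etc/exports, compare these two alternatives:

  /srv/backup   203.0.113.10(rw,sync,root_squash)      # root on the client is mapped to an unprivileged user
  /srv/backup   203.0.113.10(rw,sync,no_root_squash)   # root is preserved, so rsync's chown calls would succeed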

Since I don’t administer the NFS server, which is a backup storage provided by my hosting company, it looks like rsync will not work for me: the only other options are CIFS (which doesn’t support Linux extensions and therefore hard links) and FTP (no comment).

Update. Their documentation states that:

L’utilisateur NFS est root, les modifications de droits avec cet utilisateur peuvent générer des conflits avec des droits CIFS existants

which translates roughly to:

The NFS user is root; modifications to rights made with that user could create conflicts with existing CIFS rights

Any thoughts on adding support for Duplicati? It’s got some nice features:

  • Built-in support for a number of cloud storage providers (Microsoft OneDrive, Amazon Cloud Drive & S3, Google Drive, box.com, Mega, hubiC, etc.), as well as standard protocols like WebDAV, FTP, SSH, etc.
  • Built-in AES-256 encryption of backup data
  • Has a decent web GUI of its own to set up backups (including getting authentication tokens from the cloud storage, if you’re using that), or it can run from the CLI as well.

There’s a CentOS-compatible RPM available, and yum does a fine job of tracking dependencies and such. After reviewing this write-up, getting it running on a Neth box only took a few minutes (the only difference was that I substituted config set fw_duplicati ... and signal-event firewall-adjust for the fw commands there).
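
In case it helps, the firewall part I used looks roughly like this; the service name is my own choice, 8200 is Duplicati’s default web UI port, and the exact property names and access zone should be double-checked against the NethServer docs:

  config set fw_duplicati service status enabled TCPPort 8200 access green   # declare the service
  signal-event firewall-adjust                                               # reload the firewall rules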

+1 for Duplicati; I already use it to back up a few remote machines to S3 and it works well.

+3 for Duplicati.

I just took a quick look at the Duplicati manual: https://duplicati.readthedocs.io/en/latest/04-using-duplicati-from-the-command-line/
Some links are broken, so I’m not sure how to fully configure it from the command line.

If all of Duplicati is published under an Open Source license, we could try to integrate it.
Before integrating, the software should have the following features (from the command line):

  • inclusion and exclusion of a list of files and directories
  • execute a backup
  • restore a file or an entire backup to a selected target directory
  • list backup content

It is: it’s LGPL.

Supported, though I think it’d be by way of multiple --include and --exclude arguments, rather than by passing a list.

All of these are supported.

It looks like the “find” command could do something like this, though it isn’t immediately clear from the docs how you’d go about getting a list of all the contents of the most recent backup.
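
For example, reusing the same (redacted) storage URL as the backup command below, listing and restoring would look something like this; the restored file name is a placeholder and the exact flags are worth checking against the manual:

  # list the files contained in the latest backup version
  mono /usr/lib/duplicati/Duplicati.CommandLine.exe list "googledrive://duplicati/main-neth-backup?authid=(redacted)" --passphrase="(redacted)"
  # restore a single file from it into /tmp/duplicati-restore
  mono /usr/lib/duplicati/Duplicati.CommandLine.exe restore "googledrive://duplicati/main-neth-backup?authid=(redacted)" "/var/lib/nethserver/vmail/somefile" --restore-path=/tmp/duplicati-restore --passphrase="(redacted)"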

Edit: As to doing it on the command line, here’s the command line for the full system backup I’m running:

mono /usr/lib/duplicati/Duplicati.CommandLine.exe backup \
  "googledrive://duplicati/main-neth-backup?authid=(redacted)" \
  /root/ /var/lib/nethserver/ /var/lib/collectd/ /var/lib/rspamd/ /var/lib/redis/ \
  /var/lib/sogo/backups/ /var/www/html/ /usr/share/nextcloud/config/config.php \
  --backup-name="Main Neth Backup" \
  --dbpath=/root/.config/Duplicati/DTGNDWJOQY.sqlite \
  --encryption-module=aes --compression-module=zip --dblock-size=50mb \
  --passphrase="(redacted)" \
  --retention-policy="1W:1D,4W:1W,12M:1M" \
  --run-script-before="signal-event pre-backup-data" \
  --run-script-after="signal-event post-backup-data" \
  --disable-module=console-password-input