We can’t remove it; we must preserve backward compatibility with thousands of servers.
But we can hide the single backup once we implement the new Cockpit-based UI.
I agree, but this is the most flexible implementation. We will also hide this complexity.
Okay. I still don’t quite see the backward compatibility issue (it looks easier to migrate the old settings to fit the new backup system), but I trust you.
Don’t forget the documentation.
Remember, we already shared some thoughts about this:
We could also use some on-the-fly Lucene indexing if we really want the UI to be super responsive, instead of building a file list when the UI page loads. I’m not sure it’s worth the trouble: someone wanting to restore some files will easily accept waiting a few seconds while the list of files is built.
I just merged nethserver-restic and nethserver-rsync into the nethserver-backup-data package.
The restic binary has been moved to a restic RPM, which can eventually be swapped for non-x86_64 architectures.
If you already installed the packages from testing, you need to execute the following:
TESTS
test case 1: OK (but the last backup info is not shown on the dashboard, for this and for any other Single Backup test case)
test case 2 (a-c) (restic; cifs, nfs, webdav): OK
test case 3 (a-c) (rsync; nfs): OK (chown failures in the previous test were due to a bad NFS config on the destination)
test case 3 (a-c) (rsync; cifs, webdav): FAILED
Backup is done
Can list backup files (backup-data-list)
Cannot restore files: the server-manager says the folders were restored, but they were not. The CLI reports a symlink problem.
Results of the restore-data command:
# restore-data
Restore started at 2018-07-21 20:16:03
Event pre-restore-data: SUCCESS
rsync: change_dir "/mnt/backup/server/latest" failed: No such file or directory (2)
Number of files: 0
Number of created files: 0
Number of deleted files: 0
Number of regular files transferred: 0
Total file size: 0 bytes
Total transferred file size: 0 bytes
Literal data: 0 bytes
Matched data: 0 bytes
File list size: 0
File list generation time: 0.001 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 20
Total bytes received: 12
sent 20 bytes received 12 bytes 64.00 bytes/sec
total size is 0 speedup is 0.00
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1178) [sender=3.1.2]
Action '/etc/e-smith/events/actions/restore-data-rsync': SUCCESS
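Note the mismatch above: rsync exits with code 23, yet the action reports SUCCESS. A minimal sketch of how an action script could map rsync exit codes to a status instead (the function name is illustrative, not the actual NethServer action code):

```shell
#!/bin/sh
# Hypothetical helper: translate an rsync exit code into an action
# status, so that code 23 ("some files/attrs were not transferred")
# is reported as FAIL rather than SUCCESS.
classify_rsync_exit() {
    case "$1" in
        0)  echo "SUCCESS" ;;
        23) echo "FAIL"; echo "partial transfer: some files/attrs were not transferred" >&2 ;;
        24) echo "FAIL"; echo "some source files vanished during transfer" >&2 ;;
        *)  echo "FAIL"; echo "rsync exited with code $1" >&2 ;;
    esac
}

# Example:
classify_rsync_exit 23   # prints FAIL
```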
Result of test case 3 with rsync and webdav
rsync: failed to set permissions on "/mnt/backup/server/2018-07-25-151336/var/lib/nethserver/ibay/sharedfolder": Invalid argument (22)
rsync: mkstemp "/mnt/backup/server/2018-07-25-151336/var/lib/nethserver/nextcloud/.htaccess.EMiL21" failed: Invalid argument (22)
ln: failed to create symbolic link ‘/mnt/backup/server/latest’: Function not implemented
Backup failed
Action 'backup-data-rsync ': FAIL
Backup status: FAIL
rsync_tmbackup: Backup completed without errors.
ln: failed to create symbolic link ‘/mnt/backup/server/latest’: Operation not supported
Action 'backup-data-rsync ': SUCCESS
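The “Function not implemented” / “Operation not supported” errors above come from creating a symlink on a mount (davfs2/WebDAV) that does not support symlinks. A hedged sketch of one possible fallback, assuming a plain-file marker would be an acceptable substitute for the latest symlink (function name and marker file are illustrative, not the actual implementation):

```shell
#!/bin/sh
# Hypothetical fallback: try the symlink first; if the destination
# filesystem refuses it, record the latest snapshot name in a plain
# marker file instead.
record_latest() {
    dest=$1
    snapshot=$2
    mkdir -p "$dest/$snapshot"
    if ln -sfn "$snapshot" "$dest/latest" 2>/dev/null; then
        echo "symlink"                              # normal filesystems
    else
        printf '%s\n' "$snapshot" > "$dest/latest.txt"
        echo "marker"                               # e.g. davfs2 mounts
    fi
}
```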
General problems
When using single backup with any engine, /var/log/last-backup.log is not created.
The log also shows “Requested path not found”:
esmith::event[4719]: Action: /etc/e-smith/events/pre-backup-data/S20nethserver-backup-config-predatabackup SUCCESS [1.203213]
esmith::event[4719]: Requested path not found
Many problems using webdav in single backup mode with any engine. I suspect the webdav server: reports of open files exceeding the max cache size, problems removing locks, or duplicity’s remote manifest not matching the local one. (EDIT: with a different local webdav destination server, no problems.)
Restic deletes snapshots according to the retention policy, but disk space is not reclaimed automatically, you have to run a prune operation.
See https://restic.net/blog/2016-08-22/removing-snapshots for details.
NethServer forces pruning of snapshots to reclaim space on every backup, to mimic duplicity.
But prune is a really expensive operation. Real-life example: a backup of about 250G with 2.5 million files and about 1G of daily changes completes in less than 10 minutes on a 30 gbit/s link.
The same backup takes about 2.5 hours to prune.
And, due to deduplication, the prune operation frees only a little space. I think we are wasting resources (CPU and time) for very little benefit.
I propose to run prune only once in a while (maybe weekly) as a separate cron job.
We could add an option to select the prune frequency, but I don’t like adding yet another option.
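The weekly-prune proposal could look roughly like the cron fragment below. The repository path, password file, and lock file are assumptions for illustration only (not the actual nethserver-backup-data implementation); `restic forget --prune` and the retention flags are standard restic options.

```shell
#!/bin/sh
# Hypothetical /etc/cron.weekly/restic-prune sketch: run the expensive
# prune once a week instead of after every backup.
# NOTE: repository path, password file and lock file are illustrative.
export RESTIC_REPOSITORY=/mnt/backup/restic
export RESTIC_PASSWORD_FILE=/etc/restic/password

# flock -n fails fast if a backup holding the same lock is running,
# so the prune never overlaps a restic backup.
flock -n /run/restic-backup.lock \
    restic forget --keep-daily 7 --keep-weekly 4 --prune
```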
Agree. I can’t say how often (are you thinking of the weekend?), but I hope the time it takes doesn’t clash with working hours (I guess the task can consume a lot of server resources).
An option for prune frequency and schedule would be good.
I think it would be good to let the user set the time and frequency of the prune operation, because every business is different. Some people may need to run the job every 2 days to have enough storage space for the next backup, while others work on Sunday but not on Tuesday, for example.
Yes, this log has been removed and everything is now logged inside /var/log/backup.
Did you find any incorrect documentation about it? Maybe I missed something.
This is quite strange; I was sure I had already tested it. I will work on it.
I agree, but the implementation could be hard because no other restic backup must be running during the prune.
What about executing the prune operation using a hook script (GitHub - NethServer/nethserver-backup-data)?
Not really; it can be executed even during working hours on a normal server.
Agree, a new option is necessary.
Following up on my proposal, I’d like to add a Prune option for each backup with the following values:
never: do not prune
always: always prune after the backup
day of the week: a value from 0 to 6, which executes the prune once a week on the selected day
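The three proposed values could be interpreted by a small helper like this (hypothetical function name, assuming the Prune prop value is passed in verbatim):

```shell
#!/bin/sh
# Hypothetical interpretation of the proposed Prune option:
#   never        -> do not prune
#   always       -> prune after every backup
#   0..6         -> prune only when today is the selected weekday
should_prune() {
    prop=$1
    today=$(date +%w)           # 0 = Sunday ... 6 = Saturday
    case "$prop" in
        never)  return 1 ;;
        always) return 0 ;;
        [0-6])  [ "$prop" = "$today" ] ;;
        *)      return 1 ;;     # unknown value: be safe, skip the prune
    esac
}
```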
A prune could (or could not) overlap in time with a backup on the restic backend, which could generate an error.
I’m asking whether it would be possible (or logical) to have a “prune before backup” option, or to add a field/option for “on which occasions the restic backup should be pruned”… @giacomo, is a week the only timespan? Could pruning once a month be more efficient in terms of time spent versus space gained?
IIRC, /var/log/backup was empty when I tried, and /var/log/last-backup.log was referenced on stdout/stderr after the backup failed or completed. Also, this log was used to read the last-backup data shown on the dashboard for the duplicity engine.
The log is generated only if the backup is invoked using backup-data-wrapper, otherwise the output is sent to stdout.
I also removed the wrong reference to the old unused /var/log/last-backup.log file.
As a side note, this refactor also needs a new PR for nethserver-duc, which should now exclude all backup mount directories like /mnt/backup-* (I hope @edoardo_spadoni can help here).
Single Backup Mode: the last backup end time on the dashboard is off by 2 hours (e.g. 20:07). The log filename and content have the correct time (e.g. 22:07, same as the date command). Backup called from backup-data-wrapper.
MULTIPLE BACKUPS
test case 4 (duplicity, restic; cifs, nfs):
if a custom inclusion is set: the global backup inclusion is not respected. Only files in the custom inclusion are backed up.
multiple backups read the same configuration as the single backup.
(…) file will override the list of included and excluded files from the single backup
OK, so /etc/backup-data/*.include takes precedence over /etc/backup-data.d/custom.include
Sorry, by using the “global inclusion” wording I didn’t make myself clear. If a custom inclusion is set for the multiple backup job, then NOTHING else is backed up (no /var/lib/nethserver/* …)