I’ve recently moved my home backups over to restic. I’m using restic to back up the /etc and /home folders on all machines, as well as my website files and databases. Media files are backed up separately.
I have around 220 Gigabytes of data, about half of that is photos.
My Home Setup
I currently have 4 regularly-used physical machines at home: two workstations, one laptop and a server. I also have a VPS hosted at Linode and a VM running on the home server. Everything is running Linux.
Existing Backup Setup
For at least 15 years I’ve been using rsnapshot for backup. rsnapshot works by keeping a local copy of the folders to be backed up. To update the local copy it uses rsync over ssh to pull down a copy from the remote machine. It then keeps multiple old versions of files by making a series of copies.
I’d end up with around 12 older versions of the filesystem (something like 5 daily, 4 weekly and 3 monthly) so I could recover files that had been deleted. To save space rsnapshot uses hard links so only one copy of a file is kept if the contents didn’t change.
I also backed up a copy to external hard drives regularly and kept one copy offsite.
The main problem with rsnapshot was that it was a little clunky. It took a long time to run because it copied and deleted a lot of files every time it ran. It is also difficult to exclude folders from being backed up, it is not compatible with any cloud-based storage, and it requires ssh keys that can log in to remote machines as root.
Getting started with restic
I started playing around with restic after seeing some recommendations online. As a single binary with a few commands it seemed a little simpler than other solutions. It has a push model, so it needs to be installed on each machine, which then uploads its files to the repository.
Restic supports around a dozen storage backends for repositories. These include the local file system, sftp and Amazon S3. When you create a repository via “restic init” it creates a simple file structure in most backends:
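For example, with the local backend a freshly initialised repository contains a handful of files and folders along these lines:

```
config       (repository settings and id)
data/        (the encrypted, deduplicated chunks of your files)
index/       (indexes into the data files)
keys/        (one encrypted key file per password)
locks/
snapshots/   (one small file per snapshot)
```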
You can then use simple commands like “restic backup /etc” to backup files to there. The restic documentation site makes things pretty easy to follow.
Restic automatically encrypts backups, and each server needs a key to read/write its backups. However, any key can see all files in a repository, even those belonging to other hosts.
Backup Strategy with Restic
I decided on the following strategy for my backups:
- Make a daily copy of /etc and other files for each server
- Keep 5 daily and 3 weekly copies
- Have one copy of data on Backblaze B2
- Have another copy on my home server
- Export the copies on the home server to external disk regularly
Backblaze B2 is very similar to Amazon S3 and is supported directly by restic. It is however cheaper. Storage is 0.5 cents per gigabyte per month and downloads are 1 cent per gigabyte. In comparison, AWS S3 One Zone Infrequent Access charges 1 cent per gigabyte per month for storage and 9 cents per gigabyte for downloads.
| What | Backblaze B2 | AWS S3 |
| --- | --- | --- |
| Store 250 GB per month | $1.25 | $2.50 |
| Download 250 GB | $2.50 | $22.50 |
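The figures in the table follow directly from the per-gigabyte rates quoted above; a quick sketch of the arithmetic:

```python
# Reproduce the cost comparison from the rates quoted above:
# B2: $0.005/GB-month storage, $0.01/GB download;
# S3 One Zone-IA: $0.01/GB-month storage, $0.09/GB download.
def monthly_cost(stored_gb, store_rate, downloaded_gb, download_rate):
    """Return (storage cost for one month, cost of one full download)."""
    return stored_gb * store_rate, downloaded_gb * download_rate

b2_store, b2_download = monthly_cost(250, 0.005, 250, 0.01)
s3_store, s3_download = monthly_cost(250, 0.01, 250, 0.09)

print(f"B2: store ${b2_store:.2f}/month, download ${b2_download:.2f}")
print(f"S3: store ${s3_store:.2f}/month, download ${s3_download:.2f}")
```

At my roughly 220 GB the storage difference is small in absolute terms, but a full restore from S3 would cost nearly ten times as much.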
AWS S3 Glacier is cheaper for storage but hard to work with and retrieval costs would be even higher.
Backblaze B2 is less reliable than S3 (they had an outage when I was testing) but this isn’t a big problem when I’m using them just for backups.
Setting up Backblaze B2
To set up B2 I went to the website and created an account. I would advise putting in your credit card once you finish initial testing, as it will not let you store more than 10 GB of data without one.
I then created a private bucket and changed the bucket’s lifecycle settings to only keep the last version.
I decided that for security I would have each server use a separate restic repository. This means I use a bit of extra space, since restic can no longer deduplicate files that are identical across machines. I ended up using around 15% more.
For each machine I created a B2 application key and set its namePrefix to the name of the machine. This means that each application key can only see files in its own folder.
On each machine I installed restic and then created an /etc/restic folder. I then added the file b2_env:
```
export B2_ACCOUNT_ID=000xxxx
export B2_ACCOUNT_KEY=K000yyyy
export RESTIC_PASSWORD=abcdefghi
export RESTIC_REPOSITORY=b2:restic-bucket:/hostname
```
You can now just run “restic init” and it should create an empty repository; you can check via the B2 web interface that the files have appeared.
I then had a simple script that runs:
```
source /etc/restic/b2_env
restic --limit-upload 2000 backup /home/simon --exclude-file /etc/restic/home_exclude
restic --limit-upload 2000 backup /etc /usr/local /var/lib /var/backups
restic forget --verbose --keep-last 5 --keep-daily 6 --keep-weekly 3
```
The “source” command loads in the api key and passwords.
The restic backup lines do the actual backup. I have restricted my upload speed to 2000 KiB/s (roughly 16 Megabits/second). The /etc/restic/home_exclude file lists folders that shouldn’t be backed up. For this I have:
as these are folders with regularly changing contents that I don’t need to backup.
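My actual exclude list isn’t reproduced above, but as an illustration (these entries are hypothetical, not my real list), a home_exclude file targeting cache-style folders might look like:

```
# /etc/restic/home_exclude — example entries only
/home/simon/.cache
/home/simon/.local/share/Trash
/home/simon/Downloads
```

restic’s exclude files take one pattern per line, and lines starting with “#” are treated as comments.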
The “restic forget” command removes older snapshots. I’m telling it to keep 6 daily copies and 3 weekly copies of my data, plus at least the 5 most recent snapshots no matter how old they are.
This command doesn’t actually free up the space taken up by the removed snapshots. I need to run the “restic prune” command for that. However according to this analysis the prune operation generates so many API calls and data transfers that the payback time on disk space saved can be months(!). So for now I’m planning to run the command only occasionally (probably every few months, depending on testing).
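The script above has to be scheduled somewhere; I run it daily. A cron entry for that might look like this (the script path /etc/restic/backup.sh is an assumption, not necessarily where you would keep it):

```
# /etc/cron.d/restic-backup — run the backup script nightly at 03:30
30 3 * * * root /etc/restic/backup.sh
```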
Setting up sftp
As well as backing up to B2 I wanted to backup my data to my home server. In this case I decided to have a single repository shared by all the servers.
First of all I created a “restic” account on my server with a home of /home/restic. I then created a folder /media/backups/restic owned by the restic user.
I then followed this guide for sftp-only accounts to restrict the restic user. The relevant lines I changed were “Match User restic” and “ChrootDirectory /media/backups/restic”.
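For reference, the Match block in /etc/ssh/sshd_config ends up looking something like this (a sketch based on that guide; adjust the paths to your setup):

```
# Restrict the restic user to chrooted sftp only
Match User restic
    ChrootDirectory /media/backups/restic
    ForceCommand internal-sftp
    AllowTcpForwarding no
    X11Forwarding no
```

One gotcha: sshd insists that the ChrootDirectory itself is owned by root and not writable by anyone else, so the folder the restic user writes to normally sits one level below it.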
On each host I also needed to run “cp /etc/ssh/ssh_host_rsa_key /root/.ssh/id_rsa” and add the host’s public key to /home/restic/.ssh/authorized_keys on the server.
Then it is just a case of creating an sftp_env file like in the B2 example above, except this one is a little shorter:
```
export RESTIC_REPOSITORY=sftp:firstname.lastname@example.org:shared
export RESTIC_PASSWORD=abcdefgh
```
For backing up my VPS I had to do another step, since it couldn’t push files to my home server. Instead I added a script that runs on the home server and uses rsync to copy down folders from my VPS. I used rrsync to restrict this script.
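The pull script is essentially a single rsync invocation; the hostname and paths below are placeholders, not my real ones:

```
#!/bin/sh
# On the home server: pull a copy of the VPS web root (example paths)
rsync -az --delete vps.example.org:/var/www/ /srv/vps-copy/www/
```

On the VPS side, the matching line in authorized_keys is prefixed with a forced command such as “command="rrsync -ro /var/www"” so that the key can only be used to read that one directory.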
Once I had a local folder I ran “restic --host vps-name backup /copy-of-folder” to back it up over sftp. The --host option made sure the backups were listed under the right machine.
Since the restic repository is just a bunch of files, I copy it directly to an external disk which I keep outside the house.
I’m fairly happy with restic so far. I haven’t run into too many problems or gotchas yet, although if you are starting out I’d suggest testing with a small repository to get used to the commands.
I have copies of keys in my password manager for recovery.
There are a few things I still have to do, including setting up some monitoring and deciding how often to run the prune operation.