In this tutorial, we will set up a secure backup solution primarily using rsync and GPG.

Basic backup setup with rsync
We start off with a simple bash script to back up our data without encryption. Our goals are:
- Retain multiple backups over a reasonable time period, say one backup a day for the last fourteen days.
- We'll store each backup separately so that it can be easily restored.
Here's an example script that will back up your web directory and a MySQL database. The filename will include the date of the backup:
#!/bin/bash
date=`date '+%F_%H:%M:%S'`
tar -czf /tmp/$date.tar.gz /var/www/
mysqldump --skip-lock-tables -u backup --password='BACKUPPASSWORD' mydb | gzip > /tmp/$date.sql.gz
rsync -e 'ssh -i /root/.ssh/backup_id' -a /tmp/$date.tar.gz /tmp/$date.sql.gz backups@backup.example.com:~/
rm /tmp/$date.tar.gz /tmp/$date.sql.gz
You'll need to modify:
- Directories to back up: change from /var/www/ to one or more directories
- Database to back up: change from mydb to your database name
- Password for database backup user: change from BACKUPPASSWORD
- Backup server: replace the user@host destination in the rsync line with your backup server and the user to store backups under
Once you've made these changes, save the script as
/usr/local/bin/backup.sh. Make it executable and unreadable by normal users with
chmod 700 /usr/local/bin/backup.sh.
Now, we'll create the MySQL user for backups:
mysql -u root -p
> GRANT LOCK TABLES, SELECT ON mydb.* TO 'backup'@'localhost' IDENTIFIED BY 'BACKUPPASSWORD';
> FLUSH PRIVILEGES;
We also need to set up SSH keys for the backup script (the -i option in the script above tells the SSH client where to look for the keys):
ssh-keygen -t rsa -f /root/.ssh/backup_id
ssh-copy-id -i /root/.ssh/backup_id backups@backup.example.com
Finally, we set up a cron job to make this run daily:
echo '0 0 * * * root /usr/local/bin/backup.sh' > /etc/cron.d/backup
And that's it for the backup script! Try running it to make sure there are no problems.
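If you'd rather not touch your real data while testing, you can rehearse the archive step in isolation first. This is a minimal sketch against a throwaway directory; the temporary path and index.html file are made up for the demonstration:

```shell
# Dry-run of the tar step against a throwaway directory (hypothetical data).
src=$(mktemp -d)
echo '<html></html>' > "$src/index.html"
date=$(date '+%F_%H:%M:%S')
tar -czf "/tmp/$date.tar.gz" -C "$src" .
tar -tzf "/tmp/$date.tar.gz"          # should list ./index.html
rm -r "$src" "/tmp/$date.tar.gz"
```

If the listing shows your files, the archive half of the script is working; the rsync half can then be tested by checking the backup server's home directory.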
At this point, we should be successfully creating the backups. However, we'll run out of space since old backups aren't being deleted. We can solve this with a simple cron job on the backup server:
echo '30 0 * * * backups find /home/backups/ -name "*.gz" -mtime +14 -delete' > /etc/cron.d/clean_backups
The find command above recursively scans the given directory, identifies files that both end in ".gz" and were modified more than fourteen days ago, and deletes them. You may need to change the username and/or directory.
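You can convince yourself the retention rule works without waiting two weeks by backdating some files. A small experiment (the temporary directory and filenames are hypothetical):

```shell
# Simulate an expired backup and a fresh one, then apply the same find rule.
dir=$(mktemp -d)
touch -d '20 days ago' "$dir/old.sql.gz"   # outside the 14-day window
touch "$dir/new.sql.gz"                    # created just now
find "$dir" -name '*.gz' -mtime +14 -delete
ls "$dir"                                  # only new.sql.gz should remain
rm -r "$dir"
```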
Encrypting your backups with GPG
Although the previous section's script transmits our backups securely using rsync (which tunnels the file transfers over SSH), a malicious actor who gets their hands on one of our backups could obtain all of the data. We can't do much about our application server, where our web files and MySQL database reside (even if we encrypted the database, our application still needs to access the data somehow, so a key would need to be stored in memory that the attacker could find), but we can certainly encrypt our backups so that our backup server isn't an easy attack vector.
Start by generating a GPG keypair on a secure computer (this may be your personal computer, another server, or the application server; definitely not the backup server though!):
gpg --gen-key

This will ask you for various parameters.
- Key type: stick with the default
- Key size: 2048 bits is suggested
- Validity: set to 0 so the key doesn't expire; this key is for internal usage only so it's fine
- Name, e-mail address, comment: again, since this key is only for internal usage, these can be anything you like
- Make sure to enter a strong passphrase
It'll take a while to generate the key, since the system has to collect enough entropy.
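On Linux you can watch how much entropy the kernel thinks it has while you wait; generating disk or keyboard activity on the same machine speeds things up:

```shell
# The kernel's current entropy estimate, in bits (Linux-specific path).
cat /proc/sys/kernel/random/entropy_avail
```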
Once it's done, if you didn't generate the keypair directly on the application server, then export the public key to the application server:
(keypair computer) gpg --list-keys
(keypair computer) gpg --export -a PUB_KEY_ID_FROM_ABOVE > backup_pub.asc && scp backup_pub.asc root@appserver.example.com:~/
(application server) gpg --import backup_pub.asc && rm backup_pub.asc
We're almost done; we just need to modify the backup script to perform the encryption. We first need the long key ID of our keypair, which we can find using the command below:
gpg --list-keys --keyid-format LONG
The output will look something like this:
/home/me/.gnupg/pubring.gpg
---------------------------
pub   2048R/A2FC04C0D2067E6B 2015-07-07
uid                          Backups
sub   2048R/2A1217D86D6DB4E3 2015-07-07
In this case, "A2FC04C0D2067E6B" is the long key ID that you want to use in the script. Now modify the script at /usr/local/bin/backup.sh to look like this:
#!/bin/bash
date=`date '+%F_%H:%M:%S'`
long=[THE LONG KEY ID]
tar -czf /tmp/$date.tar.gz /var/www/
mysqldump --skip-lock-tables -u backup --password='BACKUPPASSWORD' mydb | gzip > /tmp/$date.sql.gz
gpg --encrypt --batch --cipher-algo AES256 --compress-algo none -r $long -o /tmp/$date.tar.gz.enc --trusted-key $long /tmp/$date.tar.gz
gpg --encrypt --batch --cipher-algo AES256 --compress-algo none -r $long -o /tmp/$date.sql.gz.enc --trusted-key $long /tmp/$date.sql.gz
rsync -e 'ssh -i /root/.ssh/backup_id' -a /tmp/$date.tar.gz.enc /tmp/$date.sql.gz.enc backups@backup.example.com:~/
rm /tmp/$date.tar.gz /tmp/$date.sql.gz /tmp/$date.tar.gz.enc /tmp/$date.sql.gz.enc
The cron job that deletes old backups also needs its pattern changed, from "*.gz" to "*.enc".
If you're unlucky enough to need to recover from your backups, you can decrypt a backup with
gpg --decrypt backup.enc > backup, and then either extract the contents or use the dump to restore the database.
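It's worth rehearsing the restore path before you ever need it. The sketch below runs the full encrypt-decrypt-extract cycle with a throwaway keypair in a temporary GNUPGHOME, so it never touches your real keyring or backups; every name in it (restore-test@example.invalid, file.txt) is made up for the demo, and it assumes GnuPG 2.1 or newer:

```shell
# Self-contained restore rehearsal with a disposable, passphrase-less keypair.
home=$(mktemp -d); chmod 700 "$home"
gpg --homedir "$home" --batch --pinentry-mode loopback --passphrase '' \
    --quick-gen-key 'restore-test@example.invalid' default default never
# Fake "backup": a tarball of a scratch directory.
src=$(mktemp -d); echo 'important data' > "$src/file.txt"
tar -czf "$home/backup.tar.gz" -C "$src" .
gpg --homedir "$home" --batch --trust-model always \
    -r 'restore-test@example.invalid' --encrypt \
    -o "$home/backup.tar.gz.enc" "$home/backup.tar.gz"
# This is the same decrypt-then-extract path you would use on a real backup.
gpg --homedir "$home" --batch --pinentry-mode loopback \
    --decrypt "$home/backup.tar.gz.enc" > "$home/restored.tar.gz"
mkdir "$home/restore" && tar -xzf "$home/restored.tar.gz" -C "$home/restore"
cat "$home/restore/file.txt"
rm -rf "$home" "$src"
```

For a real restore you'd of course skip the key generation, import your actual private key on the machine doing the recovery, and pipe the decrypted SQL dump into mysql instead of cat.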