In this tutorial I will show you, step by step, how to make cheap and secure encrypted backups of your webserver files (or any other directory) on your Linux server using the Amazon Simple Storage Service (S3).
Create Bucket and User at Amazon AWS
Some links that we will need
- Amazon Console link – https://aws.amazon.com/
- Amazon S3 – https://s3.console.aws.amazon.com/s3
- Amazon IAM – https://console.aws.amazon.com/iam
First, create a user in the Amazon IAM management console. Don't forget to attach the required S3 permissions.
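If you would rather script this part (for example from an admin machine that already has the AWS CLI installed, which we set up in the next section), roughly the same setup can be sketched like this. The user name backup-user and the broad AmazonS3FullAccess managed policy are only examples; a policy limited to your backup bucket is a better idea for production.
# create the IAM user (the name is just an example)
aws iam create-user --user-name backup-user
# attach S3 permissions (AmazonS3FullAccess is a broad example policy)
aws iam attach-user-policy --user-name backup-user --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
# generate the access key pair that we will later feed to "aws configure"
aws iam create-access-key --user-name backup-user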
Now create a bucket for the backups in Amazon S3. For testing, you can leave all settings at their defaults.
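If you already have the CLI configured somewhere, the bucket can also be created from the command line; the bucket name and region below are only examples.
# create the backup bucket (name and region are examples)
aws s3 mb s3://mybucketname --region eu-central-1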
Install AWS CLI tools on your server machine
Now you can install the Amazon AWS command line interface. For that you will need Python and pip. If you don't have them yet, you can find a very nice tutorial here.
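For example, on Debian/Ubuntu something like the following should do (just a sketch; on newer releases the packages are python3 and python3-pip, and the pip command below may then be called pip3):
# install Python and pip from the distribution packages (Debian/Ubuntu example)
sudo apt-get update
sudo apt-get install -y python python-pip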
pip install awscli --upgrade --user
You may need to add the directory where aws was installed to your PATH.
# check where aws was installed
which aws
# example output: /root/.local/bin/aws
Add the executable path (~/.local/bin) to your PATH variable. To make the change permanent, put the export line into your ~/.bash_profile as well and reload it:
export PATH=~/.local/bin:$PATH
source ~/.bash_profile
Configure access to Amazon S3. Use the access keys of the IAM user you created earlier.
aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: eu-central-1
Default output format [None]: json
Now test whether you can copy files to your bucket.
# show available buckets
aws s3 ls
# copy a file to the bucket
aws s3 cp test_file.txt s3://mybucketname
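You can also check that the test file really arrived (bucket name as in the example above):
# list the uploaded object to confirm the copy worked
aws s3 ls s3://mybucketname/test_file.txt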
Creating Database and Webserver backup with encryption
Create the backup shell script file. If you get a permission error, try running the command with sudo.
curl -o /usr/local/bin/backup.sh https://gist.githubusercontent.com/grambas/6950da61b5ad31185931ddb2c0f9ecab/raw/backup.sh
Content of backup.sh
#!/bin/bash
# Execute it as root, due to archiving permission conflicts
# CRONTAB entry for every day at 04:00 AM
# 0 4 * * * /usr/local/bin/webserver-backup.sh >/dev/null 2>&1

# DATABASE DATA
DB_USER="XXX"
DB_PASSWORD="XXX"

# PATHS
OUTPUT_PATH="/srv/backup"
WWW_PATH="/srv/webserver"
TEMP_PATH="/srv/backup/temp" # where to store data temporarily before archiving and encrypting

# FTP DATA (where to save the backup)
FTP_LOGIN="XXX"
FTP_PASS="XXX"
REMOTE_PATH="BACKUPS/WEBSERVER/"
REMOTE_HOST="77.23.XXZ.XX"
PORT="21"

# OTHER CONSTS
DATE=`date +%Y-%m-%d`
LOG_DATE=`date +[%Y-%m-%d:%H:%M:%S]`

# create directory if it does not exist
mkdir -p "$TEMP_PATH"

# remove old backup data left on the webserver in case an error occurred during a previous upload
rm -f $TEMP_PATH/*.sql > /dev/null 2>&1
rm -f $OUTPUT_PATH/*.tar.gz > /dev/null 2>&1
rm -f $OUTPUT_PATH/*.tar.gz.asc > /dev/null 2>&1

# get database names
databases=`mysql --user=$DB_USER --password=$DB_PASSWORD -e "SHOW DATABASES;" | tr -d "| " | grep -v Database`

echo "$LOG_DATE DUMPING DATABASE TABLES"
for db in $databases; do
    # ignore default system/phpMyAdmin databases
    if [[ "$db" != "information_schema" ]] && [[ "$db" != _* ]] && [[ "$db" != mysql* ]] && [[ "$db" != "performance_schema" ]] ; then
        name=$TEMP_PATH/$DATE-$db
        echo "$LOG_DATE Dumped: $name"
        mysqldump --force --opt --user=$DB_USER --password=$DB_PASSWORD --databases $db > $name.sql
    fi
done
echo "$LOG_DATE DONE!"

# archive dumped SQL files and webserver directories
echo "$LOG_DATE Archiving data"
tar -czf $OUTPUT_PATH/$DATE-webserver.tar.gz -C $TEMP_PATH . -C $WWW_PATH .
echo "$LOG_DATE DONE!"

# ENCRYPT archive
# gpg -e -u "Sender User Name" -r "Receiver User Name" somefile
# --yes: auto overwrite
echo "$LOG_DATE ENCRYPTING WEBSERVER"
/usr/bin/gpg --yes -e -a -r "mindaugas" $OUTPUT_PATH/$DATE-webserver.tar.gz

# remove the plain archive after encryption
rm -f $OUTPUT_PATH/*.tar.gz > /dev/null 2>&1
echo "$LOG_DATE DONE!"

echo "$LOG_DATE UPLOADING TO REMOTE FTP"
sudo curl -T $OUTPUT_PATH/$DATE-webserver.tar.gz.asc ftp://$REMOTE_HOST:$PORT/$REMOTE_PATH/ --user $FTP_LOGIN:$FTP_PASS
echo "$LOG_DATE DONE!"

# CHECK IF THE BACKUP EXISTS ON THE REMOTE FTP
if curl -s ftp://$REMOTE_HOST:$PORT/$REMOTE_PATH/ --user $FTP_LOGIN:$FTP_PASS | grep $DATE-webserver.tar.gz.asc; then
    echo "$LOG_DATE ENCRYPTED ARCHIVE SUCCESSFULLY UPLOADED!"
    echo "$LOG_DATE DELETING TEMP DATA"
    rm -f $TEMP_PATH/*.sql > /dev/null 2>&1
    rm -f $OUTPUT_PATH/*.tar.gz > /dev/null 2>&1
    rm -f $OUTPUT_PATH/*.tar.gz.asc > /dev/null 2>&1
    echo "$LOG_DATE DONE!"
    echo "$LOG_DATE DO WEBSERVER BACKUP SUCCESS!" | /usr/local/bin/slackpost.sh
else
    echo "$LOG_DATE [ERROR] NO BACKUP WAS FOUND AFTER UPLOAD!" | /usr/local/bin/slackpost.sh
fi
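The script above uploads the encrypted archive to a remote FTP server. Since the first part of this tutorial sets up an S3 bucket and the AWS CLI, you could swap the FTP upload and check for S3 commands instead; a minimal sketch, assuming the example bucket name mybucketname and that the aws binary is on root's PATH:
# upload the encrypted archive to S3 instead of FTP
aws s3 cp $OUTPUT_PATH/$DATE-webserver.tar.gz.asc s3://mybucketname/BACKUPS/WEBSERVER/
# verify the object exists before deleting local temp data (aws s3 ls exits non-zero if it is missing)
if aws s3 ls s3://mybucketname/BACKUPS/WEBSERVER/$DATE-webserver.tar.gz.asc; then
    echo "$LOG_DATE ENCRYPTED ARCHIVE SUCCESSFULLY UPLOADED!"
fi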
Set the execution permission on backup.sh
chmod +x /usr/local/bin/backup.sh
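The script encrypts the archive with GPG for the recipient passed to -r (in the gist that is "mindaugas"). If there is no key pair on the server yet, generate one and make sure its name or e-mail matches the -r value you use in backup.sh:
# generate a key pair (follow the interactive prompts)
gpg --gen-key
# verify the key is available; the name/e-mail must match the -r recipient in backup.sh
gpg --list-keys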
Create a log file (feel free to change the path) and open the crontab editor.
touch /srv/backup/cronlog.log
crontab -e
Add the following lines to the end of the crontab file (there is a Crontab Generator where you can easily build your execution schedule).
# crontab does not have a full PATH, so we set it manually
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/.local/bin
# Crontab entry example: back up daily at 04:00 AM
0 4 * * * /usr/local/bin/backup.sh > /srv/backup/cronlog.log 2>&1
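A quick way to confirm the entry was saved and to follow the output of the next run (log path as created above):
# list the installed crontab entries
crontab -l
# follow the backup log
tail -f /srv/backup/cronlog.log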
Just in case, test your script manually before relying on cron.
backup.sh
Later you will surely need to decrypt the backup.
gpg --output decrypted.tar.gz --decrypt encrypted.tar.gz.asc
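After decrypting you still have the tar.gz archive created by the script, so one more step restores the actual files. A small sketch, assuming /srv/restore is just an example target directory:
# unpack the decrypted archive into an example restore directory
mkdir -p /srv/restore
tar -xzf decrypted.tar.gz -C /srv/restore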