What I wanted to do…
I manage a few websites for myself and my family. All of them are based on content management systems using the usual stack of PHP files and a database. To back them up I used the integrated tools of the content management systems, which are sometimes a bit awkward. These are also manual tasks: I had to log in, click the “backup” button and then download the backup file myself.
It had been on my todo list for a long time to automate this process so that the backup runs completely and as automatically as possible.
Step 1 – back up the files
From past experience I knew that downloading the files via ftp/sftp takes ages and never finished 100% successfully. This is because I only have a slow internet connection and every single file adds a few milliseconds of overhead before its transfer starts. In addition, my local machine (an Apple MacBook) uses the default APFS filesystem, which is not case sensitive, so the sftp download kept claiming that a file I was about to download was already there.
A better approach would be to pack all files on the host into a single compressed backup file and download that.
Therefore I created a script named backup.sh on the host which simply creates a compressed tar archive for every website.
tar -czf website1.tar.gz /webserverroot/website1
tar -czf website2.tar.gz /webserverroot/website2
tar -czf website3.tar.gz /webserverroot/website3
To make it a bit more user friendly I added some logic to create a subfolder named with the current date and time and to put the archives into it.
# define folder name with date and time
foldername="/myUserFolder/myBackups/$(date +'%Y_%m_%d_%I_%M_%S')"
# create this folder
echo "create $foldername"
mkdir "$foldername"
# backup filespaces
echo "backup website 1"
tar -czf "$foldername/website1.tar.gz" /webserverroot/website1
# backup filespaces
echo "backup website 2"
tar -czf "$foldername/website2.tar.gz" /webserverroot/website2
# backup filespaces
echo "backup website 3"
tar -czf "$foldername/website3.tar.gz" /webserverroot/website3
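Since the three tar calls only differ in the site name, the same step could also be written as a small loop. This is just a sketch of the commands above, assuming the archives keep the website1/2/3 naming:
# same backup step as above, written as a loop over the site names
for site in website1 website2 website3; do
  echo "backup $site"
  tar -czf "$foldername/$site.tar.gz" "/webserverroot/$site"
done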
Step 2 – back up the databases
Next I wanted to create backup files of the databases. For this purpose I used the mysqldump command. I also used the $foldername variable to save the dump files in the folder that has the date and time in its name.
mysqldump --host="<myhost>" --port="3306" --user="<user>" --password="<password>" --no-tablespaces <database1> > "$foldername/website1.sql"
mysqldump --host="<myhost>" --port="3306" --user="<user>" --password="<password>" --no-tablespaces <database2> > "$foldername/website2.sql"
mysqldump --host="<myhost>" --port="3306" --user="<user>" --password="<password>" --no-tablespaces <database3> > "$foldername/website3.sql"
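Just for completeness: such a plain SQL dump can later be restored with the mysql client. This is not part of the backup script, only a sketch with the same placeholders:
# restore a dump into the database, prompting for the password
mysql --host="<myhost>" --port="3306" --user="<user>" -p <database1> < "$foldername/website1.sql"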
Step 3 – no passwords in the script file
Of course it is a very bad idea to keep the passwords in the script file. Therefore I replaced them with variables whose values I enter when the script runs.
echo "backup website 1 db"
read -p "website 1 password:" website1password
mysqldump --host="<myhost>" --port="3306" --user="<user>" --password=$website1password --no-tablespaces <database1> > "$foldername/website1.sql"
echo "backup website 2 db"
read -p "website 2 password:" website2password
mysqldump --host="<myhost>" --port="3306" --user="<user>" --password=$website2password --no-tablespaces <database2> > "$foldername/website2.sql"
echo "backup website 3 db"
read -p "website 3 password:" website3password
mysqldump --host="<myhost>" --port="3306" --user="<user>" --password=$website3password --no-tablespaces <database3> > "$foldername/website3.sql"
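One thing to be aware of: with --password=... the password still shows up in the process list on the host while mysqldump runs. A possible variant, just a sketch and not what I use above, assuming the installed mysqldump accepts --defaults-extra-file as its very first option, is to hand the password over in a temporary option file:
# sketch: pass the password via a temporary option file instead of the command line
cnf=$(mktemp)
chmod 600 "$cnf"
printf '[client]\npassword=%s\n' "$website1password" > "$cnf"
mysqldump --defaults-extra-file="$cnf" --host="<myhost>" --port="3306" --user="<user>" --no-tablespaces <database1> > "$foldername/website1.sql"
rm "$cnf"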
Step 4 – remote execution of the script
The script works quite well in this state, but I did not want to manually open an ssh session to the host just to start it. I wanted something that opens an ssh session, starts the script and feeds it the right inputs (the passwords) for the databases.
While searching for a solution I came across the tool expect, which seemed suitable for my needs. But it took me some days fighting with the recognition of the command prompt. My prompt when accessing the host via ssh looks like this:
(<host>):<user>:~$
The expect documentation says this can be recognized with:
expect "(<host>):<user>:~$"
Since this didn’t work, I found out after a lot of failures that the problem is the last two characters, the tilde and the dollar sign. Even when I tried to escape them with a backslash it didn’t work. I finally gave up and used a string that does not contain the last two characters.
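In hindsight I think the dollar sign is the troublemaker: expect’s default glob patterns treat a trailing $ as an end-of-buffer anchor, and Tcl itself likes to interpret $ inside double quotes. A pattern in braces combined with exact matching would probably have worked, assuming the prompt really ends in ~$:
# match the prompt literally: braces stop Tcl substitution, -ex turns off pattern matching
expect -ex {(<host>):<user>:~$}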
So the resulting script (backup-websites.sh) on my machine looked like this:
#!/usr/bin/expect
# disable timeout
set timeout -1
# open ssh session
spawn ssh <user>@<host>
# enter password
expect "password"
send "<password>\r"
# wait for prompt
expect "(<host>):<user>:"
send "backup.sh\r"
# wait for website 1 sql dump password
expect "website 1 password:"
send "<website1password>\r"
# wait for website 2 sql dump password
expect "website 2 password:"
send "<website2password>\r"
# wait for website 3 sql dump password
expect "website 3 password:"
send "<website3password>\r"
# wait for backup.sh to finish (prompt comes back)
expect "(<host>):<user>:"
# close ssh session
send "exit\r"
# wait for the ssh session to close
expect eof
# exit expect
exit
Step 5 – download the backups
Up to this point I can start a script on my local machine which creates backups in a separate folder on my host. Next comes the download to my local machine, and this is where rsync comes into play. I added a new spawn command at the end of my local expect script where I call rsync with the right parameters.
# transfer files via rsync
spawn rsync -av <user>@<host>:/<backupfolder>/myBackups /Users/<localuser>/myBackups
expect "<user>@<host>'s password:"
send "<password>\r"
# wait for rsync to finish
expect eof
Final scripts
On the host: backup.sh
This script is placed on the host where my websites are.
#!/bin/bash
# stop on error
set -e
# define folder name with date and time
foldername="/myUserFolder/myBackups/$(date +'%Y_%m_%d_%I_%M_%S')"
# create this folder
echo "create $foldername"
mkdir "$foldername"
# backup filespaces
echo "backup website 1"
tar -czf "$foldername/website1.tar.gz" /webserverroot/website1
# backup filespaces
echo "backup website 2"
tar -czf "$foldername/website2.tar.gz" /webserverroot/website2
# backup filespaces
echo "backup website 3"
tar -czf "$foldername/website3.tar.gz" /webserverroot/website3
echo "backup website 1 db"
read -p "website 1 password:" website1password
mysqldump --host="<myhost>" --port="3306" --user="<user>" --password="$website1password" --no-tablespaces <database1> > "$foldername/website1.sql"
echo "backup website 2 db"
read -p "website 2 password:" website2password
mysqldump --host="<myhost>" --port="3306" --user="<user>" --password="$website2password" --no-tablespaces <database2> > "$foldername/website2.sql"
echo "backup website 3 db"
read -p "website 3 password:" website3password
mysqldump --host="<myhost>" --port="3306" --user="<user>" --password="$website3password" --no-tablespaces <database3> > "$foldername/website3.sql"
On my local machine: backup-websites.sh
This script is placed on my local machine, from which I start the backups and to which I download them.
#!/usr/bin/expect
# disable timeout
set timeout -1
# open ssh session
spawn ssh <user>@<host>
# enter password
expect "password"
send "<password>\r"
# wait for prompt
expect "(<host>):<user>:"
send "backup.sh\r"
# wait for website 1 sql dump password
expect "website 1 password:"
send "<website1password>\r"
# wait for website 2 sql dump password
expect "website 2 password:"
send "<website2password>\r"
# wait for website 3 sql dump password
expect "website 3 password:"
send "<website3password>\r"
# wait for backup.sh to finish (prompt comes back)
expect "(<host>):<user>:"
# close ssh session
send "exit\r"
# wait for the ssh session to close
expect eof
# transfer files via rsync
spawn rsync -av <user>@<host>:/<backupfolder>/myBackups /Users/<localuser>/myBackups
expect "<user>@<host>'s password:"
send "<password>\r"
# wait for rsync to finish
expect eof
# exit expect
exit
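To run the whole thing I just make the local script executable and start it. If you want it fully hands-off, a cron entry on the local machine would do as well; the time and path here are of course only an example:
chmod +x backup-websites.sh
./backup-websites.sh
# optional: run it automatically, e.g. every night at 03:00 (crontab -e)
# 0 3 * * * /Users/<localuser>/backup-websites.sh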
So this was my solution to back up websites consisting of files and databases. Maybe something in here will be useful for others.