How to back up the LiveAgent database

To keep your data safe, we recommend backing up your database at least daily. Backing up the complete database can be a hard task if you have hundreds of thousands of tickets and the database size reaches hundreds of gigabytes.

Here are some recommendations, based on how we do it ourselves, to make backups less painful:

1. If you can afford more DB servers, set up the DB servers as MASTER-SLAVE and always run the backup on the SLAVE server to avoid DB locks (a replication sketch follows after this list). If keeping 2 servers is not an option for you, run the backup at the time with the lowest traffic (e.g. at 2 AM). Once your database grows beyond a few gigabytes, the backup can take many minutes, during which your DB is locked and you cannot work with the application... That is the right moment to add another server as a slave.

2. The LiveAgent database also contains attachments, which do not change very often. Therefore we back up only the incremental changes in the table with file contents (qu_g_filecontents). We also skip the backup of the table qu_g_logs to save even more space.

3. To keep backup files as small as possible and to compress them as quickly as possible, we zip the backups with the pbzip2 utility. The advantage of pbzip2 is parallel compression (it runs on all CPU cores and is therefore much faster than the standard gzip or zip utilities). To install pbzip2 on your server (e.g. CentOS/RedHat), just use the command:
 

yum install pbzip2
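
If you go for the MASTER-SLAVE setup from point 1, the sketch below shows the rough shape of the configuration. It is only a minimal sketch assuming classic MySQL binlog replication; the server-id values, the host master.example.com and the replication user repl are placeholders you need to adjust:

# /etc/my.cnf on the MASTER:
#   [mysqld]
#   server-id = 1
#   log-bin   = mysql-bin
#
# /etc/my.cnf on the SLAVE:
#   [mysqld]
#   server-id = 2

# On the master, create a dedicated replication user:
mysql -u'root' -p -e "CREATE USER 'repl'@'%' IDENTIFIED BY '<<PASSWORD>>';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';"

# On the slave, point it at the master and start replicating
# (read MASTER_LOG_FILE and MASTER_LOG_POS from SHOW MASTER STATUS on the master):
mysql -u'root' -p -e "CHANGE MASTER TO
  MASTER_HOST='master.example.com',
  MASTER_USER='repl',
  MASTER_PASSWORD='<<PASSWORD>>',
  MASTER_LOG_FILE='mysql-bin.000001',
  MASTER_LOG_POS=4;
START SLAVE;"

The backup script then connects with -h slave.example.com instead of -h localhost, so the dump does not block work on the master.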

 

Here is a simplified example of the script we use to make backups:

#!/bin/bash
dbname='liveagent'

mkdir -p /backup/${dbname}
#DELETE FULL BACKUP FILES OLDER THAN 5 DAYS
#(the filecontents incrementals are kept, they are all needed for a restore)
find /backup/${dbname} -maxdepth 1 -name 'db_*' -type f -mtime +5 -exec rm {} \;

##MAKE BACKUP OF ALL ACTIVE TABLES (skip filecontents and logs)
now=$(date +"%Y_%m_%d")
sqlFile="/backup/${dbname}/db_${dbname}_$now.sql"

if [ -f "$sqlFile.bz2" ]
then
    echo "$sqlFile.bz2 already created, skipping..."
    exit
else
    nice -n 19 /usr/bin/mysqldump --single-transaction --skip-add-drop-table --skip-extended-insert  --ignore-table=${dbname}.qu_g_logs --ignore-table=${dbname}.qu_g_filecontents -u'root' -p'<<PASSWORD>>' -h localhost ${dbname} > "$sqlFile"
    cd /backup/${dbname}; nice -n 19 ionice -c 3 pbzip2 -p10 "$sqlFile" > /dev/null 2>&1
fi

##INCREMENTAL BACKUP OF ATTACHMENTS (qu_g_filecontents)
nowTime=$(date +"%Y-%m-%d %H:%M:%S")
timestampFile="/backup/${dbname}/filecontents_timestamp"
#note: $nowTime contains a space, so the file name must stay quoted below
fileContentFile="/backup/${dbname}/filecontents_${dbname}_$nowTime.sql"

if [ -f "$timestampFile" ]
then
    ##TIMESTAMP OF THE LAST RUN EXISTS: dump only rows created since then
    ##(>= so that rows created exactly at the stored timestamp are not lost,
    ##the previous dump used a strict < on the same boundary)
    oldTime=$(cat "$timestampFile")
    nice -n 19 /usr/bin/mysqldump --where="created<'$nowTime' AND created>='$oldTime'" --no-create-info --single-transaction --skip-add-drop-table --skip-extended-insert -u'root' -p'<<PASSWORD>>' -h localhost ${dbname} qu_g_filecontents > "$fileContentFile"
else
    ##NO TIMESTAMP YET: create the master file with all rows (includes CREATE TABLE)
    nice -n 19 /usr/bin/mysqldump --where="created<'$nowTime'" --single-transaction --skip-add-drop-table --skip-extended-insert -u'root' -p'<<PASSWORD>>' -h localhost ${dbname} qu_g_filecontents > "$fileContentFile"
fi
cd /backup/${dbname}; nice -n 19 ionice -c 3 pbzip2 -p10 "$fileContentFile" > /dev/null 2>&1
echo "$nowTime" > "$timestampFile"

 
