There are two ways to do backups that are not standard to MySQL:
COLD BACKUP
You could perform parallel mysqldumps in conjunction with FLUSH TABLES WITH READ LOCK.
I wrote a post about that a long time ago: How can I optimize a mysqldump of a large database?
Here is a sample script to use a global read lock and a monolithic mysqldump:
MYSQL_CONN="-hhostip -uusername -ppassword"
# Take a global read lock in a background session; SELECT SLEEP(86400) keeps that session (and thus the lock) alive
mysql ${MYSQL_CONN} -A -e"FLUSH TABLES WITH READ LOCK; SELECT SLEEP(86400)" &
# Wait until the sleeping session shows up in the processlist
SEARCHING_FOR_SLEEP=0
while [ ${SEARCHING_FOR_SLEEP} -eq 0 ]
do
    sleep 3
    SEARCHING_FOR_SLEEP=`mysql ${MYSQL_CONN} -A -e"SHOW PROCESSLIST;" | grep -c "SELECT SLEEP(86400)"`
done
sleep 1
# Record the connection ID of the sleeping session so the lock can be released later
SLEEP_ID=`mysql ${MYSQL_CONN} -A -e"SHOW PROCESSLIST;" | grep "SELECT SLEEP(86400)" | awk '{print $1}'`
mysqldump ${MYSQL_CONN} --master-data=2 --single-transaction --flush-privileges --routines --triggers --all-databases > /root/mydata.sql
# Kill the sleeping session to release the global read lock
mysql ${MYSQL_CONN} -A -e"KILL ${SLEEP_ID}"
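Restoring that monolithic dump later is a one-liner, since --all-databases includes the CREATE DATABASE statements (a minimal sketch, assuming the same connection settings and the /root/mydata.sql path used above):
MYSQL_CONN="-hhostip -uusername -ppassword"
mysql ${MYSQL_CONN} < /root/mydata.sql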
Here is a sample script to use a global read lock and parallel mysqldumps, 20 DBs at a time:
MYSQL_CONN="-hhostip -uusername -ppassword"
# Take a global read lock in a background session, just as in the monolithic script
mysql ${MYSQL_CONN} -A -e"FLUSH TABLES WITH READ LOCK; SELECT SLEEP(86400)" &
SEARCHING_FOR_SLEEP=0
while [ ${SEARCHING_FOR_SLEEP} -eq 0 ]
do
    sleep 3
    SEARCHING_FOR_SLEEP=`mysql ${MYSQL_CONN} -A -e"SHOW PROCESSLIST;" | grep -c "SELECT SLEEP(86400)"`
done
sleep 1
SLEEP_ID=`mysql ${MYSQL_CONN} -A -e"SHOW PROCESSLIST;" | grep "SELECT SLEEP(86400)" | awk '{print $1}'`
# Collect the list of databases to dump, excluding the system schemas
mysql ${MYSQL_CONN} -AN -e"SELECT schema_name FROM information_schema.schemata WHERE schema_name NOT IN ('information_schema','mysql')" > /tmp/ListOfDatabases.txt
COMMIT_COUNT=0
COMMIT_LIMIT=20
for DB in `cat /tmp/ListOfDatabases.txt`
do
    # TBL_EXCLUSION_LIST is not defined above; populate it with --ignore-table options if needed, otherwise leave it empty
    mysqldump ${MYSQL_CONN} --single-transaction --hex-blob --routines --triggers ${TBL_EXCLUSION_LIST} ${DB} | gzip > ${DB}.sql.gz &
    (( COMMIT_COUNT++ ))
    # After launching COMMIT_LIMIT background dumps, wait for that batch to finish before launching more
    if [ ${COMMIT_COUNT} -eq ${COMMIT_LIMIT} ]
    then
        COMMIT_COUNT=0
        wait
    fi
done
# Wait for any remaining background dumps
if [ ${COMMIT_COUNT} -gt 0 ]
then
    wait
fi
# Kill the sleeping session to release the global read lock
mysql ${MYSQL_CONN} -A -e"KILL ${SLEEP_ID}"
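To reload those per-database dumps later, a loop along these lines should work (a minimal sketch; it assumes the .sql.gz files sit in the current directory and that the database names need no identifier quoting):
MYSQL_CONN="-hhostip -uusername -ppassword"
for DUMPFILE in *.sql.gz
do
    # Each file is named <database>.sql.gz, so the database name is the filename minus the suffix
    DB=`basename ${DUMPFILE} .sql.gz`
    # The per-database dumps carry no CREATE DATABASE statement, so create the target schema first
    mysql ${MYSQL_CONN} -A -e"CREATE DATABASE IF NOT EXISTS ${DB}"
    gunzip -c ${DUMPFILE} | mysql ${MYSQL_CONN} ${DB}
done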
LUKEWARM / CHILLY BACKUP
A riskier method for backing up MySQL is to do rsyncs. The basic concept is this:
- Step 01) About 1 hour beforehand, run
SET GLOBAL innodb_max_dirty_pages_pct = 0; (OPTIONAL if your data is all InnoDB; it makes InnoDB flush dirty pages out of the buffer pool to disk)
- Step 02) Make sure mysql is already shut down on DB2
- Step 03) rsync /var/lib/mysql on DB1 to /var/lib/mysql on DB2
- Step 04) Repeat Step 03 until two consecutive rsyncs take about the same time
- Step 05) shut down mysql on DB1
- Step 06) rsync /var/lib/mysql on DB1 to /var/lib/mysql on DB2
- Step 07) start up mysql on DB1
From here, you can start up mysql on DB2 and perform mysqldumps on DB2 to your heart's content.
Rather than just putting the code here, I will share links I posted in the past on this concept (I use it periodically for making new slaves for giant masters); a rough sketch of the rsync loop also appears below.
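Purely as an illustration, here is a minimal sketch of Steps 02 through 07, run on DB1. The hostname db2, passwordless SSH from DB1, the /var/lib/mysql datadir, the service mysql stop/start commands, the rsync flags, and the 30-second "about the same time" threshold are all assumptions on my part; adjust them for your environment:
DATADIR=/var/lib/mysql
# Step 02) make sure mysql is already shut down on DB2
ssh db2 "service mysql stop"
# Steps 03-04) rsync repeatedly until two consecutive passes take about the same time
LAST_ELAPSED=0
THRESHOLD=30
while true
do
    START=`date +%s`
    rsync -av --delete ${DATADIR}/ db2:${DATADIR}/
    END=`date +%s`
    ELAPSED=$(( END - START ))
    DIFF=$(( ELAPSED - LAST_ELAPSED ))
    [ ${DIFF} -lt 0 ] && DIFF=$(( 0 - DIFF ))
    # Stop once two consecutive passes differ by no more than THRESHOLD seconds
    if [ ${LAST_ELAPSED} -gt 0 ] && [ ${DIFF} -le ${THRESHOLD} ]
    then
        break
    fi
    LAST_ELAPSED=${ELAPSED}
done
# Step 05) shut down mysql on DB1
service mysql stop
# Step 06) one final rsync while mysql is down, so the copy is consistent
rsync -av --delete ${DATADIR}/ db2:${DATADIR}/
# Step 07) start mysql back up on DB1
service mysql start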