109

I'm not sure if this is true, but I remember reading that if you run the following command in Linux

mysqldump -u username -p database_name > backup_db.sql

while reads and writes are being made to the database, then the dump may contain errors.

Are there particular mysqldump options to make sure this is done safely on a live system? I'm okay with reads/writes being disabled for our users for a few seconds (the database is <50MB).

RolandoMySQLDBA
user784637
  • Oh, I wish there were a safe way to do it, because not only can the dump contain errors, but the database itself can also fail to save records that arrive during the dump. It happens occasionally, maybe when a record is saved at the same time the corresponding table is being scanned for dumping. Because of this, I'm afraid of making the backup in a single transaction; I suspect it would increase the chances of lost records. – Yevgeniy Afanasyev Oct 19 '20 at 04:37

6 Answers

115

All Data is InnoDB

This is what will give you an exact point-in-time snapshot of the data:

mysqldump -uuser -ppass --single-transaction --routines --triggers --all-databases > backup_db.sql

--single-transaction produces a checkpoint that allows the dump to capture all data prior to the checkpoint while receiving incoming changes. Those incoming changes do not become part of the dump. That ensures the same point-in-time for all tables.

--routines dumps all stored procedures and stored functions

--triggers dumps all triggers for each table that has them
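
If you are not sure that every table really is InnoDB, a quick check along these lines (a sketch, using the same -uuser -ppass placeholders as the commands in this answer) lists anything that is not:

# Any rows returned here are non-InnoDB tables, so --single-transaction alone will not protect them
mysql -uuser -ppass -ANe"SELECT table_schema, table_name, engine FROM information_schema.tables WHERE engine <> 'InnoDB' AND table_schema NOT IN ('mysql','information_schema','performance_schema','sys')"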

All Data is MyISAM or Mix of InnoDB/MyISAM

You will have to impose a global read lock, perform the mysqldump, and release the global lock

# Acquire the global read lock and hold it by sleeping for 24 hours in a background session
mysql -uuser -ppass -Ae"FLUSH TABLES WITH READ LOCK; SELECT SLEEP(86400)" &
sleep 5
# Find the connection ID of the background session holding the lock
mysql -uuser -ppass -ANe"SHOW PROCESSLIST" | grep "SELECT SLEEP(86400)" > /tmp/proclist.txt
SLEEP_ID=`cat /tmp/proclist.txt | awk '{print $1}'`
echo "KILL ${SLEEP_ID};" > /tmp/kill_sleep.sql
# Take the dump while the global read lock is in place
mysqldump -uuser -ppass --single-transaction --routines --triggers --all-databases > backup_db.sql
# Kill the background session, which releases the global read lock
mysql -uuser -ppass -A < /tmp/kill_sleep.sql

Give it a Try !!!

UPDATE 2012-06-22 08:12 EDT

Since you have <50MB of total data, I have another option. Instead of launching a SLEEP command in the background to hold the global read lock for 86400 seconds (that's 24 hours) just to get its process ID and kill it from outside, let's try setting a 5-second timeout inside mysql rather than in the OS:

SLEEP_TIMEOUT=5
SQLSTMT="FLUSH TABLES WITH READ LOCK; SELECT SLEEP(${SLEEP_TIMEOUT})"
# Hold the global read lock in a background session for just 5 seconds
mysql -uuser -ppass -Ae"${SQLSTMT}" &
# Dump while the lock is held; the lock goes away on its own when the SLEEP ends
mysqldump -uuser -ppass --single-transaction --routines --triggers --all-databases > backup_db.sql

This is a cleaner and simpler approach for very small databases.

RolandoMySQLDBA
  • Thanks for the thorough answer Rolando, I'll try this out when I get back home – user784637 Jun 20 '12 at 02:23
  • Rolando - would it ever take longer than sleep 5 to perform a read lock? – user784637 Jun 20 '12 at 02:33
  • 5 secs is just precautionary. You can try lower values. – RolandoMySQLDBA Jun 20 '12 at 03:33
  • Rolando - is ERROR 2013 (HY000) at line 1: Lost connection to MySQL server during query an expected error message? – user784637 Jun 22 '12 at 11:32
  • Did all the MySQL data come out in the mysqldump? – RolandoMySQLDBA Jun 22 '12 at 11:56
  • I'm not sure about the error message. This is only a guess, but it might have come from the one-line script that kills the user-defined SLEEP function call I mentioned in the second script. – RolandoMySQLDBA Jun 22 '12 at 11:59
  • Yes, all the data came out in the mysqldump and I was able to successfully load it into a fresh database. Should I be concerned about the error message? It comes up after I execute the final command. – user784637 Jun 22 '12 at 12:10
  • Try my new suggestion and see if it goes well. Hopefully, there will be no error message. – RolandoMySQLDBA Jun 22 '12 at 12:16
  • Hi Rolando, your second suggestion works (I modified the syntax slightly). Just to clarify, if I choose your original suggestion, is the error message anything to be concerned about (i.e., does it indicate that the data in the dump could possibly be corrupt)? – user784637 Jun 22 '12 at 12:21
  • As long as the mysqldump goes to completion, the error message just becomes an annoyance. – RolandoMySQLDBA Jun 22 '12 at 12:26
  • Since the mysqldump is probably somewhere between 50MB and 100MB, you should be able to look at the first 30 lines and last 20 lines. If the mysqldump went successfully, the top and bottom lines will have the same appearance every time. – RolandoMySQLDBA Jun 22 '12 at 12:28
  • Why do you sleep for 86400 seconds, couldn't you just do a read lock without the sleep? – samwell Feb 24 '13 at 09:11
  • @chudapati09 If you do not call the SLEEP function to keep the session open, the read lock will die immediately as the session terminates. – RolandoMySQLDBA Feb 24 '13 at 12:22
  • Hi @RolandoMySQLDBA, that's an awesome answer. I have one doubt about it though: in my case I have a highly concurrent live database (size ~7GB), only InnoDB tables. Would the dump cause a significant impact on system performance? Thanks. – Jorge Campos Apr 18 '18 at 14:56
  • For InnoDB only, don't your recommended options suffer from being corrupted if a DDL change is made in parallel? That's one of the warnings that goes along with using --single-transaction: that repeatable reads are not isolated from such DDL changes and that can cause improper / invalid dumps to occur. What's the best way around that issue? @RolandoMySQLDBA – jschultz410 Feb 20 '19 at 19:23
7
  • For InnoDB tables, you should use the --single-transaction option, as mentioned in another answer.
  • For MyISAM there is --lock-tables.

See the official documentation here
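
For example, a minimal sketch applied to the single database from the question (username and database_name are the question's own placeholders):

# InnoDB tables: consistent snapshot without blocking reads or writes
mysqldump -u username -p --single-transaction database_name > backup_db.sql

# MyISAM (or mixed) tables: lock the dumped tables for the duration of the dump
mysqldump -u username -p --lock-tables database_name > backup_db.sql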

Julien Vavasseur
pesco
3

Here's how I did it. It should work in all cases since it uses FLUSH TABLES WITH READ LOCK.

#!/bin/bash

DB=example
DUMP_FILE=export.sql

# Lock the database and sleep in background task
mysql -uroot -proot $DB -e "FLUSH TABLES WITH READ LOCK; DO SLEEP(3600);" &
sleep 3

# Export the database while it is locked
mysqldump -uroot -proot --opt $DB > $DUMP_FILE

# When finished, kill the previous background task to unlock
kill $! 2>/dev/null
wait $! 2>/dev/null

echo "Finished export, and unlocked !"

The shell sleep command is just there to make sure that the background task running the mysql locking command is executed before the mysqldump starts. You could reduce it to 1 second and it should still be fine. Increase it to 30 seconds and try inserting a value into any table from another client during those 30 seconds, and you'll see it's locked.
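
For example, from a second client while the lock is held (the table and column names here are hypothetical):

# This INSERT blocks until the background session releases the read lock
mysql -uroot -proot example -e "INSERT INTO some_table (some_col) VALUES (1);"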

There are two advantages to using this manual background locking instead of the mysqldump options --single-transaction and --lock-tables:

  1. It locks everything, which matters if you have mixed MyISAM/InnoDB tables.
  2. You can run other commands in addition to the mysqldump during the same locking period. This is useful, for instance, when setting up replication on a master node, because you need to get the binary log position with SHOW MASTER STATUS; at the exact state of the dump you created (before unlocking the database) in order to create a replication slave (see the sketch after this list).
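
Here is a minimal sketch of that replication use case, reusing the placeholders from the script above; master_status.txt is just a hypothetical output file:

# While the background session still holds the global read lock:
mysqldump -uroot -proot --opt $DB > $DUMP_FILE

# Record the binary log file and position that correspond to this dump
mysql -uroot -proot -e "SHOW MASTER STATUS\G" > master_status.txt

# Only then release the lock by killing the background session
kill $! 2>/dev/null
wait $! 2>/dev/null
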
Nicomak
1

The suggestion in the official MySQL documentation is that you should have a master "M1" database and a slave "S1" database, as described in "Scenario 2: Backup with a Read-Only Slave" of Backing Up a Master or Slave by Making It Read Only.

You should set the slave database to read-only and perform the backup there.
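
A rough sketch of that scenario, borrowing the background-lock pattern used elsewhere on this page (user, pass, and backup_slave.sql are placeholders, and every command runs against the slave):

# On the slave: take the global read lock, mark the server read only, and hold the session open
mysql -uuser -ppass -Ae"FLUSH TABLES WITH READ LOCK; SET GLOBAL read_only = ON; SELECT SLEEP(3600)" &
sleep 5

# Back up the slave; the master keeps serving reads and writes the whole time
mysqldump -uuser -ppass --routines --triggers --all-databases > backup_slave.sql

# Release the lock and return the slave to normal operation
kill $! 2>/dev/null
wait $! 2>/dev/null
mysql -uuser -ppass -e "SET GLOBAL read_only = OFF"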

1

If you want to do this for MyISAM or mixed tables without any downtime from locking the tables, you can set up a slave database and take your snapshots from there. Setting up the slave database unfortunately causes some downtime while exporting the live database, but once it's running, you should be able to lock its tables and export using the methods others have described. While this is happening, the slave will lag behind the master, but it won't stop the master from updating its tables, and it will catch up as soon as the backup is complete.

0

If you have a very big MyISAM table and you need to dump it without holding a long lock and without creating high server load, you can use the following script.

#!/bin/bash

my_user="user"
my_password="password"
my_db="vpn"
my_table="traffic"
my_step=100000

read -p "Dumping table ${my_db}.${my_table} to ${my_table}.sql?" yn
case $yn in
    [Yy]* ) break;;
    * ) echo "User cancel."; exit;;
esac

my_count=$(mysql $my_db -u $my_user -p$my_password -se "SELECT count(*) FROM $my_table")
my_count=$(($my_count + 0))

if [ "$my_count" -eq 0 ]
then
    echo "No records found"
    exit
fi

echo "Records in table ${my_db}.${my_table}: ${my_count}"

echo "" > $my_table.sql

max_progress=60

# Dump the table in chunks of $my_step rows so each pass (and its table lock) stays short
for (( limit=0; limit<=$my_count; limit+=$my_step )); do
    progress=$((max_progress * ( limit + my_step) / my_count))

    echo -ne "Dumping ["
    for ((i=0; i<$progress; i ++)); do
        echo -ne "#"
    done
    for ((; i<$max_progress; i ++)); do
        echo -ne "."
    done

    # The --where option is abused here to append a LIMIT clause, so each pass dumps only one chunk
    mysqldump -u $my_user -p$my_password --complete-insert --no-create-info --opt --where="1 limit $limit , $my_step" $my_db $my_table >> $my_table.sql
    echo "" >> $my_table.sql

    echo -ne "] $((100 * ( limit + my_step ) / my_count)) %"
    echo -ne "\r"

    sleep 1

done

echo -ne "\n"
vadim_hr