A few months ago I updated a MySQL setup from 5.5 to 5.6. Since then I've been having problems with a script that I use to dump the various databases so that I can back them up.
The script is a short piece of Perl that gets a list of all databases and then calls mysqldump for each, as follows:
mysqldump -udb_account -pdb_pw -hserver.com --single-transaction --flush-logs \
  --routines --triggers --quick $fn 2> $fn.err | gzip > $fn.mysql.gz
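For reference, the loop is roughly equivalent to the sketch below (simplified and illustrative only: fetching the list via SHOW DATABASES and skipping the system schemas are assumptions, not necessarily what the real script does):

#!/usr/bin/perl
# Rough sketch of the backup loop -- illustrative, not the exact script.
use strict;
use warnings;

# Assumption: the database list comes from SHOW DATABASES.
my @dbs = map { chomp; $_ }
    `mysql -udb_account -pdb_pw -hserver.com -N -e 'SHOW DATABASES'`;

for my $fn (@dbs) {
    # Illustrative exclusion of the system schemas.
    next if $fn =~ /^(information_schema|performance_schema)$/;
    system("mysqldump -udb_account -pdb_pw -hserver.com --single-transaction "
         . "--flush-logs --routines --triggers --quick $fn "
         . "2> $fn.err | gzip > $fn.mysql.gz");
}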
Issue: many of these databases have hundreds of tables, and the count keeps growing. For these larger databases the mysqldump command often exits after dumping only a single table. If I run the same command from a terminal session it completes correctly. (It normally runs as a cron job once a week.)
The .err file contains no messages, and neither does the server.err file in the MySQL root directory.
Note: this script had been running fine on MySQL 5.5 for several years. This problem started happening when I upgraded to 5.6.
Also: the --flush-logs portion doesn't seem to be doing anything; the mysql_bin folder has never been emptied since this system was brought online.
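If I'm reading the docs right, --flush-logs only closes and reopens the log files (starting a new binary log); old binary logs are only removed by PURGE BINARY LOGS or an expire_logs_days setting. So maybe what I actually need is something like this at the end of the script (the 14-day retention is only an example, not something the script does today):

# Sketch: purge binary logs older than 14 days once the dumps have finished.
# (--flush-logs rotates to a new binlog file; it does not delete old ones.)
system(q{mysql -udb_account -pdb_pw -hserver.com }
     . q{-e "PURGE BINARY LOGS BEFORE NOW() - INTERVAL 14 DAY"});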
One variable that I haven't controlled for yet: when run as a cron job, the script forks three dump processes at a time, whereas when I test the command in a terminal session I run only one at a time.
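To rule that out, I could reproduce the cron condition from a terminal by launching several dumps at once. A rough sketch along those lines (db1..db3 are placeholders, not the real database names):

# Sketch: fork three dumps in parallel, as the cron job does, to see whether
# concurrency alone triggers the early exit. db1..db3 are placeholders.
use strict;
use warnings;

my @pids;
for my $fn (qw(db1 db2 db3)) {
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    if ($pid == 0) {
        # A shell is used here so the pipe to gzip works as in the real command.
        exec("mysqldump -udb_account -pdb_pw -hserver.com --single-transaction "
           . "--routines --triggers --quick $fn 2> $fn.err | gzip > $fn.mysql.gz")
            or die "exec failed: $!";
    }
    push @pids, $pid;
}
waitpid($_, 0) for @pids;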
System in question:
- CentOS 6.4
- x64
- 64 GB RAM
Does your mysqldump bin use the 5.6 version instead of 5.5? – Cristian Porta Feb 24 '14 at 07:53

Add --verbose to the mysqldump options and see what you get in the error logs. This option doesn't change the dump files, but it writes some progress information to STDERR. You should, at a minimum, see what it is "trying" to do, written to the error logs. – Michael - sqlbot Feb 24 '14 at 12:17
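Update, following the comments: I can log which mysqldump binary the cron environment actually picks up and add --verbose so its progress output lands in the per-database .err file. Roughly (an untested sketch):

# Sketch: record the mysqldump path/version as seen by cron, then dump with
# --verbose so progress messages and errors end up in the .err file.
system("{ command -v mysqldump; mysqldump --version; } > $fn.err 2>&1");
system("mysqldump -udb_account -pdb_pw -hserver.com --single-transaction "
     . "--flush-logs --routines --triggers --quick --verbose $fn "
     . "2>> $fn.err | gzip > $fn.mysql.gz");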