3

I've looked at a lot of answers here (e.g. MySQL any way to import a huge (32 GB) sql dump faster?) and on Server Fault, but haven't found a solution: I can't get a 16 GB MySQL dump file to import completely; it silently fails at different points in the import. Sometimes 2 GB will import before the import stops, sometimes 10 GB. There are no errors in the logs. Sometimes the console returns to the # prompt, sometimes not.

top shows MySQL running at 100% CPU while importing, then dropping back to normal load when the import stops.

The database is 16 GB, about 90% InnoDB tables with a few MyISAM tables. To export the database on the source machine, I'm using

mysqldump --single-transaction --lock-tables  -u mydatabaseuser -p mydatabase > archive.sql
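(Note: per the MySQL manual, --single-transaction and --lock-tables are mutually exclusive, so the command above effectively dumps with --single-transaction alone.) If splitting would help, this is roughly how I could export each table to its own file instead (a sketch, not what I currently run; with -p it prompts for the password once per table, so credentials in ~/.my.cnf would avoid that):

# hypothetical per-table export; writes one .sql file per table
for t in $(mysql -N -u mydatabaseuser -p -e 'SHOW TABLES' mydatabase); do
    mysqldump --single-transaction -u mydatabaseuser -p mydatabase "$t" > "$t.sql"
done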

The importing machine has 8 dedicated cores and 150 GB dedicated RAM (server is at Linode) and is running Alma Linux and MySQL 8.0.25.

To import, I'm using

mysql -u root -p mydatabase < archive.sql
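For what it's worth, here's a way I could watch progress and capture any client-side messages (a sketch; it assumes pv is installed):

pv archive.sql | mysql -u root -p mydatabase > import.log 2>&1

Running this under nohup, screen, or tmux would also rule out a dropped SSH session killing the client mid-import.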

Running the import from within the mysql client (i.e., from the mysql> prompt) doesn't help either.

One table is 12 GB, and I tried importing that table by itself with no luck.

Using these in my.cnf throws an error on mysqld restart (a per-session workaround is sketched after the list):

autocommit=0
unique_checks=0
foreign_key_checks=0
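A workaround I could try is to apply these per session for the import only, instead of putting them in my.cnf, by wrapping the dump (a sketch, untested here):

( echo "SET autocommit=0; SET unique_checks=0; SET foreign_key_checks=0;"; cat archive.sql; echo "COMMIT;" ) | mysql -u root -p mydatabase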

Should I be exporting differently?

Is MySQL on the importing machine timing out?

What do I need to edit in my.cnf?

my.cnf on the importing machine:

[mysqld]
disable-log-bin=1
default-authentication-plugin=mysql_native_password
performance-schema=0

port = 3306
socket = /tmp/mysql.sock
skip-external-locking
skip-name-resolve

# uncomment for import
# bulk_insert_buffer_size = 40G
# read_buffer_size = 40G

If I uncomment these, MySQL throws this error on restart:

Job for mysqld.service failed because the control process exited with error code.

# autocommit=0
# unique_checks=0
# foreign_key_checks=0

innodb_buffer_pool_size=50G
innodb_buffer_pool_instances=56
innodb_log_file_size = 8G
innodb_log_buffer_size=64M
innodb_read_io_threads=8
innodb_write_io_threads=8
innodb_io_capacity=300

innodb_doublewrite = 0

max_allowed_packet=268435456
open_files_limit=40000
innodb_file_per_table=1

join_buffer_size=128M
sort_buffer_size=2M
read_rnd_buffer_size=2M
key_buffer_size = 20M

datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock

log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

Edit 7/28/21

 mysql> SHOW GLOBAL VARIABLES LIKE '%timeout%';

connect_timeout 10
delayed_insert_timeout 300
have_statement_timeout YES
innodb_flush_log_at_timeout 1
innodb_lock_wait_timeout 50
innodb_rollback_on_timeout OFF
interactive_timeout 28800
lock_wait_timeout 31536000
mysqlx_connect_timeout 30
mysqlx_idle_worker_thread_timeout 60
mysqlx_interactive_timeout 28800
mysqlx_port_open_timeout 0
mysqlx_read_timeout 30
mysqlx_wait_timeout 28800
mysqlx_write_timeout 60
net_read_timeout 30
net_write_timeout 60
replica_net_timeout 60
rpl_stop_replica_timeout 31536000
rpl_stop_slave_timeout 31536000
slave_net_timeout 60
wait_timeout 6000

mysql> SHOW SESSION VARIABLES LIKE '%timeout%';

connect_timeout 10
delayed_insert_timeout 300
have_statement_timeout YES
innodb_flush_log_at_timeout 1
innodb_lock_wait_timeout 50
innodb_rollback_on_timeout OFF
interactive_timeout 28800
lock_wait_timeout 31536000
mysqlx_connect_timeout 30
mysqlx_idle_worker_thread_timeout 60
mysqlx_interactive_timeout 28800
mysqlx_port_open_timeout 0
mysqlx_read_timeout 30
mysqlx_wait_timeout 28800
mysqlx_write_timeout 60
net_read_timeout 30
net_write_timeout 60
replica_net_timeout 60
rpl_stop_replica_timeout 31536000
rpl_stop_slave_timeout 31536000
slave_net_timeout 60
wait_timeout 28800

  • 3
    There isn't a configuration variable that, when set, will magically prevent your import from randomly failing. You need to identify the reason for those failures. Which logs exactly did you look at that contained no errors? How exactly are you running the import? – mustaccio Jul 19 '21 at 17:26
  • Could it be that importing 16 GB in a single transaction is a bit too much to ask? – Gerard H. Pille Jul 19 '21 at 17:52
  • @mustaccio thanks, I'm using mysql -u root -p mydatabase < archive.sql for import. And there are no errors in the /var/log/mysqld.log – BlueDogRanch Jul 19 '21 at 19:31
  • @GerardH.Pille Break the database into smaller imports? One table is 12 GB, and I tried importing that table by itself with no luck. – BlueDogRanch Jul 19 '21 at 19:32
  • Dropping 4 GB of the 16 GB is not what I had in mind. You need to find out how much your database can handle in a single transaction, and split your dump according to that. The alternative is to insert commit statements every X inserts. I don't understand why mysqldump has no option to do that automatically, but then their enterprise products might find fewer customers. – Gerard H. Pille Jul 19 '21 at 19:38
  • Is there anything interesting in the syslog between the import start and end? I'm assuming you have the input file somewhere local on the server, where you run mysql, and not remotely connected. – mustaccio Jul 22 '21 at 00:35
  • Yes, the file is on the server; the import starts after password entry, MySQL spikes in top, and then goes back down. The # prompt returns but the import hasn't completed most of the time. It only completes once every 6-7 tries. In messages there are a lot of firewall notes, but no MySQL errors. Jul 19 13:20:07 li575-185 kernel: Firewall: *TCP_IN Blocked* IN=eth0 OUT= MAC=f2:3c:92:04:57:5b:00:26:98:02:ab:c1:08:00 SRC=89.248.165.201 DST=192.155.84.185 LEN=40 TOS=0x00 PREC=0x00 TTL=244 ID=21508 PROTO=TCP SPT=44653 DPT=21489 WINDOW=1024 RES=0x00 SYN URGP=0 – BlueDogRanch Jul 22 '21 at 16:51

2 Answers

2

Those 3 settings are probably wrong. With autocommit=0, nothing is permanently stored until a COMMIT is issued; if the connection ends without one, everything since the last commit is rolled back.

Look in the dump file; it will probably have a lot of settings, plus some huge (multi-row) INSERTs. Big INSERTs like those work best with autocommit=ON.
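A dump too big for a text editor can still be inspected from the shell; for example (a sketch):

head -n 40 archive.sql          # shows the header SET statements mysqldump writes
grep -c '^INSERT' archive.sql   # rough count of INSERT statements in the dump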

150GB is an odd RAM size; are you sure?

These sound dangerous, even with 150G of RAM:

bulk_insert_buffer_size = 40G
read_buffer_size = 40G

They are "buffers", they don't need to be huge; 1G is probably more than ample. Set them to 100M each.

16 is the recommended max for this:

innodb_buffer_pool_instances=56

Timeout??

Time it. How long does it run before failing? If it's a suspiciously round number (say, exactly 5 minutes), look for a timeout setting with that many seconds (300).

SHOW GLOBAL VARIABLES LIKE '%timeout%';
SHOW SESSION VARIABLES LIKE '%timeout%';
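Timing it from the shell is enough for this (a sketch):

time mysql -u root -p mydatabase < archive.sql

Then compare the elapsed seconds against the timeout values those queries report.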
Rick James
  • Thanks! Those are good ideas, and I think they help with memory usage and not running out of RAM. I do have 150GB; it's a server at Linode. But for some reason, I can get a full import once in awhile, but not for all the attempts. I don't know why it's so inconsistent. I can't open the dump file in a text editor; it's 12 gigs. – BlueDogRanch Jul 21 '21 at 23:46
  • Thanks, I added the outputs in my question. The imports now appear to work, but take 3 hours for the 12 GB table – BlueDogRanch Jul 28 '21 at 17:28
  • 2
    How long before it "silently failed"? – Rick James Jul 28 '21 at 17:59
  • Ha, I had been watching the import for 1.5 hours, and the sizes of the tables in mysql> and PHPMyAdmin weren't changing, so I'd restart the import. Then I let it run overnight and then saw it had completed, and then narrowed it down to taking 3 hours. But your RAM settings helped. – BlueDogRanch Jul 29 '21 at 22:00
0

I suggest trying a staggered import approach, like this:

https://www.ozerov.de/bigdump/

Staggered imports allow you to tune the process around the limitations of your server, which you sometimes have no control over.
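If a PHP tool isn't an option, something similar can be scripted from the shell (a rough sketch; it assumes a standard mysqldump file in which each table begins with a "-- Table structure for table" comment, and credentials stored in ~/.my.cnf to avoid repeated password prompts):

# split the dump at each table boundary into chunk_00, chunk_01, ...
csplit -z -f chunk_ archive.sql '/^-- Table structure for table/' '{*}'

# import the chunks one at a time, stopping at the first failure
for f in chunk_*; do
    echo "importing $f"
    mysql mydatabase < "$f" || break
done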

labega