
I have a question regarding MySQL Server 5.7: should I set a better value for innodb_buffer_pool_size, or increase the RAM? Maybe I'm wrong, I'm not an expert, but let me explain what this is about.

Server configuration: MySQL 5.7.32 is running on Ubuntu 20.04; the specs are 125 GB of RAM, 64 CPU cores, and a 3 TB partition for the database. The SSD is NVMe and the CPU is an AMD EPYC 7502P, so the machine is dedicated.

Here is the mysqld.cnf file:

[mysqld]
#
# * Basic Settings
#
#innodb_monitor_enable      =   all
#performance_schema     =   ON
default-storage-engine      =   innodb
tmpdir              =   /data/mysql_tmp
skip-log-bin
innodb_support_xa       =   0

#
# * InnoDB Settings
#
innodb_read_io_threads      =   64
innodb_write_io_threads     =   64
innodb_buffer_pool_instances    =   64
innodb_buffer_pool_size     =   85G
innodb_buffer_pool_chunk_size   =   256M
innodb_io_capacity      =   2500
innodb_io_capacity_max      =   5000
innodb_log_file_size        =   13G
innodb_log_buffer_size      =   32M
innodb_flush_log_at_trx_commit  =   0

query_cache_size        =   0
query_cache_type        =   0
sort_buffer_size        =   10M
join_buffer_size        =   1M
read_buffer_size        =   1M
key_buffer_size         =   16M
thread_cache_size       =   100
read_rnd_buffer_size        =   1M

max_allowed_packet      =   1073741824
net_buffer_length       =   1048576
#innodb_fill_factor     =   50
max_connections         =   150
table_open_cache        =   3000
table_open_cache_instances  =   55
back_log            =   65535
wait_timeout            =   31536000
connect_timeout         =   31536000
interactive_timeout     =   31536000
net_read_timeout        =   10000
net_write_timeout       =   10000

# Instead of skip-networking the default is now to listen only on
# localhost which is more compatible and is not less secure.
bind-address            =   0.0.0.0

# Procedures tuning
sql_mode    =   STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION
event_scheduler =   on

The next thing I can share is that this server hosts 18 different databases, most of them small. The main problem is with two big databases. The first is around 700 GB with around 100 tables; the second has 200 GB of data in around 15 tables. And both grow constantly (every day).

Let me now explain what I need. I got a ticket at work to make adjustments to the server regarding innodb_buffer_pool_size, and I need to calculate how much more RAM we need to add to this server so MySQL can keep the data in the buffer pool. There is another service that needs to pull the data in its own weird, specific way, and we want to speed things up for it somehow. But I'm not sure how to do that, or whether it's even possible.

The question is: can we somehow pin specific large tables from these two databases in the InnoDB buffer pool, i.e. tell MySQL Server to just keep them in RAM, until we have close to zero I/O? Can MySQL do something like that, or can some other service?

A lead guy at my work told me this is possible, something like subtracting or excluding unneeded tables, combined with this "zero I/O" stuff!? I really don't know what to google or what exactly I can do here. I am aware of memcached, and of installing another SQL server alongside, but that isn't an option right now.

I've tried to tune the server with the Perl script mysqltuner.pl, and also tried queries similar to those on this page: How large should be mysql innodb_buffer_pool_size? The suggestions I got were along the lines of "install 2 TB of RAM", which to me is insane.
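For context, the sizing queries from pages like that one generally look roughly like the sketch below: sum the data and index size of all InnoDB tables and add headroom. The 1.6 multiplier is that kind of page's rule of thumb, not an official MySQL formula, so treat the result as a ballpark, not a requirement:

```sql
-- Rough buffer-pool sizing sketch (the 1.6 headroom factor is a
-- community rule of thumb, not an official recommendation).
SELECT CEILING(total_innodb_bytes * 1.6 / POWER(1024, 3)) AS recommended_bp_gb
FROM (
    SELECT SUM(data_length + index_length) AS total_innodb_bytes
    FROM information_schema.tables
    WHERE engine = 'InnoDB'
) AS sizing;
```

Note that data_length and index_length come from table statistics, so the result is approximate; restricting the WHERE clause to specific schemas or tables sizes only the working set you care about.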

I hope someone can help and suggest or guide me toward a solution, or tell me if I'm looking at this wrong with the innodb_buffer_pool_size variable. Thank you!

mejdenovac
  • Additional information request. Post on pastebin.com and share the links. From your SSH login root, Text results of: B) SHOW GLOBAL STATUS; after minimum 24 hours UPTIME C) SHOW GLOBAL VARIABLES; D) SHOW FULL PROCESSLIST; E) STATUS; F) complete MySQLTuner report G) SHOW ENGINE INNODB STATUS; AND Optional very helpful information, if available includes - htop OR top for most active apps, ulimit -a for a Linux/Unix list of limits, iostat -xm 5 3 for IOPS by device and core/cpu count, for server workload tuning analysis to provide suggestions. – Wilson Hauck Dec 27 '20 at 22:07
  • Take a step back, you seem to have some process that you wish to improve the performance of - start by finding out where the time is going for that and work to reduce it. Just increasing the amount of memory available is unlikely to help a loading process - it’s already going to be writing to memory first anyway. – Andrew Sayer Dec 27 '20 at 23:34
  • Thanks for the answer @WilsonHauck, here is the pastebin link https://pastebin.com/ceKe1Cm5 – mejdenovac Dec 28 '20 at 09:56
  • @AndrewSayer so there is no possibility to put specific tables in RAM and keep them there? – mejdenovac Dec 28 '20 at 09:57
  • Well, you could use the memory storage engine but that would have very severe consequences to your recoverability. There is no way, that I can see, to pin a particular innodb table in ram. The engine will already be very clever about making sure that the bits that would benefit from staying in cache, stay in cache. As I’ve already said, if you want to improve the performance then you need to start by figuring out where the time goes, blindly assuming that ram is your problem is going to end up wasting time and money – Andrew Sayer Dec 28 '20 at 12:31
  • @AndrewSayer what do you think about this:

    I know in front which tables are mostly used, so I will run this query for example: SELECT CEILING(Total_InnoDB_Bytes*1.6/POWER(1024,3)) RIBPS FROM (SELECT SUM(data_length+index_length) Total_InnoDB_Bytes FROM information_schema.tables WHERE engine='InnoDB' and TABLE_NAME in ('table1', 'table2')) A;

    And I get from this around 350GB.

    So my server would need to have 350GB of RAM instead of 125. Does that logic hold?

    Costs are within budget so far, and it's possible to go to 700GB of RAM. But I think 300-400GB will be OK..

    – mejdenovac Dec 28 '20 at 13:43
  • I think it’s telling you that 1.6 times the size of 2 of your tables is 350GB. Does it mean that if you had that much memory then those tables will always be in memory? No. Does it mean that that amount of memory is going to make your performance fears go away? No. Now, start from the correct place - you have some process that you want to be fast - where is the time going for it? – Andrew Sayer Dec 28 '20 at 14:03
  • Thank you for posting your data. The analysis is in process. Check for Suggestions in about 24 hours, please. – Wilson Hauck Dec 28 '20 at 19:04
  • @mejdenovac Were the comments or Answer of any value to you? If so, please upvote and Accept if you have enough points. Positive or negative feedback from you on any tips would be great. – Wilson Hauck Jan 06 '21 at 11:28
  • @WilsonHauck I did that, did accept. Sorry, holidays. It's more clear now, Thanks again both of you for the answers! – mejdenovac Jan 11 '21 at 10:59

1 Answer


Your ulimit -a report indicates Open Files is limited by the OS to 1024.

From your OS command prompt, ulimit -n 32000 would increase this limit dynamically. Shutting down and then restarting your instance would make the additional file handles available and reduce opened_tables, opened_table_definitions, and opened_files.

For this change to persist over OS restarts, follow this url - https://glassonionblog.wordpress.com/2013/01/27/increase-ulimit-and-file-descriptors-limit/

  • but use 32000 rather than 500000 used in their example.
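One caveat worth adding as an assumption: on Ubuntu 20.04, MySQL is normally managed by systemd, and systemd services do not read /etc/security/limits.conf, so the persistence method in that link may not apply to mysqld itself. A commonly used alternative is a systemd drop-in override (the 32000 value below mirrors the answer's suggestion):

```shell
# Create a drop-in override for the mysql service (opens an editor):
sudo systemctl edit mysql
# In the override file, add:
#   [Service]
#   LimitNOFILE=32000
# Reload systemd and restart MySQL so the new limit takes effect:
sudo systemctl daemon-reload
sudo systemctl restart mysql
# Verify the limit seen by the running mysqld process:
grep "open files" /proc/$(pgrep -x mysqld)/limits
```

MySQL also has its own open_files_limit variable, which it derives from the OS limit at startup, so check SHOW GLOBAL VARIABLES LIKE 'open_files_limit'; after the restart to confirm the change took effect.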

Suggestions to consider for your my.cnf [mysqld] section (RPS = Rate Per Second):

innodb_lru_scan_depth=100  # from 1024 to conserve 90% of CPU cycles used every second for the function
innodb_buffer_pool_dump_pct=90  # from 25 (percent) to reduce warmup bp time required
innodb_io_capacity=2000  # from 2500 to extend life of your NVME device
read_buffer_size=192K  # from 1M to reduce handler_read_rnd_next RPS of 160,774

REMOVE innodb_io_capacity_max to allow MySQL to auto-calculate the max as innodb_io_capacity * 2.

Many more opportunities exist to improve your configuration. View profile, Network profile for contact and free downloadable Utility Scripts to assist with performance tuning, specifically our findfragtables.sql will be helpful.

Monitoring SHOW GLOBAL STATUS LIKE 'innodb_data_reads'; divided by uptime gives the read rate per second, which should be your primary indicator of when you need more RAM to support your InnoDB buffer pool. The provided SHOW GLOBAL STATUS data indicates an RPS of 16. Many people are OK with up to 100 RPS; lower RPS is desirable when funds are available. 0 RPS is likely unnecessary, because InnoDB only ages the least recently used data out of RAM.
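As a sketch, the reads-per-second calculation described above can be done in a single statement. On MySQL 5.7 these counters live in performance_schema.global_status (the older information_schema.global_status table is disabled by default in 5.7 unless show_compatibility_56 is ON):

```sql
-- Reads-per-second since server start: Innodb_data_reads / Uptime.
-- variable_value is stored as a string, so MySQL coerces it to a
-- number for the division.
SELECT
    (SELECT variable_value FROM performance_schema.global_status
      WHERE variable_name = 'Innodb_data_reads')
    /
    (SELECT variable_value FROM performance_schema.global_status
      WHERE variable_name = 'Uptime') AS innodb_reads_per_second;
```

Sampling this twice, a few minutes apart, and dividing the delta in reads by the delta in uptime gives a more current rate than the since-startup average.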

Wilson Hauck
  • This query will likely be helpful to monitor with a single query. SELECT * FROM information_schema.global_status WHERE variable_name = 'innodb_data_reads' OR variable_name = 'uptime'; – Wilson Hauck Dec 29 '20 at 13:48
  • This is a great answer except for the part about raising innodb_io_capacity. Raising innodb_io_capacity makes background flushing of the buffer pool more aggressive, reducing its efficiency, and will ultimately lead to bad performance and reduced SSD life.

    For solid state devices innodb_flush_neighbors should be turned off, because seek-time is essentially zero, so this is wasted IO, and innodb_io_capacity should be set as low as possible without inducing stalls. See https://tinyurl.com/y7ekk8qh for a great explanation of what innodb_io_capacity really does and how to tune it.

    – Eljuan Dec 29 '20 at 23:48
  • @mejdenovac Please review my updated answer for innodb_io_capacity. – Wilson Hauck Dec 30 '20 at 15:39