Following this topic, I want to reduce the disk activity for my Moodle database. Currently, the following variables are set
innodb_buffer_pool_size        8589934592
innodb_buffer_pool_chunk_size  134217728
innodb_ft_cache_size           8000000
key_buffer_size                16777216
key_cache_age_threshold        300
open_files_limit               5000
query_cache_limit              1048576
Open_tables                 2000
Table_open_cache_hits       10705086
Table_open_cache_misses     137377
Table_open_cache_overflows  135369
Threads_cached              2
Threads_connected           65
Threads_created             29751
Threads_running             4
Uptime                      96267
From the previous topic, I would like to increase Open_tables, but I don’t know how much to raise it (2k to 4k, or 2k to 10k) or how to monitor its effect.
I also would like to know if I can collect the number of read and write queries for a period of time. Is that possible?
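One way to do this (a sketch, assuming MySQL 5.7 with the Performance Schema enabled, which is the default) is to snapshot the cumulative `Com_*` statement counters twice and subtract: `Com_select` approximates reads, and `Com_insert`/`Com_update`/`Com_delete` approximate writes.

```sql
-- Approximate read/write query counts over an interval
-- (assumes MySQL 5.7; status counters live in performance_schema.global_status).
SELECT VARIABLE_NAME, VARIABLE_VALUE
FROM performance_schema.global_status
WHERE VARIABLE_NAME IN
      ('Com_select', 'Com_insert', 'Com_update', 'Com_delete');
-- Run the same query again after, say, one hour; the difference between
-- the two snapshots is the number of reads/writes in that period.
```

These counters are cumulative since server start, so a single reading only gives you averages over the whole uptime; two readings give you a rate for the interval between them.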
Thanks to Rick for the explanation. I set

table_open_cache = 4000

and now I see
Uptime                      199784
Table_open_cache_misses     210777
Table_open_cache_overflows  206768
So the miss and overflow rates are now about 1.05/sec and 1.03/sec, smaller than before, and I hope they eventually drop below 1/sec.
On the server, I have installed Moodle and Zabbix. So the following databases are available
mysql> SHOW DATABASES;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| moodle             |
| mysql              |
| performance_schema |
| sys                |
| zabbixdb           |
+--------------------+
I don’t know which database is issuing SHOW VARIABLES and CREATE INDEX so often, but I am curious whether there is a way to find out. Maybe there is a bug there.
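There is a way to attribute statements to a schema (a sketch, assuming statement digests are enabled in the Performance Schema, which is the 5.7 default): the digest summary table records, per normalized statement, the default database of the connection that ran it.

```sql
-- Find which schema the frequent SHOW VARIABLES / CREATE INDEX
-- statements come from (assumes performance_schema digests enabled).
SELECT SCHEMA_NAME, DIGEST_TEXT, COUNT_STAR
FROM performance_schema.events_statements_summary_by_digest
WHERE DIGEST_TEXT LIKE 'SHOW VARIABLES%'
   OR DIGEST_TEXT LIKE 'CREATE INDEX%'
ORDER BY COUNT_STAR DESC
LIMIT 20;
```

Given that Zabbix is also on this server, its MySQL monitoring is a plausible suspect for the frequent SHOW VARIABLES; the query above should confirm or rule that out.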
I also enabled the slow query log and saw a few queries from Moodle that take more than 5 seconds.
It is explained here that turning on query_cache_type has positive impacts, so I turned it on. I don’t know what the side effects of that are.
I also increased max_connections to 300. I don’t know why you said lower is better in the comments. Did you say that for Apache? That would become a bottleneck then, as there are guides for increasing the number of connections Apache can handle if the server has the resources.
I also increased thread_cache_size to 20. I don’t know if I can go further or not. What should I expect then?
Answer:
- Version: 5.7.30-0ubuntu0.18.04.1
- 16 GB of RAM — Is this correct?
- Uptime = 1d 02:44:27
- You are not running on Windows.
- Running 64-bit version
- You appear to be running entirely (or mostly) InnoDB.
The More Important Issues:
How many tables do you have? Apparently table_open_cache = 2000 is not high enough. Set it to 4000; then see if Table_open_cache_overflows / Uptime and Table_open_cache_misses / Uptime drop below 1 per second.
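Those two ratios can be computed directly on the server (a sketch; on 5.7 the status counters live in performance_schema.global_status):

```sql
-- Per-second miss/overflow rates since the last restart
-- (assumes MySQL 5.7; VARIABLE_VALUE strings coerce to numbers on division).
SELECT
  (SELECT VARIABLE_VALUE FROM performance_schema.global_status
    WHERE VARIABLE_NAME = 'Table_open_cache_misses')
  / (SELECT VARIABLE_VALUE FROM performance_schema.global_status
      WHERE VARIABLE_NAME = 'Uptime')  AS misses_per_sec,
  (SELECT VARIABLE_VALUE FROM performance_schema.global_status
    WHERE VARIABLE_NAME = 'Table_open_cache_overflows')
  / (SELECT VARIABLE_VALUE FROM performance_schema.global_status
      WHERE VARIABLE_NAME = 'Uptime')  AS overflows_per_sec;
```

Note the counters are cumulative since server start, so re-check only after the server has run for a while at the new setting.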
If you are using SSDs, increase innodb_io_capacity to 500.
For production servers, it is usually better to turn OFF the Query Cache.
Use the slowlog to discover the "worst" queries. There seem to be some naughty queries.
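The slowlog can be enabled at runtime without a restart (a sketch; these are standard server settings, values illustrative):

```sql
-- Enable the slow query log dynamically.
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 2;   -- log anything slower than 2 seconds
-- Optional: also log queries that use no index at all.
SET GLOBAL log_queries_not_using_indexes = 'ON';
```

Summarize the resulting log with mysqldumpslow or pt-query-digest, and add the same settings to my.cnf under [mysqld] so they survive a restart.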
Why are you doing SHOW VARIABLES twice a second? And why CREATE INDEX a hundred times in a day?
Even in the 26 hours that the server has been up, you have hit max_connections (151). Can you explain why this is happening? Yes, that setting could be increased, but that could make things worse, so we should try to find the root cause first.
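To see where the connections come from before raising the limit, one option (a sketch using the standard PROCESSLIST table) is to count current connections per user and client host:

```sql
-- Current connections grouped by user and client host.
SELECT USER,
       SUBSTRING_INDEX(HOST, ':', 1) AS client,
       COUNT(*)                      AS conns
FROM information_schema.PROCESSLIST
GROUP BY USER, client
ORDER BY conns DESC;
```

If most connections belong to the web application, connection pooling (or Apache/PHP worker limits) is usually the better lever than a bigger max_connections.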
Increase thread_cache_size (from 8) to 20. (I don’t know the optimal number for your server, but apparently 8 is too low.)
Details and other observations:
( Table_open_cache_overflows ) = 135,369 / 96267 = 1.4 /sec
— May need to increase table_open_cache (now 2000)
( Table_open_cache_misses ) = 137,377 / 96267 = 1.4 /sec
— May need to increase table_open_cache (now 2000)
( innodb_lru_scan_depth * innodb_page_cleaners ) = 1,024 * 4 = 4,096 — Amount of work for page cleaners every second.
— "InnoDB: page_cleaner: 1000ms intended loop took …" may be fixable by lowering lru_scan_depth: Consider 1000 / innodb_page_cleaners (now 4). Also check for swapping.
( innodb_page_cleaners / innodb_buffer_pool_instances ) = 4 / 8 = 0.5 — innodb_page_cleaners
— Recommend setting innodb_page_cleaners (now 4) to innodb_buffer_pool_instances (now 8)
( innodb_lru_scan_depth ) = 1,024
— "InnoDB: page_cleaner: 1000ms intended loop took …" may be fixed by lowering lru_scan_depth
( innodb_io_capacity_max / innodb_io_capacity ) = 2,000 / 200 = 10 — Capacity: max/plain
— Recommend 2. Max should be about equal to the IOPs your I/O subsystem can handle. (If the drive type is unknown 2000/200 may be a reasonable pair.)
( innodb_flush_method ) = '' — How InnoDB should ask the OS to write blocks. Suggest O_DIRECT or O_ALL_DIRECT (Percona) to avoid double buffering. (At least for Unix.) See chrischandler for a caveat about O_ALL_DIRECT.
( innodb_flush_neighbors ) = 1 — A minor optimization when writing blocks to disk.
— Use 0 for SSD drives; 1 for HDD.
( innodb_io_capacity ) = 200 — I/O ops per second capable on disk . 100 for slow drives; 200 for spinning drives; 1000-2000 for SSDs; multiply by RAID factor.
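If the drives turn out to be SSDs, the SSD-related suggestions above can be applied like this (a sketch; the values are illustrative, so confirm the drive type first):

```sql
-- Illustrative SSD settings; both variables are dynamic in 5.7.
SET GLOBAL innodb_io_capacity = 500;
SET GLOBAL innodb_flush_neighbors = 0;  -- neighbor flushing only helps HDDs
-- innodb_flush_method = O_DIRECT is NOT dynamic: set it in my.cnf
-- under [mysqld] and restart the server.
```

Remember to mirror any SET GLOBAL changes in my.cnf, or they will be lost on the next restart.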
( innodb_print_all_deadlocks ) = OFF — Whether to log all Deadlocks.
— If you are plagued with Deadlocks, turn this on. Caution: If you have lots of deadlocks, this may write a lot to disk.
( character_set_server ) = latin1
— Charset problems may be helped by setting character_set_server (now latin1) to utf8mb4. That is the future default.
( local_infile ) = ON
— local_infile (now ON) is a potential security issue
( Qcache_lowmem_prunes ) = 1,739,291 / 96267 = 18 /sec — Running out of room in QC
— increase query_cache_size (now 16777216)
( Qcache_lowmem_prunes/Qcache_inserts ) = 1,739,291/5746238 = 30.3% — Removal Ratio (frequency of needing to prune due to not enough memory)
( (query_cache_size - Qcache_free_memory) / Qcache_queries_in_cache / query_alloc_block_size ) = (16M - 6135552) / 4715 / 8192 = 0.276 — query_alloc_block_size vs formula
— Adjust query_alloc_block_size (now 8192)
( Created_tmp_disk_tables ) = 226,136 / 96267 = 2.3 /sec — Frequency of creating disk "temp" tables as part of complex SELECTs
— increase tmp_table_size (now 16777216) and max_heap_table_size (now 16777216).
Check the rules for temp tables on when MEMORY is used instead of MyISAM. Perhaps minor schema or query changes can avoid MyISAM.
Better indexes and reformulation of queries are more likely to help.
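To find which statements are creating those on-disk temp tables (a sketch using the Performance Schema digest summary available in 5.7):

```sql
-- Top statements by temp tables spilled to disk.
SELECT DIGEST_TEXT, COUNT_STAR, SUM_CREATED_TMP_DISK_TABLES
FROM performance_schema.events_statements_summary_by_digest
WHERE SUM_CREATED_TMP_DISK_TABLES > 0
ORDER BY SUM_CREATED_TMP_DISK_TABLES DESC
LIMIT 10;
```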
( Com_show_variables ) = 200,230 / 96267 = 2.1 /sec — SHOW VARIABLES …
— Why are you requesting the VARIABLES so often?
( Select_scan ) = 1,067,468 / 96267 = 11 /sec — full table scans
— Add indexes / optimize queries (unless they are tiny tables)
( Select_scan / Com_select ) = 1,067,468 / 5869892 = 18.2% — % of selects doing full table scan. (May be fooled by Stored Routines.)
— Add indexes / optimize queries
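The sys schema (already present in your SHOW DATABASES output) has a ready-made view for this; a sketch, assuming the stock 5.7 sys schema:

```sql
-- Statements doing full table scans, worst offenders first.
SELECT query, db, exec_count, no_index_used_count
FROM sys.statements_with_full_table_scans
ORDER BY no_index_used_count DESC
LIMIT 10;
```

Start with the rows where `db = 'moodle'`; those are the candidates for new indexes or query rewrites.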
( slow_query_log ) = OFF — Whether to log slow queries. (5.1.12)
( long_query_time ) = 10 — Cutoff (Seconds) for defining a "slow" query.
— Suggest 2
( log_slow_slave_statements ) = OFF — (5.6.11, 5.7.1) By default, replicated statements won’t show up in the slowlog; this causes them to show.
— It can be helpful in the slowlog to see writes that could be interfering with Replica reads.
( back_log ) = 80 — (Autosized as of 5.6.6; based on max_connections)
— Raising to min(150, max_connections (now 151)) may help when doing lots of connections.
( Max_used_connections / max_connections ) = 152 / 151 = 100.7% — Peak % of connections
— increase max_connections (now 151) and/or decrease wait_timeout (now 28800)
( Connections ) = 911,023 / 96267 = 9.5 /sec — Connections
— Increase wait_timeout (now 28800); use pooling?
Open_files = 0
Com_create_index = 4.2 /HR
Innodb_buffer_pool_pages_misc = 143,234
Innodb_buffer_pool_pages_misc * 16384 / innodb_buffer_pool_size = 27.3%
Innodb_os_log_pending_fsyncs = 1
external_user = root
innodb_fast_shutdown = 1
optimizer_trace = enabled=off,one_line=off
optimizer_trace_features = greedy_search=on, range_optimizer=on, dynamic_range=on, repeated_subselect=on
slave_rows_search_algorithms = TABLE_SCAN,INDEX_SCAN
Thank you 🙂