Postgres hit ratio: should big history tables be removed from the production database?


A database has these cache hit ratios:

table A: 0.006
table B: 0.955
table C: 0.023

Tables A and C are history tables: no relationships, large contents, no need for fast queries, and only a few read requests. I looked for a feature to tell Postgres to bypass the cache for these tables, but found none.

Is it as simple as this: if tables A and C are removed from the database, will the cache hit ratio for table B automatically increase (assuming the same amount of data)?
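
For reference, per-table hit ratios like the ones above are usually derived from PostgreSQL's pg_statio_user_tables view. The exact query used here isn't shown; this is just one typical way to compute them:

    -- Heap-block hit ratio per table (index and TOAST I/O have their own columns)
    SELECT relname,
           heap_blks_hit,
           heap_blks_read,
           round(heap_blks_hit::numeric
                 / nullif(heap_blks_hit + heap_blks_read, 0), 3) AS hit_ratio
    FROM pg_statio_user_tables
    ORDER BY heap_blks_hit + heap_blks_read DESC;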

How to solve:


Method 1

PostgreSQL always caches the pages it reads; there is no way to avoid that.
Dropping tables A and C may improve the cache hit ratio for table B, but not by much: at 0.955 there is little room for improvement. Perhaps a few parts of table B are simply not in constant use.

It seems to me that PostgreSQL is already doing what you want it to do: pages from tables A and C drop out of the cache, and pages from B mostly stay in cache.
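
If you want to see this directly rather than infer it from hit ratios, the pg_buffercache contrib extension shows which relations currently occupy shared_buffers. A minimal sketch, assuming the extension is installed (it may require superuser or monitoring privileges) and the default 8 kB block size:

    -- CREATE EXTENSION pg_buffercache;   -- contrib module
    SELECT c.relname,
           count(*)                        AS buffers,
           pg_size_pretty(count(*) * 8192) AS cached_size  -- 8192 = default block size
    FROM pg_buffercache b
    JOIN pg_class c
      ON b.relfilenode = pg_relation_filenode(c.oid)
    WHERE b.reldatabase = (SELECT oid FROM pg_database
                           WHERE datname = current_database())
    GROUP BY c.relname
    ORDER BY buffers DESC
    LIMIT 10;

If the reasoning above holds, tables A and C should occupy only a small share of the buffers.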


