I create a table with 1 million records, then delete all of those records (a common pattern with processing queues):
CREATE TABLE example (id int PRIMARY KEY);
INSERT INTO example SELECT generate_series(1, 1000 * 1000);
DELETE FROM example;
VACUUM example;
The index remains the same size as before the deletion:
SELECT pg_size_pretty(pg_relation_size(c.oid)) AS size
FROM pg_class c
WHERE oid = 'example_pkey'::regclass;
 size
-------
 21 MB
(1 row)
Likewise, bloat-detection queries (e.g. ioguix/pgsql-bloat-estimation) identify this index as extremely bloated.
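For an exact (rather than estimated) picture, the pgstattuple contrib extension can inspect the B-tree directly; this is a sketch assuming the extension is installable on your server:

```sql
-- Requires the pgstattuple contrib extension (and appropriate privileges).
CREATE EXTENSION IF NOT EXISTS pgstattuple;

-- pgstatindex reports exact B-tree statistics; avg_leaf_density will be
-- very low for an index whose entries all point to deleted rows.
SELECT avg_leaf_density, leaf_fragmentation
FROM pgstatindex('example_pkey');
```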
I don’t care about the disk usage itself. Will the performance of the index be affected by its large size? Should I be running REINDEX after bulk deletions like this?
How to solve:
There may be a substantial impact on the first few queries after the rows were deleted, because autovacuum hasn’t finished processing the table yet and queries need to dig through a lot of index entries that belong to deleted rows.
However, these index scans will mark the index entries as "dead", and subsequent index scans will ignore them.
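One way to observe this effect (a sketch; the exact buffer counts depend on your system) is to compare the buffer usage of repeated index scans:

```sql
-- First scan after the DELETE: reads many index pages whose entries all
-- point to dead rows, and marks those entries as killed along the way.
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM example WHERE id = 42;

-- The same query again should touch far fewer buffers, because
-- killed index entries are now skipped.
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM example WHERE id = 42;
```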
After that, and after VACUUM has finished, index scans will be almost as fast as before.
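Note that VACUUM does not shrink the index itself. If you also want to return the disk space to the operating system, you can rebuild the index; this is only needed for the space, not for query performance. A sketch, assuming PostgreSQL 12 or later for the CONCURRENTLY option:

```sql
-- Rebuilds the index without blocking concurrent writes (PostgreSQL 12+).
-- On older versions, plain REINDEX works but takes a stronger lock.
REINDEX INDEX CONCURRENTLY example_pkey;

-- The index should now be back to its minimal size.
SELECT pg_size_pretty(pg_relation_size('example_pkey'));
```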