I have a huge table whose rows I cannot delete; I can only update the columns that store huge base64 data, setting them to NULL to try to release space.
So I wrote a script that sets all the base64 images to NULL, expecting the space to be released after a VACUUM.
The images are now NULL and VACUUM has run, but the table is still exactly the same size. I was sure the space would be released immediately, so what am I doing wrong?
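For illustration, the script boils down to something like this (big_table and image_b64 are placeholder names; the real schema differs):

```sql
-- Placeholder names: the real table and column are different.
-- Null out the base64 image payload on every row.
UPDATE big_table
SET image_b64 = NULL;

-- Then try to reclaim the space.
VACUUM big_table;
```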
Will VACUUM FULL be able to release the space from the huge varchar data that I updated to NULL? (I would have to lock the table to run it, so I need to be sure before I try.)
The dump size decreased by a factor of 10, so I expected a similar drop in the database size.
How to solve:
Short answer – Yes.
When you update rows, PostgreSQL creates a new version of each row, and the old version stays behind in the table as a "dead" row.
- Plain VACUUM does not shrink the table. It makes the space occupied by these dead rows available for reuse by future inserts and updates, but it does not return that space to the operating system, so the file size stays the same. (That is also why your dump is 10 times smaller: pg_dump writes out only the live rows, not the dead space inside the table file.)
- VACUUM FULL removes the dead rows completely by rewriting the table into a new, compact file. That is what releases space back to the operating system.
And yes, the table will be locked (with an ACCESS EXCLUSIVE lock, blocking even reads) while this is happening.
See the PostgreSQL Wiki page on "Vacuum Full".
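As a rough sketch (reusing the placeholder names from the question), you can watch the on-disk size to confirm the difference between the two commands:

```sql
-- Total size on disk: table + indexes + TOAST.
SELECT pg_size_pretty(pg_total_relation_size('big_table'));

-- Plain VACUUM: the dead-row space becomes reusable,
-- but the reported size does not go down.
VACUUM big_table;
SELECT pg_size_pretty(pg_total_relation_size('big_table'));

-- VACUUM FULL: rewrites the table into a new file and drops the old one.
-- Holds an ACCESS EXCLUSIVE lock for the whole run, so plan for downtime.
VACUUM FULL big_table;
SELECT pg_size_pretty(pg_total_relation_size('big_table'));
```

Note that wide varchar values like yours were probably stored in the TOAST table; pg_total_relation_size includes TOAST, so this measures the space you actually get back.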