I need to dump some tables of my Postgres database, and I am worried about getting inconsistent data. The database is very large and the dump takes several minutes.
In my research I found a command to dump single tables:
pg_dump --column-inserts -a -t exampleTable > /tmp/myTables.sql
The next problem is that new data will be inserted into the tables while the dump is running. Is it correct that data inserted during the dump is ignored, especially if I run the command as in the example? Is there anything special I need to do when dumping?
Or can I simply run the pg_dump command and everything will be fine? Does anyone have an easy way to test this?
How to solve:
No matter how you run pg_dump, it always uses a single REPEATABLE READ, READ ONLY transaction to read the data, so it sees a consistent snapshot of the data as of the time the backup started.

So as long as you dump all the data you need with a single pg_dump command, you are safe from data inconsistencies.
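Applied to the question's example, a single invocation that names each table once might look like the sketch below (the table names, database name, and target path are placeholders). Note that the dump itself is written to standard output, so a plain > redirect is used; the 2> in the question would capture only error messages and leave the actual dump on the terminal.

```
# One pg_dump invocation, one snapshot: both tables are read
# within the same REPEATABLE READ transaction.
pg_dump --column-inserts -a -t exampleTable1 -t exampleTable2 mydb > /tmp/myTables.sql
```

By contrast, running pg_dump twice (once per table) takes two separate snapshots, and rows inserted between the two runs could make the resulting files mutually inconsistent.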
If you want a quote from the documentation:
It makes consistent backups even if the database is being used concurrently.
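As a rough way to convince yourself, here is a sketch (assuming a database mydb with a table exampleTable that has a text column name and is large enough that the dump takes a few seconds): start the dump, insert a marker row while it is still running, then check that the marker is absent from the dump file.

```
# Start the dump in the background.
pg_dump --column-inserts -a -t exampleTable mydb > /tmp/snapshotTest.sql &
DUMP_PID=$!

# While the dump is running, insert a recognizable row.
psql mydb -c "INSERT INTO exampleTable (name) VALUES ('inserted-during-dump');"

wait "$DUMP_PID"

# The marker should not appear: the dump only sees the snapshot
# taken when pg_dump began.
grep -c 'inserted-during-dump' /tmp/snapshotTest.sql
```

If the INSERT genuinely runs while the dump is in progress, the grep count should be 0, because the inserted row is newer than pg_dump's snapshot.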