I would like to logically replicate pg_catalog tables from many (hundreds of) databases to a single cluster so I can reliably compare schemas via query. I have tried FDW (and dblink), but network instability would at times leave me with unsatisfactory results. To work around that I tried materializing the FDW queries, but scheduling so many refreshes was a pain. I'd really rather just replicate, if at all possible.
Answer:
No, that is not possible. For one thing, the destination table would have to have the same name and live in the same schema. Trigger-based replication is not an option either, because you cannot create triggers on system catalog tables.
Foreign tables are your only choice. If the connection is unstable, put a materialized view on top of each foreign table and refresh it on a schedule. That way you always have at least the most recent successful snapshot: a failed refresh leaves the previous snapshot intact.
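A minimal sketch of that setup for one remote database and one catalog table, using postgres_fdw. The server name, host, schema, and credentials are placeholders; the column list covers only a subset of pg_class, which is enough for schema comparison:

```sql
-- Assumes postgres_fdw is available; all names below are placeholders.
CREATE EXTENSION IF NOT EXISTS postgres_fdw;

CREATE SERVER db1 FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'db1.example.com', dbname 'app');

CREATE USER MAPPING FOR CURRENT_USER SERVER db1
    OPTIONS (user 'monitor', password 'secret');

-- One local schema per remote database keeps names from colliding
CREATE SCHEMA IF NOT EXISTS db1_catalog;

-- Foreign table over (a subset of) the remote pg_class
CREATE FOREIGN TABLE db1_catalog.pg_class (
    oid          oid,
    relname      name,
    relnamespace oid,
    relkind      "char"
) SERVER db1
  OPTIONS (schema_name 'pg_catalog', table_name 'pg_class');

-- Local snapshot; queries against this never touch the network
CREATE MATERIALIZED VIEW db1_catalog.pg_class_snap AS
    SELECT * FROM db1_catalog.pg_class;

-- Run this periodically (cron, pg_cron, etc.); if the remote is
-- unreachable the refresh fails and the old snapshot survives
REFRESH MATERIALIZED VIEW db1_catalog.pg_class_snap;
```

With one schema per remote database, comparison queries can then join or UNION the `*_snap` views locally without depending on any remote connection at query time.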