I’ve seen many articles and videos that say things like the following about Postgres over MySQL.
Postgres allocates a significant amount of memory (about 10MB) when it
forks a new process for each connection. This causes bloated memory
usage and effectively eats away at speed. Thus, it sacrifices speed
for data integrity and standards compliance. For a simple
implementation, then, Postgres would be a poor choice! – Sumo Logic
Every time I read or hear that somewhere, there is no context about what it really means or whether there is a way to handle it. What are specific ways to deal with this type of problem in PostgreSQL? Is it overcome by using connection pools?
Solution:
Interesting to hear that 10MB is "a significant amount of memory".
A database is not a web server, which is optimized for serving lots of short-lived connections. A PostgreSQL connection is comparatively expensive to establish because each backend process builds up cached catalog data for efficiency.
That is why you use a connection pool, so that all your short database requests are handled by a small number of persistent database connections.
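To make the pooling idea concrete, here is a minimal sketch in plain Python. The `ConnectionPool` class and the `connect` callable are illustrative placeholders, not a real driver API: in practice you would hand the pool something like `psycopg2.connect`, or use a ready-made pooler instead of writing your own.

```python
import queue


class ConnectionPool:
    """Minimal illustration of connection pooling: a small, fixed set of
    long-lived connections is shared by many short-lived requests, so the
    expensive connect step happens only once per pooled connection."""

    def __init__(self, connect, size):
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(connect())  # pay the connection cost up front

    def acquire(self):
        return self._pool.get()  # blocks if all connections are busy

    def release(self, conn):
        self._pool.put(conn)


# Count how often the expensive "connect" step actually runs.
opened = 0


def connect():
    global opened
    opened += 1
    return object()  # stand-in for a real database connection


pool = ConnectionPool(connect, size=2)
for _ in range(100):  # 100 short requests...
    conn = pool.acquire()
    pool.release(conn)
print(opened)  # ...but only 2 connections were ever opened
```

A hundred requests are served, yet the costly connect step runs only twice, which is exactly why the per-connection memory overhead stops mattering once a pool is in place.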
I doubt that this is specific to PostgreSQL; other databases benefit from connection pools as well, and some even have one built into the server. So I would see the statement you quote as marketing from a competitor who cannot come up with anything better than repeating the old myth that PostgreSQL is slow and complicated.
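For PostgreSQL specifically, an external pooler such as PgBouncer is a common choice. The following config fragment is only a sketch: the database name, host, and ports are placeholders you would replace with your own.

```ini
; pgbouncer.ini (sketch; names and addresses are placeholders)
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_port = 6432          ; clients connect here instead of port 5432
pool_mode = transaction     ; return the connection to the pool after each transaction
default_pool_size = 20      ; server connections kept open per database/user pair
```

With this in place, hundreds of application clients can share a couple of dozen actual PostgreSQL backends, so the roughly 10MB-per-backend overhead is bounded regardless of client count.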