How many connections should I set for PostgreSQL?

How many connections should I set for PostgreSQL (version 13.x)? Should the value be based on the number of CPU cores, the amount of memory, or something else? How do I estimate a good value for max_connections? Should I estimate it like this?

single_thread_memory = thread_stack (256 KB) + binlog_cache_size (32 KB) + join_buffer_size (256 KB) + sort_buffer_size (256 KB) + read_buffer_size (128 KB) + read_rnd_buffer_size (256 KB) ≈ 1 MB

How to solve:

Method 1

Unfortunately, there’s no single formula that definitively determines what your max_connections setting should be. Many factors need to be considered.

For example:

  • How many cores can be allocated to your database server?
  • How many connections will be persistent vs. transient? How many of the persistent connections will generally sit idle, or idle in transaction (the latter generally less desirable)? A quick way to check the current mix is sketched after this list.
  • What is the maximum number of concurrent connections that should be serviced (e.g., for incidental, accidental, and/or intentional denial-of-service prevention)?
  • What proportion of your queries are expected to be CPU-bound vs. I/O-bound?
  • What other services, if any, will your database server be performing, and are those services more CPU-bound and/or I/O-bound?
  • Can additional servers with synchronized/replicated data be implemented to handle short- and/or long-term demand (as opposed to increasing max_connections)?
  • If you can perform load testing, such as with pgbench and/or application-specific testing, which queries, concurrency levels, etc. revealed CPU limits, I/O limits, networking limits, and so on?
  • How do you want your environment to fail when overloaded (e.g., is it better to accept as many connections as possible, if they can’t all be reasonably serviced, or, is it better to ensure that all successful connections be serviced at the expense of rejecting others)?
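
To ground the persistent/idle point above, a quick look at pg_stat_activity shows how current backends break down by state. This is only a minimal sketch, assuming PostgreSQL 10+ column names:

    -- Count backends by state: active, idle, idle in transaction, etc.
    -- ("idle in transaction" is usually the least desirable state).
    SELECT state,
           count(*) AS connections,
           max(now() - state_change) AS longest_in_state
    FROM pg_stat_activity
    WHERE pid <> pg_backend_pid()   -- ignore this monitoring session
    GROUP BY state
    ORDER BY connections DESC;

    -- Compare the totals against the configured ceiling:
    SHOW max_connections;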

Alternatives to max_connections that might be of interest:

  • Use of superuser_reserved_connections for critical/high-priority connection availability.
  • Use of extensions (e.g., user-contributed modules like connection_limits) that allow per-database, per-user, and per-IP connection limits; PostgreSQL's built-in per-role and per-database limits are sketched after this list.
  • Use of standalone and/or shared procedural wrappers (e.g., in PL/pgSQL) to effectively implement "application"-layer quotas for users, IPs, queries, etc.
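
Where an extension isn't available, the per-user and per-database cases can also be handled with PostgreSQL's built-in CONNECTION LIMIT clauses, and superuser_reserved_connections can be raised so an administrator can always get in. A minimal sketch; the role and database names are placeholders:

    -- Keep a few slots usable only by superusers even when ordinary
    -- connections have used up max_connections (takes effect after a restart).
    ALTER SYSTEM SET superuser_reserved_connections = 5;

    -- Built-in per-role and per-database ceilings (no extension required);
    -- app_user and app_db are placeholder names.
    ALTER ROLE app_user CONNECTION LIMIT 50;
    ALTER DATABASE app_db CONNECTION LIMIT 80;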

Regarding the other aspects of your question, the same type of analysis applies; e.g., how much work_mem should be allocated depends on the queries being performed. Consider reading through PostgreSQL: Server Administration: Server Configuration: Resource Consumption. Nearly all of these settings are worth contemplating when tuning a PostgreSQL server (as are those throughout PostgreSQL: Server Administration: Server Configuration).
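
Incidentally, the parameters in the question's estimate (thread_stack, binlog_cache_size, join_buffer_size, and so on) are MySQL settings, not PostgreSQL ones. The closest PostgreSQL analogue for per-connection memory is work_mem (which a single query can allocate several times over, once per sort or hash step), plus temp_buffers and some fixed per-backend overhead. As a rough sanity check of the currently configured values:

    -- Show the settings that scale with connection count. Very roughly,
    -- max_connections * work_mem * (sorts/hashes per query) should fit
    -- comfortably in the RAM left over after shared_buffers.
    SELECT name, setting, unit
    FROM pg_settings
    WHERE name IN ('max_connections', 'work_mem', 'temp_buffers',
                   'shared_buffers', 'maintenance_work_mem');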

All methods were sourced from stackoverflow.com or stackexchange.com and are licensed under CC BY-SA 2.5, CC BY-SA 3.0, or CC BY-SA 4.0.
