AWS RDS Postgresql and max_worker_processes


AWS RDS Postgresql 12.10

According to https://www.enterprisedb.com/postgres-tutorials/comprehensive-guide-how-tune-database-parameters-and-configuration-postgresql:

max_worker_processes: Set this to the number of CPUs you want to share for PostgreSQL exclusively. This is the number of background processes the database engine can use. Setting this parameter will require a server restart. The default is 8.

The default is also 8 in AWS RDS Postgresql, no matter the Instance type (and thus number of vCPUs). Am I cheating myself by paying for a db.r5.12xlarge (48 CPUs) while using the default max_worker_processes value?
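To see what an instance is actually running with, you can inspect the setting and the related parallel-query knobs directly. A quick sketch, assuming `psql` can reach the instance (the endpoint and user below are placeholders, not values from the question):

```shell
# Hypothetical RDS endpoint and user -- substitute your own.
HOST=mydb.xxxxxxxx.us-east-1.rds.amazonaws.com

# Cap on background worker processes (the parameter in question, default 8)
psql -h "$HOST" -U postgres -c "SHOW max_worker_processes;"

# Workers available to parallel queries, bounded by max_worker_processes
psql -h "$HOST" -U postgres -c "SHOW max_parallel_workers;"

# Workers a single Gather node may request (default 2)
psql -h "$HOST" -U postgres -c "SHOW max_parallel_workers_per_gather;"
```

Parallel query is further limited by `max_parallel_workers` and `max_parallel_workers_per_gather`, so raising `max_worker_processes` alone does not change per-query parallelism.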

How to solve:


Method 1

If you depend on large-scale parallel query from just a few concurrent sessions to get your work done, then yes, you have been shortchanging yourself. If you don't, then probably not. For example, if you have a large number of simultaneous connections all submitting CPU-intensive queries at the same time, you will probably be able to keep all 24 physical cores busy without any parallelization. (The "vCPU" count doesn't mean much here: a db.r5.12xlarge advertises 48 vCPUs, but those are 24 hyperthreaded physical cores, so more than 24 CPU-intensive jobs running at the same time will start competing with each other for processing time, regardless of the number of vCPUs that are claimed.)
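If you do decide to raise the limit, note that on RDS you cannot edit postgresql.conf directly; the value lives in a DB parameter group. A sketch of the change, assuming a custom parameter group named `my-pg12-params` is already attached to the instance (the group name, instance identifier, and target value are placeholders):

```shell
# max_worker_processes is a static parameter, so ApplyMethod must be
# pending-reboot; the change is staged until the instance restarts.
aws rds modify-db-parameter-group \
    --db-parameter-group-name my-pg12-params \
    --parameters "ParameterName=max_worker_processes,ParameterValue=48,ApplyMethod=pending-reboot"

# Apply the staged value by rebooting the instance (brief downtime).
aws rds reboot-db-instance --db-instance-identifier my-instance
```

This only works with a custom parameter group; the default parameter groups that RDS creates are read-only.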


All methods were sourced from stackoverflow.com or stackexchange.com and are licensed under CC BY-SA 2.5, CC BY-SA 3.0, or CC BY-SA 4.0.
