I am trying to change the MAXDOP value for my Azure Managed SQL database, which is currently set to ZERO.
Here is the script I am running:
USE [my-sql-db];
GO
ALTER DATABASE SCOPED CONFIGURATION SET MAXDOP = 5;
the script works fine and returns success, but then when I run
SELECT value_in_use FROM sys.configurations WHERE name = 'max degree of parallelism'
I see the number is set to ZERO
and when I run
SELECT * FROM sys.dm_exec_query_memory_grants I can see the value is set to 1.
I am wondering why I get conflicting numbers when I check the DB settings in SSMS vs when I run these queries.
How to solve:
In addition to what sqL_handle said, you’re running
ALTER DATABASE SCOPED CONFIGURATION, keyword being database. If you read the docs on
sys.configurations, they say this system view "Contains a row for each server-wide configuration option value in the system." So in short, they’re two different configuration options you’re comparing, database vs server.
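A quick way to see the two layers side by side is to query each catalog view in the database where the scoped option was set. On the setup described in the question, the first query should return 5 while the second still returns 0:

```sql
-- Database-scoped setting (what ALTER DATABASE SCOPED CONFIGURATION changed)
SELECT [name], [value]
FROM sys.database_scoped_configurations
WHERE [name] = 'MAXDOP';

-- Server-wide setting (what sys.configurations / sp_configure reports)
SELECT [name], value_in_use
FROM sys.configurations
WHERE [name] = 'max degree of parallelism';
```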
MAXDOP for a given query execution is negotiated at runtime, taking into account a number of factors.
This article from 2013 gives the nuts and bolts involved at that time.
How It Works: Maximizing Max Degree Of Parallelism (MAXDOP)
Since that article was written, the database scoped configuration setting for MAXDOP was added. It doesn’t replace the setting in sys.configurations which can be set via the UI in SSMS or via sp_configure. Rather, it’s another layer at which MAXDOP can be influenced for a given query. Setting the database scoped configuration MAXDOP option should not be expected to change the value stored in the instance-wide MAXDOP setting.
Regardless of the MAXDOP candidate determined by evaluating MAXDOP system settings within their hierarchy and applicability to a query execution, there are additional query and runtime considerations that may determine DOP for execution.
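Query-level hints sit on top of both settings; a MAXDOP hint on an individual statement caps DOP for just that execution, regardless of the server-wide or database-scoped values. A minimal sketch (the table name is hypothetical):

```sql
-- Caps parallelism for this statement only, overriding
-- both the instance and database-scoped MAXDOP settings
SELECT COUNT(*)
FROM dbo.SomeLargeTable  -- hypothetical table
OPTION (MAXDOP 2);
```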
Use of a non-inlined scalar user defined function, for example, can result in a serial plan rather than parallel (or at least a serial plan zone).
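As a sketch of that inhibitor (object names here are made up), a scalar UDF that SQL Server cannot inline forces a serial zone in the calling statement. GETDATE() is one of the constructs that blocks inlining:

```sql
-- Hypothetical scalar UDF: the GETDATE() call makes it
-- non-inlineable, so statements calling it get a serial zone.
CREATE FUNCTION dbo.fn_AgeInDays (@d DATETIME)
RETURNS INT
AS
BEGIN
    RETURN DATEDIFF(DAY, @d, GETDATE());
END;
GO

-- On SQL Server 2019+, check whether the function can be inlined
SELECT OBJECT_NAME(object_id) AS fn_name, is_inlineable
FROM sys.sql_modules
WHERE object_id = OBJECT_ID('dbo.fn_AgeInDays');
```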
One of the runtime factors influencing DOP is "Max Workers Count" with current execution context ID 0 workers, current reserved parallel workers, and desired parallel worker reservation in mind. If the desired parallel worker reservation would make that sum exceed "Max Workers Count" the candidate DOP is downgraded until that desired reservation fits within "Max Workers Count".
The allowed memory grant can also reduce candidate DOP for a query. I’ve seen this most often, and maybe exclusively, with columnstore bulk inserts – if I want DOP 8 but the optimizer estimates the allowed memory grant for the query execution is too small to perform the bulk insert well, candidate DOP will be lowered.
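To see the DOP that was actually negotiated at runtime, the same DMV the question used exposes it directly via the dop column, alongside the grant sizes that can drive the downgrade:

```sql
-- One row per active request with a memory grant;
-- dop is the degree of parallelism actually in use
SELECT session_id, requested_memory_kb, granted_memory_kb, dop
FROM sys.dm_exec_query_memory_grants;
```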
And of course, when selecting a query plan, a serial query plan may be chosen rather than a parallel plan if its cost is lower than parallel plan cost.
But… not if its cost is too low. There is an instance-wide setting, "Cost Threshold for Parallelism" (CTFP); the default is 5. If the cost of the serial plan is below CTFP, a parallel plan won’t be selected (without the influence of something like the ENABLE_PARALLEL_PLAN_PREFERENCE hint).
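To inspect the current CTFP, and to nudge the optimizer toward a parallel plan for a query that falls below it, something like the following works. Note the hint is undocumented, so treat the second statement as a testing aid only (the table name is hypothetical):

```sql
-- Current cost threshold for parallelism (instance-wide)
SELECT value_in_use
FROM sys.configurations
WHERE [name] = 'cost threshold for parallelism';

-- Encourage a parallel plan even below CTFP (undocumented hint; testing only)
SELECT COUNT(*)
FROM dbo.SmallTable  -- hypothetical table
OPTION (USE HINT('ENABLE_PARALLEL_PLAN_PREFERENCE'));
```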