Thinking About Parallelism

Grant Fritchey continues his thoughts on parallelism:

Microsoft set the default value for the Cost Threshold for Parallelism back in the 1990s. They were developing SQL Server 2000. That means this value was determined as a good starting point for query plan costs over 17 years ago. In case you’re not aware, technology, T-SQL, SQL Server, and all the databases and database objects within them shifted, just a little, in the intervening 17 years. We can argue whether or not this value made sense as a starting point (and remember, the default settings are meant to be starting points covering a majority of cases, not a final immutable value) for determining your Cost Threshold for Parallelism 17 years ago. I think we can agree that it’s no longer even a good starting point.
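The setting in question is a server-level advanced option exposed through sp_configure. As a minimal T-SQL sketch of checking and raising it (the target value of 50 below is an assumed example, a common community starting point rather than a number taken from the post):

    -- Cost Threshold for Parallelism is an advanced option,
    -- so expose advanced options first.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;

    -- Check the current value (the long-standing default is 5).
    EXEC sp_configure 'cost threshold for parallelism';

    -- Raise it to a higher starting point. 50 is an example value only;
    -- test against your own workload's plan costs before settling on one.
    EXEC sp_configure 'cost threshold for parallelism', 50;
    RECONFIGURE;

This is a dynamic option, so RECONFIGURE applies it immediately without a restart, and it affects every query on the instance.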

For more thoughts, check out a prior post on figuring out the cost threshold.
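That post derives a value from your own workload rather than guessing. In the same spirit (this is an assumed illustration, not necessarily the linked post's exact method), one common way to survey estimated costs is to read StatementSubTreeCost out of the plan cache:

    -- Rough sketch: pull the top estimated subtree costs from cached plans.
    WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
    SELECT TOP (20)
        qp.query_plan.value('(//StmtSimple/@StatementSubTreeCost)[1]', 'float') AS estimated_cost,
        st.text AS query_text
    FROM sys.dm_exec_cached_plans AS cp
    CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
    CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
    ORDER BY estimated_cost DESC;

Queries whose estimated costs cluster near the top of this list are the ones a given Cost Threshold for Parallelism will or will not allow to go parallel, which makes the distribution a reasonable basis for picking a value.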

