Category: Query Tuning

Window Function Sort Performance

Lukas Eder explains one potential issue with window functions against large data sets:

Usually, this blog is 100% pro window functions and advocates using them at any occasion. But like any tool, window functions come at a price and we must carefully evaluate if that’s a price we’re willing to pay. That price can be a sort operation. And as we all know, sort operations are expensive. They follow O(n log n) complexity, which must be avoided at all costs for large data sets.

In a previous post, I’ve described how to calculate a running total with window functions (among other ways). In this post, we’re going to calculate the cumulative revenue at each payment in our Sakila database.

This is a good article comparing how different RDBMS products handle a fairly complicated windowed query and what you can do to improve performance.
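
For reference, a cumulative-revenue query of the kind under discussion looks roughly like this.  It is only a sketch against Sakila’s payment table (payment_id, payment_date, amount), not the exact query from the article:

  SELECT
      payment_id,
      payment_date,
      amount,
      -- Running total of revenue up to and including this payment.
      SUM(amount) OVER (ORDER BY payment_date, payment_id
                        ROWS UNBOUNDED PRECEDING) AS cumulative_revenue
  FROM payment
  ORDER BY payment_date, payment_id;

Whether that ORDER BY inside the OVER() clause turns into an explicit sort operator or can lean on an existing index is precisely the cost the article digs into.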

Join Elimination

Lukas Eder has a nice post explaining different forms of automatic join elimination:

We intended to fetch all customers and their addresses. But observe: We project only columns from the CUSTOMER table and we don’t have any predicates at all, specifically not predicates using the ADDRESS table. So, we’re completely ignoring any contributions from the ADDRESS table. We never really needed the JOIN in the first place!

And in fact, the optimiser can prove this too, because of the FOREIGN KEY constraint on C.ADDRESS_ID, which guarantees that every CUSTOMER record has exactly one corresponding ADDRESS record. The JOIN does not duplicate or remove any CUSTOMER rows, so it is unneeded and thus eliminated (by some, though not all, databases; the article lists each database at the end).

So, in the presence of said FOREIGN KEY, the database can rewrite the SQL statement to an equivalent SQL statement that skips the join entirely.

Read on for a comparison across different products as well.
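
As a rough illustration of the pattern (my sketch against Sakila-style customer and address tables, not code from the article), a statement like the first one below can be rewritten to the second whenever customer.address_id is a NOT NULL foreign key referencing address:

  -- The join projects no ADDRESS columns and, thanks to the foreign key,
  -- cannot change the number of CUSTOMER rows, so it can be dropped.
  SELECT c.first_name, c.last_name
  FROM customer AS c
  JOIN address AS a
      ON a.address_id = c.address_id;

  -- Equivalent statement after join elimination:
  SELECT c.first_name, c.last_name
  FROM customer AS c;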

The Pain Of Multi-Statement TVFs

Andy Mallon walks through a multi-statement table-valued function in Microsoft Dynamics CRM:

Look at all those table-valued function calls! Followed immediately by a really expensive hash match. My Spidey Sense started to tingle. What is fn_GetMaxPrivilegeDepthMask, and why is it being called 30 times? I bet this is a problem.

When you see “Table-valued function” as an operator in a plan, that actually means it’s a multi-statement table-valued function. If it were an inline table-valued function, it would get incorporated into the larger plan, and not be a black box.

Multi-statement table-valued functions are evil. Don’t use them. The Cardinality Estimator isn’t able to make accurate estimates. The Query Optimizer isn’t able to optimize them in the context of the larger query. From a performance perspective, they don’t scale.

Even though this TVF is an out-of-the-box piece of code from Dynamics CRM, my Spidey Sense tells me that it’s the problem.

That said, Joe Sack and team are working on making multi-statement TVFs faster in SQL Server 2017.  Whether that improvement will be enough to change Andy’s excellent advice, we’ll have to wait and see.
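
If the distinction is new to you, here is a minimal sketch of the two forms, using hypothetical dbo.Orders objects rather than anything from Dynamics CRM:

  -- Multi-statement TVF: a black box with a fixed (and usually wrong)
  -- cardinality estimate; appears as a "Table-valued function" operator.
  CREATE FUNCTION dbo.OrdersForCustomer_MSTVF (@CustomerId int)
  RETURNS @Result TABLE (OrderId int, OrderDate datetime2, Amount money)
  AS
  BEGIN
      INSERT INTO @Result (OrderId, OrderDate, Amount)
      SELECT OrderId, OrderDate, Amount
      FROM dbo.Orders
      WHERE CustomerId = @CustomerId;
      RETURN;
  END;
  GO

  -- Inline TVF: effectively a parameterized view; the optimizer expands it
  -- into the calling query and can estimate and optimize it normally.
  CREATE FUNCTION dbo.OrdersForCustomer_Inline (@CustomerId int)
  RETURNS TABLE
  AS
  RETURN
  (
      SELECT OrderId, OrderDate, Amount
      FROM dbo.Orders
      WHERE CustomerId = @CustomerId
  );
  GO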

Parameter Sniffing Explained

Bert Wagner looks at how parameter sniffing works, why it’s (mostly) a good thing, and how to gently explain to the server when it’s a bad thing:

All subsequent executions of that same query will go to the query cache to reuse that same initial query plan — this saves SQL Server time from having to regenerate a new query plan.

Note: A query with different values passed as parameters still counts as the “same query” in the eyes of SQL Server.

In the case of the examples above, the first time the query was executed was with the parameter for “Costa Rica”. Remember when I said this dataset was heavily skewed? Let’s look at some counts:

Check it out for a clear depiction of the topic.  One solution Bert doesn’t mention, but which I will sometimes use, is to declare local variables in a procedure and set their values equal to the input parameters.  That way, the optimizer doesn’t make any assumption about the value of the local variable.  But there are several ways to get around this when it’s an issue.
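
That local-variable trick looks something like the sketch below, with hypothetical object names rather than Bert’s code:

  CREATE PROCEDURE dbo.GetOrdersByCountry
      @Country nvarchar(60)
  AS
  BEGIN
      -- Copying the parameter into a local variable hides the sniffed value;
      -- the optimizer estimates from average density instead of the histogram
      -- entry for whichever value happened to come in first.
      DECLARE @LocalCountry nvarchar(60) = @Country;

      SELECT OrderId, OrderDate, Amount
      FROM dbo.Orders
      WHERE Country = @LocalCountry;
  END;
  GO

The trade-off is that every execution now gets the same “average” plan, which can itself be worse than a sniffed plan for the common values.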

The Risks Of Clearing The Procedure Cache

Erin Stellato explains two downsides to running DBCC FREEPROCCACHE or anything else which clears query plans:

Ideally, you should remove only what’s absolutely necessary.  Using DBCC FREEPROCCACHE is a sledgehammer approach and typically creates a spike in CPU as all subsequent queries need to have their plans re-generated.  Glenn gives examples on how to use each statement (and others) in his post Eight Different Ways to Clear the SQL Server Plan Cache, and I want to show you one more thing that happens when you clear a plan (or all plans) from cache.

For this demo script, I recommend running it against a TEST/DEV/QA environment because I am removing plans from cache which can adversely affect performance.

There are reasons to run these commands, but ideally, you should be as precise as possible in clearing plans out of the cache.
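
For instance, rather than flushing everything, you can look up the handle for the one offending plan and evict just that entry.  A sketch (the search text is a placeholder):

  -- Find the plan handle for the specific query you care about...
  DECLARE @PlanHandle varbinary(64);

  SELECT TOP (1) @PlanHandle = cp.plan_handle
  FROM sys.dm_exec_cached_plans AS cp
  CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
  WHERE st.text LIKE N'%SomeDistinctiveQueryText%';

  -- ...and clear only that plan instead of the entire cache.
  IF @PlanHandle IS NOT NULL
      DBCC FREEPROCCACHE (@PlanHandle);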

Prioritizing Rows In A Union

Daniel Hutmacher shows several ways to combine data from multiple sources, prioritized by source:

For the sake of simplicity, I’ll assume that our example data has a clustered index on the “primary key” that we want to use to determine which rows have already been included in the set – in our example, the primary key is (FirstName, LastName). The data I’m using is a fictional example, but here’s the gist:

  • #Employees has about 33 000 rows.
  • #Customers has about 44 000 rows.
  • #Passengers has about 500 000 rows.

The data is constructed in a way that these queries should return 530 000 rows, so we’re looking at some overlap but far from totally overlapping rows.

Example #492,806 that T-SQL is not a true fourth-generation language: how you write the query can matter greatly for performance.
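
One common shape for this kind of prioritized union is a chain of NOT EXISTS checks, roughly like the sketch below (using the same assumed key of FirstName, LastName; Daniel benchmarks several variants, and this is not necessarily the fastest one):

  SELECT FirstName, LastName, 1 AS SourcePriority
  FROM #Employees

  UNION ALL

  SELECT c.FirstName, c.LastName, 2
  FROM #Customers AS c
  WHERE NOT EXISTS (SELECT 1 FROM #Employees AS e
                    WHERE e.FirstName = c.FirstName AND e.LastName = c.LastName)

  UNION ALL

  SELECT p.FirstName, p.LastName, 3
  FROM #Passengers AS p
  WHERE NOT EXISTS (SELECT 1 FROM #Employees AS e
                    WHERE e.FirstName = p.FirstName AND e.LastName = p.LastName)
    AND NOT EXISTS (SELECT 1 FROM #Customers AS c
                    WHERE c.FirstName = p.FirstName AND c.LastName = p.LastName);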

SQL Server 2017 Finds Plan Regressions

Jovan Popovic shows off some automatic tuning functionality in SQL Server 2017:

Plan change regression happens when SQL Database changes a plan for some T-SQL query, and the new plan has worse performance than the previous one. SQL Server 2017 has an Automatic Tuning feature that enables you to easily find plan change regressions and fix them. In this post you will see a demo script that you can use to cause a plan change regression and manually fix it using the new sys.dm_db_tuning_recommendations view.

If you are not familiar with plan regressions and the new tuning recommendations in SQL Server 2017, I would recommend reading these two posts:

Those should be enough to understand the steps in this demo.

Our experience with plan regression recommendations has been uniformly positive so far.  Those tests have been in dev and QA environments, but we have yet to see a terrible recommendation.
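
If you want to see what the engine is suggesting before letting it act, a quick look at the view goes something like this (a sketch; the details column is a JSON document, and the path follows the documented example):

  SELECT
      reason,
      score,
      -- The T-SQL the engine would run to force the last known good plan.
      JSON_VALUE(details, '$.implementationDetails.script') AS force_plan_script
  FROM sys.dm_db_tuning_recommendations;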

Parameter Sniffing On Conditional Statements

Kendra Little explains that SQL Server will compile and cache plans using parameter values even for statements it never executes:

The first time that dbo.ReviewFlags is executed after the database comes online, it’s with an invalid parameter, like this:

  EXEC dbo.ReviewFlags @Flag = null;
  GO

This is caught by the IF block, hits the RAISERROR, and goes down to the THROW block, and the output is:

  Msg 50000, Level 11, State 1, Procedure ReviewFlags, Line 8 [Batch Start Line 70]
  @Flag must be a value between 1 and 5

But even though SQL Server didn’t execute the SELECT statement, it still compiled it. And it also cached the plan.

Read on to understand the trouble this can cause, as well as a few ways of solving the problem.  This is a special case of parameter sniffing problems, but the solutions are the same as in the general case.
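
The general shape of the problem procedure is something like the sketch below, with hypothetical table and column names rather than Kendra’s demo objects:

  CREATE PROCEDURE dbo.ReviewFlags
      @Flag int
  AS
  BEGIN
      BEGIN TRY
          IF @Flag IS NULL OR @Flag NOT BETWEEN 1 AND 5
              RAISERROR (N'@Flag must be a value between 1 and 5', 11, 1);

          -- Even when the check above raises and this SELECT never runs,
          -- SQL Server has already compiled it using the invalid @Flag value,
          -- and that plan goes into the cache.
          SELECT FlagId, FlagReason
          FROM dbo.Flags
          WHERE Flag = @Flag;
      END TRY
      BEGIN CATCH
          THROW;
      END CATCH;
  END;
  GO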

Open Query Store

William Durkin announces Open Query Store, a Query Store-like solution for pre-2016 SQL Server instances:

The first release was published at the end of June 2017 and provides a background collection of query execution statistics in a database of your choice. The v1.0 release supports SQL Server from 2008 to 2014 and all editions (Express, Standard, Enterprise). There is also a PowerShell installer for those that are so inclined, which will grab the code from GitHub and install OQS into a specified database.

There is also a custom report which can be installed in SSMS (2016 and 2017), which will display information that OQS has collected.

This is a really cool community solution if you’re stuck on an older version of SQL Server for a while.

Combinatorics With Joins

Dmitry Zaytsev explains the math behind why query plans can be so inefficient when dealing with a large number of joins:

Let’s talk about the sequence of table joins in detail. It is very important to understand that the possible number of table joins grows exponentially, not linearly. For example, there are only 2 possible ways to join 2 tables, and that number grows to 12 for 3 tables. Different join sequences can have different query costs, and the SQL Server optimizer must select the most optimal one. But when the number of tables is high, this becomes a resource-intensive task. If SQL Server were to go over all possible variants, such a query might never finish executing. That is why SQL Server never does this and always looks for a reasonably good plan, not the best plan. SQL Server always tries to reach a compromise between execution time and plan quality.
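
As a quick sanity check on those figures: the number of possible join trees for n tables is n! orderings of the tables multiplied by the Catalan number C(n-1) of binary tree shapes, so

  n = 2:  2! × C(1) =  2 × 1 = 2
  n = 3:  3! × C(2) =  6 × 2 = 12
  n = 4:  4! × C(3) = 24 × 5 = 120

and exhaustive enumeration becomes hopeless long before you reach a ten-table join.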

There are ways you can help the optimizer, and one of my favorite query tuning books was all about table selection.