Press "Enter" to skip to content

Category: Query Tuning

SARGability And Date Math

Erik Darling explains performance ramifications of date math operations:

Both plans are helped by our thoughtfully named index on the OrderDate column, though the one with cheaper estimated cost is the bottom one. Yes, I know this can sometimes lie, but we’re not hiding any functions in here that would throw things off horribly. If you’re concerned about the Nested Loops join, don’t worry too much. There is a tipping point where the Constant Scan operator is removed in favor of just index access. I didn’t inflate this table to find exact row counts for that, but I’ve seen it at work elsewhere. And, yeah, the second query will still be cheaper even if it also scans.

Which side of the equation you put a function on can make all the difference.
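
A minimal sketch of the pattern, against a hypothetical dbo.Orders table with an index on OrderDate:

    -- Non-SARGable: wrapping the column in date math hides it from the index,
    -- so the predicate must be evaluated against every row.
    SELECT COUNT(*)
    FROM dbo.Orders
    WHERE DATEADD(DAY, 30, OrderDate) >= GETDATE();

    -- SARGable: the same logic with the math moved to the other side of the
    -- comparison, leaving the bare column available for an index seek.
    SELECT COUNT(*)
    FROM dbo.Orders
    WHERE OrderDate >= DATEADD(DAY, -30, GETDATE());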

Creating Plan Guides

Brent Ozar has a couple of examples of creating plan guides to work around bad queries that use OPTIMIZE FOR UNKNOWN:

You can see the actual execution plan here, and hover your mouse over various parts of it to see the estimated vs actual rows.

The OPTIMIZE FOR UNKNOWN hint tells SQL Server to use the density vector rather than column statistics, so it only estimates that 1,865 rows will come back – when in actuality, 3.3mm rows come back. In performance tuning, that’s what we call a “bad thing,” since SQL Server ends up doing around 10mm page reads due to that key lookup. It would have been much more efficient to just do a clustered index scan.

Creating plan guides can be ugly business, but sometimes they’re the best solution.
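
For reference, the general shape of a plan guide that attaches a hint to a matching ad hoc statement looks like this (the table, statement, and hint are placeholders, and the @stmt text must match the submitted query exactly):

    EXEC sp_create_plan_guide
        @name            = N'PG_Orders_Recompile',  -- hypothetical name
        @stmt            = N'SELECT * FROM dbo.Orders WHERE CustomerID = 42;',
        @type            = N'SQL',                  -- ad hoc statement, not a module
        @module_or_batch = NULL,                    -- NULL: the batch is the statement itself
        @params          = NULL,
        @hints           = N'OPTION (RECOMPILE)';

Hints supplied through a plan guide replace the statement's existing OPTION clause, which is what makes this a workable way to neutralize an embedded OPTIMIZE FOR UNKNOWN.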

Query Folding In Power Query

Reza Rad discusses the performance implications of query folding in M:

I can’t start talking about the issue without explaining what Query Folding is, so let’s start with that. Query Folding means translating Power Query (M) transformations into the native query language of the data source (for example T-SQL). In other words, when you run a Power Query script on top of a SQL Server database, query folding will translate the M script into T-SQL statements and fetch the final results.

Click through for more details.  The advice here sounds pretty similar to what we get for optimizing Integration Services:  push as much of the heavy lifting onto well-optimized source queries as possible, particularly when it comes to filtering out rows.
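
As a rough illustration with made-up names: a filter step such as Table.SelectRows(Source, each [Amount] > 100) can fold into a native query like the one below, so rows are filtered at the source instead of after the whole table has been pulled into Power Query:

    -- Illustrative native query produced by a folded filter step
    SELECT [OrderID], [OrderDate], [Amount]
    FROM [dbo].[Orders]
    WHERE [Amount] > 100;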

Showplan XML Updates

Pedro Lopes discusses new properties which show up on execution plans:

This caused some friction, and so we are happy to announce that in the latest (October) release of SSMS, these attributes are now readily available in the Properties window of an Actual Execution Plan, in the scope of each operator.

Here’s an example with the same query as the March post, where we can see the Actual Number of Rows returned by each of the two threads executing the index seek.
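
One way to see those per-thread counters for yourself is to request the actual plan as XML; the query below is a placeholder, and the XML fragment is only a sketch of the shape to look for:

    SET STATISTICS XML ON;
    SELECT COUNT(*) FROM dbo.Orders;  -- any query of interest
    SET STATISTICS XML OFF;
    -- In the returned plan XML, each operator carries per-thread counters:
    -- <RunTimeInformation>
    --   <RunTimeCountersPerThread Thread="1" ActualRows="512" ... />
    --   <RunTimeCountersPerThread Thread="2" ActualRows="488" ... />
    -- </RunTimeInformation>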

Click through for more details.

Performance Of IN

Daniel Janik looks at how the IN clause behaves differently based on the number of items in the list:

As you can see the second query is much slower and the extra value in the IN caused late filtering. This is a limitation on some types of operators such as this clustered index scan.

There isn’t just a limitation at 15 input values. There’s also one at 64. On the 65th input value the list will be converted to a constant scan, which is then sorted and joined. Interestingly enough, the list in my demo query is already sorted ascending.
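
A sketch of the boundary Daniel describes, against a hypothetical dbo.Orders table:

    -- With 64 or fewer values, the IN list is evaluated as a predicate
    -- (a seek or a residual filter, depending on the access path).
    SELECT *
    FROM dbo.Orders
    WHERE OrderID IN (1, 2, 3, 4);    -- imagine 64 values here

    -- From the 65th value onward, the optimizer materializes the list as a
    -- Constant Scan, sorts it, and joins it to the table instead.
    SELECT *
    FROM dbo.Orders
    WHERE OrderID IN (1, 2, 3, 4, 5); -- imagine 65 values here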

Read the whole thing.

Comments And Performance

Aaron Bertrand looks at whether comments affect query performance:

Every once in a while, a conversation crops up where people are convinced that comments either do or don’t have an impact on performance.

In general, I will say that, no, comments do not impact performance, but there is always room for an “it depends” disclaimer.

I’m glad that there’s no appreciable difference.  Even if there were, good comments are valuable enough to make me not care about performance implications.  But fortunately, that’s not a trade-off I have to make.

Thinking About Compile Time

Jay Robinson has a post on compilation time and especially indexed views:

What I found was that worker time needed to compile these queries is indistinguishable from that needed to execute them. To show this, let’s look at an example in AdventureWorks2014. In this example, I’m going to create and execute two similar procedures. I’m also going to create a number of indexed views.

Why indexed views? I want to increase compile time significantly for this exercise, and a large number of indexed views can do that. From MSDN: “The query optimizer may use indexed views to speed up the query execution. The view does not have to be referenced in the query for the optimizer to consider that view for a substitution.” My thanks to oas_public on stackoverflow.com for that tip.

Indexed views come at a cost, as Jay shows.
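
For context, an indexed view is a schema-bound view materialized by a unique clustered index. A minimal example against a made-up dbo.Sales table (with Quantity assumed NOT NULL, since indexed views cannot SUM a nullable column):

    -- An aggregate indexed view must be schema-bound and include COUNT_BIG(*)
    CREATE VIEW dbo.vSalesByProduct
    WITH SCHEMABINDING
    AS
    SELECT ProductID,
           SUM(Quantity) AS TotalQty,
           COUNT_BIG(*)  AS RowCnt
    FROM dbo.Sales
    GROUP BY ProductID;
    GO

    -- The unique clustered index is what materializes (and maintains) the view
    CREATE UNIQUE CLUSTERED INDEX IX_vSalesByProduct
        ON dbo.vSalesByProduct (ProductID);

Each view like this is one more substitution candidate the optimizer may weigh during compilation, which is how a pile of them inflates compile time.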

Sort Spill To Level 15K

Paul White explains spill levels:

At this point, you might be wondering what combination of tiny memory grant and enormous data size could possibly result in a level 15,000 sort spill. Trying to sort the entire Internet in 1MB of memory? Possibly, but that is way too hard to demo. To be honest, I have no idea if such a genuinely high spill level is even possible in SQL Server. The goal here (a cheat, for sure) is to get SQL Server to report a level 15,000 spill.

The key ingredient is partitioning. Since SQL Server 2012, we have been allowed a (convenient) maximum of 15,000 partitions per object (support for 15,000 partitions is also available on 2008 SP2 and 2008 R2 SP1, but you have to enable it manually per database, and be aware of all the caveats).
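
For reference, partitioning is declared with a partition function and a partition scheme; a toy sketch with made-up boundary values (each boundary point adds a partition, up to the 15,000 maximum):

    CREATE PARTITION FUNCTION PF_Demo (int)
    AS RANGE RIGHT FOR VALUES (100, 200, 300);  -- N boundaries yield N + 1 partitions

    CREATE PARTITION SCHEME PS_Demo
    AS PARTITION PF_Demo ALL TO ([PRIMARY]);    -- map every partition to one filegroup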

Read the whole thing.

Understanding Query Durations

Kendra Little explains some of the intricacies behind query durations:

I typically look at the ‘CPU time’ metric when tuning instead of ‘elapsed time’ (duration). This can work well for tuning because you’re measuring how much more efficient you made the query in terms of CPU cycles.

But ‘CPU time’ isn’t perfect, and it can get a little weird for reporting results to users, because:

  • If the query uses parallelism, CPU time can be higher than the duration — which may make the query seem “slower” than it actually is to anyone reading a report

  • ‘elapsed time’ includes all the time that it takes to display the results in Management Studio, which is probably a different duration than it would take to return the results to an application server. If you’re just returning a few rows, this may be negligible, but once it gets into the thousands of rows, it can be very noticeable.
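
To see both numbers side by side, a quick sketch (the query and table are placeholders):

    SET STATISTICS TIME ON;
    SELECT COUNT(*) FROM dbo.Orders;
    SET STATISTICS TIME OFF;
    -- Typical Messages output:
    --   SQL Server Execution Times:
    --     CPU time = 1200 ms, elapsed time = 400 ms.
    -- CPU time exceeding elapsed time is a telltale sign the query went parallel.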

Moral of the story:  also use SQL Sentry Plan Explorer…

Specifying Columns In Entity Framework

Richie Rump shows how to limit the number of columns returned in an Entity Framework query:

This one’s a bit more tricky but let’s walk through it. We’re getting data from the Posts table where the Tags column equals “<sql-server>” and selecting every column from both the Posts and PostTags tables. We can tell because there are no specified properties in the Select. Even though this statement looks more complex it’s only three lines and looks somewhat like a SQL statement. But it’s really a LINQ (Language Integrated Query) statement, specifically a LINQ to Entities statement. This LINQ statement will be translated into this SQL statement:
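
To make the contrast concrete, here is roughly the shape of the generated SQL without and with a projection; the column names and aliases below are illustrative rather than EF’s verbatim output:

    -- No projection: every mapped column from both tables comes back
    SELECT [p].[Id], [p].[Title], [p].[Body], [p].[Tags],
           [pt].[PostId], [pt].[TagId]
    FROM [Posts] AS [p]
    INNER JOIN [PostTags] AS [pt] ON [p].[Id] = [pt].[PostId]
    WHERE [p].[Tags] = N'<sql-server>';

    -- With .Select(x => new { x.Id, x.Title }), only those columns are fetched
    SELECT [p].[Id], [p].[Title]
    FROM [Posts] AS [p]
    INNER JOIN [PostTags] AS [pt] ON [p].[Id] = [pt].[PostId]
    WHERE [p].[Tags] = N'<sql-server>';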

Read the whole thing.
