Press "Enter" to skip to content

Category: Query Tuning

What Prevents Plan Reuse?

Eric Blinn walks us through what might cause a query plan not to be reused:

There are several reasons why a query plan would need to be compiled again, but they boil down to a few common causes.

The first one is simple.  The plan cache is stored exclusively in memory.  If there is memory pressure on the instance, SQL Server will eject plans from cache that aren’t being used to make room for newer, more popular plans, or even to expand the buffer pool.  If a command associated with a plan that has been ejected from the plan cache is issued, it will need to be compiled again before it can execute.
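
If you want to see whether a given plan is still in cache, a quick check against the plan cache DMVs looks something like this (the LIKE filter is a placeholder to point at your own query text):

    SELECT cp.usecounts, cp.size_in_bytes, cp.objtype, st.text
    FROM sys.dm_exec_cached_plans AS cp
    CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
    WHERE st.text LIKE N'%distinctive text from your query%';  -- placeholder filter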

Since SQL Server 2008, a system stored procedure, sp_recompile, has been available to clear a single stored procedure plan from the cache.  When executed with a valid stored procedure name as the only parameter, any plans for that procedure will be marked for recompilation, so a future execution of that procedure will need to be compiled.  Running sp_recompile does not actually compile the procedure.  It simply invalidates any existing plans so that some future execution, which in theory may never come, will need to compile before executing.
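
Usage is a one-liner; the procedure name below is hypothetical:

    EXEC sys.sp_recompile N'dbo.usp_GetCustomerOrders';  -- marks any cached plans for this procedure as invalid
    -- The next execution of the procedure compiles a fresh plan.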

Read on for additional causes.

Capturing UDF CPU Times

Jonathan Kehayias notes an improvement in recent versions of SQL Server:

Microsoft has been enhancing the contents of the ShowplanXML output for SQL Server over the last few releases, and in SQL Server 2017 CU3 they introduced user-defined function (UDF) execution statistics into the QueryTimeStats node of the XML output. This was also backported to SQL Server 2016 in Service Pack 2 for actual execution plans. This feature allows you to definitively know the impact of scalar UDF execution as part of the performance characteristics of a query. However, there is an interesting catch associated with using this feature: you have to collect the actual execution plan using an up-to-date version of SQL Server Management Studio or SentryOne Plan Explorer, or the information will be removed from the execution plan.

This is a good improvement.  Historically, user-defined function costs were hidden in SSMS: you’d see the cost of a single execution, which made functions look a lot more benign than they actually were.
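
If you want to see the new numbers for yourself, here’s a minimal sketch (the function is made up; any scalar UDF will do).  Capture the actual execution plan in a current SSMS build and look for the UdfCpuTime and UdfElapsedTime attributes in the QueryTimeStats node of the plan XML:

    -- A deliberately wasteful scalar UDF (hypothetical):
    CREATE OR ALTER FUNCTION dbo.fn_PadLeft (@s sysname)
    RETURNS varchar(128)
    AS
    BEGIN
        RETURN RIGHT(REPLICATE('0', 64) + @s, 64);
    END;
    GO

    -- Run with "Include Actual Execution Plan" enabled:
    SELECT TOP (100000) dbo.fn_PadLeft(o.name) AS padded_name
    FROM sys.all_objects AS o
    CROSS JOIN sys.all_objects AS o2;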

Causing tempdb Spills

Kendra Little shows us a quick and easy way to cause tempdb spills:

Sometimes it’s useful to know how to cause a problem.

Maybe you’ve never encountered the problem, and want to get hands-on experience. Maybe you’re testing a monitoring tool, and want to see if a condition flags an alert. Maybe you’re testing out a new client tool, and want to see how it displays the problem.

I recently was going through some demos in SQL Operations Studio, and I found that a spill on a sort operator wasn’t causing a warning to visibly show in the graphical execution plan.

I wanted to file an issue on this and let the Ops Studio team know, since that would be helpful – but my demo code was somewhat complex and required restoring a rather large database. So I set up a quick code sample to cause a spill that could be run in any database.

It’s important to know how to cause problems if you want to make sure you’ve solved them correctly.
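
For a rough sketch of the same idea (not Kendra’s exact code): build a pile of rows, then sort them with an artificially small memory grant.  MAX_GRANT_PERCENT shipped in SQL Server 2016 and was backported to 2012 SP3 and 2014 SP2:

    -- The Sort operator should report a tempdb spill warning in the actual execution plan.
    WITH n AS
    (
        SELECT TOP (1000000)
               ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS rn
        FROM sys.all_columns AS a
        CROSS JOIN sys.all_columns AS b
    )
    SELECT rn, REPLICATE('x', 100) AS padding
    FROM n
    ORDER BY rn DESC
    OPTION (MAX_GRANT_PERCENT = 0.001);  -- cap the grant so the sort cannot fit in memory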

Getting An Accurate Query Execution Time

Grant Fritchey shares some tips on accurately measuring query execution time:

Before we get into all the choices and compare them, let’s baseline on methodology and a query to use.

Not sure why, but many people give me blowback when I say “on average, this query runs in X amount of time.” The feedback goes “You can’t say that. What if it was just blocking or resources or…” I get it. Running a query one time, changing something, running that query again, and declaring the problem solved is not what I’m suggesting. Notice the key phrase “on average.” I don’t run the query once. I run it several times, capture them all, then get the average of the durations.

The observer effect is in full force with a couple of the techniques Grant shows, but the rest are generally stable, which is a good thing.
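
One light-touch way to get that average is to run the query several times and then read the totals SQL Server already tracks (a sketch; the LIKE filter is a placeholder, and the time columns are in microseconds):

    SELECT qs.execution_count,
           qs.total_elapsed_time / qs.execution_count / 1000.0 AS avg_elapsed_ms,
           qs.total_worker_time  / qs.execution_count / 1000.0 AS avg_cpu_ms,
           st.text
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    WHERE st.text LIKE N'%distinctive text from your query%';  -- placeholder filter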

Ways To Check For Non-Existence

Brent Ozar shows two methods for finding records missing associated child records:

You’re writing a query, and you wanna check to see if rows exist in a table.

I’m using the free Stack Overflow database, and I wanna find all of the users who have not left a comment. The tables involved are:

  • In dbo.Users, the Id field uniquely identifies a user.
  • In dbo.Comments, there’s a UserId field that links to who left the comment.

A quick way to write it is:
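
    -- Reconstructed from the description below, not Brent's verbatim code:
    SELECT u.Id, u.DisplayName
    FROM dbo.Users AS u
    WHERE NOT EXISTS (SELECT * FROM dbo.Comments AS c
                      WHERE c.UserId = u.Id);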

And this works fine. When you read the query, you might think SQL Server would run that SELECT * FROM dbo.Comments query for every single row of the Users table – but it’s way smarter than that, bucko. It scans the Comments index first because it’s much larger, and then joins that to the Users table.

But this isn’t the only way to query these tables, and Brent shows how to tell which method works better.

Grouping And Aggregating: Optimizing The Optimizer

Itzik Ben-Gan shows an example of how you can nudge the SQL Server optimizer to the right answer by rewriting a query:

As you can see, the groups are obtained by scanning the index on the groups table, and the aggregate is obtained by applying a seek in the index on the main table. The higher the density of the grouping set, the more optimal this plan is compared to the default strategy for the grouped query.

Just like we did earlier for the default scan strategy, let’s estimate the number of logical reads and plan cost for the seeks strategy. The estimated number of logical reads is the number of reads for the single execution of the Index Scan operator that retrieves the groups, plus the reads for all of the executions of the Index Seek operator.

The estimated number of logical reads for the Index Scan operator is negligible compared to the seeks; still, it’s CEILING(1e0 * @numgroups / @rowsperpage). Take Query 4 as an example; say the index idx_sid fits about 600 rows per leaf page (actual number depends on actual shipperid values since the datatype is VARCHAR(5)). With 5 groups, all rows fit in a single leaf page. If you had 5,000 groups, they would fit in 9 pages.

Plus some love for the APPLY operator.  Read the whole thing.
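
The shape of that rewrite looks something like this (table and column names are assumptions based on the article’s shippers-and-orders example):

    -- Scan the small groups table, then seek into the big table once per group.
    SELECT s.shipperid, a.maxod
    FROM Sales.Shippers AS s
    CROSS APPLY (SELECT MAX(o.orderdate) AS maxod
                 FROM Sales.Orders AS o
                 WHERE o.shipperid = s.shipperid) AS a;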

Betteridge’s Law And Index Hints

Bert Wagner asks a question in his title: “Should You Use Index Hints?”  Those familiar with Betteridge’s Law of Headlines know the general answer already:

One way to “fix” a poorly performing plan is to use an index hint.  While we normally have no control over how SQL Server retrieves the data we requested, an index hint forces the query optimizer to use the index specified in the hint to retrieve the data (hence, it’s really more of a “command” than a “hint”).

Sometimes when I feel like I’m losing control, I like using an index hint to show SQL Server who’s boss.  I will also occasionally use index hints when debugging poorly performing queries, because they let me confirm whether an alternate index would improve performance without having to overhaul my code or change any other settings.

About the only place I consistently use index hints is with filtered indexes, where the combination of parameter sniffing and inexactitude in filters will convince the optimizer that the filtered index isn’t helpful when it really is.
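
For example (hypothetical table and index names), forcing a filtered index the optimizer keeps passing over:

    -- IX_Orders_Unshipped is assumed to be filtered on ShippedDate IS NULL.
    SELECT o.Id, o.OrderDate
    FROM dbo.Orders AS o WITH (INDEX (IX_Orders_Unshipped))
    WHERE o.ShippedDate IS NULL;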

When Using DBCC DROPCLEANBUFFERS

Dan Guzman shares words of wisdom about using DBCC DROPCLEANBUFFERS to test query performance in SQL Server:

One can make the argument that DBCC DROPCLEANBUFFERS might not be particularly valuable for testing. First, the storage engine in SQL Server Enterprise Edition (or Developer Edition, which is often used when testing) behaves differently with a cold cache versus a warm one. With a warm cache, a page not already in cache (e.g. index seek by primary key) will be fetched from disk using a single 8K page IO request as one expects. However, when the cache isn’t fully warmed up (Buffer Manager’s Target Pages not yet met), the entire 64K extent (8 contiguous 8K pages) is read for the single page request regardless of whether the adjacent pages are actually needed by the query. This has the benefit of warming the cache much more quickly than would otherwise occur, but given that the normal steady state of a production SQL Server is a warm cache, testing with a cold cache isn’t a fair comparison of different plans. More data than normal will be transferred from storage so timings may not be indicative of actual performance.

I don’t think I agree 100% with that argument, but I am sympathetic to it.  Still, Dan has great advice in this post.
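
If you do reach for it on a test box, the usual pattern is to checkpoint first, since DBCC DROPCLEANBUFFERS only evicts clean pages:

    -- Test environments only; this empties the buffer pool.
    CHECKPOINT;              -- write dirty pages to disk so they become clean
    DBCC DROPCLEANBUFFERS;   -- then evict the clean pages from cache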

Non-Blocking Aggregations

Daniel Hutmacher tilts at windmills:

It’s not entirely uncommon to want to group by a computed expression in an aggregation query. The trouble is, whenever you group by a computed expression, SQL Server considers the ordering of the data to be lost, and this will turn your buttery-smooth Stream Aggregate operation into a Hash Match (aggregate) or create a corrective Sort operation, both of which are blocking.

Is there anything we can do about this? Yes, sometimes, like when those computed expressions are YEAR() and MONTH(), there is. But you should probably get your nerd on for this one.

There are many ways to solve a problem, and sometimes the best method is indirect.
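
For reference, the problem shows up in queries shaped like this (hypothetical table): the computed GROUP BY keys hide the ordering of any index on OrderDate, so the plan falls back to a Hash Match or a Sort:

    SELECT YEAR(o.OrderDate)  AS OrderYear,
           MONTH(o.OrderDate) AS OrderMonth,
           COUNT(*)           AS OrderCount
    FROM dbo.Orders AS o
    GROUP BY YEAR(o.OrderDate), MONTH(o.OrderDate);  -- computed keys lose the index order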
