Category: Query Tuning

Digging Into Batch Mode And Parameter Sniffing

Erik Darling has mixed news on the efficacy of using batch mode on rowstore as a way of eliminating problems arising from parameter sniffing:

SQL Server 2019 introduced batch mode over row store, which allows for batch mode processing to kick in on queries when the optimizer deems it cost effective to do so, and also to open up row store queries to the possibility of Adaptive Joins, and Memory Grant Feedback.

These optimizer tricks have the potential to help with parameter sniffing, since the optimizer can change its mind about join strategies at run time, and adjust memory grant issues between query executions.

But of course, the plan that compiles initially has to qualify to begin with. In a way, that just makes parameter sniffing even more frustrating.

Read on for both the good and the bad.
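
If you want to kick the tires yourself, a minimal sketch (the table name is hypothetical) is to enable the feature via compatibility level 150 and then compare the same query with batch mode explicitly disallowed:

ALTER DATABASE CURRENT SET COMPATIBILITY_LEVEL = 150;  -- enables batch mode on rowstore in SQL Server 2019
GO
-- Run once with the feature available...
SELECT COUNT_BIG(*) FROM dbo.SomeLargeTable;
GO
-- ...and once with it disallowed, then compare the two plans.
SELECT COUNT_BIG(*) FROM dbo.SomeLargeTable
OPTION (USE HINT ('DISALLOW_BATCH_MODE'));
GO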

Understanding Query Optimizer Timeouts

Joseph Pilov answers frequently asked questions about SQL Server’s query optimizer when it times out:

What Is Optimizer Timeout?

SQL Server uses a cost-based query optimizer. Therefore, it selects a query plan with the lowest cost after it has built and examined multiple query plans. One of the objectives of the SQL Server query optimizer (QO) is to spend a “reasonable time” in query optimization as compared to query execution. Therefore, QO has a built-in threshold of tasks to consider before it stops the optimization process. If this threshold is reached before QO has considered most, if not all, possible plans, then it has reached the Optimizer Timeout limit. An event is reported in the query plan as Time Out under “Reason For Early Termination of Statement Optimization.” It’s important to understand that this threshold isn’t based on clock time but on the number of possibilities considered. In current SQL QO versions, over half a million possibilities are considered before a timeout is reached.

Optimizer timeout is a mechanism designed into Microsoft SQL Server, and in many cases encountering it is not a factor affecting query performance. However, in some cases the SQL query plan choice may be affected by optimizer timeout, and thus performance could be impacted. When you encounter such issues, understanding the optimizer timeout mechanism and how complex queries can be affected in SQL Server can help you better troubleshoot and resolve your performance issue.

Read the whole thing.
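
If you want to find cached plans that hit this limit on your own instance, one approach (my sketch, not from the article) is to search the showplan XML for the TimeOut early-abort reason:

-- Can be expensive on a busy instance; scope the TOP accordingly.
WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT TOP (20)
    st.text,
    qp.query_plan
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE qp.query_plan.exist('//StmtSimple[@StatementOptmEarlyAbortReason = "TimeOut"]') = 1;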

Risks Of Using Resource Governor To Set Max DOP

Joe Obbish builds an example where using Resource Governor to cap MAXDOP can actively harm query performance:

I uploaded the query plan here if you want to look at it. This type of scenario can happen even without Resource Governor. For example, a compiled parallel query may be downgraded all the way to MAXDOP 1 if it can’t get enough parallel threads.

The query performs significantly worse than before, which hopefully is not a surprise. A single execution took 12860 ms of CPU time and 13078 ms of elapsed time. Nearly all of the query’s time is spent on the hash join for the index intersection, with a tempdb spill and the processing of additional rows both playing a role. The tempdb spill occurs because SQL Server expected the build side of the hash join to be reduced to 1213170 rows. The bitmap filtering does not occur so 8 million rows were sent to the build side instead.

Read the whole thing.
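
For context, the knob in question caps the degree of parallelism at the workload group level. A minimal sketch of the setup, with hypothetical pool, group, and login names:

-- Requires Enterprise Edition; the classifier function must live in master.
CREATE RESOURCE POOL limited_pool;
GO
CREATE WORKLOAD GROUP limited_group
WITH (MAX_DOP = 2)  -- hard cap on parallelism for sessions in this group
USING limited_pool;
GO
CREATE FUNCTION dbo.rg_classifier()
RETURNS SYSNAME
WITH SCHEMABINDING
AS
BEGIN
    IF SUSER_SNAME() = N'app_login'  -- hypothetical login to throttle
        RETURN N'limited_group';
    RETURN N'default';
END;
GO
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.rg_classifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;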

When Table Variables Have Realistic Estimates, Unrealistic Results May Occur

Milos Radivojevic wraps up a series on deferred compilation for table variables by looking at a hack which used to work but no longer does:

With this change, the query is executed very fast, with the appropriate execution plan:

SQL Server Execution Times:
CPU time = 31 ms,  elapsed time = 197 ms.

However, the LOOP hint does not affect estimations or the optimizer decisions related to them; it just replaces the join operators chosen by the optimizer with the Nested Loop Joins specified in the hint. SQL Server still expects billions of rows, and therefore the query got a memory grant of more than 2 GB for sorting data, although only 3,222 rows needed to be sorted. The hint helped the optimizer produce a good execution plan (which is great; otherwise this query would take very long and probably would never finish at all), but the high memory grant issue is not solved.

As you might guess, now it’s time for table variables.

This is an interesting article with workarounds and counter-workarounds to solve a nasty estimation problem.
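
The general shape of the hint-based workaround, plus a separate hint that can rein in the grant, looks roughly like this (all object names hypothetical, and not Milos’s exact repro):

DECLARE @tv TABLE (id INT PRIMARY KEY);
INSERT @tv (id) VALUES (1), (2), (3);

SELECT b.payload
FROM @tv AS t
JOIN dbo.BigTable AS b ON b.id = t.id  -- dbo.BigTable is hypothetical
ORDER BY b.payload
OPTION (LOOP JOIN,               -- forces nested loops; estimates are unchanged
        MAX_GRANT_PERCENT = 1);  -- separately caps the memory grant (SQL Server 2012 SP3+)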

Parameter Sniffing Issues With Table Variables

Milos Radivojevic points out the downside to table variable deferred compilation in SQL Server 2019:

Since the actual number of rows is significantly greater than in the call with the parameter ‘White’, you can see sort warnings here because this time the 1 MB memory grant was not sufficient for sorting. But the execution plan is exactly the same as for the first call.

Prior to SQL Server 2019, the execution plan for the second query in this stored procedure was always the same, regardless of the parameter used for the first invocation and thus for plan generation. Since the table variable has a cardinality of 1, all estimations and the execution plan will be the same. We can say that using a table variable in this stored procedure and passing it to the second query neutralizes the parameter sniffing effect. That does not mean this is good or bad for all executions (you saw sort warnings, and they are always bad), but the plan was stable; it did not change even after a failover or clearing the cache. If you usually call this stored procedure with highly selective parameters, you can consider this plan a good plan.

In SQL Server 2019, since the table variable can have a different cardinality on each first execution, this stored procedure is prone to parameter sniffing, and depending on the first execution’s parameter, you can have different execution plans:

This is a natural outcome and something we’d work with just like we would with a temp table or regular table in a stored procedure.
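
The pattern in question looks roughly like this (hypothetical schema). Under deferred compilation, the row count from the first call’s table variable is what gets baked into the cached plan:

CREATE OR ALTER PROCEDURE dbo.GetOrdersByColor
    @Color NVARCHAR(20)
AS
BEGIN
    DECLARE @ids TABLE (ProductId INT PRIMARY KEY);

    -- How many rows land here depends entirely on @Color.
    INSERT @ids (ProductId)
    SELECT ProductId FROM dbo.Products WHERE Color = @Color;

    -- Before SQL Server 2019 this compiled with a fixed 1-row estimate;
    -- with deferred compilation it sniffs the row count from the first call.
    SELECT o.OrderId, o.OrderDate
    FROM @ids AS i
    JOIN dbo.Orders AS o ON o.ProductId = i.ProductId
    ORDER BY o.OrderDate;
END;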

Enhancements To Actual Query Plans In SSMS 18

Brent Ozar points out a big enhancement to the way SQL Server Management Studio views actual query plans:

You can see the estimated and actual number of rows right there on the query plan just like live query plans! You no longer have to waste hours of your life hovering over different parts of the query plan in order to see where the estimated row counts veer off from the actual row counts.

This doesn’t require SQL Server 2019, either.

Read on for Brent’s thoughts on the matter.

Forcing A Plan Is A Temporary Solution

Erin Stellato explains when she forces plans—and that this is not a permanent solution to a performance problem:

Whether you force plans manually, or let SQL Server force them with the Automatic Plan Correction feature, I still view plan forcing as a temporary solution.  I don’t expect you to have plans forced for years, let alone months.  The life of a forced plan will, of course, depend on how quickly code and schema changes are ported to production.  If you go the “set it and forget it” route, theoretically a manually forced plan could get used for a very long time.  In that scenario, it’s your responsibility to periodically check to ensure that plan is still the “best” one for the query.  I would be checking every couple of weeks; once a month at most.  Whether or not the plan remains optimal depends on the tables involved in the query, the data in the tables, how that data changes (if it changes), other schema changes that may be introduced, and more.

Further, you don’t want to ignore forced plans, because there are cases where a forced plan won’t be used (you can use Extended Events to monitor this).  When you force a plan manually, forcing can still fail.  For example, if the forced plan uses an index and the index is dropped, or its definition is changed to the point where it cannot be used in the plan in the same manner, then forcing will fail.  Important note: if forcing fails, the query will go through normal optimization and compilation and it will execute; SQL Server does not want your query to fail!  If you’re forcing plans and are not familiar with the reasons that forcing can fail, note the last_force_failure_reason values listed for sys.query_store_plan.  If you have manually forced a plan for a query and forcing fails, the plan remains forced.  You have to manually un-force it to stop SQL Server from trying to use that plan.  As you can see, there are multiple factors related to plan forcing, which is why you don’t just force a plan and forget it.

There is much sound advice in this post.
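
For the monitoring piece, a starting point (a sketch against the documented Query Store views) looks like this:

-- Forced plans stay flagged even while forcing is failing.
SELECT p.query_id,
       p.plan_id,
       p.force_failure_count,
       p.last_force_failure_reason_desc
FROM sys.query_store_plan AS p
WHERE p.is_forced_plan = 1;

-- Un-force manually once the plan is no longer the right one
-- (substitute the query_id/plan_id from the query above):
-- EXEC sp_query_store_unforce_plan @query_id = 1, @plan_id = 1;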

What Prevents Plan Reuse?

Eric Blinn walks us through what might cause a query plan not to be reused:

There are several reasons that a query plan would need to be compiled again, but they can be boiled down to a few popular ones.

The first one is simple.  The plan cache is stored exclusively in memory.  If there is memory pressure on the instance, SQL Server will eject plans from cache that aren’t being used to make room for newer, more popular plans, or even to expand the buffer pool.  If a command associated with a plan that has been ejected from the plan cache is issued, it will need to be compiled again before it can execute.

Since SQL Server 2008, a system stored procedure, sp_recompile, has been available to clear a single stored procedure plan from the cache.  When executed with a valid stored procedure name as the only parameter, any plans for that procedure will be marked for recompilation, so that a future execution of that procedure will need to be compiled.  Running sp_recompile does not actually compile the procedure.  It simply invalidates any existing plans so that some future execution, which in theory may never come, will need to compile before executing.

Read on for additional causes.
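
You can watch both effects from the procedure stats DMV; here is a sketch with a hypothetical procedure name:

-- How often has the currently cached plan been reused since it was cached?
SELECT OBJECT_NAME(ps.object_id) AS procedure_name,
       ps.cached_time,
       ps.execution_count
FROM sys.dm_exec_procedure_stats AS ps
WHERE ps.database_id = DB_ID()
  AND ps.object_id = OBJECT_ID(N'dbo.SomeProc');  -- hypothetical

-- Invalidate the plan; the next execution pays the compile cost.
EXEC sp_recompile N'dbo.SomeProc';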

Capturing UDF CPU Times

Jonathan Kehayias notes an improvement in recent versions of SQL Server:

Microsoft has been enhancing the contents of the ShowplanXML output for SQL Server over the last few releases, and in SQL Server 2017 CU3 they introduced user-defined function (UDF) execution statistics into the QueryTimeStats node of the XML output. This was also backported to SQL Server 2016 in Service Pack 2 for actual execution plans. This feature allows you to definitively know the impact of scalar UDF execution as part of the performance characteristics of a query. However, there is an interesting catch associated with using this feature; you have to collect the actual execution plan using an up-to-date version of SQL Server Management Studio or using SentryOne Plan Explorer, or the information will be removed from the execution plan.

This is a good improvement.  Historically, user-defined function costs were hidden in SSMS, as you’d see only the cost of a single execution.  This made them look a lot more benign than they actually were.
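
To see the new numbers, any scalar UDF will do; here is a hypothetical one. Run the query with an actual execution plan in a current SSMS and look for UdfCpuTime and UdfElapsedTime in the QueryTimeStats node:

CREATE OR ALTER FUNCTION dbo.SlowScalar (@n INT)
RETURNS INT
AS
BEGIN
    DECLARE @i INT = 0;
    WHILE @i < 1000 SET @i += 1;  -- burn a little CPU per row
    RETURN @n + @i;
END;
GO
-- Capture the actual plan; QueryTimeStats should now report the UDF's share.
SELECT TOP (10000) dbo.SlowScalar(m.message_id)
FROM sys.messages AS m;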

Causing tempdb Spills

Kendra Little shows us a quick and easy way to cause tempdb spills:

Sometimes it’s useful to know how to cause a problem.

Maybe you’ve never encountered the problem, and want to get hands-on experience. Maybe you’re testing a monitoring tool, and want to see if a condition flags an alert. Maybe you’re testing out a new client tool, and want to see how it displays it.

I recently was going through some demos in SQL Operations Studio, and I found that a spill on a sort operator wasn’t causing a warning to visibly show in the graphic execution plan.

I wanted to file an issue on this and let the Ops Studio team know that would be helpful – but my demo code was somewhat complex and required restoring a rather large database. So I set up a quick code sample to cause a spill that could be run in any database.

It’s important to know how to cause problems if you want to make sure you’ve solved them correctly.
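
Kendra’s repro is in the post. Another quick way to get a sort spill in any database is the classic one-row table variable estimate (this sketch assumes a version before table variable deferred compilation, or that the feature is off):

-- @t is estimated at 1 row at compile time, so the ORDER BY gets a tiny
-- memory grant and spills to tempdb; look for the sort warning in the actual plan.
DECLARE @t TABLE (s NVARCHAR(4000));

INSERT @t (s)
SELECT TOP (250000) m.text
FROM sys.messages AS m;

SELECT s FROM @t ORDER BY s;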
