Press "Enter" to skip to content

Category: Query Tuning

Making Life Harder for Filter Operators

Erik Darling has a how-not-to guide for us:

We looked at a couple examples of when SQL Server might need to filter out rows later in the plan than we’d like, and why that can cause performance issues.

Now it’s time to look at a few more examples, because a lot of people find them surprising.

Read on for several examples. Also, because the bribes came through, I don’t mind shilling for Erik: check out 25 hours of recorded content for $100. I think Erik’s knowledge is worth at least $4 an hour. Maybe even $5.
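
As a rough illustration of the kind of pattern Erik covers (my own sketch, not one of his examples; the schema is hypothetical), a predicate that depends on a windowed value can’t be evaluated until after the window function has run, so it shows up as a Filter operator late in the plan rather than being applied at the index:

-- The rn = 1 predicate can only be checked after ROW_NUMBER() is computed,
-- so it typically appears as a Filter operator well to the left in the plan.
WITH LastOrders AS
(
    SELECT o.OrderID,
           o.CustomerID,
           ROW_NUMBER() OVER (PARTITION BY o.CustomerID
                              ORDER BY o.OrderDate DESC) AS rn
    FROM dbo.Orders AS o
)
SELECT OrderID, CustomerID
FROM LastOrders
WHERE rn = 1;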

Filter Operators in a Query

Erik Darling shares some introductory thoughts on the filter operator:

When we write queries that need to filter data, we tend to want to have that filtering happen as far over to the right in a query plan as possible. Ideally, data is filtered when we access the index.

Whether it’s a seek or a scan, or if it has a residual predicate, and if that’s all appropriate isn’t really the question.

In general, those outcomes are preferable to what happens when SQL Server is unable to do any of them for various reasons. The further over to the right in a query plan we can reduce the number of rows we need to contend with, the better.

Read on to see where Erik takes this and stay tuned for part 2.
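
To make “as far over to the right as possible” concrete, here is a minimal sketch of my own (table, column, and index are hypothetical): the first query can apply its predicate while accessing the index, whereas wrapping the column in a function prevents a seek, so every row has to be read and the predicate evaluated against each one.

-- Assumes an index on dbo.Orders(OrderDate).

-- Sargable: the predicate is applied as the index is accessed (a seek).
SELECT OrderID
FROM dbo.Orders
WHERE OrderDate >= '20200101'
  AND OrderDate <  '20210101';

-- Non-sargable: the function on the column rules out a seek, so rows are
-- read first and the predicate is checked against every one of them.
SELECT OrderID
FROM dbo.Orders
WHERE YEAR(OrderDate) = 2020;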

Visualizing Analysis Services Tasks with the Job Graph

Chris Webb is excited:

More details about it, and how it can be used, are in the samples here:

https://github.com/microsoft/Analysis-Services/tree/master/ASJobGraphEvents

The data returned by the Job Graph event isn’t intelligible if you look at the text it returns in Profiler. However, if you save a .trc file with Job Graph event data to XML, you can use the Python scripts in the GitHub repo to generate DGML diagrams that can be viewed in Visual Studio, plus Gantt charts embedded in HTML. Of course, to do this you’ll need to have Python installed; you’ll also need to have Visual Studio and its DGML editor installed (see here for details).

Read on to see how it looks and to get Chris’s thoughts on the matter.

Fun with Function Rewrites

Erik Darling reminds me why I hate user-defined functions in SQL Server:

At 23 seconds, this is probably unacceptable. And this is on SQL Server 2019, too. The function inlining thing doesn’t quite help us, here.

One feature restriction is this, so we uh… Yeah.

The UDF does not contain aggregate functions being passed as parameters to a scalar UDF

But we’re probably good query tuners, and we know we can write inline functions.

Read the whole thing, as this is not always straightforward.
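
To make the quoted restriction concrete, here is a hedged sketch of my own (the UDF and schema are invented): passing an aggregate as the argument to a scalar UDF blocks inlining, while aggregating first and formatting afterwards, or rewriting the logic as an inline table-valued function, sidesteps the problem.

-- A hypothetical scalar UDF.
CREATE OR ALTER FUNCTION dbo.FormatAmount (@amount money)
RETURNS varchar(30)
AS
BEGIN
    RETURN CONVERT(varchar(30), @amount, 1);
END;
GO

-- Blocks inlining: an aggregate is passed as the scalar UDF's parameter.
SELECT o.CustomerID,
       dbo.FormatAmount(SUM(o.TotalDue)) AS TotalSpent
FROM dbo.Orders AS o
GROUP BY o.CustomerID;

-- One possible rewrite: aggregate first, then format, skipping the UDF entirely.
SELECT s.CustomerID,
       CONVERT(varchar(30), s.TotalSpent, 1) AS TotalSpent
FROM
(
    SELECT CustomerID, SUM(TotalDue) AS TotalSpent
    FROM dbo.Orders
    GROUP BY CustomerID
) AS s;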

Adaptive Query Execution in Databricks

MaryAnn Xue and Allison Wang explain how Adaptive Query Execution works with Databricks:

One of the most important cost-based decisions made in the Spark optimizer is the selection of join strategies, which is based on the size estimation of the join relations. But since this estimation can go wrong in both directions, it can either result in a less efficient join strategy because of overestimation, or even worse, out-of-memory errors because of underestimation.

AQE offers a trouble-free solution here by switching to the faster broadcast hash join during execution time.

This is pretty similar to Adaptive Query Processing in SQL Server.
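
If you want to experiment with this, the behavior comes down to a couple of session settings; here is a minimal Spark SQL sketch (the threshold shown is simply the default, not a recommendation):

-- Turn on Adaptive Query Execution for the session.
SET spark.sql.adaptive.enabled=true;

-- With AQE on, a sort-merge join can be switched to a broadcast hash join
-- at runtime when a side turns out to be smaller than this threshold
-- (10 MB here, which is the default).
SET spark.sql.autoBroadcastJoinThreshold=10485760;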

Optimizing Common Table Expressions

Itzik Ben-Gan continues a series on common table expressions:

If you’re wondering why not use a much simpler solution with a grouped query and a HAVING filter, it has to do with the density of the shipperid column. The Orders table has 1,000,000 orders, and the shipments of those orders were handled by five shippers, meaning that, on average, each shipper handled 20% of the orders. The plan for a grouped query computing the maximum order date per shipper would scan all 1,000,000 rows, resulting in thousands of page reads. Indeed, if you highlight just the CTE’s inner query (we’ll call it Query 3) computing the maximum order date per shipper and check its execution plan, you will get the plan shown in Figure 3.

Read on for classic Itzik.
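
For reference, the grouped-query-plus-HAVING alternative Itzik mentions looks roughly like this (column names follow his usual sample database and the cutoff date is invented, so treat both as assumptions):

-- One pass over all 1,000,000 orders, then filter the grouped result.
SELECT shipperid, MAX(orderdate) AS maxorderdate
FROM dbo.Orders
GROUP BY shipperid
HAVING MAX(orderdate) < '20180101';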

Finding the Most Costly Statement in a Stored Procedure

Grant Fritchey takes us through one method of figuring out which statement you’re waiting on to finish when running a stored procedure:

A lot of stored procedures have multiple statements and determining the most costly statement in a given proc is a very common task. After all, you want to focus your time and efforts on fixing the things that cause you the most pain. You simply don’t have the time to tune every single statement in every single procedure. So, identifying the most costly statement is vital.

Happily, Extended Events are here to help.

Click through to see how you can use extended events to figure this out.
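
I don’t know the exact session Grant builds, but a minimal sketch of the idea looks something like this (the session name is mine, and in practice you would filter it down to a specific database or procedure):

-- Capture each statement that completes inside a stored procedure,
-- along with its text, so durations and reads can be compared afterwards.
CREATE EVENT SESSION ProcStatementCost ON SERVER
ADD EVENT sqlserver.sp_statement_completed
(
    ACTION (sqlserver.sql_text)
)
ADD TARGET package0.event_file (SET filename = N'ProcStatementCost');
GO

ALTER EVENT SESSION ProcStatementCost ON SERVER STATE = START;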

Diving Into the Window Spool Operator

Hugo Kornelis continues a series on execution plan operators:

The Window Spool operator is one of the four spool operators that SQL Server supports. Like other spool operators, it retains a copy of data it receives and can then return those rows as often as needed. The specific functionality of the Window Spool operator allows it to replay rows within a window, as defined in a ROWS or RANGE specification of an OVER clause.

Read on to see how these work, as well as a few differences from their spool brethren.
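
A quick way to see the operator for yourself (the schema is hypothetical): both of the window aggregates below produce a Window Spool, and the frame you choose influences how expensive it is.

-- Explicit, short ROWS frame: the spool can generally stay in memory.
SELECT OrderID,
       SUM(TotalDue) OVER (PARTITION BY CustomerID
                           ORDER BY OrderDate
                           ROWS BETWEEN 3 PRECEDING AND CURRENT ROW) AS RecentSum
FROM dbo.Orders;

-- No frame specified: the default is RANGE UNBOUNDED PRECEDING, which
-- generally pushes the Window Spool to the slower on-disk worktable.
SELECT OrderID,
       SUM(TotalDue) OVER (PARTITION BY CustomerID
                           ORDER BY OrderDate) AS RunningSum
FROM dbo.Orders;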

Making Use of Sort Rewinds: Closest Match

Paul White follows up on an article:

In When Do SQL Server Sorts Rewind? I described how most sorts can only rewind when they contain at most one row. The exception is in-memory sorts, which can rewind at most 500 rows and 16KB of data.

These are certainly tight restrictions, but we can still make use of them on occasion.

To illustrate, I am going to reuse a demo Itzik Ben-Gan provided in part one of his Closest Match series, specifically solution 2 (modified value range and indexing).

Click through for the explanation.
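
For context, “closest match” pairs each row in one table with the row in another whose value is nearest; one common way to phrase it (table names follow Itzik’s T1/T2 convention, but this is a generic sketch rather than the specific solution Paul tunes) is an APPLY with TOP (1):

-- For each row in T1, find the T2 row whose val is closest.
SELECT t1.keycol, t1.val, a.keycol AS closestkey, a.val AS closestval
FROM dbo.T1 AS t1
CROSS APPLY
(
    SELECT TOP (1) t2.keycol, t2.val
    FROM dbo.T2 AS t2
    ORDER BY ABS(t2.val - t1.val), t2.val, t2.keycol
) AS a;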
