Press "Enter" to skip to content

Category: Query Tuning

Fun with Query Tuning in SQL Server 2019

Erik Darling has just wrapped up a nice series on tackling a problem which looks like parameter sniffing but isn’t. Part 2 covers the issue:

This isn’t always the exact case, but generally speaking you’ll observe something along these lines.

It’s definitely not the case for what we’re going to be looking at this week.

This week is far more interesting.

That’s why it’s a monstrosity.

Part 3 digs in:

It’s not parameter sniffing, but it sure could feel like it.

– When the procedure compiles and runs with VoteTypeId 5, it runs for 12 minutes
– Other VoteTypeIds run well with the same plan that VoteTypeId 5 gets
– When VoteTypeId 5 runs with a “small” plan, it does okay at 10 seconds

Part 4 gives us a solution without using OPTIMIZE FOR MEDIOCRE:

This is what happens when we optimize for unknown. The density vector guess is 13,049,400.

That guess for Vote Types with very few rows ends up with a plan that has very high startup costs.

This version of the query will run for 13-17 seconds for any given parameter. That sucks in zero gravity.
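
For reference, the hint in question looks like the sketch below. This is a minimal example against the Stack Overflow-style Votes table the series uses, not Erik’s actual procedure; the procedure name and query are illustrative only.

    -- OPTIMIZE FOR UNKNOWN tells the optimizer to ignore the sniffed parameter
    -- value and cost the plan from the density vector (an average-based guess)
    CREATE OR ALTER PROCEDURE dbo.CountVotesByType
        @VoteTypeId INT
    AS
    BEGIN
        SELECT COUNT_BIG(*) AS vote_count
        FROM dbo.Votes AS v
        WHERE v.VoteTypeId = @VoteTypeId
        OPTION (OPTIMIZE FOR UNKNOWN);
    END;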

Part 5 looks into something which occasionally pops up with this query:

You see, there’s a mystery plan.

It only shows up once in a while, like Planet X. And when it does, we get bombarded by asteroids.

Just like when Planet X shows up.

I wouldn’t call it a good all-around plan, but it does something that we would want to happen when we run this proc for VoteTypeId 5.

Read on for an educational romp through the SQL Server 2019 optimizer.

Window Functions in Spark SQL

Jouko Virtanen walks us through window functions in Spark SQL:

When you think of windows in Spark you might think of Spark Streaming, but windows can be used on regular DataFrames. Window functions calculate an output value for every row of a DataFrame based on a group of rows. I have been working on optimizing some Spark code and have noticed a few places where the use of a window function eliminates the need for a join and speeds up the code. A common pattern where a window can be used to replace a join is when an aggregation is performed on a DataFrame and then the DataFrame resulting from the aggregation is joined to the original DataFrame. Let’s take a look at an example.
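
The pattern Jouko describes, sketched here in plain SQL rather than the Scala DataFrame API (the table and column names are made up):

    -- Join-based approach: aggregate, then join the result back to the rows
    SELECT o.customer_id, o.order_id, o.amount, t.total_amount
    FROM orders AS o
    JOIN (
        SELECT customer_id, SUM(amount) AS total_amount
        FROM orders
        GROUP BY customer_id
    ) AS t
        ON t.customer_id = o.customer_id;

    -- Window-based approach: same output, no join required
    SELECT customer_id, order_id, amount,
           SUM(amount) OVER (PARTITION BY customer_id) AS total_amount
    FROM orders;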

Read on for a few examples using the Scala flavor of Spark SQL.

Choosing an Algorithm for Table.Join in Power Query

Chris Webb continues a series on optimizing merge performance in Power Query:

The first thing to say is that if you don’t specify a join algorithm in the sixth parameter of Table.Join (it’s an optional parameter), Power Query will try to decide which algorithm to use based on some undocumented heuristics. The same thing also happens if you use JoinAlgorithm.Dynamic in the sixth parameter of Table.Join, or if you use the Table.NestedJoin function instead, which doesn’t allow you to explicitly specify an algorithm.

There are going to be some cases where you can get better performance by explicitly specifying a join algorithm instead of relying on JoinAlgorithm.Dynamic but you’ll have to do some thorough testing to prove it. From what I’ve seen there are lots of cases where explicitly setting the algorithm will result in worse performance, although there are enough cases where doing so results in better performance to make all that testing worthwhile.

That behavior is the same as if you decided to start writing INNER LOOP JOIN or INNER HASH JOIN hints in your queries. In the right spot, you may have knowledge the optimizer lacks and can improve performance, but much of the time the approach is too heavy-handed and ends up as a net degradation in performance.
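
For comparison, here is what that looks like in T-SQL; a minimal sketch with illustrative table names, not a recommendation:

    -- Let the optimizer choose the physical join type (usually the right call)
    SELECT o.order_id, c.customer_name
    FROM orders AS o
    INNER JOIN customers AS c ON c.customer_id = o.customer_id;

    -- Force a hash join: helpful in the right spot, heavy-handed everywhere else
    SELECT o.order_id, c.customer_name
    FROM orders AS o
    INNER HASH JOIN customers AS c ON c.customer_id = o.customer_id;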

One-Column Fusion with DAX

Phil Seamark has a performance tuning tip for DAX:

I wrote an article about an optimisation called DAX Fusion that attempts to fuse similar storage engine (SE) calls when it can. This article highlights an elegant DAX-based trick that works for a specific scenario, reducing the number of SE calls without relying on DAX Fusion. The difference between 1-Column Fusion and DAX Fusion is:

– DAX fusion in the engine works across multiple columns that have the same effective WHERE clause
– 1-Column fusion works by fusing multiple measures that all reference a single column, but with different WHERE clauses

Read on to learn how to switch around a bit of DAX to reduce the number of storage engine calls, as well as an example of one scenario in which it can come in handy.
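
Phil’s trick is pure DAX, but the intuition carries over to SQL: several separately filtered aggregates of one column can collapse into a single pass using conditional aggregation (illustrative names, plain SQL rather than DAX):

    -- Three separately filtered aggregates: potentially three passes/calls
    SELECT
        (SELECT SUM(amount) FROM sales WHERE region = 'East')  AS east_sales,
        (SELECT SUM(amount) FROM sales WHERE region = 'West')  AS west_sales,
        (SELECT SUM(amount) FROM sales WHERE region = 'North') AS north_sales;

    -- One pass over the single column, with the filters folded into CASE
    SELECT
        SUM(CASE WHEN region = 'East'  THEN amount END) AS east_sales,
        SUM(CASE WHEN region = 'West'  THEN amount END) AS west_sales,
        SUM(CASE WHEN region = 'North' THEN amount END) AS north_sales
    FROM sales;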

Optimizing Derived Table Expressions

Itzik Ben-Gan continues a series on table expressions:

As mentioned, next month I’ll get to the details of unnesting of derived tables. For now, suffice to say that SQL Server normally does apply an unnesting/inlining process to derived tables, where it substitutes the nested queries with a query against the underlying base tables. Well, I’m oversimplifying a bit. It’s not like SQL Server literally converts the original T-SQL query string with the derived tables to a new query string without those; rather SQL Server applies transformations to an internal logical tree of operators, and the outcome is that effectively the derived tables typically get unnested. When you look at an execution plan for a query involving derived tables, you don’t see any mention of those because for most optimization purposes they don’t exist. You see access to the physical structures that hold the data for the underlying base tables (heap, B-tree rowstore indexes and columnstore indexes for disk-based tables and tree and hash indexes for memory optimized tables).
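
As a reminder of the construct in question, a derived table is simply a table expression defined in the FROM clause; a minimal sketch with illustrative names:

    -- The derived table D typically gets unnested, so the execution plan only
    -- shows access to the underlying base table, not to D itself
    SELECT D.custid, D.order_count
    FROM (
        SELECT custid, COUNT(*) AS order_count
        FROM Sales.Orders
        GROUP BY custid
    ) AS D
    WHERE D.order_count > 10;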

This article deserves a careful reading.

Actual I/O Statistics in Execution Plans

Hugo Kornelis talks about a fairly recent property in execution plans:

There are two operators that read from the SalesOrderDetail table (or from indexes on that table). The top left operator is an Index Seek on one of the nonclustered indexes on SalesOrderDetail, and on the bottom input of the Nested Loops operator is a Clustered Index Scan that scans the clustered index on the same table.

So, now what? Which of the two is in this case the problem? Is each doing exactly 625 logical reads? Is one doing 50 and the other 1200? For the longest time, there was no way to find out. Sometimes you could make an educated guess by looking at the rest of the execution plan. Sometimes you can get an idea by running other queries with similar plans and check their logical reads (like in this case, you could run the subquery by itself and that would work). But none of these methods are really satisfactory.
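
For context, the classic tooling only gives totals per table, which is exactly the limitation Hugo describes; a minimal sketch with AdventureWorks-style names, not his exact query:

    -- SET STATISTICS IO reports logical reads as one total per table,
    -- with no way to split the reads between individual plan operators
    SET STATISTICS IO ON;

    SELECT soh.SalesOrderID, soh.OrderDate
    FROM Sales.SalesOrderHeader AS soh
    WHERE soh.SubTotal > (SELECT SUM(sod.LineTotal)
                          FROM Sales.SalesOrderDetail AS sod
                          WHERE sod.SalesOrderID = soh.SalesOrderID);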

Read on to see how the SQL Server team has addressed this.

Optimizing Power BI Merge Performance with Table.Join

Chris Webb shows us another way to optimize Power BI merge performance:

The SortMerge algorithm, last in the list above, is the focus of this blog post. I mentioned in my earlier posts that the reason that merge operations on non-foldable data sources are often slow is that both of the tables used in the merge need to be held in memory. There is an exception though: if you know that the data in the columns used to join the two tables is sorted in ascending order, you can use the Table.Join function and the SortMerge algorithm and the data from both sources can be streamed rather than held in memory, which in turn results in the merge being much faster.

That’s the same in the relational world: merge joins are the fastest option, assuming that your data is already sorted on the join keys.
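
A minimal T-SQL sketch of that idea (illustrative names; the hint is shown only to make the point, since the optimizer will normally pick a merge join on its own when both inputs arrive sorted):

    -- With both tables indexed (and therefore sorted) on order_id, a merge
    -- join can stream the two inputs without an extra sort or a hash table
    SELECT o.order_id, d.product_id, d.quantity
    FROM orders AS o
    INNER MERGE JOIN order_details AS d
        ON d.order_id = o.order_id;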

Power Query Performance Differences Based on When You Remove Columns

Chris Webb continues a series on optimizing Power Query merge performance:

In my last post I demonstrated how the size of a table affects the performance of Power Query merge operations on non-foldable data sources in Power BI. Specifically, I showed that removing columns from the tables involved in a merge before the merge took place improved performance. But does it matter when you remove the columns? Is it enough to only select the columns you need when you expand the nested table returned by a merge, for example, or just to remove columns after the merge step? So, today’s question is:

Does it make a difference to Power Query merge performance if you remove unwanted columns from your source tables in the step before the merge or in the step afterwards?

Read on for the result, as well as a pleasant surprise around Power BI’s capabilities.

Row Counts and Arrow Widths, Continued

Hugo Kornelis finishes a series on row counts and arrow widths with a look at Compute Scalar operators:

The Compute Scalar operator is probably the most common of all operators. I hardly ever see an execution plan that doesn’t have at least a few occurrences of this operator. The task of the Compute Scalar operator is a simple one: to use some of the data in its input and, based on that, produce new data that is then added as extra columns in its output.

Because of the simplicity of this task, the actual execution of that task is often done by one of the other operators in the execution plan, and the Compute Scalar operator itself doesn’t actually execute. A side effect is that it can’t track how many rows it processes, because it doesn’t process anything at all. The result is that, even in an execution plan with run-time statistics (aka “actual execution plan”), no run-time statistics will be reported by a Compute Scalar operator when all its computations are performed by other operators. (See also the note in this (retired) Books Online article).
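
A trivial illustration of the behavior (illustrative names, not Hugo’s example):

    -- The multiplication appears as a Compute Scalar operator in the plan, but
    -- the work is typically deferred to another operator, so in an actual plan
    -- the Compute Scalar itself reports no run-time statistics
    SELECT order_id, quantity * unit_price AS line_total
    FROM dbo.order_details;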

But then Hugo head-fakes us and shows us the real conclusion:

I already described, in a previous post, how sometimes the optimizer can create an execution plan that uses a Filter operator to evaluate a specific predicate, but then a post-optimization rewrite finds a way to push that predicate down into another operator, as a Predicate property, and then removes the Filter operator. When this happens with a bitmap filter, the Estimated Number of Rows is not adjusted, which can be quite confusing.

But for the issue in this post, the root cause is the same, yet the error surfaces completely differently.

This has been a fun series to read, showing how an extremely useful signal can nonetheless exhibit problems in many edge cases.
