
Category: Query Tuning

Join Algorithm Selection in Spark

The Hadoop in Real World team takes us through the selection criteria for join types:

There are several factors Spark takes into account before deciding on the type of join algorithm to use to join datasets at runtime.

Spark has the following 5 algorithms to choose from –

1. Broadcast Hash Join
2. Shuffle Hash Join
3. Shuffle Sort Merge Join
4. Broadcast Nested Loop Join
5. Cartesian Product Join (a.k.a Shuffle-and-Replicate Nested Loop Join)

Read on to learn which join types are supported in which circumstances, as well as rules of precedence.
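
If you want to force the issue yourself, Spark SQL also supports join hints. Here is a minimal sketch (the orders/customers tables and columns are made up for illustration):

-- The BROADCAST hint asks Spark to prefer a broadcast hash join for this join;
-- MERGE, SHUFFLE_HASH, and SHUFFLE_REPLICATE_NL request the other strategies.
SELECT /*+ BROADCAST(d) */
       f.order_id,
       d.customer_name
FROM orders AS f
JOIN customers AS d
  ON f.customer_id = d.customer_id;

Absent hints, factors such as the configured broadcast threshold (spark.sql.autoBroadcastJoinThreshold) and whether the join is an equi-join feed into the runtime choice.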


Sorting Pre-Sorted Data

Daniel Hutmacher has an idea:

Whenever SQL Server needs to sort a data stream, it will use the Sort operator to reorder the rows of the stream. Sorting data is an expensive operation because it entails loading part or all of the data into memory and shifting that data back and forth a couple of times. The only time SQL Server doesn’t sort the data is when it already knows the data to be ordered correctly, like when it has already passed a Sort operator or it’s reading from an appropriately sorted index.

But what happens if the data is ordered correctly, but SQL Server doesn’t know about it? Let’s find out.

Click through for the answer.
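
As a quick illustration of the "already knows the data to be ordered" case, here is a minimal sketch with a hypothetical table: an ORDER BY that matches the clustered index key needs no Sort operator, while ordering by an unindexed column does.

-- Hypothetical table, clustered on OrderDate.
CREATE TABLE dbo.Orders
(
    OrderID   int           NOT NULL,
    OrderDate datetime2(0)  NOT NULL,
    Amount    decimal(10,2) NOT NULL,
    INDEX CX_Orders_OrderDate CLUSTERED (OrderDate)
);

-- Reads the clustered index in order: no Sort operator in the plan.
SELECT OrderID, OrderDate, Amount
FROM dbo.Orders
ORDER BY OrderDate;

-- SQL Server has no ordering guarantee on Amount, so the plan adds a Sort.
SELECT OrderID, OrderDate, Amount
FROM dbo.Orders
ORDER BY Amount;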


More Number Series Generators

Itzik Ben-Gan continues a series:

This is the second part in a series about solutions to the number series generator challenge. Last month I covered solutions that generate the rows on the fly using a table value constructor with rows based on constants. There were no I/O operations involved in those solutions. This month I focus on solutions that query a physical base table that you pre-populate with rows. For this reason, beyond reporting the time profile of the solutions like I did last month, I’ll also report the I/O profile of the new solutions. Thanks again to Alan Burstein, Joe Obbish, Adam Machanic, Christopher Ford, Jeff Moden, Charlie, NoamGr, Kamil Kosno, Dave Mason, John Nelson #2 and Ed Wagner for sharing your ideas and comments.

Read on for three more solutions, as well as a re-evaluation of the solutions in the first article.
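
For reference, the general shape of the physical-table approach (a generic numbers-table sketch, not one of Itzik's specific solutions) looks something like this:

-- One-time setup: a physical table holding 1 through 1,000,000.
CREATE TABLE dbo.Nums (n int NOT NULL PRIMARY KEY);

INSERT INTO dbo.Nums (n)
SELECT TOP (1000000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
FROM sys.all_columns AS a
CROSS JOIN sys.all_columns AS b;

-- At query time, generating @low through @high is a range scan plus arithmetic.
DECLARE @low int = 11, @high int = 20;

SELECT n + @low - 1 AS n
FROM dbo.Nums
WHERE n <= @high - @low + 1
ORDER BY n;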


Performance Comparison of ISNULL and COALESCE

Erik Darling notes the edge cases where ISNULL() can be faster than COALESCE():

Sometimes there are very good reasons to use either coalesce or isnull, owing to them having different capabilities, behaviors, and support across databases.

But isnull has some particular capabilities that are interesting, despite its limitations: only two arguments, specific to SQL Server, and uh… well, we can’t always get three reasons, as a wise man once said.

There is one thing that makes isnull interesting in certain scenarios. Let’s look at a couple.

Read the whole thing. You (hopefully) won’t be in this situation often, but if you do happen to find yourself in it for whatever reason (and you can’t re-write the query to something better), it’s good to keep this in mind.
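
One difference worth knowing (documented behavior, though not necessarily the exact scenario in Erik's post): because COALESCE expands to a CASE expression, a scalar subquery passed as its first argument can be evaluated twice, while ISNULL evaluates it once. The tables below are hypothetical.

-- COALESCE: the subquery may run once for the IS NOT NULL test and again
-- to return the value.
SELECT COALESCE((SELECT MAX(o.Amount)
                 FROM dbo.Orders AS o
                 WHERE o.CustomerID = c.CustomerID), 0) AS MaxAmount
FROM dbo.Customers AS c;

-- ISNULL: the subquery is evaluated once.
SELECT ISNULL((SELECT MAX(o.Amount)
               FROM dbo.Orders AS o
               WHERE o.CustomerID = c.CustomerID), 0) AS MaxAmount
FROM dbo.Customers AS c;

-- Other differences: ISNULL takes exactly two arguments and returns the data
-- type of the first; COALESCE takes two or more and uses data type precedence.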


Using Hints Instead of Trace Flags

Taiob Ali has a set of hints to use instead of setting trace flags:

Introduced in SQL Server 2016 (13.x) SP1 and Azure SQL Database is the USE HINT option. At the time of writing this article, there are 23 of those. You can see an up-to-date list by querying the dynamic management view sys.dm_exec_valid_use_hints.

With the introduction of this feature, we can replace some of the trace flags with hints. I personally like hints over trace flags (if I have to choose between two devils). That way, someone reading the code has some idea what the hint is about instead of having to remember trace flag numbers.

I agree with Taiob on that sentiment.
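
For example (the table and column names below are placeholders), you can list the supported hints and then apply one at the query level rather than flipping a trace flag server-wide:

-- See which USE HINT values your instance supports.
SELECT name
FROM sys.dm_exec_valid_use_hints;

-- Query-level equivalent of trace flag 4199 (optimizer hotfixes).
SELECT SomeColumn
FROM dbo.SomeTable
OPTION (USE HINT ('ENABLE_QUERY_OPTIMIZER_HOTFIXES'));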


The Performance of sys.partitions

Aaron Bertrand tries to disentangle a query:

This question was posted to #sqlhelp by Jake Manske, and it was brought to my attention by Erik Darling.

I don’t recall ever having a performance issue with sys.partitions. My initial thought (echoed by Joey D’Antoni) was that a filter on the data_compression column should avoid the redundant scan, and reduce query runtime by about half. However, this predicate doesn’t get pushed down, and the reason why takes a bit of unpacking.

Click through for the results of Aaron’s research.
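
For context, this is the general shape of the pattern under discussion (illustrative only, not necessarily the exact query from the post): filtering sys.partitions on its data_compression column.

-- Which partitions use any form of compression?
SELECT OBJECT_NAME(p.object_id) AS table_name,
       p.index_id,
       p.partition_number,
       p.data_compression_desc
FROM sys.partitions AS p
WHERE p.data_compression <> 0;  -- 0 = NONE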


Faster String Concatenation with SQL Server

Steve Stedman has some tips for people who need to combine strings:

What the previous example shows is that the longer the string gets, the slower the concatenation is. So instead we declare a second VARCHAR(MAX) variable called @stringBuilder, and each time through the loop we concatenate to that, then every thousandth time through the loop we take the @stringBuilder variable and concatenate it onto the @bigString, then clear out the @stringBuilder variable. This keeps the @stringBuilder variable relatively short, and reduces the number of concatenations to the @bigString to roughly 1/1000th the original.

Click through for a demo of the process. I don’t think I’ve been in too many situations where string concatenation was a performance killer in SQL Server, but it’s good to know.
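
Here is a rough sketch of the buffering pattern Steve describes (variable names follow his description; the loop body and batch size are illustrative):

DECLARE @bigString     varchar(max) = '',
        @stringBuilder varchar(max) = '',
        @i             int = 1;

WHILE @i <= 100000
BEGIN
    -- Concatenate onto the short buffer instead of the big string.
    SET @stringBuilder = @stringBuilder + 'some text ';

    -- Every 1,000th pass, flush the buffer into the big string and reset it.
    IF @i % 1000 = 0
    BEGIN
        SET @bigString = @bigString + @stringBuilder;
        SET @stringBuilder = '';
    END;

    SET @i += 1;
END;

-- Flush whatever remains in the buffer.
SET @bigString = @bigString + @stringBuilder;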


Using sp_prepare with Plan Guides

Aaron Bertrand tries something different:

There are features many of us shy away from, like cursors, triggers, and dynamic SQL. There is no question they each have their use cases, but when we see a trigger with a cursor inside dynamic SQL, it can make us cringe (triple whammy).

Plan guides and sp_prepare are in a similar boat: if you saw me using one of them, you’d raise an eyebrow; if you saw me using them together, you’d probably check my temperature. But, as with cursors, triggers, and dynamic SQL, they have their use cases. And I recently came across a scenario where using them together was beneficial.

Read on to see the method to this madness.
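
If sp_prepare is new to you, the basic pattern looks like this (a generic sketch with a hypothetical table; the plan guide half of the story is in Aaron's post):

DECLARE @handle int;

-- Prepare a parameterized statement and get back a handle.
EXEC sys.sp_prepare
     @handle OUTPUT,
     N'@CustomerID int',
     N'SELECT OrderID, Amount FROM dbo.Orders WHERE CustomerID = @CustomerID;';

-- Reuse the prepared plan with different parameter values.
EXEC sys.sp_execute @handle, 42;
EXEC sys.sp_execute @handle, 7;

-- Release the handle when finished.
EXEC sys.sp_unprepare @handle;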


Don’t Fear the tempdb

Erik Darling puts his pants on one leg at a time and once his pants are on, he makes gold records:

One persistent idea is that tempdb is something to be avoided. Either because it was “slow” or to avoid contention.

Granted, if a query has been around long enough, these may have been valid concerns at some point. In general though, temp tables (the # kind, not the @ kind) can be quite useful when query tuning.

Erik is absolutely right in this post. Ceteris paribus I’d rather not directly use tempdb because I’d prefer one query over multiple queries. But once performance comes into question, working on smaller subsets of data one step at a time will typically give you at least an acceptable solution.
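
A sketch of what that looks like in practice (hypothetical tables; the point is the shape, not the specific query):

-- Step 1: materialize a small, filtered subset into a temp table.
SELECT o.OrderID, o.CustomerID, o.Amount
INTO #RecentOrders
FROM dbo.Orders AS o
WHERE o.OrderDate >= DATEADD(DAY, -30, GETDATE());

-- An index on the temp table can help the follow-up join.
CREATE CLUSTERED INDEX cx ON #RecentOrders (CustomerID);

-- Step 2: join and aggregate against the smaller subset.
SELECT c.CustomerName, SUM(r.Amount) AS Total
FROM #RecentOrders AS r
JOIN dbo.Customers AS c
  ON c.CustomerID = r.CustomerID
GROUP BY c.CustomerName;

DROP TABLE #RecentOrders;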
