SQL Server has three physical join algorithms: hash, nested loops, and merge. An adaptive join lets SQL Server choose automatically, on the fly, between the first two: hash (HM) and nested loops (NL).
NL has two join strategies: naive nested loops join (the inner loop scans the whole inner table or index) and index nested loops join (an index on the join column of the inner table is used to find the necessary rows for each outer row; this is also called Nested Loops Apply). Typically, the second one performs very well if you have a fairly small input on the outer side and a large, indexed input on the inner side.
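As a rough sketch of that apply pattern (the table and index names here are hypothetical, not from the post), the inner side is probed once per outer row via an index seek:

```sql
-- Small outer input (one day of Orders) applied against a large inner table.
-- With an index on the inner table's join column, SQL Server can do one
-- index seek per outer row instead of scanning OrderLines repeatedly.
-- CREATE INDEX IX_OrderLines_OrderID ON dbo.OrderLines (OrderID);

SELECT o.OrderID, ol.ProductID, ol.Quantity
FROM dbo.Orders AS o
JOIN dbo.OrderLines AS ol
    ON ol.OrderID = o.OrderID
WHERE o.OrderDate = '20171211';   -- keeps the outer side small
```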
HM is more universal and uses hash algorithms to match rows, so no indexes are necessary. You may refer to my blog post Hash Join Execution Internals for more details.
Adaptive Join starts execution as a Hash Join. It consumes all of the build-phase input and compares the row count against the adaptive join threshold: if the number of rows is greater than or equal to the threshold, it continues as a hash join; if it is less, it switches to NL.
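As a minimal illustration (assuming SQL Server 2017+ and hypothetical table names; in the initial release a columnstore index must be involved for an adaptive join to appear):

```sql
-- With the actual execution plan enabled, the Adaptive Join operator exposes
-- "Adaptive Threshold Rows". If the build input (the dbo.Sales rows surviving
-- the WHERE clause here) reaches that threshold, the plan runs as a hash join;
-- below it, the operator switches to nested loops at runtime.
SELECT c.Name, SUM(s.Amount) AS Total
FROM dbo.Sales AS s            -- has a clustered columnstore index
JOIN dbo.Customers AS c
    ON c.CustomerID = s.CustomerID
WHERE s.SaleDate >= '20171201'
GROUP BY c.Name;
```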
If you want to get a better understanding of how adaptive joins work, Dmitry’s post is a great start.
If you’re an experienced SQL tuner, you’ll notice some issues with this statement. First off, the query has not one but two table variables. It’s generally better to use temp tables, because table variables don’t have good statistics by default. Secondly, the statement uses MERGE. The MERGE statement has had more than its fair share of issues. See Aaron Bertrand’s post “Use Caution with SQL Server’s MERGE Statement” for more details on those issues.
But that got me wondering: why would the EF team use SQL features that perform so poorly? So I decided to take a closer look at the SQL statement. Just so you know, the code that was used to generate the SQL saves three entities (Katana, Kama, and Tessen) to the database in a batch. (Julie used a Samurai theme, so I just continued with it.)
Yeah…I’m not liking the MERGE statement very much here.
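For reference, one commonly recommended alternative to a single MERGE is separate UPDATE and INSERT statements inside a transaction. This is only a sketch; the Samurais/#Staged names are invented for illustration, not taken from the EF-generated SQL:

```sql
-- Upsert without MERGE: update existing rows first, then insert the rest.
BEGIN TRANSACTION;

UPDATE t
SET    t.Name = s.Name
FROM   dbo.Samurais AS t
JOIN   #Staged AS s
       ON s.Id = t.Id;

INSERT dbo.Samurais (Id, Name)
SELECT s.Id, s.Name
FROM   #Staged AS s
WHERE  NOT EXISTS (SELECT 1 FROM dbo.Samurais AS t WHERE t.Id = s.Id);

COMMIT;
```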
Erik Darling continues his adaptive joins exploration with two more posts. First, how local variables can affect the query plan:
The easiest way to look at this is to compare Adaptive Joins with literal values to the same ones using local variables. The results are a little… complicated.
Here are three queries with three literal values. In my copy of the Super User database (the largest Stack Overflow sub-site), I’ve made copies of all the tables and added Clustered ColumnStore indexes to them. That’s the only way to get Adaptive Joins at this point — Column Store has to be involved somewhere along the line.
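If you want to reproduce that setup, making a columnstore copy of a table looks roughly like this (names are illustrative, not Erik’s actual script):

```sql
-- Copy the Users table, then add a clustered columnstore index to the copy.
SELECT *
INTO dbo.Users_CCI
FROM dbo.Users;

CREATE CLUSTERED COLUMNSTORE INDEX CCI_Users
    ON dbo.Users_CCI;
```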
The last day of data in this dump is from December 11. When I query the data, I’m looking at the last 11 days of data, the last day of data, and then a day where there isn’t any data.
The queries with non-SARGable predicates on the Users table used Adaptive Joins.
The queries with non-SARGable predicates on the Posts table did not.
Now, there is an Extended Events… er… event to track this, called adaptive_join_skipped; however, it didn’t seem to log any information for the queries that didn’t get Adaptive Joins.
Bummer! But, if I had to wager a guess, it would be that this happens because there’s no alternative Index Seek plan for the Posts table with those predicates. Their non-SARGableness takes that choice away from the optimizer, and so Adaptive Joins aren’t a choice. The Users table is going to get scanned either way — that’s the nature of ColumnStore indexes, so it can withstand the misery of non-SARGable predicates in this case and use the Adaptive Join.
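To make the distinction concrete, here is a sketch of the kind of predicate pair in question (the column choice is an assumption based on the Stack Overflow schema, not the post’s exact queries):

```sql
-- Non-SARGable: the function around the column rules out an index seek,
-- so there is no seek-based nested loops alternative for the optimizer.
SELECT u.Id
FROM dbo.Users AS u
WHERE ISNULL(u.Age, 0) > 21;

-- SARGable: the bare column can support an index seek, leaving the
-- optimizer both join strategies to adapt between.
SELECT u.Id
FROM dbo.Users AS u
WHERE u.Age > 21;   -- NULL ages fail this comparison anyway, so results match
```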
Two more good posts in Erik’s series, and both definitely worth reading.
Joe states in the article: “MSTVFs have a fixed cardinality guess of ‘100’ in SQL Server 2014 and SQL Server 2016, and ‘1’ for earlier versions. Interleaved execution will help workload performance issues that are due to these fixed cardinality estimates associated with multi-statement table valued functions.”
This is exactly what I saw; the screenshot below shows just one of many tests that I carried out.
Read the whole thing for more details.
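For context, a minimal multi-statement TVF affected by this looks something like the following (a sketch with made-up names, not the post’s actual function):

```sql
-- Multi-statement TVF: the body populates a table variable, and before
-- interleaved execution the optimizer guesses its output at a fixed
-- 100 rows (1 row prior to SQL Server 2014).
CREATE OR ALTER FUNCTION dbo.GetRecentPosts (@Days int)
RETURNS @Result TABLE (PostId int, CreationDate datetime)
AS
BEGIN
    INSERT @Result (PostId, CreationDate)
    SELECT Id, CreationDate
    FROM dbo.Posts
    WHERE CreationDate >= DATEADD(DAY, -@Days, GETDATE());
    RETURN;
END;
GO

-- Under database compatibility level 140, interleaved execution pauses
-- optimization, runs the TVF, and re-optimizes with the real row count.
SELECT f.PostId
FROM dbo.GetRecentPosts(30) AS f;
```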
Erik Darling has a couple blog posts getting deeper into Adaptive Join Optimizations in SQL Server 2017. First, Erik discusses the basics:
You see, in every plan, both possible paths the optimizer could have chosen are shown. Right now it’s limited to just two choices: Nested Loops and Hash Join.
Just guessing that Merge Joins weren’t added because there would have been additional considerations around the potential cost of a Sort operation to get the data in order.
Be sure to read Brent’s comment that in the initial release, it will just support columnstore indexes. Then, Erik talks about execution plan details:
Some points of interest:
- Actual Join Type: doesn’t tell you whether it chose Hash or Nested Loops
- Estimated Join Type: Probably does
- Adaptive Threshold Rows: If the number of rows crosses this boundary, Join choice will change. Over will be Hash, under will be Nested Loops.
The rest is fairly self-explanatory and is the usual stuff in query plans.
Good stuff here.
This probably won’t seem like a big deal soon
But I just got the optimizer to pick an Adaptive Join! It took a few tries to figure out what would cause some guesswork to happen, but here it is.
Erik promises more details are forthcoming as he works out the XML details.
If we rerun the query and then take a look at the first operator in the execution plan, we can see that the Plan Guide is in use… and that the query hash has changed. It no longer matches the original query. Now it matches the query that included the query hint. This actually makes perfect sense. The Plan Guide is basically changing the query from the first example above, into the second.
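For readers who haven’t used them, a plan guide that attaches a hint to an existing statement is created with sp_create_plan_guide. This is only an illustrative sketch (the statement text and hint are invented, not the post’s); note that the @stmt text must match the submitted query exactly:

```sql
-- Attach OPTION (RECOMPILE) to a matching parameterized statement without
-- changing the application's SQL. The hinted text becomes part of the
-- compiled statement, which is why the query hash changes.
EXEC sp_create_plan_guide
    @name            = N'PG_Users_Recompile',
    @stmt            = N'SELECT DisplayName FROM dbo.Users WHERE Reputation > @rep',
    @type            = N'SQL',
    @module_or_batch = NULL,
    @params          = N'@rep int',
    @hints           = N'OPTION (RECOMPILE)';
```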
Now, what happens when we toss in the Query Store?
The query behavior is exactly what you want, but some of the metadata is no longer correct.
I was trying to come up with a demo for something totally different. Don’t ask. Seriously. It’s top secret.
Okay, so it’s just embarrassing.
Anyway. I had these two queries. Which are actually the same query twice. The only difference is the table variable definition.
Click through for the demo and additional information.
First of all, note that Estimated Number of Rows is now 10 000, which is correct and equals Actual Number of Rows. Due to the correct estimate, the optimizer decided that there are enough rows to benefit from a partial (local/global) aggregation and introduced a partial Hash aggregate before the join.
If you open Profiler, enable the events SP:StmtStarting, SP:StmtCompleted, SQL:StmtStarting, and SQL:StmtCompleted, and run the query without and then with the TF, you’ll see what “interleaved” actually means in terms of the execution sequence.
During regular execution, the query starts executing, then the function is executed, and then the query continues and finishes execution. We see the following event sequence:
This is a very thorough post, but if you have multi-statement TVFs, you absolutely want to read it.
A probe residual is important because it can indicate key performance problems that might not otherwise be brought to your attention.
What is a probe residual?
Simply put, a probe residual is an extra operation that must be performed to complete the matching process. “Extra” here means leftover things to do.
Click through for an example brought about by implicit conversion.
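As a sketch of how an implicit conversion produces one (the schema here is invented for illustration, not the post’s example):

```sql
-- a.AccountNumber is varchar; i.AccountNumber is nvarchar. SQL Server
-- implicitly converts the varchar side to nvarchar to compare them, and
-- the conversion-plus-comparison shows up in the Hash Match operator's
-- properties as a Probe Residual, re-checked for each hash bucket match.
SELECT a.AccountId
FROM dbo.Accounts AS a          -- AccountNumber varchar(20)
JOIN dbo.ImportedAccounts AS i  -- AccountNumber nvarchar(20)
    ON a.AccountNumber = i.AccountNumber;
```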