Let’s talk about the sequence of table joins in detail. It is very important to understand that the number of possible join orders grows exponentially with the number of tables, not linearly. For example, there are only 2 possible ways to join 2 tables, but the number can reach 12 for 3 tables. Different join sequences can have different query costs, and the SQL Server optimizer must select a good one. When the number of tables is high, this becomes a resource-intensive task: if SQL Server tried to enumerate every possible variant, such a query might never start executing. That is why SQL Server never does this and instead looks for a good-enough plan rather than the best plan, always trying to strike a compromise between optimization time and plan quality.
There are ways you can help the optimizer, and one of my favorite query tuning books was all about table selection.
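One way to help it, sketched here against hypothetical tables, is the FORCE ORDER hint, which tells SQL Server to join the tables in the order they appear in the query and so takes the join-order search off the optimizer’s plate (use with care, since you are overriding the optimizer):

-- A minimal sketch, assuming three hypothetical tables joined small-to-large
SELECT s.Id, l.Amount
FROM dbo.Small AS s
JOIN dbo.Medium AS m ON m.SmallId = s.Id
JOIN dbo.Large AS l ON l.MediumId = m.Id
OPTION (FORCE ORDER);  -- join in the order written, skipping join-order search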
Oracle hints were quite common during the infancy of the Oracle Cost-Based Optimizer (CBO). It could be frustrating for a database administrator who was accustomed to the Rule-Based Optimizer (rules, people! If there’s an index, use it!) to give up control of performance to a feature that simply wasn’t taking the shortest route to the results. As time passed from Oracle 9i to 10g, we leaned on hints less, trusting the CBO, and by Oracle 11g it started to be frowned upon unless you had a very strong use case for hinting. I was in the latter scenario, as my first Oracle 11g database environment required not just new data but a new database weekly, along with a requirement for me to guarantee performance. I knew pretty much every optimal plan for every SQL statement in the systems, and it was my responsibility to make sure each new database chose those plans. I had incorporated complex hints (and then profiles as we upgraded…)
With the introduction of Oracle 12c, using hints effectively became a sought-after skill again, as many new optimizer features (often with the words “dynamic” or “automated” in their names) started to impact performance beyond what was acceptable.
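For readers who have not written one, an Oracle hint is just a specially formatted comment after the SELECT keyword. A minimal sketch, with hypothetical tables, fixing both the join order and the join method:

-- LEADING fixes the join order (a first, then b);
-- USE_NL requests a nested loops join when b is joined
SELECT /*+ LEADING(a b) USE_NL(b) */ a.id, b.val
FROM a
JOIN b ON b.a_id = a.id;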
Read on for a nearly-equivalent query in the two database systems.
Let’s look at one query with a few variations.

SELECT COUNT(*) AS [Records], SUM(CONVERT(BIGINT, t.Amount)) AS [Total]
FROM dbo.t1 AS t
WHERE t.Id > 0 AND t.Id < 3;
The plan for it is alright. It’s fairly straightforward and the query finishes in about 170ms.
We can see from the graphical execution plan that it’s been Simple Parameterized. SQL Server does this to make plan caching more efficient.
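For reference, simple parameterization means the literals 0 and 3 are replaced with parameters in the cached plan, so queries that differ only in those values reuse the same plan. The parameterized text looks along these lines:

-- Hedged illustration: the cached, parameterized shape of the query above
SELECT COUNT(*) AS [Records], SUM(CONVERT(BIGINT, t.Amount)) AS [Total]
FROM dbo.t1 AS t
WHERE t.Id > @1 AND t.Id < @2;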
Check out the entire post.
Here are the results of the queries:

SQL Server Execution Times: CPU time = 656 ms, elapsed time = 651 ms.
SQL Server Execution Times: CPU time = 204 ms, elapsed time = 197 ms.
As you can see, the WITH clause specifies that OPENJSON should immediately return properties from the JSON array without a second parsing pass. Performance of the queries might improve roughly threefold if you avoid double parsing.
That’s a pretty big difference when you specify the relevant data model elements.
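As a minimal sketch of the pattern (the sample JSON here is invented), the WITH clause maps JSON properties straight to typed columns in a single parse:

DECLARE @json nvarchar(max) = N'[{"Id":1,"Amount":10.50},{"Id":2,"Amount":20.00}]';

-- One pass: OPENJSON extracts typed columns directly, with no second parse
SELECT j.Id, j.Amount
FROM OPENJSON(@json)
WITH (Id int '$.Id', Amount decimal(10,2) '$.Amount') AS j;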
If you need to add, remove, or replace hints in ad-hoc queries where you can’t change the code, plan guides can help. See a demo of removing a query hint from parameterized T-SQL run from an application, and get tips on how to make your plan guides work in SQL Server.
The code from the demo is here. Links for more info are below the video. Have fun!
Click through to watch the video, or you can catch the podcast version.
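For a sense of the technique’s shape, here is a hedged sketch (the name, statement text, and parameter are hypothetical): a plan guide matched to a parameterized statement with @hints = NULL, which tells SQL Server not to apply the OPTION clause the application sends:

EXEC sp_create_plan_guide
    @name = N'RemoveHintGuide',                                    -- hypothetical name
    @stmt = N'SELECT * FROM dbo.Orders WHERE CustomerId = @cust',  -- must match the app's text exactly
    @type = N'SQL',
    @module_or_batch = NULL,
    @params = N'@cust int',
    @hints = NULL;  -- NULL: do not apply the statement's OPTION clause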
Erik Darling has a couple more posts on adaptive joins in SQL Server 2017. First, he wonders what happens when you add scalar functions to the mix:
See, this isn’t SARGable either (and no, SCHEMABINDING doesn’t change this). When a predicate isn’t SARGable, you take away an index seek as an access choice. You don’t see too many Nested Loops with an index scan on the other end, do you?
So there you go. It’s not the function itself that bops our Adaptive Join on the head, but the lack of SARGability.
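A minimal illustration of the underlying point, with hypothetical table and column names; the first predicate buries the column inside a function, while the second leaves it bare so an index seek remains possible:

-- Not SARGable: every row must pass through YEAR() before the comparison
SELECT COUNT(*) FROM dbo.Orders WHERE YEAR(OrderDate) = 2017;

-- SARGable: the column stands alone, so an index on OrderDate can be sought
SELECT COUNT(*) FROM dbo.Orders
WHERE OrderDate >= '20170101' AND OrderDate < '20180101';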
I do have to point out that Cross Apply used to be implemented only as a Nested Loops Join. I learned that many years ago from one of the best articles written about Cross Apply, by Paul White. That changed recently: it’s possible to see it implemented with a Hash Join in at least SQL Server 2016. I’ve seen it crop up in Cross Apply queries without a TOP operator.
The latter results are a bit surprising.
There are three types of logical plans:
- Parsed logical plan.
- Analyzed logical plan.
- Optimized logical plan.
Parsed logical plans go through a series of analysis rules to resolve references and types, producing the analyzed logical plan. From there, Spark applies a set of optimization rules to produce the optimized logical plan, and you can plug in your own rules at this stage.
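You can watch all three stages for yourself: EXPLAIN EXTENDED in Spark SQL prints the parsed, analyzed, and optimized logical plans, followed by the physical plan (the table name here is hypothetical):

-- Prints == Parsed Logical Plan ==, == Analyzed Logical Plan ==,
-- == Optimized Logical Plan ==, and == Physical Plan ==
EXPLAIN EXTENDED
SELECT dept, COUNT(*) FROM employees GROUP BY dept;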
Click through for the details.
Erik Darling has started looking at interleaved execution of multi-statement table-valued functions in SQL Server 2017. First, he gives an intro:
In the first plan, the optimizer chooses the ColumnStore index over the nonclustered index that it chose in compat level 130.
This plan is back to where it was before, and I’m totally cool with that. Avoiding bad choices is just as good as making good choices.
I think. I never took an ethics class, so whatever.
In this case, the inline table-valued function wiped the floor with the MSTVF, even with Interleaved Execution.
Obviously there’s overhead dumping that many rows into a table variable prior to performing the join, but hey, if you’re dumping enough rows in a MSTVF to care about enhanced cardinality estimation…
Just like Global Thermonuclear War, I believe the best way to win multi-statement versus inline TVFs is not to play at all.
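To make the distinction concrete, here is a minimal sketch (hypothetical objects) of the two shapes being compared; the multi-statement version materializes rows into a table variable, while the inline version is effectively a parameterized view the optimizer can expand:

-- Multi-statement TVF: rows land in a table variable before the caller sees them
CREATE FUNCTION dbo.GetOrders_MSTVF (@CustomerId int)
RETURNS @out TABLE (OrderId int, Amount money)
AS
BEGIN
    INSERT INTO @out (OrderId, Amount)
    SELECT OrderId, Amount FROM dbo.Orders WHERE CustomerId = @CustomerId;
    RETURN;
END;
GO

-- Inline TVF: a single SELECT the optimizer can inline into the calling query
CREATE FUNCTION dbo.GetOrders_Inline (@CustomerId int)
RETURNS TABLE
AS RETURN
    SELECT OrderId, Amount FROM dbo.Orders WHERE CustomerId = @CustomerId;
GO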
We have three types of physical join algorithms in SQL Server: hash, nested loops, and merge. The adaptive join allows SQL Server to choose the actual physical algorithm on the fly, between the first two: hash (HM) and nested loops (NL).
NL has two join strategies: the naive nested loops join (the inner loop scans the whole inner table or index) and the index nested loops join (an index on the join column of the inner table is used to find the necessary rows, which are then applied to the outer row; this is also called Nested Loops Apply). Typically, the second one performs very well if you have a rather small input on the outer side and an indexed, rather big input on the inner side.
HM is more universal and uses hash algorithms to match rows, so no indexes are necessary. You may refer to my blog post Hash Join Execution Internals for more details.
The Adaptive Join starts execution as a hash join. It consumes all of the build-phase input and then looks at the adaptive join threshold: if the number of rows is greater than or equal to this threshold, it continues as a hash join; if the number of rows is less than the threshold, it switches to nested loops.
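If you want to try this yourself, a hedged note on prerequisites: in SQL Server 2017, batch mode adaptive joins require database compatibility level 140 and a columnstore index on one of the tables involved, and the chosen threshold shows up on the Adaptive Join operator in the actual execution plan:

-- Assumes SQL Server 2017; adaptive joins light up at compatibility level 140
ALTER DATABASE CURRENT SET COMPATIBILITY_LEVEL = 140;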
If you want to get a better understanding of how adaptive joins work, Dmitry’s post is a great start.
If you’re an experienced SQL tuner, you’ll notice some issues with this statement. First off, the query has not one but two table variables. It’s generally better to use temp tables because table variables don’t have good statistics by default. Secondly, the statement uses MERGE, which has had more than its fair share of issues. See Aaron Bertrand’s post “Use Caution with SQL Server’s MERGE Statement” for more details on those issues.
But that got me wondering: why would the EF team use SQL features that perform so poorly? So I decided to take a closer look at the SQL statement. Just so you know, the code that was used to generate the SQL saves three entities (Katana, Kama, and Tessen) to the database in a batch. (Julie used a samurai theme, so I just continued with it.)
Yeah…I’m not liking the MERGE statement very much here.
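On the statistics point above, a minimal sketch (hypothetical names) of why the table variable choice stings:

DECLARE @Ids TABLE (Id int PRIMARY KEY);  -- table variable: no column statistics;
                                          -- the optimizer typically guesses one row
CREATE TABLE #Ids (Id int PRIMARY KEY);   -- temp table: statistics are created,
                                          -- so joins against it estimate far better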