One of the many current downsides of @table variables is that modifying them inhibits parallelism, which is a problem #temp tables don’t have.
While updating and deleting from @table variables is fairly rare (I’ve seen it, but not too often), at minimum you need an insert to put some data in there.
No matter how big, bad, ugly, or costly your insert statement is, SQL Server can’t parallelize it.
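As a minimal sketch of the difference (table names here are made up): run the same expensive insert into a @table variable and into a #temp table, then compare the actual plans. The table variable insert stays serial; the temp table version is eligible for parallelism.

```sql
/* Hypothetical illustration: the same costly INSERT ... SELECT
   into a table variable and into a temp table. The table variable
   insert cannot get a parallel plan; the temp table insert can. */
DECLARE @t TABLE (Id INT NOT NULL);

INSERT @t (Id)                    -- forced to run serially
SELECT o.Id
FROM dbo.BigTable AS o            -- assumed large tables
JOIN dbo.OtherBigTable AS i
    ON i.Id = o.Id;

CREATE TABLE #t (Id INT NOT NULL);

INSERT #t (Id)                    -- eligible for a parallel plan
SELECT o.Id
FROM dbo.BigTable AS o
JOIN dbo.OtherBigTable AS i
    ON i.Id = o.Id;
```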
That’s just what he wants you to think and then the trap goes off.
Now, there’s an Extended Event that… Used to work.
These days it just stares blankly at me. But since I’ve worked with this before, I know the problem.
It’s that Key Lookup — I’ll explain more in a minute.
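The usual shape of a Key Lookup fix, as a sketch with made-up index and column names: widen the nonclustered index so it covers the query and the lookup disappears.

```sql
/* Hypothetical example: a query seeks on IX_Users_Reputation but
   does a Key Lookup back to the clustered index to fetch
   DisplayName. Adding the column as an INCLUDE makes the index
   covering and removes the lookup. */
CREATE INDEX IX_Users_Reputation
    ON dbo.Users (Reputation)
    INCLUDE (DisplayName)
    WITH (DROP_EXISTING = ON);
```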
Adaptive joins won’t do all the work for you, so Erik explains how you can set yourself up for success.
Getting the actual execution plan (that is, the plan with run-time statistics) for a query coming from an application has always been a little difficult. It’s fine if you can get the query running in Management Studio and reproduce the behaviour from the app, but that can be difficult.
There’s the query_post_execution_showplan event in Extended Events, but that’s a pretty heavy event and not something that I’d like to run on a busy server.
No more! SQL 2019 adds a new plan-related function to get the last actual plan for a query: sys.dm_exec_query_plan_stats.
Read on to see how to configure this, as well as a demo.
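The rough shape of the setup, as I understand it (check the documentation for your build): enable the LAST_QUERY_PLAN_STATS database-scoped configuration, then cross apply the new function against cached plans.

```sql
/* SQL Server 2019: capture the last actual plan for a query
   without a heavy Extended Events session. */
ALTER DATABASE SCOPED CONFIGURATION
    SET LAST_QUERY_PLAN_STATS = ON;

SELECT st.text,
       qps.query_plan      -- last actual plan, with run-time stats
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
CROSS APPLY sys.dm_exec_query_plan_stats(cp.plan_handle) AS qps
WHERE st.text LIKE N'%YourQueryText%';  -- filter to the query you care about
```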
The idea behind FROID is that it removes some of the restrictions around scalar-valued functions:
1. They can be inlined into the query, rather than run once per row returned
2. They don’t force serial execution, so you can get a parallel plan
If your functions already run pretty quickly over a small number of rows, and the calling query doesn’t qualify for parallelism, you may not see a remarkable speedup.
Even in that case, Erik argues that you can still get some benefits from SQL Server 2019 bringing those scalar UDFs inline.
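A sketch of what this looks like in practice, with a hypothetical function: under compatibility level 150, SQL Server can fold the UDF body into the calling query, and `sys.sql_modules` tells you whether it considers a given function inlineable.

```sql
/* Hypothetical scalar UDF; under compatibility level 150 the
   optimizer can inline its body into the calling query instead
   of invoking it once per row. */
CREATE OR ALTER FUNCTION dbo.TotalPrice (@Qty INT, @Price MONEY)
RETURNS MONEY
AS
BEGIN
    RETURN @Qty * @Price;
END;
GO

/* Check whether SQL Server considers the function inlineable. */
SELECT OBJECT_NAME(object_id) AS function_name,
       is_inlineable
FROM sys.sql_modules
WHERE object_id = OBJECT_ID(N'dbo.TotalPrice');
```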
This is a slightly different take on yesterday’s post, which is also a common problem I see in queries today.
Someone wrote a function to figure out if a user is trusted, or has the right permissions, and sticks it in a predicate — it could be a join or where clause.
If you do need to use scalar UDFs, SQL Server 2019 is a big step forward.
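The pattern described above tends to look something like this (function and table names are invented): a scalar UDF sitting in a predicate. Before SQL Server 2019 this executes once per row and forces a serial plan; with UDF inlining the function body can be expanded into the query itself.

```sql
/* Hypothetical shape of the anti-pattern: a scalar UDF used as
   a WHERE-clause predicate. Pre-2019, this runs row by row and
   blocks parallelism; 2019 can inline it. */
SELECT u.Id, u.DisplayName
FROM dbo.Users AS u
WHERE dbo.IsTrusted(u.Id) = 1;   -- assumed scalar UDF
```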
This question came up the other day from a co-worker: he said he couldn’t change a query, but was there a way of making the same query produce a better plan by doing something else, perhaps (magic?)
He said his query had a WHERE clause that looked like the following
WHERE RIGHT(SomeColumn,3) = '333'
I then asked if he could change the table; his answer was that he couldn’t mess around with the current columns, but he could add a column
Click through to see how Denis was able to solve this problem.
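One common fix for a non-SARGable predicate like this (a sketch with made-up object names, not necessarily the exact approach in the post): add a computed column that captures the expression, then index it. The optimizer can match the original predicate to the computed column and seek the new index.

```sql
/* Add a computed column for the expression, then index it. */
ALTER TABLE dbo.SomeTable
    ADD SomeColumnRight3 AS RIGHT(SomeColumn, 3);

CREATE INDEX IX_SomeTable_Right3
    ON dbo.SomeTable (SomeColumnRight3);

/* The original, unchanged query can now use an index seek,
   because the optimizer matches the expression to the
   computed column. */
SELECT *
FROM dbo.SomeTable
WHERE RIGHT(SomeColumn, 3) = '333';
```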
The optimizer’s output may contain both apply and nested loops join physical operations. Both are shown in execution plans as a Nested Loops Join operator, but they have different properties:
Apply: the Nested Loops Join operator has Outer References. These describe parameter values passed from the outer (upper) side of the join to operators on the inner (lower) side of the join. The value of each parameter may change on each iteration of the loop. The join predicate is evaluated (given the current parameter values) by one or more operators on the inner side of the join. The join predicate is not evaluated at the join itself.
Join: the Nested Loops Join operator has a Predicate (unless it is a cross join). It does not have any Outer References. The join predicate is always evaluated at the join operator.
And to make things tricky, APPLY can generate either of these. Read the whole thing.
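For concreteness, here is a hypothetical T-SQL `APPLY` of the kind discussed: the optimizer may implement it as an apply (Nested Loops with Outer References) or, after decorrelation, as a regular join with a Predicate.

```sql
/* A correlated CROSS APPLY: u.Id flows to the inner side as an
   outer reference if the optimizer chooses an apply, or the
   whole thing may be rewritten as an ordinary join. */
SELECT u.Id, ca.Title
FROM dbo.Users AS u
CROSS APPLY
(
    SELECT TOP (1) p.Title
    FROM dbo.Posts AS p
    WHERE p.OwnerUserId = u.Id      -- correlation on u.Id
    ORDER BY p.CreationDate DESC
) AS ca;
```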
I’d suggest starting with CTEs because they’re easy to write and to read. If you hit a performance wall, try ripping out a CTE and writing it to a temp table, then joining to the temp table.
This is my advice, too. Start with the thing which is easiest for you to develop and maintain. If it suffices for performance, stick with it; otherwise, move to the next-lowest level of complication. Stop when you have good enough performance. This optimizes for one of the most precious resources people rarely think about: developer maintenance time. Developers are pretty expensive, so the more time they spend trying to understand complex code, the less time they’re doing stuff which pushes the business forward.
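The refactoring path above can be sketched like so (names are illustrative): start with the readable CTE, and only if it hits a performance wall, materialize the same logic into a temp table and join to that.

```sql
/* Version 1: the readable starting point. */
WITH RecentOrders AS
(
    SELECT CustomerId, COUNT(*) AS OrderCount
    FROM dbo.Orders
    WHERE OrderDate >= DATEADD(MONTH, -1, GETDATE())
    GROUP BY CustomerId
)
SELECT c.Name, ro.OrderCount
FROM dbo.Customers AS c
JOIN RecentOrders AS ro
    ON ro.CustomerId = c.Id;

/* Version 2: materialize once into a temp table, then join.
   The temp table gets its own statistics, which can help the
   optimizer when the CTE version produces a poor plan. */
SELECT CustomerId, COUNT(*) AS OrderCount
INTO #RecentOrders
FROM dbo.Orders
WHERE OrderDate >= DATEADD(MONTH, -1, GETDATE())
GROUP BY CustomerId;

SELECT c.Name, ro.OrderCount
FROM dbo.Customers AS c
JOIN #RecentOrders AS ro
    ON ro.CustomerId = c.Id;
```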
Some of the more perplexing problems to troubleshoot in SQL Server are those related to memory grants. Some queries need more memory than others to execute, based on what operations need to be performed (e.g. sort, hash). SQL Server’s optimizer estimates how much memory is needed, and the query must obtain the memory grant in order to start executing. It holds that grant for the duration of query execution – which means if the optimizer overestimates memory you can run into concurrency issues. If it underestimates memory, then you can see spills to tempdb. Neither is ideal, and when you simply have too many queries asking for more memory than is available to grant, you’ll see RESOURCE_SEMAPHORE waits. There are multiple ways to attack this issue, and one of my new favorite methods is to use Query Store.
Click through for a demonstration.
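As a sketch of the Query Store angle (not necessarily the exact query from the post): the runtime stats view records per-plan memory usage, reported in 8 KB pages, which makes it easy to rank queries by grant size.

```sql
/* Find the top memory consumers recorded in Query Store.
   avg_query_max_used_memory is in 8 KB pages, so * 8 gives KB. */
SELECT TOP (10)
       qt.query_sql_text,
       rs.avg_query_max_used_memory * 8 AS avg_used_memory_kb
FROM sys.query_store_runtime_stats AS rs
JOIN sys.query_store_plan AS p
    ON p.plan_id = rs.plan_id
JOIN sys.query_store_query AS q
    ON q.query_id = p.query_id
JOIN sys.query_store_query_text AS qt
    ON qt.query_text_id = q.query_text_id
ORDER BY rs.avg_query_max_used_memory DESC;
```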
I had to write some hand-off training about query tuning when I was starting a new job.
As part of the training, I had to explain why writing “complicated logic” could lead to poor plan choices.
So I did what anyone would do: I found a picture of a pirate, named him Captain Or, and told the story of how he got Oared to death for giving confusing ORders.
Click through for a troublesome query and a few ways of rewriting it to be less troublesome. My go-to is typically to rewrite it as two statements with a UNION ALL between them if I can.
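A sketch of that rewrite, with invented table and column names: split the OR into two statements so each branch can get its own index access path, and exclude the overlap from the second branch so rows matching both conditions aren’t returned twice.

```sql
/* Before: Captain Or's version. A single OR across different
   columns often forces a scan. */
SELECT Id, DisplayName
FROM dbo.Users
WHERE DisplayName = N'Erik'
   OR Reputation > 100000;

/* After: two statements glued with UNION ALL, each able to use
   its own index seek. The extra predicate on the second branch
   keeps rows matching both conditions from appearing twice. */
SELECT Id, DisplayName
FROM dbo.Users
WHERE DisplayName = N'Erik'

UNION ALL

SELECT Id, DisplayName
FROM dbo.Users
WHERE Reputation > 100000
  AND DisplayName <> N'Erik';
```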