The showplan schema contains notes throughout stating the units of measure and what each of the values means. For instance, I can explain why SerialDesiredMemory, DesiredMemory, and RequestedMemory are identical:
…Provide memory grant estimate as well as actual runtime memory grant information. Serial required/desired memory attributes are estimated during query compile time for serial execution. The rest of attributes provide estimates and counters for query execution time considering actual degree of parallelism. SerialRequiredMemory: Required memory in KB if the query runs in serial mode. The query will not start without this memory. SerialDesiredMemory: Memory estimated to fit intermediate results in KB if the query runs in serial mode. RequiredMemory: Required memory in KB for the chosen degree of parallelism. If the query runs in serial mode, this is the same as SerialRequiredMemory. …
That’s taken directly from the 2017 schema. The units of measure are KB.
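If you want to see those attributes for real workloads, you can shred them out of cached plans. This is a hedged sketch, not from the original post; the DMVs and XPath path are standard, but the `TOP`/ordering choices are mine:

```sql
-- Pull the MemoryGrantInfo node (all values in KB per the schema notes) out of cached plans.
WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT TOP (10)
    qs.query_hash,
    mg.value('@SerialRequiredMemory', 'bigint') AS SerialRequiredMemoryKB,
    mg.value('@SerialDesiredMemory',  'bigint') AS SerialDesiredMemoryKB,
    mg.value('@RequiredMemory',       'bigint') AS RequiredMemoryKB,
    mg.value('@DesiredMemory',        'bigint') AS DesiredMemoryKB
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
CROSS APPLY qp.query_plan.nodes('//MemoryGrantInfo') AS x(mg)
ORDER BY qs.total_worker_time DESC;
```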
I’d never seen this before, so that’s going on my to-read list.
Time Zones were definitely being a drag today. I got an email from one of the developers at work asking about the performance difference between 2 queries. The only difference between the 2 queries is that one of them uses the AT TIME ZONE clause that was added in SQL Server 2016. I have not played around with this particular clause, but we do store quite a bit of data in the datetimeoffset data type. In the table in the developer’s queries, the data is all stored in the Eastern time zone, but they are considering storing it in additional time zones and will want to be able to display it in the Eastern time zone even if not stored that way. Thus, AT TIME ZONE.
When the developer was testing the conversion function, he noticed that the query slowed waaaayyyyy down when he added AT TIME ZONE. Before adding AT TIME ZONE to the query, STATISTICS TIME for the query was: CPU time: 145549 ms, elapsed time: 21693 ms. It returned 8,996 rows, but if I removed the DISTINCT, it returned over 72M rows. That’s a lot of clams … er, data.
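To make the shape of the problem concrete, here is a hypothetical version of the two queries; the table and column names are mine, not the developer's. The key point is that the conversion runs once per row, before the DISTINCT collapses those 72M rows:

```sql
-- Fast version: no conversion.
SELECT DISTINCT EventTime
FROM dbo.EventLog;

-- Slow version: AT TIME ZONE is evaluated for all 72M rows,
-- and only then does DISTINCT reduce the result to ~9,000 rows.
SELECT DISTINCT EventTime AT TIME ZONE 'Eastern Standard Time' AS EventTimeEastern
FROM dbo.EventLog;
```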
Read on for the rest of the story, including Robert’s solution. Also check out his Connect item related to this.
I had a great question submitted to me (thank you Brandman!) that I thought would make for a good blog post:
…I’ve been wondering if it really matters from a performance standpoint where I start my queries. For example, if I join from A-B-C, would I be better off starting at table B and then going to A & C?
The short answer: Yes. And no.
One of my favorite query tuning books is SQL Tuning by Dan Tow. Parts of it are rather dated at this point (like pretty much anything involving a rule-based optimizer), but the gist still works well. What it comes down to is finding the best single table from which to drive your query (based on table size, filters, etc.) and selecting the appropriate join order afterward. It’s a fairly time-consuming effort, but for the 0.5-1% of queries which really need it, it can be the difference between an awful plan and a good one.
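When you have worked out the driving table and join order by hand, you can pin it with a hint. This is an illustrative sketch with placeholder table names, and it's worth repeating that hints like this belong only on the rare query that truly needs them:

```sql
-- Pin the hand-chosen join order: drive from B, then join to A and C.
SELECT a.SomeColumn, b.OtherColumn, c.ThirdColumn
FROM   TableB AS b                       -- driving table: smallest filtered row count
JOIN   TableA AS a ON a.BId = b.BId
JOIN   TableC AS c ON c.BId = b.BId
WHERE  b.SomeFilter = 1
OPTION (FORCE ORDER);                    -- joins are performed in the order written
```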
I have read-only T-SQL that references the MSTVF. I did have some code that used both data modifications and CROSS APPLY, but interleaved execution does not occur in those scenarios.
So on my SQL Server 2017 instance, I set the database to compatibility level 110, turned Query Store on, and then executed my code.
Note that 110 is the compatibility mode for SQL Server 2012. That becomes an important part of Arun’s story.
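The setup likely looked something like the following sketch (the database name is a placeholder):

```sql
-- Compatibility level 110 = SQL Server 2012 behavior; Query Store captures the plans.
ALTER DATABASE [TestDb] SET COMPATIBILITY_LEVEL = 110;
ALTER DATABASE [TestDb] SET QUERY_STORE = ON;
```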
Usually, this blog is 100% pro window functions and advocates using them at any occasion. But like any tool, window functions come at a price and we must carefully evaluate if that’s a price we’re willing to pay. That price can be a sort operation. And as we all know, sort operations are expensive. They follow O(n log n) complexity, which must be avoided at all costs for large data sets.
In a previous post, I’ve described how to calculate a running total with window functions (among other ways). In this post, we’re going to calculate the cumulative revenue at each payment in our Sakila database.
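Against Sakila's `payment` table, a cumulative revenue calculation looks roughly like this; the exact query in the post may differ, but the window-function shape is the same:

```sql
-- Running total of revenue at each payment, ordered by payment time.
SELECT
    payment_date,
    amount,
    SUM(amount) OVER (ORDER BY payment_date, payment_id
                      ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)
        AS cumulative_revenue
FROM payment
ORDER BY payment_date, payment_id;
```

The `ROWS BETWEEN` frame matters here: without it, the default `RANGE` frame can behave differently on ties and is often slower.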
This is a good article comparing how different RDBMS products handle a fairly complicated windowed query and what you can do to improve performance.
We intended to fetch all customers and their addresses. But observe: we project only columns from the CUSTOMER table and we don’t have any predicates at all, specifically no predicates using the ADDRESS table. So, we’re completely ignoring any contributions from the ADDRESS table. We never really needed the JOIN in the first place!

And in fact, the optimiser can prove this too, because of the FOREIGN KEY constraint on C.ADDRESS_ID, which guarantees that every CUSTOMER record has exactly one corresponding ADDRESS record. The JOIN neither duplicates nor removes any CUSTOMER rows, so it is unneeded and thus eliminated (by some, but not all, databases; I will list each database at the end of the article).

So, in the presence of said FOREIGN KEY, the database can rewrite the SQL statement to the following, equivalent SQL statement:
Read on for a comparison across different products as well.
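The shape of the query being discussed looks roughly like this, using Sakila-style table names as in the article (a sketch, assuming `customer.address_id` is NOT NULL with a trusted foreign key):

```sql
-- The join contributes nothing: no ADDRESS columns projected, no ADDRESS predicates.
SELECT c.first_name, c.last_name
FROM customer AS c
JOIN address AS a
  ON a.address_id = c.address_id;

-- With the FK guaranteeing exactly one matching address per customer,
-- an optimizer that supports join elimination can rewrite it to:
SELECT c.first_name, c.last_name
FROM customer AS c;
```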
Look at all those table-valued function calls! Followed immediately by a really expensive hash match. My Spidey Sense started to tingle. What is
fn_GetMaxPrivilegeDepthMask, and why is it being called 30 times? I bet this is a problem. When you see “Table-valued function” as an operator in a plan, that actually means it’s a multi-statement table-valued function. If it were an inline table-valued function, it would get incorporated into the larger plan, and not be a black box. Multi-statement table-valued functions are evil. Don’t use them. The Cardinality Estimator isn’t able to make accurate estimates. The Query Optimizer isn’t able to optimize them in the context of the larger query. From a performance perspective, they don’t scale.
Even though this TVF is an out-of-the-box piece of code from Dynamics CRM, my Spidey Sense tells me that it’s the problem.
That said, Joe Sack and team are working on making multi-statement TVFs faster in SQL Server 2017. Whether that will be enough to change Andy’s excellent advice, we’ll have to wait and see.
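For contrast, here is a minimal sketch of the two function styles; the names and the `dbo.Privileges` table are mine, purely for illustration:

```sql
-- Multi-statement TVF: a black box to the optimizer, with a fixed low row estimate.
CREATE FUNCTION dbo.fn_Multi (@UserId int)
RETURNS @t TABLE (Depth int)
AS
BEGIN
    INSERT @t
    SELECT Depth FROM dbo.Privileges WHERE UserId = @UserId;
    RETURN;
END;
GO

-- Inline TVF: expanded into the calling query's plan, so estimates stay accurate.
CREATE FUNCTION dbo.fn_Inline (@UserId int)
RETURNS TABLE
AS RETURN
(
    SELECT Depth FROM dbo.Privileges WHERE UserId = @UserId
);
```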
All subsequent executions of that same query will go to the query cache to reuse that same initial query plan — this saves SQL Server time from having to regenerate a new query plan.
Note: A query with different values passed as parameters still counts as the “same query” in the eyes of SQL Server.
In the case of the examples above, the first time the query was executed was with the parameter for “Costa Rica”. Remember when I said this dataset was heavily skewed? Let’s look at some counts:
Check it out for a clear depiction of the topic. One solution that Bert doesn’t have, but which I will sometimes use, is to create local variables in a procedure and set their values equal to the input parameters. That way, the optimizer can’t sniff a specific value for the local variable and bases its estimate on the density vector instead. But there are several ways to get around this when it’s an issue.
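The local-variable workaround looks like this in sketch form; the procedure, table, and column names are placeholders:

```sql
CREATE PROCEDURE dbo.GetOrdersByCountry
    @Country nvarchar(60)
AS
BEGIN
    -- Copy the parameter into a local variable; the optimizer cannot sniff
    -- a specific value here, so the estimate comes from the density vector.
    DECLARE @LocalCountry nvarchar(60) = @Country;

    SELECT OrderId, OrderDate
    FROM dbo.Orders
    WHERE Country = @LocalCountry;
END;
```

The trade-off: you get a stable "average" plan instead of one tuned to the sniffed value, which helps with skew like the Costa Rica example but can hurt when one plan genuinely fits all values.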
Ideally, you should remove only what’s absolutely necessary. Using DBCC FREEPROCCACHE is a sledgehammer approach and typically creates a spike in CPU as all subsequent queries need to have their plans re-generated. Glenn gives examples on how to use each statement (and others) in his post Eight Different Ways to Clear the SQL Server Plan Cache, and I want to show you one more thing that happens when you clear a plan (or all plans) from cache.
For this demo script, I recommend running it against a TEST/DEV/QA environment because I am removing plans from cache which can adversely affect performance.
There are reasons to run these commands, but ideally, you should be as precise as possible in clearing plans out of the cache.
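In sketch form, being precise means finding the one offending plan and evicting only it; the search text here is a placeholder:

```sql
-- Locate the plan_handle for the specific query you care about.
SELECT cp.plan_handle, st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.text LIKE N'%MyProblemQuery%';

-- Evict just that one plan, leaving the rest of the cache alone.
DBCC FREEPROCCACHE (0x0500FF...);  -- replace with the plan_handle returned above
```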
For the sake of simplicity, I’ll assume that our example data has a clustered index on the “primary key” that we want to use to determine which rows have already been included in the set – in our example, the primary key is (FirstName, LastName). The data I’m using is a fictional example, but here’s the gist:
- #Employees has about 33,000 rows.
- #Customers has about 44,000 rows.
- #Passengers has about 500,000 rows.

The data is constructed in a way that these queries should return 530,000 rows, so we’re looking at some overlap, but far from totally overlapping rows.
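A hedged sketch of the kind of query being described, with the columns taken from the stated primary key and everything else assumed:

```sql
-- Combine the three sets, deduplicating on (FirstName, LastName).
-- UNION (as opposed to UNION ALL) is what forces the dedupe work.
SELECT FirstName, LastName FROM #Employees
UNION
SELECT FirstName, LastName FROM #Customers
UNION
SELECT FirstName, LastName FROM #Passengers;
```

How the engine removes those duplicates (one big sort/hash at the end versus merging pre-sorted clustered-index streams) is exactly where the performance differences come from.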
Example #492,806 showing that T-SQL is not a true 4th Generation Language: how you write the query can greatly matter for performance.