This one’s a bit trickier, but let’s walk through it. We’re getting data from the Posts table where the Tags column equals “<sql-server>”, and selecting every column from both the Posts and PostTags tables; we can tell because no properties are specified in the Select. Even though this statement looks more complex, it’s only three lines and reads somewhat like a SQL statement. But it’s really a LINQ (Language Integrated Query) statement, specifically a LINQ to Entities statement, and it will be translated into a SQL statement like this:
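The excerpt cuts off before the translation; the post shows the exact SQL. As a rough sketch of the shape Entity Framework typically generates (the column names here are assumptions on my part):

```sql
SELECT [Extent1].[Id], [Extent1].[Title], [Extent1].[Tags],
       [Extent2].[PostId], [Extent2].[TagId]
FROM [dbo].[Posts] AS [Extent1]
INNER JOIN [dbo].[PostTags] AS [Extent2]
    ON [Extent1].[Id] = [Extent2].[PostId]
WHERE N'<sql-server>' = [Extent1].[Tags];
```

The [Extent1]/[Extent2] aliases are a telltale sign of Entity Framework-generated SQL.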
Read the whole thing.
It’s a free community service. We hope you love it as much as we do, especially all the cool new execution plan icons by our illustrator, Eric Larsen.
Down the road, we’re thinking about adding logins (so you can see past plans you’ve submitted), execution plan advice, image and HTML downloads (so you can embed the plan in your own blog or report), zooming, and more.
Check out the Paste The Plan website.
Which brings me to today, one of the most exciting days in the history of SQL Sentry. I would like to introduce to you the new unified Plan Explorer v3! This edition includes all features of Plan Explorer PRO, plus two new and incredibly powerful features: Index Analysis and Performance Profiling.
These new features were originally going to be part of a third edition, Plan Explorer ULTIMATE, at an even higher price point. However, we ultimately came to our senses and realized that this would run counter to the mission, and that the only way ahead was to make all features of Plan Explorer freely available to everyone.
If you’ve used Live Query Profile in Plan Explorer (or Live Query Statistics in SSMS), then you’re familiar with watching your query plan play live as it executes. Performance Profiling adds some special sauce to that feature. It’s like a DVR for the live query profile, and it also displays performance metrics as a graph below the plan diagram. The graph shows performance metrics by time rather than by operator. In complex plans, you might have many operators doing work at the same time, and you’ll see that better by looking at the graphs. Adding a time axis to the plan is a pretty ingenious new way to look at performance.
You can then replay the “DVRed” query and re-watch the profile without re-running the query. If you save the plan as a .pesession file, the DVRed query with its performance profile is included in that file. You can even change the speed to watch in slow-mo or fast-forward.
If you don’t already use Plan Explorer, you’re missing out on the single best query tuning add-in.
Let’s answer a different question first: what happens when the plan gets invalidated, when the index being used gets dropped or some other structural change occurs so that the plan is no longer valid? I answered that question in this blog post. Once the object is dropped, the forced plan becomes invalid, so that plan can no longer be used. The Query Store still attempts to apply the plan during any recompile or compile event of the query in question, but it fails and a proper plan is used. All this means, I think, that the Query Store is going to ignore a new index, since a new index doesn’t invalidate an existing plan; a new index just makes new plans possible. However, when I was asked this question, it wasn’t something I had tested, so I gave a speculative, best-guess answer with plenty of caveats and the promise to provide a tested answer ASAP. Here we go.
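For context, forcing a plan through the Query Store looks something like this (a sketch; the ids are placeholders you would look up first):

```sql
-- Find the query and its candidate plans, then force one.
SELECT qsq.query_id, qsp.plan_id, qsp.is_forced_plan
FROM sys.query_store_query AS qsq
JOIN sys.query_store_plan AS qsp
    ON qsp.query_id = qsq.query_id;

EXEC sys.sp_query_store_force_plan @query_id = 42, @plan_id = 66;  -- placeholder ids
```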
Read on for the full answer.
One of the first things we often learn when we’re looking at plans is that SQL Server uses estimates, and sometimes those estimates are wrong. At first glance, this looks really wrong: it estimated 11.5 rows, and actually got 20,825 rows!
Read the whole thing to see how nested loop estimates that seem way off might not actually be wrong.
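The usual resolution, hedging since the full explanation is in the post: on the inner side of a nested loop, the estimate is per execution, while the actual row count is the total across all executions. With a hypothetical execution count of roughly 1,811, the numbers reconcile:

$$11.5 \text{ rows per execution} \times 1{,}811 \text{ executions} \approx 20{,}826 \text{ rows} \approx 20{,}825 \text{ actual rows}$$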
Below is the finished query. Again, overall query execution cannot exceed 100%, but at the operator level, percentages show the real ratio between actual and estimated rows, with no cap.
So you can see how the actual rows from the clustered index scan on the PhoneNumberType table were 14,500% of the estimate, and how a series of severe misestimates comes from the bottom-right area of the plan (where actual rows are not even 1% of the estimate), worsening as it moves up through the nodes.
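To make the arithmetic concrete with made-up numbers (the post shows the real plan): an operator estimated at 2 rows that actually returns 290 rows displays as

$$\frac{290 \text{ actual rows}}{2 \text{ estimated rows}} = 145 = 14{,}500\%$$

so the uncapped operator-level percentages can climb arbitrarily high.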
I think this makes Live Query Stats a better tool for query analysis. I haven’t used it much in production, but this makes me want to give it another try.
Parameter sniffing fixes are based on your career progression with databases, and they go like this:
1. Reboot the server! – Junior folks panic and freak out, and just restart the server. Sure enough, that erases all cached execution plans. As soon as the box comes back up, they run rpt_Sales for China, because that’s the one that was having problems. Because it’s called first, it gets a great plan for big data, and the junior admin believes they’ve fixed the problem.
2. Restart the SQL Server instance – Eventually, as these folks’ careers progress, they realize they can’t go rebooting Windows all the time, so they try this instead. It has the same effect.
If a reboot can’t fix the problem, I’m out of ideas…
By the way, I second Brent’s recommendation of Erland’s query plan article. Erland doesn’t publish frequently, but when he does it’s worth the wait.
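Further up the career ladder, the fixes get more surgical. As an illustration (my sketch, not a step from Brent’s list), you can recompile one object or evict one plan instead of restarting anything:

```sql
-- Mark a single procedure for recompilation (proc name assumed for illustration):
EXEC sys.sp_recompile N'dbo.rpt_Sales';

-- Or evict a single cached plan by its handle:
DECLARE @handle varbinary(64);
SELECT TOP (1) @handle = cp.plan_handle
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.text LIKE N'%rpt_Sales%';

IF @handle IS NOT NULL
    DBCC FREEPROCCACHE (@handle);
```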
Along comes our aggressive junior DBA, who decides that there are “too many” indexes on the server. No, I don’t know what that means either, but they evidently read it on the internet or something, so they drop the index we created before:
What now happens to our lovely execution plan and the plan forcing? We’ll take a look at two events in Extended Events: sql_statement_recompile and query_store_plan_forcing_failed. Nothing happens immediately on dropping the index; the plans associated with that object, if any, are marked as invalid in the cache. The next time we call the query, it’s going to recompile, and we can see the event:
The result is a bit happier than I would have expected; I was looking forward to an “and the world came crashing down” conclusion.
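If you want to watch for the same forcing failure on your own system, a minimal Extended Events session along these lines should do it (the session name is mine; both events ship with SQL Server 2016 and later):

```sql
CREATE EVENT SESSION QueryStoreForcingWatch ON SERVER
ADD EVENT sqlserver.sql_statement_recompile,
ADD EVENT sqlserver.query_store_plan_forcing_failed
ADD TARGET package0.event_file (SET filename = N'QueryStoreForcingWatch');
GO

ALTER EVENT SESSION QueryStoreForcingWatch ON SERVER STATE = START;
```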
For example, when I run it with 10,000 joins:
- Msg 8631, Level 17, State 1, Line 1
- Internal error: Server stack limit has been reached. Please look for potentially deep nesting in your query, and try to simplify it.
When I drop it down to a much more realistic 5,000 joins:
- Msg 4074, Level 16, State 1, Line 1
- Client drivers do not accept result sets that have more than 65,535 columns.
Ah! Okay, that’s fair. (That’s also two error messages I’ve never seen before. Probably a good thing.) Alright, let’s take out the SELECT * and replace it with SELECT p1.* and see what happens:
This is a fun read.
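If you want to play along at home, here’s a rough sketch of a generator (assuming a dbo.Posts table with an Id column; this version uses SELECT p1.*, matching the last step above, so swap in SELECT * to chase the column-count error instead):

```sql
DECLARE @joins int = 5000;   -- try 10000 to aim for the stack-limit error
DECLARE @sql nvarchar(max) = N'SELECT p1.* FROM dbo.Posts AS p1';
DECLARE @i int = 2;

-- Append one self-join per iteration: JOIN dbo.Posts AS pN ON pN.Id = p1.Id
WHILE @i <= @joins
BEGIN
    SET @sql += N' JOIN dbo.Posts AS p' + CAST(@i AS nvarchar(10))
              + N' ON p' + CAST(@i AS nvarchar(10)) + N'.Id = p1.Id';
    SET @i += 1;
END;

EXEC sys.sp_executesql @sql;
```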
When you first execute a batch or stored procedure that contains multiple statements, not every statement is compiled up front. If some code paths result in a statement not being executed, SQL Server may choose not to compile that statement; in other words, some statements are only compiled upon first execution (deferred compilation). So far, I have found two scenarios in which a statement will not be compiled if a code path causes it to be skipped (see the sketch after the list):

1. Statements involving a temp table. The data could have changed by the time the statement actually executes, so it doesn’t make sense to compile it right at the beginning.
2. Statements with OPTION (RECOMPILE) attached. We need to recompile that statement every time anyway; if we never use it, why compile it in advance?
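Here’s a hypothetical repro of the first scenario (the proc name is mine; the behavior is what the post describes):

```sql
CREATE OR ALTER PROCEDURE dbo.DeferredCompileDemo
    @path int
AS
BEGIN
    IF @path = 1
        SELECT 1 AS TookPathOne;
    ELSE
    BEGIN
        CREATE TABLE #t (id int);
        INSERT #t (id) VALUES (1);
        SELECT id FROM #t;   -- deferred: compiled only when this branch first runs
    END
END;
GO

-- The first call takes the @path = 1 branch, so the temp-table statements
-- are skipped and, per the post, not compiled yet.
EXEC dbo.DeferredCompileDemo @path = 1;
```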
Read the whole thing.