If we rerun the query and then take a look at the first operator in the execution plan, we can see that the Plan Guide is in use… and that the query hash has changed. It no longer matches the original query. Now it matches the query that included the query hint. This actually makes perfect sense. The Plan Guide is basically changing the query from the first example above into the second.
Now, what happens when we toss in the Query Store?
The query behavior is exactly what you want, but some of the metadata is no longer correct.
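To make the mechanism concrete, here's a minimal sketch of a plan guide that injects an OPTION hint into an ad hoc statement; the table, statement text, and hint are illustrative, not taken from the original post:

```sql
-- Hypothetical plan guide: attaches OPTION (MAXDOP 1) to a matching ad hoc statement.
-- dbo.Orders and the statement text are placeholders.
EXEC sp_create_plan_guide
    @name            = N'Guide_Orders_Maxdop1',
    @stmt            = N'SELECT * FROM dbo.Orders WHERE CustomerID = 42;',
    @type            = N'SQL',
    @module_or_batch = NULL,
    @params          = NULL,
    @hints           = N'OPTION (MAXDOP 1)';
```

Because the guide effectively rewrites the submitted statement to include the hint, the optimizer hashes the hinted text, which is why the query hash no longer matches the original.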
The CTP 2.0 release adds a new system view, sys.dm_db_tuning_recommendations, that returns recommendations you can apply to fix potential problems in your database. This view contains all identified potential performance issues in SQL queries caused by plan changes, along with the correction scripts you can apply. Every row in this view contains one recommendation you can apply to fix the issue. Some of the information shown in this view includes:
ID of the query, the plan that caused the regression, and the plan that might be used instead.
A reason describing what kind of regression was detected (e.g. CPU time for the query changed from 17ms to 189ms).
A T-SQL script that can be used to force the plan.
Information about the current plan and the previous plan that had better performance.
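A query along these lines surfaces those fields; the column names and JSON paths follow the documented shape of the view's details column, so treat it as a sketch:

```sql
-- List recommendations with the regressed/recommended plan ids
-- and the generated T-SQL correction script.
SELECT reason,
       score,
       pfd.query_id,
       pfd.regressed_plan_id,
       pfd.recommended_plan_id,
       JSON_VALUE(details, '$.implementationDetails.script') AS force_script
FROM sys.dm_db_tuning_recommendations
CROSS APPLY OPENJSON(details, '$.planForceDetails')
     WITH (query_id            int '$.queryId',
           regressed_plan_id   int '$.regressedPlanId',
           recommended_plan_id int '$.recommendedPlanId') AS pfd;
```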
In the “surgical scalpel to chainsaw” range of query tuning options, this rates approximately guillotine. I think it’ll be a very useful tool for finding issues, but it wouldn’t be wise to start lopping off all the heads just because the optimizer tells you to. In this context, I imagine this DMV to be about as useful as the missing indexes DMV and for the same reasons.
I love the questions I get while I’m presenting because they force me to think and learn. The question in the title is one I received recently. The answer, now that I’m not standing in front of people, is easy. Of course the space is not pre-allocated. Query Store tables are just system tables. They have a limit on how big they can grow (100 MB by default), but that space isn’t going to be pre-allocated in any way. The space will just get used as and when it’s needed, just like any other system table. However, don’t take my word for it, let’s prove that.
Read on for the proof.
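One quick way to watch this behavior yourself is to compare the configured limit with the space actually in use as the workload runs; usage starts small and grows, with nothing pre-allocated up front:

```sql
-- Configured cap vs. actual usage; current_storage_size_mb grows over time
-- rather than starting at the maximum.
SELECT actual_state_desc,
       max_storage_size_mb,
       current_storage_size_mb
FROM sys.database_query_store_options;
```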
A question came up recently about plan guides and compatibility mode, and it got me thinking about forced plans in Query Store and compatibility mode. Imagine you upgraded to SQL Server 2016 and kept the compatibility mode for your database at 110 to use the legacy Cardinality Estimator. At some point, you have a plan that you force for a specific query, and that works great. As time goes on, you do testing with the new CE and eventually are ready to make the switch to compatibility mode 130. When you do that, does the forced plan continue to use compatibility mode 110? I had a guess at the answer but thought it was worth testing.
There are some interesting results here.
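A repro for this test might look roughly like the following; the query_id and plan_id values (42 and 7 here) are placeholders you'd pull from sys.query_store_query and sys.query_store_plan:

```sql
-- Force a plan while the database is at compat level 110...
ALTER DATABASE CURRENT SET COMPATIBILITY_LEVEL = 110;
EXEC sp_query_store_force_plan @query_id = 42, @plan_id = 7;

-- ...then switch to 130, re-run the query, and check which plan was used.
ALTER DATABASE CURRENT SET COMPATIBILITY_LEVEL = 130;
SELECT plan_id, is_forced_plan, last_execution_time
FROM sys.query_store_plan
WHERE query_id = 42;
```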
Two of these requests have Connect items, which I’m listing below. The first allows you to change the location of Query Store data to reside somewhere in the user database besides the PRIMARY filegroup:
The other request is related to exporting that data, which is technically possible now, but it’s not a supported method, so it’s not something I really want to implement in a client environment. I’ve had many people describe their testing process, which includes restoring the database nightly. If they’re using Query Store as part of testing, that data is lost every night by the restore.
Click through for more and vote up those items relevant to you.
Interestingly enough, no runtime statistics are returned for our natively compiled stored procedure. As it turns out, we need to specifically enable the collection of statistics for natively compiled stored procedures through the sp_xtp_control_query_exec_stats stored procedure. Keep in mind, though, that enabling the collection of statistics for natively compiled stored procedures can cause performance degradation. Thankfully, we can very easily control the collection of statistics by enabling it for only specific natively compiled stored procedures and turning it on and off directly.
Read the whole thing. Long story short: they work together nicely.
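Enabling collection for a single procedure looks something like this; dbo.usp_InsertOrder is a placeholder name:

```sql
-- Turn statistics collection on for one natively compiled procedure only.
DECLARE @db int = DB_ID(), @obj int = OBJECT_ID(N'dbo.usp_InsertOrder');
EXEC sys.sp_xtp_control_query_exec_stats
     @new_collection_value = 1, @database_id = @db, @xtp_object_id = @obj;

-- ...run the workload and read sys.dm_exec_query_stats, then turn it back off.
EXEC sys.sp_xtp_control_query_exec_stats
     @new_collection_value = 0, @database_id = @db, @xtp_object_id = @obj;
```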
Here, you will see the four default reports that come with this option.
Regressed Queries – shows query history and changes in statistics
Overall Resource Consumption – history of resources used in the database
Top Resource Consuming Queries – the top x queries using the most resources
Tracked Queries – enables you to see multiple query plans for a T-SQL statement and compare the plans or force a plan
For DBAs, this is one of the biggest reasons to upgrade to 2016.
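If you prefer T-SQL to the SSMS reports, a rough equivalent of Top Resource Consuming Queries looks like this; here it's the top 10 by total CPU, an aggregation choice of mine rather than the report's exact logic:

```sql
SELECT TOP (10)
       q.query_id,
       qt.query_sql_text,
       SUM(rs.count_executions * rs.avg_cpu_time) AS total_cpu_time
FROM sys.query_store_query AS q
JOIN sys.query_store_query_text AS qt ON qt.query_text_id = q.query_text_id
JOIN sys.query_store_plan AS p ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats AS rs ON rs.plan_id = p.plan_id
GROUP BY q.query_id, qt.query_sql_text
ORDER BY total_cpu_time DESC;
```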
It is instantly apparent that the most resource-intensive query was the same across both workload tests; note that the query hash is consistent too. It is also apparent that this query performs worse under the new cardinality estimator (model version 120). To investigate why this particular query behaves differently under the different cardinality estimators, we’ll need to look at the actual query and the execution plans.
Looking at the information in #TempCEStats and the execution plans, the problematic query below belongs to the SLEV stored procedure.
There’s also a discussion of Query Store in there, but it’s important to understand how to figure this out even if you’re on 2014 and don’t have access to Query Store.
When I was presenting on this topic at the PASS Summit a few weeks ago, one great question came up (great question = the answer is “I don’t know”). I defaulted to “I don’t know,” but my guess was “No.” The question was: can you take a plan from one server, let’s say a test server, export it in some way, and then import it to production? In this manner, you ensure that a plan you like gets into production without having to clear the plan from cache and generate a new plan by running the query.
Read on for the answer, as well as ways to manipulate query store data.
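Part of what makes the idea tempting is that Query Store's plans are just rows you can read; for example (query_id 42 is assumed):

```sql
-- The stored plan is nvarchar(max); converting it exposes the plan XML.
SELECT p.plan_id, p.query_id,
       TRY_CONVERT(xml, p.query_plan) AS plan_xml
FROM sys.query_store_plan AS p
WHERE p.query_id = 42;
```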
Can you change the filegroup where Query Store keeps its data?
I thought there might be a trick to use a different filegroup for Query Store by using the default filegroup setting in SQL Server before enabling it, but NOPE!
I concur; Query Store can grow to be pretty large on busy systems, so diligent DBAs who want to keep PRIMARY as small as possible will suddenly find a multi-gigabyte Query Store slowing down those PRIMARY filegroup restores.
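You can see where that space lands by mapping Query Store's internal tables (named plan_persist_*) to their filegroup; everything comes back on PRIMARY. A sketch:

```sql
SELECT t.name AS internal_table,
       fg.name AS filegroup_name
FROM sys.internal_tables AS t
JOIN sys.indexes AS i ON i.object_id = t.object_id AND i.index_id IN (0, 1)
JOIN sys.filegroups AS fg ON fg.data_space_id = i.data_space_id
WHERE t.name LIKE N'plan_persist%';
```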