If you’re using SQL Server 2016’s awesome new feature, Query Store, there’s a new bug with automatic cleanup. Books Online explains:
Automatic data cleanup fails on editions other than Enterprise and Developer. Consequently, space used by the Query Store will grow over time until configured limit is reached, if data is not purged manually. If not mitigated, this issue will also fill up disk space allocated for the error logs, as every attempt to execute cleanup will produce a dump file.
Here’s hoping that bug gets fixed quickly.
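In the meantime, the manual purge the documentation mentions is a one-liner. A minimal sketch, assuming a database named YourDatabase (swap in your own):

-- Wipes all Query Store data for the database (hypothetical database name)
ALTER DATABASE [YourDatabase] SET QUERY_STORE CLEAR;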
At least for now, Query Store can only record query performance in databases that are read-write. Once a database goes read-only, you can still review the performance of past queries, but Query Store won't capture anything for queries run after the database went read-only.
At least for now. Query Store is such an awesome feature that perhaps this will change in the future. (I don’t have any inside info, only optimism.)
That’s a little bit of a letdown, but makes perfect sense.
You can see that our problem query is incredibly easy to find in the top left window based on total duration. Also notice that in the top right Plan summary window, there is currently only one available plan for the query (plan_id 49).
We need to figure out how we can get our ‘good plan’ using Trace Flag 9481 as an available plan that we can force using the Query Store.
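The general shape of that approach, as a rough sketch (the query text and the id values below are purely illustrative, not from this post): run the problem query once under the legacy cardinality estimator so that plan lands in the Query Store, then force it.

-- 1. Produce the 'good plan' by running the query under the legacy CE.
--    (QUERYTRACEON typically requires sysadmin; table and column names are placeholders.)
SELECT COUNT(*)
FROM dbo.SomeLargeTable
WHERE SomeColumn = 42
OPTION (QUERYTRACEON 9481);

-- 2. Find the query and its plans in the Query Store catalog views.
SELECT q.query_id, p.plan_id, p.query_plan
FROM sys.query_store_query AS q
JOIN sys.query_store_plan AS p ON p.query_id = q.query_id;

-- 3. Force the newly captured plan (illustrative ids).
EXEC sys.sp_query_store_force_plan @query_id = 42, @plan_id = 50;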
The cardinality estimator change in SQL Server 2014 wasn’t perfect, but when you can fix individual plans like this, it makes SQL Server much more powerful.
While presenting on the Query Store at SQLDay in Wroclaw, Poland, I was asked a pretty simple question: which takes precedence, the Query Store or a Plan Guide?
One of my favorite answers to questions is “I don’t know” because it gives me the opportunity to learn. Let’s figure this one out together.
I’ll post the code to recreate this experiment within AdventureWorks at the end of the article. I’m doing this because the code for forcing execution plans using Plan Guides can be pretty doggone long (you may need to generate your own XML from a plan on your own system, fair warning).
The answer is not quite as clear-cut as I would have expected, and I’ll be interested to see what others find.
To start with, a small stored procedure that I use all the time for bad parameter sniffing demos that reliably gets different plans with different values due to statistics skew:
CREATE PROC dbo.spAddressByCity @City NVARCHAR(30)
AS
SELECT a.AddressID, a.City, sp.Name AS StateProvinceName  -- select list reconstructed
FROM Person.Address AS a
JOIN Person.StateProvince AS sp ON a.StateProvinceID = sp.StateProvinceID
WHERE a.City = @City;
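Calling it with 'London' and then with a less common city is enough to see both behaviors (the second city is just an illustrative value):

EXEC dbo.spAddressByCity @City = N'London';
EXEC dbo.spAddressByCity @City = N'Mentor';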
If this procedure is called for a value of ‘London’ it gets a plan with a Merge Join. For most other values it gets a plan with a Nested Loops join. Here’s an example of the ‘London’ plan:
It’s a good question with a good answer.
For each database where I enable Query Store, I’d consider the workload and then look at the settings. I tend to think that the default value of 100 MB for MAX_STORAGE_SIZE_MB is really low, and I would be inclined to bump up STALE_QUERY_THRESHOLD_DAYS from 30 to something a bit higher. I’d also probably drop DATA_FLUSH_INTERVAL_SECONDS to something lower than 900 seconds (15 minutes) if my storage can support it. This setting determines how often Query Store data is flushed to disk; if it’s every 15 minutes, then I could potentially lose 15 minutes of Query Store data if my server happened to crash before it could be written to disk. I’d also think about changing INTERVAL_LENGTH_MINUTES to a value smaller than 60 if I wanted to aggregate my query data over a smaller amount of time. Sometimes interesting events happen within a 60-minute time frame, and they can get lost when data is aggregated across that window. However, aggregating more frequently means I’m adding processing overhead to the system – there’s a trade-off there to figure out.
In our environment at least, 100 MB of query store data would last, oh, a couple hours? Definitely tweak your settings and keep an eye on them early on.
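As a rough example of what tweaked settings might look like (the database name and the specific values are just illustrative, not recommendations):

ALTER DATABASE [YourDatabase]
SET QUERY_STORE = ON
    (OPERATION_MODE = READ_WRITE,
     MAX_STORAGE_SIZE_MB = 1024,                           -- bumped up from the 100 MB default
     CLEANUP_POLICY = (STALE_QUERY_THRESHOLD_DAYS = 60),   -- keep history longer than 30 days
     DATA_FLUSH_INTERVAL_SECONDS = 300,                    -- flush to disk more often than every 15 minutes
     INTERVAL_LENGTH_MINUTES = 15);                        -- aggregate stats over smaller windows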
Now, there are catalog views that allow you to view the Query Store data. You can copy that data into another database using SELECT INTO, and then do comparisons, but wouldn’t it be nice to have some kind of export option? There’s a Connect item for that:
Export Query Store tables separately from the database tables: https://connect.microsoft.com/SQLServer/feedback/details/2620017/export-query-store-tables-separately-from-the-database-tables
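Until something like that exists, a rough sketch of the SELECT INTO workaround mentioned above (QueryStoreArchive is a hypothetical target database, and this only copies a few of the catalog views):

USE [YourDatabase];
SELECT * INTO QueryStoreArchive.dbo.query_store_query         FROM sys.query_store_query;
SELECT * INTO QueryStoreArchive.dbo.query_store_query_text    FROM sys.query_store_query_text;
SELECT * INTO QueryStoreArchive.dbo.query_store_plan          FROM sys.query_store_plan;
SELECT * INTO QueryStoreArchive.dbo.query_store_runtime_stats FROM sys.query_store_runtime_stats;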
If you think this is something that would be useful for Query Store, please upvote it! Again, Query Store is available in ALL editions of SQL Server 2016, so this is definitely a feature you CAN use and will want to use! This export option won’t make it into RTM, but with the change in how SQL Server releases CUs, the more important the SQL Server team sees this option to be (based on votes), the faster it might make it into a release.
Query Store is one of the most exciting features for database administrators to hit in quite a while. There will be some V1 pains, but this feature is well worth the upgrade to 2016.
A question came up in class today about easily seeing the degree of parallelism for parallel query plans, so I’ve updated my waiting tasks script to pull in the dop field from sys.dm_exec_query_memory_grants. Here it is for your use.
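As a minimal sketch of the underlying idea (not the full script), the dop column from sys.dm_exec_query_memory_grants can be joined to the waiting tasks like this:

-- Sketch only: show waiting tasks along with the degree of parallelism
-- of any query that currently holds a memory grant on that session.
SELECT
    wt.session_id,
    wt.wait_type,
    wt.wait_duration_ms,
    mg.dop                               -- degree of parallelism
FROM sys.dm_os_waiting_tasks AS wt
LEFT JOIN sys.dm_exec_query_memory_grants AS mg
    ON mg.session_id = wt.session_id
WHERE wt.session_id > 50;                -- filter out most system sessions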
This is a good one to have in your grab bag of scripts.
In short, the plan is stored in the Query Store, even though the plan isn’t stored in cache. Now, this has implications. I’m not saying they’re good and I’m not saying they’re bad, but there are implications. If you’re in a situation where you need to use Optimize For Ad Hoc to help manage your cache, you may now see negative impacts on your Query Store, since it’s going to capture all the plans that you avoided caching. There are mechanisms for managing Query Store behavior.
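One of those mechanisms, as a sketch (the database name is illustrative): switch the capture mode from ALL to AUTO so that insignificant, one-off queries aren’t captured in the first place.

-- Capture only queries the engine considers significant, instead of every ad hoc plan.
ALTER DATABASE [YourDatabase]
SET QUERY_STORE (QUERY_CAPTURE_MODE = AUTO);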
I’d consider this correct behavior. I want to be able to see those one-off query plans. A quick note on Query Store, though: it chews up a lot of disk space in a busy environment, so if you’re planning on holding query store entries for a while, keep plenty of disk space available.
Now, no matter what value I pass it, the RECOMPILE hint is effectively ignored. I have the same plan every time (in my case, the London plan). Ah, but is the RECOMPILE hint ignored? In fact, no, it isn’t. If I capture the extended event sql_statement_recompile, I can see every single time I execute this procedure that it’s going through a full recompile… which is then tossed and the plan I chose is forced upon the optimizer. Every time.
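If you want to watch those recompiles yourself, here’s a minimal sketch of an Extended Events session (the session name and file target are illustrative):

-- Capture statement recompiles along with the session and statement text.
CREATE EVENT SESSION [TrackRecompiles] ON SERVER
ADD EVENT sqlserver.sql_statement_recompile
    (ACTION (sqlserver.session_id, sqlserver.sql_text))
ADD TARGET package0.event_file (SET filename = N'TrackRecompiles');
GO
ALTER EVENT SESSION [TrackRecompiles] ON SERVER STATE = START;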
This tells me that if you were using OPTION(RECOMPILE) to get around nasty parameter sniffing problems before, and if you go to Query Store, and if you force a particular plan to get around said nasty parameter sniffing problems, then you probably want to update the query to get rid of that OPTION(RECOMPILE) in order to prevent unnecessary plan compilation work.