Lonny Niederstadt has a new series on usability “soft spots” with Query Store. Part one looks at plan identifiers:
Yeah. That’s a lotta plans in the right-hand legend. 22 of them. In a not very helpful order. In fact… though I’ve tried to figure it out, I don’t know what type of order is used for that right-hand legend. It’s not chronological. It’s not based on duration which is the metric displayed by the graph. I dunno.
Let’s refresh the “Tracked Queries” activity.
Ohhhh. I forced plan_id 2 (in the purple box below) but what showed up was plan_id 3220 (in the yellow box below).
Lonny promises more, so be on the lookout.
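For context, the forcing step Lonny describes boils down to a pair of Query Store calls. This is a hedged sketch, not his repro — the `@query_id` and `@plan_id` values are placeholders — and the second query is one way to see the plan_id mismatch he hit, which can happen because a forced plan that recompiles may be recorded under a new plan_id even though forcing succeeded:

```sql
-- Force a specific plan for a query (ids are placeholders).
EXEC sys.sp_query_store_force_plan @query_id = 42, @plan_id = 2;

-- List every plan Query Store holds for that query and which one is
-- flagged as forced; a newer plan_id may appear alongside the forced one.
SELECT p.plan_id,
       p.query_id,
       p.is_forced_plan,
       p.last_execution_time
FROM sys.query_store_plan AS p
WHERE p.query_id = 42
ORDER BY p.last_execution_time DESC;
```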
Some of the more perplexing problems to troubleshoot in SQL Server are those related to memory grants. Some queries need more memory than others to execute, based on what operations need to be performed (e.g., sort, hash). SQL Server's optimizer estimates how much memory is needed, and the query must obtain the memory grant in order to start executing. It holds that grant for the duration of query execution, which means that if the optimizer overestimates memory, you can run into concurrency issues. If it underestimates memory, then you can see spills in tempdb. Neither is ideal, and when you simply have too many queries asking for more memory than is available to grant, you'll see RESOURCE_SEMAPHORE waits. There are multiple ways to attack this issue, and one of my new favorite methods is to use Query Store.
Click through for a demonstration.
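As a starting point — this is a rough sketch, not the demonstration from the linked post — Query Store's runtime stats include memory columns you can rank by. The memory counters in `sys.query_store_runtime_stats` are reported in 8 KB pages, hence the `* 8` to get KB:

```sql
-- Top queries by average memory grant actually used, per Query Store.
SELECT TOP (10)
       q.query_id,
       qt.query_sql_text,
       AVG(rs.avg_query_max_used_memory) * 8 AS avg_max_used_memory_kb,
       MAX(rs.max_query_max_used_memory) * 8 AS max_used_memory_kb
FROM sys.query_store_query AS q
JOIN sys.query_store_query_text AS qt
    ON q.query_text_id = qt.query_text_id
JOIN sys.query_store_plan AS p
    ON q.query_id = p.query_id
JOIN sys.query_store_runtime_stats AS rs
    ON p.plan_id = rs.plan_id
GROUP BY q.query_id, qt.query_sql_text
ORDER BY avg_max_used_memory_kb DESC;
```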
In addition to the default value, the minimum number of query executions in the AUTO query_capture_mode required for storing the query, its plan(s), and runtime statistics has been increased from 3 to 30 in SQL Server 2019. That means Query Store does not store anything for the first 29 query executions. It reserves query_ids, but it starts storing the execution plan and runtime stats from the 30th execution in a single day.
These look like reasonable changes to me.
Friends, CTP 3.0 dropped today, and it includes some changes for Query Store in SQL Server 2019! I am so excited!! I’ve downloaded it and have WideWorldImporters installed and have a lot of testing planned, but if you’re impatient, guess what? The documentation is already updated! If you check out the ALTER DATABASE SET page you will see that Query Store now has a new option for QUERY_CAPTURE_MODE: CUSTOM. For those of you with ad hoc workloads, this will help.
Read on to see how it can help.
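Going by the updated ALTER DATABASE SET documentation Erin points to, the new mode looks roughly like this — the thresholds below are illustrative values, not recommendations:

```sql
-- CUSTOM capture mode (SQL Server 2019 CTP 3.0+): only capture queries
-- that cross these thresholds within the stale-capture window.
ALTER DATABASE [WideWorldImporters]
SET QUERY_STORE = ON
    (
        QUERY_CAPTURE_MODE = CUSTOM,
        QUERY_CAPTURE_POLICY =
        (
            STALE_CAPTURE_POLICY_THRESHOLD = 24 HOURS,
            EXECUTION_COUNT = 30,
            TOTAL_COMPILE_CPU_TIME_MS = 1000,
            TOTAL_EXECUTION_CPU_TIME_MS = 100
        )
    );
```

For an ad hoc workload, the idea is that one-off queries never cross the thresholds and so never bloat the Query Store.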
Last week in our IEPTO2 class I was asked about queries with OPTION (RECOMPILE) and Query Store. Specifically: Do queries that have the OPTION (RECOMPILE) hint go into Query Store, AND do queries in a stored procedure created with the RECOMPILE option go into Query Store? I knew the answer to the first question, and was pretty sure I knew the answer to the second one, but I wanted to test to verify. Let's take a look.
Erin gives you a tl;dr version but I’m going to ask you to read the whole thing anyhow.
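If you want to verify the behavior on your own system — this is a quick check of my own, not Erin's script — you can search Query Store's captured text for the hint after running such a query:

```sql
-- Did queries carrying OPTION (RECOMPILE) land in Query Store?
SELECT q.query_id,
       qt.query_sql_text,
       p.plan_id
FROM sys.query_store_query_text AS qt
JOIN sys.query_store_query AS q
    ON qt.query_text_id = q.query_text_id
JOIN sys.query_store_plan AS p
    ON q.query_id = p.query_id
WHERE qt.query_sql_text LIKE N'%OPTION (RECOMPILE)%';
```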
This weekend I was in Stockholm, Sweden, talking Query Store and plan forcing with Steinar Anderson, when he mentioned the problems he had while forcing plans that had table variables in them.
Don’t panic. Of course you can force a plan with a table variable, most of the time. Steinar had a fairly focused problem. Before I go on to explain the issue, let me be really clear, Steinar figured out the issue all on his own. When he outlined the problem, I saw immediately what his conclusion was going to be. What’s spurring this blog post is that Steinar said he’d searched on the internet and no one had talked about the issue yet. So, let’s talk about it.
Read on for the problem as well as the solution.
I love the questions I get when presenting: can you force a parallel plan in Query Store? I haven't a clue. The trick, I think, is going to be in setting up the test. Let's try it out.
Reliably Getting a Parallel Plan
Because this is for testing, rather than try to build some crazy query that may or may not go parallel, I’ve decided to cheat. I’ll take advantage of a little functionality that ensures I see a parallel plan when I want to.
Read on to see Grant’s trick and also the results of this test.
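Grant's post has the details of his approach; for comparison, one commonly used way to reliably coax a parallel plan in a test environment — which may or may not be the functionality he uses — is the undocumented `ENABLE_PARALLEL_PLAN_PREFERENCE` query hint. The table here is from AdventureWorks and is just an illustration; being undocumented, the hint belongs in test environments only:

```sql
-- Nudge the optimizer toward a parallel plan for testing purposes.
SELECT soh.CustomerID,
       COUNT(*) AS order_count
FROM Sales.SalesOrderHeader AS soh
GROUP BY soh.CustomerID
OPTION (USE HINT ('ENABLE_PARALLEL_PLAN_PREFERENCE'));
```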
A couple of notes on the query. I cast the query_plan as xml so that I can use XQuery to pull out the information. It is possible that the plan might be so large that you get an error because of the limit on nesting levels within XML. Also, I aggregate the information from sys.query_store_runtime_stats. You may want to modify this to only look at limited ranges. I'll leave that to you as an exercise.
Do read Grant’s warning in the conclusion.
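The pattern Grant describes looks roughly like this — a hedged approximation rather than his exact script, and the nesting-level caveat above applies to the `CAST` to xml:

```sql
-- Shred the stored showplan text with XQuery. query_plan is nvarchar(max)
-- in sys.query_store_plan, so it must be cast to xml first; very deep
-- plans can exceed XML's 128 nesting levels and fail that cast.
WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT p.plan_id,
       x.plan_xml.value('(//RelOp/@PhysicalOp)[1]', 'nvarchar(128)')
           AS root_physical_op,
       x.plan_xml.value('(//RelOp/@EstimateRows)[1]', 'float')
           AS root_estimated_rows
FROM sys.query_store_plan AS p
CROSS APPLY (SELECT CAST(p.query_plan AS xml) AS plan_xml) AS x;
```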
I've worked for SaaS companies for the last six years or so, where our queries are largely the same across the whole system, and by default Query Store is per-database. It would be handy to have a central repository to help you determine which queries across your entire server are your worst-performing queries. Hence my idea to build one. I believe I filed a Connect item for this before it got moved to the new feedback platform, but I never created a new ticket, so this is the beginning of building something along those lines. It will be a work in progress, so to speak.

At my current company, I care about queries that take a long time to run, so I'm going to store the top 50 queries by total duration in a database handily called DBA, because that's where I store all the DBA stuff. There are some non-client-related databases I don't care about, so I create a table to specify which databases to collect data from, then a table to put the information into, and a job to run every day at midnight and sum up the data. Note that the data is stored in UTC time, so it will be off by whatever your timezone difference is, but with most SaaS companies being 24×7 shops, that shouldn't matter, and if it does you can edit the query.
This helps resolve a real pain point in Query Store: all of that data is per-database, so if you have a federated system with a large number of equivalent databases, getting cross-system stats is painful.
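A simplified sketch of the idea — not the author's actual code; the DBA database, table name, and columns here are illustrative — is a central table plus a per-database collection query you'd loop over each monitored database from an Agent job:

```sql
-- Central landing table in the DBA database.
CREATE TABLE DBA.dbo.QueryStoreTopQueries
(
    capture_date      date          NOT NULL,
    database_name     sysname       NOT NULL,
    query_id          bigint        NOT NULL,
    query_sql_text    nvarchar(max) NOT NULL,
    total_duration_ms bigint        NOT NULL
);

-- Run in the context of each monitored database. avg_duration is stored
-- in microseconds, so avg_duration * count_executions / 1000 gives ms.
INSERT INTO DBA.dbo.QueryStoreTopQueries
    (capture_date, database_name, query_id, query_sql_text, total_duration_ms)
SELECT TOP (50)
       CAST(SYSUTCDATETIME() AS date),
       DB_NAME(),
       q.query_id,
       qt.query_sql_text,
       d.total_duration_ms
FROM (
    SELECT p.query_id,
           SUM(rs.avg_duration * rs.count_executions) / 1000. AS total_duration_ms
    FROM sys.query_store_plan AS p
    JOIN sys.query_store_runtime_stats AS rs
        ON p.plan_id = rs.plan_id
    GROUP BY p.query_id
) AS d
JOIN sys.query_store_query AS q
    ON d.query_id = q.query_id
JOIN sys.query_store_query_text AS qt
    ON q.query_text_id = qt.query_text_id
ORDER BY d.total_duration_ms DESC;
```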
Since our preview announcement, hundreds of customers have been enabling Query Store to provide insight on query performance. We’re excited to share the general availability of Query Store worldwide for Azure SQL Data Warehouse.
Query Store automatically captures a history of queries, plans, and runtime statistics and retains them for your review when monitoring your data warehouse. Query Store separates data by time windows so you can see database usage patterns and understand when plan changes happen.
Given its power in the on-prem product, I’m glad that Azure SQL Data Warehouse is getting Query Store as well.
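Per the announcement, enabling it uses the same statement as the on-prem product — the database name below is a placeholder:

```sql
-- Turn on Query Store for an Azure SQL Data Warehouse database.
ALTER DATABASE [MyDataWarehouse] SET QUERY_STORE = ON;
```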