Plan change regression happens when SQL Server changes the plan for a T-SQL query and the new plan performs worse than the previous one. SQL Server 2017 has an Automatic Tuning feature that enables you to easily find and fix plan change regressions. In this post, you will see a demo script that you can use to cause a plan change regression and manually fix it using the new sys.dm_db_tuning_recommendations view.
If you are not familiar with plan regressions and the new tuning recommendations in SQL Server 2017, I would recommend reading these two posts:
- What is plan regression in SQL Server?
- How to find query plan choice regressions with SQL Server 2017 CTP2
They should be enough to understand the steps in this demo.
Our experience with plan regression recommendations has been uniformly positive so far. Those tests have been in dev and QA environments, but there hasn’t been a terrible recommendation yet.
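If you want to poke at those recommendations yourself, here is a minimal sketch of a query against the DMV. The details and state columns hold JSON, and the paths used below follow the documented plan-forcing recommendation format.

```sql
-- Minimal sketch: list any plan change regression recommendations,
-- including the FORCE PLAN script the engine suggests applying.
SELECT  name,
        reason,
        score,
        JSON_VALUE(details, '$.implementationDetails.script') AS force_plan_script,
        JSON_VALUE(state,   '$.currentValue')                 AS current_state
FROM sys.dm_db_tuning_recommendations;
```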
Let’s assume that you are not driven by logins, workers, and session counts: how does one select the right level? What exactly do DTUs (Database Transaction Units) mean? I suggest reading this post by Andy Mallon: https://sqlperformance.com/2017/03/azure/what-the-heck-is-a-dtu
I am going to undersize my database by creating an S0 database and running some day-to-day tasks; let’s see what happens. I will open up connections and issue some queries via my application. I would not class these queries as bad; what I am trying to drive at here is getting the sizing right for your workload.
This is one of the trickier things to get, I think. We’re taking an existing workload and want to make sure it doesn’t fall over…but we aren’t measuring in terms of DTUs locally. I know that there are some tools that help the conversion process, but if you’re starting a new product or don’t have a great handle on your normal workload, it’s really easy to get caught between the Scylla and Charybdis of undersizing and overpaying.
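If you want to follow along with an intentionally undersized database, a sketch of the T-SQL looks like this (the database name is made up; run it against the logical server’s master database). Scaling to a larger service objective later is a single ALTER DATABASE.

```sql
-- Hypothetical database, deliberately started at the small S0 tier.
CREATE DATABASE UndersizedDemo
    ( EDITION = 'Standard', SERVICE_OBJECTIVE = 'S0' );
GO
-- If the workload shows S0 is too small, scale up in place.
ALTER DATABASE UndersizedDemo
    MODIFY ( SERVICE_OBJECTIVE = 'S1' );
```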
My first question to the attendee was what the Service Level Agreement (SLA) says. As we know from previous posts, a disaster recovery strategy is dictated by business requirements, not technical ones. The Recovery Point Objective (how much data loss is acceptable) and Recovery Time Objective (how much time there is to bring everything back) will guide my proposal.
He told me that the SLA was 24 hours, so I started writing on the whiteboard while I was thinking aloud.
On average, a fast storage layer can read and write around 200 MB/s, so it would take 5.12 seconds to write 1 GB, or just under 85 minutes to restore the database back to disk, not counting the log file or crash recovery. I never assume that Instant File Initialization is enabled, plus I won’t know how big the transaction log file is going to be, and that needs to be zeroed out.
I like this post a lot because it lets us get a glimpse into Randolph’s thought process and gives some hard numbers that you should have in mind.
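To make the arithmetic explicit, here is the back-of-the-envelope version; the 1 TB data size is an assumption that fits the figures quoted above, and 200 MB/s is the throughput Randolph uses.

```sql
-- Rough restore-time estimate: assumed 1 TB of data at 200 MB/s.
-- 1 GB takes 1024 / 200 = 5.12 seconds, so ~1,000 GB is about 85 minutes,
-- before the log file is zeroed out and crash recovery runs.
DECLARE @data_size_gb      decimal(10,2) = 1000;
DECLARE @throughput_mb_sec decimal(10,2) = 200;

SELECT CAST(@data_size_gb * 1024 / @throughput_mb_sec / 60.0 AS decimal(10,1))
       AS estimated_restore_minutes;   -- ~85
```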
What we settled on was building a CLR that would make the web calls, feeding it our data via a FOR JSON query. We would then log the results into a separate table to make sure everything worked as expected. I made this as generic as possible so that others could use it.
So let’s go through the steps.
- Create the .NET code necessary
- Create a CLR script for compilation
- Compile the CLR
- CREATE the ASSEMBLY
- CREATE the PROCEDURE
- Call the procedure
- Run it automatically
For a more detailed look at building a CLR function, after you go through Michael’s post, check out Solomon Rutzky’s Stairway to CLR.
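As a rough idea of what the CREATE ASSEMBLY and CREATE PROCEDURE steps can look like, here is a hedged sketch; the assembly name, DLL path, class, method, table, and URL are all placeholders rather than the ones from Michael’s post.

```sql
-- Register the compiled assembly.  EXTERNAL_ACCESS is needed for outbound
-- web calls; on SQL Server 2017, "clr strict security" also expects the
-- assembly to be signed or added to the trusted assemblies list.
CREATE ASSEMBLY WebCallClr
FROM 'C:\CLR\WebCallClr.dll'
WITH PERMISSION_SET = EXTERNAL_ACCESS;
GO

-- Expose the .NET method as a T-SQL stored procedure.
CREATE PROCEDURE dbo.SendJsonToEndpoint
    @uri  nvarchar(4000),
    @json nvarchar(max)
AS EXTERNAL NAME WebCallClr.[StoredProcedures].SendJsonToEndpoint;
GO

-- Feed it data shaped with FOR JSON, as described above.
DECLARE @payload nvarchar(max) =
    (SELECT TOP (10) OrderID, OrderDate FROM dbo.Orders FOR JSON PATH);
EXEC dbo.SendJsonToEndpoint
     @uri  = N'https://example.com/api/orders',
     @json = @payload;
```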
The most recent addition to the Power BI Map family. It’s supported by a company called Esri, and is a very feature-rich map visual! What makes this visual stand out is that you can overlay whatever data you have with public geographical data such as demographics, weather, and even historical data. It’s highly customizable and offers multiple ways to visualize data with maps, and that’s even before you start adding the public data sets! Can you tell that I like this visual a lot? Because I do!
Now I could easily spend an entire blog post JUST outlining all the ways to use this visual, but I’ll stick to the highlight reel. It can visualize data with maps using the bubble or fill method similar to the other map visuals, albeit with a few more customizations and tweaks. However, one of the unique features of this visual is the heat map option! Any of you familiar with Power Maps in Excel have probably seen this before…well, now we have it in Power BI. I find this data visualization super useful in identifying data clustering based on location.
Read on for additional varieties of maps you can create. I personally think the bubble map is ugly and that one map with pie charts (thankfully not shown in Reid’s post) is hideous, but there are some very good map visuals available to us.
I’ve wanted to do some Event Notification testing for SQL Server deprecation events for quite some time. The thought process here is that I could send myself an alert to identify usage of SQL Server features that will be removed from the next major version (or future version) of SQL Server. I could then forward this info to development and let them take action…or not (I kid, I kid). Today is the day I finally got around to the testing. I didn’t get very far, though.
Without rehashing the basics of event notifications (this post may help if you need some context), I created an EVENT NOTIFICATION for the TRC_DEPRECATION event group. That group includes the DEPRECATION_FINAL_SUPPORT and DEPRECATION_ANNOUNCEMENT child events. I also created a QUEUE, a SERVICE, and an activation PROCEDURE (for the QUEUE). The proc is simplistic. It takes the EVENTDATA() XML data, transforms it into an HTML <table>, and emails the info to me.
Watch out, though: Dave discovered something quite funny when he set this up.
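For context, the moving parts Dave describes fit together roughly like this; the object names below are placeholders, and the activation procedure and Database Mail piece are left out.

```sql
-- Queue and service that receive the event notification messages.
CREATE QUEUE dbo.DeprecationQueue;

CREATE SERVICE DeprecationService
    ON QUEUE dbo.DeprecationQueue
    ([http://schemas.microsoft.com/SQL/Notifications/PostEventNotification]);

-- Server-level notification for the TRC_DEPRECATION event group, which
-- includes DEPRECATION_ANNOUNCEMENT and DEPRECATION_FINAL_SUPPORT.
CREATE EVENT NOTIFICATION DeprecationEvents
    ON SERVER
    FOR TRC_DEPRECATION
    TO SERVICE 'DeprecationService', 'current database';
```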
In a virtual data center, CPU is spread across many guest VMs. This is one of the key drivers behind the effort to virtualize: CPU cores mostly sit unused. For example, we can take a host with maybe 48 cores and virtualize many machines that present logically with > 48 cores. The hypervisor can swap cores in and out as it needs based on what the guest VMs are doing. If the baseline for a guest VM is only 10% CPU usage, then this is easy. However, when an intense application like SQL Server is virtualized, it must have CPU available or performance will suffer noticeably.
Generally for CPU on a guest VM:
- Reservations on CPU are not often possible, but consider them if your data center allows for it.
- You want more cores than sockets. So if you are aiming for 8 cores, you want something like 2 sockets with 4 cores each instead of 8 sockets with 1 core each.
- If priority can be given to the SQL Server VM for CPU, then change the Shares Resource Allocation from Normal to High.
Click through for more helpful hints.
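One quick sanity check from inside the guest is to look at the topology SQL Server actually sees; here is a sketch (socket_count and cores_per_socket only appear on newer builds, roughly SQL Server 2016 SP2 and later).

```sql
-- CPU layout as presented to the guest: logical CPUs, hyperthreading ratio,
-- and (on newer builds) the socket/core split discussed above.
SELECT cpu_count,
       hyperthread_ratio,
       socket_count,
       cores_per_socket
FROM sys.dm_os_sys_info;
```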
Currently, I think there are two main approaches to Data Warehouse Automation:
- Data Warehouse Generation: You provide sources, mappings, datatype mappings, etc. The tool generates code (or artifacts).
- Data Warehouse Automation (DWA): The tool not only generates code and artifacts, but also manages the existing Data Warehouse by offering continuous insight into data flows, actual lineage, row counts, etc.
The difference might seem small, but IMHO it is most clearly visible whenever changes occur in the Data Warehouse: the second class of tools can handle those changes (while preserving history), while the first class of tools provides you with the new structures but leaves you to handle the preservation of history yourself (as you would have without DWA).
Read on for a contrast of these two approaches.
I needed to query SQL Server’s metadata for partitioned tables, especially the partitioning column and the partition scheme used for partitioning. The former is quite nicely documented in the SQL Server documentation (see link below), but the latter is not (yet). I have written to the team about this; hopefully the documentation will be updated. Until then, I wrote this blog post to help others searching for an answer.
Click through for the script.
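Here is a hedged sketch of the general shape such a metadata query tends to take (this is not the script from the post): a table is partitioned when its heap or clustered index sits on a partition scheme, and sys.index_columns.partition_ordinal flags the partitioning column.

```sql
-- General shape of a partition-metadata query (not the post's script).
SELECT  SCHEMA_NAME(t.schema_id) AS schema_name,
        t.name                   AS table_name,
        ps.name                  AS partition_scheme,
        c.name                   AS partitioning_column
FROM sys.tables AS t
JOIN sys.indexes AS i
      ON i.object_id = t.object_id
     AND i.index_id IN (0, 1)                -- heap or clustered index
JOIN sys.partition_schemes AS ps
      ON ps.data_space_id = i.data_space_id
JOIN sys.index_columns AS ic
      ON ic.object_id = i.object_id
     AND ic.index_id  = i.index_id
     AND ic.partition_ordinal > 0            -- the partitioning column
JOIN sys.columns AS c
      ON c.object_id = ic.object_id
     AND c.column_id = ic.column_id;
```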
Remember, this disk is as the title of this section says…TEMPORARY! Do NOT put ANYTHING on this drive that you cannot afford to lose. Don’t say nobody warned you, either, because the drive itself contains a nice little txt file warning you. Here is the EXACT text:
WARNING : THIS IS A TEMPORARY DISK.
Any data stored on this drive is SUBJECT TO LOSS and THERE IS NO WAY TO RECOVER IT.
Please do not use this disk for storing any personal or application data.
It’s good to see what you need to do to get this working. I’ve found it easier just to set up a permanent SSD, but if you’re on a tight budget, this can save you some cash.