Press "Enter" to skip to content

Day: July 21, 2017

SQL Server 2017 Finds Plan Regressions

Jovan Popovic shows off some automatic tuning functionality in SQL Server 2017:

Plan change regression happens when SQL Database changes a plan for some T-SQL query, and the new plan has worse performance than the previous one. SQL Server 2017 has an Automatic Tuning feature that enables you to easily find plan change regressions and fix them. In this post, you will see a demo script that you can use to cause a plan change regression and manually fix it using the new sys.dm_db_tuning_recommendations view.

If you are not familiar with plan regressions and the new tuning recommendations in SQL Server 2017, I would recommend reading these two posts:

This would be enough to understand the steps in this demo.

Our experience with plan regression recommendations has been uniformly positive so far.  Those tests have been in dev and QA environments, but we haven't yet seen a terrible recommendation.
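
If you want to poke at the recommendations yourself, here is a minimal sketch of the kind of query involved (not Jovan's demo script; it assumes the documented layout of the view's details JSON column), surfacing the regressed plan, the recommended plan, and the script SQL Server suggests for forcing the last known good plan:

    -- A minimal sketch: surface plan-forcing recommendations from the new DMV.
    -- Assumes the documented planForceDetails layout of the details JSON column.
    SELECT reason,
           score,
           JSON_VALUE(details, '$.implementationDetails.script') AS force_plan_script,
           planForceDetails.query_id,
           planForceDetails.regressed_plan_id,
           planForceDetails.recommended_plan_id
    FROM sys.dm_db_tuning_recommendations
    CROSS APPLY OPENJSON(details, '$.planForceDetails')
        WITH (query_id            int '$.queryId',
              regressed_plan_id   int '$.regressedPlanId',
              recommended_plan_id int '$.recommendedPlanId') AS planForceDetails;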


Azure SQL Database Performance Tips

Arun Sirpal looks at some performance issues in Azure SQL Database:

Let’s assume that you are not driven by logins, workers, and session counts; how does one select the right level? What exactly do DTUs (Database Transaction Units) mean? I suggest reading this post by Andy Mallon: https://sqlperformance.com/2017/03/azure/what-the-heck-is-a-dtu

I am going to undersize my database and create an S0 database and run some day-to-day tasks – let’s see what happens. I will open up connections and issue some queries via my application. I would not class these queries as bad; what I am trying to drive at here is getting the sizing right for your workload.

This is one of the trickier things to get, I think.  We’re taking an existing workload and want to make sure it doesn’t fall over…but we aren’t measuring in terms of DTUs locally.  I know that there are some tools that help the conversion process, but if you’re starting a new product or don’t have a great handle on normal workload, it’s really easy to fall into the Scylla and Charybdis of undersizing and overpaying.
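
One way to sanity-check a tier choice after the fact: Azure SQL Database exposes sys.dm_db_resource_stats, which records resource consumption as a percentage of the current tier's limits every 15 seconds or so. A rough sketch follows; taking the maximum of the three percentages as "DTU usage" is a common heuristic rather than an official formula:

    -- Recent resource usage as a percentage of the current service tier's limits.
    -- Numbers hovering near 100% suggest the database is undersized for the workload.
    SELECT end_time,
           avg_cpu_percent,
           avg_data_io_percent,
           avg_log_write_percent,
           (SELECT MAX(v)
            FROM (VALUES (avg_cpu_percent),
                         (avg_data_io_percent),
                         (avg_log_write_percent)) AS pct(v)) AS approx_dtu_percent
    FROM sys.dm_db_resource_stats
    ORDER BY end_time DESC;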


Planning For A Good Disaster

Randolph West loves it when a (disaster recovery) plan comes together:

My first question to the attendee was what the Service Level Agreement (SLA) says. As we know from previous posts, a disaster recovery strategy is dictated by business requirements, not technical ones. The Recovery Point Objective (how much data loss is acceptable) and Recovery Time Objective (how much time there is to bring everything back) will guide my proposal.

He told me that the SLA was 24 hours, so I started writing on the white board while I was thinking aloud.

On average, a fast storage layer can read and write around 200 MB/s, so it would take 5.12 seconds to write 1 GB, or just under 85 minutes to restore the database back to disk, not counting the log file or crash recovery. I never assume that Instant File Initialization is enabled, plus I won’t know how big the transaction log file is going to be, and that needs to be zeroed out.

I like this post a lot because it lets us get a glimpse into Randolph’s thought process and gives some hard numbers that you should have in mind.
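
To make the arithmetic concrete, here is the same back-of-the-envelope calculation as a script. The 1 TB database size is my assumption for illustration (the excerpt doesn't state a size), while the 200 MB/s throughput figure comes from the quote:

    -- Back-of-the-envelope restore time: data size / sequential throughput.
    -- Ignores the log file, zeroing, and crash recovery, as Randolph notes.
    DECLARE @data_mb       decimal(18, 2) = 1000 * 1000;  -- assumed ~1 TB data file
    DECLARE @mb_per_second decimal(18, 2) = 200;          -- throughput from the quote

    SELECT @data_mb / @mb_per_second        AS restore_seconds,  -- 5000
           @data_mb / @mb_per_second / 60.0 AS restore_minutes;  -- ~83, "just under 85 minutes"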


Making A CLR Function

Michael Bourgon walks us through creating a CLR function:

What we settled on was building a CLR that would make the web calls, feeding it our data via a FOR JSON query.  We would then log the results into a separate table to make sure everything worked as expected.  I made this as generic as possible so that others could use it.

So let’s go through the steps.

  1. Create the .Net code necessary

  2. Create a CLR script for compilation

  3. Compile the CLR

  4. CREATE the ASSEMBLY

  5. CREATE the PROCEDURE

  6. Call the procedure

  7. Run it automatically

For a more detailed look at building a CLR function, after you go through Michael’s post, check out Solomon Rutzky’s Stairway to CLR.
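
As a rough illustration of steps 4 through 6 (not Michael's actual code), assume the compiled DLL sits at C:\CLR\WebCaller.dll and exposes a static WebCaller.Post(string url, string json) method; every name here is hypothetical:

    -- Step 4: register the assembly. Web calls need more than PERMISSION_SET = SAFE;
    -- on SQL Server 2017, CLR strict security also expects the assembly to be signed
    -- or whitelisted via sp_add_trusted_assembly.
    CREATE ASSEMBLY WebCaller
    FROM 'C:\CLR\WebCaller.dll'
    WITH PERMISSION_SET = EXTERNAL_ACCESS;
    GO

    -- Step 5: wrap the .NET method in a T-SQL procedure.
    CREATE PROCEDURE dbo.CallWebService
        @url  nvarchar(4000),
        @json nvarchar(max)
    AS EXTERNAL NAME WebCaller.[WebCaller].Post;
    GO

    -- Step 6: call it, feeding data shaped by FOR JSON.
    DECLARE @payload nvarchar(max) =
        (SELECT TOP (10) OrderID, OrderDate FROM dbo.Orders FOR JSON PATH);  -- hypothetical table
    EXEC dbo.CallWebService @url = N'https://example.com/api/orders', @json = @payload;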


Maps In Power BI

Reid Havens shows off the different map visuals within Power BI:

ArcGIS Map

The most recent addition to the Power BI Map family. It’s supported by a company called Esri, and is a very feature-rich map visual! What makes this visual stand out is that you can overlay whatever data you have with public geographical data such as demographics, weather, and even historical data. It’s highly customizable and offers multiple ways to visualize data with maps, and that’s even before you start adding the public data sets! Can you tell that I like this visual a lot? Because I do!

Now I could easily spend an entire blog post JUST outlining all the ways to use this visual, but I’ll stick to the highlight reel. It can visualize data with maps using the bubble or fill method similar to the other map visuals, albeit with a few more customizations and tweaks. However, one of the unique features of this visual is the heat map option! Any of you familiar with Power Maps in Excel have probably seen this before…well, now we have it in Power BI. I find this data visualization super useful in identifying data clustering based on location.

Read on for additional varieties of maps you can create.  I personally think the bubble map is ugly and that one map with pie charts (thankfully not shown in Reid’s post) is hideous, but there are some very good map visuals available to us.


Identifying Deprecated Features

Dave Mason provides a method for determining if you’re using deprecated functionality on your SQL Server instance:

I’ve wanted to do some Event Notification testing for SQL Server deprecation events for quite some time. The thought process here is that I could send myself an alert to identify usage of SQL Server features that will be removed from the next major version (or future version) of SQL Server. I could then forward this info to development and let them take action…or not (I kid, I kid). Today is the day I finally got around to the testing. I didn’t get very far, though.

Without rehashing the basics of event notifications (this post may help if you need some context), I created an EVENT NOTIFICATION for the TRC_DEPRECATION event group. That group includes the DEPRECATION_FINAL_SUPPORT and DEPRECATION_ANNOUNCEMENT child events. I also created a QUEUE, a SERVICE, and an activation PROCEDURE (for the QUEUE). The proc is simplistic. It takes the EVENTDATA() XML data, transforms it into an HTML <table>, and emails the info to me.

Watch out, though:  Dave discovered something quite funny when he set this up.
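
For reference, the server-level plumbing looks roughly like this (a sketch, not Dave's exact code; the object names are mine, and the activation procedure is omitted):

    -- Queue and service that will receive the deprecation events.
    CREATE QUEUE dbo.DeprecationQueue;

    CREATE SERVICE DeprecationService
        ON QUEUE dbo.DeprecationQueue
        ([http://schemas.microsoft.com/SQL/Notifications/PostEventNotification]);

    -- Server-scoped notification for the TRC_DEPRECATION event group, which
    -- covers DEPRECATION_ANNOUNCEMENT and DEPRECATION_FINAL_SUPPORT.
    CREATE EVENT NOTIFICATION DeprecationNotification
        ON SERVER
        FOR TRC_DEPRECATION
        TO SERVICE 'DeprecationService', 'current database';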


VMware Configuration For SQL Server

Jeff Mlakar talks about things you want to look at if you’re running SQL Server on VMware:

In a virtual data center, CPU is spread across many guest VMs. This is one of the key drivers behind the effort to virtualize – CPU cores mostly sit unused. For example, we can take a host with maybe 48 cores and virtualize many machines that present logically with > 48 cores. The hypervisor can swap in and out cores as it needs based on what the guest VMs are doing. If the baseline for a guest VM is only 10% CPU usage then this is easy. However, when an intense application like SQL Server is virtualized, it must have CPU available; otherwise, performance will suffer noticeably.

Generally for CPU on a guest VM:

  • Reservations on CPU are not often possible, but consider them if your data center allows for it.

  • You want more cores than sockets. So if you are aiming for 8 cores you want something like 2 sockets with 4 cores each instead of 8 sockets with 1 core each.

  • If priority can be given to the SQL VM for CPU then change the Shares Resource Allocation from normal to high.

Click through for more helpful hints.
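
From inside the guest, you can at least confirm what topology SQL Server thinks it has been given. A quick sketch; note that socket_count and cores_per_socket only appear on newer builds (roughly SQL Server 2016 SP2 and 2017 onward):

    -- What the guest VM presents to SQL Server: logical CPUs, schedulers,
    -- the socket/core split (newer builds only), and the hypervisor type.
    SELECT cpu_count,
           hyperthread_ratio,
           scheduler_count,
           socket_count,        -- newer builds only
           cores_per_socket,    -- newer builds only
           virtual_machine_type_desc
    FROM sys.dm_os_sys_info;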


Data Warehouse Automation

Koos van Strien provides some thoughts on data warehouse automation tools:

Currently, I think there are two main approaches to Data Warehouse Automation:

  1. Data Warehouse Generation: You provide sources, mappings, datatype mappings, etc. The tool generates code (or artifacts).
  2. Data Warehouse Automation (DWA): The tool not only generates code/artifacts, but also manages the existing Data Warehouse by offering continuous insight into data flows, actual lineage, row numbers, etc.

The difference might seem small, but IMHO it is visible most clearly whenever changes occur in the Data Warehouse – the second class of tools can handle those changes (while preserving history). The first class of tools provides you with the new structures, but you need to handle the preservation of history yourself (as you would’ve without DWA).

Read on for a contrast of these two approaches.


Discovering Partition Schemes

Kennie Nybo Pontoppidan shows us how to find the partition scheme for a particular table:

I needed to query SQL Server’s metadata about partitioned tables, especially the column and the partition scheme used for partitioning. The former is quite nicely documented in the SQL Server documentation (see link below), but the latter is not (yet). I have written to the team about this; hopefully the documentation will be updated. Until then, I wrote this blog post to help others searching for an answer to this.

Click through for the script.
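
Kennie's script is the one to grab, but as a rough approximation of the idea, joining the catalog views looks something like this (a sketch only; the table name is a placeholder):

    -- Find the partition scheme, partition function, and partitioning column
    -- for a given table (partition_ordinal = 1 marks the partitioning column).
    SELECT t.name  AS table_name,
           ps.name AS partition_scheme,
           pf.name AS partition_function,
           c.name  AS partitioning_column
    FROM sys.tables AS t
    JOIN sys.indexes AS i
        ON i.object_id = t.object_id
       AND i.index_id IN (0, 1)                 -- heap or clustered index
    JOIN sys.partition_schemes AS ps
        ON ps.data_space_id = i.data_space_id
    JOIN sys.partition_functions AS pf
        ON pf.function_id = ps.function_id
    JOIN sys.index_columns AS ic
        ON ic.object_id = i.object_id
       AND ic.index_id = i.index_id
       AND ic.partition_ordinal = 1
    JOIN sys.columns AS c
        ON c.object_id = t.object_id
       AND c.column_id = ic.column_id
    WHERE t.name = N'MyPartitionedTable';       -- placeholder table name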
