Author: Kevin Feasel

Planning For A Good Disaster

Randolph West loves it when a (disaster recovery) plan comes together:

My first question to the attendee was what the Service Level Agreement (SLA) says. As we know from previous posts, a disaster recovery strategy is dictated by business requirements, not technical ones. The Recovery Point Objective (how much data loss is acceptable) and Recovery Time Objective (how much time there is to bring everything back) will guide my proposal.

He told me that the SLA was 24 hours, so I started writing on the white board while I was thinking aloud.

On average, a fast storage layer can read and write around 200 MB/s, so it would take 5.12 seconds to write 1 GB, or just under 85 minutes to restore the database back to disk, not counting the log file or crash recovery. I never assume that Instant File Initialization is enabled, plus I won’t know how big the transaction log file is going to be, and that needs to be zeroed out.

I like this post a lot because it lets us get a glimpse into Randolph’s thought process and gives some hard numbers that you should have in mind.
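To make the arithmetic concrete: the excerpt doesn’t state the database size, but the numbers imply something in the neighborhood of a terabyte, so that’s the assumption in this quick sketch.

DECLARE @ThroughputMBPerSec decimal(10, 2) = 200;   -- the "fast storage layer" figure from the quote
DECLARE @DatabaseSizeGB decimal(10, 2) = 1000;      -- assumed; not stated in the excerpt

SELECT
    1024 / @ThroughputMBPerSec AS SecondsPerGB,                             -- 5.12 seconds to write 1 GB
    (@DatabaseSizeGB * 1024 / @ThroughputMBPerSec) / 60 AS RestoreMinutes;  -- roughly 85 minutes, before the log file and crash recovery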

Making A CLR Function

Michael Bourgon walks us through creating a CLR function:

What we settled on was building a CLR that would make the web calls, feeding it our data via a FOR JSON query.  We would then log the results into a separate table to make sure everything worked as expected.  I made this as generic as possible so that others could use it.

So let’s go through the steps.

  1. Create the .Net code necessary

  2. Create a CLR script for compilation

  3. Compile the CLR

  4. CREATE the ASSEMBLY

  5. CREATE the PROCEDURE

  6. Call the procedure

  7. Run it automatically
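Steps 4 and 5, for example, boil down to a couple of statements like these. This is only a sketch: the assembly name, DLL path, and procedure signature are placeholders rather than Michael’s actual ones.

-- Register the compiled DLL; outbound web calls need more than the SAFE permission set.
CREATE ASSEMBLY WebCallClr
FROM 'C:\CLR\WebCallClr.dll'
WITH PERMISSION_SET = EXTERNAL_ACCESS;
GO

-- Expose the .NET method as a T-SQL stored procedure (assembly.class.method).
CREATE PROCEDURE dbo.CallWebService
    @Url nvarchar(4000),
    @JsonPayload nvarchar(max)
AS EXTERNAL NAME WebCallClr.[StoredProcedures].[CallWebService];
GO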

For a more detailed look at building a CLR function, after you go through Michael’s post, check out Solomon Rutzky’s Stairway to CLR.

Maps In Power BI

Reid Havens shows off the different map visuals within Power BI:

ArcGIS Map

The most recent addition to the Power BI Map family. It’s supported by a company called Esri, and is a very feature-rich map visual! What makes this visual stand out is that you can overlay whatever data you have with public geographical data such as demographics, weather, and even historical data. It’s highly customizable and offers multiple ways to visualize data with maps, and that’s even before you start adding the public data sets! Can you tell that I like this visual a lot? Because I do!

Now I could easily spend an entire blog post JUST outlining all the ways to use this visual, but I’ll stick to the highlight reel. It can visualize data with maps using the bubble or fill method, similar to the other map visuals, albeit with a few more customizations and tweaks. However, one of the unique features of this visual is the heat map option! Any of you familiar with Power Maps in Excel have probably seen this before…well, now we have it in Power BI. I find this data visualization super useful in identifying data clustering based on location.

Read on for additional varieties of maps you can create.  I personally think the bubble map is ugly and that one map with pie charts (thankfully not shown in Reid’s post) is hideous, but there are some very good map visuals available to us.

Identifying Deprecated Features

Dave Mason provides a method for determining if you’re using deprecated functionality on your SQL Server instance:

I’ve wanted to do some Event Notification testing for SQL Server deprecation events for quite some time. The thought process here is that I could send myself an alert to identify usage of SQL Server features that will be removed from the next major version (or future version) of SQL Server. I could then forward this info to development and let them take action…or not (I kid, I kid). Today is the day I finally got around to the testing. I didn’t get very far, though.

Without rehashing the basics of event notifications (this post may help if you need some context), I created an EVENT NOTIFICATION for the TRC_DEPRECATION event group. That group includes the DEPRECATION_FINAL_SUPPORT and DEPRECATION_ANNOUNCEMENT child events. I also created a QUEUE, a SERVICE, and an activation PROCEDURE (for the QUEUE). The proc is simplistic. It takes the EVENTDATA() XML data, transforms it into an HTML <table>, and emails the info to me.
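The plumbing Dave describes looks roughly like this. The object names are made up, and the activation procedure that turns the EVENTDATA() XML into an HTML table and emails it is omitted.

-- Queue and service that will receive the event notification messages.
CREATE QUEUE dbo.DeprecationQueue;
GO

CREATE SERVICE DeprecationService
ON QUEUE dbo.DeprecationQueue
([http://schemas.microsoft.com/SQL/Notifications/PostEventNotification]);
GO

-- Fire a notification for the TRC_DEPRECATION event group (covers both
-- DEPRECATION_ANNOUNCEMENT and DEPRECATION_FINAL_SUPPORT).
CREATE EVENT NOTIFICATION DeprecationNotification
ON SERVER
FOR TRC_DEPRECATION
TO SERVICE 'DeprecationService', 'current database';
GO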

Watch out, though:  Dave discovered something quite funny when he set this up.

VMware Configuration For SQL Server

Jeff Mlakar talks about things you want to look at if you’re running SQL Server on VMware:

In a virtual data center, CPU is spread across many guest VMs. This is one of the key drivers behind the effort to virtualize – CPU cores mostly sit unused. For example, we can take a host with maybe 48 cores and virtualize many machines that present logically with > 48 cores. The hypervisor can swap cores in and out as it needs, based on what the guest VMs are doing. If the baseline for a guest VM is only 10% CPU usage, then this is easy. However, when an intense application like SQL Server is virtualized, it must have CPU available, otherwise performance will suffer noticeably.

Generally for CPU on a guest VM:

  • Reservations on CPU are not often possible, but consider them if your data center allows for it.

  • You want more cores than sockets. So if you are aiming for 8 cores you want something like 2 sockets with 4 cores each instead of 8 sockets with 1 core each.

  • If priority can be given to the SQL VM for CPU then change the Shares Resource Allocation from normal to high.
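From inside the guest, you can sanity-check how the hypervisor is presenting CPU to SQL Server. The socket_count and cores_per_socket columns are available from SQL Server 2016 SP2 / 2017 onward:

SELECT cpu_count,          -- logical CPUs visible to SQL Server
       hyperthread_ratio,
       socket_count,       -- virtual sockets presented to the guest
       cores_per_socket
FROM sys.dm_os_sys_info;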

Click through for more helpful hints.

Data Warehouse Automation

Koos van Strien provides some thoughts on data warehouse automation tools:

Currently, I think there are two main approaches to Data Warehouse Automation:

  1. Data Warehouse Generation: You provide sources, mappings, datatype mappings, etc. The tool generates code (or artifacts).
  2. Data Warehouse Automation (DWA): The tool not only generates code / artifacts, but also manages the existing Data Warehouse, by offering continuous insight into data flows, actual lineage, row numbers, etc.

The difference might seem small, but IMHO it is visible most clearly whenever changes occur in the Data Warehouse – the second class of tools can handle those changes (while preserving history). The first class of tools provides you with the new structures, but you need to handle the preservation of history yourself (as you would have without DWA).

Read on for a contrast of these two approaches.

Discovering Partition Schemes

Kennie Nybo Pontoppidan shows us how to find the partition scheme for a particular table:

I needed to query SQL Server’s metadata about partitioned tables, especially the column and the partition scheme used for partitioning. The former is quite nicely documented in the SQL Server documentation (see link below), but the latter is not (yet). I have written to the team about this; hopefully the documentation will be updated. Until then, I wrote this blog post to help others searching for an answer to this.
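The metadata in question lives in sys.partition_schemes, sys.partition_functions, and sys.index_columns, where partition_ordinal marks the partitioning column. This is not Kennie’s script, just a sketch of where to look:

SELECT t.name  AS table_name,
       ps.name AS partition_scheme,
       pf.name AS partition_function,
       c.name  AS partitioning_column
FROM sys.tables AS t
JOIN sys.indexes AS i
    ON i.object_id = t.object_id
   AND i.index_id <= 1                      -- heap or clustered index
JOIN sys.partition_schemes AS ps
    ON ps.data_space_id = i.data_space_id
JOIN sys.partition_functions AS pf
    ON pf.function_id = ps.function_id
JOIN sys.index_columns AS ic
    ON ic.object_id = i.object_id
   AND ic.index_id = i.index_id
   AND ic.partition_ordinal = 1             -- the partitioning column
JOIN sys.columns AS c
    ON c.object_id = ic.object_id
   AND c.column_id = ic.column_id;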

Click through for the script.

TempDB In Azure IaaS

Jim Donahoe shows how to use the temporary SSD on an Azure VM for SQL Server’s tempdb:

Remember, this disk is, as the title of this section says, TEMPORARY! Do NOT put ANYTHING on this drive you cannot afford to lose. Don’t say nobody warned you either, because the drive itself contains a nice little txt file warning you. Here is the EXACT text:

WARNING : THIS IS A TEMPORARY DISK.

Any data stored on this drive is SUBJECT TO LOSS and THERE IS NO WAY TO RECOVER IT.

Please do not use this disk for storing any personal or application data.

It’s good to see what you need to do to get this working.  I’ve found it just to be easier to set up a permanent SSD, but if you’re on a tight budget, this can save you some cash.
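For reference, the move itself is mostly a couple of ALTER DATABASE statements plus something (a startup task or script) to recreate the target folder, since the temporary drive can come back empty. A sketch, assuming the default logical file names and the usual D: drive letter; verify both on your own instance:

-- Takes effect at the next service restart; the D:\TempDB folder must exist at startup.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILENAME = 'D:\TempDB\tempdb.mdf');

ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, FILENAME = 'D:\TempDB\templog.ldf');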

Downgrading SQL Server

Jonathan Kehayias is going the opposite direction of the Jeffersons:

At some point in your career working with SQL Server, you will run into a situation where the wrong edition of SQL Server has been installed on a server and will need to change the edition for licensing reasons.  Whether it is Enterprise Edition where Standard Edition should have been installed, Enterprise Edition where Developer Edition should have been used, or my favorite, Evaluation Edition where the 180 day trial has expired and Enterprise Edition isn’t going to be used, the only route available for downgrading the edition is to uninstall and reinstall SQL Server entirely.  SQL Server Setup makes upgrading editions a piece of cake with SKUUPGRADE as a command line option for going from Standard/Developer/Evaluation to Enterprise, but anything else requires a full uninstall and reinstall to change the SKU/Edition and then restore all of the system and user databases to the new instance, which typically means a lot of work.  I hate having to restore system databases and avoid having to do it if possible, so here is how I do this process and minimize the work required:

No matter what, you are going to have to do an uninstall and reinstall of the SQL Server instance to downgrade the SKU.  However, you can save yourself some time and the headache of trying to restore the system databases if you are careful about what you do.  I have done plenty of SKU downgrades in the past, and the easiest way to do it (I am not saying this is the Microsoft-supported way, but it works if done correctly) is to:

Jonathan has an 11-point checklist that’s well worth checking out.  Though hopefully, may all your editions be Enterprise…
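Whichever way you go, it’s worth confirming what the instance actually reports before and after the reinstall:

SELECT SERVERPROPERTY('Edition')        AS Edition,
       SERVERPROPERTY('EngineEdition')  AS EngineEdition,   -- 2 = Standard; 3 = Enterprise, Developer, or Evaluation
       SERVERPROPERTY('ProductVersion') AS ProductVersion;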

R Model Compression

I have a post showing off some of the value of compressing R models:

So right now, we’re burning roughly 200K per model.  My stated goal is to be able to store several years’ worth of data for 10 million products.  Let’s say that I need 10 million products in ProductModel and 1 billion rows in ProductModelHistory.  That means that we’d end up with 1.86 TB of data in the ProductModel table and 186 TB in ProductModelHistory.  This seems…excessive.

As a result, I decided to try using the COMPRESS() function in SQL Server 2016.  The COMPRESS function simply uses GZip compression.  Yeah, there are compression algorithms which tend to be more compact (e.g., bz2 or 7z), but GZip is relatively CPU efficient and I can wrap my SQL statements with COMPRESS() and DECOMPRESS() and not have to change any calling code; I just need to update the two stored procedures I use to insert and then retrieve product models.

Most of the time, it’s not a big deal.  But once you start talking hundreds of gigabytes or, in my case, a couple hundred terabytes, it’s definitely worth compressing this data.
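The wrapping pattern itself is small. Here is a sketch with stand-in table and procedure names, not necessarily the exact ones from the post:

-- Models are stored as GZip-compressed varbinary(max).
CREATE TABLE dbo.ProductModel
(
    ProductID bigint NOT NULL PRIMARY KEY,
    Model varbinary(max) NOT NULL
);
GO

-- On the way in, wrap the serialized model in COMPRESS().
CREATE PROCEDURE dbo.SaveProductModel
    @ProductID bigint,
    @Model varbinary(max)
AS
BEGIN
    INSERT INTO dbo.ProductModel (ProductID, Model)
    VALUES (@ProductID, COMPRESS(@Model));
END;
GO

-- On the way out, wrap the column in DECOMPRESS(); calling code never changes.
CREATE PROCEDURE dbo.GetProductModel
    @ProductID bigint
AS
BEGIN
    SELECT ProductID, DECOMPRESS(Model) AS Model
    FROM dbo.ProductModel
    WHERE ProductID = @ProductID;
END;
GO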
