Press "Enter" to skip to content

Curated SQL Posts

Whitelisting SQL CLR Assemblies

Niels Berglund walks through the process of whitelisting a CLR assembly in SQL Server 2017:

What Microsoft introduces in SQL Server 2017 RC1 is something I refer to as whitelisting. It is somewhat similar to the TRUSTWORTHY setting, where you indicate that a database is to be trusted. But instead of doing it at the database level, you do it per assembly.

To whitelist in SQL Server 2017 RC1, you use the system stored procedure sys.sp_add_trusted_assembly. As the name implies, the procedure adds an assembly to a list of “trusted” assemblies. When you mark an assembly as trusted, SQL Server will allow it to be loaded when clr strict security is on (which it is by default), even if:

  • the assembly is not signed, and

  • the database where you want to deploy it to is not TRUSTWORTHY.

With the elimination of the CAS model finally hitting CLR, this is probably going to be one of the easier ways for DBAs to move forward with CLR.
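
To make the mechanics concrete, here is a minimal sketch of the whitelisting call (the file path and description below are placeholders, not taken from Niels's post): hash the compiled DLL with SHA-512 and hand that hash to sys.sp_add_trusted_assembly before running CREATE ASSEMBLY.

    DECLARE @hash VARBINARY(64);

    -- Hash the compiled DLL (the path here is hypothetical).
    SELECT @hash = HASHBYTES('SHA2_512', BulkColumn)
    FROM OPENROWSET(BULK N'C:\Assemblies\MyClrLibrary.dll', SINGLE_BLOB) AS dll;

    -- Register the hash; CREATE ASSEMBLY will then succeed even with clr strict security on,
    -- an unsigned assembly, and a non-TRUSTWORTHY database.
    EXEC sys.sp_add_trusted_assembly @hash = @hash, @description = N'MyClrLibrary';

    -- Review or undo the entry:
    SELECT * FROM sys.trusted_assemblies;
    -- EXEC sys.sp_drop_trusted_assembly @hash = @hash;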

Finding The Largest Tables

Andy Galbraith has a script to find the largest tables in a database:

In the previous post in this series “Toolbox – Where Did All My Space Go?” I shared a script for finding which database files consumed the most space and which of those files had free space in them.  The next step after finding out which databases are using the space is determining which tables in those databases are occupying that space to consider purge/archive opportunities.  In many cases you will find a single large table (often called a “mother table”) taking up most of the space in your database.

There are cases when you wouldn’t want to use the Disk Usage By Table report, so here is a viable alternative.
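
If you only need a quick approximation rather than Andy's full script, a rough sketch against sys.dm_db_partition_stats looks like this (run it in the database you are investigating):

    -- Reserved space and row counts per table (rows counted once, from the heap or clustered index).
    SELECT s.name AS schema_name,
           t.name AS table_name,
           SUM(ps.reserved_page_count) * 8 / 1024.0 AS reserved_mb,
           SUM(CASE WHEN ps.index_id IN (0, 1) THEN ps.row_count ELSE 0 END) AS row_count
    FROM sys.dm_db_partition_stats AS ps
    JOIN sys.tables  AS t ON t.object_id = ps.object_id
    JOIN sys.schemas AS s ON s.schema_id = t.schema_id
    GROUP BY s.name, t.name
    ORDER BY reserved_mb DESC;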

Narrowing Down Deprecated Feature Usage

Dave Mason now skips msdb when he looks for deprecated feature usage:

In my previous post, I took a stab at monitoring deprecation events for SQL Server. It didn’t go so well. A deprecation event occurred more than 5,000 times in a very short period of time, and I got one email for every occurrence. Not good. Here’s what I kept seeing over and over:

USER_ID will be removed from a future version of SQL Server. Avoid using this feature in new development work, and plan to modify applications that currently use the feature. Use DATABASE_PRINCIPAL_ID instead.

It turns out the system stored proc msdb.dbo.sp_send_dbmail has a USER_ID() reference. I suspect an unrelated alert/email happened once, which executed sp_send_dbmail, which generated a DEPRECATION_FINAL_SUPPORT event, which ultimately led to another execution of sp_send_dbmail, which generated yet another DEPRECATION_FINAL_SUPPORT event, and round and round we go.

Click through for examples of deprecated features that various Microsoft products, including Reporting Services and Team Foundation Server, still use.
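
Dave's monitoring is built on event notifications, but the same filtering idea carries over to other tooling. Purely as an illustration (the session name and file target below are made up), an Extended Events session that captures deprecation events while skipping anything raised from msdb might look like this:

    CREATE EVENT SESSION DeprecatedFeatureUsage ON SERVER
    ADD EVENT sqlserver.deprecation_announcement
        (ACTION (sqlserver.sql_text, sqlserver.database_name)
         WHERE (sqlserver.database_name <> N'msdb')),
    ADD EVENT sqlserver.deprecation_final_support
        (ACTION (sqlserver.sql_text, sqlserver.database_name)
         WHERE (sqlserver.database_name <> N'msdb'))
    ADD TARGET package0.event_file (SET filename = N'DeprecatedFeatureUsage');
    GO

    ALTER EVENT SESSION DeprecatedFeatureUsage ON SERVER STATE = START;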

How Long Was That Event?

Taiob Ali shows that some extended event times are in milliseconds, some are in microseconds, and some are in unknownaseconds:

There was a question on dba.stackexchange.com titled “module_end extended events duration in microseconds?” that I answered, and in that instance it was microseconds.
Later on, I asked myself whether the duration is always in microseconds for extended events. I found it is a mix: some are in milliseconds, some in microseconds, and some are unknown, meaning the famous NULL. I wrote the T-SQL code below to determine which is which.

Click through for the script that Taiob used to determine the answer.
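
I haven't seen Taiob's exact script, but the gist is to read the event column metadata; something along these lines reports what each event's own description says about its duration column:

    SELECT o.name AS event_name,
           c.name AS column_name,
           c.description,
           CASE WHEN c.description LIKE '%microsecond%' THEN 'microseconds'
                WHEN c.description LIKE '%millisecond%' THEN 'milliseconds'
                ELSE 'unknown'
           END AS unit
    FROM sys.dm_xe_objects AS o
    JOIN sys.dm_xe_object_columns AS c
        ON  c.object_name = o.name
        AND c.object_package_guid = o.package_guid
    WHERE o.object_type = 'event'
      AND c.name = 'duration'
    ORDER BY event_name;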

Message Queues For The DBA

Drew Furgiuele explains message queueing theory and puts together a nice demo:

Please note, there’s one thing I need to make super abundantly clear for this demo: You’d never, ever configure these components like this for production. There’s so much more to consider, like setting up RabbitMQ to use SSL, writing actual applications and services instead of PowerShell to handle message listeners, and putting more effort into Service Broker. This should serve fine as a proof of concept, but if you want to actually implement something like this, make sure you do your research and follow best practices for each component.

This is the warning label on the chainsaw that says “normally you’d grab it by the other end.”  This is a great post, giving an introduction to tying Service Broker to RabbitMQ.  My biases lead me to Apache Kafka over RabbitMQ in most cases, but that’s just me.

Batch Execution Mode And Window Functions

Chris Adkin shows how taking advantage of batch execution mode on rowstore tables can lead to faster performance as degree of parallelism increases:

The SQL Server execution engine fundamentally acts like a cursor: control flow is exerted from the root node of the plan down to the rightmost child nodes, or iterators. The (logical) flow of data through the plan is in the opposite direction.

I say ‘logical’ because in practice the runtime uses buffers in order to minimise data movement whilst executing the plan. However, up until SQL Server 2012, which first introduced batch mode, execution plans were executed by iterators processing data in a row-by-agonising-row manner. Batch mode changes all of this: if we can ferry rows around in batches, we reduce the number of CPU cycles it takes to process an iterator in the plan (providing it supports batch mode). Also, by sizing batches such that they fit inside the level 2 cache of the CPU, we gain even more performance by minimising CPU last-level cache misses or, worse still, trips out to main memory.

Adkin credits Niko Neugebauer for the insight and shows how you can use this on normal rowstore tables.
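
A commonly cited workaround from the pre-SQL Server 2019 era (not necessarily the exact approach Chris takes) is to join to an empty columnstore-indexed table so the optimizer can consider batch mode operators for the rest of the plan. Here is a sketch, with dbo.Orders and its columns as hypothetical stand-ins:

    -- Empty helper table whose only job is to carry a columnstore index.
    CREATE TABLE dbo.BatchModeHelper (dummy INT NULL);
    CREATE CLUSTERED COLUMNSTORE INDEX ccsi_BatchModeHelper ON dbo.BatchModeHelper;
    GO

    -- The join can never match, but its presence lets the aggregate and window
    -- function run in batch mode (plan and version permitting).
    SELECT o.CustomerID,
           SUM(o.OrderTotal) AS total_spend,
           ROW_NUMBER() OVER (ORDER BY SUM(o.OrderTotal) DESC) AS spend_rank
    FROM dbo.Orders AS o                          -- hypothetical rowstore table
    LEFT JOIN dbo.BatchModeHelper AS b ON 1 = 0
    GROUP BY o.CustomerID;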

SQL Server 2017 Finds Plan Regressions

Jovan Popovic shows off some automatic tuning functionality in SQL Server 2017:

Plan change regression happens when SQL Database changes a plan for some T-SQL query and the new plan has worse performance than the previous one. SQL Server 2017 has an Automatic Tuning feature that enables you to easily find plan change regressions and fix them. In this post you will see a demo script that you can use to cause a plan change regression and manually fix it using the new sys.dm_db_tuning_recommendations view.

If you are not familiar with plan regressions and the new tuning recommendations in SQL Server 2017, I would recommend reading these two posts; they would be enough to understand the steps in this demo.

Our experience with plan regression recommendations has been uniformly positive so far.  Those tests have been in dev and QA environments, but we have yet to see a terrible recommendation.
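
Jovan's demo has the full repro; for reference, the two pieces you interact with are the recommendations DMV and the automatic tuning database option (the JSON paths below follow Microsoft's documented example):

    -- What is being recommended, and the script SQL Server would run to force the plan.
    SELECT reason,
           score,
           JSON_VALUE(details, '$.implementationDetails.script') AS force_script,
           JSON_VALUE(state,   '$.currentValue')                 AS current_state
    FROM sys.dm_db_tuning_recommendations;

    -- Or let SQL Server 2017 force the last known good plan on its own.
    ALTER DATABASE CURRENT SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON);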

Azure SQL Database Performance Tips

Arun Sirpal looks at some performance issues in Azure SQL Database:

Let’s assume that you are not driven by logins, workers, and session counts; how does one select the right level? What exactly do DTUs (Database Transaction Units) mean? I suggest reading this post by Andy Mallon: https://sqlperformance.com/2017/03/azure/what-the-heck-is-a-dtu

I am going to undersize my database: I will create an S0 database and run some day-to-day tasks – let’s see what happens. I will open up connections and issue some queries via my application. I would not class these queries as bad; what I am trying to drive at here is getting the sizing right for your workload.

This is one of the trickier things to get right, I think.  We’re taking an existing workload and want to make sure it doesn’t fall over…but we aren’t measuring in terms of DTUs locally.  I know that there are some tools that help the conversion process, but if you’re starting a new product or don’t have a great handle on your normal workload, it’s really easy to get caught between the Scylla and Charybdis of undersizing and overpaying.
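
One way to sanity-check a service tier after the fact (a common approach, not something from Arun's post) is to watch sys.dm_db_resource_stats, which keeps roughly an hour of 15-second samples; the highest of the resource percentages is a reasonable proxy for DTU consumption:

    SELECT end_time,
           avg_cpu_percent,
           avg_data_io_percent,
           avg_log_write_percent,
           -- Take the highest of the three as an approximate DTU percentage.
           (SELECT MAX(v)
            FROM (VALUES (avg_cpu_percent),
                         (avg_data_io_percent),
                         (avg_log_write_percent)) AS resources(v)) AS approx_dtu_percent
    FROM sys.dm_db_resource_stats
    ORDER BY end_time DESC;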

Planning For A Good Disaster

Randolph West loves it when a (disaster recovery) plan comes together:

My first question to the attendee was what the Service Level Agreement (SLA) says. As we know from previous posts, a disaster recovery strategy is dictated by business requirements, not technical ones. The Recovery Point Objective (how much data loss is acceptable) and Recovery Time Objective (how much time there is to bring everything back) will guide my proposal.

He told me that the SLA was 24 hours, so I started writing on the white board while I was thinking aloud.

On average, a fast storage layer can read and write around 200 MB/s, so it would take 5.12 seconds to write 1 GB, or just under 85 minutes to restore the database back to disk, not counting the log file or crash recovery. I never assume that Instant File Initialization is enabled, plus I won’t know how big the transaction log file is going to be, and that needs to be zeroed out.

I like this post a lot because it lets us get a glimpse into Randolph’s thought process and gives some hard numbers that you should have in mind.

Making A CLR Function

Michael Bourgon walks us through creating a CLR function:

What we settled on was building a CLR that would make the web calls, feeding it our data via a FOR JSON query.  We would then log the results into a separate table to make sure everything worked as expected.  I made this as generic as possible so that others could use it.

So let’s go through the steps.

  1. Create the .Net code necessary

  2. Create a CLR script for compilation

  3. Compile the CLR

  4. CREATE the ASSEMBLY

  5. CREATE the PROCEDURE

  6. Call the procedure

  7. Run it automatically

For a more detailed look at building a CLR function, after you go through Michael’s post, check out Solomon Rutzky’s Stairway to SQLCLR.
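
As a rough sketch of steps 4 through 6 (not Michael's actual code; the assembly, class, method, and table names below are hypothetical), the T-SQL side looks something like this:

    -- Step 4: register the compiled DLL (EXTERNAL_ACCESS is needed for outbound web calls).
    CREATE ASSEMBLY WebCallClr
    FROM N'C:\CLR\WebCallClr.dll'
    WITH PERMISSION_SET = EXTERNAL_ACCESS;
    GO

    -- Step 5: wrap the .NET method in a T-SQL procedure.
    CREATE PROCEDURE dbo.SendJsonToEndpoint
        @url  NVARCHAR(4000),
        @json NVARCHAR(MAX)
    AS EXTERNAL NAME WebCallClr.[WebCallClr.WebCaller].SendJson;
    GO

    -- Step 6: call it with a FOR JSON payload.
    DECLARE @payload NVARCHAR(MAX) =
        (SELECT TOP (10) * FROM dbo.SomeTable FOR JSON AUTO);
    EXEC dbo.SendJsonToEndpoint
        @url  = N'https://example.com/api/ingest',
        @json = @payload;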
