Press "Enter" to skip to content

Author: Kevin Feasel

Holding and Pruning WhoIsActive Results

Andrea Allred has a script for us:

Last month I talked about how I started collecting data from sp_whoisactive. The long-term goal was to analyze and tune those long-running processes that are sometimes hard to quantify in Query Store. We had started collecting the data in a table (whoisactiveLog), but wanted to make a simple table that our Engineers could refer to and find the long-running processes. We also wanted to archive off the whoisactiveLog into another table and save that for 70 days in case it was needed for further research on the tuning of processes. Each night, we have a break in our processes where we can do this maintenance, so we decided we would empty the whoisactiveLog table completely.

There’s also a check for session length in there.
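To make the shape of that nightly maintenance concrete, here is a minimal sketch under some assumptions: whoisactiveLog comes from the post, the archive table name is invented for illustration, and collection_time is the timestamp column sp_whoisactive writes when logging to a table.

    -- Nightly maintenance sketch: archive everything, then empty the log.
    -- dbo.whoisactiveArchive is a hypothetical name for the archive table.
    INSERT INTO dbo.whoisactiveArchive
    SELECT *
    FROM dbo.whoisactiveLog;

    TRUNCATE TABLE dbo.whoisactiveLog;

    -- Enforce the 70-day retention window on the archive.
    DELETE FROM dbo.whoisactiveArchive
    WHERE collection_time < DATEADD(DAY, -70, GETDATE());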

Microsoft Fabric for the Power BI Practitioner

Kurt Buhler provides a nice graphic:

I’m just writing this quick article to share a visual overview I made of the newly announced Power BI-related features. I hope it helps you get an at-a-glance overview of some of the big changes relevant to Power BI. More importantly, I hope it doesn’t make you feel overwhelmed! For more information, check out the documentation and learning paths.

There is a lot in store for the platform, but you can already see a slew of new changes and opportunities for Power BI developers.

Combining Backup Encryption and Compression

Matthew McGiffen joins two great flavors:

In SQL Server you can also compress your encrypted backups. Unlike TDE this has been possible with Backup Encryption since the feature was first made available, and there have been no issues that have required fixing – though as always you should still test that restores work correctly. As mentioned in my post about compression with TDE, compressing backups has benefits not just in terms of file size but potentially also in reduced backup times as the time taken to write to disk is smaller.

Read on for more information. Microsoft did the right thing: they compress first and then encrypt; otherwise, you’re not getting any benefit from the compression.
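If you want to see the moving parts, here is a minimal sketch; the database, certificate, path, and password are all placeholders. You need a database master key and a certificate in master before an encrypted backup will run.

    -- One-time setup in master: a database master key plus a
    -- certificate to protect the backups (names are illustrative).
    USE master;
    CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password here>';
    CREATE CERTIFICATE BackupCert
        WITH SUBJECT = 'Backup encryption certificate';

    -- A compressed, encrypted backup. Because SQL Server compresses
    -- before encrypting, compression still pays off.
    BACKUP DATABASE MyDatabase
    TO DISK = N'D:\Backups\MyDatabase.bak'
    WITH COMPRESSION,
         ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = BackupCert);

Remember to back up that certificate and its private key somewhere safe: without them, you cannot restore the encrypted backup.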

Building a Lakehouse in Microsoft Fabric

Reza Rad builds a warehouse down by the river:

The term Lakehouse is derived from two other words: Data Lake and Data Warehouse. A Lakehouse is a place to store structured data (as in a Data Warehouse) and unstructured data (as in a Data Lake) in a single location. A Lakehouse is capable of scaling up to handle large amounts of data, and other tools and services can interact with it, for example, to load data into it or read data from it.

Click through for instructions on how to build one and how to access it from SSMS and Power BI.

Trying out Parameter Sensitive Plan Optimization

Jared Poche gives it a try:

Parameter Sensitive Plan Optimization attempts to solve the parameter sniffing issue by allowing a query to have different plans that are used based on the cardinality of one important parameter. The parent query has a plan that is really just a stub; there can be up to three variant queries that have full execution plans. When the query is executed, the optimizer chooses which variant query and plan to use based on the cardinality of the parameter’s value.

Click through for examples. I share Jared’s thoughts about row counts, though I’m not a huge fan of just using those. The rule of thumb I like to apply (though I certainly didn’t make it up myself!) is that, if you’re retrieving 0.5% or less of a table, a seek is the best option, assuming there is an appropriate index. If you’re retrieving 20% or more of a table, a scan is the best option, regardless of index quality.

Going a little further, somewhere in that 0.5-20% range, there’s an in-between zone where you should be indifferent between seek and scan, as they’ll both perform about equally well. But if the optimizer chooses “seek” for the in-between zone and you nudge those returned row counts higher and higher, the seek becomes less and less viable, and there may be a zone somewhere between X% (that mid-point of indifference) and 20% where you haven’t yet crossed the row count threshold for another plan but should have switched over to the scan.

Coming up with the right solution to this problem would be pretty hard, and I’m not paid to solve problems. I’m not-paid to come up with problems, however.
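For a concrete look at the feature, here is a minimal sketch; the table and procedure names are illustrative, and the feature requires database compatibility level 160 (SQL Server 2022).

    -- PSP optimization becomes available at compatibility level 160.
    ALTER DATABASE CURRENT SET COMPATIBILITY_LEVEL = 160;

    -- A classic parameter-sniffing candidate: the best plan depends on
    -- how many rows match @CustomerID.
    CREATE OR ALTER PROCEDURE dbo.GetOrders
        @CustomerID int
    AS
    BEGIN
        SELECT OrderID, OrderDate
        FROM dbo.Orders
        WHERE CustomerID = @CustomerID;
    END;

Given sufficiently skewed data in CustomerID, the optimizer builds the parent stub plus up to three variant plans (say, a seek for rare values and a scan for common ones) and picks one at runtime based on the sniffed cardinality.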

Delayed Durability and Shutdown Behaviors

Guy Glantser does some explaining:

In the world of database management systems, ensuring data durability is of paramount importance. It involves guaranteeing that committed transactions are not lost in the event of unexpected failures or system crashes. SQL Server offers a feature known as Delayed Durability to address this challenge. Delayed Durability provides a mechanism for optimizing transaction logging and improving performance without compromising data integrity. This article aims to delve into the concept of Delayed Durability in SQL Server, specifically focusing on its behavior during expected shutdown scenarios.

Guy’s post surprised me, and I recommend reading it if you’ve thought about enabling the feature.
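For reference, here is a minimal sketch of turning the feature on and opting in per transaction; the database and table names are illustrative.

    -- Permit delayed durability at the database level.
    ALTER DATABASE MyDatabase SET DELAYED_DURABILITY = ALLOWED;

    -- Opt in per transaction: COMMIT returns before the log records
    -- are hardened to disk, trading a small durability window for speed.
    BEGIN TRANSACTION;
        UPDATE dbo.Accounts
        SET Balance = Balance - 100
        WHERE AccountID = 1;
    COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);

    -- Flush any unhardened log records to disk, for example ahead of
    -- a planned shutdown.
    EXEC sys.sp_flush_log;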

Elastic Pools for Azure SQL DB Hyperscale

Arvind Shyamsundar announces a new preview:

We are very excited to announce the preview of elastic pools for Hyperscale service tier for Azure SQL Database!

For many years now, developers have selected the Hyperscale service tier in a “single database” resource model to power a wide variety of traditional and modern applications. Azure SQL Hyperscale is based on a cloud-native architecture providing independently scalable compute and storage, with limits that substantially exceed the resources available in the General Purpose and Business Critical tiers.

Click through to learn more about what’s on offer.

An Overview of CLR

Chad Callihan gives us the primer:

I was setting up a dev environment for a new application recently. All seemed to be well until I went to actually run the application. I was getting a vague error in the application and still didn’t know the exact cause. I fired up an XEvents session to find the query causing the issue and found a query failing with the following error:

    Msg 6263, Level 16, State 1, Procedure dbo.MyProc, Line 60 [Batch Start Line 14]
    Execution of user code in the .NET Framework is disabled. Enable "clr enabled" configuration option.
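The immediate fix for that error is a server-level configuration change:

    -- Enable CLR integration at the instance level to clear Msg 6263.
    EXEC sp_configure 'clr enabled', 1;
    RECONFIGURE;

On SQL Server 2017 and later, note that the separate “clr strict security” option also affects which assemblies will load.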

Incidentally, my single biggest gripe around CLR integration in SQL Server (and something I complain about almost every time someone brings up the feature) is that they really messed up in picking names. CLR has Safe and Unsafe modes.

What the terms mean:

  • Safe = managed mode. Your .NET language of choice (at the time, VB.NET or C#) handles pointers for you, so you can’t dereference a null pointer. Well, you still can, but you’ll get an “Object reference not set to an instance of an object” exception rather than some potentially unexpected behavior.
  • Unsafe = developers control pointers. There were some things, especially in 2005, that made sense to create in Unsafe mode because it was considerably faster.

What far too many DBAs interpreted it as: “I don’t want unsafe code running in my systems! Ban CLR!”

This fundamental misunderstanding of terms killed a smart integration and a capability for extremely fast functions. For example, the fastest way to split a string in T-SQL is via CLR, and with a few lines of code you can get a lot of impressive functionality at near-native speeds, that is, close to what you would see if the SQL Server team had written the function into the database engine itself. But the leads at most places saw the word “Unsafe” and nope’d out.
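That naming shows up exactly where an administrator has to sign off: when registering the assembly. Here is a sketch; the assembly, path, class, and method names are all hypothetical.

    -- Register a CLR assembly; PERMISSION_SET is where Safe vs. Unsafe
    -- appears. SAFE permits fully managed code only.
    CREATE ASSEMBLY StringUtilities
    FROM N'C:\CLR\StringUtilities.dll'
    WITH PERMISSION_SET = SAFE;

    -- Bind a T-SQL table-valued function to a method in that assembly.
    CREATE FUNCTION dbo.SplitString
        (@input nvarchar(4000), @delimiter nchar(1))
    RETURNS TABLE (item nvarchar(4000))
    AS EXTERNAL NAME StringUtilities.[StringUtilities.Splitter].SplitString;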

Kusto Detective Agency Season 2

Anshul Sharma announces season 2 of a great program:

Greetings, esteemed investigators and data enthusiasts! We are thrilled to announce the highly anticipated launch of Kusto Detective Agency Season 2. After the immense success of Season 1, with over 10,000 participants diving deep into the world of data investigation, we cannot thank you enough for your incredible support and enthusiasm! 

Season 2 of Kusto Detective Agency is set to be an even grander adventure, filled with more challenges, mind-bending mysteries, and countless opportunities to showcase your analytical skills. Prepare yourself for a journey that will push the boundaries of your data prowess and reward you with an unforgettable experience. 

I just finished season 1 yesterday and saw the link to season 2, but I haven’t touched it yet. If you’re learning the Kusto Query Language, this is a series of challenges that will really push your skills. As I was going through season 1, there were several times when I’d say, “I know exactly how to answer this in T-SQL, but how do I answer it in KQL?” If your KQL skills aren’t great, there are plenty of people who have shown their answers online as well, so you can walk through it with them.

Admittedly, I want more Poppy the goldfish lore. The twist in challenge 5 was not something I’d expected.
