Press "Enter" to skip to content

Month: January 2025

Working with Unity Catalog

Dustin Vannoy has a new video:

Unity Catalog Open Source Software (OSS) is a compelling project and there are some key benefits to working with it locally. In this video I share reasons for using the open source project Unity Catalog (UC) and walk through some of the setup and testing I did to create and write to tables from Apache Spark.

Click through for the video, as well as a text summary and script examples.

Notes on Change Tracking for Warehouse Incremental Loads

Meagan Longoria shares some hard-earned experience:

I have a few clients that incrementally load tables from a SQL Server source into their data warehouse or lakehouse by using change tracking. Lately, they encountered some issues with changes to the configuration and the data in the source database, so I decided to share some things you can check before using change tracking as part of your ETL load or when troubleshooting your data load.

Click through for three common issues you may run into while using change tracking.
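To make those checks concrete, here is a minimal sketch of the sort of pre-load validation you can run; the dbo.Orders table and the @LastSyncVersion value are placeholders for illustration, not from Meagan's post.

-- Confirm change tracking is enabled for the database and the source table
SELECT DB_NAME(database_id) AS database_name,
       retention_period,
       retention_period_units_desc,
       is_auto_cleanup_on
FROM sys.change_tracking_databases;

SELECT OBJECT_NAME(object_id) AS table_name,
       is_track_columns_updated_on,
       min_valid_version
FROM sys.change_tracking_tables;

-- Before an incremental load, verify that the version you last synced is still valid.
-- If it is older than the minimum valid version, cleanup has already removed that
-- history and a full reload is required.
DECLARE @LastSyncVersion bigint = 12345;  -- placeholder for your stored watermark

SELECT CHANGE_TRACKING_CURRENT_VERSION() AS current_version,
       CHANGE_TRACKING_MIN_VALID_VERSION(OBJECT_ID('dbo.Orders')) AS min_valid_version;

IF @LastSyncVersion >= CHANGE_TRACKING_MIN_VALID_VERSION(OBJECT_ID('dbo.Orders'))
    SELECT ct.OrderNumber, ct.SYS_CHANGE_OPERATION, ct.SYS_CHANGE_VERSION
    FROM CHANGETABLE(CHANGES dbo.Orders, @LastSyncVersion) AS ct;
ELSE
    PRINT 'Change history no longer covers the last sync version; do a full reload.';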

T-SQL Variables and Transactions

Simon Frazer rolls back:

If you’ve been working with T-SQL scripts for a while, you’ve likely encountered variables. These are essential for writing scripts that go beyond basic SELECT, INSERT, UPDATE, or DELETE statements.

Variables can serve multiple purposes: they can act as parameters passed into stored procedures, hold the results of one query to use in another, or even help debug scripts during testing (you do test your scripts before running them on real data, right?).

One particularly interesting aspect of variables is how they behave in transactions, often in ways we might not expect.

Read on for the demo. This also applies to table variables, which is why they can be extremely important for diagnosing issues before performing a rollback. You can write error results to the table variable first, then access them post-rollback.
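Here is a minimal sketch of that pattern, with a temp table standing in for the real work so the script is self-contained:

-- Demo table so the script runs anywhere
CREATE TABLE #Orders (OrderNumber int);

-- Table variables are not affected by ROLLBACK, so they can carry
-- diagnostic detail out of a failed transaction.
DECLARE @Errors table (ErrorMessage nvarchar(4000));

BEGIN TRAN;
BEGIN TRY
    INSERT INTO #Orders (OrderNumber) VALUES (42);
    THROW 50000, N'Something went wrong mid-batch.', 1;
END TRY
BEGIN CATCH
    -- Capture the details before rolling back
    INSERT INTO @Errors (ErrorMessage) VALUES (ERROR_MESSAGE());
    ROLLBACK TRAN;
END CATCH;

SELECT COUNT(*) AS orders_rows FROM #Orders;  -- 0: the insert was rolled back
SELECT ErrorMessage FROM @Errors;             -- 1 row: survives the rollback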

Finding Privilege Changes in SQL Server’s Default Trace

Tom Collins takes a gander:

I’m doing some SQL Server security privilege troubleshooting, because a customer has reported an incident – they had privileges yesterday on a certain SQL table, but today those privileges are no longer there. I’d like to know if there is a way to identify what changes were made and by which login those changes occurred. Can you supply a method?

Read on for the answer, though Tom has an important caveat.
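As a point of reference, a query along these lines pulls GRANT/DENY/REVOKE activity out of the default trace; filtering on the "GDR" audit event names is my shorthand here, and the usual warning applies that the default trace only keeps a limited, rolling amount of history.

-- Find the default trace file and read the permission-change (GDR) events from it
DECLARE @TracePath nvarchar(260);

SELECT @TracePath = path
FROM sys.traces
WHERE is_default = 1;

SELECT t.StartTime,
       te.name AS event_name,
       t.DatabaseName,
       t.ObjectName,
       t.LoginName,        -- the login that made the change
       t.TargetLoginName,
       t.TextData
FROM sys.fn_trace_gettable(@TracePath, DEFAULT) AS t
JOIN sys.trace_events AS te
    ON t.EventClass = te.trace_event_id
WHERE te.name LIKE 'Audit%GDR%'   -- GRANT / DENY / REVOKE audit events
ORDER BY t.StartTime DESC;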

The Pain of Cursors in SQL Server

Joe Fleming is not a fan of cursors:

So what are cursors, and why are they so bad?  A cursor is a construct in SQL Server that lets you define a query which produces a multi-row dataset and allows you to step through it one row at a time. You declare a cursor in this way:

DECLARE Performance_killer1 CURSOR FOR
SELECT OrderNumber, OrderLineNumber, ItemID, ItemDescription, Quantity, Price
FROM Orders

You must then OPEN the cursor and FETCH the next value from it into a set of variables, then perform your calculations and updates, repeating until you’ve reached the end of your dataset. From the standpoint of someone unfamiliar with database processing, this method is fairly intuitive. You do one thing at a time. From the database professional’s perspective, it’s very painful to see. There are a few reasons why.

I generally agree with this, though there are specific queries that you cannot write in a set-based context, particularly administrative queries that have to run over each table or database in an instance. I also found that I was unable to write a proper leaky bucket algorithm implementation in T-SQL without using a cursor. But the other 99.x% of the time? No cursors needed.

Also, if you do need a cursor, use a cursor instead of a WHILE loop. It’s more to remember, but you can performance-tune cursors, whereas WHILE loops are about as dog-slow as they get and never get better.
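For completeness, here is the shape of a tuned administrative cursor over sys.databases, as a minimal sketch; LOCAL FAST_FORWARD is the usual starting point for a read-only, forward-only loop rather than the heavier default cursor options.

DECLARE @DatabaseName sysname;

DECLARE db_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT name
    FROM sys.databases
    WHERE state_desc = 'ONLINE';

OPEN db_cursor;
FETCH NEXT FROM db_cursor INTO @DatabaseName;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- Per-database work goes here, e.g. building and running a dynamic SQL command
    PRINT @DatabaseName;

    FETCH NEXT FROM db_cursor INTO @DatabaseName;
END;

CLOSE db_cursor;
DEALLOCATE db_cursor;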

Kernel Methods in Python

Matthew Mayo does a bit of kernel work:

Kernel methods are a powerful class of machine learning algorithms that allow us to perform complex, non-linear transformations of data without explicitly computing the transformed feature space. These methods are particularly useful when dealing with high-dimensional data or when the relationship between features is non-linear.

Kernel methods rely on the concept of a kernel function, which computes the dot product of two vectors in a transformed feature space without explicitly performing the transformation. This is known as the kernel trick. The kernel trick allows us to work in high-dimensional spaces efficiently, making it possible to solve complex problems that would be computationally infeasible otherwise.

Read on for the pros and cons of kernel methods and a pair of techniques that use them.
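As a quick worked example of the trick (mine, not from the article): take the degree-2 polynomial kernel on two-dimensional vectors x and z,

K(x, z) = (x·z)² = (x₁z₁ + x₂z₂)² = x₁²z₁² + 2·x₁x₂·z₁z₂ + x₂²z₂²

which is exactly the dot product φ(x)·φ(z) for the feature map φ(x) = (x₁², √2·x₁x₂, x₂²). Evaluating K costs one dot product and a square in the original two dimensions, while the explicit feature space already has three dimensions and grows combinatorially with the polynomial degree and the input dimensionality.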

The Ephemeral Nature of Index Rebuilds on RCSI and ADR

Brent Ozar lays out an argument:

Accelerated Database Recovery (ADR) is a database-level feature that makes transaction rollbacks nearly instantaneous. Here’s how it works.

Without ADR, when you update a row, SQL Server copies the old values into the transaction log and updates the row in-place. If you roll that transaction back, SQL Server has to fetch the old values from the transaction log, then apply them to the row in-place. The more rows you’ve affected, the longer your transaction will take.

With ADR, SQL Server writes a new version of the row inside the table, leaving the old version in place as well.

Because you’re a smart cookie, you immediately recognize that storing multiple versions of a row inside the same table is going to cause a storage problem: we’re going to be boosting the size of our table, quickly. However, the problem’s even bigger than that, and it starts right from the beginning when we load the data.

This was an interesting analysis, looking at table growth with ADR + RCSI, with ADR or RCSI alone, and with neither feature. Given that I’m all-in on RCSI, this is particularly interesting to me. And if you want to dig really deeply into index maintenance, Jeff Moden has a fantastic set of presentations, which TriPASS recorded in 2021: GUIDs vs Fragmentation and LOB data. These two presentations help provide sound footing for deciding under what circumstances it makes sense to rebuild an index, and noting that (unless you’re Brent), the answer is probably “less often than you’d think.”
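If you want to see where one of your own databases stands before trying to reproduce that kind of growth test, here is a quick sketch (the database and table names are placeholders):

-- Check whether ADR and RCSI are currently enabled
SELECT name,
       is_accelerated_database_recovery_on,
       is_read_committed_snapshot_on
FROM sys.databases
WHERE name = N'YourDatabase';

-- Flip them on (RCSI needs exclusive access to the database to take effect)
ALTER DATABASE [YourDatabase] SET ACCELERATED_DATABASE_RECOVERY = ON;
ALTER DATABASE [YourDatabase] SET READ_COMMITTED_SNAPSHOT ON;

-- Compare table size before and after a load to see the growth Brent describes
EXEC sp_spaceused N'dbo.YourTable';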
