Press "Enter" to skip to content

Category: T-SQL

Row Counts and Execution Time for Active SQL Server Queries

Kendra Little wants to know what’s happening right now with this query:

I frequently need to see rowcounts and execution time for queries while they’re running. Maybe I’m troubleshooting a slow query that’s still executing, or I want to understand which operators are causing the slowdown before the query completes.

Last week at the PASS Summit I learned some little nuances about how this works that I’d missed.

Click through to learn what Kendra learned (and now what I learned).
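If you have not poked at this before, the DMV usually involved is sys.dm_exec_query_profiles. As a minimal sketch (the session_id of 72 is just a placeholder, and the query being observed generally needs lightweight profiling or live query statistics enabled to populate actuals), something like this shows per-operator row counts while the query is still running:

SELECT
    qp.session_id,
    qp.node_id,
    qp.physical_operator_name,
    qp.row_count,            -- rows this operator has produced so far
    qp.estimate_row_count,   -- the optimizer's estimate, for comparison
    qp.elapsed_time_ms,
    qp.cpu_time_ms
FROM sys.dm_exec_query_profiles AS qp
WHERE qp.session_id = 72     -- placeholder: the session running the slow query
ORDER BY qp.node_id;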


Using the PRODUCT() Function in T-SQL

Rajendra Gupta uses a reducer function:

SQL Server 2025 includes new features and enhancements. In the previous SQL Server 2025 tips, we have explored many new features. Have you explored the new Product() function? If not, this will walk you through the Product() function with several examples.

Read on to see how PRODUCT() works and how thoughtful the development team was in supporting it both as a standard aggregate and as a window function.
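As a rough sketch of what that looks like (the dbo.MonthlyReturns table and its columns are hypothetical, and this assumes SQL Server 2025's documented syntax), PRODUCT() behaves like any other aggregate and also accepts an OVER clause:

-- Aggregate form: compound growth per account
SELECT AccountId,
       PRODUCT(1 + MonthlyReturn) - 1 AS CompoundReturn
FROM dbo.MonthlyReturns
GROUP BY AccountId;

-- Window form: running compound growth over time
SELECT AccountId,
       ReturnMonth,
       PRODUCT(1 + MonthlyReturn) OVER (
           PARTITION BY AccountId
           ORDER BY ReturnMonth
           ROWS UNBOUNDED PRECEDING
       ) - 1 AS RunningCompoundReturn
FROM dbo.MonthlyReturns;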


Invoking REST API Endpoints in SQL Server 2025

Hristo Hristov makes a call:

One highly anticipated new feature in SQL Server 2025 is the ability to call an external REST API endpoint from the database server itself. This new feature opens the door to new data integration scenarios and delivers on the promise to “bring AI closer to data.” What are the steps to follow if you want to use this new feature?

I expect to see two things from this. First, some percentage of developers will abuse it and cause performance problems in the database. Second, some percentage of database administrators will panic about this and try to prevent its use even when it makes sense.

But hey, at least this time, they didn’t use the term “unsafe” to describe something DBAs don’t understand and thus cause a widespread panic.
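For reference, the procedure behind this feature is sp_invoke_external_rest_endpoint, the same one Azure SQL Database already exposes. A minimal sketch of a call, with a placeholder URL, looks something like this:

DECLARE @response nvarchar(max);

EXEC sys.sp_invoke_external_rest_endpoint
    @url = N'https://example.com/api/orders',   -- placeholder endpoint
    @method = N'GET',
    @headers = N'{"Accept":"application/json"}',
    @response = @response OUTPUT;

-- The response comes back as JSON, so OPENJSON can shred it into rows from here.
SELECT @response AS RawJson;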


Slimming down Batch Deletion in SQL Server

Matt Gantz deletes a batch at a time:

In previous articles I showed patterns for working with large amounts of data on big tables while keeping locking at a minimum. These processes can allow migrations and maintenance without requiring downtime but, in environments with unpredictable database workloads, there is a risk of heavy traffic starting at any time and disrupting a once smooth operation. In this article, I’ll demonstrate how to augment these processes to allow dynamic adjustment of the configuration.

For most systems, the main limitation these techniques run into is the speed and throughput of I/O (input/output). During periods of low traffic, a large batch size may perform great with no impact to production, but as traffic increases, the storage subsystem may not be able to keep up.

Read on for two mechanisms to make batch operations a little less stressful on the server.

A consulting customer of mine has a fairly clever mechanism for this as well: track the number of non-trivial active processes before the batch begins. If that number is above a certain threshold (say, 10 or 15 or whatever), pause for a pre-defined period of time before running. That way, if the server isn’t very active, batches can keep processing willy-nilly. But once things get busy, it reduces its activity load.
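That check is easy enough to sketch with sys.dm_exec_requests. The threshold, delay, batch size, and table below are all hypothetical, but the shape of the idea is:

DECLARE @activeRequests int;

SELECT @activeRequests = COUNT(*)
FROM sys.dm_exec_requests AS r
WHERE r.session_id > 50                        -- ignore system sessions
  AND r.session_id <> @@SPID                   -- ignore this batch process itself
  AND r.status IN ('running', 'runnable', 'suspended');

IF @activeRequests >= 10                       -- hypothetical busyness threshold
    WAITFOR DELAY '00:00:30';                  -- back off before the next batch

DELETE TOP (5000)                              -- hypothetical batch size and table
FROM dbo.StagingRows
WHERE ProcessedDate < DATEADD(DAY, -90, GETDATE());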


Multiple Filters with Regular Expressions

Louis Davidson shows off some more of the power of regular expressions:

One of the practical uses of RegEx is more powerful filtering. One of the projects I am working on (very slowly) is sharing some SQL utilities on GitHub: utilities like looking at the metadata of a table, searching for columns, database sizes, etc. I usually use LIKE to filter data, which lets me simply use an equality search, or I can also do a partial value search when I don’t know exactly what I am looking for.

LIKE is quite useful but, as Louis points out, it does have its limits. And it is precisely at those limits where regular expressions do so well.
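To make the contrast concrete, here is a small example using SQL Server 2025’s REGEXP_LIKE against column metadata. The pattern itself is just illustrative, but it compresses several LIKE predicates into one expression:

-- The LIKE version needs one predicate per naming variation
SELECT t.name AS TableName, c.name AS ColumnName
FROM sys.columns AS c
    JOIN sys.tables AS t ON t.object_id = c.object_id
WHERE c.name LIKE 'Customer%Id'
   OR c.name LIKE 'Customer%Key'
   OR c.name LIKE 'Client%Id'
   OR c.name LIKE 'Client%Key';

-- One regular expression covers all four cases
SELECT t.name AS TableName, c.name AS ColumnName
FROM sys.columns AS c
    JOIN sys.tables AS t ON t.object_id = c.object_id
WHERE REGEXP_LIKE(c.name, '^(Customer|Client).*(Id|Key)$');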


Refactoring Code Segments in SQL

Lee Asher performs refactoring:

Over time the term “refactoring” has expanded and is sometimes used to mean code quality improvement in general, but here we are using it with its original meaning: condensing and eliminating redundant segments of code. Like factoring a number in math, we break the code into smaller blocks, identify any repeated elements, then replace them with a single reference.

I appreciate that Lee is sticking to the original meaning of the term here. Interestingly, Lee doesn’t cover T-SQL functions at all. On net, that’s probably a good thing, especially for scalar functions. It’s easy to find cases where converting a function to an inline call can speed up query performance by 3x or more.

The mechanisms Lee does use could have an impact on query performance, especially lateral join/APPLY. But for some of these, as long as you do not overuse the technique, performance will be pretty similar.
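The APPLY trick in particular is worth a quick illustration: CROSS APPLY (VALUES ...) lets you name an expression once and reuse it, rather than repeating it in the SELECT, WHERE, and ORDER BY clauses. The table and columns here are hypothetical:

-- Before: the same expression appears three times
SELECT OrderId,
       Quantity * UnitPrice * (1 - Discount) AS LineTotal
FROM dbo.OrderLines
WHERE Quantity * UnitPrice * (1 - Discount) > 1000
ORDER BY Quantity * UnitPrice * (1 - Discount) DESC;

-- After: compute it once in an APPLY and reference it by name
SELECT ol.OrderId,
       calc.LineTotal
FROM dbo.OrderLines AS ol
    CROSS APPLY (VALUES (ol.Quantity * ol.UnitPrice * (1 - ol.Discount))) AS calc(LineTotal)
WHERE calc.LineTotal > 1000
ORDER BY calc.LineTotal DESC;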


The Joys of FORMATMESSAGE

Louis Davidson listened to some advice:

A few weeks ago, I wrote a post on using temporary stored procedures in SQL Server. Kevin Feasel of Curated SQL had this reply: Using Temporary Stored Procedures to Output Common Messages. I had heard of FORMATMESSAGE before, but I had completely coupled this in my mind with formatting SQL Server error messages (which turns out to be a great use of this tech).

Click through to see how it works and some additional testing with RAISERROR().
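If you have only ever used FORMATMESSAGE with message numbers from sys.messages, the ad hoc form (available since SQL Server 2016) is the part worth trying. A small sketch with made-up values:

DECLARE @rows int = 1042,
        @tableName nvarchar(128) = N'dbo.Orders',   -- hypothetical values
        @elapsedMs int = 317;

DECLARE @msg nvarchar(2048) =
    FORMATMESSAGE(N'Processed %d rows from %s in %d ms.', @rows, @tableName, @elapsedMs);

-- Severity 0 with NOWAIT pushes the message to the client immediately,
-- which is handy for progress reporting inside long-running loops.
RAISERROR(@msg, 0, 1) WITH NOWAIT;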


Fuzzy Text Match in SQL Server

Rob Farley is excited:

However, SQL Server 2025 does bring some great options for doing fuzzy string matches, making custom Data Quality options even richer. I’ve spoken about this at some user groups recently (including tomorrow, remotely for TriPASS, and in a few weeks in Melbourne and Sydney for Difinity), and in that session I go much deeper into how I see data matching going. I’ll also write more about these methods in future posts, but it’ll take a few posts, covering quite a few sub-topics.

If you want to see that session, our user group (the Triangle Area SQL Server Users Group) is hosting it Wednesday morning Australia time, or this evening US Eastern Standard Time.
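As a teaser for what those options look like (and hedging that the exact function list is still settling in SQL Server 2025), the new fuzzy-matching functions along the lines of EDIT_DISTANCE and JARO_WINKLER_SIMILARITY work as plain scalar calls, so they slot straight into matching queries:

-- Hypothetical name pair to compare
SELECT
    EDIT_DISTANCE(N'Katherine Smith', N'Kathryn Smyth')            AS EditDistance,
    EDIT_DISTANCE_SIMILARITY(N'Katherine Smith', N'Kathryn Smyth') AS EditSimilarity,
    JARO_WINKLER_SIMILARITY(N'Katherine Smith', N'Kathryn Smyth')  AS JaroWinklerSimilarity;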


Calculating Exponential Moving Average in T-SQL

Rick Dobson watches the flow:

Exponential moving averages (emas) are a powerful means of detecting changes in time series data. However, if you are new to this task, you may be wondering how to choose from conflicting advice about how to calculate emas. This tip reviews several of the most popular methods for calculating moving averages. Additionally, this tip presents T-SQL code samples with common table expressions and stored procedures for generating emas from an underlying time series dataset.

“Emas don’t just track trends—they reveal momentum in motion.” That’s why they’re favored when recent values matter most—and why this tip focuses on helping you calculate them with precision.

Read on for the formula and a couple of lengthy scripts to generate it.
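The core recurrence is ema_t = alpha * value_t + (1 - alpha) * ema_(t-1), with alpha = 2 / (N + 1) and the first observation used as the seed. As a compact sketch alongside the article's stored procedures (the table and column names below are hypothetical), a recursive CTE can walk the series in order:

DECLARE @N int = 10;
DECLARE @alpha decimal(10, 6) = 2.0 / (@N + 1);

WITH Ordered AS
(
    SELECT TradeDate,
           ClosePrice,
           ROW_NUMBER() OVER (ORDER BY TradeDate) AS rn
    FROM dbo.DailyPrices
),
Ema AS
(
    SELECT rn,
           TradeDate,
           ClosePrice,
           CAST(ClosePrice AS decimal(18, 6)) AS EmaValue   -- seed: first ema = first observation
    FROM Ordered
    WHERE rn = 1

    UNION ALL

    SELECT o.rn,
           o.TradeDate,
           o.ClosePrice,
           CAST(@alpha * o.ClosePrice + (1 - @alpha) * e.EmaValue AS decimal(18, 6))
    FROM Ordered AS o
        JOIN Ema AS e ON o.rn = e.rn + 1
)
SELECT TradeDate, ClosePrice, EmaValue
FROM Ema
ORDER BY rn
OPTION (MAXRECURSION 0);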


Comparing Sets of Data in T-SQL

Louis Davidson figured out which of these was not like the others, which of these just didn’t belong:

There are many occasions when we want to see the differences between two sets of data. Sometimes a whole table, a subset of a table, or even the results from a couple of queries, like in a unit test.

Maybe you want to see that two sets are exactly the same, for example a domain table in DEV and PROD, or maybe even from source control. You might have an orders table and an orders_history table and want to see the overlap/changes over a given period of time, for example, to clean out any useless history.

No matter what the reason, there is a query pattern that will work for you. In this blog I will demonstrate several of these techniques and why you might want to use them in different places.

Click through for those techniques. I am particularly fond of INTERSECT/EXCEPT because of how they handle missing data and typically perform quite well.
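As a quick reminder of why that pattern is so pleasant (the dev/prod tables here are hypothetical), EXCEPT and INTERSECT compare entire rows and treat NULLs as equal, so you do not have to write NULL-safe comparisons by hand:

-- Rows in the DEV copy that are missing from (or differ in) PROD
SELECT StatusCode, StatusName FROM dev.StatusCodes
EXCEPT
SELECT StatusCode, StatusName FROM prod.StatusCodes;

-- Rows the two copies have in common, NULLs included
SELECT StatusCode, StatusName FROM dev.StatusCodes
INTERSECT
SELECT StatusCode, StatusName FROM prod.StatusCodes;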
