Press "Enter" to skip to content

Author: Kevin Feasel

TempDB Resource Governor in SQL Server 2025

Brent Ozar tries out an update to Resource Governor:

We’ve finally got a way to defend ourselves. We can configure Resource Governor to divide people into groups (something that seems to be trendy lately), and then cap how much TempDB space each group can consume. You don’t even have to divide them into groups, either (take note, politicians) – you can just cap how much resources everyone can use altogether.

To keep things simple for the sake of this blog post, let’s just assume we’re limiting everyone’s usage altogether. 

Click through to see how it works, as well as some of the caveats that are going to require foresight before this works the way you’ll want it to.
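If it’s been a while since you last configured Resource Governor, the shape of the change looks something like the sketch below. This caps everyone together via the default workload group; note that GROUP_MAX_TEMPDB_DATA_MB is my assumed name for the new SQL Server 2025 option, so check Brent’s post and the documentation for the exact keyword.

-- Minimal sketch: cap TempDB usage for all requests via the default
-- workload group. GROUP_MAX_TEMPDB_DATA_MB is an assumed name for the
-- new SQL Server 2025 option; verify the exact keyword in the docs.
ALTER WORKLOAD GROUP [default]
    WITH (GROUP_MAX_TEMPDB_DATA_MB = 1024);  -- hypothetical ~1 GB cap
GO
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO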

Trace Flag 1448 and Replication

Garry Bargsley covers the behavior of a trace flag:

In SQL Server environments where transactional replication runs alongside Always On Availability Groups (AGs), DBAs sometimes face a frustrating scenario: replication stalls when a secondary replica or subscriber is offline for maintenance, patching, or unexpected downtime.

By default, SQL Server’s Log Reader Agent is cautious. It only marks transactions as ready for replication once they are hardened on both the primary and all replicas. This ensures consistency across the AG, but it can also cause replication to stall if an asynchronous replica or subscriber is unavailable for an extended period of time.

Click through to see how it works and what the consequences are.
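As a quick reference, enabling a trace flag globally is a one-liner, though it won’t survive a restart unless you also add it as a startup parameter:

-- Check whether trace flag 1448 is currently enabled.
DBCC TRACESTATUS (1448);

-- Enable it globally (-1 = global scope). This does not persist across
-- restarts; for that, add -T1448 as a startup parameter via SQL Server
-- Configuration Manager.
DBCC TRACEON (1448, -1);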

Temporary Stored Procedures in SQL Server

Louis Davidson is only here for a little while:

Pretty much every T-SQL programmer knows about temp tables very early in their journey with the language. They are generally wonderful tools that, when used properly, can be invaluable for storing data for a short amount of time. When used improperly they can be somewhat problematic, but certainly they are a tool we all use regularly.

But did you realize you can create a temporary stored procedure as well? It is not something I had ever used before, and while I remember hearing they existed a few times over the years, I had never tried them. The other day, I was creating an informal demo of some data, and once I had written the main query that listed rows that needed to be looked at, I needed a way to display the details of some rows.

I think I’ve worked with production code that included temporary stored procedures exactly once; otherwise, I’ve never used them either. But read on as Louis takes us through the utility of this concept.
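If you’ve never seen one, the syntax is exactly what you’d guess: a # prefix, just like a temp table. Here’s a minimal sketch, with a hypothetical table standing in for real data:

-- A local temporary stored procedure lives in tempdb and goes away
-- when the creating session ends.
CREATE PROCEDURE #ShowRowDetails
    @OrderID int
AS
BEGIN
    SET NOCOUNT ON;

    SELECT o.*
    FROM dbo.Orders AS o  -- hypothetical table, purely for illustration
    WHERE o.OrderID = @OrderID;
END;
GO

EXEC #ShowRowDetails @OrderID = 42;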

DAX: VALUES in SUMMARIZE

Marco Russo and Alberto Ferrari talk about values:

We discussed VALUES in previous articles: Choosing between DISTINCT and VALUES in DAX and Using VALUES in iterators. However, there is a third case where VALUES could be used with a table reference, which is when you use SUMMARIZE to group by columns you want to iterate. In this article, we describe this particular scenario to understand when VALUES is needed to retrieve the blank for an invalid relationship using SUMMARIZE and SUMMARIZECOLUMNS.

When you use SUMMARIZE, you may want to use VALUES over the aggregated table in case it could have an additional blank row for an invalid relationship, and you must ensure that this blank row is included. This condition is uncommon because SUMMARIZE often includes blank rows for invalid relationships that are implicitly included. For example, consider the following measure that uses SUMMARIZE over the Sales table, grouping by Customer[State] and Customer[City] to apply an adjustment to Columbus, Ohio (note that there are other cities with that name in other states):

Curated SQL tip number 17: if you want to show up here, use the best city in Ohio as a (positive) example.

Using Log Parser to Preprocess Data

Lucas Kartawidjaja gives us a blast from the past:

When dealing with data inside SQL Server, especially when it’s delimited by a clear separator character, earlier versions (before SQL Server 2016) required us to write custom parsing functions—either as T-SQL user-defined functions or CLR functions. Starting with SQL Server 2016, we can use the built-in STRING_SPLIT() function to handle most of these tasks.

However, more often than not, we need to parse data that resides outside SQL Server—for example, in log files, CSV data, or other data sources. For these cases, I often use Microsoft Log Parser, a free command-line tool available here.

Click through for a demonstration of how it works. Or a reminder, if you’ve been in the business for a long, long time.
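For the in-database case Lucas mentions, the built-in option looks like this:

-- STRING_SPLIT, available since SQL Server 2016, handles simple
-- single-character delimiters.
SELECT value
FROM STRING_SPLIT('alpha,beta,gamma', ',');

-- SQL Server 2022 adds an optional third argument that returns the
-- ordinal position of each element.
SELECT value, ordinal
FROM STRING_SPLIT('alpha,beta,gamma', ',', 1);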

ROW_NUMBER() Filtering Performance

Erik Darling answers an office hours question in detail. The question comes down to why a query filtering on ROW_NUMBER() equal to 1 could behave differently from the same query filtering on ROW_NUMBER() less than or equal to 1. Knowing that ROW_NUMBER() starts at 1 and can never be anything other than a natural number, you’d think that SQL Server would treat these exactly the same. But Erik shows an example where the two can differ, and the answer was a good one. I will admit that my pre-video guess was wrong, but once he showed the execution plans, things clicked. And, like Erik mentions, this is why it’s so important to dig into the execution plan, because the answers are typically in there somewhere.
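For reference, the pattern in question looks something like this sketch against a hypothetical table; the interesting part is comparing the execution plans for the two filters:

-- dbo.Orders and its columns are hypothetical. The two predicates are
-- logically equivalent, but the optimizer may produce different plans.
WITH Numbered AS
(
    SELECT o.CustomerID,
           o.OrderDate,
           ROW_NUMBER() OVER
               (PARTITION BY o.CustomerID ORDER BY o.OrderDate DESC) AS rn
    FROM dbo.Orders AS o
)
SELECT n.CustomerID, n.OrderDate
FROM Numbered AS n
WHERE n.rn = 1;  -- compare the plan when this reads: WHERE n.rn <= 1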

Automating Power BI Load Testing via Fabric Notebook

Gilbert Quevauvilliers grabs a query:

Load testing is essential when working with Microsoft Fabric capacity. With limited resources, deploying a Power BI report without testing can lead to performance issues, downtime, and frustrated users. In this series, I’ll show you how to automate load testing using Fabric Notebooks, making the process faster, easier, and repeatable.

Inspired by Phil Seamark’s approach, this method eliminates manual complexity and allows you to capture real user queries for accurate testing.

Read on for the first part, in which Gilbert uses the Performance Analyzer to capture query details.

Parsing and Avoiding Composite Keys

Hugo Kornelis prefers surrogate keys:

I am currently most known for my performance tuning and execution plan work. But when I started working with databases, I actually came from a background of data modelling, database design, and normalization. And that has never fully left me. In fact, I have in the past two years created a whole series of YouTube videos about database design and normalization. And a much longer time ago, I recorded a Pluralsight course on this topic that is still available for viewing.

One of the very basics of schema design for a relational database is to store atomic values in every column. One column, one value, no more, no less. That automatically rules out all repeating groups. So a single column to list all my email addresses? Sorry. You are doing it wrong. And you will pay the price when you try to protect the integrity of your data. Or even just report on it.

I wanted to copy this second paragraph because CJ Date, in his book Database Design and Relational Theory (2nd edition), issues a mea culpa around repeating groups, stating that it’s best to ignore his prior arguments on the topic. Though in Date’s case, he specifies a repeating group as something like { Name, Email1, Email2, Email3, … } rather than a delimited list.

But even composite items can be in 1st normal form. For example, a US telephone number has a country code (+1), an area code, an exchange, and a four-digit number, followed potentially by an extension. The name “Bob” is an array of characters, and each array of characters is a composite of bits forming 1-4 bytes depending on collation and other details. At the end of the day, first normal form is about the shape of the tuple (a heading exists with a known set of names and data types; all tuples follow the same header; no duplicate tuples are allowed; attribute and tuple order does not matter for operations; and all attributes are regular inasmuch as they have names, data types, are not hidden, etc.).
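To make the contrast concrete, here is a hypothetical sketch of the “each segment is its own attribute” approach for phone numbers, appropriate when the business genuinely needs to address the segments separately:

-- Hypothetical design: each segment of a US phone number as its own
-- attribute. All object and column names here are for illustration only.
CREATE TABLE dbo.PhoneNumber
(
    PhoneNumberID int IDENTITY(1, 1) NOT NULL
        CONSTRAINT PK_PhoneNumber PRIMARY KEY,
    CountryCode   varchar(3)  NOT NULL
        CONSTRAINT DF_PhoneNumber_CountryCode DEFAULT ('1'),
    AreaCode      char(3)     NOT NULL,
    Exchange      char(3)     NOT NULL,
    LineNumber    char(4)     NOT NULL,
    Extension     varchar(10) NULL
);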

Ultimately, what Hugo is saying is good practice: if you have a business need to identify segments of an attribute separately, then it makes sense for each segment to be an attribute on its own. But because there is no solid mathematical property that explains exactly what an attribute is, and because database normalization is ultimately a series of mathematical formulations, we cannot use normalization as the reason to keep or separate the contents of an attribute. Thankfully, there is more to database design than normalization alone (and thankfully, database normalization is a robust enough field that it still provides good advice people should follow).

Splitting Strings with T-SQL

Courtney Woolum splits a string:

If you’ve escaped string parsing thus far, count thyself lucky. I’ve seen some numbing scripts from pre-SQL Server 2016 when STRING_SPLIT was introduced. I think the fact this month’s T-SQL Tuesday is entirely on string parsing says enough about how many ways there are to paint this particular cat.

In the post, Courtney mentions learning early on to avoid using the APPLY operator. I want to have harsh words with whoever taught her that. Purposefully avoiding the APPLY operator artificially hamstrings your ability to write effective T-SQL code.
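For what it’s worth, here is the kind of thing APPLY makes trivial when splitting strings (the table and column are hypothetical):

-- CROSS APPLY runs the splitter once per outer row, turning a
-- delimited column into one row per element.
SELECT c.CustomerID,
       s.value AS EmailAddress
FROM dbo.Customers AS c
CROSS APPLY STRING_SPLIT(c.Emails, ';') AS s;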

HTML Parsing in T-SQL

Louis Davidson goes looking for list items:

From the title of “Favorite String Parsing”, I will say 100% it is using SQL Server 2025’s addition of Regular Expressions. Previously, parsing text in SQL Server was one of my least favorite things to do. Regular expressions will make it just a bit nicer, because it has a lot more power than SUBSTRING, LEFT, RIGHT, and CHARINDEX/PATINDEX. All generally “good enough” functions for a lot of the things you need to do, but often woefully inadequate for parsing large amounts of text.

Yeah, T-SQL RegEx is definitely a nicer approach, though HTML doesn’t have to follow the consistency rules of XML due to browsers being very forgiving in their interpretation of the language, so it’s easy to get tangled up trying to parse websites.
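As a rough sketch of what the SQL Server 2025 approach can look like (double-check the parameter order against the documentation before relying on it):

-- REGEXP_SUBSTR in SQL Server 2025; the parameter order here
-- (source, pattern, start, occurrence, flags) is my reading of the
-- docs, so verify before using. The HTML fragment is sample data.
DECLARE @html nvarchar(max) =
    N'<ul><li>First item</li><li>Second item</li></ul>';

-- Pull the second <li> element (occurrence = 2), case-insensitive.
SELECT REGEXP_SUBSTR(@html, N'<li>.*?</li>', 1, 2, 'i') AS SecondListItem;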
