Press "Enter" to skip to content

Author: Kevin Feasel

Optimized Locking Slowdowns in SQL Server 2025

Rebecca Lewis points out when optimized locking fails:

SQL Server 2025 introduces a feature called Optimized Locking, designed to reduce blocking across read–modify workloads. In many environments it helps, but certain mixed workloads experience longer lock waits, stalled S → U → X transitions, and even occasional threadpool saturation.

My inner 12-year-old has already made about a half-dozen plays on the phrase “When optimized locking SUX.”
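For anyone who wants to poke at this, optimized locking is a database-scoped option that depends on accelerated database recovery. A minimal sketch for enabling it and confirming it took, using a hypothetical database name:

-- Optimized locking requires Accelerated Database Recovery (ADR).
ALTER DATABASE [WideWorldImporters] SET ACCELERATED_DATABASE_RECOVERY = ON;
ALTER DATABASE [WideWorldImporters] SET OPTIMIZED_LOCKING = ON;

-- Returns 1 if the feature is active for the database.
SELECT DATABASEPROPERTYEX(N'WideWorldImporters', 'IsOptimizedLockingOn') AS IsOptimizedLockingOn;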

Comparing TRANSLATE() and REPLACE()

Louis Davidson is lost in translation:

The data I am working with sometimes has people with multiple parts to their name (Mary Jo, Cindy Lou), or sometimes Fred/Joe, Mary and Jack, Mary & Jack, or what have you. My goal was to turn these names into little delimited lists that I could parse on a space character with STRING_SPLIT, and there were a “few” of these cases. This was the code I had arrived at when I reached the “good enough” stage of my coding.

Louis had 19 nested REPLACE() calls, but Certified Good Guy Erik Darling shows him the way.
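As a quick illustration of the difference, with an invented sample string: REPLACE() swaps whole substrings and has to be nested once per pattern, while TRANSLATE() (SQL Server 2017 and later) maps characters positionally in a single call. Its second and third arguments must be the same length, and multi-character patterns such as N' and ' still need REPLACE().

DECLARE @name nvarchar(50) = N'Mary & Jack';

-- Nested REPLACE(): one call per substring to swap out.
SELECT REPLACE(REPLACE(@name, N'&', N','), N'/', N',') AS via_replace;

-- TRANSLATE(): each character in the second argument maps to the
-- character in the same position in the third argument.
SELECT TRANSLATE(@name, N'&/', N',,') AS via_translate;

Both queries return N'Mary , Jack' here; the win comes when a single TRANSLATE() replaces a long pyramid of nested REPLACE() calls.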

Sending E-Mails from Data Factory Pipelines

Andy Brownsword glues together a few resources:

Not all pipelines in Azure have native options for sending emails. The lucky Fabric folks have an Office 365 connection ready to rock. Data Factory and Synapse don’t have email capability built-in and need another option to send messages. So for everyone who wants more emails for their inbox rules to handle (delete), we’ll build and call a Logic App.

For this example we’ll have basic email parameters – To, Subject, Body – along with an optional attachment from a storage account. Also note that I’ve left all actions with their default names – if you change them make sure to update expressions accordingly.

Read on to see how it all works.
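To give a sense of the shape of the call, the pipeline's Web activity POSTs a JSON payload to the Logic App's HTTP trigger along these lines. The field names here are illustrative; they need to match whatever schema you define in the trigger:

{
  "to": "team@example.com",
  "subject": "Pipeline load succeeded",
  "body": "Load completed at 2025-01-01T00:00:00Z.",
  "attachmentPath": "container/output/results.csv"
}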

Thoughts on Renaming sa

Andreas Wolter asks, what is in a name?

This is another one of those subjects that keep circulating: should you rename your sa account?

Plenty of “security check” scripts swear you should. Meanwhile, when you talk to actual humans in the real world, you’ll notice that almost nobody does it. (Funny how that works.)

So what does Sarpedon Quality Lab® recommend – and why? Our answer (read to the end) may surprise you.

Before reading Andreas’s post, I wanted to note that, when I was a DBA and had control of such things, I would actually rename the sa account and then create a separate account called “sa” with zero permissions, not even the ability to connect. I never considered it a critical part of a security posture, but knowing how many scripts try to brute-force an account called sa, I figured setting up a false front like that would add some protection on the margin.

After reading Andreas’s post, I learned that doing so breaks replication. I wasn’t aware of that, though it makes sense, as I wasn’t using replication at the time. Making sa’s password extremely long is also a good idea.
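For the curious, the false front I describe looks roughly like the following. The replacement login name is made up, and as Andreas notes, you would want to test for side effects like broken replication before trying this in production.

-- Rename the real sa login.
ALTER LOGIN sa WITH NAME = [server_admin_real];

-- Create a decoy login named sa with no rights at all.
CREATE LOGIN [sa] WITH PASSWORD = N'<very long random password>';
ALTER LOGIN [sa] DISABLE;
DENY CONNECT SQL TO [sa];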

Comparing JSON Data Types in MySQL and Postgres

Aisha Bukar draws some comparisons:

MySQL and PostgreSQL are two of the most popular relational database systems in the world. Both are open-source, widely used in web and enterprise applications, and support structured data in tables.

Modern applications, however, often work with semi-structured data that doesn’t always neatly fit into tables with rows and columns. This type of data gets its name because it still has some organization but doesn’t follow a strict format.

JSON (JavaScript Object Notation) is a popular way to store and share this kind of data. It’s a text format that’s easy for both people and computers to understand.

Read on to see what’s supported in each of these platforms, as well as strengths and limitations of using JSON in each.
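For a quick taste of the syntax on each side, with invented table and key names:

-- MySQL: a validated JSON column, queried with JSON path expressions.
CREATE TABLE events (id INT PRIMARY KEY, payload JSON);
SELECT payload->>'$.user' FROM events;

-- PostgreSQL: jsonb stores a parsed binary form and supports GIN indexing;
-- the plain json type keeps the original text verbatim instead.
CREATE TABLE events (id int PRIMARY KEY, payload jsonb);
SELECT payload->>'user' FROM events;
CREATE INDEX idx_events_payload ON events USING GIN (payload);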

The 1600 Column Limit in PostgreSQL

Andreas Scherbaum covers a limitation:

A recent blog posting by Frédéric Delacourt (Did you know? Tables in PostgreSQL are limited to 1,600 columns) reminded me once again that in the analytics world customers sometimes ask for more than 1600 columns.

Read on for the technical limitations around how many columns could conceivably fit in a PostgreSQL table. But I will rely here on Swart’s 10% Rule: if you have more than 160 columns on a single table, that’s a sign you should step back and ask why this is the case. Even the widest of data warehousing dimensions is likely to be considerably smaller than this, and the smart move is to rethink that data model before agitating for additional columns.
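If you want to see the hard cap for yourself, here is a quick throwaway demonstration, assuming a scratch database:

-- Generate DDL for a 1,601-column table; Postgres rejects it outright.
DO $$
DECLARE
  cols text;
BEGIN
  SELECT string_agg('c' || g || ' int', ', ')
    INTO cols
    FROM generate_series(1, 1601) AS g;
  EXECUTE 'CREATE TABLE too_wide (' || cols || ')';
END $$;
-- ERROR:  tables can have at most 1600 columns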

Join Planning in PostgreSQL 19

Robins Tharakan notes an upcoming performance boost:

The hidden cost of knowing too much. That’s one way to describe what happens when your data is skewed, Postgres statistics targets are set high, and the planner tries to estimate a join.

For over 20 years, Postgres used a simple O(N^2) loop to compare (equi-join) Most Common Values (MCVs) during join estimation. It worked fine when statistics targets were small (default_statistics_target defaults to 100). But in the modern era, Postgres best practices often recommend cranking that up, and customers are known to use higher values (1000 and sometimes even higher) to handle complex data distributions. Throw a 10-join query into the mix, and this “dumb loop” can easily become a silent performance killer during planning.

That changes in Postgres 19.

Read on for an example of the problem and what’s coming in Postgres 19 to mitigate it.
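The knob in question is the per-column statistics target: raising it lengthens the MCV lists the planner has to cross-compare during join estimation. A sketch of where the cost shows up, with hypothetical table and column names:

-- Raise the statistics target for a skewed join column (10000 is the max).
ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 10000;
ANALYZE orders;

-- EXPLAIN ANALYZE reports Planning Time separately from Execution Time;
-- pre-Postgres 19, the quadratic MCV comparison lands in Planning Time.
EXPLAIN ANALYZE
SELECT *
FROM orders AS o
JOIN customers AS c ON c.customer_id = o.customer_id;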

Exposing Materialized Views in Microsoft Fabric Lakehouses

Ed Lima makes some data available to other tools:

In today’s data-driven world, the ability to quickly expose data through modern APIs is crucial. Microsoft Fabric’s API for GraphQL combined with Materialized Lake Views offers a powerful solution that bridges the gap between your Fabric Lakehouse data and application developers who need fast, flexible access to your data.

In this guide, we’ll walk you through how to create a materialized view in a Lakehouse and expose it through a GraphQL API—all within the Microsoft Fabric ecosystem. This approach gives you the best of both worlds: the performance optimization of materialized views and the developer-friendly querying capabilities of GraphQL.

I’d say one interesting reason why you might want to do this is to feed data to products like Teams, Power Automate, or Copilot Studio. In those cases, having the data accessible via GraphQL beats wrangling finicky connectors that may or may not exist.
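On the Lakehouse side, defining the materialized lake view looks roughly like the following Spark SQL. Treat this as a sketch with invented schema and table names; check the Fabric documentation for the exact syntax and refresh options.

-- A materialized lake view aggregating hypothetical bronze-layer tables.
CREATE MATERIALIZED LAKE VIEW IF NOT EXISTS silver.customer_order_counts
AS
SELECT c.customer_id,
       c.customer_name,
       COUNT(*) AS order_count
FROM bronze.customers AS c
JOIN bronze.orders AS o
  ON o.customer_id = c.customer_id
GROUP BY c.customer_id, c.customer_name;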

Backing up a Microsoft Fabric Workspace

Gilbert Quevauvilliers finds a gap and fills it:

In the high-stakes world of data architecture, where downtime can cascade into real business disruptions, I’ve learned that even the most robust platforms have their blind spots. Just last month, while collaborating with a client’s Architecture team on their disaster recovery strategy, we uncovered a subtle but critical gap in Microsoft Fabric: while OneLake thoughtfully mirrors data across multiple regions by default, other workspace items—like notebooks, semantic models, and pipelines—aren’t directly accessible in a failover scenario without extra steps. For the nitty-gritty on Fabric’s built-in reliability features, check out this Microsoft Learn guide.

That’s the spark that led me down this rabbit hole, and in this post, I’ll walk you through a practical solution: a Python Notebook that automates backing up your entire Fabric workspace to OneLake and an Azure Storage Account for that extra layer of redundancy. Whether you’re prepping for the worst or just embracing the “better safe than sorry” mindset, this approach gives you portable, versioned copies you can restore quickly.

Click through for the notebook, as well as instructions on how to use it.
