
Day: December 5, 2025

Optimized Locking Slowdowns in SQL Server 2025

Rebecca Lewis points out when optimized locking fails:

SQL Server 2025 introduces a feature called Optimized Locking, designed to reduce blocking across read–modify workloads. In many environments it helps, but certain mixed workloads experience longer lock waits, stalled S → U → X transitions, and even occasional threadpool saturation.

My inner 12-year-old has already made about a half-dozen plays on the phrase “When optimized locking SUX.”
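
If you want to see where you stand before digging into Rebecca's scenarios, here is a quick sketch for checking and enabling the feature. The database name is a placeholder, and note that optimized locking builds on Accelerated Database Recovery, so that needs to be on first:

    -- Check whether optimized locking is enabled (YourDatabase is a placeholder name)
    SELECT name, is_optimized_locking_on
    FROM sys.databases
    WHERE name = N'YourDatabase';

    -- Optimized locking depends on Accelerated Database Recovery
    ALTER DATABASE YourDatabase SET ACCELERATED_DATABASE_RECOVERY = ON;
    ALTER DATABASE YourDatabase SET OPTIMIZED_LOCKING = ON;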


Comparing TRANSLATE() and REPLACE()

Louis Davidson is lost in translation:

The data I am working with sometimes has people with multiple parts to their name (Mary Jo, Cindy Lou, etc.), or sometimes Fred/Joe, Mary and Jack, Mary & Jack, or what have you. My goal was to turn these names into little delimited lists that I could parse on a space character with STRING_SPLIT, and there were a “few” of these cases. This was the code I had arrived at when I reached the “good enough” stage of my coding.

Louis had 19 nested REPLACE() calls, but Certified Good Guy Erik Darling shows him the way.
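
For a quick taste of why TRANSLATE() helps here (a rough sketch, not Louis's or Erik's actual code): TRANSLATE() swaps characters one-for-one in a single call, whereas REPLACE() handles one substring at a time, so each extra delimiter costs another level of nesting.

    -- One TRANSLATE() maps each character in the second argument to the
    -- corresponding character in the third: / & , all become spaces
    SELECT TRANSLATE('Fred/Joe & Mary, Jack', '/&,', '   ') AS cleaned;

    -- The same result with REPLACE() needs one nested call per character
    SELECT REPLACE(REPLACE(REPLACE('Fred/Joe & Mary, Jack', '/', ' '), '&', ' '), ',', ' ') AS cleaned;

From there, STRING_SPLIT on a space character does the rest.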


Sending E-Mails from Data Factory Pipelines

Andy Brownsword glues together a few resources:

Not all pipelines in Azure have native options for sending emails. The lucky Fabric folks have an Office 365 connection ready to rock. Data Factory and Synapse don’t have email capability built-in and need another option to send messages. So for everyone who wants more emails for their inbox rules to handle (delete), we’ll build and call a Logic App.

For this example we’ll have basic email parameters – To, Subject, Body – along with an optional attachment from a storage account. Also note that I’ve left all actions with their default names – if you change them make sure to update expressions accordingly.

Read on to see how it all works.


Thoughts on Renaming sa

Andreas Wolter asks, what is in a name?

This is another one of those subjects that keep circulating: should you rename your sa account?

Plenty of “security check” scripts swear you should. Meanwhile, when you talk to actual humans in the real world, you’ll notice that almost nobody does it. (Funny how that works.)

So what does Sarpedon Quality Lab® recommend – and why? Our answer (read to the end) may surprise you.

Before reading Andreas’s post, I wanted to note that, when I was a DBA and had control of such things, I would actually rename the sa account and then create a separate account called “sa” with zero permissions, not even the ability to connect. I never considered it a critical part of a security posture, but knowing how many scripts try to brute-force an account called sa, I figured setting up a false front like that would add some protection on the margin.
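
For illustration, here is roughly what that decoy setup looks like in T-SQL. The new login name and password are made up, and of course test this somewhere safe before trying it anywhere that matters:

    -- Rename the real sa login (dba_admin is a placeholder name)
    ALTER LOGIN sa WITH NAME = dba_admin;

    -- Create a decoy login named sa with no rights and no ability to connect
    CREATE LOGIN sa WITH PASSWORD = 'An-Extremely-Long-Random-Password-0123456789!';
    DENY CONNECT SQL TO sa;
    ALTER LOGIN sa DISABLE;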

One thing I learned from Andreas’s post is that doing so would break replication. I hadn’t been aware of that, though it makes sense, since I wasn’t using replication at the time. Making sa’s password extremely long is also a good idea.


The 1600 Column Limit in PostgreSQL

Andreas Scherbaum covers a limitation:

A recent blog posting by Frédéric Delacourt (Did you know? Tables in PostgreSQL are limited to 1,600 columns) reminded me once again that in the analytics world customers sometimes ask for more than 1600 columns.

Read on for the technical limitations around how many columns could conceivably fit in a PostgreSQL table. But I will rely here on Swart's 10% Rule: if you have more than 160 columns on a single table, that's a sign you should step back and ask why this is the case. Even the widest of data warehousing dimensions is likely to be considerably smaller than this, and the smart move is to rethink that data model before agitating for additional columns.
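
If you want to see the ceiling for yourself, here is a quick sketch (run it in a scratch database) that generates a 1,601-column CREATE TABLE and lets PostgreSQL reject it:

    -- Build a CREATE TABLE with 1601 integer columns and try to execute it
    DO $$
    DECLARE
        ddl text;
    BEGIN
        SELECT 'CREATE TABLE too_wide (' ||
               string_agg('c' || g || ' int', ', ') || ')'
          INTO ddl
          FROM generate_series(1, 1601) AS g;

        EXECUTE ddl;  -- fails: tables can have at most 1600 columns
    END
    $$;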


Comparing JSON Data Types in MySQL and Postgres

Aisha Bukar draws some comparisons:

MySQL and PostgreSQL are two of the most popular relational database systems in the world. Both are open-source, widely used in web and enterprise applications, and support structured data in tables.

Modern applications, however, often work with semi-structured data that doesn’t always neatly fit into tables with rows and columns. This type of data gets its name because it still has some organization but doesn’t follow a strict format.

JSON (JavaScript Object Notation) is a popular way to store and share this kind of data. It’s a text format that’s easy for both people and computers to understand.

Read on to see what’s supported in each of these platforms, as well as strengths and limitations of using JSON in each.
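
For a small taste of the syntax on each side, here is a sketch with made-up table and column names showing a JSON column and a simple extraction in both engines:

    -- PostgreSQL: jsonb stores a parsed, binary form and supports containment queries
    CREATE TABLE events_pg (
        id      bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
        payload jsonb NOT NULL
    );
    SELECT payload->>'user_name'
    FROM events_pg
    WHERE payload @> '{"event_type": "login"}';

    -- MySQL: the JSON type validates documents on insert; ->> extracts and unquotes
    CREATE TABLE events_my (
        id      BIGINT AUTO_INCREMENT PRIMARY KEY,
        payload JSON NOT NULL
    );
    SELECT payload->>'$.user_name'
    FROM events_my
    WHERE payload->>'$.event_type' = 'login';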
