Press "Enter" to skip to content

Author: Kevin Feasel

Generating a DAXX File for Performance Tuning

Phil Seamark does some troubleshooting:

When troubleshooting slow DAX queries, sharing the right diagnostic information with an expert can make all the difference. That’s where a DAXX file comes in. This special file format is created using DAX Studio. It bundles essential metadata and performance details without exposing query results. It’s perfect for collaborative optimisation.

Read on to learn more about what a DAXX file is and how it can be useful in the performance tuning process.


Random Number Generation in T-SQL via Marsaglia Polar Method

Sebastiao Pereira implements a method for generating random numbers in T-SQL:

Generating random numbers from a normal distribution is essential for accuracy and realistic modeling, simulation, inference, and algorithm design for scientific, engineering, statistical, and AI domains. How can we build a random number generator using Marsaglia Polar method in SQL Server without the use of external tools?

It’s an interesting technique: it draws points uniformly from a two-dimensional region around a central point, keeps only the ones that land inside the unit circle, and transforms those into normally distributed values.
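For a sense of the mechanics, here is a minimal sketch of the Marsaglia polar method in T-SQL. The variable names are mine, and this is not necessarily Sebastiao’s exact implementation:

DECLARE @u FLOAT, @v FLOAT, @s FLOAT = 0;

-- Rejection loop: draw points uniformly from the square [-1, 1] x [-1, 1]
-- until one lands strictly inside the unit circle.
WHILE @s = 0 OR @s >= 1
BEGIN
    SET @u = 2.0 * RAND() - 1.0;
    SET @v = 2.0 * RAND() - 1.0;
    SET @s = @u * @u + @v * @v;
END;

-- LOG() is the natural logarithm; one accepted point yields two
-- independent standard normal deviates.
DECLARE @factor FLOAT = SQRT(-2.0 * LOG(@s) / @s);
SELECT @u * @factor AS Normal1, @v * @factor AS Normal2;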


Simple Data Quality Validation with T-SQL

Kenneth Omorodion builds a validation process:

As the need for and use of data grows within any organization, there is a corresponding rise in the need for data quality validation. Most organizations have large stores of data, but most of it is not managed efficiently in terms of data quality assurance, leading to inaccurate insights for the business, which in turn leads to distrust in the data.

Organizations have now, more than ever, realized the importance of an efficient data quality process as part of their Business Intelligence and Analytics processes. The issue is, how can they implement data quality for their data? Larger and more data-centric organizations might use pre-built data management and validation tools like Microsoft Purview, or Master Data Management tools like Informatica, Talend, SAP, and Stibo Systems. But organizations that cannot commit to subscribing to pre-built options, or that operate primarily in on-premises environments, might want to build one themselves. That’s where this article comes in.

I’m of two minds about this. On the one hand, I appreciate the effort that Kenneth put into this and expect that it would work reasonably well. On the other hand, I look at what it can do and say “Yes, but if you just use constraints like you should, then you don’t need this process.” It’s basically a very asynchronous way of applying check constraints, foreign key constraints, and specifying that an attribute is NOT NULL.

If there’s some reason why applying these constraints is impossible—such as receiving this data as an extract from some poorly-designed system—then this can do a good job of describing the failures of the upstream system. But this is where data architects need to get it right up-front.
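To make the constraint-first alternative concrete, here is a sketch with hypothetical table and column names. Declared inline, these rules reject bad data at write time instead of flagging it after the fact:

-- A minimal parent table so the foreign key below has something to reference.
CREATE TABLE dbo.Customer
(
    CustomerID INT NOT NULL CONSTRAINT PK_Customer PRIMARY KEY
);

CREATE TABLE dbo.CustomerOrder
(
    OrderID INT NOT NULL
        CONSTRAINT PK_CustomerOrder PRIMARY KEY,
    -- Foreign key: the customer must already exist.
    CustomerID INT NOT NULL
        CONSTRAINT FK_CustomerOrder_Customer
            FOREIGN KEY REFERENCES dbo.Customer (CustomerID),
    -- NOT NULL: an order without a date is rejected outright.
    OrderDate DATE NOT NULL,
    -- Check constraint: quantity must be positive.
    Quantity INT NOT NULL
        CONSTRAINT CK_CustomerOrder_Quantity CHECK (Quantity > 0)
);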


CI/CD Pipelines with DACPACs Allowing Potential Data Loss

Jess Pomfret describes a use case:

I recently made a change to a database schema that involved removing a column from a table. After we recently improved our authentication process for this tool, we no longer needed a password field (don’t worry, it was not holding plain text passwords!).

I made the change to the Users.sql file, built the project to confirm everything was good, and committed the change. But on deployment my pipeline failed, and this was the error: “The schema update is terminating because data loss might occur.”

Read on to see how Jess dealt with this scenario.


Microsoft Fabric October 2025 Feature Summary

Adam Saxton has a list:

This month’s update delivers key advancements across Microsoft Fabric, including enhanced security with Outbound Access Protection and Workspace-Level Private Link, smarter data engineering features like Adaptive Target File Size, and new integrations such as Data Agent in Lakehouse. Together, these improvements streamline workflows and strengthen data governance for users.

The list doesn’t feel quite as long as the prior couple of months, but there’s still a lot of content on here.


SQL ConstantCare Population Report for Fall 2025

Brent Ozar has an update:

SQL Server 2022’s market share has remained pretty stable since its huge jump in Q1, when people replaced a lot of 2016 servers with 2022. With SQL Server 2025’s release approaching quickly, I’m not sure SQL Server 2022 will ever approach the amazing dominant streak that 2019 experienced – that thing’s been a monster! It’s held over 40% market share for two years straight now. 

Read on for Brent’s predictions around SQL Server 2025. I tend to agree with Brent’s conclusion around why adoption for 2025—assuming no unforeseen catastrophe in the product—will be more like what we saw with SQL Server 2016 or 2019 versus 2017 or 2022. Which is good, because 2022 and 2025 both have some nice additions to T-SQL that will make database developers’ lives easier.


Using Temporary Stored Procedures to Output Common Messages

Louis Davidson shows a neat use for temporary stored procedures:

On another connection (on another computer, for that matter), I am right now doing some pretty long loads of some test data. The script is comprised of 6 queries, and they each may take 10 minutes (not completely sure, this is my first run of the scripts). And of course, I want to get some feedback on these queries to know how long they are taking.

One common way to do this is to put a PRINT statement between the queries so you can see the progress. But PRINT statements are notorious for one thing: caching the output until the output buffer reaches some level.
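For context, the standard workaround for that buffering is RAISERROR with the WITH NOWAIT option, which flushes the message to the client immediately. A minimal illustration, with a made-up message:

-- Severity 0 makes this informational rather than an error; WITH NOWAIT
-- flushes the message to the client immediately instead of buffering it.
DECLARE @msg NVARCHAR(2047) = N'Query 1 of 6 complete.';
RAISERROR(@msg, 0, 1) WITH NOWAIT;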

One addition I’d make to Louis’s post is to make use of the FORMATMESSAGE() functionality that SQL Server 2016 introduced. This use case is right in its wheelhouse.

SET @message = FORMATMESSAGE(N'%s%s%s%s',
    @Message,
    CASE
        WHEN @AddTimeToMessageFlag = 1 THEN CONCAT(N' : Message Time - ', SYSDATETIME())
        ELSE N''
    END,
    CASE
        WHEN @AddSpidToMessageFlag = 1 THEN CONCAT(N' : ProcessId - ', @@SPID)
        ELSE N''
    END,
    CASE
        WHEN @AddOriginalLoginToOutputFlag = 1 THEN CONCAT(N' : LoggedInUserId - ', ORIGINAL_LOGIN())
        ELSE N''
    END);

FORMATMESSAGE() provides a moderate readability benefit versus a lengthy CONCAT(). And if you always want to emit all fields, rather than keeping them optional as Louis does, FORMATMESSAGE() makes the result even clearer:

SET @message = FORMATMESSAGE(N'%s : Message Time - %s : ProcessId - %i : LoggedInUserId - %s',
    @Message,
    CAST(SYSDATETIME() AS NVARCHAR(100)),
    @@SPID,
    ORIGINAL_LOGIN());

Bars vs Lines in Visuals

Cole Nussbaumer Knaflic contrasts a pair of visual options:

Bringing clarity to your data storytelling doesn’t usually mean you need to learn and use more chart types. It does mean choosing visuals that are appropriate for your data and what you’re trying to communicate. Two of the most useful tools in our graphing toolkit—bar charts and line graphs—often do the heavy lifting. Knowing when to use which (and when to switch) can make all the difference.

This lesson came up recently as I revisited our new book, storytelling with data: before & after. I’ll share two scenarios where the choice between bars and lines matters. They are both from Chapter 2, which is titled “embrace basic graphs.”

Read on for an initial chart that uses the wrong type of visual, and then a look at how we can tell different stories with the same data based on our choice of visual.


Local Variables and Cardinality Estimates in SQL Server

Erik Darling lays out an explanation of what SQL Server does to estimate the cardinality of queries using local variables. Erik covers some of the conceptual questions around how a compiler could work, and then explains what SQL Server does.
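As a quick illustration of the general behavior, with hypothetical table and column names: because a batch is compiled before any of its statements run, the optimizer does not know the variable’s value, so for an equality predicate it estimates from the density vector of the statistics rather than the histogram.

-- The plan is built before the assignment executes, so the estimate here
-- is roughly (all density) * (table row count), not the histogram step
-- for CustomerID = 42. OPTION (RECOMPILE) would change that.
DECLARE @CustomerID INT = 42;

SELECT COUNT(*)
FROM dbo.Orders
WHERE CustomerID = @CustomerID;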

Erik talks about dependency on specific behavior and how people get upset when things change, and he’s absolutely right. My favorite example of this is the “quirky update” technique people used back in the day, prior to SQL Server 2012, to calculate running totals. That technique relied on unsupported, accidental-but-repeatable behavior around creating clustered indexes on temp tables. Even though the SQL Server team explicitly stated that this behavior could change at any time, it was clear that they didn’t go out of their way to break that functionality.
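For anyone who never saw it, here is a sketch of the quirky update pattern, with made-up table and data. It depended entirely on the UPDATE walking the clustered index in order, which was never documented or guaranteed:

CREATE TABLE #Sales
(
    SaleDate DATE NOT NULL PRIMARY KEY CLUSTERED,
    Amount MONEY NOT NULL,
    RunningTotal MONEY NULL
);

INSERT INTO #Sales (SaleDate, Amount)
VALUES ('2025-01-01', 100), ('2025-01-02', 250), ('2025-01-03', 75);

DECLARE @RunningTotal MONEY = 0;

-- The double assignment sets the variable and the column in a single pass,
-- accumulating a running total row by row.
UPDATE #Sales
SET @RunningTotal = RunningTotal = @RunningTotal + Amount;

SELECT SaleDate, Amount, RunningTotal FROM #Sales;

Since SQL Server 2012, SUM(Amount) OVER (ORDER BY SaleDate ROWS UNBOUNDED PRECEDING) produces the same result with documented behavior.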
