Press "Enter" to skip to content

Curated SQL Posts

Validating DAX against a Lakehouse via Semantic Link

Jens Vestergaard performs some checks:

A semantic model is a promise. It promises that the numbers in your reports match the data in your lakehouse. But after enough model changes, renamed columns, new relationships, and tweaked measures, that promise gets harder to verify. I wanted a way to check it programmatically.

This is my second submission to the Fabric Semantic Link Developer Experience Challenge. The first was a DAX unit test harness that compares measures against hardcoded expected values. That works well for known business rules, but it has a limitation: someone has to decide and maintain what the “right” answer is. For a model with hundreds of measures across dozens of filter contexts, that does not scale.

Click through to see what Jens did instead.
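The core idea — recompute an aggregate straight from the source rows and compare it to what the model reports — can be sketched without Fabric at all. Jens's actual approach uses Semantic Link (sempy) to evaluate measures and read lakehouse tables; here plain Python stands in for both sides, and every name is hypothetical:

```python
# Sketch of the core check: compare a semantic-model "measure" result
# against an aggregate recomputed directly from the source rows.
# Inside Fabric, Semantic Link (sempy) would supply both sides; here
# plain Python stands in. All names are hypothetical.

def check_measure(reported: float, source_rows: list[float],
                  tolerance: float = 1e-9) -> bool:
    """True when the reported measure matches the recomputed total."""
    expected = sum(source_rows)
    return abs(reported - expected) <= tolerance

# "Lakehouse" column stand-in, and the value the model reports
# for a hypothetical [Total Sales] measure.
amounts = [100.0, 250.0, 75.5]

assert check_measure(425.5, amounts)        # numbers agree
assert not check_measure(400.0, amounts)    # drift detected
```

The appeal of doing this programmatically is that the "expected value" is derived from the data itself, so nobody has to maintain hardcoded answers as the model evolves.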


An Azure Bill Breakdown

Elaine Cahill takes us through an Azure monthly bill:

I received an Azure bill for the period Feb. 6th 2026 – March 5th 2026 that was $2.38 in usage charges, with no tax added. Although a small amount, I decided it would be a good introductory example for anyone who has to understand and pay for Azure. My account is Pay-As-You-Go and I use it for learning, experiments, and proofs of concept.

Click through for that primer. I think, on the whole, the way Azure shows billing is okay. The tricky part is when you want to reduce the bill. They’ll show you, for example, that you have D3 v2 or DS3 v2 virtual machines running in East US 2, but then you have to dig in and figure out which of your virtual machines are running that SKU. And there are some services that spin up VMs in the background, so you might see billing for that even if you didn’t directly create a VM of that SKU.

But with a bit of digging, you can at least gain an understanding of what’s costing money in Azure.


Cracking SQL Server 2025 SQL Auth Passwords with hashcat

Vlad Drumea has a great post:

Last year I wrote about SQL Server 2025’s new PBKDF2 hashing algorithm: what that means from a security perspective, as well as how it impacts online cracking.
And even how to enable it in SQL Server 2022.

Vlad created a module that cracks SQL Server 2025 passwords offline (versus actually connecting to the SQL Server instance itself) and extrapolates the results to online cracking (connecting to the SQL Server instance and trying different passwords). Vlad has some really good news on the whole, and this post serves to explain why Microsoft introduced PBKDF2 as part of the hashing algorithm for SQL Server 2025.


Pre-Filtering Power BI Reports with URL Filters

Ben Richardson takes you to the right place:

Most Power BI users share reports one of two ways: they send the full report URL and ask people to filter it themselves, or they build separate reports for each team and spend the next year maintaining them.

Neither approach is ideal. Which is why URL filters are a great third option!

By appending a short query string to a report URL, you can control exactly what a reader sees the moment they open the link.

All without touching the underlying report, without duplicating it, and without relying on your readers to set up their own filters correctly.

This guide covers how URL filters work, how to write the syntax correctly, and where they will save you time.

Ben does cover the limitations of URL filters as well. It sounds like the best-case scenario is when there is another application that can serve up the Power BI URLs.
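To make the query-string idea concrete, here is a small sketch that builds a pre-filtered report link. The `?filter=Table/Column eq 'Value'` shape is the documented Power BI URL filter syntax; the report URL itself is a made-up placeholder:

```python
# Build a pre-filtered Power BI report URL by appending a query
# string of the form ?filter=Table/Column eq 'Value'. The base
# report URL below is a placeholder, not a real report.
from urllib.parse import quote

def filtered_report_url(base_url: str, table: str, column: str,
                        value: str) -> str:
    """Append a single equality URL filter to a report link."""
    expr = f"{table}/{column} eq '{value}'"
    # Percent-encode spaces but leave the slash and quotes intact,
    # since they are part of the filter expression itself.
    return f"{base_url}?filter={quote(expr, safe=chr(47) + chr(39))}"

url = filtered_report_url(
    "https://app.powerbi.com/groups/me/reports/abc123/ReportSection",
    "Sales", "Region", "West",
)
print(url)
```

Anyone opening that link would land on the report already filtered to `Region = 'West'`, with no report duplication and nothing for the reader to set up.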


Using the Performance Monitor Lite Dashboard

Erik Darling has a new video:

In this video, I dive into the Lite dashboard of my free open-source monitoring tool, which has garnered significant attention with over 10,000 installs based on GitHub repo stats. I highlight its user-friendly nature, especially for consultants or those who can’t install software on client servers, as it allows you to collect a wide range of performance metrics without the need for a separate database. I also showcase how DuckDB, an embedded analytics database, powers the Lite dashboard, ensuring fast query performance and efficient storage through Parquet file compression, making it an ideal solution for monitoring Azure SQL databases and other environments.

Click through for the video. You can grab a copy of the Lite edition, as well as the also-free Full edition, on Erik’s GitHub repo.


Fabric Deployments in Azure DevOps via fab deploy

Kevin Chant has a tutorial:

This post covers using fab deploy in Azure DevOps for Microsoft Fabric deployments based on YAML pipelines. In addition, this post shows how you can perform initial tests locally and introduces some AI concepts. Plus, this post shares plenty of links and advice.

You can find an example to accompany this post in the ‘create-genworkspace-fabric-cli.yml’ file in my ADO-fabric-cicd-sample Git repository. I also added some AI elements within this Git repository, including the Fabric CLI skills that were announced during FabCon.

Click through to learn more about fab deploy and how the process works.


What’s New in R 4.6.0

Russ Hyde has a summary:

R 4.6.0 (“Because it was There”) is set for release on April 24th 2026. Here we summarise some of the more interesting changes that have been introduced. In previous blog posts, we have discussed the new features introduced in R 4.5.0 and earlier versions (see the links at the end of this post).

Once R 4.6.0 is released, the full changelog will be available at the r-release ‘NEWS’ page. If you want to keep up to date with developments in base R, have a look at the r-devel ‘NEWS’ page.

Click through for the highlights.


Applying the Pareto Principle to Query Store

Erik Darling channels an Italian economist:

In this video, I dive into an old-school approach to identifying SQL Server performance issues using sp_QuickieStore and a novel method inspired by the Pomodoro [“Pareto” – ed] technique. Traditionally, Query Store surfaces queries that consumed a lot of CPU over the last seven days, but often these results are too broad for practical use. To address this, I’ve developed a multi-dimensional scoring system that evaluates queries based on their impact across several key metrics: CPU usage, duration, physical reads, writes, and executions. This approach helps pinpoint the most problematic queries more accurately, even when they run outside of typical working hours or are unparameterized. By sharing these insights, I hope to provide a practical tool for SQL Server administrators looking to optimize their databases without relying solely on modern monitoring tools.

The AI-generated summary reminds me that I’ve been working for 25 minutes, so time to take a break.

I like the idea of calculating and displaying impact scores, as well as breaking them down into core components.
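One simple way to combine several metrics into one impact score — and I'm sketching the general rank-sum idea here, not Erik's actual implementation — is to rank every query on each metric and add the ranks, so queries that are expensive across many dimensions float to the top. Metric names and sample numbers are illustrative, not sp_QuickieStore's real output:

```python
# Multi-dimensional impact scoring sketch: rank each query on every
# metric (1 = cheapest), sum the ranks, and sort descending, so a
# query that is expensive across many dimensions scores highest.
# Metric names and data are illustrative only.

METRICS = ["cpu_ms", "duration_ms", "reads", "writes", "executions"]

def impact_scores(queries: list[dict]) -> list[tuple[str, int]]:
    """Return (query, score) pairs, highest combined impact first."""
    scores = {q["query"]: 0 for q in queries}
    for metric in METRICS:
        ranked = sorted(queries, key=lambda q: q[metric])
        for rank, q in enumerate(ranked, start=1):
            scores[q["query"]] += rank  # bigger value -> bigger rank
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

sample = [
    {"query": "A", "cpu_ms": 900, "duration_ms": 950, "reads": 10_000,
     "writes": 50, "executions": 400},
    {"query": "B", "cpu_ms": 100, "duration_ms": 120, "reads": 500,
     "writes": 5, "executions": 10_000},
    {"query": "C", "cpu_ms": 50, "duration_ms": 60, "reads": 200,
     "writes": 2, "executions": 30},
]

top = impact_scores(sample)
print(top)  # query A leads: it ranks highest on four of five metrics
```

Query B's huge execution count alone isn't enough to outscore A, which is the point: a single outlier metric no longer dominates the list.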


Preventing SQL Injection in Stored Procedures

Vlad Drumea fixes a procedure:

In the past few years, I’ve seen quite a few stored procedures that rely on dynamic T-SQL without properly guarding for SQL injection.

Some cases were reporting stored procedures, while others were maintenance type stored procedures (e.g. stats updates) that could be kicked off from the app, or even stored procedures that handled app upgrades/patching.

In all these cases, certain portions of the dynamic T-SQL relied on input provided by users via input parameters.

Read on for an example. The solution is still the classic combination of QUOTENAME() and sp_executesql whenever you have user input.
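The real fix lives in T-SQL, but the bracket-quoting idea behind QUOTENAME() is worth seeing on its own: wrap the identifier in square brackets and double any closing bracket inside it, so user input can never terminate the identifier and inject commands. This is a teaching sketch of that mechanism in Python, not a substitute for QUOTENAME() itself:

```python
# Illustration of the bracket-quoting idea behind T-SQL's QUOTENAME():
# wrap an identifier in [ ] and double any ] inside it, so the input
# cannot close the identifier early and smuggle in extra statements.
# A teaching sketch only -- in T-SQL, use QUOTENAME() itself.

def quotename(identifier: str) -> str:
    """Bracket-quote an identifier the way QUOTENAME(@s) does."""
    return "[" + identifier.replace("]", "]]") + "]"

# A well-behaved table name and a hostile one.
print(quotename("SalesOrders"))
# -> [SalesOrders]
print(quotename("x]; DROP TABLE Users; --"))
# -> [x]]; DROP TABLE Users; --]   (the payload stays inside brackets)
```

Pairing this quoting with sp_executesql's parameterization covers both halves of the problem: identifiers get bracket-quoted, and values travel as parameters rather than concatenated strings.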


When Fabric Mirroring Doesn’t Copy Rows

Koen Verbeeck troubleshoots an issue:

A short blog post about an issue with Fabric Mirroring (with Azure SQL DB as the source) that I’ve managed to run into, twice. I’ve set up mirroring by creating a connection using a service principal and this principal has the proper permissions on the source database. Configuring the replication was without issues, and the replication status went from “starting” to “running”. However, no rows were being copied. The tables were all listed in the monitoring pane, but the counters of “rows replicated” remained at zero. There were no errors in the logs (in OneLake) and nothing suspicious was mentioned in the monitoring.

This was a rather pernicious issue. Based on Koen’s explanation, it sounds like there’s no way to know what the actual problem was.
