Curated SQL Posts

April Updates for Straight Path Solutions sp_Check Procedures

Jeff Iannucci shares some updates:

We have also been working on the content pages referenced by the URLs in the stored procedures, going through them all to make the content more helpful. Many of the previous pages had short sentences that weren’t very descriptive, so we’ve been filling them out with fuller explanations.

This month the updates are mostly corrections, although there were additional checks added to sp_CheckBackup and sp_CheckTempdb.

Click through for the changelogs and links to the procedures.

PostgreSQL and Variables

Shaun Thomas shows off some functionality:

There’s been a kind of persistent myth regarding Postgres since I first started using it seriously over 20 years ago: “Postgres doesn’t support user variables.” This hasn’t really been true since version 8.0 way back in 2005. Part of this stems from the fact that it doesn’t do things the same way as other common database engines.

Why don’t we spend a little time exploring the functionality that time forgot?
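One long-standing Postgres mechanism for session-scoped values is custom configuration settings, readable and writable per session. A minimal sketch (the myapp.user_id name and its value are made up for illustration):

```sql
-- Store a session-scoped "variable" as a custom setting.
-- The third argument (false) means session scope rather than transaction scope.
SELECT set_config('myapp.user_id', '42', false);

-- Read it back; custom settings are stored as text.
SELECT current_setting('myapp.user_id');

-- Cast when you need a typed value.
SELECT current_setting('myapp.user_id')::int + 1;
```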

Reading through the post, I’m pretty happy with the way MySQL and SQL Server do it, even if SQL Server’s variables are batch-level and can be annoying when you want to maintain variable state across batches. In that case, you’d typically store the values in a user-defined table type or temp table and reinstate them in the next batch.
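That temp table workaround can be sketched as follows (table and variable names are hypothetical); the temp table survives the GO batch separator within the same session, while the variable does not:

```sql
-- Batch 1: declare a variable and stash its value in a temp table.
DECLARE @CustomerId int = 42;
CREATE TABLE #state (CustomerId int);
INSERT INTO #state (CustomerId) VALUES (@CustomerId);
GO  -- @CustomerId goes out of scope at this batch boundary

-- Batch 2: re-declare the variable and reinstate its value from the temp table.
DECLARE @CustomerId int;
SELECT @CustomerId = CustomerId FROM #state;
SELECT @CustomerId;  -- 42
GO
```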

Connecting Semantic Models to Data Sources via Binding Hints

Chris Webb shares a note:

Did you know that you can configure your Power BI semantic model so that it automatically binds to a data source connection when you publish?

To illustrate how to do this, I created an Import mode Power BI semantic model in Power BI Desktop connected to the Products table in the ContosoSales sample database in the Azure Data Explorer help cluster. Anyone can connect to this source; you just need a Microsoft Account to authenticate.

Click through for the code and some additional tips.

Recommendations on When to Use CLR

Greg Low shares some thoughts:

I’ve recently been talking to clients about SQL CLR objects. When these were first introduced in SQL Server 2005, many of us had high hopes for them. SQL Server has never been great with regard to extensibility, and this provided some way to extend the product.

Nowadays, I avoid SQL CLR, and that’s a real pity. But it’s no longer supported in Azure SQL Database, apart from the system CLR types geometry, geography, and hierarchyid. (Note: I’m also not a fan of hierarchyid.) I need to use extensibility methods that are available in the different environments I work in, and Azure SQL Database is one of those. The same applies to Fabric SQL Database.

But for people who don’t use Azure SQL Database or Fabric SQL Database, Greg does share some thoughts below the fold.

Microsoft Fabric April 2026 Feature Summary

kamurray has a big list of updates:

This month’s update brings a broad set of new capabilities across Microsoft Fabric, spanning the platform experience, Data Engineering, Data Science, Data Warehouse, and Real-Time Intelligence. Read on to learn about improvements to the Fabric experience, deeper VS Code integration, enhanced notebook resiliency, expanded machine learning and governance features, and new real-time data processing capabilities.

Click through to see what’s new.

Choosing between Power Apps and Translytical Task Flows

Nicky van Vroenhoven gives the standard consulting answer:

I think I have gotten this question at least five or six times in the last few months, and with Translytical Task Flows reaching GA in the March 2026 Power BI update, I expect it to come up even more. So let me write it down once and for all.

The question usually sounds something like: “We want users to be able to add comments or update values in their Power BI report. Should we use Power Apps or this new Translytical Task Flows thing?”

My honest answer is: it depends 😆, but the decision is simpler than you might think.

Click through for the decision criteria.

Guidance on Building an Application

Brent Ozar talks marketing:

I do absolutely consult for software vendors. I used to work for Quest Software, and I’ve consulted for Amazon, Google, and a lot of third-party software vendors. It’s real work that requires real effort on my part, and I need to get paid for that.

Having said that, I still wanna help you for free, so I’ve put together this blog post with my favorite advice for software makers. There are a lot of hard-learned lessons in here, and I hope you can just read ’em and avoid some of the most common pitfalls that folks run into.

As one of the few people in the SQL Server community who’s good at marketing, Brent’s advice is worth a careful read.

Dropping Users and Stale EXTERNAL MODEL Permissions

Andreas Wolter sees a cache invalidation issue:

When identity or permission boundaries behave inconsistently – even under specific lifecycle conditions – that is more than a product bug. It becomes a security-relevant design issue, because security depends not just on how access is granted, but also on how reliably it is revoked.

While investigating the new permission model introduced alongside SQL Server 2025’s AI integration and vector search capabilities (Article: New Permissions in SQL Server 2025), I encountered a case where EXTERNAL MODEL permissions can persist after a user is dropped, creating stale authorization state.
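As a rough illustration of what “stale authorization state” can look like in the catalog, a query along these lines should surface permission rows whose grantee principal no longer exists (this is my own sketch, not code from the post; normally DROP USER cleans these rows up):

```sql
-- Find database permissions whose grantee principal has been dropped.
SELECT p.class_desc,
       p.permission_name,
       p.grantee_principal_id
FROM sys.database_permissions AS p
LEFT JOIN sys.database_principals AS dp
    ON dp.principal_id = p.grantee_principal_id
WHERE dp.principal_id IS NULL;  -- permission row survives, but the user is gone
```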

Click through to learn more about this issue and what it means.

Retrieving Materialized Lake View Lineage and Refresh Times

Meagan Longoria wants information:

Materialized lake views (MLVs) in Microsoft Fabric are an effective way to implement medallion architecture declaratively, but once you have a pipeline of MLVs in production, you need visibility into whether they’re current. Fabric’s MLV management area gives you a visual lineage and refresh history, but if you want to build automated alerting, logging, or custom tooling, you need to get that information programmatically. This post walks through one way to do that, using a small demo lakehouse built entirely in a Fabric notebook.

Click through for that demonstration.

A Verbose Pipe Operator for dplyr Pipelines

Guillaume Pressiat shows off logrittr:

In SAS, every DATA step prints a log:

NOTE: There were 120000 observations read from WORK.SALES.
NOTE: 7153 observations were deleted.
NOTE: The data set WORK.RESULT has 112847 observations and 11 variables.

R’s dplyr pipelines are silent. logrittr fills that gap with %>=%, a drop-in pipe that logs row counts, column counts, added/dropped columns, and timing at every step, with no function masking.

Click through to see how logrittr helps. Back when I was using R heavily, I would have really enjoyed this package. H/T R-Bloggers.
