Author: Kevin Feasel

Tracking Object Dependencies in SQL Server

Greg Low wants to know how things tie together:

This post describes how the object dependency tracking views provide more reliable insights into object dependencies than previous methods such as the use of the sp_depends system stored procedure.

During a recent consulting engagement, I was asked about the best way to determine which stored procedures and views made use of a particular table. In the past, the methods available from within SQL Server were not very reliable. Way back in SQL Server 2008, significant improvements were made in this area, yet I see so few people using them, at least not directly. Many will use them indirectly via SSMS.

In this post, let’s explore the problems with the previous mechanisms (that are still retained for backwards compatibility) and then see how the object dependency views improve the situation.

The dependency DMVs that Greg lands on are much better than sp_depends, for sure, but don’t expect them to know about cross-instance dependencies.
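As a quick sketch of what the newer mechanism looks like (table and procedure names here are hypothetical), the dependency functions can be queried in both directions:

```sql
-- Which objects reference dbo.Sales? Unlike sp_depends, these functions
-- resolve dependencies at call time, so results reflect current definitions.
SELECT
    referencing_schema_name,
    referencing_entity_name
FROM sys.dm_sql_referencing_entities(N'dbo.Sales', 'OBJECT');

-- The reverse direction: what does a given procedure depend on?
SELECT
    referenced_schema_name,
    referenced_entity_name,
    is_ambiguous
FROM sys.dm_sql_referenced_entities(N'dbo.GetSalesReport', 'OBJECT');
```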

Representing Partial Data in a Series

Amy Esselman explains how to signify that a point in a time series is incomplete:

When we’re reporting the latest information, it can be challenging to know how to handle data that is still in progress. For example, if we’re reporting annual performance trends with only three quarters completed in the latest year, the numbers can appear misleadingly low. If you exclude the latest data points, it could hide crucial details from stakeholders. Audiences often want timely updates, but partial data can cause confusion if not clearly communicated. 

Amy includes several tactics that can clarify the situation.

Reading an EXPLAIN Plan in PostgreSQL

Andrea Gnemmi reads the plan:

A typical task DBAs and Developers perform is optimizing query performance. The first step, after identifying troublesome queries using a tool like the pg_stat_statements view, is to look at the execution plan to determine what is happening and how to improve.

In PostgreSQL this can be done using EXPLAIN or using third-party tools which use the same process to gather query execution data.

Click through to see what an explain plan looks like in PostgreSQL and ways to visualize those plans.
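For reference, the basic shape looks like this (table names are made up for illustration):

```sql
-- Plan only, without executing the query:
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;

-- Execute the query and report actual row counts, timings, and buffer usage.
-- For data-modifying statements, wrap in a transaction you roll back.
EXPLAIN (ANALYZE, BUFFERS)
SELECT o.order_id, c.name
FROM orders AS o
JOIN customers AS c ON c.customer_id = o.customer_id
WHERE o.order_date >= DATE '2025-01-01';
```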

Primary Keys and DAX Query Performance

Phil Seamark explains why including primary keys in SUMMARIZECOLUMNS statements can be a bad thing:

When writing DAX queries, performance tuning often comes down to small design decisions that have big consequences. One such decision is whether to include Primary Key columns from Dimension tables in your SUMMARIZECOLUMNS statements. This is particularly important when those Dimension tables use DUAL or IMPORT storage modes.

This article explains why doing so can lead to inefficient query plans. It describes what happens under the hood. It also shows how to avoid this common pitfall.

Read on to learn more.
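To make the pitfall concrete, here is a hedged sketch against a hypothetical model with a 'Product' dimension and a [Sales Amount] measure:

```dax
-- Slower pattern: grouping by the dimension's high-cardinality primary key
EVALUATE
SUMMARIZECOLUMNS (
    'Product'[ProductKey],      -- primary key column: forces a wide grouping
    'Product'[ProductName],
    "Total Sales", [Sales Amount]
)

-- Usually faster: group only by the attribute you actually need
EVALUATE
SUMMARIZECOLUMNS (
    'Product'[ProductName],
    "Total Sales", [Sales Amount]
)
```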

Regression Testing for PostgreSQL Queries

Radim Marek announces a new project:

This is where RegreSQL comes in. Rather than trying to turn SQL into something else, RegreSQL embraces “SQL as strings” reality and applies the same testing methodology PostgreSQL itself uses: regression testing. You write (or generate – continue reading) your SQL queries, provide input data, and RegreSQL verifies that future changes don’t break those expectations.

The features don’t stop there though – it tracks performance baselines, detects common query plan regressions (like sequential scans), and gives you a framework for systematic experimentation with schema changes and query change management.

Read on to learn more about how it works and check out the GitHub repo if you’re interested.

The Joys of FORMATMESSAGE

Louis Davidson listened to some advice:

A few weeks ago, I wrote a post on using temporary stored procedures in SQL Server. Kevin Feasel of Curated SQL had this reply Using Temporary Stored Procedures to Output Common Messages. I had heard of FORMATMESSAGE before, but I had completely coupled this in my mind with formatting SQL Server error messages. (Which turns out to be a great use of this tech)

Click through to see how it works and some additional testing with RAISERROR().
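A minimal example of the ad hoc form (available since SQL Server 2016; the row counts and table name below are invented):

```sql
-- FORMATMESSAGE accepts printf-style placeholders: %s, %d, %i, etc.
DECLARE @msg nvarchar(2048) = FORMATMESSAGE(
    N'Processed %d rows for table %s in %d ms.',
    150, N'dbo.Sales', 37);

SELECT @msg;
-- Processed 150 rows for table dbo.Sales in 37 ms.

-- It can also format a custom message registered via sp_addmessage,
-- by passing its message_id as the first argument.
```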

Stream or Batch Ordering with Apache Iceberg

Jack Vanlightly shows some tradeoffs:

Today I want to talk about stream analytics, batch analytics and Apache Iceberg. Stream and batch analytics work differently but both can be built on top of Iceberg, but due to their differences there can be a tug-of-war over the Iceberg table itself. In this post I am going to use two real-world systems, Apache Fluss (streaming tabular storage) and Confluent Tableflow (Kafka-to-Iceberg), as a case study for these tensions between stream and batch analytics.

Read on for a summary of how two opposite ideas can both be perfectly reasonable.

View Creation via Visual Queries in Microsoft Fabric

Jon Vöge creates a view:

As companies adopt Microsoft Fabric, the distance between backend artifact and Semantic Model is smaller than ever, and it feels more obvious than ever to push some of those local transformations to your Fabric Storage item of choice.

The question is: how do you do that? There are many options:

Read on for those options. Jon focuses on one for people with less database experience.

Querying JSON Quickly in SQL Server 2025

Brent Ozar follows up on yesterday’s post:

SQL Server 2025 and .NET 10 bring several new improvements to storing JSON natively in the database and querying it quickly.

On the SQL Server 2025 side, the two big ones are the new native JSON indexes and the new JSON_CONTAINS function. Let’s see their improvements in action. On the .NET 10 side, EF 10 not only supports the new JSON data type, but on databases of compatibility level 170 or higher, EF will automatically migrate JSON data from NVARCHAR(MAX) data types over to JSON the next time you do a migration, as explained in the What’s New in EF Core 10 doc. That makes it especially important for you to understand how the new JSON indexes work, because they may be coming at you quickly the instant you change your compatibility level.

Read on to see Brent’s take.
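For orientation, a rough sketch of the new pieces, with a hypothetical table and syntax hedged against the preview documentation (details may shift before release):

```sql
-- Native json column type (SQL Server 2025, compatibility level 170)
CREATE TABLE dbo.Orders
(
    OrderId   int IDENTITY PRIMARY KEY,
    OrderInfo json NOT NULL
);

-- A JSON index covers paths inside the document
CREATE JSON INDEX IX_Orders_OrderInfo
    ON dbo.Orders (OrderInfo)
    FOR ('$');

-- JSON_CONTAINS tests whether a candidate value exists at a path
SELECT OrderId
FROM dbo.Orders
WHERE JSON_CONTAINS(OrderInfo, '"pending"', '$.status') = 1;
```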
