Press "Enter" to skip to content

Day: November 14, 2025

Representing Partial Data in a Series

Amy Esselman explains how to signify that a point in a time series is incomplete:

When we’re reporting the latest information, it can be challenging to know how to handle data that is still in progress. For example, if we’re reporting annual performance trends with only three quarters completed in the latest year, the numbers can appear misleadingly low. If you exclude the latest data points, it could hide crucial details from stakeholders. Audiences often want timely updates, but partial data can cause confusion if not clearly communicated. 

Amy includes several tactics that can clarify the situation.
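
One upstream step that can help, sketched below in PostgreSQL-flavored SQL against a hypothetical sales table: tag the in-progress period in the data itself, so whatever tool draws the chart can style or annotate that point differently.

-- Hypothetical annual rollup: label the current, still-in-progress year so the
-- reporting layer can render it differently (dashed line, lighter color, annotation).
SELECT
    EXTRACT(YEAR FROM order_date) AS report_year,
    SUM(amount)                   AS total_sales,
    CASE
        WHEN EXTRACT(YEAR FROM order_date) = EXTRACT(YEAR FROM CURRENT_DATE)
            THEN 'Partial (year to date)'
        ELSE 'Complete'
    END AS period_status
FROM sales
GROUP BY EXTRACT(YEAR FROM order_date)
ORDER BY report_year;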


Tracking Object Dependencies in SQL Server

Greg Low wants to know how things tie together:

This post describes how the object dependency tracking views provide more reliable insights into object dependencies than previous methods such as the use of the sp_depends system stored procedure.

During a recent consulting engagement, I was asked about the best way to determine which stored procedures and views made use of a particular table. In the past, the methods available from within SQL Server were not very reliable. Way back in SQL Server 2008, significant improvements were made in this area, yet I see so few people using them, at least not directly. Many will use them indirectly via SSMS.

In this post, let’s explore the problems with the previous mechanisms (that are still retained for backwards compatibility) and then see how the object dependency views improve the situation.

The dependency DMVs that Greg lands on are much better than sp_depends, for sure, but don’t expect them to know about cross-instance dependencies.
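
For reference, the dependency objects that arrived in SQL Server 2008 look something like this in practice; the object names below are placeholders.

-- Objects (procedures, views, functions) that reference dbo.Orders.
SELECT referencing_schema_name, referencing_entity_name
FROM sys.dm_sql_referencing_entities(N'dbo.Orders', N'OBJECT');

-- Objects and columns that dbo.GetOrderTotals references.
SELECT referenced_schema_name, referenced_entity_name, referenced_minor_name
FROM sys.dm_sql_referenced_entities(N'dbo.GetOrderTotals', N'OBJECT');

-- The catalog view behind both, handy for a database-wide dependency sweep.
SELECT OBJECT_SCHEMA_NAME(referencing_id) AS referencing_schema,
       OBJECT_NAME(referencing_id)        AS referencing_object,
       referenced_schema_name,
       referenced_entity_name
FROM sys.sql_expression_dependencies
WHERE referenced_entity_name = N'Orders';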


Reading an EXPLAIN Plan in PostgreSQL

Andrea Gnemmi reads the plan:

A typical task DBAs and Developers perform is optimizing query performance. The first step, after identifying troublesome queries using a tool like the pg_stat_statements view, is to look at the execution plan to determine what is happening and how to improve.

In PostgreSQL this can be done using EXPLAIN or using third-party tools which use the same process to gather query execution data.

Click through to see what an explain plan looks like in PostgreSQL and ways to visualize those plans.
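
If you want to try it before clicking through, the basic form looks like this (table and column names here are made up):

-- Estimated plan only: the query is not executed.
EXPLAIN
SELECT customer_id, SUM(amount)
FROM orders
WHERE order_date >= DATE '2025-01-01'
GROUP BY customer_id;

-- Runs the query and adds actual row counts, timing, and buffer usage to the plan.
EXPLAIN (ANALYZE, BUFFERS)
SELECT customer_id, SUM(amount)
FROM orders
WHERE order_date >= DATE '2025-01-01'
GROUP BY customer_id;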


Primary Keys and DAX Query Performance

Phil Seamark explains why including primary key columns in SUMMARIZECOLUMNS statements can be a bad thing:

When writing DAX queries, performance tuning often comes down to small design decisions that have big consequences. One such decision is whether to include Primary Key columns from Dimension tables in your SUMMARIZECOLUMNS statements. This is particularly important when those Dimension tables use DUAL or IMPORT storage modes.

This article explains why doing so can lead to inefficient query plans. It describes what happens under the hood. It also shows how to avoid this common pitfall.

Read on to learn more.


Regression Testing for PostgreSQL Queries

Radim Marek announces a new project:

This is where RegreSQL comes in. Rather than trying to turn SQL into something else, RegreSQL embraces “SQL as strings” reality and applies the same testing methodology PostgreSQL itself uses: regression testing. You write (or generate – continue reading) your SQL queries, provide input data, and RegreSQL verifies that future changes don’t break those expectations.

The features don’t stop there though – it tracks performance baselines, detects common query plan regressions (like sequential scans), and gives you a framework for systematic experimentation with schema changes and query change management.

Read on to learn more about how it works and check out the GitHub repo if you’re interested.
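
Setting aside RegreSQL's own tooling and file layout, the core idea is easy to sketch in plain SQL: run the query under test against known fixture data and diff its output against a stored expected result. The table names below are hypothetical and this is not RegreSQL's actual interface, just the check it automates.

-- Conceptual sketch only: any rows returned mean the query's output has drifted
-- from the stored expectation, i.e. a regression.
BEGIN;

CREATE TEMP TABLE expected_totals (customer_id int, total numeric);
INSERT INTO expected_totals VALUES (42, 1250.00), (7, 980.50);

-- Symmetric difference between actual and expected output.
(
    SELECT customer_id, SUM(amount) AS total
    FROM orders
    GROUP BY customer_id
    EXCEPT
    SELECT customer_id, total FROM expected_totals
)
UNION ALL
(
    SELECT customer_id, total FROM expected_totals
    EXCEPT
    SELECT customer_id, SUM(amount)
    FROM orders
    GROUP BY customer_id
);

ROLLBACK;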
