Curated SQL Posts

Tips for Performance Tuning SQL Server

Paul Randal provides a framework:

There are a huge number of best practices around SQL Server performance tuning – I could easily write a whole book on the topic, especially when you consider the number of different database settings, server settings, coding practices, wait types, and so on that can affect performance. For this post I decided to step back a bit from a list of specifics and list some general recommendations for how to approach performance tuning so that you maximize effort and minimize distractions.

Click through for several recommendations to make your performance tuning operations a bit smoother. None of these relate to specific performance problems. Rather, the post serves as some recommendations around how to approach performance tuning in general.


Building the Well-Architected Framework for Fabric

Joey D’Antoni applies Azure principles to Microsoft Fabric:

Let’s take a step back and talk about why I built this session. Like it or not, Microsoft’s intention with Fabric (and Power BI before it) is to make it easier for less-technical business users to build and consume data-driven reports. While I understand this mission, and it has been wildly successful in spreading love for Power BI, despite Fabric’s software-as-a-service branding, it’s actually a fully fledged data engine that needs to be well-managed to ensure data governance, security, and adherence to general best practices. In building my demos, I created a sample workspace with a couple of objects.

Click through for more notes on Joey’s talk, as well as a link to the code.


Performance Testing DATE_BUCKET()

Louis Davidson runs some tests:

A month and a half ago, I wrote a blog on using DATE_BUCKET. It is a cool feature that makes doing some grouping really quite easy. It is here: Cool features in SQL Server I missed…DATE_BUCKET. One of the comments that came in was about performance of the DATE_BUCKET versus using things like DATEDIFF or a date table.

I started working on it then, but it got a bit involved (as performance comparison tests often do), so it took me a bit longer to get to than expected. But here it is, and the results are kind of what you would expect. The uses for DATE_BUCKET are really straightforward, and would rarely involve an index or a lot of filtering using the function. But over a large number of rows, if it takes more time (even a millisecond more) than another method, you would notice it adding up pretty quickly.

Read on to see how DATE_BUCKET() performs compared to other methods of solving the same problem.
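For context, the comparison is between queries along these lines (table and column names here are hypothetical, and DATE_BUCKET() requires SQL Server 2022 or later):

```sql
-- Grouping rows into 7-day buckets with DATE_BUCKET()
-- (the default origin is 1900-01-01):
SELECT DATE_BUCKET(DAY, 7, SaleDate) AS WeekBucket,
       COUNT(*) AS SaleCount
FROM dbo.Sales
GROUP BY DATE_BUCKET(DAY, 7, SaleDate);

-- A roughly equivalent pre-2022 pattern with DATEDIFF/DATEADD
-- (0 is shorthand for the same 1900-01-01 origin):
SELECT DATEADD(DAY, (DATEDIFF(DAY, 0, SaleDate) / 7) * 7, 0) AS WeekBucket,
       COUNT(*) AS SaleCount
FROM dbo.Sales
GROUP BY DATEADD(DAY, (DATEDIFF(DAY, 0, SaleDate) / 7) * 7, 0);
```

Both produce the same bucket boundaries; the question Louis tests is which one costs more per row.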


Batch versus Stream for Data Processing

Nikola Ilic answers a question and then the follow-up question:

If you’ve spent any time in the data engineering world, you’ve likely encountered this debate at least once. Maybe twice. Ok, probably a dozen times: “Should we process our data in batches or in real-time?” And if you’re anything like me, you’ve noticed that the answer usually starts with: “Well, it depends…”

Which is true. It does depend. But “it depends” is only useful if you actually know what it depends on. And that’s the gap I want to fill with this article. Not another theoretical comparison of batch vs. stream processing (I hope you already know the basics). Instead, I want to give you a practical framework for deciding which approach makes sense for your specific scenario, and then show you how both paths look when implemented in Microsoft Fabric.

Read on to learn why both are viable patterns and how you can work with both in Microsoft Fabric.


SQL Server Performance Monitor Videos

Erik Darling continues a series of videos for his SQL Server Performance Monitor. First up is a breakdown of the two editions (both of which are free):

In this video, I delve into my new, completely free, open-source SQL Server monitoring tool, discussing which edition—full or light—you might find most useful based on your specific needs. The full edition creates a database and agent jobs to continuously collect data, offering complete control over the collection schedule and allowing for easy customization of stored procedures. It’s ideal for scenarios where you need constant data collection and want the flexibility to manage the monitoring tool as needed. On the other hand, the light edition uses an embedded DuckDB version inside itself, collecting data only when it’s open, making it perfect for quick triage or situations with limited server access, ensuring that no data is missed while you’re away from your computer. Both editions offer alerts and notifications for critical issues like blocking and high CPU usage, providing a seamless experience regardless of which edition you choose.

Erik also has a video covering how to communicate with the built-in MCP server:

In this video, I delve into the built-in MCP server feature of my free SQL Server monitoring tool, Darling Data. This feature allows users to interact with their collected monitoring data in a conversational manner using language models like Claude or other compatible tools. The MCP server is designed for local use only and does not expose any data beyond what’s available within the performance monitor database, ensuring that no sensitive information is at risk. By enabling this feature, you can ask questions about your server’s performance directly to an LLM, receiving quick insights into issues such as deadlocks, query performance, and workload spikes. This approach simplifies the process of diagnosing problems without requiring extensive manual analysis or script writing.


Explaining Non-Yielding Schedulers during Backups

Sean Gallardy provides an explanation:

Sparked by my friends at Straight Path SQL, we’re going to look at a rare cause of non-yielding scheduler dumps. This is another one of those items that I meant to write about a long time ago when I worked in Support and then, well, I suck and clearly forgot about it. This should give you an idea of how rarely this specific issue hits.

Read on to see what might cause this scenario and why you’re likely never to see it in the wild.

Though now that I’ve typed it, I’m sure someone will take that as a challenge.


Monitoring Fabric Mirroring of SQL Server 2025

Meagan Longoria wants to make sure things are working as expected:

I previously wrote about how the underlying technology for Fabric mirroring changed with SQL Server 2025. The latest version of mirroring that uses the SQL Server Change Feed is reading from the database transaction logs and pushing the data to a landing zone in OneLake. The data is then merged into the Delta tables for the Fabric mirrored database.

In this blog post, we will look at how to monitor this process, both in SQL Server and in Fabric.

Click through for information on the right DMVs to query and what you can find within Microsoft Fabric itself.
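As a hedged starting point on the SQL Server side, the change feed exposes a couple of DMVs (these names come from the Synapse Link / Fabric mirroring documentation; check which are available on your build, and see Meagan's post for the full picture):

```sql
-- Current log-scan activity for the change feed:
SELECT *
FROM sys.dm_change_feed_log_scan_sessions;

-- Any errors the change feed has encountered:
SELECT *
FROM sys.dm_change_feed_errors;
```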


Code to Perform Binary Search in SQL Server

Andy Brownsword has a procedure:

Let’s recap what we’re doing here:

Large append-heavy tables – like logs or audits – often don’t have a useful index on the timestamp. These types of tables do however have a strong correlation between their clustering key and the timestamp due to chronological inserts.

A binary search approach splits the table in half to narrow down the search space with each iteration. By abusing the incremental relationship between the clustering key and timestamps, we can quickly zero in on the point in time we’re after. If you want to see the mechanics, check out last week’s post.

I love the approach for log tables, assuming that a timestamp is part of the filter. This is a clever application of a very common computer science algorithm to database operations.
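As a rough sketch of the technique (this is illustration only, not Andy's actual procedure; the table and column names are hypothetical, and it assumes the clustering key Id and the LogDate column both increase together):

```sql
DECLARE @Target  datetime2 = '2025-06-01';
DECLARE @Lo bigint, @Hi bigint, @Mid bigint, @MidDate datetime2;

SELECT @Lo = MIN(Id), @Hi = MAX(Id)
FROM dbo.AuditLog;

-- Each iteration is a single clustered-index seek, so even a
-- billion-row table resolves in roughly 30 probes.
WHILE @Lo < @Hi
BEGIN
    SET @Mid = (@Lo + @Hi) / 2;

    SELECT @MidDate = LogDate
    FROM dbo.AuditLog
    WHERE Id = @Mid;

    IF @MidDate < @Target
        SET @Lo = @Mid + 1;  -- target lies in the upper half
    ELSE
        SET @Hi = @Mid;      -- target is at @Mid or earlier
END;
-- @Lo now points at (approximately) the first row on or after @Target.
```

A production version would also have to handle gaps in the key, where @Mid matches no row at all.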


Background on a Performance Monitoring Tool

Erik Darling has a new video:

In this video, I introduce my new free performance monitoring tool for SQL Server, which was developed after a bit of boredom during the holidays and a desire to create something simple yet powerful. The main issue I’ve noticed is that many existing tools are either overly complex or too expensive, making them less accessible to smaller teams or individuals. My goal with this tool was to provide a straightforward solution that doesn’t require setting up VMs, dealing with firewalls, or managing domain accounts—essentially streamlining the process for anyone looking to monitor SQL Server performance without the hassle of a full-blown enterprise setup. The tool supports various editions of SQL Server and even Azure SQL DB, offering both a full version with database logging and an easier-to-use light version that’s self-contained within DuckDB. I also highlight how it integrates machine learning models for those who prefer not to dive deep into dashboards, providing quick answers through natural language queries.

It’s good to have yet another example of the productive powers of boredom.

Also, I need to get back into this tool now that the Linux bug has been fixed.


dbt and Microsoft Fabric

Pradeep Srikakolapu and Abhishek Narain dig into dbt:

Modern analytics teams are adopting open, SQL-first data transformation, robust CI/CD and governance, and seamless integration across lakehouse and warehouse platforms. dbt is now the standard for analytics engineering, while Microsoft Fabric unifies data engineering, science, warehousing, and BI in OneLake.

By investing in dbt + Microsoft Fabric integration, Microsoft empowers customers with a unified, enterprise-grade analytics platform that supports native dbt workflows—future-proofing analytics engineering on Fabric.

I’ll be interested to see if this retains corporate investment longer than some of their open-source collaborations. That’s been a consistent issue over the years: announce some neat integrations with a popular technology, release a couple of versions, and then quietly deprecate it a year or two later. This sounds like it’s less likely to end up in that boat, simply based on how the Fabric team is collaborating compared to, say, the various Spark on .NET efforts over the years.
