Press "Enter" to skip to content

Category: Performance Tuning

Splitting GUIDs into Multiple BIGINTs for Columnstore

Forrest McDaniel performs an experiment:

You may have run into issues with GUIDs as clustering keys, but another major problem is in columnstore. Smart people at Microsoft wrote columnstore in a way to take advantage of modern CPU features, but those CPU features don’t play well with datatypes larger than 8 bytes. Which includes GUIDs.

Read on for a demonstration of the problem, a clever workaround, and the ramifications of splitting GUIDs into two BIGINTs. Full points for cleverness, though like Forrest, I wouldn’t want to use this in production.
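To give a sense of what the split looks like, here is a minimal T-SQL sketch of one way to carve a GUID into two BIGINTs and reassemble it. This is my own illustration; Forrest’s exact scheme may differ.

```sql
-- One possible split: treat the 16-byte GUID as two 8-byte halves.
DECLARE @g uniqueidentifier = NEWID();
DECLARE @b binary(16) = CAST(@g AS binary(16));

DECLARE @hi bigint = CAST(SUBSTRING(@b, 1, 8) AS bigint);  -- bytes 1-8
DECLARE @lo bigint = CAST(SUBSTRING(@b, 9, 8) AS bigint);  -- bytes 9-16

-- Reassemble to confirm the two BIGINTs round-trip the original GUID.
SELECT @g AS OriginalGuid,
       CAST(CAST(@hi AS binary(8)) + CAST(@lo AS binary(8)) AS uniqueidentifier) AS RebuiltGuid;
```

Each half fits in 8 bytes, which is the size columnstore’s vectorized code paths are built around.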

Scaling On-Prem Vector Search with Ollama and Nginx

Anthony Nocentino solves a problem:

When you call out to an external embedding service from T-SQL via REST over HTTPS, you’re limited by the throughput of that backend. If you’re running a single Ollama instance, you’ll quickly hit a ceiling on how fast you can generate embeddings, especially for large datasets.

I recently attended an event and discussed this topic. My first attempt at generating embeddings was for a three-million-row table. I had access to some world-class hardware to generate the embeddings. When I arrived at the lab and initiated the embedding generation process for this dataset, I quickly realized it would take approximately 9 days to complete. Upon closer examination, I found that I was not utilizing the GPUs to their full potential; in fact, I was only using about 15% of one GPU’s capacity. So I started to cook up this concept in my head, and here we are, load balancing embedding generation across multiple instances of ollama to more fully utilize the resources.

Click through for the solution.
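For a rough idea of the shape of the solution, an Nginx reverse proxy can fan requests out across several Ollama instances. The hostnames, ports, and certificate paths below are hypothetical placeholders; see Anthony’s post for his actual configuration.

```nginx
# Hypothetical sketch: spread embedding requests across three Ollama hosts.
upstream ollama_pool {
    least_conn;                  # route each request to the least-busy instance
    server 10.0.0.11:11434;      # default Ollama port
    server 10.0.0.12:11434;
    server 10.0.0.13:11434;
}

server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/certs/ollama.pem;   # hypothetical cert paths
    ssl_certificate_key /etc/nginx/certs/ollama.key;

    location / {
        proxy_pass http://ollama_pool;
        proxy_read_timeout 300s;   # embedding calls can run long
    }
}
```

T-SQL then calls a single HTTPS endpoint while Nginx keeps every GPU behind it busy.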

Memory Grant Feedback Woes

Rebecca Lewis explains an issue with memory grant feedback in SQL Server:

Before SQL Server runs a query, it estimates how much memory it needs for sorting and joining. But what if it gets it wrong?

  • Too little memory → Spills to tempdb (slow)
  • Too much memory → Starves other queries

SQL Server 2017+ tries to fix bad estimates based on previous calls with ‘Memory Grant Feedback’.  Kinda like:  ‘Last time I gave you 2GB but you only used 50MB. Next time I’m giving you less.’

This is great in theory, though Rebecca shows a case in which the end result might not be great.
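If you do hit the scenario Rebecca describes, feedback can be switched off at either the query or the database level. The query below is a hypothetical stand-in; the hint and the database-scoped configuration are the actual switches.

```sql
-- Per-query: keep feedback on everywhere else, but opt this statement out.
SELECT o.CustomerID, SUM(o.Amount) AS Total
FROM dbo.Orders AS o            -- hypothetical table
GROUP BY o.CustomerID
ORDER BY Total DESC
OPTION (USE HINT ('DISABLE_ROW_MODE_MEMORY_GRANT_FEEDBACK'));

-- Database-wide: turn row-mode feedback off entirely (batch mode has a
-- matching BATCH_MODE_MEMORY_GRANT_FEEDBACK option).
ALTER DATABASE SCOPED CONFIGURATION SET ROW_MODE_MEMORY_GRANT_FEEDBACK = OFF;
```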

Comparing Spark Application Performance in Microsoft Fabric

Jenny Jiang announces a new capability:

The Spark Applications Comparison feature is now in preview in Microsoft Fabric. This new capability empowers developers and data engineers to analyze, debug, and optimize Spark performance across multiple application runs—whether you’re tracking changes from code updates or data variations to improve performance.

The image in the blog post is pretty small and hard to read, but I do wonder if (or how well) it will capture cases where you’re twiddling your thumbs waiting for a machine to free up before your code can even execute. That wait seems to be a big problem sometimes.

More Fun with Page Latches

Jared Poche continues a series on page latches:

In my previous blog, I set up a database with two tables, one with a large CHAR(8000) field and one with a smaller VARCHAR(100) field. Both tables use an INT IDENTITY column for their primary key. Since we’ll be inserting rows sequentially, we will see page latch contention when multiple threads attempt to insert.

We ran some initial tests with SQLQueryStress to create some page latch contention and resolved an odd problem causing connection delays.

We’ll use these two tables and test several different approaches to reduce page latch contention.

Jared shows the results for a variety of different tests and even has an embedded Excel spreadsheet, which is how you know he’s done his homework.
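For reference, the setup Jared describes amounts to something like the sketch below (table and column names are my own), plus a quick way to watch the contention as it happens.

```sql
-- Wide rows: CHAR(8000) means roughly one row per page.
CREATE TABLE dbo.WideRows
(
    ID      int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    Payload char(8000) NOT NULL
);

-- Narrow rows: many rows share a page, so concurrent sequential inserts
-- all fight over the latch on the table's last page.
CREATE TABLE dbo.NarrowRows
(
    ID      int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    Payload varchar(100) NOT NULL
);

-- While the insert threads run, PAGELATCH_EX waits show up here:
SELECT session_id, wait_type, wait_resource
FROM sys.dm_exec_requests
WHERE wait_type LIKE N'PAGELATCH%';
```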

The Downside of Sticking to the Legacy Cardinality Estimator

Stephen Planck recommends taking the plunge:

Cardinality estimation (CE) is how the optimizer predicts the number of rows that will flow through each operator in a plan. Those estimates drive cost, join choices, memory grants, and ultimately latency and resource usage. SQL Server has shipped multiple CE models over time. The pre-2014 model—commonly called the legacy CE—dates back to SQL Server 7.0. Starting in SQL Server 2014, Microsoft introduced a new CE and has continued refining it in later releases, including SQL Server 2022. Keeping the legacy CE turned on in SQL Server 2022 is usually the wrong long-term choice.

One thing to note is that the “new” cardinality estimator has been out for a decade. It’s not really that new anymore, and it’s not going anywhere soon. Yeah, there are still trade-offs where some queries are better on the legacy estimator, but for the people who use that as their reason for not using the new estimator, what have you done in the past decade to address this and tune those queries to work better? If the answer is “nothing,” it’s not the cardinality estimator’s fault here.
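That tuning path is straightforward in practice: move the database to the modern CE and pin only the regressed queries to the old model while you fix them. The table below is hypothetical; the configuration option and hint are standard.

```sql
-- Database default: use the modern cardinality estimator.
ALTER DATABASE SCOPED CONFIGURATION SET LEGACY_CARDINALITY_ESTIMATION = OFF;

-- Per-query escape hatch for a known regression while you tune it:
SELECT l.ProductID, COUNT(*) AS OrderCount
FROM dbo.OrderLines AS l        -- hypothetical table
GROUP BY l.ProductID
OPTION (USE HINT ('FORCE_LEGACY_CARDINALITY_ESTIMATION'));
```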

Ordered Insert Optimization in OrioleDB

Alexander Korotkov deals with hot page issues:

When many sessions try to insert into the same B-tree leaf page, classic exclusive page locking serializes progress and wastes time on sleep/wake cycles. We’re introducing a batch page insertion path that lets the session holding the page lock insert for itself and its neighbors. The result: dramatically reduced lock waits, and big gains at high client counts (2X throughput boost starting from 64 clients in our benchmark).

Click through to see how it differs from normal PostgreSQL behavior on what is one of the more common performance problems on busy OLTP systems.
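To picture the workload in question: every session appends a monotonically increasing key, so each insert targets the same rightmost leaf page. Here is a hypothetical PostgreSQL-flavored sketch, assuming the OrioleDB extension is installed.

```sql
-- Monotonic key: all inserts land on the same rightmost B-tree leaf.
CREATE TABLE events
(
    id      bigserial PRIMARY KEY,
    payload text NOT NULL
) USING orioledb;   -- OrioleDB table access method

-- Run this from dozens of concurrent sessions (e.g., via pgbench) to
-- reproduce the hot-page pattern the batch insertion path targets:
INSERT INTO events (payload) VALUES ('ping');
```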

Troubleshooting Power BI Report Performance

Ben Richardson speeds up a report:

Slow Power BI reports waste time and frustrate users.

The slowdown often comes from hidden issues in the data model, DAX, visuals, or refresh settings.

This guide shows you how to find the cause, fix it, and keep reports running smoothly.

Read on to see some of the built-in tooling for performance optimization, as well as more information on these four common issues.

An Introduction to Query Folding in Power BI

Alex Powers takes us through one major performance optimization technique in Power BI:

One of the most powerful capabilities of Power Query and the M Language is Query Folding (also referred to as query delegation, and predicate push-down). Query Folding allows the Power Query Mashup Engine to push the transformations expressed in an M (mashup) query to the data source, in the data source’s query language, resulting in more efficient data processing.

For inexperienced database technology users this ability to leverage the graphical user interface of Power Query to dynamically generate a query written in the data source’s query language unlocks enormous opportunities to find insights with any data, at any scale.

Click through to see how you can know if query folding is enabled, as well as some hints around when and to what extent query folding will work.
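One quick way to see folding in action: in the Power Query editor, right-click a step and check View Native Query; if the option is greyed out, folding has stopped at that step. For a SQL Server source, a filter plus a column selection done in the UI might fold into a single statement along these lines (table and columns here are hypothetical):

```sql
-- Roughly what a folded "remove columns" + "filter rows" pair becomes:
SELECT [OrderID], [OrderDate], [Amount]
FROM [dbo].[Orders]
WHERE [Amount] > 100;
```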
