Press "Enter" to skip to content

Category: Query Tuning

Thoughts on Combining UPDATE Operations

Brent Ozar thinks about consolidation:

I’ve got a BEGIN TRAN in there before the updates just so I can test the same queries repeatedly, and roll them back each time. The execution plan for the updates is quite nice: SQL Server divebombs into the supporting indexes.

Relatively few rows match, so our query does less than 1,000 logical reads – way less than there are pages in the table. In this case, separate UPDATE statements make sense.

Brent then continues: but what do we do when we need to perform multiple independent scans of the same table? Read on for Brent’s answer, and definitely check out Thomas Franz’s comments, which point out a potential complicating factor.
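
For reference, here’s a minimal sketch of the separate-UPDATE baseline Brent describes, using a hypothetical Users table and made-up column names:

    BEGIN TRAN;

    -- Each UPDATE targets a different column and can seek on its own
    -- supporting index, touching relatively few pages
    UPDATE dbo.Users
    SET DisplayName = 'Anonymous'
    WHERE Reputation = 1;

    UPDATE dbo.Users
    SET Age = NULL
    WHERE Age < 18;

    -- Roll back so the same test can be repeated
    ROLLBACK;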


Functions in JOIN and Performance Problems

Steve Stedman lays out a warning:

When writing SQL queries, it’s easy to focus on getting the right results without thinking too much about performance. One common mistake that can lead to significant slowdowns is using functions in the JOIN ON clause of a SELECT statement.

While SQL Server supports a wide range of built-in functions, using them incorrectly—especially in the join conditions—can severely impact performance. Let’s break down why this happens, how it can affect your query speed, and what you can do to avoid it.

Read on for the answer.
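
To make the anti-pattern concrete, here’s a hypothetical sketch (table and column names are mine, not from the post):

    -- Non-sargable: the function wraps the column, so it must run for
    -- every row and SQL Server cannot seek on an index over OrderDate
    SELECT c.CustomerID, o.OrderID
    FROM dbo.Customers AS c
    INNER JOIN dbo.Orders AS o
        ON c.CustomerID = o.CustomerID
        AND YEAR(o.OrderDate) = 2024;

    -- Sargable rewrite: the same condition as a range on the bare column,
    -- which allows an index seek
    SELECT c.CustomerID, o.OrderID
    FROM dbo.Customers AS c
    INNER JOIN dbo.Orders AS o
        ON c.CustomerID = o.CustomerID
        AND o.OrderDate >= '20240101'
        AND o.OrderDate < '20250101';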


The Power of Computed Columns

Andy Brownsword speeds up a query:

Bad code smells can run through a system, and one instance can quickly spread when code is recycled. Simon recently looked at a non-sargable example and was able to fix it by using an index and refactoring the query.

I wanted to consider an alternative approach if we saw the same issue repeated consistently. We don’t want to fix each instance; we want a single fix. We’ll solve this with an indexed computed column.

We can index computed columns to help resolve deterministic (i.e. won’t change, no GETDATE() allowed) clauses. Let’s get started.

Read on to learn more. This is a powerful approach to the problem of needing to perform some sort of data transformation before filtering your data.
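
Here’s a minimal sketch of the technique with hypothetical names; the computed expression must be deterministic for the index to be allowed:

    -- Add a deterministic computed column, then index it
    ALTER TABLE dbo.Orders
        ADD OrderYear AS YEAR(OrderDate);

    CREATE NONCLUSTERED INDEX IX_Orders_OrderYear
        ON dbo.Orders (OrderYear);

    -- Filter on the computed column; the optimizer can often also match
    -- the original YEAR(OrderDate) = 2024 expression to this index
    SELECT OrderID, OrderDate
    FROM dbo.Orders
    WHERE OrderYear = 2024;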


Transforming Queries Based on Human Intent

Andrei Lepikhov and Alena Rybakina ask a question:

As usual, this project was prompted by multiple user reports with typical complaints, like ‘SQL server executes the query times faster’ or ‘Postgres doesn’t pick up my index’. The underlying issue that united these reports was frequently used VALUES sequences, typically transformed in the query tree into a SEMI JOIN.

I also want to raise one general question: should an open-source DBMS correct user errors? I mean optimising a query even before the search for an optimal plan begins, eliminating self-joins, subqueries, and simplifying expressions – everything that can be achieved by proper query tuning. The question is not that simple, since DBAs point out that the cost of query planning in Oracle grows quickly with the complexity of the query text, which is most likely caused, among other things, by the extensive range of optimisation rules.

My short answer is, yes. SQL is a 4th generation language, meaning that end users describe the results they need but leave it to the engine to determine how to get there. As performance tuners, we may understand some of the foibles of the database engine and how it does (or does not) perform these translations, but in an ideal world, every unique representation of an end state for a given query should have the same, maximally optimized internal way of getting there. This is impossible in practice, but it should be a guiding principle for engine behavior.
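
To picture the transformation the authors mention, here’s a hypothetical Postgres example (table name is mine):

    -- The planner can rewrite the IN (VALUES ...) list as a semi join
    -- between the table and a Values Scan, rather than a simple filter
    SELECT t.*
    FROM big_table AS t
    WHERE t.id IN (VALUES (1), (2), (3));

    -- EXPLAIN typically shows something like a Hash Semi Join here,
    -- which is the plan shape their work examines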


Techniques for Unpivoting Data in SQL Server

Jared Westover performs a technique showdown:

A few weeks ago, I helped someone combine multiple integer columns into a single column. The common term for this process is unpivoting or transposing. The table they worked with had millions of rows, so they needed the most efficient method to accomplish this task. If you search online, you’ll find several suggestions for doing this. But which performs best? Is one easier to maintain than the others?

Click through for a review of three separate techniques: using the UNPIVOT operator, using UNION ALL, and using CROSS APPLY. The dataset was relatively small, but even at that size, CROSS APPLY did a good job. But I won’t spoil too much here.
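
As a taste of one of the three contenders, here’s a hypothetical CROSS APPLY unpivot (names are mine, not from the article):

    -- Turn three integer columns into (Metric, Value) rows
    SELECT s.SaleID, v.Metric, v.Value
    FROM dbo.Sales AS s
    CROSS APPLY (VALUES
        ('Q1', s.Q1Amount),
        ('Q2', s.Q2Amount),
        ('Q3', s.Q3Amount)
    ) AS v (Metric, Value);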


Window Functions and Running Totals

Steve Jones makes a comparison:

Often I see running totals that are written in SQL using a variety of techniques. Many pieces of code were written in pre-2012 techniques, prior to window functions being introduced.

After SQL Server 2012, we had better ways to write a total. In this case, let’s see how much better. This is based on an article showing how you might convert code from the first query to the second. This is a performance analysis of the two techniques at different scales.

Steve shows a very fancy version of the self-join technique, which is actually even slower than using a cursor for this work. The fastest variant was something nicknamed the ‘quirky update’ technique, but it relied on an accidental property of how clustered indexes worked on temp tables in SQL Server and was never something Microsoft officially supported, meaning that any service pack, cumulative update, or hotfix might have broken your code, and the best you’d get would be an indifferent shrug.
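
For reference, a minimal sketch of the post-2012 window function approach, with hypothetical names:

    -- Running total per account, ordered by date.
    -- ROWS UNBOUNDED PRECEDING avoids the default RANGE frame,
    -- which is slower and handles ties differently
    SELECT AccountID,
           TransactionDate,
           Amount,
           SUM(Amount) OVER (
               PARTITION BY AccountID
               ORDER BY TransactionDate
               ROWS UNBOUNDED PRECEDING
           ) AS RunningTotal
    FROM dbo.Transactions;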


Parameter Sensitive Plan Optimization and Memory Grant Feedback

Deborah Melkin has a video for us:

I’m doing something new – instead of a full post with lots of text, I’m doing a video! While it’s not as polished as others in the community, I didn’t want making it perfect to get in the way of getting this done. I’m hoping to do more of these, so I will definitely be upping my game in the production department in the future.

But it’s really the content that I wanted to focus on. It’s a quick look at Parameter Sensitive Plan Optimization and Memory Grant Feedback and what they can do. This is just an introduction to seeing them in action and understanding how we can use that to help tune our queries long term.

Click through for the video.
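
Both features can be toggled per database via database scoped configurations, and they are on by default at the appropriate compatibility levels. A sketch, worth verifying against the documentation for your version:

    -- Parameter Sensitive Plan Optimization (SQL Server 2022, compat level 160)
    ALTER DATABASE SCOPED CONFIGURATION
        SET PARAMETER_SENSITIVE_PLAN_OPTIMIZATION = ON;

    -- Row mode memory grant feedback (SQL Server 2019+, compat level 150)
    ALTER DATABASE SCOPED CONFIGURATION
        SET ROW_MODE_MEMORY_GRANT_FEEDBACK = ON;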


Parsing SQL Server Query Stats in PowerShell

Andy Levy builds a cmdlet:

A couple weeks ago, DrData asked

With SET STATISTICS IO ON, is there really no way to show the TOTALS at the bottom? There are some nice tools like StatisticsParser but it sure would be nice to see the totals of all values right at the bottom, especially when there are many tables.

The task at hand in the office was a need to do the same thing, but with SET STATISTICS TIME ON. So I got to thinking…I can PowerShell my way through this, right?

Andy, being The Man, shows us exactly how to do this.
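
For context, here’s the shape of the raw material being parsed: with SET STATISTICS TIME ON, each statement prints a timing block to the Messages output, with no grand total:

    SET STATISTICS TIME ON;

    SELECT COUNT(*) FROM dbo.SomeTable;  -- hypothetical table

    -- Messages output looks roughly like:
    --  SQL Server Execution Times:
    --     CPU time = 15 ms,  elapsed time = 27 ms.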


ISNULL vs COALESCE in SQL Server

Erik Darling has a video for us:

A Difference Between ISNULL And COALESCE You Might Care About In SQL Server

There’s nothing for me to snip as the graf. I don’t often link to videos without any sort of text accompaniment, but it’s been too long since I’ve linked to Erik and this was an interesting topic.

Bonus points for using “case expression” instead of the more common but technically incorrect “case statement.”
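
I won’t guess which difference Erik covers, but one classic example: ISNULL takes its return type from the first argument, while COALESCE uses data type precedence across all arguments:

    DECLARE @s varchar(5) = NULL;

    -- Result is typed varchar(5), so the replacement value is truncated
    SELECT ISNULL(@s, 'abcdefghij');    -- 'abcde'

    -- Result is typed varchar(10), so nothing is lost
    SELECT COALESCE(@s, 'abcdefghij');  -- 'abcdefghij'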


Implicit Conversions in SQL Server

Vlad Drumea explains what it means implicitly to convert:

If you’re here, you most likely know what a data type conversion is, but, in short, it’s the operation of converting a value from one data type to another.

There are two types of conversions in SQL Server:

  • explicit – which are done by explicitly applying the CAST and CONVERT functions on a column, variable, or value.
  • implicit – when CAST and CONVERT are not used explicitly, but SQL Server ends up doing the conversion behind the scenes due to two distinct data types being compared.

Read on to learn more about which types of implicit conversion are relevant for performance and what you can do instead.
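
The classic performance trap is a type mismatch in a predicate. A hypothetical sketch:

    -- Assumes AccountNumber is varchar(20) with a nonclustered index.
    -- The N'' literal is nvarchar, which outranks varchar in type precedence,
    -- so SQL Server implicitly converts the column and may be forced to scan
    SELECT AccountID
    FROM dbo.Accounts
    WHERE AccountNumber = N'ACC-12345';

    -- Matching the literal's type to the column keeps the predicate sargable
    SELECT AccountID
    FROM dbo.Accounts
    WHERE AccountNumber = 'ACC-12345';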
