Press "Enter" to skip to content

Category: Query Tuning

Impossible Execution Plan Timings

Paul White puts up an article:

I showed a hidden option to make all operators report only their individual times in More Consistent Execution Plan Timings in SQL Server 2022. That feature isn’t complete yet, so the results aren’t perfect, and it’s not documented or supported.

I mention all that in case you are interested in the background. None of the foregoing explains what we see in this mixed mode plan. The row mode Gather Streams elapsed time ought to include its children. The batch mode Sort should just be reporting its own elapsed time.

With that understanding in mind, there’s no way the Sort could run for longer than the Gather Streams. What’s going on here?

Read on for a Paul White-level discussion of the topic, including a demo from Erik Darling.
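If you want to poke at the numbers yourself, the per-operator figures live in the actual plan XML. Here’s a rough sketch of my own plumbing (not Paul’s demo) for shredding them out of the last actual plan that lightweight profiling captured; it assumes SQL Server 2019 or later with the LAST_QUERY_PLAN_STATS database-scoped configuration enabled, plus a made-up text filter to find your query.

-- A sketch of my own (not Paul's demo): pull per-operator elapsed time and execution
-- mode from the last actual plan captured by lightweight profiling. Requires SQL
-- Server 2019+ and the LAST_QUERY_PLAN_STATS database-scoped configuration. In a
-- parallel plan you would aggregate across threads instead of taking the first
-- RunTimeCountersPerThread entry, as this does.
WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT
    op.value('@NodeId', 'int')              AS node_id,
    op.value('@PhysicalOp', 'nvarchar(60)') AS physical_op,
    op.value('(RunTimeInformation/RunTimeCountersPerThread/@ActualExecutionMode)[1]', 'nvarchar(20)') AS execution_mode,
    op.value('(RunTimeInformation/RunTimeCountersPerThread/@ActualElapsedms)[1]', 'bigint')           AS actual_elapsed_ms
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_query_plan_stats(qs.plan_handle) AS qps
CROSS APPLY qps.query_plan.nodes('//RelOp') AS n(op)
WHERE st.text LIKE N'%distinctive text from your query%';  -- hypothetical filter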


Finding Duplicate Post Titles and Tuning the Query

Erik Darling makes a friend:

I’m going to be totally open and honest with you, dear reader: I’ve been experimenting with… AI.

See, I’m just a lonely independent consultant, and sometimes it’s just nice to have someone to talk to. It’s also kind of fun to take a query idea you have, and ask “someone” else to write it to see what they’d come up with.

ChatGPT (for reference, 4 and 4o) does a rather okay job sometimes. In fact, when I ask it to write a query, it usually comes up with a query that looks a lot like the ones that I have to fix when I’m working with clients.

Considering that the clients probably stole the query idea from Stack Overflow as well, that makes sense. But there was a clever trick in the query ChatGPT returned, so check it out.
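For reference, the classic hand-written version of that kind of query is a grouped count. This is a minimal sketch against a Stack Overflow-style dbo.Posts table (the table and column names are assumptions, and it is neither Erik’s query nor ChatGPT’s):

-- Minimal duplicate-title finder, assuming a Stack Overflow-style dbo.Posts table
-- with Id and Title columns.
SELECT
    p.Title,
    COUNT_BIG(*) AS title_count
FROM dbo.Posts AS p
WHERE p.Title IS NOT NULL
GROUP BY p.Title
HAVING COUNT_BIG(*) > 1
ORDER BY title_count DESC;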


Fun with Implicit Conversion and Table Partitioning

Rod Edwards takes us through an issue:

CONVERT_IMPLICIT(nvarchar(200),[xxxxxxxxxxxxxxxxxxx].[Category],0)=[@Category] AND CONVERT_IMPLICIT(nvarchar(200),[xxxxxxxxxxxxxxxxxxx].[Id],0)=[@Id]

Oh dear, what’s all that then? We have a fat Residual Predicate, where SQL is performing an implicit conversion on our query predicates.

Time to look at our datatypes.

This is one of several reasons why I espouse the philosophy of NVARCHAR Everywhere. You can’t have this kind of implicit conversion if you’re always using NVARCHAR instead of VARCHAR.
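If you want to see the effect Rod describes, here is a hypothetical repro of the pattern with made-up table, column, and parameter names: the varchar columns get implicitly converted to match the nvarchar parameters, which can turn a tidy seek into a scan with a residual predicate.

-- Hypothetical repro of the pattern, not Rod's actual table.
CREATE TABLE dbo.Sales
(
    Id       varchar(200) NOT NULL,
    Category varchar(200) NOT NULL,
    Amount   money        NOT NULL,
    INDEX IX_Sales_Category_Id NONCLUSTERED (Category, Id)
);

-- Mismatched types: nvarchar parameters force CONVERT_IMPLICIT on the varchar
-- columns, which can prevent an index seek and leave a residual predicate.
DECLARE @Category nvarchar(200) = N'Widgets',
        @Id       nvarchar(200) = N'12345';

SELECT s.Amount
FROM dbo.Sales AS s
WHERE s.Category = @Category
  AND s.Id = @Id;

-- Matching the parameter types to the column types avoids the conversion entirely.
DECLARE @Category2 varchar(200) = 'Widgets',
        @Id2       varchar(200) = '12345';

SELECT s.Amount
FROM dbo.Sales AS s
WHERE s.Category = @Category2
  AND s.Id = @Id2;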


Diagnosing Slow Commits in Postgres

Laurenz Albe offers up some thoughts:

Sometimes one of our customers looks at the most time-consuming statements in a database (either with pg_stat_statements or with pgBadger) and finds COMMIT in the high ranks. Normally, COMMIT is a very fast statement in PostgreSQL, so that is worth investigating. In this article, I will explore the possible reasons for a slow COMMIT and discuss what you can do about it.

Read on for those reasons.
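As a starting point for that kind of investigation, here’s a quick sketch of mine (not from the article) that checks the settings which most often figure into commit latency and sees how COMMIT ranks in pg_stat_statements; the column names assume PostgreSQL 13 or later.

-- Settings that commonly figure into COMMIT latency: whether commits wait for WAL
-- flush, synchronous replication, and any artificial commit delay.
SHOW synchronous_commit;
SHOW synchronous_standby_names;
SHOW commit_delay;

-- How COMMIT ranks among statements (pg_stat_statements must be installed;
-- total_exec_time and mean_exec_time are the PostgreSQL 13+ column names).
SELECT query, calls, total_exec_time, mean_exec_time
FROM pg_stat_statements
WHERE query ILIKE 'commit%'
ORDER BY total_exec_time DESC;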


OPTIMIZE FOR vs Forced Plans in SQL Server

Erik Darling makes a comparison:

I often see clients using forced plans or plan guides (yes, even still, to this day) to deal with various SQL Server performance problems with plans changing.

There’s usually an execution plan or two floating around that seems to be a good general idea for a given query, and a couple weird high-end and low-end outliers for very specific populations of values.

Read the whole thing, of course.

In defense of plan guides, the company I used to work for had a few—maybe three or four in total—because of really weird data skew problems on database 106 out of 700 (or so), because there’s always one customer that makes wildly different use of the system than everyone else. And so a query that worked perfectly fine for the other 699 databases (or so) flops like a fish out of water for this one database with this one customer’s data in it. So the plan guide was a nicer expedient than optimizing for mediocre on all 700 (or so) databases.
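For anyone who has only seen the forced-plan side of the comparison, the hint side looks roughly like this. The procedure, table, and the value 42 are all hypothetical, purely to illustrate OPTIMIZE FOR pinning compilation to a representative value:

-- Hypothetical procedure and table, purely to illustrate the hint.
CREATE OR ALTER PROCEDURE dbo.GetOrdersByCustomer
    @CustomerId int
AS
BEGIN
    SET NOCOUNT ON;

    SELECT o.OrderId, o.OrderDate, o.TotalDue
    FROM dbo.Orders AS o
    WHERE o.CustomerId = @CustomerId
    -- Compile the plan as though a "typical" customer were passed in, no matter
    -- which value actually arrives at runtime. OPTIMIZE FOR UNKNOWN would use the
    -- density vector instead of a specific value.
    OPTION (OPTIMIZE FOR (@CustomerId = 42));
END;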


Actual Execution Plans and Lock Waits

Erik Darling notices me in a leg cast staring through his window with my telescope:

A long time ago, I complained that wait stats logged by actual execution plans don’t show lock waits. That seemed like a pretty big deal, because if you’re running a query and wondering why sometimes it’s fast and sometimes it’s slow, that could be a pretty huge hint.

Click through for the full story. Getting actual waits is indeed a big deal, and way easier than any of the alternatives, like spinning up a special Extended Events session or yelling at everyone not to use the server for a few minutes while you run your query.
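For contrast, that “special Extended Events session” alternative looks roughly like this; the session name and the session_id value are placeholders in my sketch, not anything from Erik’s post.

-- A throwaway Extended Events session that captures waits (including lock waits)
-- for a single session and writes them to an event file. Session name and the
-- session_id value are placeholders.
CREATE EVENT SESSION [waits_for_my_spid] ON SERVER
ADD EVENT sqlos.wait_info
(
    ACTION (sqlserver.session_id, sqlserver.sql_text)
    WHERE (sqlserver.session_id = 57)
)
ADD TARGET package0.event_file (SET filename = N'waits_for_my_spid');

ALTER EVENT SESSION [waits_for_my_spid] ON SERVER STATE = START;

-- ...run the query from session 57, stop and drop the session, then read the .xel
-- file with sys.fn_xe_file_target_read_file.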


Tips for Query Tuning in Postgres

Gabrielle Roth shares some advice:

For PGSQL Phriday #016, Ryan’s asked us to “…discuss your process for tuning difficult queries. Specifically, try to focus on that one problematic query that really challenged you and you always use it as an example when helping or teaching others your methods.”

Here are the generic steps I take, mostly in order.

Click through for those tips, as well as an example of using join_collapse_limit in practice to tame an unruly query.
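If you haven’t bumped into join_collapse_limit before, the knob looks like this in practice. The tables are hypothetical and this is a generic illustration rather than Gabrielle’s query:

-- By default the planner only reorders joins among the first 8 items in the FROM
-- list (join_collapse_limit = 8). Raising it for a session lets the planner
-- consider more join orders for a many-join query, at the cost of planning time.
SET join_collapse_limit = 12;

EXPLAIN (ANALYZE, BUFFERS)
SELECT c.name, SUM(oi.quantity * oi.unit_price) AS total_spend
FROM customers AS c
JOIN orders AS o       ON o.customer_id = c.id
JOIN order_items AS oi ON oi.order_id = o.id
JOIN products AS p     ON p.id = oi.product_id
JOIN categories AS cat ON cat.id = p.category_id
GROUP BY c.name;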


Dealing with Page Latch Waits

Jared Poche talks latches:

Reducing waits is a great way to improve the performance of your SQL Servers. Minimizing the PAGELATCH_EX and PAGELATCH_SH wait types is more involved than most. There are generally two causes: one is largely solved in recent versions, and one requires real thought and planning to resolve.

Read on for both of those causes, as well as several tips you can use to minimize the risk of waiting.
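As a quick way to check whether you’re hitting this right now, here is a sketch (mine, not from Jared’s post) that cracks the page behind any current PAGELATCH waits; sys.fn_PageResCracker and sys.dm_db_page_info require SQL Server 2019 or later.

-- Which sessions are currently waiting on page latches, and what kind of page
-- (for example, tempdb allocation pages versus the last page of a hot index).
SELECT
    r.session_id,
    r.wait_type,
    r.wait_time,
    pi.database_id,
    pi.object_id,
    pi.index_id,
    pi.page_type_desc
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.fn_PageResCracker(r.page_resource) AS prc
CROSS APPLY sys.dm_db_page_info(prc.db_id, prc.file_id, prc.page_id, 'DETAILED') AS pi
WHERE r.wait_type LIKE N'PAGELATCH%';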


Digging into Cursors

Hugo Kornelis gives us a primer on cursors:

And yes, I know the mantra. Do not use cursors. They are slow. There is (almost) always a faster set-based alternative available. So why would I even waste blog space on cursors, when the only smart thing to do is to rip them out and replace them with a set-based alternative?

Well, there are, in fact, many reasons. The “almost” above suggests that there are still cases where row-by-row processing is in fact the most efficient method. There are cases where we use a cursor, even though a set-based version would be faster, because we need to call a stored procedure for each row returned, and that stored procedure is too complex to be changed to process an entire set at once. Or, perhaps, you just inherited existing code that uses a cursor, and you need to fix the immediate performance issues now, so you cannot afford the time investment to rewrite the row-by-row logic to set-based logic.

Another example of this was calculating running totals prior to SQL Server 2012’s support for aggregate window functions. The cursor approach was considerably faster than the self-join approach because the self-join required joining on an inequality. Granted, there was the “quirky update” technique, but that depended on an accident of SQL Server internals that Microsoft never officially supported and could have broken at any time.
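For anyone who only knows the post-2012 world, the modern replacement for that particular cursor pattern is a single windowed aggregate. The table and columns here are hypothetical:

-- Running total per account using an ordered windowed SUM (SQL Server 2012+).
-- Table and column names are made up for illustration.
SELECT
    t.AccountId,
    t.TransactionDate,
    t.Amount,
    SUM(t.Amount) OVER
    (
        PARTITION BY t.AccountId
        ORDER BY t.TransactionDate
        ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
    ) AS RunningTotal
FROM dbo.Transactions AS t
ORDER BY t.AccountId, t.TransactionDate;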
