Press "Enter" to skip to content

Category: Performance Tuning

Parameter Sensitivity Plan Optimization and Monitoring Scripts

Erik Darling gives us a warning:

You can read the full documentation here. But you don’t read the documentation, and the docs are missing some details at the moment anyway.

– It only works on equality predicates right now

– It only works on one predicate per query

– It only gives you three query plan choices, based on stats buckets

There are also some additional notes in the docs that I’m going to reproduce here, because this is where you’re gonna get tripped up if your scripts associate statements with their calling stored procedures, or use object identifiers from Query Store.

It’s not a deal-breaker but it does make things a lot harder for tool writers, as Erik points out. Hopefully there’s some way to tie this all together before GA.
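For tool writers experimenting with this in the CTPs, a minimal sketch (not Erik’s script, and the shape of this may change before GA) of tying a parameter-sensitive plan query variant back to its parent query via Query Store could look something like this:

```sql
/* Hedged sketch: map PSP query variants back to their parent (dispatcher) query
   in Query Store on SQL Server 2022 CTPs. Column names are taken from the
   sys.query_store_query_variant DMV as currently documented. */
SELECT
    v.query_variant_query_id,
    v.parent_query_id,
    v.dispatcher_plan_id,
    q.object_id AS parent_object_id,   -- the object association is where scripts tend to get tripped up
    qt.query_sql_text
FROM sys.query_store_query_variant AS v
JOIN sys.query_store_query AS q
    ON q.query_id = v.parent_query_id
JOIN sys.query_store_query_text AS qt
    ON qt.query_text_id = q.query_text_id;
```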


Performance Concerns around GENERATE_SERIES()

Erik Darling looks the gift horse in the mouth:

Quite a while back, I blogged about how much I’d love to have this as a function. We… sort of got it. It doesn’t do dates natively, but you can work around some of that with date functions.

In this post, I want to go over some of the disappointing performance issues I found when testing this function out.

It’s not good news, but it’s important to understand if you’re planning to use this when SQL Server 2022 comes out. Also, the things Erik mentions are potentially easier to fix than changes to the function’s signature, as they’re internal. I wouldn’t guarantee that things will get better, but there have been cases where we saw performance improvements between early CTPs and RTM.
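As a quick illustration of the date workaround Erik mentions (a sketch assuming the positional GENERATE_SERIES syntax, which may shift between CTPs):

```sql
/* GENERATE_SERIES() only emits numbers, so dates come from DATEADD over the series. */
DECLARE @start date = '2022-01-01',
        @end   date = '2022-12-31';

SELECT DATEADD(DAY, gs.value, @start) AS series_date
FROM GENERATE_SERIES(0, DATEDIFF(DAY, @start, @end)) AS gs;
```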


Downplaying Logical Reads

Erik Darling lays out an argument:

I decided to expand on some scripts to look at how queries use CPU and perform reads, and found some really interesting stuff. I’ll talk through some results and how I’d approach tuning them afterwards.

Interestingly, I just dealt with a mini-consulting engagement in which I saw the opposite: CPU sitting there twiddling its thumbs because of I/O insanity—and not even slow disks. In that case, the advice generally was “add this obviously missing index from this rather large table and stop scanning when you get 1 row on these really busy queries.” There was a little more nuance than that—and in fairness, physical reads were bad as well—but that’s why we investigate systematically.

Also, I generally accede to Erik’s point: for most busy environments, logical reads are unlikely to be the constraining factor and there are plenty of times where I choose the query form with more logical reads because it reduces CPU and memory requirements.
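If you want a rough starting point for this kind of investigation, something like the following (not Erik’s script, just a generic plan-cache sketch) puts CPU next to logical and physical reads so you can see which one is actually the constraint:

```sql
/* Sketch: top queries by CPU from the plan cache, with reads alongside for comparison. */
SELECT TOP (20)
    qs.total_worker_time / 1000 AS total_cpu_ms,
    qs.total_logical_reads,
    qs.total_physical_reads,
    qs.execution_count,
    st.text AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;
```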


Finding Queries with Missing Index Requests in SQL Server 2019

Erik Darling shows off a nicety in SQL Server 2019 and later:

Note that this script does not assemble the missing index definition for you. That stuff is all readily available in the query plans that get returned here, and of course the missing index feature has many caveats and limitations to it.

You should, as often as possible, execute the query and collect the actual execution plan to see where the time is spent before adding anything in.

Read on for the script.
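The SQL Server 2019 addition that makes this possible is the sys.dm_db_missing_index_group_stats_query DMV, which ties missing index requests back to query hashes. A bare-bones sketch (not Erik’s script) of using it might look like:

```sql
/* Sketch: missing index requests joined back to the query hashes that produced them
   (sys.dm_db_missing_index_group_stats_query is available as of SQL Server 2019). */
SELECT
    migsq.query_hash,
    migsq.query_plan_hash,
    migsq.user_seeks,
    migsq.avg_user_impact,
    mid.statement AS table_name,
    mid.equality_columns,
    mid.inequality_columns,
    mid.included_columns
FROM sys.dm_db_missing_index_group_stats_query AS migsq
JOIN sys.dm_db_missing_index_groups AS mig
    ON mig.index_group_handle = migsq.group_handle
JOIN sys.dm_db_missing_index_details AS mid
    ON mid.index_handle = mig.index_handle
ORDER BY migsq.avg_user_impact DESC;
```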


Perspective on Spinlocks

Erik Darling speaks with wisdom:

The more people want to avoid fixing what’s really wrong with their server, the more they go out and find all the weird stuff that they can blame on something else (usually the product), or keep doing the same things that aren’t fixing the problem.

Spinlocks are one of those things. People will measure them, stare at them, Google them, and have no idea what to make of them, all while being sure there’s something going on with them.

I don’t want to discount when spinlocks can actually cause a problem, but you shouldn’t treat every performance problem like it’s a bridge too far from what you can solve.

I have seen performance problems which actually did come down to spinlock issues. For every one of those, I’ve seen, oh, about 95-100 or so which came down to inefficient code.


An Index for Change Tracking Cleanup

James Ferebee creates an index:

If the issue persists, we can stack up a lot of data that needs to be removed, which can be too much for autocleanup to manage. At that point, we recommend running (as is indicated in the error message) sp_flush_CT_internal_table_on_demand, which is discussed in detail in Amit’s blog post Change Tracking Cleanup–Part 1.

I will not discuss the internals of the process, as Amit Banerjee already touched on this in the referenced blog. However, if you are routinely encountering issues where autocleanup can’t keep up and/or manual cleanup is taking significant time, you can add the index and see if it helps; I have the process enumerated below. Keep in mind this is not guaranteed to fix all cleanup issues, and it may still be necessary to run manual cleanup regularly. If you continue to have issues with cleanup, feel free to create a ticket with us here at CSS so we can assist you and gather data specific to your environment.

Read on for the index definition as well as some important notes about whether you might need it.
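For reference, the manual cleanup James mentions is invoked per table; a hedged example (the @TableToClean parameter is per the procedure’s documentation, and the table name here is a placeholder) looks like:

```sql
/* Run on-demand change tracking cleanup for one table; repeat per tracked table as needed.
   dbo.MyTrackedTable is a made-up placeholder name. */
EXEC sys.sp_flush_CT_internal_table_on_demand @TableToClean = N'dbo.MyTrackedTable';
```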


Optimizing Hive Performance with Tez

Jay Desai has some recommendations around tuning Tez queries:

Tuning Hive on Tez queries can never be done in a one-size-fits-all approach. Query performance depends on the size of the data, file types, query design, and query patterns. During performance testing, evaluate and validate configuration parameters and any SQL modifications. It is advisable to make one change at a time during performance testing of the workload, and it is best to assess the impact of tuning changes in your development and QA environments before using them in production environments. Cloudera WXM can assist in evaluating the benefits of query changes during performance testing.

Click through for several configuration and query considerations.
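As an illustration of the “one change at a time” advice, session-level settings like these are typical candidates (these are common Hive-on-Tez knobs rather than ones Jay specifically calls out, and the values are placeholders):

```sql
-- Adjust one setting, re-run the representative workload, compare, then move on.
SET hive.tez.container.size=4096;                     -- memory (MB) per Tez container
SET hive.exec.reducers.bytes.per.reducer=268435456;   -- target data volume per reducer
SET hive.vectorized.execution.enabled=true;           -- vectorized execution for supported operators
```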


Slow File Open Times in Power BI

Marco Russo explains why opening some Power BI files might take so long:

There could be many reasons for that, but if you have calculated columns and/or calculated tables in your model, you should be aware that they could be the reasons why this happens. It could be, so I want to explain when this happens.

The short explanation is the following: when you open a PBIX file, Power BI Desktop automatically recalculates those calculated columns and calculated tables that depend on a volatile formula.

Read on for the longer explanation, which includes a (possibly incomplete) list of volatile formulas.


PyODBC vs C# ODBC Performance Differences

Jose Manuel Jurado Diaz explains a performance difference:

A customer asked today why, when using ODBC Driver 17 for SQL Server in Python with PYODBC, we see a slight difference in the time taken compared with C# System.Data.Odbc. Following, I would like to share my lesson learned about it.

Read on for Jose’s explanation. My short version is, it seems particularly important when using the Python ODBC driver to write the exact query you want rather than a SELECT * or a query which returns rows and columns you don’t need.
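In other words, prefer something shaped like the second query below over the first, whichever driver you use (table and column names here are made up):

```sql
-- Pulls every column and row, leaving filtering to the client: more data over the wire and in Python.
SELECT * FROM dbo.Orders;

-- Asks the server for exactly what the application needs.
SELECT OrderID, OrderDate, TotalDue
FROM dbo.Orders
WHERE OrderDate >= '2022-01-01';
```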
