Press "Enter" to skip to content

Category: Performance Tuning

I/O Latency And Performance Tuning

Andy Galbraith is starting a new toolbox series.  His first post is an introduction and a look at drive latency:

You look at the numbers again, and now you find that disk latency, which had previously been fine, is now completely in the tank during the business day, showing that I/O delays are through the roof.
What happened?
This demonstrates the concept of the shifting bottleneck – while CPU use was through the roof, the engine was so bogged down that it couldn’t generate that much I/O, but once the CPU issue was resolved, queries started moving through more quickly until they hit the next choke point at the I/O limit.  Odds are that once you resolve the I/O situation, you will find a new bottleneck.
How do you ever defeat a bad guy that constantly moves around and frequently changes form?

Click through for some pointers on disk latency and on figuring out when it becomes a problem.
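
Not from Andy’s post, but one common way to sample per-file latency from SQL Server’s own counters is the virtual file stats DMV (a rough sketch; thresholds and interpretation are workload-dependent):

-- Cumulative stall time divided by I/O count gives average latency per file.
-- These numbers accumulate from startup, so compare snapshots over time
-- rather than trusting a single reading.
SELECT DB_NAME(vfs.database_id)                             AS database_name,
       mf.physical_name,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_latency_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
    ON mf.database_id = vfs.database_id
   AND mf.file_id     = vfs.file_id
ORDER BY avg_read_latency_ms DESC;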


How Functions Affect Data Retrieval Performance

Daniel Janik shows one of the many pain points around user-defined functions in SQL Server:

Note that Query 1 uses a function around the column and this causes a scan which increases IO and CPU utilization. This is because every value in the column for the whole table must have the hyphen removed to see if it is a match.

In Query 2, we see a seek. This is because the value is modified instead of the column.

Click through for a few examples.
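
As a rough sketch of the pattern (table, column, and index names here are hypothetical, not from Daniel’s post):

-- dbo.Accounts(AccountNumber VARCHAR(20)) stores values with hyphens, e.g.
-- '123-45-6789', and has a nonclustered index on AccountNumber.

-- Query 1: the function wraps the column, so it must be evaluated for every
-- row and the index can only be scanned.
SELECT AccountNumber
FROM dbo.Accounts
WHERE REPLACE(AccountNumber, '-', '') = '123456789';

-- Query 2: reformat the search value once instead; the predicate stays
-- sargable and the index can be used for a seek.
DECLARE @Search VARCHAR(20) = '123456789';
SELECT AccountNumber
FROM dbo.Accounts
WHERE AccountNumber = STUFF(STUFF(@Search, 4, 0, '-'), 7, 0, '-');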


Rowgroup Elimination In Stored Procedures

Erik Darling notes a parameter sniffing problem when trying to use rowgroup elimination in a stored procedure:

So where are we? Well, we found that Rowgroup Elimination is possible in stored procedures with ColumnStore indexes, but that the cached plan doesn’t change based on feedback from that elimination.

  • Good news: elimination can occur with variables passed in.
  • Bad news: that cached plan sticks with you like belly fat at a desk job

Remember our plan? It used a Stream Aggregate to process the MAX. Stream Aggregates are preferred for small and/or ordered sets.

Great post, Brent.
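
A minimal sketch of the setup being discussed (table, index, and procedure names are hypothetical):

-- Table with a clustered columnstore index; if data is loaded in OrderDate
-- order, each rowgroup's min/max metadata covers a narrow date range.
CREATE TABLE dbo.Orders
(
    OrderId   BIGINT        NOT NULL,
    OrderDate DATE          NOT NULL,
    Amount    DECIMAL(18,2) NOT NULL,
    INDEX CCI_Orders CLUSTERED COLUMNSTORE
);
GO

-- Rowgroup elimination can happen even when the predicate comes from a
-- parameter, but the plan cached for the first sniffed value sticks around.
CREATE OR ALTER PROCEDURE dbo.GetMaxAmount
    @StartDate DATE
AS
BEGIN
    SELECT MAX(Amount) AS MaxAmount    -- the MAX processed by the Stream Aggregate
    FROM dbo.Orders
    WHERE OrderDate >= @StartDate;     -- rowgroups entirely outside the range are skipped
END;
GO

EXEC dbo.GetMaxAmount @StartDate = '20170101';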


Bad Parameter Sniffing Flowchart

Grant Fritchey is asking for input on a new flowchart he has created:

Lots of people are confused by how to deal with bad parameter sniffing when it occurs. In an effort to help with this, I’m going to try to make a decision flow chart to walk you through the process. This is a rough, quite rough, first draft.

I would love to hear any input. For this draft, I won’t address the things I think I’ve left out. I want to see what you think of the decision flow and what you think might need to be included. Click on it to embiggen.

I think it’s a great first step.  I’d also find it useful to have a decision point around using local variables instead of parameters, particularly in contrast to using RECOMPILE and OPTIMIZE FOR UNKNOWN.
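
For context, a quick sketch of those three options (procedure and table names are hypothetical):

CREATE OR ALTER PROCEDURE dbo.GetOrdersByCustomer
    @CustomerId INT
AS
BEGIN
    -- Option 1: copy the parameter into a local variable; the optimizer can
    -- no longer sniff the value and falls back to density-based estimates.
    DECLARE @LocalCustomerId INT = @CustomerId;

    SELECT OrderId, OrderDate
    FROM dbo.SalesOrders
    WHERE CustomerId = @LocalCustomerId;

    -- Option 2: recompile the statement on every execution, trading CPU for
    -- a plan tailored to each parameter value.
    SELECT OrderId, OrderDate
    FROM dbo.SalesOrders
    WHERE CustomerId = @CustomerId
    OPTION (RECOMPILE);

    -- Option 3: keep one cached plan, but build it for an "average" value
    -- rather than whatever value happened to be sniffed first.
    SELECT OrderId, OrderDate
    FROM dbo.SalesOrders
    WHERE CustomerId = @CustomerId
    OPTION (OPTIMIZE FOR UNKNOWN);
END;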


Big Performance Tuning Mistakes

Erik Darling enumerates common performance tuning mistakes:

Coming in at number five and looking alive! Did you know that query plans can be different on busy servers? I bet not! And aside from that, your performance problem might not even be the query itself; it may be blocking, or a poison wait. This stuff may not show up in Dev, unless you spend a lot of time and money engineering load tests.

This is what safety groups call “Situational Awareness”, and this is the kind of stuff that you really want a monitoring tool in place for. Sure, that query ran slowly, but if that’s all you know, and you can’t reproduce it, then you need to start digging deeper.

There are a number of tips here, and that number is five.


SSMS Performance Dashboard

Pedro Lopes announces that the SQL Server Performance Dashboard is now built into SQL Server Management Studio:

Back in 2007, we released the Microsoft SQL Server 2005 Performance Dashboard Reports, which were designed to provide fast insight into performance issues from some newly created system views – DMVs (Dynamic Management Views). These were updated for SQL Server 2008 and later for SQL Server 2012, and while they were very helpful, they had a significant drawback – they required a separate download and install. This meant that when needed, they most probably were not installed on a specific SQL Server, and therefore were unusable when they were needed most.

With the new SSMS 17.2, we are releasing the Performance Dashboard embedded as a built-in Standard Report. This means that it is available for any SQL Server instance starting with SQL Server 2008, without any extra downloads or running any extra scripts. Just connect to your server in SSMS and open the Performance Dashboard.

Aside from making it built into Management Studio, they’ve also added a few helpful things to the product, so it is worth checking out.


Thinking About Implicit Conversions

Bert Wagner shows how implicit conversions in a predicate can ruin query performance:

Why? Because SQL is performing that implicit conversion to the numeric datatype for every single row in my table. Hence, it can’t seek using the index because it ends up having to scan the whole table to convert every record to a number first.

And this doesn’t only happen with numbers and string conversion. Microsoft has posted an entire chart detailing what types of data type comparisons will force an implicit conversion:

This is one of those things that can easily elude you because the query will often return results in line with what you expect, so until you have a performance problem, you might not even think to check.
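
A small illustration of the kind of predicate in question (hypothetical table, with the order number stored as a string and indexed):

-- Comparing a VARCHAR column to an integer converts the column, not the
-- literal (numbers rank higher in data type precedence), so every row must
-- be converted and the index can only be scanned.
SELECT OrderNumber
FROM dbo.Sales
WHERE OrderNumber = 12345;

-- Matching the literal's type to the column keeps the predicate sargable,
-- so the optimizer can seek on the index instead.
SELECT OrderNumber
FROM dbo.Sales
WHERE OrderNumber = '12345';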


Mixing VARCHAR And NVARCHAR

Solomon Rutzky walks through some of the nuance of mixing VARCHAR and NVARCHAR data types with respect to your collation:

There are two types of Collations in SQL Server: SQL Server Collations and Windows Collations:

  • SQL Server Collations (those starting with SQL_) are older Collations that were the only ones available prior to SQL Server 2000. These Collations use simplistic sort orders, and do not handle the great variety of linguistic rules defined by Unicode. In fact, they do not have any Unicode rules defined at all, so N[VAR]CHAR data in these Collations will actually use OS-level Collation rules.

  • Windows Collations (those not starting with SQL_) were introduced in SQL Server 2000. These Collations not only have the Unicode rules defined, but they also apply those same linguistic rules to [VAR]CHAR data. While this does come at a slight cost to performance, it also allows for consistency of behavior. And it is this consistency that helps out greatly when there is a mismatch of datatypes (as we will see in a moment).

The simple advice to avoid mixing NVARCHAR and VARCHAR data types is still sound, but do read the whole thing.
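
A minimal illustration of the mismatch in question (table and collation here are hypothetical; the exact plan shape depends on the collation, which is the point of the article):

-- The column uses a SQL Server Collation (SQL_ prefix).
CREATE TABLE dbo.Customers
(
    LastName VARCHAR(50) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
    INDEX IX_Customers_LastName NONCLUSTERED (LastName)
);

-- The N'' literal is NVARCHAR, so the VARCHAR column is implicitly converted.
-- With a SQL_ collation this typically costs you the index seek; with a
-- Windows collation (e.g. Latin1_General_100_CI_AS) the optimizer can
-- usually still build a seekable range despite the conversion.
SELECT LastName
FROM dbo.Customers
WHERE LastName = N'Rutzky';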


Azure SQL Database Performance Tips

Arun Sirpal looks at some performance issues in Azure SQL Database:

Let’s assume that you are not driven by logins, workers, and session counts; how does one select the right level? What exactly do DTUs (Database Transaction Units) mean? I suggest reading this post by Andy Mallon: https://sqlperformance.com/2017/03/azure/what-the-heck-is-a-dtu

I am going to undersize my database and create an S0 database and run some day-to-day tasks – let’s see what happens. I will open up connections and issue some queries via my application. I would not class these queries as bad; what I am trying to drive at here is getting the sizing right for your workload.

This is one of the trickier things to get, I think.  We’re taking an existing workload and want to make sure it doesn’t fall over…but we aren’t measuring in terms of DTUs locally.  I know that there are some tools that help the conversion process, but if you’re starting a new product or don’t have a great handle on normal workload, it’s really easy to fall into the Scylla and Charybdis of undersizing and overpaying.
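
Not from Arun’s post, but one way to see how close a database is running to its tier’s limits during a test like this is sys.dm_db_resource_stats, which reports resource use as a percentage of the service tier’s caps:

-- One row roughly every 15 seconds, covering about the last hour.
SELECT end_time,
       avg_cpu_percent,
       avg_data_io_percent,
       avg_log_write_percent,
       avg_memory_usage_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;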


Minimize Updates

Lukas Eder shows the importance of minimizing the scope of update statements:

Optionally, just as with JPA, you can turn on optimistic locking on this statement. The important thing here is that the clicks and purchases columns are left untouched, because they were not changed by the client code. This is different from JPA, which either sends all the values by default or, if you specify @DynamicUpdate in Hibernate, would send only the last_name column, because while first_name was changed, it was not modified.

My definition:

  • changed: The value is “touched”, its state is “dirty” and the state needs to be synched to the database, regardless of modification.
  • modified: The value is different from its previously known value. By necessity, a modified value is always changed.

As you can see, these are different things, and it is quite hard for a JPA-based API like Hibernate to implement changed semantics because of the annotation-based declarative nature of how entities are defined. We’d need some sophisticated instrumentation to intercept all data changes even when the values have not been modified (I didn’t make those attributes public by accident).

I found this an interesting walkthrough of data access layer mechanisms that directly affect database performance.
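
A sketch of what the resulting statement looks like on the database side (the column names follow the quoted example; the customer table name and the version column used for optimistic locking are my assumptions):

-- Only the column the client actually changed is sent; clicks and purchases
-- are left out of the statement entirely, so their values are untouched.
UPDATE customer
SET    last_name = 'Eder',
       version   = version + 1
WHERE  customer_id = 1
  AND  version     = 42;   -- optimistic lock: 0 rows updated means someone else got there first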
