Category: Performance Tuning

Thinking About Implicit Conversions

Bert Wagner shows how implicit conversions in a predicate can ruin query performance:

Why? Because SQL is performing that implicit conversion to the numeric datatype for every single row in my table. Hence, it can’t seek using the index because it ends up having to scan the whole table to convert every record to a number first.

And this doesn’t only happen with numbers and string conversion. Microsoft has posted an entire chart detailing what types of data type comparisons will force an implicit conversion:

This is one of those things that can easily elude you because the query will often return results in line with what you expect, so until you have a performance problem, you might not even think to check.
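
To make the seek-versus-scan behavior concrete, here is a minimal T-SQL sketch (the table and column names are hypothetical): a numeric value stored in a VARCHAR column, compared once against an INT literal and once against a matching string literal.

    -- AccountNumber stores digits but is declared as VARCHAR.
    CREATE TABLE dbo.Accounts
    (
        AccountId     INT IDENTITY(1,1) PRIMARY KEY,
        AccountNumber VARCHAR(20) NOT NULL
    );
    CREATE INDEX IX_Accounts_AccountNumber ON dbo.Accounts (AccountNumber);

    -- INT has higher data type precedence, so SQL Server implicitly converts
    -- AccountNumber on every row (CONVERT_IMPLICIT), turning a seek into a scan.
    SELECT AccountId
    FROM dbo.Accounts
    WHERE AccountNumber = 12345;

    -- Comparing against a value of the column's own type allows an index seek.
    SELECT AccountId
    FROM dbo.Accounts
    WHERE AccountNumber = '12345';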

Mixing VARCHAR And NVARCHAR

Solomon Rutzky walks through some of the nuance of mixing VARCHAR and NVARCHAR data types with respect to your collation:

There are two types of Collations in SQL Server: SQL Server Collations and Windows Collations:

  • SQL Server Collations (those starting with SQL_) are older Collations that were the only ones available prior to SQL Server 2000. These Collations use simplistic sort orders, and do not handle the great variety of linguistic rules defined by Unicode. In fact, they do not have any Unicode rules defined at all, so N[VAR]CHAR data in these Collations will actually use OS-level Collation rules.

  • Windows Collations (those not starting with SQL_) were introduced in SQL Server 2000. These Collations not only have the Unicode rules defined, but they also apply those same linguistic rules to [VAR]CHAR data. While this does come at a slight cost to performance, it also allows for consistency of behavior. And it is this consistency that helps out greatly when there is a mismatch of datatypes (as we will see in a moment).

The simple advice to avoid mixing NVARCHAR and VARCHAR data types is still sound, but do read the whole thing.
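
As a concrete illustration of why the collation family matters when the types do get mixed, here is a hedged T-SQL sketch (the table, column, and parameter names are hypothetical):

    -- A VARCHAR column using a SQL Server Collation (SQL_*).
    CREATE TABLE dbo.Customers
    (
        CustomerId INT IDENTITY(1,1) PRIMARY KEY,
        LastName   VARCHAR(50) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL
    );
    CREATE INDEX IX_Customers_LastName ON dbo.Customers (LastName);

    -- NVARCHAR has higher data type precedence, so the VARCHAR column is
    -- implicitly converted to NVARCHAR. Under a SQL_ collation the Unicode
    -- rules differ from the column's own rules, and the plan typically
    -- degrades to an index scan.
    DECLARE @LastName NVARCHAR(50) = N'Rutzky';
    SELECT CustomerId FROM dbo.Customers WHERE LastName = @LastName;

    -- With a Windows collation (e.g. Latin1_General_100_CI_AS), the same
    -- linguistic rules apply to both [VAR]CHAR and N[VAR]CHAR data, and the
    -- optimizer can still produce a seek despite the conversion.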

Azure SQL Database Performance Tips

Arun Sirpal looks at some performance issues in Azure SQL Database:

Let’s assume that you are not driven by logins, workers, and session counts: how does one select the right level? What exactly do DTUs (Database Transaction Units) mean? I suggest reading this post by Andy Mallon: https://sqlperformance.com/2017/03/azure/what-the-heck-is-a-dtu

I am going to undersize my database and create an S0 database and run some day-to-day tasks – let’s see what happens. I will open up connections and issue some queries via my application. I would not class these queries as bad; what I am trying to drive at here is getting the sizing right for your workload.

This is one of the trickier things to get right, I think. We’re taking an existing workload and want to make sure it doesn’t fall over…but we aren’t measuring it in terms of DTUs locally. I know that there are some tools that help with the conversion process, but if you’re starting a new product or don’t have a great handle on your normal workload, it’s really easy to get caught between the Scylla of undersizing and the Charybdis of overpaying.
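
If you are trying to judge whether a tier like S0 is undersized, one quick sanity check (a sketch, not a full sizing method) is to watch the resource DMV inside the database while the workload runs:

    -- sys.dm_db_resource_stats keeps roughly an hour of history at
    -- 15-second intervals. Sustained values near 100% in any column
    -- suggest the current service tier is too small for the workload.
    SELECT TOP (20)
        end_time,
        avg_cpu_percent,
        avg_data_io_percent,
        avg_log_write_percent,
        avg_memory_usage_percent
    FROM sys.dm_db_resource_stats
    ORDER BY end_time DESC;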

Minimize Updates

Lukas Eder shows the importance of minimizing the scope of update statements:

Optionally, just as with JPA, you can turn on optimistic locking on this statement. The important thing here is that the clicks and purchases columns are left untouched, because they were not changed by the client code. This is different from JPA, which either sends all the values by default, or if you specify @DynamicUpdate in Hibernate, it would send only the last_name column, because while first_name was changed it was not modified.

My definition:

  • changed: The value is “touched”, its state is “dirty” and the state needs to be synched to the database, regardless of modification.
  • modified: The value is different from its previously known value. By necessity, a modified value is always changed.

As you can see, these are different things, and it is quite hard for a JPA-based API like Hibernate to implement changed semantics because of the annotation-based declarative nature of how entities are defined. We’d need some sophisticated instrumentation to intercept all data changes even when the values have not been modified (I didn’t make those attributes public by accident).

I found this an interesting walkthrough of data-layer mechanisms that directly affect database performance.
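
For the SQL side of the argument, here is a sketch of the two UPDATE shapes being contrasted, using the column names from the excerpt (the table name and key value are assumed):

    -- The "send everything" form rewrites clicks and purchases even though
    -- only last_name was modified, which also widens the window for lost
    -- updates on those untouched columns.
    UPDATE customer
    SET    first_name = 'John',
           last_name  = 'Smith',
           clicks     = 3,
           purchases  = 17
    WHERE  customer_id = 42;

    -- The minimal form touches only the column whose value actually changed.
    UPDATE customer
    SET    last_name = 'Smith'
    WHERE  customer_id = 42;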

The Downside Of Oversizing

Aaron Bertrand shows why you might not want to oversize VARCHAR columns by too much:

Now, whether you go by the old standard or the new one, you do have to support the possibility that someone will use all the characters allowed. Which means you have to use 254 or 320 characters. But what I’ve seen people do is not bother researching the standard at all, and just assume that they need to support 1,000 characters, 4,000 characters, or even beyond.

So let’s take a look at what happens when we have tables with an e-mail address column of varying size, but storing the exact same data:

This is a good argument against automatically using VARCHAR(8000) (much less MAX) when creating columns.
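
A rough sketch of the kind of comparison at work here (the table and column names are made up): identical data in differently sized columns, with the declared width driving the sort's memory grant estimate rather than the actual values.

    CREATE TABLE dbo.Email_320  (EmailAddress VARCHAR(320)  NOT NULL);
    CREATE TABLE dbo.Email_4000 (EmailAddress VARCHAR(4000) NOT NULL);

    -- ...load both tables with the same addresses, then sort each one.
    -- The grant for variable-length columns is estimated from the declared
    -- size (roughly half of it), so the wider column requests far more
    -- memory for identical rows.
    SELECT EmailAddress FROM dbo.Email_320  ORDER BY EmailAddress OPTION (MAXDOP 1);
    SELECT EmailAddress FROM dbo.Email_4000 ORDER BY EmailAddress OPTION (MAXDOP 1);

    -- Compare MemoryGrantInfo in the actual execution plans, or watch
    -- sys.dm_exec_query_memory_grants while the sorts run.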

Physical Versus Read-Ahead Reads

Kendra Little explains which SQL Server diagnostic tools include read-ahead reads versus “regular” physical reads:

SQL Server has more than one way to pull pages in from disk for your queries. SQL Server can do a physical read of an 8KB page, or an extent of 8 of those 8KB pages.

SQL Server can also use the “read-ahead” mechanism to pull even larger chunks of data in from disk when you have a query that wants to read a lot of data — because just plucking one 8KB page or even 64KB of pages into memory isn’t super fast when you need lotsa pages.

But these terms get a little confusing when you’re changing between different diagnostic tools in SQL Server, because some of these tools include read-ahead reads in physical reads, and some don’t!

There is some good information here, so read the whole thing.
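
If you want to see the distinction for yourself, SET STATISTICS IO is one tool that reports the two counters separately (the table name below is hypothetical, and clearing the buffer pool belongs on a test instance only):

    DBCC DROPCLEANBUFFERS;  -- test systems only: forces physical I/O on the next read
    SET STATISTICS IO ON;

    SELECT COUNT(*) FROM dbo.BigTable;

    -- The Messages output reports something like:
    -- Table 'BigTable'. Scan count 1, logical reads 12500, physical reads 3,
    -- read-ahead reads 12480, ...
    SET STATISTICS IO OFF;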

Memory Grants Affecting Columnstore Load

Denzil Ribeiro explains how memory grant pressure can determine whether a columnstore bulk insert skips the deltastore or not:

We found that only at the beginning of the run, there was contention on memory grants (RESOURCE_SEMAPHORE waits), for a short period of time. After that and later into the process, we could see some latch contention on regular data pages, which we didn’t expect as each thread was supposed to insert into its own row group. You would also see this same data by querying sys.dm_exec_requests live, if you caught it within the first minute of execution, as displayed below.

This is useful in case you run into the issue, but also useful as a case study on effective troubleshooting.
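
If you want to check for the same symptom on your own system, here is a sketch of the sort of live monitoring query mentioned above (filter further by session or database as needed):

    -- Requests currently waiting on a memory grant show RESOURCE_SEMAPHORE.
    SELECT r.session_id,
           r.status,
           r.wait_type,
           r.wait_time,
           r.granted_query_memory   -- reported in 8KB pages
    FROM sys.dm_exec_requests AS r
    WHERE r.wait_type = N'RESOURCE_SEMAPHORE';

    -- Grants that have been requested but not yet handed out.
    SELECT session_id, requested_memory_kb, granted_memory_kb, wait_time_ms
    FROM sys.dm_exec_query_memory_grants
    WHERE granted_memory_kb IS NULL;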

Using OtterTune To Tune Databases

Dana Van Aken, Geoff Gordon, and Andy Pavlo show off OtterTune, which uses machine learning techniques to tune database management systems like MySQL and Postgres:

OtterTune, a new tool that’s being developed by students and researchers in the Carnegie Mellon Database Group, can automatically find good settings for a DBMS’s configuration knobs. The goal is to make it easier for anyone to deploy a DBMS, even those without any expertise in database administration.

OtterTune differs from other DBMS configuration tools because it leverages knowledge gained from tuning previous DBMS deployments to tune new ones. This significantly reduces the amount of time and resources needed to tune a new DBMS deployment. To do this, OtterTune maintains a repository of tuning data collected from previous tuning sessions. It uses this data to build machine learning (ML) models that capture how the DBMS responds to different configurations. OtterTune uses these models to guide experimentation for new applications, recommending settings that improve a target objective (for example, reducing latency or improving throughput).

In this post, we discuss each of the components in OtterTune’s ML pipeline, and show how they interact with each other to tune a DBMS’s configuration. Then, we evaluate OtterTune’s tuning efficacy on MySQL and Postgres by comparing the performance of its best configuration with configurations selected by database administrators (DBAs) and other automatic tuning tools.

This is potentially a very interesting technology and is not the only one of its kind—we’ve seen Microsoft enter this space as well for SQL Server index and tuning recommendations.

Making Entity Framework Writes A Little Less Slow

Ilya Chumakov has some tips for making Entity Framework inserts and updates a lot faster:

When adding or modifying a large number of records (10³ and more), Entity Framework’s performance is far from perfect. The reasons are architectural peculiarities of the framework and the non-optimal SQL it generates. Leaping ahead, I can reveal that saving data while bypassing the context significantly reduces the execution time.

There’s some good advice in here, though not my favorite advice, which is don’t use Entity Framework.
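
For what “bypassing the context” can look like on the database side, here is one common pattern (not necessarily the article’s own approach, and all names are hypothetical): ship the rows in a single table-valued parameter and apply them in one statement instead of issuing a statement per tracked entity.

    CREATE TYPE dbo.CustomerUpdate AS TABLE
    (
        CustomerId INT           NOT NULL PRIMARY KEY,
        LastName   NVARCHAR(100) NOT NULL
    );
    GO
    -- The client fills @Rows (e.g. from a DataTable on a SqlCommand) and makes
    -- one round trip, rather than one UPDATE per entity.
    CREATE PROCEDURE dbo.UpdateCustomers
        @Rows dbo.CustomerUpdate READONLY
    AS
    BEGIN
        SET NOCOUNT ON;

        UPDATE c
        SET    c.LastName = r.LastName
        FROM   dbo.Customers AS c
        JOIN   @Rows         AS r
          ON   r.CustomerId = c.CustomerId;
    END;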

Computed Column Performance

Paul White has a great article on when computed columns perform poorly:

A major cause of poor performance is a simple failure to use an indexed or persisted computed column value as expected. I have lost count of the number of questions I have had over the years asking why the optimizer would choose a terrible execution plan when an obviously better plan using an indexed or persisted computed column exists.

The precise cause in each case varies, but is almost always either a faulty cost-based decision (because scalars are assigned a low fixed cost); or a failure to match an expanded expression back to a persisted computed column or index.

The match-back failures are especially interesting to me, because they often involve complex interactions with orthogonal engine features. Equally often, the failure to “match back” leaves an expression (rather than a column) in a position in the internal query tree that prevents an important optimization rule from matching. In either case, the outcome is the same: a sub-optimal execution plan.

Definitely read the whole thing if you’re thinking about turning on trace flag 176.
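
To ground the terminology, here is a minimal indexed persisted computed column setup (the names are hypothetical). The “match back” question is whether the second query below, which repeats the underlying expression, gets to use the index at all or ends up recomputing the scalar against a scan:

    CREATE TABLE dbo.Orders
    (
        OrderId  INT IDENTITY(1,1) PRIMARY KEY,
        Quantity INT   NOT NULL,
        Price    MONEY NOT NULL,
        TotalDue AS (Quantity * Price) PERSISTED
    );
    CREATE INDEX IX_Orders_TotalDue ON dbo.Orders (TotalDue);

    -- References the computed column directly.
    SELECT OrderId FROM dbo.Orders WHERE TotalDue > 1000;

    -- Repeats the expression; a good plan depends on the optimizer matching
    -- it back to the persisted column and its index.
    SELECT OrderId FROM dbo.Orders WHERE Quantity * Price > 1000;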
