Press "Enter" to skip to content

Category: Performance Tuning

Testing TPC-H with Batch Mode

Niko Neugebauer looks at TPC-H query testing in both row mode and batch mode:

I executed every single query enough times so that the execution would run totally In-Memory (64GB of RAM is enough because we have our data compressed, as I mentioned earlier in the SETUP part). This would allow me to mimic a busy system that has enough resources to process the reading workload. Since the Batch Execution Mode is focused on CPU improvements, I decided to sample not only the total elapsed time but also the CPU times, so that we can make some judgements about the CPU bandwidth variation. Lowering the CPU consumption of the high-demanding CPU queries is key to improving the overall system parallelism (well, watch out for the memory, of course).

Each successful execution was sampled at least 3 times (and for some queries more than 5 times), and then the results were averaged, so that we can have higher confidence.

Niko has some interesting findings, some good for SQL Server and some not so good.
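
If you want to try something similar yourself, here is a minimal sketch of that kind of timing capture, assuming a TPC-H database named tpch with the lineitem table under a clustered columnstore index (none of these names come from Niko's post):

-- Sampling CPU time versus elapsed time for one TPC-H query.
-- SET STATISTICS TIME reports both per statement; GO 3 repeats the batch
-- so the numbers can be averaged, as Niko describes.
USE tpch;
GO
SET STATISTICS TIME ON;
GO

-- TPC-H Query 1 (pricing summary report), shown as an example workload query.
SELECT  l_returnflag,
        l_linestatus,
        SUM(l_quantity)                         AS sum_qty,
        SUM(l_extendedprice * (1 - l_discount)) AS sum_disc_price,
        COUNT(*)                                AS count_order
FROM    dbo.lineitem
WHERE   l_shipdate <= DATEADD(DAY, -90, '1998-12-01')
GROUP BY l_returnflag, l_linestatus
ORDER BY l_returnflag, l_linestatus;
GO 3

SET STATISTICS TIME OFF;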


Application Caching at Stack Overflow

Nick Craver has a long post on how Stack Overflow does application caching:

For everyone who hates caching, this is the section for you! Yes, I’m totally playing both sides.

Given the above and how drastic the wins are, why wouldn’t we cache something? Well, because every single decision has trade-offs. Every. Single. One. It could be as simple as time spent or opportunity cost, but there’s still a trade-off.

This is a long but very useful post.


Snapshot Isolation

Gerald Britton takes us through snapshot isolation in SQL Server:

Snapshot isolation avoids most locking and blocking by using row versioning. When data is modified, the committed versions of affected rows are copied to tempdb and given version numbers. This operation is called copy on write and is used for all inserts, updates and deletes using this technique. When another session reads the same data, the committed version of the data as of the time the reading transaction began is returned.

By avoiding most locking, this approach can greatly increase concurrency at a lower cost than transactional isolation. Of course, “There ain’t no such thing as a free lunch!” and snapshot isolation has a hidden cost: increased usage of tempdb.

Gerald covers both varieties, Read Committed Snapshot Isolation and proper Snapshot Isolation. RCSI is definitely worth understanding in almost any environment, and even Snapshot Isolation has its uses.
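
As a quick refresher, here is a minimal sketch of turning on both flavors and watching the tempdb version store; the database and table names are made up for illustration:

-- RCSI changes the default READ COMMITTED behavior to use row versions;
-- ALLOW_SNAPSHOT_ISOLATION lets sessions opt in to full SNAPSHOT isolation.
ALTER DATABASE SalesDb SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;
ALTER DATABASE SalesDb SET ALLOW_SNAPSHOT_ISOLATION ON;

-- A session opting in to SNAPSHOT isolation: every read in the transaction
-- sees committed row versions as of the moment the transaction started.
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
    SELECT OrderID, Status
    FROM dbo.Orders            -- hypothetical table; readers are not blocked by writers
    WHERE CustomerID = 42;
COMMIT;

-- The hidden cost: the version store lives in tempdb, and its footprint can be tracked.
SELECT SUM(version_store_reserved_page_count) * 8 / 1024.0 AS version_store_mb
FROM tempdb.sys.dm_db_file_space_usage;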


Finding Memory-Rich Queries

Matthew McGiffen wants to find the queries which demand the largest memory grants:

I had a server that looked like it had been suffering from memory contention. I wanted to see what queries were being run that had high memory requirements. The problem was that it wasn’t happening right now – I needed to be able to see what had happened over the last 24 hours.

Enter Query Store. The run-time stats captured by Query Store include details relating to memory.

Click through for a script which retrieves this data over a time frame.
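
This isn't Matthew's script, but a minimal sketch of the kind of Query Store query involved; the memory columns are reported in 8 KB pages, hence the conversion to MB:

-- Top memory-grant consumers over the last 24 hours, from Query Store.
SELECT TOP (20)
       qt.query_sql_text,
       rs.count_executions,
       CAST(rs.avg_query_max_used_memory * 8 / 1024.0 AS DECIMAL(18, 2)) AS avg_mem_mb,
       CAST(rs.max_query_max_used_memory * 8 / 1024.0 AS DECIMAL(18, 2)) AS max_mem_mb,
       rsi.start_time,
       rsi.end_time
FROM sys.query_store_runtime_stats AS rs
JOIN sys.query_store_runtime_stats_interval AS rsi
     ON rsi.runtime_stats_interval_id = rs.runtime_stats_interval_id
JOIN sys.query_store_plan AS p
     ON p.plan_id = rs.plan_id
JOIN sys.query_store_query AS q
     ON q.query_id = p.query_id
JOIN sys.query_store_query_text AS qt
     ON qt.query_text_id = q.query_text_id
WHERE rsi.start_time >= DATEADD(HOUR, -24, SYSUTCDATETIME())
ORDER BY rs.max_query_max_used_memory DESC;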


Replaying Workloads with WorkloadTools

Gianluca Sartori shows an example of using the WorkloadTools application to replay a workload, including where the analytics server cannot directly access the production database:

Regardless of the method that you decided to use, at the end of the replays, you will have two distinct sets of tables containing the workload analysis data, sitting in different schemas in the same database or in completely different databases.

WorkloadViewer will let you visualize performance over time, as we have seen for a single workload analysis, but this time it will be able to show you data from both workloads, so that you can compare them.

This sort of production load testing is both important and difficult; WorkloadTools makes it easier.


Troubleshooting Query Performance Changes

Erin Stellato walks us through a troubleshooting guide when users complain about poorly-performing queries:

This is a tale of troubleshooting…

When you unexpectedly or intermittently encounter a change in query performance, it can be extremely frustrating and troublesome for a DBA or developer. If you’re not using Query Store, a third-party application, or your own method to capture query data, then you probably don’t have information about what query performance looked like when things were good…you just know how it’s running now. I was working with a customer of Tim’s last week that had been chasing this exact problem for, in their words, years. They had recently upgraded to SQL Server 2016, and the problem was still occurring.

Strangely, “blame the network” didn’t appear in Erin’s post, so I don’t know if it’s comprehensive.


What Compatibility Level 150 Gets You

Erik Darling explains the upsides and downsides of moving to SQL Server 2019 and compatibility level 150:

In those versions, flipping compatibility level uses the new Cardinality Estimator (CE). That new Cardinality Estimator is real hit or miss.

The worst part is that there’s practically no gain to be realized for using higher compatibility levels — that changes with SQL Server 2019.

Read on to see what those new features are. As far as the compatibility level switch goes, there comes a time when you just need to bite the bullet and use the new cardinality estimator. Erik has a few tips to help with that too.
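
Not Erik's tips specifically, but as a reminder of the knobs involved, here's a minimal sketch (SalesDb and dbo.Orders are placeholder names): flip the compatibility level, and if the new CE bites you, fall back either database-wide or for a single query.

-- Move to compatibility level 150 to get the SQL Server 2019 features.
ALTER DATABASE SalesDb SET COMPATIBILITY_LEVEL = 150;
GO
USE SalesDb;
GO
-- Option 1: keep the legacy cardinality estimator database-wide (2016+).
ALTER DATABASE SCOPED CONFIGURATION SET LEGACY_CARDINALITY_ESTIMATION = ON;
GO
-- Option 2: fall back to the legacy CE for one problem query only.
SELECT COUNT(*)
FROM dbo.Orders                -- hypothetical table
WHERE OrderDate >= '2019-01-01'
OPTION (USE HINT ('FORCE_LEGACY_CARDINALITY_ESTIMATION'));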


Antivirus and SQL Server

Randolph West proffers advice for the case where your IT team requires antivirus software to be installed on a server running SQL Server:

This is why it is documented that we should exclude SQL Server from any AV (anti-malware) detection products, so that it can get on with doing what it does best.

Yes, it’s formally documented. This is why we should read documentation when installing things. While it’s super-easy to click “Next,” “Next,” “Next,” that should not be the case with a complex product like SQL Server.

Read on for the list of exceptions you should add and processes to avoid scanning.


Diving Into Index Scans

Hugo Kornelis explains how index scans work in SQL Server:

The logic of the Index Scan operator itself is fairly simple, but the actual actions carried out can vary hugely depending on the type of index being scanned (as defined in the Storage and IndexKind properties). Most of this logic is carried out at the level of the storage engine. Since an understanding of this is important to get a proper understanding of the performance of this operator, the actual actions carried out at the level of the storage engine will be described on this page as well.

The current version of SQL Server (2017) supports four types of index storage. The Storage property distinguishes between RowStore, ColumnStore, and MemoryOptimized; for the latter type only, IndexKind further differentiates this into NonClustered and NonClusteredHash.

Scans are an important part of the database engine and knowing how they work helps us understand when they’re the right choice for the job.
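
To make the four combinations concrete, here is some illustrative DDL; the table names are invented, and the memory-optimized example assumes the database already has a MEMORY_OPTIMIZED_DATA filegroup:

-- RowStore: a classic b-tree nonclustered index.
CREATE TABLE dbo.OrdersRow (OrderID INT NOT NULL PRIMARY KEY, CustomerID INT NOT NULL);
CREATE NONCLUSTERED INDEX IX_OrdersRow_Customer ON dbo.OrdersRow (CustomerID);

-- ColumnStore: a clustered columnstore index on a separate table.
CREATE TABLE dbo.OrdersCol (OrderID INT NOT NULL, CustomerID INT NOT NULL);
CREATE CLUSTERED COLUMNSTORE INDEX CCI_OrdersCol ON dbo.OrdersCol;

-- MemoryOptimized: IndexKind distinguishes NonClustered (range) from NonClusteredHash.
CREATE TABLE dbo.OrdersMem
(
    OrderID    INT NOT NULL PRIMARY KEY NONCLUSTERED,                          -- NonClustered
    CustomerID INT NOT NULL INDEX IX_Hash HASH WITH (BUCKET_COUNT = 1048576)   -- NonClusteredHash
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);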


Parameter Sniffing in the Wild

Erik Darling is a parameter sniffing anthropologist:

A while back, I put together a pretty good rundown of this on the DBA Stack Exchange site.

In the plan cache, it’s really hard to tell if a query is suffering from parameter sniffing in isolation.

By that I mean, if someone sends you a cached plan that’s slow, how can you tell if it’s because of parameter sniffing?

Read on to see what Erik does to discover parameter sniffing problems.
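
For a contrived illustration of the underlying problem (not Erik's diagnostic approach), consider a procedure whose cached plan depends on whichever value it was first compiled with; dbo.Orders is a hypothetical table with a skewed CustomerID distribution:

-- The plan built for the first parameter value is reused for every later call.
CREATE OR ALTER PROCEDURE dbo.OrdersByCustomer
    @CustomerID INT
AS
BEGIN
    SELECT OrderID, OrderDate
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID;
END;
GO

EXEC dbo.OrdersByCustomer @CustomerID = 1;   -- rare value: plan optimized for few rows
EXEC dbo.OrdersByCustomer @CustomerID = 42;  -- common value reuses that plan and can run slowly

-- The compiled ("sniffed") value is recorded in the cached plan XML as
-- ParameterCompiledValue, which is one way to spot a mismatch after the fact.
SELECT st.text, qp.query_plan
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
WHERE st.text LIKE '%OrdersByCustomer%';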
