
Category: Performance Tuning

Aggregation and Indexed Views

Randolph West dives into the archives:

Ten years of hindsight (and being able to read the wrap-up post with all the responses) gives me an advantage in this retrospective, I admit, but I didn't find the thing I was going to write about anyway even though one or two people had a similar idea. And that, dear reader, means that I can write about one of my favourite performance secret weapons: the indexed view. It's essentially a regular view with an index (or indexes) attached to it. Oracle calls them materialized views. Unlike a regular view, which is simply a query definition, the indexed view persists the results, making it a lot more efficient to query that data.

Read on for more information.
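
To make the idea concrete, here's a minimal sketch: a schema-bound view plus a unique clustered index is all an indexed view is. The Sales.Orders table and its columns are hypothetical, and OrderTotal is assumed NOT NULL because indexed views disallow SUM over nullable expressions.

```sql
-- A schema-bound aggregating view over a hypothetical Sales.Orders table.
CREATE VIEW dbo.OrderTotalsByCustomer
WITH SCHEMABINDING
AS
SELECT
    o.CustomerID,
    SUM(o.OrderTotal) AS TotalSales,
    COUNT_BIG(*)      AS OrderCount  -- COUNT_BIG(*) is required when the view has a GROUP BY
FROM Sales.Orders AS o
GROUP BY o.CustomerID;
GO

-- The unique clustered index is what persists (materializes) the view's results.
CREATE UNIQUE CLUSTERED INDEX CIX_OrderTotalsByCustomer
    ON dbo.OrderTotalsByCustomer (CustomerID);
```

One caveat worth remembering: outside of Enterprise Edition, the optimizer won't match the indexed view automatically, so you reference it with the NOEXPAND hint.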

Just One Well-Placed Index

Eric Cobb has a success story:

This image is an actual screenshot of CPU usage on one of our production SQL Servers, taken from our SQL Sentry monitoring system. Obviously this server was under some strain. Users were complaining that queries were taking 30+ seconds to run when they normally returned data in milliseconds. It eventually reached a point where applications were timing out because queries were taking so long. After doing some analysis in SQL Sentry, we were able to determine what was causing the CPU spike, and came up with 3 nonclustered indexes to add to help alleviate the pressure. The screenshot you see here is what happened after I added the first index.

Click through to see the image as well as the major difference in CPU utilization from a single change.
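
For context, the fix being described is the sort of single, narrow nonclustered index sketched below; the table, columns, and options are invented for illustration, as the post doesn't share the real definitions.

```sql
-- Hypothetical covering index for a hot query that had been scanning the clustered index.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID_OrderDate
    ON dbo.Orders (CustomerID, OrderDate)
    INCLUDE (OrderStatus, OrderTotal)
    WITH (ONLINE = ON);  -- ONLINE = ON needs Enterprise Edition; drop the option otherwise
```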

Methods for Resolving Last Page Insert Contention

Esat Erkec shows us three techniques for resolving last page insert contention:

A primary key constraint uniquely identifies each row in a table and automatically creates a clustered index on the underlying table. This duo is frequently used in table design by database developers. If the key column also carries an identity property, we get a sequentially incrementing index key. Because the clustered index keeps the table sorted, a newly inserted row is added to the end of the clustered index, on its last page, until that page fills up. When only one thread adds data to such a table, we will never experience last page insert contention; the problem occurs only with concurrent use of the table. In high-volume insert operations, all threads try to access the last page of the index at the same time, and they end up waiting because the last page is latched by another thread. This bottleneck hurts SQL Server performance, and the PAGELATCH_EX wait type starts to show up in large amounts.

Read on for three techniques, though I’d swap out “use a heap” for “use a uniqueidentifier and watch Jeff Moden’s video on the topic.”
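
For reference, one mitigation that comes up constantly in this space (it may or may not be among Esat's three) is the OPTIMIZE_FOR_SEQUENTIAL_KEY index option from SQL Server 2019, which keeps the sequential key but controls how threads queue for the last-page latch. A minimal sketch with a made-up table:

```sql
-- Sequential identity clustered primary key, created with OPTIMIZE_FOR_SEQUENTIAL_KEY
-- (SQL Server 2019+) to reduce last-page latch convoys under concurrent inserts.
CREATE TABLE dbo.EventLog
(
    EventID   INT IDENTITY(1, 1) NOT NULL,
    EventTime DATETIME2(3)       NOT NULL,
    Payload   VARCHAR(200)       NULL,
    CONSTRAINT PK_EventLog PRIMARY KEY CLUSTERED (EventID)
        WITH (OPTIMIZE_FOR_SEQUENTIAL_KEY = ON)
);
```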

Optimization Parameters in Oracle 19c

Kellyn Pot’Vin-Gorman enters a time warp:

As I and the dedicated CSA were working to optimize the ETL load on Oracle in Azure IaaS, I noticed that there wasn’t a significant improvement with physical VM and storage changes as expected.  As I dug into the code and database design, I started to document what I’ve summarized above and realized that the database was quite frozen in time. Even though I couldn’t make changes to the code, (per the customer request) I was quickly understanding why we had such limited success and why I was failing miserably as I attempted to put recommended practices in place at the parameter level for the Oracle 19c database from what they had originally.

As I thought this through, I had an epiphany: this database was doing everything in its power to be a 10g or earlier database, so why shouldn't I optimize it like one?

Read on to see what this entails.

FGCB_ADD_REMOVE Latch

Paul Randal looks at a particular latch:

Most latch class names are tied directly to the data structure that they protect. The FGCB_ADD_REMOVE latch protects a data structure called an FGCB, or File Group Control Block, and there will be one of these latches for each online filegroup of each online database in a SQL Server instance. Whenever a file in a filegroup is added, dropped, grown, or shrunk, the latch must be acquired in EX mode, and when figuring out the next file to allocate from, the latch must be acquired in SH mode to prevent any filegroup changes. (Remember that extent allocations for a filegroup are performed on a round-robin basis through the files in the filegroup, and also take into account proportional fill, which I explain here.)

Read on to understand what can cause this particular latch to become a bottleneck in your system.
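
If you want a quick sanity check on your own instance before digging in, the cumulative latch statistics DMV shows whether this latch class has been waited on at all since the last restart:

```sql
-- Cumulative waits on the FGCB_ADD_REMOVE latch class since instance startup.
-- Sustained numbers here often point at frequent file growths, e.g. tiny autogrowth increments.
SELECT
    latch_class,
    waiting_requests_count,
    wait_time_ms,
    max_wait_time_ms
FROM sys.dm_os_latch_stats
WHERE latch_class = N'FGCB_ADD_REMOVE';
```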

Searching for Key Lookups

Grant Fritchey answers a question:

While teaching about Extended Events and Execution Plans last week, Jason, one of the people in the class, asked: Is there a way in Extended Events to find queries using a Key Lookup operation? Sadly, the answer is no. However, you can query the Execution Plans in cache or in the Query Store to find this. Thanks for the question, Jason. Here's your answer.

Read on to see how.
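
Grant's post has the working version; the general shape of a plan cache search for key lookups looks something like the sketch below (in showplan XML, a key lookup surfaces as an IndexScan element with the Lookup attribute set).

```sql
-- Rough sketch: find cached plans containing a Key Lookup.
WITH XMLNAMESPACES (DEFAULT N'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT TOP (20)
    st.text AS query_text,
    qs.execution_count,
    qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
WHERE qp.query_plan.exist('//IndexScan[@Lookup="1" or @Lookup="true"]') = 1
ORDER BY qs.execution_count DESC;
```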

Changing Power BI Evaluation Container Numbers

Chris Webb shows how we can optimize the number of evaluation containers in Power BI:

Last week I showed how the new MaxEvaluationWorkingSetInMB registry setting could increase the performance of memory-hungry Power Query queries in Power BI Desktop. In this post I’ll show how the other new registry setting, ForegroundEvaluationContainerCount, can also help performance. Before I carry on I recommend you read the documentation on these new registry settings if you haven’t done so already.

To illustrate the effect of this setting I created ten identical Power Query queries feeding an Import mode dataset in a new .pbix file, each of which reads data from the same 150MB CSV file, applies a filter and then counts the number of rows returned.

I don’t think I like having to modify a registry setting each time; that’s leading me to believe I should rarely (or never) mess with this.

Maximizing Availability Group Performance

Jonathan Kehayias has a few tips for improving performance of your Availability Groups:

Since Microsoft first introduced the Always On Availability Groups (AGs) feature in SQL Server 2012, there’s been a lot of interest in using AGs for both high availability and disaster recovery (HADR), as well as for offloading read-only workloads. The combination of the best features for failover clustering, the simplicity of data movement and synchronization from database mirroring, and the ability to offload read-only workloads to secondaries has given businesses a compelling reason to upgrade to leverage AGs.

But, as the saying goes, there’s no such thing as a free lunch, and there are several performance implications and considerations you must be aware of to have a successful deployment using AGs. This blog post will explore some of the considerations and look at how to plan, architect, and implement an AG with minimal latency and performance impact on the production workload.

Click through for those tips.
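
If you want to put numbers to the latency side of this on an existing AG, the per-database replica DMV is a reasonable first stop; here's a sketch, though Jonathan's article goes much deeper:

```sql
-- Send and redo queue sizes per availability database, as a rough latency check.
SELECT
    ar.replica_server_name,
    DB_NAME(drs.database_id)        AS database_name,
    drs.synchronization_state_desc,
    drs.log_send_queue_size,        -- KB of log waiting to be sent to the secondary
    drs.redo_queue_size             -- KB of log waiting to be redone on the secondary
FROM sys.dm_hadr_database_replica_states AS drs
JOIN sys.availability_replicas AS ar
    ON ar.replica_id = drs.replica_id;
```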

Speeding Up Power Query with Evaluation Container Memory

Chris Webb notes a new toggle in Power Query:

However if you have just read the docs you may be wondering what these two new registry key settings actually do. In this post I’m only going to talk about one, MaxEvaluationWorkingSetInMB; I’ll leave ForegroundEvaluationContainerCount for a future post.

At various times in the past I have blogged about how, when you run a Power Query query, the query itself is executed inside a separate process called an evaluation (or mashup) container, and how this process has a limit on the amount of memory it can use. Some transformations, such as sorting a table, doing a group by, pivoting and unpivoting, require an entire table of data to be held in memory, and if these operations need more memory than the evaluation container is able to use, it starts paging and query performance gets a lot worse.

Read on to see where setting the max evaluation working set in memory can help, as well as the caveats that Chris lays out.

Memory Grants for CHAR vs VARCHAR

Erik Darling has a head-slapping moment:

While working with a client recently, we found that someone, at some point in time, probably during the original migration from Access, had chosen CHAR columns rather than VARCHAR columns.

Okay, fine. How bad could it be?

Bad enough that… A whole bunch of columns that only had a single character in them were stored in CHAR(1000) columns.

I like CHAR…well, to be specific, NCHAR. But only when you’ll need exactly that many characters.
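
The reason this stings: memory grants for sorts and hash operations are sized from declared column widths, so a CHAR(1000) is assumed to take 1,000 bytes per row while a VARCHAR(1000) is estimated at half its declared length. A contrived repro along these lines (table and column names invented) makes the difference easy to see in the actual execution plans:

```sql
-- Identical single-character data, very different declared widths.
CREATE TABLE dbo.FixedWidth    (Val CHAR(1000)    NOT NULL);
CREATE TABLE dbo.VariableWidth (Val VARCHAR(1000) NOT NULL);

-- Load 100,000 one-character rows into each table.
INSERT dbo.FixedWidth (Val)
SELECT TOP (100000) 'A' FROM sys.all_columns AS a CROSS JOIN sys.all_columns AS b;

INSERT dbo.VariableWidth (Val)
SELECT TOP (100000) 'A' FROM sys.all_columns AS a CROSS JOIN sys.all_columns AS b;

-- The ORDER BY forces a sort and therefore a memory grant; compare the grants
-- reported in the two actual execution plans.
SELECT Val FROM dbo.FixedWidth    ORDER BY Val OPTION (RECOMPILE);
SELECT Val FROM dbo.VariableWidth ORDER BY Val OPTION (RECOMPILE);
```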
