Press "Enter" to skip to content

Category: Performance Tuning

Caching: In-Database and External

Adron Hall talks caches:

All aboard the Data Express! Let’s imagine our database as this massive train station. The trains are packed with information – from passengers’ details to the schedules. Every time you want to know when the next train to DevOps Land is, you have to ask the station master (the database). If too many folks keep asking the same question, the station master will get tired, slowing down the whole operation. So, what do we do? Enter: Caching!

Read on for different caching mechanisms in several major relational databases, various reasons for external caches (like Redis and memcached) to exist, and four patterns for external caching. I’ve found that database people tend not to care much about external caches, leaving that to application developers. But there can be good reasons to store high-read, low-write data in caches, reducing some of the strain on those expensive database servers.
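For the in-database half of that story, it helps to see what the engine is already caching for you. Here is a minimal T-SQL sketch against the standard buffer pool DMVs showing how much of each table in the current database is sitting in memory (any table names it returns are your own):

    SELECT o.name AS table_name,
           COUNT(*) AS cached_pages,
           COUNT(*) * 8 / 1024 AS cached_mb   -- pages are 8 KB
    FROM sys.dm_os_buffer_descriptors AS bd
    INNER JOIN sys.allocation_units AS au
        ON bd.allocation_unit_id = au.allocation_unit_id
    INNER JOIN sys.partitions AS p
        ON au.container_id = p.hobt_id
       AND au.type IN (1, 3)  -- in-row and row-overflow data
    INNER JOIN sys.objects AS o
        ON p.object_id = o.object_id
    WHERE bd.database_id = DB_ID()
      AND o.is_ms_shipped = 0
    GROUP BY o.name
    ORDER BY cached_pages DESC;

If the same hot tables dominate this list while also taking heavy read traffic from the application, that is a hint they may be candidates for an external cache.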

Reasons Your SQL Server Query Performance Fluctuates

Aaron Bertrand starts the count:

Query performance can fluctuate over time, and it is not necessarily due to a change to the query itself (or to the application code that calls it). Users often ask why a query suddenly got slower even though they haven’t published any changes to the application and the underlying data hasn’t changed drastically. This article points out some other reasons – and there are many – that a query might be slower today than it was 10 minutes ago, two weeks ago, or last summer.

Read on for a bulleted list of reasons. Of course, it would be extremely challenging to create a comprehensive list—for example, in the Same Plan section, in addition to there being more data, changes in the statistical distribution of data can cause performance profiles to change over time. But this is a really good starting point.
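As one concrete check for that distribution-drift scenario, sys.dm_db_stats_properties reports how many rows have changed since each statistic was last updated. A quick sketch, where dbo.YourTable is a placeholder:

    SELECT s.name AS stats_name,
           sp.last_updated,
           sp.rows,
           sp.rows_sampled,
           sp.modification_counter
    FROM sys.stats AS s
    CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
    WHERE s.object_id = OBJECT_ID(N'dbo.YourTable')
    ORDER BY sp.modification_counter DESC;

A large modification_counter against a stale last_updated is a classic reason the same plan performs differently today than it did last summer.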

Cache Management and Semantic Link in Fabric Notebooks

Marc Lelijveld warms up the cache:

In the previous blog, I wrote about data temperature as part of Fabric when you’re using Direct Lake storage mode. In that blog, I explained how you can get insights into the temperature of a column, what that temperature means, and what the impact of the temperature is on columns that are queried more often.

In this blog, I will continue this story by elaborating on a process called framing and how you can influence data eviction to drop data from memory. Finally, this blog goes into more detail on how you could use Semantic Link in Fabric Notebooks to warm up the data for optimal end-user performance.

The SQL Server analog here is a set of automated queries that keep specific pages in the buffer pool: in effect, a warm-up script for an instance with plenty of memory but slow disks.
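A minimal sketch of such a warm-up script, with hypothetical table names; scanning the clustered index pulls its pages into the buffer pool ahead of user queries:

    -- Touch every page of the hot tables so they are memory-resident
    -- before the morning rush. Index 1 is the clustered index.
    SELECT COUNT_BIG(*) FROM dbo.FactSales WITH (INDEX(1));
    SELECT COUNT_BIG(*) FROM dbo.DimCustomer WITH (INDEX(1));

Note that this only helps if the buffer pool is large enough to hold those pages alongside everything else; otherwise you are just evicting someone else’s data.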

The Risk of Changing MaxDOP

Erik Darling recommends caution:

Like in yesterday’s post about Cost Threshold For Parallelism, changing MAXDOP settings will have a universal effect on the workload.

This is true whether you change it at the server level for all databases, or at the database level using a database scoped configuration for a single database.

It is a guardrail to prevent unwanted conditions as a whole, like excessive concurrent parallel queries causing worker thread starvation (THREADPOOL waits), or just pushing CPU to 100% for extended periods of time.

Read on to see what Erik recommends you think about after any MaxDOP change.
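For reference, the two scopes Erik mentions look like this. The value 4 is purely illustrative, not a recommendation:

    -- Instance-wide, affecting every database:
    EXEC sys.sp_configure N'show advanced options', 1;
    RECONFIGURE;
    EXEC sys.sp_configure N'max degree of parallelism', 4;
    RECONFIGURE;

    -- Or scoped to a single database:
    ALTER DATABASE SCOPED CONFIGURATION SET MAXDOP = 4;

Either way, the change applies to the whole workload in that scope, which is why Erik counsels watching the aftermath.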

Thoughts on Cost Threshold for Parallelism

Erik Darling has some thoughts:

First, I’m not suggesting that anyone should be using the default value for Cost Threshold For Parallelism. It’s old and moldy and not a good fit for most workloads functioning on modern hardware.

My apologies to Azure SQLDB users who can’t change this setting and leave it up to Microsoft to maybe manage it for them based on ???

Some people out there really like fiddling with settings in a usually ill-informed reaction to Some Script They Found On The Internet, without reading the fine print.

Erik’s thoughts are reasonable overall. My recommendation is to use Michael J. Swart’s technique for tuning cost threshold for parallelism as a starting point, as it gives you a basis for understanding the net effect of your changes.
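In the spirit of that technique, here is a rough sketch that digs through the plan cache for plans that went parallel and reports their estimated subtree costs, giving you a feel for where a new threshold would bite:

    WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
    SELECT TOP (50)
           qp.query_plan.value(N'(//StmtSimple/@StatementSubTreeCost)[1]', N'float') AS subtree_cost,
           cp.usecounts,
           qp.query_plan
    FROM sys.dm_exec_cached_plans AS cp
    CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
    WHERE qp.query_plan.exist(N'//RelOp[@Parallel = "1"]') = 1
    ORDER BY subtree_cost DESC;

One caveat: the cached cost of a parallel plan is the parallel cost, not the serial cost the optimizer compared against the threshold, so treat the numbers as directional rather than exact.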

Heap-Only Tuples in Postgres

Umair Shahid explains the benefit of Heap-Only Tuples in PostgreSQL:

Heap-only tuples, also known as HOT, are PostgreSQL’s answer to the update query performance issues caused by MVCC. When an update qualifies as HOT, PostgreSQL keeps the new row version on the same page as the old one without touching the table’s indexes, and the space held by dead versions can be pruned and reused in place rather than piling up until a vacuum, reducing I/O and improving query performance.

Read on to see how these compare to the normal MVCC process in Postgres, as well as cases when you should and should not use them.
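A short sketch of putting this to work, where train_schedules is a hypothetical table. HOT updates need free space on the same page and require that no indexed column changes, so lowering fillfactor leaves room, and pg_stat_user_tables shows how often you succeed:

    -- Leave 10% of each page free so updated rows can stay on the same page
    -- (applies to newly written pages; existing pages pick it up on rewrite):
    ALTER TABLE train_schedules SET (fillfactor = 90);

    -- Watch the ratio of HOT updates to total updates:
    SELECT relname,
           n_tup_upd,
           n_tup_hot_upd,
           round(100.0 * n_tup_hot_upd / NULLIF(n_tup_upd, 0), 1) AS hot_pct
    FROM pg_stat_user_tables
    ORDER BY n_tup_upd DESC;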

Row-Level Security Performance and Troubleshooting

Ben Johnston digs into row-level security:

There are two main areas where RLS can impact performance. The first is the user or authentication lookup. Some kind of lookup must be performed in the access predicate to determine the user name, group membership, or specific values in the session context. Considering that RLS is non-prescriptive, the lookup isn’t confined to these methods, but they are easy to use and implement, and they are standard in the implementations I’ve seen.

The second area is the authorization lookup. The authorization lookup, checking if a user has access to particular rows, can have a much bigger impact on performance. This is also in the access predicate. Following the basic rules for performance and keeping lookups simple goes a long way to minimizing the impact of RLS on performance. The goal is to keep performance levels as close as possible to a table without RLS. If indexes and predicates are correct, RLS can improve performance in some situations due to the automatic filtering that happens.

Read on for Ben’s thoughts on the topic.
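To make the two lookups concrete, here is a minimal RLS sketch under an assumed schema where dbo.Orders carries a SalesRep column holding a login name. Keeping the predicate a simple inline table-valued function is what lets the optimizer fold it into the query plan cheaply:

    -- Access predicate: the user lookup is SUSER_SNAME(), the authorization
    -- lookup is a role membership check. Both stay simple and inline-able.
    CREATE FUNCTION dbo.fn_OrdersPredicate (@SalesRep AS sysname)
    RETURNS TABLE
    WITH SCHEMABINDING
    AS
    RETURN SELECT 1 AS allowed
           WHERE @SalesRep = SUSER_SNAME()
              OR IS_MEMBER(N'SalesManagers') = 1;
    GO

    CREATE SECURITY POLICY dbo.OrdersFilter
        ADD FILTER PREDICATE dbo.fn_OrdersPredicate(SalesRep) ON dbo.Orders
        WITH (STATE = ON);

An index leading on SalesRep is what keeps the filtered access close to the performance of the same query without RLS.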

Reviewing Postgres SQL Statement History

Muhammad Ali does some performance testing:

pg_stat_statements is a built-in PostgreSQL extension that keeps track of SQL statements executed by the database server. It records details about query execution counts, total execution time, and I/O-related info. By capturing this information, database administrators can identify bottlenecks and optimize queries for better overall performance. In this blog, we’ll take a look at how the pg_stat_statements module can be utilized to enhance the performance of your PostgreSQL database.

Read on to see how you can enable this extension, configure it, and use it to find the worst performers in the bunch.
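A minimal sketch of that workflow. Note that the extension must be listed in shared_preload_libraries (which requires a restart) before CREATE EXTENSION does anything useful, and the column names below are the PostgreSQL 13+ ones (older versions use total_time and mean_time):

    CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

    -- Ten worst offenders by cumulative execution time:
    SELECT round(total_exec_time::numeric, 2) AS total_ms,
           calls,
           round(mean_exec_time::numeric, 2) AS mean_ms,
           rows,
           left(query, 80) AS query
    FROM pg_stat_statements
    ORDER BY total_exec_time DESC
    LIMIT 10;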
