
Category: Indexing

The Ephemeral Nature of Index Rebuilds on RCSI and ADR

Brent Ozar lays out an argument:

Accelerated Database Recovery (ADR) is a database-level feature that makes transaction rollbacks nearly instantaneous. Here’s how it works.

Without ADR, when you update a row, SQL Server copies the old values into the transaction log and updates the row in-place. If you roll that transaction back, SQL Server has to fetch the old values from the transaction log, then apply them to the row in-place. The more rows you’ve affected, the longer your transaction will take.

With ADR, SQL Server writes a new version of the row inside the table, leaving the old version in place as well.

Because you’re a smart cookie, you immediately recognize that storing multiple versions of a row inside the same table is going to cause a storage problem: we’re going to be boosting the size of our table, quickly. However, the problem’s even bigger than that, and it starts right from the beginning when we load the data.

This was an interesting analysis, looking at table growth with ADR + RCSI, with ADR or RCSI alone, and with neither feature. Given that I’m all-in on RCSI, this is particularly relevant to me. And if you want to dig really deeply into index maintenance, Jeff Moden has a fantastic set of presentations, which TriPASS recorded in 2021: GUIDs vs Fragmentation and LOB data. These two presentations help provide sound footing for deciding under what circumstances it makes sense to rebuild an index, noting that (unless you’re Brent) the answer is probably “less often than you’d think.”
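If you want to experiment with the combination yourself, a minimal sketch for turning on both features and then watching table growth might look like this (the database and table names are hypothetical):

-- Hypothetical database and table names, purely for illustration.
ALTER DATABASE SalesDemo SET ACCELERATED_DATABASE_RECOVERY = ON;
ALTER DATABASE SalesDemo SET READ_COMMITTED_SNAPSHOT ON;

-- After loading and churning data, check how much space the table and its
-- indexes consume as row versions accumulate.
EXEC sp_spaceused N'dbo.SomeLargeTable';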


Partitioned Tables and Indexes in PostgreSQL

Hettie Dombrovskaya runs into an error:

Here is a story. When anyone gives a talk about partitions, they always bring up an example of archiving: there is a partitioned table, and you only keep “current” partitions, whatever “current” means in that context, and after two weeks or a month, or whatever interval works for you, you detach the oldest partition from the “current” table and attach it to the “archived” table, so that the data is still available when you need it, but it does not slow down your “current” queries.

So here is Hettie confidently suggesting that a customer implement this technique to avoid querying a terabyte-plus-size table. The customer happily agrees, and life is great until one day, an archiving job fails with a “name already exists” error for an index name.

Read on to learn why.
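If you haven’t seen the pattern before, the detach-and-attach dance Hettie describes looks roughly like this in PostgreSQL (table and partition names here are made up; a real archiving job would generate them dynamically):

-- Hypothetical names, purely for illustration.
ALTER TABLE measurements_current DETACH PARTITION measurements_2025_01;

ALTER TABLE measurements_archive ATTACH PARTITION measurements_2025_01
    FOR VALUES FROM ('2025-01-01') TO ('2025-02-01');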


PostgreSQL and Indexing on EXTRACT()

Henrietta Dombrovskaya troubleshoots a performance problem:

It’s Christmas time and relatively quiet in my day job, so let’s make it story time again! One more tale from the trenches: how wrong you can go with one table and one index?

Several weeks ago, a user asked me why one of the queries had an “inconsistent performance.” According to the user, “Sometimes it takes three minutes, sometimes thirty, or just never finishes.” After taking a look at the query, I could tell that the actual problem was not the 30+ minutes, but 3 minutes – when you have a several hundred million row table and your select yields just over a thousand rows, it’s a classical “short query,” so you should be able to get results in milliseconds.

Read on for the problem, as well as how Henrietta was able to coerce the PostgreSQL optimizer into choosing the correct path.
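As general background (not necessarily Henrietta’s exact fix): wrapping an indexed column in EXTRACT() hides it from a plain B-tree index, so the usual options are an expression index or a sargable rewrite. A sketch with hypothetical names, assuming created_at is timestamp without time zone (with timestamptz, EXTRACT() is not immutable and the expression index would be rejected):

-- WHERE EXTRACT(YEAR FROM created_at) = 2024 cannot use a plain index on created_at.

-- Option 1: index the expression itself.
CREATE INDEX idx_orders_created_year
    ON orders ((EXTRACT(YEAR FROM created_at)));

-- Option 2: rewrite the predicate as a range over the raw column, so an
-- ordinary index on created_at can do the work:
--   WHERE created_at >= DATE '2024-01-01' AND created_at < DATE '2025-01-01'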


GiST Indexes and Range Queries in PostgreSQL

Lee Asher can’t be limited to a single point:

Our Part I query used the following WHERE clause:

WHERE tsrange(o.start_time, o.end_time) && tsrange(p.enter, p.leave)

The “tsrange()” functions return timestamp ranges. But overlap queries aren’t limited to timestamps; they can be constructed from integers and floating-point values too. Imagine an arbitrage database that tracks the minimum and maximum price paid for a commodity.

Read on for examples of other types of ranges, preventing range intersection, and more.
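To make overlap queries like this index-friendly, the usual tools are a GiST index on the range expression and, if you want to forbid overlaps outright, an exclusion constraint. A sketch with hypothetical table and column names:

-- Needed so the exclusion constraint can mix scalar equality with range overlap.
CREATE EXTENSION IF NOT EXISTS btree_gist;

-- Index the range expression so the && operator can use it.
CREATE INDEX idx_occupancy_period
    ON occupancy USING gist (tsrange(enter, leave));

-- One way to prevent intersecting ranges: reject any two rows for the same
-- room whose time ranges overlap.
ALTER TABLE occupancy
    ADD CONSTRAINT occupancy_no_overlap
    EXCLUDE USING gist (room_id WITH =, tsrange(enter, leave) WITH &&);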


Indexes in PostgreSQL versus Oracle

Umair Shahid continues a series on migrating from Oracle to PostgreSQL:

For database experts well-versed in Oracle, moving to PostgreSQL opens up new indexing methods that differ significantly in terms of structure, management, and optimization. While both databases leverage indexing to enhance query speed, their approaches vary, particularly in terms of available types, performance tuning, and maintenance. This guide clarifies key differences and provides practical strategies for effectively handling indexes in PostgreSQL.

Read on for the article.


Finding Foreign Key Constraints without Backing Indexes

Andy Brownsword goes searching:

That’s all well and good if we’re creating a new foreign key and adding our own index. How do we find existing foreign keys which aren’t yet indexed?

That’s where the query below comes into play.

We use a CTE to find foreign keys which are already indexed. From this we can then return details of other foreign keys along with scripts to create (and roll back) an index to mirror the key:

Click through for the script and notes on how it all works.
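As a much-simplified illustration of the idea (not Andy’s script, which also handles multi-column keys and generates the create and rollback statements), the catalog views involved look something like this:

-- Simplified sketch: foreign keys whose leading referencing column is not the
-- leading key column of any index on the referencing table.
SELECT fk.name AS foreign_key_name,
       OBJECT_NAME(fk.parent_object_id) AS table_name,
       COL_NAME(fkc.parent_object_id, fkc.parent_column_id) AS column_name
FROM sys.foreign_keys AS fk
JOIN sys.foreign_key_columns AS fkc
    ON fkc.constraint_object_id = fk.object_id
   AND fkc.constraint_column_id = 1
WHERE NOT EXISTS
(
    SELECT 1
    FROM sys.index_columns AS ic
    WHERE ic.object_id = fkc.parent_object_id
      AND ic.column_id = fkc.parent_column_id
      AND ic.key_ordinal = 1
);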


Filtered Indexes in PostgreSQL

Hubert Lubaczewski digs into a performance issue:

So, we have some databases on PostgreSQL 14 (yes, I know, we should upgrade, but it is a LONG project to upgrade, so it’s not really on the table now).

Last week someone noticed problems with the website. These were tracked to queries getting killed because of a statement_timeout of 3 minutes.

The query was relatively simple (most of the identifiers were obfuscated, but I tried to keep them somewhat memorable):

Click through for the story, analysis of the problem, and how creating a filtered index worked in this case. Filtered indexes are a beautiful thing when the optimizer knows how to make use of them.
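What SQL Server calls a filtered index is a partial index in PostgreSQL terms. A generic sketch with hypothetical names (not the schema from the post):

-- Only index the rows the hot query actually touches.
CREATE INDEX idx_jobs_pending_created
    ON jobs (created_at)
    WHERE status = 'pending';

-- The planner will consider it only when the query's WHERE clause implies the
-- index predicate, e.g.:
--   SELECT id FROM jobs
--   WHERE status = 'pending'
--     AND created_at < now() - interval '1 hour';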


The Power of Computed Columns

Andy Brownsword speeds up a query:

Bad code smells can run through a system, and one instance can quickly spread when code is recycled. Simon recently looked at a non-sargable example and was able to fix it by using an index and refactoring the query.

I wanted to consider an alternative approach if we saw the same issue repeated consistently. We don’t want to fix each instance; we want a single fix. We’ll solve this with an indexed computed column.

We can index computed columns to help resolve deterministic clauses (i.e. ones whose values won’t change: no GETDATE() allowed). Let’s get started.

Read on to learn more. This is a powerful approach to the problem of needing to perform some sort of data transformation before filtering your data.
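The shape of the fix, with hypothetical table and column names rather than the ones from Andy’s post:

-- Add a deterministic computed column for the expression the queries keep
-- repeating, then index it.
ALTER TABLE dbo.Orders
    ADD OrderYear AS YEAR(OrderDate);

CREATE INDEX IX_Orders_OrderYear
    ON dbo.Orders (OrderYear);

-- Queries can now filter on the column instead of the non-sargable expression:
--   SELECT OrderID FROM dbo.Orders WHERE OrderYear = 2024;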
